The story of Smalls's dramatic escape from slavery attracted a great deal of media attention in the North. Many newspapers and magazines published articles about him and called him a war hero. Admiral Samuel DuPont (1803-1865), the commander in charge of the Union naval blockade of Charleston, called Smalls's escape "one of the coolest and most gallant [brave and daring] naval acts of war." Of course, people in the South were not so thrilled by the news. A newspaper in Richmond, Virginia, called the loss of the Planter "one of the most shameful events in this or any other war." Smalls and the other former slaves on board the Planter were accepted into the Union as "contrabands" (the Union Army was authorized to seize any Confederate property used in the war effort, including slaves, as "contraband of war"). The U.S. Congress granted Smalls a $1,500 cash reward for delivering the ship and gave several hundred dollars to each member of his crew. Smalls continued to help the Union Navy by providing valuable information about Confederate defenses in the Charleston area; after all, he had explored many rivers and inlets during his supply missions on the Planter. At that time, black men were not allowed to serve as Union soldiers. Smalls joined a group of prominent black leaders who tried to convince President Abraham Lincoln (1809-1865) to allow black men to join the army. Lincoln eventually allowed an all-black regiment, the First South Carolina Volunteers, to be formed on the South Carolina coastal islands, near Smalls's home. Smalls helped recruit black men to join the war effort both in his home state and in the North. Smalls himself served in the Union Navy. When he was promoted to captain of the Planter, he became the first black man ever allowed to command an American warship. He continued to carry supplies along the coast, this time for the Union, and also fought in seventeen naval battles.
Thomas Alva Edison was a small-town country boy who became a United States inventor. He is the most famous of all Americans to make a career of inventing; Edison was called the "Wizard of Menlo Park," after the New Jersey site of his laboratory. He was especially important for his electrical inventions. Like many inventors of his era, Edison struggled to perfect a system of practical electrical home lighting. He experimented with arc lighting in 1875 but became convinced that successful home lighting would have to be incandescent; that is, it would have to use a material that would glow when an electric current passed through it, but not burn in the process. He studied earlier experiments and in 1878 announced that he had the technical problems solved and would create a practical incandescent lamp within six months. The greatest problem was not creating a light (others had done that before) but finding a filament that would not quickly burn out, and producing the lamp cheaply enough to compete with gas lighting. Edison began by experimenting with carbon as a filament, but rejected it and tried using platinum. He discovered that a platinum filament would have to be very thin to provide the resistance necessary for use in the high-voltage electrical system he envisioned. However, when made thin enough, the filaments were too fragile and broke. After numerous experiments with platinum, Edison returned to carbon filaments. In October 1879, Edison and his assistants began to experiment with a filament made of carbonized cotton thread. Enclosed in a glass bulb with a near-perfect vacuum, it shed a bright light and burned for many hours. The practical incandescent lamp had become a reality. Edison and his assistants continued to search for a better filament material. They tried carbonized paper and tested some species of vegetable fibers.
They later experimented with bamboo, and eventually tungsten filaments and nitrogen-filled bulbs came into use, but essentially Edison's lamp was the same as those used today. The incandescent lamp, or electric light bulb, brought new opportunities for Edison and for his country. Several new industries, including the electric light and power industry, were built on his invention. He was awarded 1,093 United States patents. One of his greatest...
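The filament reasoning above — a thin wire provides high resistance — follows directly from the formula R = ρL/A for a conductor. A minimal Python sketch, using an assumed textbook resistivity for platinum and hypothetical wire dimensions (neither figure is from the essay), shows how sharply resistance rises as the wire thins:

```python
import math

# Illustrative sketch: why Edison's platinum filament had to be thin.
# For a wire, resistance R = rho * L / A, where rho is the material's
# resistivity, L its length, and A its cross-sectional area. Shrinking
# the radius by 10x shrinks the area by 100x, raising resistance 100x.
# The resistivity below is an assumed room-temperature value for platinum.

RHO_PLATINUM = 1.06e-7  # ohm-metres (assumed textbook figure)


def filament_resistance(length_m: float, radius_m: float,
                        rho: float = RHO_PLATINUM) -> float:
    """Resistance in ohms of a cylindrical filament."""
    area = math.pi * radius_m ** 2
    return rho * length_m / area


# A hypothetical 0.5 m platinum filament at two different radii:
thick = filament_resistance(0.5, 100e-6)  # 100-micrometre radius
thin = filament_resistance(0.5, 10e-6)    # 10-micrometre radius

print(f"thick wire: {thick:.2f} ohm")
print(f"thin wire:  {thin:.2f} ohm")  # 100x the thick wire's resistance
```

This is why "made thin enough, the filaments were too fragile": only a hair-thin wire gave the resistance Edison's high-voltage distribution scheme needed.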
Kristallnacht: "Night of Broken Glass": Start of the Holocaust What Happened on the Night of Broken Glass? On November 9, 1938, the world was forever changed by a tragic incident given a beautiful name, Kristallnacht, because it left the streets littered with broken glass from store and synagogue windows. For two straight nights, Nazis rampaged through German cities. Although it lasted only two nights, the impact of this event and those that followed would affect people all over the world for years to come. Kristallnacht, which is German for "Crystal Night," is also known as the Night of Broken Glass or the November Pogroms. It marked the beginning of the Holocaust. That night, German Nazis subjected thousands of Jews to terror and violence. They smashed and torched over 1,000 synagogues and 7,500 Jewish businesses throughout Germany. Jewish hospitals, schools, homes, and cemeteries were vandalized. Some 30,000 Jewish men between 16 and 60 years old were arrested and sent to the concentration camps Buchenwald, Dachau, and Sachsenhausen, which had to be expanded to accommodate the massive influx of prisoners. Nazis murdered 91 Jews. All of this occurred in less than 48 hours. Many who attacked Jewish families were their own neighbors. Throughout this time, police officers and firefighters were ordered not to intervene; the only exception was that firefighters were allowed to put out fires that threatened the home of someone of the "Aryan race." Who Was Held Responsible for Kristallnacht? The attacks were not just physical. The Nazis held the Jewish community responsible for the damage done those two nights and, according to the United States Holocaust Memorial Museum, imposed on them a fine of one billion Reichsmarks (equal to about $400 million in 1938). They also confiscated the insurance payments that would normally have compensated Jews for their losses.
The Nazis also expected the Jewish community to clean up the mess themselves. These horrific events came as a surprise to people all around the world: although Hitler had been chancellor of Germany since 1933 and had already begun repressive policies, most repression up until then had not been violent. Kristallnacht was the beginning of worsening conditions for Jewish people throughout Europe. It was after this that anti-Jewish legislation was put in place, including:
- Jewish businesses and factories were to be taken over by the Nazis.
- Jewish people were not allowed in most public areas.
- Jewish children were no longer allowed in German schools.
- Jewish people had a stringent curfew.
- Jewish people were forced to emigrate out of Germany.
- Jewish people were required to wear a badge with the Star of David for identification.
What Led Up to These Events? Although few foresaw the events of Kristallnacht, there were steps Hitler took that eventually led to that night. Five years earlier, Adolf Hitler had become Germany's chancellor. His first course of action was instituting policies that isolated and persecuted the Jewish community in Germany. He asked citizens to boycott Jewish businesses, and he dismissed all Jews who held civil service jobs. Then, in May 1933, books written by "un-German" and Jewish authors were burned at a ceremony held at Berlin's Opera House. Within two years, businesses openly refused to serve Jewish customers. On September 15, 1935, the Nuremberg Laws were passed, which included the Reich Citizenship Law. Although antisemitism was already extreme, this gave the regime more control and made it more organized in its mission to rid the world of the "virus," a term Hitler used in Mein Kampf for the Jewish people. What Were the Nuremberg Laws? The Nuremberg Laws stated that only Aryans (non-Jewish Germans) could be full German citizens. Jewish Germans were considered subjects of the German Reich.
By being classified as subjects, they were supposedly under the protection of the Reich and therefore obligated to it. In practice, this meant they had no legal or political rights and were left entirely to the will of the state. They were not allowed to vote or to own rural property. Since they were now considered aliens in the country, they were required to pay double the taxes of other German citizens. In keeping with the Nazi goal of keeping the Aryan race "pure," it became illegal for Aryans and Jews to marry or even have intercourse. Three years later, on April 11, 1938, all German citizens were required to prove their status as Aryans by providing birth certificates, marriage licenses, and questionnaires about their genealogy. If a parent or grandparent was Jewish, a person was no longer considered Aryan. The rule at the time, "A Jew is a Jew is a Jew," meant that officials would look three generations back to determine whether a person's blood was "pure." The Assassination of Ernst vom Rath Although the Nuremberg Laws played a large part in the Holocaust, the assassination of Ernst vom Rath was the turning point. Although many were deeply affected by the discriminatory laws, one young man decided to fight back after his family was directly affected. He was a Polish Jewish student named Herschel Grynszpan, who had lived his entire life in Germany but was studying in France while his family was exiled to Poland. Before the exile, the Polish government, foreseeing what the Nazis were planning, issued a decree stating that the citizenship of Poles living abroad would be annulled unless they received a special stamp from a Polish official by October 31st. Without this, they would not be allowed to reenter Poland. Yet the stamps were never given out, which affected 50,000 Polish Jews. When the German government learned that these Jews would not be allowed to return, it decided to expel 12,000 Polish-born Jews.
They were given only one night to leave Germany and were allowed to bring only the belongings they could carry in one suitcase. This happened on October 27, 1938, just four days before the cutoff. The deportees were dropped off at a station in Zbaszyn on the border between the two countries, without permission to enter either one. Eventually, Poland allowed 7,000 of these people to stay, but the rest remained at the station without food, money, or housing. Herschel Grynszpan learned that his family was among those expelled from Germany when, on November 3rd, he received a postcard from his sister explaining what had happened. Grynszpan chose to take immediate action. Three days later he bought a gun and bullets; the next day he went to the German Embassy intending to shoot the ambassador. He never got the opportunity, but he did shoot the Third Secretary of the German Embassy, Ernst vom Rath. Vom Rath died two days later. Hitler felt close to the secretary and attended his funeral. Joseph Goebbels, the Nazi propaganda minister, took this as an opportunity to rally anger against Jews, and Adolf Hitler used it as an opportunity to punish the Jewish community by planning the Night of Broken Glass. The first step was to denounce the Jewish community as murderers in the newspapers on November 8th. The next day vom Rath died, and Goebbels and Hitler decided to punish the Jews further through a supposedly "spontaneous demonstration" of violence. Goebbels wrote about the decision: "He decides: demonstrations should be allowed to continue. The police should be withdrawn. For once the Jews should get the feel of popular anger. … I immediately give the necessary instructions to the police and the Party. Then I briefly speak in that vein to the Party leadership. Stormy applause. All are instantly at the phones. 
Now people will act." Telephone and telegram orders were then sent throughout Germany, and some to Austria, by Gestapo chief Heinrich Müller. The orders said, "in shortest order, actions against Jews and especially their synagogues will take place in all of Germany. These are not to be interfered with." The police were to arrest any able-bodied male Jews. Firefighters were told to stand by the synagogues with orders to let them burn, intervening only if the flames threatened Aryan homes or businesses. As Kristallnacht proceeded, the first major deportation of Jews to concentration camps occurred, and with it the Holocaust began. On November 15, 1938, the Nazi government barred Jews from attending German schools. Soon after, all Jews were given a strict curfew, and by December Jews were not allowed in public places. Hitler began what he called the "Final Solution," his plan to exterminate the entire Jewish population. Although he did not fully succeed, the regime murdered 6 million European Jews and an estimated 4-6 million non-Jews: Catholics, people with mental or physical disabilities, and anyone else who did not fit the Aryan ideal. In 1939, World War II broke out, and it would continue through 1945 as the Allies fought to stop Adolf Hitler. Although the United States did not immediately join the war, Franklin D. Roosevelt was quick to denounce antisemitism in a speech to the American people on November 15, 1938. Kristallnacht was a turning point that led to worsening violence and repressive treatment of Jewish people by the German government. The German people had mixed feelings about the treatment of the Jews: some supported the night of Kristallnacht, some felt the Jews should be punished but not so violently, and others thought it was pure evil. Kristallnacht remains one of history's most horrific single events. It marks the beginning of the Holocaust and of the ambitions of an evil man.
Although given a beautiful name, Kristallnacht symbolizes an especially harrowing event. © 2018 Angela Michelle Schultz
A spear is a pole weapon consisting of a shaft, usually of wood, with a pointed head. The head may be simply the sharpened end of the shaft itself, as is the case with fire-hardened spears, or it may be made of a more durable material fastened to the shaft, such as flint, obsidian, iron, steel or bronze. The most common design for hunting or combat spears since ancient times has incorporated a metal spearhead shaped like a triangle, lozenge, or leaf. The heads of fishing spears usually feature barbs or serrated edges. The word spear comes from the Old English spere, from the Proto-Germanic speri, from a Proto-Indo-European root *sper- "spear, pole". Spears can be divided into two broad categories: those designed for thrusting in melee combat and those designed for throwing (usually referred to as javelins). The spear has been used throughout human history both as a hunting and fishing tool and as a weapon. Along with the axe, knife and club, it is one of the earliest and most important tools developed by early humans. As a weapon, it may be wielded with either one hand or two. It was used in virtually every conflict up until the modern era, where even then it continues on in the form of the fixed bayonet, and is probably the most commonly used weapon in history. Spear manufacture and use is not confined to humans; it is also practiced by the western chimpanzee. Chimpanzees near Kédougou, Senegal have been observed to create spears by breaking straight limbs off trees, stripping them of their bark and side branches, and sharpening one end with their teeth. They then used the weapons to hunt galagos sleeping in hollows.
Archaeological evidence found in present-day Germany documents that wooden spears have been used for hunting since at least 400,000 years ago, and a 2012 study from the site of Kathu Pan in South Africa suggests that hominids, possibly Homo heidelbergensis, may have developed the technology of hafted stone-tipped spears in Africa about 500,000 years ago. Wood does not preserve well, however, and Craig Stanford, a primatologist and professor of anthropology at the University of Southern California, has suggested that the discovery of spear use by chimpanzees probably means that early humans used wooden spears as well, perhaps five million years ago. From circa 200,000 BCE onwards, Middle Paleolithic humans began to make complex stone blades with flaked edges which were used as spear heads. These stone heads could be fixed to the spear shaft by gum or resin or by bindings made of animal sinew, leather strips or vegetable matter. During this period, a clear difference remained between spears designed to be thrown and those designed to be used in hand-to-hand combat. By the Magdalenian period (c. 15,000–9500 BCE), spear-throwers similar to the later atlatl were in use. The spear is the main weapon of the warriors of Homer's Iliad. The use of both a single thrusting spear and two throwing spears is mentioned. It has been suggested that two styles of combat are being described: an early style, with thrusting spears, dating to the Mycenaean period in which the Iliad is set, and, anachronistically, a later style, with throwing spears, from Homer's own Archaic period. In the 7th century BCE, the Greeks evolved a new close-order infantry formation, the phalanx. The key to this formation was the hoplite, who was equipped with a large, circular, bronze-faced shield (aspis) and a 7–9 ft (2.1–2.7 m) spear with an iron head and bronze butt-spike (doru). The hoplite phalanx dominated warfare among the Greek city-states from the 7th into the 4th century BCE.
The 4th century saw major changes. One was the greater use of peltasts, light infantry armed with spear and javelins. The other was the development of the sarissa, a two-handed pike 18 ft (5.5 m) in length, by the Macedonians under Philip of Macedon and Alexander the Great. The pike phalanx, supported by peltasts and cavalry, became the dominant mode of warfare among the Greeks from the late 4th century onward until Greek military systems were supplanted by the Roman legions. In the pre-Marian Roman armies, the first two lines of battle, the hastati and principes, often fought with a sword called a gladius and pila, heavy javelins that were specifically designed to be thrown at an enemy to pierce and foul a target's shield. Originally the principes were armed with a short spear called a hasta, but these gradually fell out of use, eventually being replaced by the gladius. The third line, the triarii, continued to use the hasta. From the late 2nd century BCE, all legionaries were equipped with the pilum. The pilum continued to be the standard legionary spear until the end of the 2nd century CE. Auxilia, however, were equipped with a simple hasta and, perhaps, throwing spears. During the 3rd century CE, although the pilum continued to be used, legionaries usually were equipped with other forms of throwing and thrusting spear, similar to the auxilia of the previous century. By the 4th century, the pilum had effectively disappeared from common use. In the late period of the Roman Empire, the spear saw more use for its anti-cavalry capabilities, as the barbarian invasions were often conducted by peoples with a developed culture of cavalry warfare. Muslim warriors used a spear that was called an az-zaġāyah. Berbers pronounced it zaġāya, but the English term, derived from the Old French via Berber, is "assegai".
It is a pole weapon used for throwing or hurling, usually a light spear or javelin made of hard wood and pointed with a forged iron tip. The az-zaġāyah played an important role during the Islamic conquest as well as during later periods, well into the 20th century. A longer-poled az-zaġāyah was used as a hunting weapon from horseback. The az-zaġāyah was widely used: it existed in various forms in areas stretching from Southern Africa to the Indian subcontinent, although these places already had their own variants of the spear. This javelin was the weapon of choice during the Fulani jihad as well as during the Mahdist War in Sudan. It is still used by Sikh Nihang in the Punjab as well as certain wandering Sufi ascetics (Derwishes). After the fall of the Western Roman Empire, the spear and shield continued to be used by nearly all Western European cultures. Since a medieval spear required only a small amount of steel along the sharpened edges (most of the spear-tip was wrought iron), it was an economical weapon. Quick to manufacture, and needing less smithing skill than a sword, it remained the main weapon of the common soldier. The Vikings, for instance, although often portrayed with axe or sword in hand, were armed mostly with spears, as were their Anglo-Saxon, Irish, or continental contemporaries. Broadly speaking, spears were either designed to be used in melee, or to be thrown. Within this simple classification, there was a remarkable range of types. For example, M. J. Swanton identified thirty different spearhead categories and sub-categories in early Saxon England. Most medieval spearheads were generally leaf-shaped.
Notable types of early medieval spears include the angon, a throwing spear with a long head similar to the Roman pilum, used by the Franks and Anglo-Saxons, and the winged (or lugged) spear, which had two prominent wings at the base of the spearhead, either to prevent the spear penetrating too far into an enemy or to aid in spear fencing. Originally a Frankish weapon, the winged spear also was popular with the Vikings. It would become the ancestor of later medieval polearms, such as the partisan and spetum. The thrusting spear also has the advantage of reach, being considerably longer than other weapon types. Exact spear lengths are hard to deduce, as few spear shafts survive archaeologically, but 6–8 ft (1.8–2.4 m) would seem to have been the norm. Some nations were noted for their long spears, including the Scots and the Flemish. Spears usually were used in tightly ordered formations, such as the shield wall or the schiltron. To resist cavalry, spear shafts could be planted against the ground. William Wallace drew up his schiltrons in a circle at the Battle of Falkirk in 1298 to deter charging cavalry; this was a widespread tactic sometimes known as the "crown" formation. Throwing spears became rarer as the Middle Ages drew on, but survived in the hands of specialists such as the Catalan Almogavars. They were commonly used in Ireland until the end of the 16th century. Spears began to fall out of favour among the infantry during the 14th century, being replaced by pole weapons that combined the thrusting properties of the spear with the cutting properties of the axe, such as the halberd. Where spears were retained they grew in length, eventually evolving into pikes, which would become a dominant infantry weapon in the 16th and 17th centuries. Cavalry spears were originally the same as infantry spears and were often used with two hands or held with one hand overhead.
In the 12th century, after the adoption of stirrups and a high-cantled saddle, the spear became a decidedly more powerful weapon. A mounted knight would secure the lance by holding it with one hand and tucking it under the armpit (the couched lance technique). This allowed all the momentum of the horse and knight to be focused on the weapon's tip, whilst still retaining accuracy and control. This use of the spear spurred the development of the lance as a distinct weapon that was perfected in the medieval sport of jousting. In the 14th century, tactical developments meant that knights and men-at-arms often fought on foot. This led to the practice of shortening the lance to about 5 ft (1.5 m) to make it more manageable. As dismounting became commonplace, specialist pole weapons such as the pollaxe were adopted by knights and this practice ceased. Spears were first used as hunting weapons amongst the ancient Chinese. They became popular as infantry weapons during the Warring States and Qin era, when spearmen served as especially highly disciplined soldiers in organized group attacks. When used in formation fighting, spearmen would line up their large rectangular or circular shields in a shieldwall manner. The Qin also employed long spears (more akin to a pike) in formations similar to Swiss pikemen in order to ward off cavalry. The Han Empire would use similar tactics as its Qin predecessors. Halberds, polearms, and dagger-axes were also common weapons during this time. Spears were also common weaponry for Warring States, Qin, and Han era cavalry units. During these eras, the spear would develop into a longer, lance-like weapon used for cavalry charges. There are many words in Chinese that would be classified as a spear in English. The Mao is the predecessor of the Qiang. The first bronze Mao appeared in the Shang dynasty. This weapon was less prominent on the battlefield than the ge (dagger-axe).
In some archaeological examples, two tiny holes or ears can be found in the blade of the spearhead near the socket; these holes were presumably used to attach tassels, much like modern-day wushu spears. In the early Shang, the Mao appears to have had a relatively short and narrow shaft, as opposed to Mao of the later Shang and Western Zhou period. Some Mao from this era are heavily decorated, as evidenced by a Warring States period Mao from the Ba Shu area. In the Han dynasty the Mao and the Ji (戟, loosely definable as a halberd) rose to prominence in the military. Notably, the number of iron Mao heads found exceeds the number of bronze heads. By the end of the Han dynasty (Eastern Han), the replacement of bronze by iron had been completed and the bronze Mao had been rendered completely obsolete. After the Han dynasty, toward the Sui and Tang dynasties, the Mao used by cavalry were fitted with much longer shafts, as mentioned above. During this era, the use of the Shuo (矟) was widespread among footmen. The Shuo can be likened to a pike or simply a long spear. After the Tang dynasty, the popularity of the Mao declined and it was replaced by the Qiang (枪). The Tang dynasty divided the Qiang into four categories: "一曰漆枪, 二曰木枪, 三曰白杆枪, 四曰扑头枪。" Roughly translated, the four categories are: Qi (a kind of wood) Spears, Wooden Spears, Bai Gan (a kind of wood) Spears and Pu Tou Qiang. The Qiang produced in the Song and Ming dynasties consisted of four major parts: spearhead, shaft, end spike and tassel. The types of Qiang are many. Among them are cavalry Qiang the length of one zhang (eleven feet nine inches, or 3.58 m), Little-Flower Spears (Xiao Hua Qiang 小花枪) the length of one person with an arm extended overhead, double hooked spears, single hooked spears, ringed spears and many more.
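The quoted equivalence for one zhang (eleven feet nine inches, or 3.58 m) can be checked with a few lines of arithmetic. This is only a sanity-check sketch; the conversion factors below are the standard imperial-to-metric definitions, not figures taken from the source.

```python
# Standard conversion factors (exact by definition).
FEET_TO_M = 0.3048
INCH_TO_M = 0.0254

def imperial_to_metres(feet: int, inches: int) -> float:
    """Convert a feet-and-inches length to metres."""
    return feet * FEET_TO_M + inches * INCH_TO_M

# One zhang as quoted in the text: 11 ft 9 in.
one_zhang = imperial_to_metres(11, 9)
print(round(one_zhang, 2))  # → 3.58, matching the 3.58 m given above
```

The same helper can verify the other paired measurements in the article, such as the 18 ft sarissa (≈ 5.49 m, rounded in the text to 5.5 m).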
There is some confusion as to how to distinguish the Qiang from the Mao, as they are obviously very similar. Some say that a Mao is longer than a Qiang; others say that the main difference lies in the stiffness of the shaft, the Qiang being flexible and the Mao stiff. Scholars seem to lean toward the latter explanation. Because of the difference in construction, the usage also differs, though there is no definitive answer as to exactly what the differences are between the Mao and the Qiang. Spears in Indian society were used in both missile and non-missile form, by both cavalry and foot-soldiers. Mounted spear-fighting was practiced with a ten-foot, ball-tipped wooden lance called a bothati, the end of which was covered in dye so that hits could be confirmed. Spears were constructed from a variety of materials, such as the sang, made completely of steel, and the ballam, which had a bamboo shaft. The Rajputs wielded a type of spear for infantrymen which had a club integrated into the spearhead and a pointed butt end. Other spears had forked blades, several spear-points, and numerous other innovations. One spear unique to India was the vita, or corded lance. Used by the Maratha army, it had a rope connecting the spear to the user's wrist, allowing the weapon to be thrown and pulled back. The Vel is a type of spear or lance that originated in Southern India and is primarily used by Tamils. The hoko spear was used in ancient Japan sometime between the Yayoi period and the Heian period, but it became unpopular as early samurai often acted as horseback archers. Medieval Japan employed spears again for infantrymen, but it was not until the 11th century that samurai began to prefer spears over bows.
Several polearms were used in the Japanese theatres. The naginata was a glaive-like weapon with a long, curved blade, popular among the samurai and the Buddhist warrior-monks and often used against cavalry. The yari was a longer polearm with a straight-bladed spearhead, which became the weapon of choice of both the samurai and the ashigaru (footmen) during the Warring States era; mounted samurai used shorter yari for one-handed combat, while ashigaru infantry used long yari (similar to the European pike) in massed combat formations. Filipino spears (sibat) were used as both a weapon and a tool throughout the Philippines. The sibat is also called a bangkaw (after the Bankaw Revolt), sumbling or palupad in the islands of Visayas and Mindanao. Sibat are typically made from rattan, either with a sharpened tip or a head made from metal. These heads may be single-edged, double-edged or barbed. Styles vary according to function and origin; for example, a sibat designed for fishing may not be the same as those used for hunting. The spear was used as the primary weapon in expeditions and battles against neighbouring island kingdoms, and it became famous during the 1521 Battle of Mactan, where the chieftain Lapu Lapu of Cebu fought against Spanish forces led by Ferdinand Magellan, who was subsequently killed. As advanced metallurgy was largely unknown in pre-Columbian America outside of Western Mexico and South America, most weapons in Meso-America were made of wood or obsidian. This did not mean that they were less lethal, as obsidian may be sharpened to become many times sharper than steel. Meso-American spears varied greatly in shape and size. While the Aztecs preferred the sword-like macuahuitl for fighting, the advantage of a far-reaching thrusting weapon was recognised, and a large portion of the army would carry the tepoztopilli into battle.
The tepoztopilli was a pole-arm, and to judge from depictions in various Aztec codices, it was roughly the height of a man, with a broad wooden head about twice the length of the user's palm or shorter, edged with razor-sharp obsidian blades which were deeply set in grooves carved into the head and cemented in place with bitumen or plant resin as an adhesive. The tepoztopilli could both thrust and slash effectively. Throwing spears also were used extensively in Meso-American warfare, usually with the help of an atlatl. Throwing spears were typically shorter and more streamlined than the tepoztopilli, and some had obsidian edges for greater penetration. Typically, spears made by Native Americans were created from materials available around their communities. Usually, the shaft was a wooden stick, while the head was fashioned from arrowheads, pieces of metal such as copper, or a sharpened bone. Spears were preferred by many since they were inexpensive to create, their use could more easily be taught, and they could be made quickly and in large quantities. Native Americans used the Buffalo Pound method to kill buffalo, which required a hunter to dress as a buffalo and lure one into a ravine where other hunters were hiding. Once the buffalo appeared, the other hunters would kill it with spears. A variation of this technique, called the Buffalo Jump, was when a runner would lead the animals towards a cliff. As the buffalo got close to the cliff, other members of the tribe would jump out from behind rocks or trees and scare the buffalo over the cliff. Other hunters would be waiting at the bottom of the cliff to spear the animals to death. The development of both the long, two-handed pike and gunpowder in Renaissance Europe saw an ever-increasing focus on integrated infantry tactics. Those infantry not armed with these weapons carried variations on the pole-arm, including the halberd and the bill.
Ultimately, the spear proper was rendered obsolete on the battlefield. Its last flowering was the half-pike or spontoon, a shortened version of the pike carried by officers and NCOs. While originally a weapon, this came to be seen more as a badge of office, or leading staff by which troops were directed. The half-pike, sometimes known as a boarding pike, was also used as a weapon on board ships until the 19th century. At the start of the Renaissance, cavalry remained predominantly lance-armed; gendarmes with the heavy knightly lance and lighter cavalry with a variety of lighter lances. By the 1540s, however, pistol-armed cavalry called reiters were beginning to make their mark. Cavalry armed with pistols and other lighter firearms, along with a sword, had virtually replaced lance-armed cavalry in Western Europe by the beginning of the 17th century. Among the earliest ways humans killed prey, hunting game with a spear and spear fishing continue to this day, both as a means of catching food and as a cultural activity. Some of the most common prey for early humans were megafauna such as mammoths, which were hunted with various kinds of spear. One theory for the Quaternary extinction event is that most of these animals were hunted to extinction by humans with spears. Even after the invention of other hunting weapons such as the bow, the spear continued to be used, either as a projectile weapon or in the hand, as was common in boar hunting. Spear hunting fell out of favour in most of Europe in the 18th century, but continued in Germany, enjoying a revival in the 1930s. Spear hunting is still practiced in the United States. Animals taken are primarily wild boar and deer, although trophy animals such as cats and big game as large as a Cape buffalo are hunted with spears. Alligators are hunted in Florida with a type of harpoon. The Celts would symbolically destroy a dead warrior's spear, either to prevent its use by another or as a sacrificial offering.
In classical Greek mythology, Zeus' bolts of lightning may be interpreted as a symbolic spear. Some would extend that interpretation to the spear frequently associated with Athena, reading her spear as a symbolic connection to some of Zeus' power beyond the Aegis once he rose to supplant other deities in the pantheon; Athena was depicted with a spear before that change in the myths, however. Chiron's wedding gift to Peleus when he married the nymph Thetis in classical Greek mythology was an ashen spear, as ashwood's straight grain made it an ideal wood for a spear shaft. The Romans and their early enemies would force prisoners to walk underneath a 'yoke of spears', which humiliated them. The yoke would consist of three spears, two upright with a third tied between them at a height which made the prisoners stoop. It has been suggested that the arrangement has a magical origin, a way to trap evil spirits. The word subjugate has its origins in this practice (from Latin sub = under, jugum = yoke). In Norse mythology, the god Odin's spear (named Gungnir) was made by the sons of Ivaldi. It had the special property that it never missed its mark. During the war with the Vanir, Odin symbolically threw Gungnir into the Vanir host. This practice of symbolically casting a spear into the enemy ranks at the start of a fight was sometimes used in historic clashes, to seek Odin's support in the coming battle. In Wagner's opera Siegfried, the haft of Gungnir is said to be from the "World-Tree" Yggdrasil. Sir James George Frazer in The Golden Bough noted the phallic nature of the spear and suggested that in the Arthurian legends the spear or lance functioned as a symbol of male fertility, paired with the Grail (as a symbol of female fertility). The term spear is also used (in a somewhat archaic manner) to describe the male line of a family, as opposed to the distaff or female line.
A spear is a pole weapon consisting of a shaft, usually of wood, with a pointed head. The head may be simply the sharpened end of the shaft itself, as is the case with fire hardened spears, or it may be made of a more durable material fastened to the shaft, such as flint, obsidian, iron, steel or bronze. The most common design for hunting or combat spears since ancient times has incorporated a metal spearhead shaped like a triangle, lozenge, or leaf. The heads of fishing spears usually feature barbs or serrated edges. The word spear comes from the Old English spere, from the Proto-Germanic speri, from a Proto-Indo-European root *sper- "spear, pole". Spears can be divided into two broad categories: those designed for thrusting in melee combat and those designed for throwing (usually referred to as javelins). The spear has been used throughout human history both as a hunting and fishing tool and as a weapon. Along with the axe, knife and club, it is one of the earliest and most important tools developed by early humans. As a weapon, it may be wielded with either one hand or two. It was used in virtually every conflict up until the modern era, where even then it continues on in the form of the fixed bayonet, and is probably the most commonly used weapon in history. Spear manufacture and use is not confined to humans. It is also practiced by the western chimpanzee. Chimpanzees near Kédougou, Senegal have been observed to create spears by breaking straight limbs off trees, stripping them of their bark and side branches, and sharpening one end with their teeth. They then used the weapons to hunt galagos sleeping in hollows. 
Archaeological evidence found in present-day Germany documents that wooden spears have been used for hunting since at least 400,000 years ago, and a 2012 study from the site of Kathu Pan in South Africa suggests that hominids, possibly Homo heidelbergensis, may have developed the technology of hafted stone-tipped spears in Africa about 500,000 years ago. Wood does not preserve well, however, and Craig Stanford, a primatologist and professor of anthropology at the University of Southern California, has suggested that the discovery of spear use by chimpanzees probably means that early humans used wooden spears as well, perhaps, five million years ago. From circa 200,000 BCE onwards, Middle Paleolithic humans began to make complex stone blades with flaked edges which were used as spear heads. These stone heads could be fixed to the spear shaft by gum or resin or by bindings made of animal sinew, leather strips or vegetable matter. During this period, a clear difference remained between spears designed to be thrown and those designed to be used in hand-to-hand combat. By the Magdalenian period (c. 15,000–9500 BCE), spear-throwers similar to the later atlatl were in use. The spear is the main weapon of the warriors of Homer's Iliad. The use of both a single thrusting spear and two throwing spears are mentioned. It has been suggested that two styles of combat are being described; an early style, with thrusting spears, dating to the Mycenaean period in which the Iliad is set, and, anachronistically, a later style, with throwing spears, from Homer's own Archaic period. In the 7th century BCE, the Greeks evolved a new close-order infantry formation, the phalanx. The key to this formation was the hoplite, who was equipped with a large, circular, bronze-faced shield (aspis) and a 7–9 ft (2.1–2.7 m) spear with an iron head and bronze butt-spike (doru). The hoplite phalanx dominated warfare among the Greek City States from the 7th into the 4th century BCE. 
The 4th century saw major changes. One was the greater use of peltasts, light infantry armed with spear and javelins. The other was the development of the sarissa, a two-handed pike 18 ft (5.5 m) in length, by the Macedonians under Phillip of Macedon and Alexander the Great. The pike phalanx, supported by peltasts and cavalry, became the dominant mode of warfare among the Greeks from the late 4th century onward until Greek military systems were supplanted by the Roman legions. In the pre-Marian Roman armies, the first two lines of battle, the hastati and principes, often fought with a sword called a gladius and pila, heavy javelins that were specifically designed to be thrown at an enemy to pierce and foul a target's shield. Originally the principes were armed with a short spear called a hasta, but these gradually fell out of use, eventually being replaced by the gladius. The third line, the triarii, continued to use the hasta. From the late 2nd century BCE, all legionaries were equipped with the pilum. The pilum continued to be the standard legionary spear until the end of the 2nd century CE. Auxilia, however, were equipped with a simple hasta and, perhaps, throwing spears. During the 3rd century CE, although the pilum continued to be used, legionaries usually were equipped with other forms of throwing and thrusting spear, similar to auxilia of the previous century. By the 4th century, the pilum had effectively disappeared from common use. In the late period of the Roman Empire, the spear became more often used because of its anti-cavalry capacities as the barbarian invasions were often conducted by people with a developed culture of cavalry in warfare. Muslim warriors used a spear that was called an az-zaġāyah. Berbers pronounced it zaġāya, but the English term, derived from the Old French via Berber, is "assegai". 
It is a pole weapon used for throwing or hurling, usually a light spear or javelin made of hard wood and pointed with a forged iron tip.The az-zaġāyah played an important role during the Islamic conquest as well as during later periods, well into the 20th century. A longer pole az-zaġāyah was being used as a hunting weapon from horseback. The az-zaġāyah was widely used. It existed in various forms in areas stretching from Southern Africa to the Indian subcontinent, although these places already had their own variants of the spear. This javelin was the weapon of choice during the Fulani jihad as well as during the Mahdist War in Sudan. It is still being used by Sikh Nihang in the Punjab as well as certain wandering Sufi ascetics (Derwishes). After the fall of the Western Roman Empire, the spear and shield continued to be used by nearly all Western European cultures. Since a medieval spear required only a small amount of steel along the sharpened edges (most of the spear-tip was wrought iron), it was an economical weapon. Quick to manufacture, and needing less smithing skill than a sword, it remained the main weapon of the common soldier. The Vikings, for instance, although often portrayed with axe or sword in hand, were armed mostly with spears, as were their Anglo-Saxon, Irish, or continental contemporaries. Broadly speaking, spears were either designed to be used in melee, or to be thrown. Within this simple classification, there was a remarkable range of types. For example, M. J. Swanton identified thirty different spearhead categories and sub-categories in early Saxon England. Most medieval spearheads were generally leaf-shaped. 
Notable types of early medieval spears include the angon, a throwing spear with a long head similar to the Roman pilum, used by the Franks and Anglo-Saxons, and the winged (or lugged) spear, which had two prominent wings at the base of the spearhead, either to prevent the spear penetrating too far into an enemy or to aid in spear fencing. Originally a Frankish weapon, the winged spear also was popular with the Vikings. It would become the ancestor of later medieval polearms, such as the partisan and spetum. The thrusting spear also has the advantage of reach, being considerably longer than other weapon types. Exact spear lengths are hard to deduce as few spear shafts survive archaeologically but 6–8 ft (1.8–2.4 m) would seem to have been the norm. Some nations were noted for their long spears, including the Scots and the Flemish. Spears usually were used in tightly ordered formations, such as the shield wall or the schiltron. To resist cavalry, spear shafts could be planted against the ground. William Wallace drew up his schiltrons in a circle at the Battle of Falkirk in 1298 to deter charging cavalry; this was a widespread tactic sometimes known as the "crown" formation. Throwing spears became rarer as the Middle Ages drew on, but survived in the hands of specialists such as the Catalan Almogavars. They were commonly used in Ireland until the end of the 16th century. Spears began to lose fashion among the infantry during the 14th century, being replaced by pole weapons that combined the thrusting properties of the spear with the cutting properties of the axe, such as the halberd. Where spears were retained they grew in length, eventually evolving into pikes, which would be a dominant infantry weapon in the 16th and 17th centuries. Cavalry spears were originally the same as infantry spears and were often used with two hands or held with one hand overhead. 
In the 12th century, after the adoption of stirrups and a high-cantled saddle, the spear became a decidedly more powerful weapon. A mounted knight would secure the lance by holding it with one hand and tucking it under the armpit (the couched lance technique) This allowed all the momentum of the horse and knight to be focused on the weapon's tip, whilst still retaining accuracy and control. This use of the spear spurred the development of the lance as a distinct weapon that was perfected in the medieval sport of jousting. In the 14th century, tactical developments meant that knights and men-at-arms often fought on foot. This led to the practice of shortening the lance to about 5 ft (1.5 m).) to make it more manageable. As dismounting became commonplace, specialist pole weapons such as the pollaxe were adopted by knights and this practice ceased. Spears were used first as hunting weapons amongst the ancient Chinese. They became popular as infantry weapons during the Warring States and Qin era, when spearmen were used as especially highly disciplined soldiers in organized group attacks. When used in formation fighting, spearmen would line up their large rectangular or circular shields in a shieldwall manner. The Qin also employed long spears (more akin to a pike) in formations similar to Swiss pikemen in order to ward off cavalry. The Han Empire would use similar tactics as its Qin predecessors. Halberds, polearms, and dagger axes were also common weapons during this time. Spears were also common weaponry for Warring States, Qin, and Han era cavalry units. During these eras, the spear would develop into a longer lance-like weapon used for cavalry charges. There are many words in Chinese that would be classified as a spear in English. The Mao is the predecessor of the Qiang. The first bronze Mao appeared in the Shang dynasty. This weapon was less prominent on the battlefield than the ge (dagger-axe). 
In some archaeological examples, two tiny holes or ears can be found in the blade of the spearhead near the socket; these holes were presumably used to attach tassels, much like modern-day wushu spears. In the early Shang, the Mao appears to have had a relatively short and narrow shaft, as opposed to the Mao of the later Shang and Western Zhou periods. Some Mao from this era are heavily decorated, as evidenced by a Warring States period Mao from the Ba Shu area. In the Han dynasty, the Mao and the Ji (戟, which can be loosely defined as a halberd) rose to prominence in the military. Notably, the number of iron Mao heads found exceeds the number of bronze heads. By the end of the Han dynasty (Eastern Han), the replacement of bronze by iron was complete and the bronze Mao had been rendered entirely obsolete. After the Han dynasty, toward the Sui and Tang dynasties, the Mao used by cavalry were fitted with much longer shafts, as mentioned above. During this era, the use of the Shuo (矟) was widespread among footmen. The Shuo can be likened to a pike or simply a long spear. After the Tang dynasty, the popularity of the Mao declined and it was replaced by the Qiang (枪). The Tang dynasty divided the Qiang into four categories: "一曰漆枪, 二曰木枪, 三曰白杆枪, 四曰扑头枪。" Roughly translated, the four categories are: Qi (lacquered) spears, wooden spears, Bai Gan (a kind of wood) spears and Pu Tou spears. The Qiang produced in the Song and Ming dynasties consisted of four major parts: spearhead, shaft, end spike and tassel. Many types of Qiang exist. Among them are cavalry Qiang the length of one zhang (eleven feet nine inches, or 3.58 m), Little-Flower Spears (Xiao Hua Qiang 小花枪) the length of a person with an arm extended above the head, double-hooked spears, single-hooked spears, ringed spears and many more. 
There is some confusion as to how to distinguish the Qiang from the Mao, as they are obviously very similar. Some say that a Mao is longer than a Qiang; others say the main difference lies in the stiffness of the shaft, the Qiang being flexible and the Mao stiff. Scholars seem to lean toward the latter explanation. Because of the difference in construction between the Mao and the Qiang, their usage also differed, though there is no definitive answer as to exactly what the differences were. Spears in Indian society were used in both missile and non-missile form, by both cavalry and foot-soldiers. Mounted spear-fighting was practised with a ten-foot, ball-tipped wooden lance called a bothati, the end of which was covered in dye so that hits could be confirmed. Spears were constructed from a variety of materials, such as the sang, made completely of steel, and the ballam, which had a bamboo shaft. The Rajputs wielded a type of infantry spear which had a club integrated into the spearhead and a pointed butt end. Other spears had forked blades, several spear-points, and numerous other innovations. One spear unique to India was the vita, or corded lance. Used by the Maratha army, it had a rope connecting the spear to the user's wrist, allowing the weapon to be thrown and pulled back. The Vel is a type of spear or lance that originated in Southern India, primarily used by Tamils. The hoko spear was used in ancient Japan sometime between the Yayoi period and the Heian period, but it became unpopular as early samurai often acted as horseback archers. Medieval Japan employed spears again for infantrymen, but it was not until the 11th century that samurai began to prefer spears over bows. 
Several polearms were used in the Japanese theatre: the naginata was a glaive-like weapon with a long, curved blade, popular among the samurai and the Buddhist warrior-monks and often used against cavalry; the yari was a longer polearm with a straight-bladed spearhead, which became the weapon of choice of both the samurai and the ashigaru (footmen) during the Warring States era. Mounted samurai used shorter yari for single combat, while ashigaru infantry used long yari (similar to the European pike) in massed combat formations. Filipino spears (sibat) were used as both a weapon and a tool throughout the Philippines. The sibat is also called a bangkaw (after the Bankaw Revolt), sumbling or palupad in the islands of Visayas and Mindanao. Sibat are typically made from rattan, either with a sharpened tip or a head made from metal. These heads may be single-edged, double-edged or barbed. Styles vary according to function and origin; for example, a sibat designed for fishing may not be the same as one used for hunting. The spear was used as the primary weapon in expeditions and battles against neighbouring island kingdoms, and it became famous during the 1521 Battle of Mactan, in which the chieftain Lapu-Lapu of Mactan fought against Spanish forces led by Ferdinand Magellan, who was killed in the battle. As advanced metallurgy was largely unknown in pre-Columbian America outside of Western Mexico and South America, most weapons in Meso-America were made of wood or obsidian. This did not make them less lethal, as obsidian can be sharpened to an edge many times keener than steel. Meso-American spears varied greatly in shape and size. While the Aztecs preferred the sword-like macuahuitl for fighting, the advantage of a far-reaching thrusting weapon was recognised, and a large portion of the army would carry the tepoztopilli into battle. 
The tepoztopilli was a pole-arm, and to judge from depictions in various Aztec codices, it was roughly the height of a man, with a broad wooden head about twice the length of the user's palm or shorter, edged with razor-sharp obsidian blades deeply set in grooves carved into the head and cemented in place with bitumen or plant resin as an adhesive. The tepoztopilli could both thrust and slash effectively. Throwing spears also were used extensively in Meso-American warfare, usually with the help of an atlatl. Throwing spears were typically shorter and more streamlined than the tepoztopilli, and some had obsidian edges for greater penetration. Most spears made by Native Americans were created from materials available in their surroundings: usually the shaft was a wooden stick, while the head was fashioned from an arrowhead, a piece of metal such as copper, or a sharpened bone. Spears were a preferred weapon for many, since they were inexpensive to create, their use could easily be taught, and they could be made quickly and in large quantities. Native Americans used the buffalo pound method to kill buffalo, in which a hunter dressed as a buffalo and lured the animals into a ravine where other hunters were hiding; once a buffalo appeared, the hunters would kill it with spears. A variation of this technique, called the buffalo jump, involved a runner leading the animals toward a cliff. As the buffalo neared the cliff, other members of the tribe would leap out from behind rocks or trees and frighten them over the edge. Other hunters would be waiting at the bottom of the cliff to spear the animals to death. The development of both the long, two-handed pike and gunpowder in Renaissance Europe saw an ever-increasing focus on integrated infantry tactics. Infantry not armed with these weapons carried variations on the pole-arm, including the halberd and the bill. 
Ultimately, the spear proper was rendered obsolete on the battlefield. Its last flowering was the half-pike or spontoon, a shortened version of the pike carried by officers and NCOs. While originally a weapon, this came to be seen more as a badge of office, or a leading staff by which troops were directed. The half-pike, sometimes known as a boarding pike, was also used as a weapon on board ships until the 19th century. At the start of the Renaissance, cavalry remained predominantly lance-armed: gendarmes with the heavy knightly lance and lighter cavalry with a variety of lighter lances. By the 1540s, however, pistol-armed cavalry called reiters were beginning to make their mark. Cavalry armed with pistols and other light firearms, along with a sword, had virtually replaced lance-armed cavalry in Western Europe by the beginning of the 17th century. One of the earliest ways humans killed prey, hunting game with a spear, along with spear fishing, continues to this day both as a means of catching food and as a cultural activity. Some of the most common prey of early humans were megafauna such as mammoths, which were hunted with various kinds of spear; one theory for the Quaternary extinction event is that most of these animals were hunted to extinction by humans with spears. Even after the invention of other hunting weapons such as the bow, the spear continued to be used, either as a projectile or in the hand, as was common in boar hunting. Spear hunting fell out of favour in most of Europe in the 18th century but continued in Germany, enjoying a revival in the 1930s. Spear hunting is still practised in the United States. Animals taken are primarily wild boar and deer, although trophy animals such as cats and big game as large as a Cape buffalo are hunted with spears. Alligators are hunted in Florida with a type of harpoon. The Celts would symbolically destroy a dead warrior's spear, either to prevent its use by another or as a sacrificial offering. 
In classical Greek mythology, Zeus' bolts of lightning may be interpreted as a symbolic spear. Some would extend that interpretation to the spear frequently associated with Athena, reading her spear as a symbolic connection to some of Zeus' power beyond the Aegis once he rose to replace other deities in the pantheon; Athena was, however, depicted with a spear before that change in the myths. Chiron's wedding gift to Peleus, when he married the nymph Thetis, was an ashen spear, as the straight grain of ash made it an ideal wood for a spear shaft. The Romans and their early enemies would force prisoners to walk underneath a 'yoke of spears', which humiliated them. The yoke would consist of three spears, two upright with a third tied between them at a height which made the prisoners stoop. It has been suggested that the arrangement has a magical origin, a way to trap evil spirits. The word subjugate has its origins in this practice (from Latin sub = under, jugum = yoke). In Norse mythology, the god Odin's spear, named Gungnir, was made by the sons of Ivaldi. It had the special property that it never missed its mark. During the war with the Vanir, Odin symbolically threw Gungnir into the Vanir host. This practice of symbolically casting a spear into the enemy ranks at the start of a fight was sometimes used in historic clashes, to seek Odin's support in the coming battle. In Wagner's opera Siegfried, the haft of Gungnir is said to be from the "World-Tree" Yggdrasil. Sir James George Frazer, in The Golden Bough, noted the phallic nature of the spear and suggested that in the Arthurian legends the spear or lance functioned as a symbol of male fertility, paired with the Grail as a symbol of female fertility. The term spear is also used (in a somewhat archaic manner) to describe the male line of a family, as opposed to the distaff or female line.
The Saints of Shaivism: In ancient India, Saivism took shape as a distinct and major religious movement, mostly in the south, due to the untiring work of many great saints who were dedicated to Siva in every conceivable way and showed exemplary devotion to their beloved Lord through their lives and works of great merit. Outstanding among them were the four great teachers, namely Manikkavachaka, Appar, Jnanasambhanda and Sundarmurthy. They are said to be the originators of the four main paths of Saivism, namely the sat marga, the dasa marga, the satpura marga and the saha marga. Mention may also be made of Auvai, a famous woman saint from Tamilnadu who composed many devotional poems of the highest fervor. Sakyanayanar was originally a Buddhist monk who became an ardent devotee of Siva in the later part of his life; he was blessed with a vision of Siva with Parvathi. Nandanar was an untouchable by birth, but was accepted as a great devotee of Siva and allowed to enter the famous Chidambaram temple because of his great devotion. There were many such saints who contributed greatly to the spread of Saivism in ancient India through their devotion and selfless acts of service.
William Shakespeare Essay. Creating an essay is an extremely interesting and useful occupation. The essay genre allows creative freedom and imaginative manoeuvre: creating an essay means thinking about something we have once heard, read or experienced. An essay is often about expressing emotions and imagery in a straightforward way. Shakespeare was baptised on 26 April 1564 in Stratford-upon-Avon. His father was a successful local businessman and his mother was the daughter of a landowner. Shakespeare is widely regarded as the greatest writer in the English language and the world's pre-eminent dramatist. He is often called England's national poet and nicknamed the Bard of Avon. He wrote about 38 plays, 154 sonnets, two long narrative poems, and a few other verses, of which the authorship of some is uncertain. His plays have been translated into every major living language and are performed more often than those of any other playwright. Marriage and career: Shakespeare married Anne Hathaway at the age of 18; she was eight years older than him. They had three children: Susanna, and twins Hamnet and Judith. After his marriage, records of his life become very scarce, but he is thought to have spent most of his time in London writing and performing in his plays. Between 1585 and 1592, he began a successful career in London as an actor, writer, and part-owner of a playing company called the Lord Chamberlain's Men, later known as the King's Men. Retirement and death: Around 1613, at the age of 49, he retired to Stratford, where he died three years later. Few records of Shakespeare's private life survive. He died on 23 April 1616, at the age of 52, within a month of signing his will, a document which he begins by describing himself as being in "perfect health". In his will, Shakespeare left the bulk of his large estate to his elder daughter Susanna. 
His work: Shakespeare produced most of his known work between 1589 and 1613. His early plays were mainly comedies and histories, and these works remain regarded as some of the best produced in those genres. He then wrote mainly tragedies until about 1608, including Hamlet, Othello, King Lear, and Macbeth, considered some of the finest works in the English language. In his last phase, he wrote tragicomedies, also known as romances, and collaborated with other playwrights. Shakespeare's plays remain highly popular today and are constantly studied, performed, and reinterpreted in diverse cultural and political contexts throughout the world.
The mound is the second largest Adena mound in West Virginia and is believed to have been developed between 250 and 150 B.C. Some evidence suggests the site was used by Native Americans as late as 1650. The Criel Mound was originally one of 50 mounds and prehistoric earthworks that extended from present-day Charleston, West Virginia, to near Institute. Most were destroyed during the industrialization of the Kanawha Valley, which followed the completion of the Chesapeake & Ohio Railway in 1872. The nearby Sunset Mound and the Dunbar Mound at Shawnee Park in Dunbar, West Virginia, also survive. The South Charleston Mound was significantly altered in the late 1800s when a race track for horses was built around its base. Its top was then flattened to accommodate a podium for race judges. The mound originally measured about 175 feet in diameter and 35 feet in height; it is now approximately 140 feet in diameter and 25 feet in height. The mound was excavated by Professor P. W. Norris of the Smithsonian Institution in 1883-84. Norris provided the following description of the excavation in "Ancient Works Near Charleston" for the U.S. Bureau of Ethnology: "At the depth of three feet, in the center of the shaft, some human bones were discovered, doubtless parts of a skeleton said to have been dug up before or at the time of the construction of the judges' stand. At the depth of four feet, in a bed of hard earth composed of mixed clay and ashes, were two skeletons, both lying extended on their backs, heads south, and feet near the center of the shaft. Near the heads lay two celts, two stone hoes, one lance head, and two disks." At a depth of 31 feet, numerous other skeletons were found, including a burial vault with the remains of eleven persons, believed to be an Adena leader and ten of his servants. Numerous artifacts, including various jewelry and weapons, were found during the excavation.
From its inception, China’s socialist state defined gender equality primarily as women’s right to equal work and equal pay. Women’s liberation was understood as full participation in paid, public work. To achieve this end, women were to be emancipated from domestic drudgery, and pregnant women and young mothers were to enjoy state protection so that they could combine the raising of children with productive work. Issues of concern to Western second and third-wave feminists – sexual and reproductive autonomy, recognition of the diversity of gendered experiences, freedom from oppressive gender norms – were seen as secondary, if they were perceived at all. Women who grew up in the Mao years often embraced this work-centred view of women’s liberation, even though many recognised that it fell short of its declared aims. And state feminism under Mao did indeed improve the lot of Chinese working women. Women’s employment rates under Mao were high by international standards and the fall in women’s employment since the Mao years – from close to 90 percent in the 1970s to 64 percent in 2014 – is a clear indication of China’s retreat from gender equality. Not that things were perfect under Mao: “equal pay for equal work” remained an empty slogan as long as most women were employed in low-pay sectors such as service and light industry, while men worked in heavy industry. Leadership positions were almost universally reserved for men, while cooking, cleaning, and childcare remained female domains. In state enterprises, socialised childcare and canteens reduced the time women spent on household chores, but not all women had access to such services. Nonetheless, there can be little doubt that urban women in Maoist China saw marked improvements in their lives, not only compared to the situation before 1949 but also compared to other East Asian countries such as South Korea, Taiwan, Hong Kong, and Japan. 
Urban women’s gains, however, were not shared by the more than 80 percent of Chinese women who lived in the countryside. True, rural women, too, were encouraged to step out of the confines of the household and to participate in public work: after the collectivisation of agriculture in 1956-57, all able-bodied women were expected to work alongside men in agriculture and infrastructure. In order to do so, however, they first had to be liberated from domestic work – work that was far more backbreaking and time-consuming than most modern readers appreciate. Despite half a century of treaty-port industrialisation, despite massive state investment in modern industry after 1949, Mao-era China remained a poor and mostly agrarian country, and most of the country’s wealth was channeled to the cities. In central Shaanxi in the 1970s – an area of average wealth, with good rail and road connections to China’s developed coastal regions – people lived material lives that were in most aspects unchanged from the early twentieth (and, in fact, the nineteenth) century. New socialist commodities introduced in the 1950s included flashlights, rubber boots, thermos flasks, enamel wash basins, calendar posters, and bar soap. These brightened people’s daily lives but left women’s work routines mostly unchanged. All food with the exception of salt, vinegar, and soy sauce was produced locally. Processed food was virtually unknown; women prepared noodles, steamed rolls, pickles, and condiments from scratch at home. Under these conditions, preparing a simple meal took hours: grain had to be milled and winnowed manually, fuel was gathered in the fields, water was drawn from local wells, etc. Clothing a family was even more time-consuming. In theory, the state rationing system ensured that every person in the country – men and women, rural and urban – had access to cheap factory-made cotton cloth. However, rural rations fell short of the most basic needs. 
In most years, per capita rations were around 5 meters (60 cm wide and of poor quality), which is enough for one summer suit but not for winter gear, underwear, cloth shoes, socks, and bedding. Moreover, many rural people had so little cash income that they could not afford the cloth they were entitled to under the rationing system. In consequence, women in China’s cotton-producing regions (which include China’s coastal plains and the Yangzi and Yellow River valleys, in other words, China’s most densely populated areas) continued to spin and weave well into the collective period – in many cases, until the very end of the Mao years. Since the cotton harvest belonged to the state, they could do so only with cotton obtained through illicit means: pilfered from the collective fields, obtained on the black market, or secretly handed out by collective units that hid the harvest from the state. Textile work alone could keep a woman busy for the best part of the year: reports from the early years of the PRC estimate that a woman who was the sole textile provider for a family of four spent six months every year carding cotton, spinning yarn, weaving cloth, and making bedding, shoes, and clothing. An entire generation of rural Chinese women was thus caught between two labour regimes. Participation in agricultural production was non-negotiable: women with small children could take off a shift once in a while, but most women were expected to work three daily shifts in the fields. Women’s participation in farm work was crucial to the state’s development aims: industrial growth depended on the mass production of cheap agricultural inputs for the factories; in the absence of capital investment (which was reserved for urban industry), increases in farm output could be achieved only by mobilising more labour power, and rural women were China’s largest untapped labour source. Women thus had no choice but to work full-time in agriculture. 
At the same time, the old labour regime, in which women worked at home to feed and clothe their families, remained largely in place. The Soviet Union and its Eastern European allies had dealt with similar situations by offering its rural women an implicit social contract: in exchange for their participation in paid, public work, women would be liberated from household drudgery in its myriad forms. This was achieved partly by socialising childcare and providing meals in canteens, partly by eradicating feudal customs that burdened women with senseless and demeaning work. For the most part, however, women were freed for socialist work by the planned provision of ready-made consumer goods that reduced the amount of work that women performed at home. China, being much poorer than the Soviet Union, could not follow that path. Women were mobilised for socialised work in collective agriculture before their domestic workloads were reduced. The consequences for individual women were severe. An entire generation of rural Chinese women worked full shifts in collective agriculture and full shifts at home. These women also had more children than any generation before or after them: they reached reproductive age at a time when child mortality was rapidly declining but before contraception was widely available. Women dealt with their double and triple burden by working longer hours, up to and beyond the limits of endurance. They rose before men and children and went on working long after the men had gone to bed. In contrast to men, who took naps in the afternoon, women filled their short breaks with textile work. Many women rested only when they fell sick—but even then they often went back to work before full recovery. 
Their domestic work was not recognised, since from the state’s point of view, household work was reproduction and did not count as productive “work.” Yet to a significant degree, it was rural women’s unrecognised, invisible work that laid the foundation for China’s current prosperity. Jacob Eyferth is Associate Professor in Chinese History in East Asian Languages and Civilisations at the University of Chicago. His research interests include Social and cultural history of twentieth-century China, in particular rural China; history of work, technology, gender, and everyday life. He is author of Eating Rice from Bamboo Roots: The Social History of a Community of Handicraft Papermakers in Rural Sichuan, 1920–2000 (Harvard University Press, 2009). Image credit: Jacob Eyferth.
From its inception, China’s socialist state defined gender equality primarily as women’s right to equal work and equal pay. Women’s liberation was understood as full participation in paid, public work. To achieve this end, women were to be emancipated from domestic drudgery, and pregnant women and young mothers were to enjoy state protection so that they could combine the raising of children with productive work. Issues of concern to Western second- and third-wave feminists – sexual and reproductive autonomy, recognition of the diversity of gendered experiences, freedom from oppressive gender norms – were seen as secondary, if they were perceived at all. Women who grew up in the Mao years often embraced this work-centred view of women’s liberation, even though many recognised that it fell short of its declared aims. And state feminism under Mao did indeed improve the lot of Chinese working women. Women’s employment rates under Mao were high by international standards, and the fall in women’s employment since the Mao years – from close to 90 percent in the 1970s to 64 percent in 2014 – is a clear indication of China’s retreat from gender equality. Not that things were perfect under Mao: “equal pay for equal work” remained an empty slogan as long as most women were employed in low-pay sectors such as service and light industry, while men worked in heavy industry. Leadership positions were almost universally reserved for men, while cooking, cleaning, and childcare remained female domains. In state enterprises, socialised childcare and canteens reduced the time women spent on household chores, but not all women had access to such services. Nonetheless, there can be little doubt that urban women in Maoist China saw marked improvements in their lives, not only compared to the situation before 1949 but also compared to other East Asian countries such as South Korea, Taiwan, Hong Kong, and Japan.
Urban women’s gains, however, were not shared by the more than 80 percent of Chinese women who lived in the countryside. True, rural women, too, were encouraged to step out of the confines of the household and to participate in public work: after the collectivisation of agriculture in 1956–57, all able-bodied women were expected to work alongside men in agriculture and infrastructure. In order to do so, however, they first had to be liberated from domestic work – work that was far more backbreaking and time-consuming than most modern readers appreciate. Despite half a century of treaty-port industrialisation, and despite massive state investment in modern industry after 1949, Mao-era China remained a poor and mostly agrarian country, and most of the country’s wealth was channelled to the cities. In central Shaanxi in the 1970s – an area of average wealth, with good rail and road connections to China’s developed coastal regions – people lived material lives that were in most respects unchanged from the early twentieth (and, in fact, the nineteenth) century. New socialist commodities introduced in the 1950s included flashlights, rubber boots, thermos flasks, enamel wash basins, calendar posters, and bar soap. These brightened people’s daily lives but left women’s work routines mostly unchanged. All food, with the exception of salt, vinegar, and soy sauce, was produced locally. Processed food was virtually unknown; women prepared noodles, steamed rolls, pickles, and condiments from scratch at home. Under these conditions, preparing a simple meal took hours: grain had to be milled and winnowed manually, fuel gathered in the fields, and water drawn from local wells. Clothing a family was even more time-consuming. In theory, the state rationing system ensured that every person in the country – men and women, rural and urban – had access to cheap factory-made cotton cloth. However, rural rations fell short of the most basic needs.
In most years, per capita rations were around 5 metres of cloth (60 cm wide and of poor quality) – enough for one summer suit, but not for winter gear, underwear, cloth shoes, socks, and bedding. Moreover, many rural people had so little cash income that they could not afford even the cloth they were entitled to under the rationing system. As a consequence, women in China’s cotton-producing regions (which include the coastal plains and the Yangzi and Yellow River valleys – in other words, China’s most densely populated areas) continued to spin and weave well into the collective period, in many cases until the very end of the Mao years. Since the cotton harvest belonged to the state, they could do so only with cotton obtained through illicit means: pilfered from the collective fields, bought on the black market, or secretly handed out by collective units that hid the harvest from the state. Textile work alone could keep a woman busy for the best part of the year: reports from the early years of the PRC estimate that a woman who was the sole textile provider for a family of four spent six months every year carding cotton, spinning yarn, weaving cloth, and making bedding, shoes, and clothing. An entire generation of rural Chinese women was thus caught between two labour regimes. Participation in agricultural production was non-negotiable: women with small children could take off a shift once in a while, but most women were expected to work three daily shifts in the fields. Women’s participation in farm work was crucial to the state’s development aims: industrial growth depended on the mass production of cheap agricultural inputs for the factories, and in the absence of capital investment (which was reserved for urban industry), increases in farm output could be achieved only by mobilising more labour power – and rural women were China’s largest untapped labour source. Women thus had no choice but to work full-time in agriculture.
At the same time, the old labour regime, in which women worked at home to feed and clothe their families, remained largely in place. The Soviet Union and its Eastern European allies had dealt with similar situations by offering their rural women an implicit social contract: in exchange for their participation in paid, public work, women would be liberated from household drudgery in its myriad forms. This was achieved partly by socialising childcare and providing meals in canteens, and partly by eradicating feudal customs that burdened women with senseless and demeaning work. For the most part, however, women were freed for socialist work by the planned provision of ready-made consumer goods that reduced the amount of work they performed at home. China, being much poorer than the Soviet Union, could not follow that path. Women were mobilised for socialised work in collective agriculture before their domestic workloads were reduced. The consequences for individual women were severe. An entire generation of rural Chinese women worked full shifts in collective agriculture and full shifts at home. These women also had more children than any generation before or after them: they reached reproductive age at a time when child mortality was rapidly declining but before contraception was widely available. Women dealt with their double and triple burden by working longer hours, up to and beyond the limits of endurance. They rose before the men and children and went on working long after the men had gone to bed. In contrast to the men, who took naps in the afternoon, women filled their short breaks with textile work. Many women rested only when they fell sick – but even then they often went back to work before full recovery.
Their domestic work went unrecognised, since from the state’s point of view, household work was reproduction and did not count as productive “work.” Yet to a significant degree, it was rural women’s unrecognised, invisible work that laid the foundation for China’s current prosperity. Jacob Eyferth is Associate Professor of Chinese History in East Asian Languages and Civilisations at the University of Chicago. His research interests include the social and cultural history of twentieth-century China, in particular rural China, and the history of work, technology, gender, and everyday life. He is the author of Eating Rice from Bamboo Roots: The Social History of a Community of Handicraft Papermakers in Rural Sichuan, 1920–2000 (Harvard University Press, 2009). Image credit: Jacob Eyferth.
The bride’s henna ritual was the principal rite of passage for women in Yemen. This ritual was an important stage in preparing the bride for her new life, as she changed from a girl-youth into a man’s wife, became separated from her family, and went to live in her husband’s home. It expressed a rigid gender separation and a non-egalitarian system in which femininity was shackled in structural inferiority. After immigrating to Israel and becoming exposed to a Western society with egalitarian messages, Yemenite women became less dependent and subservient and more empowered. However, they also maintained traditional thought patterns. The change in their status, as well as the mixed trends towards change and preservation in communal tradition, influenced the performance of the henna ritual in Israel, and it became syncretic. During the last few decades, as part of Mizrahi young people’s return to their roots, the custom of holding a henna ritual has been revived among young Yemenites in Israel, mainly as a symbol of their ethnic identity. Today, however, the ritual is characterized by a breaking of the social order and hierarchy. It is focused on the couple, and its importance as a female rite of passage has diminished.
In this chapter, we explain Mary’s troubled relationship with alcohol and drugs, why she and other people on the reservation drank, and what ultimately cured her. In brief, Mary turned to alcohol and drugs during her years on the Pine Ridge reservation, after Catholic boarding school, as compensation for hostility and racial injustice. She and others on the reservation drank because they grew up in poverty, without running water or electricity, without jobs, and without other outlets. In other words, people drank because they had no other opportunity for personal fulfilment in a society that was predominantly white and racially unjust. Within the warrior memory of Native Americans, men on the reservation believed that only with the assistance of strong drink would they find a glorious death; very often, men drank to compensate for the strains of home life, while women felt like slaves in their own families when the men came home drunk after hard jobs. Mary Crow Dog, for her part, was cured by AIM, which brought new freedom and new promises for Native Americans. Alcohol and drugs were not her only means of compensation, however: she was also shoplifting in the period after Catholic boarding school. Many Indians found an addiction in alcohol and drugs, but the author states that this was a problem brought by the “white man” (p. 143). By these lights, only AIM was powerful enough to give Native Americans new opportunities. For such activists, alcohol and drugs were the weak way out; they chose battle and struggle over a passive lifestyle, and that is why Wounded Knee, 1973, must be seen as a triumph over passive strategies like alcohol and drugs.
María Eugenia Choque, an Aymara woman, is teaching indigenous women how to assume decision-making positions in the resurgent ayllu system of governance in the Bolivian Andes.
The New Idea
María Eugenia Choque is training indigenous women to assume positions of power equal to those of men in the Aymara system of government, called the ayllu. Working through an old system of women's organizations called the Confederation of Women, she creates forums where women learn to help each other speak out and question authority without fear of being ridiculed. María Eugenia teaches them to play soccer in order to learn how to compete and the value of working together. She is finding that women who learn leadership roles in the ayllu maintain their new habits even when their families are forced to migrate to cities during periods of drought or other changes. María Eugenia confronts head-on the macho culture that does not tolerate or value women's participation. Based on her conviction that gender roles are learned, she believes that if men and women learn to work together, they can eventually change patterns of inequality. While she trains the women, she also works with the men to help them understand that local government that includes women can be a benefit to the community. Some researchers, including María Eugenia, are convinced that before the Incas invaded what is now Bolivia, the indigenous Aymara people included their women on a more equal footing with the men in their traditional system of local governance, the ayllu. Land is still held in common, and leadership is rotated among the families of the ayllu communities. Since the Inca empire and the subsequent Spanish conquest, the Aymara have been subjects in their own land. They have faced continuous racial discrimination and have also been pushed aside politically.
Many Bolivian communities, beginning in 1952, have marginalized the ayllu local governments by introducing a new official system based on popular vote, political parties, and labor unions. During 500 years of colonial domination, the low status of the Aymara has been intensified for indigenous women, who are subjects not only of the colonizers but also of their own husbands, brothers, and fathers. They have suffered physical abuse, and they have been almost completely excluded from the political process: they are not represented by the labor unions or political parties, and they are excluded from decision making in the ayllu system of government in their own communities. Women are permitted to take leadership positions within the Confederation of Women, but in meetings that involve the whole community, women become silent and sit on the floor while the men sit on chairs and benches. In the Aymara culture, a woman is seen as having identity only in association with a man: men speak for women. This remains true even though many men migrate while the women typically maintain schools and homes and tend to a family's agriculture and grazing, and even though there is a high incidence of single mothers with no laws to compel child support. At the end of the day, the decisions are made by a son or uncle or brother, if not a husband. Such a system means that women are in the position of adjusting to decisions that affect them but in which they had no voice. A health center, for example, might be scheduled by the male ayllu leaders to be open during hours when all the women are preparing food. Though a better alternative might be obvious, women do not develop the skills to represent themselves alongside men: how to identify a problem, propose a solution, even raise their hands, much less persist in the face of opposition. Recent legal and political developments provide a greater level of support for women and indigenous communities generally.
New laws prohibit violence against women and protect their right to own property. Women are directly affected by Bolivia's new Popular Participation Law, under which indigenous groups that are officially recognized, as the Aymara are, have a new level of autonomy to create their own projects and receive government funding directly. However, the task of creating the mechanisms to implement these laws remains. Ashoka Fellow Carlos Mamani has been instrumental in securing political legitimacy for the ayllu and strengthening their political skills. María Eugenia is addressing the obstacles that are specific to women. She began by identifying some 5,000 Aymara women who were good candidates to be potential leaders. They were the women already known to be fine weavers or to raise the best sheep. Some owned small businesses or were teachers. They became the core participants in a series of 100 workshops that María Eugenia has conducted with existing women's and neighborhood groups throughout the Altiplano of Bolivia, part of the Confederation of Women. In the workshops she teaches the women to speak up, to question authority, and to assume leadership within their own groups. She shows them how to nominate each other for positions in the group and how to validate each other's comments and observations during discussion. Then she teaches them how to compete by playing soccer! The qualities that get the ball into the net by passing it all the way down the field carry over into the ability to act as a team to reach a political goal. While María Eugenia is preparing the women in the rural farming communities, she also works with the men who are part of the ayllu leadership. She talks to them separately about how their decisions could become more efficient, even easier, with input from the women. Eventually they become willing to invite the women to meet with them.
She integrates the prepared women into the ayllu groups: the women raise important issues they want to discuss, often about family and health. If the men ignore them, María Eugenia has trained them to persist; if that fails, she shows them how to use the press and radio as alternative ways to make their voices heard. Their situation is newsworthy in Bolivia, where women have newly emerged in public life in the last ten years and the whole country is assimilating new laws for the protection of women. During 1994 and 1995, violence against women was outlawed, and there are attempts under way to pass a law requiring that one-third of all political appointments and party nominations be women. The discussion of women's rights is alive, stimulated by human rights and women's rights groups. In addition to training women, María Eugenia's workshops are also her method for multiplying her impact. When she finds a successful group of women who work well together and have become effective participants in their ayllu, she sends them into other communities as a model, accompanied when possible by a few men who have learned how to work with women in decision-making discussions. New workshops spread the training and educate women in their rights and their history, using the group oral-tradition methods that are part of their culture. The model has spread throughout Bolivia's altiplano and is replicable in ayllu communities throughout the Andes and the old Inca Empire. If current trends continue, it is estimated that within five years a civil society of more than 50,000 people in the Andean region will live under ayllu governance. As an indigenous Aymara woman, María Eugenia has witnessed and experienced discrimination first-hand from many levels of society and from the education system of Bolivia. Her parents did not read or write, and she did not grow up with books in the house. All of her education was in Spanish, but at home her family spoke only Aymara.
She still remembers the ordeal of writing her thesis at the university, thinking in Aymara while writing in Spanish. In the 1950s the family migrated to the city, and María Eugenia's mother worked in a factory where she became part of the labor movement. This led to problems with her father, who worked in the house. Both of her parents insisted that she study, and they also insisted that she dress in the clothes of the teachers, not allowing her to wear traditional dress. While in high school, María Eugenia decided to become a nun. The nuns told her that her mother would have to give up her indigenous dress and customs if María Eugenia was to be accepted as a nun. The majority of the students in the school were white, and the children with indigenous names sat in the back of the room. Eventually she left the school. Her studies and experiences in the course of acquiring a master's degree in Andean history have only confirmed and intensified her belief that indigenous women's status must be raised in Bolivia and elsewhere. During the last ten years, she has published twenty articles in professional journals, magazines, and newspapers, mainly focusing on indigenous women and politics. As a licensed social worker, she has seen the effect that marginalizing indigenous women has had on families and communities. She has participated in or been a guest speaker at over twenty fora, workshops, and television programs on a variety of subjects over the past twelve years. She was first an investigator and later the director of the Andean Oral History Workshop.
<urn:uuid:341003cf-533d-42ee-b8c9-1337bdde0df1>
CC-MAIN-2020-05
https://www.ashoka.org/en/fellow/maria-eugenia-choque
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250628549.43/warc/CC-MAIN-20200125011232-20200125040232-00136.warc.gz
en
0.980159
1,838
3.875
4
[ -0.059928048402071, 0.2661183476448059, 0.1695071905851364, 0.004522372502833605, -0.10189419239759445, 0.20057253539562225, 0.0939011424779892, 0.060886695981025696, 0.23542863130569458, 0.03172772005200386, -0.1933732032775879, -0.41169798374176025, -0.019473619759082794, 0.2700875997543...
2
María Eugenia Choque, an Aymara woman, is teaching indigenous women how to assume decision-making positions in the resurgent ayllu system of governance in the Bolivian Andes. The New Idea María Eugenia Choque is training indigenous women to assume positions of power equal to those of men in the Aymara system of government, called the ayllu. Working through an old system of women's organizations called the Confederation of Women, she creates forums where women learn to help each other speak out and question authority without fear of being ridiculed. María Eugenia teaches them to play soccer in order to learn how to compete and the value of working together. She is finding that women who learn leadership roles in the ayllu maintain their new habits even when their families are forced to migrate to cities during periods of drought or other changes.María Eugenia confronts head-on the macho culture that does not tolerate or value women's participation. Based on her concept that gender roles are learned, she believes that if men and women learn to work together, eventually they can change patterns of inequality. While she trains the women, she also works with the men to help them understand that local government that includes women can be a benefit to the community. Some researchers, including María Eugenia, are convinced that before the Incas invaded what is now Bolivia, the indigenous Aymara race included its women on a more equal footing with the men in their traditional system of local governance called the ayllu. Land is still held in common and leadership rotated among the families of the ayllu communities. Since the Inca empire and subsequent Spanish conquest, all the Aymara have been subjects in their land. They have faced continuous racial discrimination and have also been pushed aside politically. 
Many Bolivian communities, beginning in 1952, have marginalized the ayllu local governments by introducing a new official system based on popular vote, political parties and labor unions. During 500 years of colonial domination, the low status of the Aymara has been intensified for the indigenous women, who are subjects not only of the colonizers but also of their own husbands, brothers and fathers. They have suffered physical abuse; they have been almost completely excluded from the political process: they are not represented by the labor unions or political parties and they are excluded from decision making in the ayllu system of government in their own communities. Women are permitted to take leadership positions within the Confederation of Women, but in meetings that involve the whole community, women become silent and sit on the floor while the men sit on chairs and benches. In the Aymara culture, a woman is seen as having identity only in association with a man: men speak for women. This remains true even though many men migrate while the women typically maintain schools and homes and tend to a family's agriculture and grazing and even though there is a high incidence of single mothers with no laws to compel child support. At the end of the day, the decisions are made by a son or uncle or brother, if not a husband. Such a system means that women are in the position of adjusting to decisions that affect them but in which they had no voice. A health center, for example, might be scheduled by the men ayllu leaders to be open during hours that happen to be when all the women are preparing food. Though a better alternative might be obvious, women do not develop the skills to represent themselves alongside men-how to identify a problem, propose a solution, even raise their hands, much less persist in the face of opposition.Recent legal and political developments provide a greater level of support for women and indigenous communities generally. 
New laws prohibit violence against women and protect their right to own property. Women are directly affected by Bolivia's new Popular Participation Law, under which officially recognized indigenous groups, as the Aymara are, have a new level of autonomy to create their own projects and receive government funding directly. However, the task of creating the mechanisms to implement these laws remains. Ashoka Fellow Carlos Mamani has been instrumental in securing political legitimacy for the ayllu and strengthening their political skills. María Eugenia is addressing the obstacles that are specific to women. She began by identifying some 5,000 Aymara women who were promising candidates for leadership. They were the women already known to be fine weavers or to raise the best sheep. Some owned small businesses or were teachers. They became the core participants in a series of 100 workshops that María Eugenia has conducted with existing women's and neighborhood groups throughout the Altiplano of Bolivia, part of the Confederation of Women. In the workshops she teaches the women to speak up, to question authority, and to assume leadership within their own groups. She shows them how to nominate each other for positions in the group and how to validate each other's comments and observations during discussion. Then she teaches them how to compete by playing soccer: the skills that get the ball into the net, passing it to each other all the way down the field, carry over into the ability to act as a team toward a political goal. While María Eugenia is preparing the women in the rural farming communities, she also works with the men who are part of the ayllu leadership. She talks to them separately about how their decisions could become more efficient, even easier, with input from the women. Eventually they become willing to invite the women to meet with them.
She integrates the prepared women into the ayllu groups: the women raise important issues they want to discuss, often about family and health. If the men ignore them, María Eugenia has trained the women to persist; if that fails, she shows them how to use the press and radio as alternative ways to make their voices heard. Their situation is newsworthy in Bolivia, where women have newly emerged in public life in the last ten years and the whole country is assimilating new laws for the protection of women. During 1994 and 1995, violence against women was outlawed, and attempts are under way to pass a law requiring that one-third of all political appointments and party nominations go to women. The discussion of women's rights is alive, stimulated by human rights and women's rights groups. In addition to training women, María Eugenia's workshops are also her method for multiplying her impact. When she finds a successful group of women who work well together and have become effective participants in their ayllu, she sends them into other communities as a model, accompanied when possible by a few men who have learned how to work with women in decision-making discussions. New workshops spread the training and educate women in their rights and their history, using the group oral-tradition methods that are part of their culture. The model has spread throughout Bolivia's Altiplano and is replicable in ayllu communities throughout the Andes and the old Inca Empire. If current trends continue, it is estimated that within five years a civil society of more than 50,000 people in the Andean region will live under ayllu governance. As an indigenous Aymara woman, María Eugenia has witnessed and experienced discrimination first-hand at many levels of Bolivian society and its education system. Her parents did not read or write, and she did not grow up with books in the house. All of her education was in Spanish, but at home her family spoke only Aymara.
She still remembers the ordeal of writing her thesis at the university, thinking in Aymara while writing in Spanish. In the 1950s the family migrated to the city, and María Eugenia's mother worked in a factory, where she became part of the labor movement. This led to problems with her father, who worked in the house. Both of her parents insisted that she study, and they also insisted that she dress in the clothes of the teachers, not allowing her to wear traditional dress. While in high school, María Eugenia decided to become a nun. The nuns told her that her mother would have to give up her indigenous dress and customs if María Eugenia was to be accepted as a nun. The majority of the students in the school were white, and the children with indigenous names sat in the back of the room. Eventually she left the school. Her studies and experiences in acquiring a master's degree in Andean history have only confirmed and intensified her belief that indigenous women's status must be raised in Bolivia and elsewhere. Over the last ten years, she has published twenty articles in professional journals, magazines, and newspapers, mainly focusing on indigenous women and politics. As a licensed social worker, she has seen the effect that marginalizing indigenous women has had on families and communities. Over the past twelve years she has participated in or been a guest speaker at more than twenty forums, workshops, and television programs. She was first an investigator and later the director of the Andean Oral History Workshop.
Tuvia Bielski was born in Stankiewicze, in western Belorussia, in 1906. When Germany invaded the Soviet Union in June of 1941, Tuvia and his younger brother Zus vowed never to be caught by the Germans. Tuvia's extensive knowledge of the area saved his life, allowing him to move around frequently to avoid being captured by the Germans, who had a warrant for his arrest. In early 1942, Tuvia began hearing rumors about partisans, and decided that if he and his fellow Jews were to survive, he must acquire arms and organize all-Jewish resistance groups. Along with two of his brothers, Zus and Asael, Tuvia began organizing Jews. By May of 1942, Tuvia was in command of a small group, which by the end of the war had grown to 1,200 people, and was known as the Bielski otriad (otriad is the Russian word for a partisan detachment). Tuvia had focused on saving as many Jews as possible, and would accept any Jew into his group. Many came through the family of Konstantin Kozlovski, a non-Jew, who provided shelter for Jews escaping from the Novogrudok ghetto and worked with the partisans to free hundreds of Jews from the ghetto. The Bielski otriad carried out food raids, killed German collaborators, and sometimes joined with a Russian partisan group in anti-Nazi missions, such as burning the ripe wheat crop so the German soldiers couldn't collect and eat the wheat. Additionally, the Bielski otriad would seek out Jews in the ghetto willing to risk escape to the forest, and send in guides to help them. By the summer of 1943, Tuvia was the leader of 700 people. In the Naliboki forest, Tuvia set up a functioning community, with everyone working to support the community in a variety of ways. There was a hospital, classrooms for the children, a soap factory, a Turkish bath, tailors, butchers, and even a group of musicians who played at festivals. 
Beyond meeting the needs of its own members, the Bielski otriad was able to provide services to other partisan groups in exchange for food and arms. By the summer of 1944, the group had grown to 1,200. The group consisted mainly of the elderly, women, and children. Tuvia's group was the largest of the Jewish partisan groups. A high percentage of those he led survived, due to Tuvia's strong and effective leadership, and his determination to save as many Jews as possible. After the war, Tuvia moved first to Israel and later to the United States, where he died at age 81. The amazing story of the Bielski partisans was turned into the motion picture Defiance in 2009. Critical Thinking Questions - What obstacles and limitations did Jews face when considering resistance? - What pressures and motivations may have influenced Tuvia Bielski's decisions and actions? Are these factors unique to this history or universal? - How can societies, communities, and individuals reinforce and strengthen the willingness to stand up for others?
684
ENGLISH
1
World War One

Britain declared war on Germany on the 4th August 1914. Within two weeks the British Expeditionary Force (B.E.F.) was in action at Mons. They were driven back by superior German forces to the banks of the rivers Marne and Aisne, just fifty miles east of Paris. Although the B.E.F. had suffered heavy casualties, they prevented the Germans from capturing Paris, and as the Germans retreated some 60 miles, trench warfare became the norm. The Irish nation would supply three volunteer divisions to the war effort: the 10th (Irish), 16th (Irish) and 36th (Ulster) Divisions. Many other Irishmen were already serving in regular units of the Army, the Royal Navy and the Royal Flying Corps, later to become the Royal Air Force. The 10th (Irish) Division were the first to see action, at Gallipoli in August 1915. This poorly planned and ill-fated campaign would see them leave within two months, and the remainder of the Allies left the area by January 1916, having suffered horrendous casualties. Reported estimates were of 3,000 casualties within the Division, with more than 2,000 dead. The 36th (Ulster) Division first saw action in 1916 during the Battle of the Somme, which began on 1st July 1916 and lasted until 18th November 1916. The 36th (Ulster) Division suffered some 5,000 casualties, with approximately 2,000 dead. The overall casualty total for the British Army on 1st July was approximately 60,000, with approximately 20,000 dead. Most of the casualties occurred before lunchtime. The 16th (Irish) Division were involved in the Battle of Hulluch near Loos in April 1916, the week after the Easter Rebellion.
Moving on to the Somme, they succeeded in capturing the villages of Ginchy and Guillemont in early September, with around 4,000 casualties and 1,200 dead. The 10th Division were by then serving in the Balkans and would remain there until 1917, before transferring to Egypt and finishing the war in the Holy Land. The losses suffered by the 36th (Ulster) Division and 16th (Irish) Division could not be sustained by Irish volunteers. For the remainder of the war, both divisions would contain many conscripts from the British mainland. The 16th and 36th Divisions would fight alongside each other at the Battle of Messines in early June 1917, as the Allies tried to enlarge the area known as the Ypres Salient. Early success was tempered by the 3rd Battle of Ypres, better known as the Battle of Passchendaele, in July and August. Both divisions suffered heavily in the quagmire of mud and water. Once again the two divisions moved together, this time to France, where both took part in the Battle of Cambrai in November 1917. Initial successes, following the first large-scale use of tanks, were once again nullified by lack of support. The 1st and 2nd Battalions of the Irish regiments contained men who had been serving, along with reservists, at the outbreak of the war, and were known as the regular battalions. They had served in various divisions, but in February 1918 they moved to the 16th and 36th Divisions. Each division would now contain only four of the battalions formed in September 1914; the remainder would be disbanded and confined to history. The addition of the regular battalions ensured the two divisions retained their Irish character. The USA had declared war on Germany in April 1917 but only began to send men in large numbers in the spring of 1918. The Germans launched a massive attack in late March 1918 to try and win the war before the American troops arrived.
The Spring Offensive drove the Allies back approximately sixty miles, and the Germans were only stopped because they became over-stretched. Both the 16th and 36th Divisions were in line at St Quentin when the attack began. Both divisions again sustained heavy losses, to the extent that the 16th Division remained in name only and the 36th Division would see the war end with greatly reduced numbers. The Germans were halted by the end of April 1918. With the Allies becoming more tactically aware, they began to push the Germans back. From early June the Germans were in retreat, and 100 days later, on 11th November, the Armistice was signed. The British and Empire Forces had lost around one million men, and now, one hundred years later, those losses still haunt many families of those who were bereaved. Many men who survived would also have to endure poor health and live with the nightmares of what they had seen. 628 Victoria Crosses were awarded during World War 1. Private Robert Morrow won his Victoria Cross on 12th April 1915, just south of Messines in Belgium. Sadly he was killed on 26th April 1915. His Commanding Officer stated "he was a man devoid of fear". His actions, like those of all winners of this highest award for gallantry in the face of the enemy, were undertaken without thought of himself or the consequences. Robert Morrow V.C. © 2015-20
1,274
ENGLISH
1
I have met them at close of day Coming with vivid faces From counter or desk among grey Eighteenth-century houses. The leaders of both Unionism and Nationalism were personally loyal to the empire and sympathised with the objectives of the war, but they were also aware of the need to retain influence with the British government in order to shape the terms of the Home Rule Act, which had been passed but suspended for the duration of the war. But while support for the war effort united Irish Unionists, Redmond's decision to support Britain was strongly opposed by a small militant faction of the Irish Volunteers. It was this group which would go on to fight in the Easter Rising of April 1916. Redmond's decision to gamble his party's popularity on the war appeared successful. As in other European countries, the conflict, which 'Redmondites' depicted as a struggle for the freedom of other small nations like 'Catholic' Belgium, initially met with widespread enthusiasm. Many thousands of Irishmen served with the British forces, a great number of them as volunteers. But by early 1916 recruitment had declined sharply as the war became increasingly unpopular. Many nationalists resented the preferential treatment which Ulster's Unionist volunteers had received within the British army, while the growing fear of conscription undermined confidence in the Irish Party. As the Redmondite Nationalist Volunteers faded into obscurity, the separatist Irish Volunteers began to attract more support due to the growth of anti-war sentiment. It was against this background that plans for a rebellion took shape within the Irish Republican Brotherhood (IRB), a group also known as the 'Fenians'. Although the IRB was divided on the merits of a rising, a radical faction led by Tom Clarke and Sean MacDermott established a secret military council to plan the rising. They did not, as is sometimes thought, willingly seek martyrdom or a mystical 'blood sacrifice'.
Rather they felt that a heroic gesture was required to reawaken the spirit of militant Irish nationalism. They were motivated by frustration and pessimism rather than revolutionary optimism. They believed that the British government's resolution of the land and Home Rule questions, and the decline of Irish cultural identity, had almost extinguished true Irish nationality, rendering the Irish, like the Welsh and Scots, acquiescent subjects of the United Kingdom. They hoped for success, but believed that even failure was preferable to inaction, as it would reassert, and possibly reinvigorate, the long tradition of violent opposition to British authority, adding another historic date to the list of unsuccessful risings. The decision to rise was also based on the traditional Fenian dictum that England's difficulty was Ireland's opportunity. Fenians had long believed that only in time of war, with England distracted and a powerful European ally available, could they hope to mount a successful challenge to the superior might of the British empire.

Anti-war Irish Volunteers

The second key group involved in the rising was the anti-war Irish Volunteers, who had split from Redmond's Volunteers in 1914. They were led by Eoin MacNeill, a history professor, but the IRB secretly exercised considerable influence within the militia, controlling many of its leaders and officers. MacNeill and other moderate Volunteers opposed the idea of an unprovoked rebellion because they felt it had no realistic prospect of success. MacNeill argued that they should wait until a more opportune time, such as when Britain introduced conscription or suppressed the Volunteers, so that they could fight in self-defence with mass support.
The IRB responded to MacNeill's opposition by planning the rising without his knowledge, as the Volunteers remained essential to its hopes for a large-scale insurrection. The conspirators gradually broadened the military council by recruiting Volunteer leaders who did support the policy of insurrection. The most important of these was Patrick Pearse, a cultural nationalist and poet who ran his own Gaelic-speaking school. Pearse would ultimately become the public face of the Easter Rising: it was he who wrote much of the Proclamation and was declared president of the short-lived republic established by the revolutionaries. For this reason, Pearse's distinctive ideas, particularly the 'blood sacrifice' ideal, came to be identified with the rebellion as a whole. Deeply influenced by both Christianity and the pagan tradition of Irish sagas, Pearse's writings and poetry indicated an intense spiritual desire for a martyrdom which would redeem his nation and ensure his own immortality. He was not alone in such beliefs. Although much criticised for his violent rhetoric - 'bloodshed is a cleansing and a sanctifying thing and a nation which regards it as the final horror has lost its manhood' - Pearse's sacrificial rhetoric was echoed by young men, intellectuals and politicians intoxicated by the militarism of wartime Europe. James Connolly, a leading figure in Ireland's trade union and socialist movements, took part in the rising in an attempt to reconcile his Marxism with nationalism, an ideology which he had previously criticised. Like many revolutionary socialists, he had believed that international working-class solidarity would prevent such a war, and it may have been his disillusionment which led him to join forces with 'bourgeois' Irish separatism in the hope of sparking a wider revolution throughout war-weary Europe.
By the time he was co-opted to the military council in early 1916, his rhetoric had begun to resemble that of Pearse.

Build up to the rising

An insurrection with any real prospect of challenging British military control of Ireland required two elements to fall into place. First, the rebels needed a large supply of arms and ammunition. Although they had successfully made contact with Germany, the steamer sent to Ireland, the 'Aud', was intercepted by the British navy on Easter Saturday, dooming the rising to failure. The second crucial requirement was a successful mobilisation of the Irish Volunteers. The rebels were again foiled when Eoin MacNeill discovered their plans and issued a countermanding order instructing Irish Volunteers not to turn out for the 'manoeuvres' that had been arranged throughout the country on Easter Sunday. The secrecy with which the rising had been planned ensured that few Volunteers, even those who would willingly have taken part in an insurrection, knew what was really planned, and most remained at home. Although it was clear by Easter Sunday, both to the rebel leaders and to the British authorities who had finally uncovered the conspiracy, that a rising no longer had any chance of success, the military council decided to strike the following day.
I have met them at close of day
Coming with vivid faces
From counter or desk among grey
Eighteenth-century houses.

Both men were personally loyal to the empire and sympathised with the objectives of the war, but they were also aware of the need to retain influence with the British government so as to shape the terms of the Home Rule Act, which was passed but suspended for the duration of the war. But while support for the war effort united Irish Unionists, Redmond's decision to support Britain was strongly opposed by a small militant faction of the Irish Volunteers. It was this group which would go on to fight in the Easter Rising of April 1916. Redmond's decision to gamble his party's popularity on the war appeared successful. As in other European countries, the conflict, which 'Redmondites' depicted as a struggle for the freedom of other small nations like 'Catholic' Belgium, initially met with widespread enthusiasm. Irishmen served with the British forces in large numbers, many of them as volunteers. But recruitment soon declined sharply as the war became increasingly unpopular. Many nationalists resented the preferential treatment which Ulster's Unionist volunteers had received within the British army, while the growing fear of conscription undermined confidence in the Irish Party. As the Redmondite National Volunteers faded into obscurity, the separatist Irish Volunteers began to attract more support due to the growth of anti-war sentiment. It was against this background that plans for a rebellion took shape. The rebellion was planned by the Irish Republican Brotherhood (IRB), a secret society also known as the 'Fenians'. Although the IRB was divided on the merits of a rising, a radical faction led by Tom Clarke and Sean MacDermott established a secret military council to plan it. They did not, as is sometimes thought, willingly seek martyrdom or a mystical 'blood sacrifice'. 
Rather they felt that a heroic gesture was required to reawaken the spirit of militant Irish nationalism. They were motivated by frustration and pessimism rather than revolutionary optimism. They believed that the British government's resolution of the land and Home Rule questions, and the decline of Irish cultural identity, had almost extinguished true Irish nationality, rendering the Irish, like the Welsh and Scots, acquiescent subjects of the United Kingdom. They hoped for success, but believed that even failure was preferable to inaction, as it would reassert, and possibly reinvigorate, the long tradition of violent opposition to British authority, adding another historic date to the roll of earlier unsuccessful risings. The decision to rise was also based on the traditional Fenian dictum that England's difficulty was Ireland's opportunity. Fenians had long believed that only in time of war, with England distracted and a powerful European ally available, could they hope to mount a successful challenge to the superior might of the British empire. The second key group involved in the rising was the anti-war Irish Volunteers, who had split from Redmond's Volunteers in 1914. They were led by Eoin MacNeill, a history professor, but the IRB secretly exercised considerable influence within the militia, controlling many of its leaders and officers. MacNeill and other moderate Volunteers opposed the idea of an unprovoked rebellion because they felt it had no realistic prospect of success. MacNeill argued that they should wait until a more opportune time, such as when Britain introduced conscription or suppressed the Volunteers, so that they could fight in self-defence with mass support. 
The IRB responded to MacNeill's opposition by planning the rising without his knowledge, as the Volunteers remained essential to its hopes for a large-scale insurrection. The conspirators gradually broadened the military council by recruiting Volunteer leaders who did support the policy of insurrection. The most important of these was Patrick Pearse, a cultural nationalist and poet who ran his own Gaelic-speaking school. Pearse would ultimately become the public face of the Easter Rising; it was he who wrote much of the Proclamation and was declared president of the short-lived republic established by the revolutionaries. For this reason Pearse's distinctive ideas, particularly the 'blood sacrifice' ideal, came to be identified with the rebellion as a whole. Deeply influenced by both Christianity and the pagan tradition of Irish sagas, Pearse's writings and poetry indicated an intense spiritual desire for a martyrdom which would redeem his nation and ensure his own immortality. He was not alone in such beliefs. Although much criticised for his violent rhetoric ('bloodshed is a cleansing and a sanctifying thing and a nation which regards it as the final horror has lost its manhood'), Pearse's sacrificial rhetoric was echoed by young men, intellectuals and politicians intoxicated by the militarism of wartime Europe. The final key figure was James Connolly, a leading figure in Ireland's trade union and socialist movements, whose participation in the rising was an attempt to reconcile his Marxism with nationalism, an ideology which he had previously criticised. Like many revolutionary socialists, he had believed that international working-class solidarity would prevent such a war, and it may have been his disillusionment which led him to join forces with 'bourgeois' Irish separatism in the hope of sparking a wider revolution throughout war-weary Europe. 
By the time he was co-opted to the military council in early 1916, his rhetoric had begun to resemble that of Pearse. An insurrection with any real prospect of challenging British military control of Ireland required two elements to fall into place. First, the rebels needed a large supply of arms and ammunition. Although they had successfully made contact with Germany, the steamer sent to Ireland, the 'Aud', was intercepted by the British navy on Easter Saturday, dooming the rising to failure. The second crucial requirement was a successful mobilisation of the Irish Volunteers. The rebels were again foiled when Eoin MacNeill discovered their plans and issued a countermanding order instructing Irish Volunteers not to turn out for the 'manoeuvres' that had been arranged throughout the country on Easter Sunday. The secrecy with which the rising had been planned ensured that few Volunteers, even those who would willingly have taken part in an insurrection, knew what was really planned, and many remained at home. Although it was clear by Easter Sunday, both to the rebel leaders and to the British authorities who had finally uncovered the conspiracy, that a rising no longer had any chance of success, the military council decided to strike the following day. 
By the early 1800s, a variety of ball-and-stick games had also become popular in North America. Many people in northeastern cities such as Boston, New York, and Philadelphia played cricket, but rounders also began to take hold. Of these games, rounders most closely resembled modern baseball. This early version of the game required a batter to strike a ball and run around bases without being put out. Balls that were caught on the fly, or in some cases after one bounce, counted as outs. Varieties of rounders also involved the practice of "plugging," "soaking," or "stinging," whereby fielders could put runners out by throwing the ball at them as they ran between the bases. The game went by various names depending on what part of the country you were in: it was also known as town ball, one o' cat, and base ball (hence the shortened name we now know as baseball). Americans began playing baseball in informal competitions in the early 1800s. By the 1860s, the sport was being described as America's "national pastime." In 1845 Alexander Cartwright and the members of the New York Knickerbocker Base Ball Club devised the first rules and regulations for the modern game of baseball. The first game under these rules was held at the Elysian Fields in Hoboken, New Jersey. In 1858 the National Association of Base Ball Players, the first organized baseball league with tournaments and competitions between clubs, was formed, and in 1876 the first major league, the National League, followed. This allowed teams from different states to play one another. State teams were fed players from local leagues, where the cream of the crop was selected to play for the state's league team. Baseball is now one of the most popular sports in the country.
Imagine the United States building a statue of Ho Chi Minh in the middle of New York City. Or one of Nikita Khrushchev in Washington, D.C. As unlikely as it sounds for a mighty empire to build such a monument to a once-great, vanquished foe, that's how Ancient Rome used to roll. No matter what your high school history teacher told you, the Romans were not always the preeminent force of indefatigable soldiers history gives them credit for. Mighty Carthage would field its greatest commander, Hannibal Barca, against Rome, and he would turn out to be a leader so great that even the Romans would build statues in his honor. Don't get it twisted, Rome in its heyday did conquer plenty of foreign tribes from Londinium to Mesopotamia and is worthy of its reputation. But before any of that, the young Roman Republic wasn't even as big as modern-day Italy. In the Punic Wars, it chose the wrong empire to square off against. Carthage was much more powerful than tiny Rome, and its leadership was much better at fielding armies. One of its leaders was the commander known to history simply as Hannibal. Carthage had fought Rome since the very first Punic War, but it was in the Second Punic War that Hannibal's strategic ability was really unleashed. After crushing Roman allies in modern-day Spain, he set out on his now-famous crossing of the Alps to hit Rome from behind, a move no one expected, least of all Rome. It shocked the ancient world and allowed Hannibal to plunder parts of northern Italy for almost a year. The following spring, he crushed a Roman army at Cannae, killing or capturing some 70,000 men. For almost a decade, Hannibal and his army slogged around the Italian Peninsula, defeating the Romans and killing thousands in battles at Tarentum, Capua, Silarus, Herdonia, and Petelia. Tens of thousands of Romans died at the hands of Hannibal and his army, but time was not on his side. 
The Romans would not give in, and Carthage was losing ground elsewhere. Rome gained new allies and fresh troops, while Hannibal could never take a Roman harbor. It ultimately doomed him. He was recalled to Africa, where he was defeated by the Romans at the Battle of Zama, his invincibility finally shattered. Rome would never get its hands on its greatest enemy. Hannibal died after escaping from Roman soldiers, the circumstances unknown. To this day, no one is sure where he escaped to or where his final resting place is. What we do know is that for decades, Romans lived in fear that he might mount an army and return to exact revenge. When Rome was in its full glory days, and the threat of Hannibal's return had been diminished by time, the Romans built statues of the man in their streets, an advertisement that they had beaten their most worthy adversary.
"F IRE!" shouted the soldiers. Fire, indeed, not of a burning house, but in the form of huge flames that shot up from a hole in the ground. The whole army halted to watch the strange sight, and the general, whose name was Sulla, called up the soothsayers to explain the meaning of the fire. They whispered among each other for a while, and then one of them spoke: "General, just as this flame has shot up suddenly from the earth, so there will arise in Italy a noble man, brave and handsome, who will put an end to the disorders that trouble the Roman Republic." "That man is myself. As to beauty, my golden locks of hair are proof of that. As to courage, I have been through battles enough to show my mettle." Perhaps the flames were a kind of volcanic fire. Other strange omens (or signs) took place, and were supposed to foretell the terrible events that were to happen in Italy. One day, the sky being bright and clear, there came from the heavens the sound of a trumpet, loud and shrill; and yet no trumpet was seen! And on another day, while the Roman senate were sitting, a sparrow flew into the hall where they were assembled, with a grasshopper in its mouth. It bit the grasshopper in two. The diviners (or soothsayers) then declared this to be a sign that the people of Italy would be divided into two parties. The people were, alas! divided into two parties in war; but you need not believe in the tale of the trumpet. As to the other story, it was not a very wonderful thing that a sparrow should bite an insect into two parts! The name "Sulla," or Sylla, means "red," and this Roman general was so named because his skin was of a strong red color. His eyes were blue and fierce. His temper was wilful and cruel. And yet he sometimes seemed to care only for mirth and jollity, and he would spend hours and days in the company of clowns and dancers. He lived from about 138 B.C. to 78 B.C. 
The King of Pontus (in Asia Minor) was Mithridates (Mith-ri-da-teez), and he had sent his armies into Greece. The Romans sent Sulla to turn them out. The Red General halted before a Greek city—it opened its gates; before another—it opened its gates; before another—it opened its gates. Everywhere the citizens had the sense to yield to Rome, for they knew Rome would be sure to master the King of Pontus. But the city of Athens would not yield. Sulla laid siege to the city. So resolved was he to take it that he brought up against its walls an immense number of siege-engines; so many that ten thousand mules were employed to draw them. Being very eager to obtain money to carry on the war, he sent a messenger to the famous temple of Apollo the Sun-god at Delphi (Del fi), bidding the priests give up their treasures. "Hark!" said the priests to the messenger, "do you not hear the sound of a lyre? It is the Sun-god himself who strikes the strings and makes music in the inner chamber of the temple." The messenger wrote a letter relating this story to Sulla. The Red General laughed, and replied that the Sun-god was playing a melody to show how pleased he would be to oblige Sulla with his gold! So the poor priests had to surrender their precious store, and even had to hand over a huge silver urn which they prized very much. Meanwhile the people of Athens were starving. They had to eat roots, and even gnawed leather. The commander of the garrison at last sent out some men to beg for peace. But they stupidly talked in a boastful manner about the great heroes who fought for Athens in the olden days. "Go, my noble souls," said Sulla to them, in a sneering tone, "and take back your fine speeches with you. I was not sent to Athens to learn its ancient history, but to chastise its rebellious people." Soon afterward the city was taken, and many were the slain in its streets. An army of the King of Pontus held a strong position on a rocky hill. 
Two Greeks came to Sulla, and offered to lead a band of men to the top, so as to surprise the foe from the rear. Sulla gave them a small troop of Romans. They climbed a narrow path, unobserved by the Asiatics. Sulla attacked in front. The Romans at the summit of the mountain raised a loud yell, and began to descend. The enemy hurried down, springing from rock to rock, only to be met by the spears of Sulla's legions. Fifteen thousand men in the Asiatic army were slaves. They had been promised their freedom if they beat the Romans; but only a few of them escaped with their lives. Not long afterward a second battle was fought. The foe were posted near a marsh. Sulla ordered his men to cut trenches, so that these ditches should keep the Asiatics from escaping one way, while his horsemen drove them toward the muddy marshes in another direction. But the enemy set furiously upon the diggers, who fled in confusion. Then the Red General seized a wooden eagle from a standard-bearer, and pushed his way through the runners, crying: "Yonder, Romans, is the bed of honor I am to die in! When you are asked where you deserted your general, mind you say it was here!" These words roused a sense of shame in his men. They rallied to his support, and the struggle ended in another victory for the soldiers of the republic. Soon Greece was free from the power of Mithridates, and he was fain to make peace. Sulla suffered from the gout, and he betook himself to a hot spring, the waters of which were said to have a healing effect; and there he bathed his swollen feet, and lived lazily for a while, and sported with his dancers and buffoons. When on his march to the shores of the Adriatic Sea, on his return to Italy, he passed a place where the grass and trees were of a most beautiful green. And here was brought to the Red General a most peculiar-looking person—a Wild Man of the Woods—who had been found asleep on the ground. 
"This is a satyr," said the people, who led the strange creature to Sulla. A satyr (sat-ir) was often carved by the old Greek sculptors. They made him appear as a mischievous-looking man, with a pug nose, curly hair, ears with pointed tips like goats' ears, and short tail. The satyrs used to play travellers in the woods many tricks, and then laugh at the vexation they caused. According to the story, the satyr who was shown to Sulla could not talk any language. He was asked questions in Latin, in Greek, in Persian, but all to no purpose; he replied in a noise that sounded like the neigh of a horse or the bleat of a goat. Sulla was shocked at the sight, and ordered the so-called satyr to be taken away. Well, it was indeed sad to see this deformed creature, and hear his harsh voice. But what shall we say to Sulla himself? He had the form of a man; his limbs were well-shapen; his mind was clever; yet his deeds were brutal. When he arrived in Italy he made his way toward Rome. It was his intention to crush down the people's party—the plebeians. He belonged to the upper class, or patricians. All over Italy there were brave and honest men who worked hard in field or trade, or served in the Roman armies, and yet were not allowed to rank as freemen, and had no vote in public affairs. Many of these men had raised a rebellion, and some had received the title of freemen; but there was still sore discontent over the land, and great was the hatred between the mass of the common folk and the rich patrician class to which Sulla belonged. A battle took place close to the walls of Rome. Sulla won, and entered the city. There is a dreadful tale that he had six thousand prisoners crowded into a yard and all put to death, and that he made a speech to the Roman senate while the cries of the unhappy prisoners were plainly heard. He had lists of citizens written up in a public place, the lists being the names of "proscribed," or condemned, citizens. 
All must die, and their property was given to strangers. One day eighty were proscribed; the next day, two hundred and twenty; the third day, two hundred and twenty more. He declared himself dictator, having all power of life and death. The people's party were in deep distress; the patricians were glad. When he thought he had quite cowed the people's party he gave up his high office, and lived as a common citizen, and walked about the streets without a guard. Then he retired to a villa at the seaside, and died in the year 78 B.C. At his funeral a vast amount of cinnamon and other sweet spices was burned. But his memory was not sweet. Who could love the memory of a man who had caused so much pain and grief? Rather would we honor the memory of a Roman in a certain city which was doomed by Sulla. An enormous number of captives, whom Sulla called rebels, were ordered to be slain—all except one, at whose house the Red General had once passed some agreeable hours. "No," said this noble Roman, "I will not live while so many of my fellow-citizens die unjustly." And he mixed with the people, and his dead body lay with theirs. His name is unknown, but we will salute the nameless hero.
All must die, and their property was given to strangers. One day eighty were proscribed; the next day, two hundred and twenty; the third day, two hundred and twenty more. He declared himself dictator, having all power of life and death. The people's party were in deep distress; the patricians were glad. When he thought he had quite cowed the people's party he gave up his high office, and lived as a common citizen, and walked about the streets without a guard. Then he retired to a villa at the seaside, and died in the year 78 B.C. At his funeral a vast amount of cinnamon and other sweet spices was burned. But his memory was not sweet. Who could love the memory of a man who had caused so much pain and grief? Rather would we honor the memory of a Roman in a certain city which was doomed by Sulla. An enormous number of captives, whom Sulla called rebels, were ordered to be slain—all except one, at whose house the Red General had once passed some agreeable hours. "No," said this noble Roman, "I will not live while so many of my fellow-citizens die unjustly." And he mixed with the people, and his dead body lay with theirs. His name is unknown, but we will salute the nameless hero.
The successes of the League of Nations are frequently obscured by its failures – especially in the 1930s, when Europe and eventually the world moved towards war – the one thing the League of Nations was set up to avoid. However, in the honeymoon period of its first few years, when there appeared to be a genuine desire for peace after the horrors of World War One, the League did have successes, though these tended to be in areas that had little strategic or economic importance. In view of the League's desire to end war, the only criterion that can be used to classify a success is whether war was avoided and a peaceful settlement formulated after a crisis between two nations. The League was successful in the Aaland Islands in 1921. These islands lie roughly midway between Finland and Sweden. They had traditionally belonged to Finland, but most of the islanders wanted to be governed by Sweden. Neither Sweden nor Finland could come to a decision as to who owned the islands, and in 1921 they asked the League to adjudicate. The League's decision was that they should remain with Finland but that no weapons should ever be kept there. Both countries accepted the decision, and it remains in force to this day. In the same year, 1921, the League was equally successful in Upper Silesia. The Treaty of Versailles had given the people of Upper Silesia the right to a referendum on whether they wanted to be part of Weimar Germany or part of Poland. In this referendum, 700,000 voted for Germany and 500,000 for Poland. The close result led to rioting between those who expected Silesia to be made part of Weimar Germany and those who wanted to be part of Poland. The League was asked to settle the dispute. After a six-week inquiry, the League decided to split Upper Silesia between Germany and Poland. The League's decision was accepted by both countries and by the people in Upper Silesia. In 1923, the League was successful in resolving a problem in Memel.
Memel was, and remains, a port in Lithuania. Most people who lived in Memel were Lithuanians, and the government of Lithuania therefore believed that the port should be governed by it. However, the Treaty of Versailles had put Memel and the land surrounding the port under the control of the League. For three years a French general acted as governor of the port, but in 1923 the Lithuanians invaded it. The League intervened and gave the area surrounding Memel to Lithuania, but made the port an "international zone". Lithuania agreed to this decision. Though this can be seen as a League success – as the issue was settled – a counter-argument is that what happened was the result of the use of force, and that the League responded in a positive manner to those (the Lithuanians) who had used force. In the same year, 1923, the League faced further problems in Turkey. The League failed to stop a bloody war in Turkey (see League failures), but it did respond to the humanitarian crisis caused by this war. 1,400,000 refugees had been created by the war, with 80% of them being women and children. Typhoid and cholera were rampant. The League sent doctors from the Health Organisation to check the spread of disease, and it spent £10 million on building farms, homes, etc. for the refugees. Money was also invested in seeds, wells and digging tools, and by 1926 work had been found for 600,000 people. A member of the League called this work "the greatest work of mercy which mankind has undertaken." In 1925, the League helped to resolve a dispute between Greece and Bulgaria. These two nations share a common border. In 1925, sentries patrolling this border fired on one another and a Greek soldier was killed. The Greek army invaded Bulgaria as a result. The Bulgarians asked the League for help; the League ordered both armies to stop fighting and the Greeks to pull out of Bulgaria. The League then sent experts to the area, decided that Greece was to blame, and fined her £45,000.
Both nations accepted the decision.
Twelve Years a Slave (originally published in 1853 with the sub-title "Narrative of Solomon Northup, a citizen of New-York, kidnapped in Washington city in 1841, and rescued in 1853, from a cotton plantation near the Red River in Louisiana") is the written work of Solomon Northup, a man who was born free but was bound into slavery later in life. Northup's account describes the daily life of slaves in Bayou Boeuf, their diet, the relationship between master and slave, the means that slave catchers used to recapture runaways, and the ugly realities that slaves suffered. Northup's slave narrative is comparable to those of Frederick Douglass, Harriet Ann Jacobs and William Wells Brown, and there are many similarities. Scholars reference this work today; one example is Jesse Holland, who referred to Northup in an interview given on January 20, 2009 on Democracy Now!. He did so because Northup's extremely detailed description of Washington in 1841 helps researchers locate some of the city's slave markets, and it is an important part of understanding that African slaves built many of the monuments in Washington, including the Capitol and part of the original Executive Mansion. The book tells the story of how two men approached Northup under the guise of circus promoters who were interested in his violin skills. They offered him a generous amount of money to work for their circus, and then offered to put him up in a hotel in Washington, D.C. Upon arriving there he was drugged, bound, and moved to a slave pen in the city owned by a man named James Burch. The pen was located in the Yellow House, one of several sites where African Americans were sold on the National Mall in DC; another was Robey's Tavern. These slave markets stood between what are now the Department of Education and the Smithsonian Air and Space Museum, within view of the Capitol, according to researcher Jesse Holland and Northup's own account.
Burch coerced Northup into making up a new past for himself, one in which he had been born a slave in Georgia. Burch told Northup that if he ever revealed his true past to another person he would be killed. When Northup continually asserted that he was a freeman of New York, Burch violently whipped him until the paddle broke and Rathburn insisted that Burch stop. Northup describes the different kinds of owners he had throughout his twelve years as a slave in Louisiana, and how he suffered severely under them: forced to eat the meager slave diet, to live on the dirt floor of a slave cabin, and to endure numerous beatings and whippings, an attack with an axe, and unimaginable emotional pain from being in such a state. One temporary master he was leased to was named Tibbeats; the man tried to kill him with an axe, but Northup ended up whipping him instead. Finally, the book discusses how Northup eventually won back his freedom. A white carpenter from Canada named Samuel Bass arrived to do some work for Northup's current owner, and after conversing with him, Northup realized that Bass was quite different from the other white men he had met in the South; he stood out because he was openly laughed at for opposing the sub-human arguments slavery was based on. It was to Bass that Northup finally confided his story, and ultimately Bass delivered the letters to Northup's wife that would start the legal process of restoring his freedom. This was no small matter, for if they had been caught, it could easily have resulted in their deaths, as Northup says.
Human herpesvirus type 5 is also known as cytomegalovirus (CMV). It is a common cause of a mononucleosis-like illness, with symptoms similar to infectious mononucleosis. It is spread via blood transfusion, breast-feeding, organ transplants, and sexual contact. In people with weakened immune systems, such as those with AIDS, the virus can cause diarrhea or severe vision problems; such people are far more susceptible to serious CMV disease. Oral herpes is also known commonly as cold sores and fever blisters, but it is a different entity from oral canker sores, although canker sores may sometimes be associated with HSV infection. Canker sores occur solely inside the mouth. Oral herpes occurs inside and around the mouth. Most of the time HSV-1 causes mouth symptoms, and in a minority of cases it may also be responsible for genital symptoms. The opposite is true for HSV-2 – it causes genital symptoms in the majority of cases, while only a few cases of HSV-2 infection will result in mouth symptoms. HSV-1 infection may be seen in all ages, including children, but when genital herpes is seen in children, sexual abuse needs to be a consideration. Getting tested for STDs is a basic part of staying healthy and taking care of your body – like brushing your teeth and exercising regularly. Getting tested and knowing your status shows you care about yourself and your partner. It's important to know your risk and protect your health. Herpes sores usually appear as one or more blisters on or around the genitals, rectum or mouth. The blisters break and leave painful sores that may take a week or more to heal. These symptoms are sometimes called "having an outbreak." The first time someone has an outbreak, they may also have flu-like symptoms such as fever, body aches, or swollen glands. Herpes infection can be passed from you to your unborn child before birth, but is more commonly passed to your infant during delivery.
This can lead to a potentially deadly infection in your baby (called neonatal herpes). It is important that you avoid getting herpes during pregnancy. If you are pregnant and have genital herpes, you may be offered anti-herpes medicine towards the end of your pregnancy. This medicine may reduce your risk of having signs or symptoms of genital herpes at the time of delivery. At the time of delivery, your doctor should carefully examine you for herpes sores. If you have herpes symptoms at delivery, a C-section is usually performed. I am so scared. My boyfriend is the only person I have ever had unprotected sex with, four times. We had a herpes scare. He got tested. They swabbed him and gave him a blood test, and his results for herpes 1 and 2 came back negative. I went to the doctor, but the lumps on my vagina had healed and they said to come back when I have a lesion. I told my boyfriend, but he still wanted to have sex. I told him what the doctor said, and that we should either not have sex or use a condom. He said it does not matter, because if he did not have herpes then I did not have herpes. He said OK and put the condom on, but when we were done he started to laugh and said he took the condom off. Since then we have had sex twice. I went to the doctor and they gave me a blood test. They said if something was wrong they would send a letter to the house. Since they never sent the letter, I thought I was fine; I never had any other lumps since then, and my boyfriend never had any symptoms, so I thought I was fine. Today something told me to go to the doctor. I went and they said they never ordered the test. I am so angry. What should I do? If I do have it, shouldn't it have been in his blood from me? I am so scared that I may have it. I am also worried that one day he may get symptoms because his test was wrong, and think I gave it to him, when he was the one who may have given it to me if my blood test comes back positive. I have only had sex once, with a condom, before him.
What should I do? He has had a few other partners. What is the likelihood that I may have given him herpes? Genital herpes is an incurable disease, but there are medications to relieve symptoms and prevent recurrent outbreaks. Prosurx is one topical product marketed as a treatment for genital herpes. It is claimed to give relief and stop an outbreak before it starts, and to reduce the risk of spreading the virus to a partner; the makers suggest applying it 2-3 times a day. While some people realize that they have genital herpes, many do not. It is estimated that one in five persons in the United States has genital herpes; however, as many as 90 percent are unaware that they have the virus. This is because many people have very mild symptoms that go unrecognized or are mistaken for another condition, or no symptoms at all.
Sociologists have put forward a variety of explanations as to why there is differential access and attainment between males and females within the educational system. I will give an account of these, referencing various studies and reports. I will use evidence to support both boys’ underachievement at GCSE level and young women’s underachievement at further and higher education level, and further evidence to explain why this happens. Since the early 1970s, two of the central concerns of feminists and sociologists of education have been the underachievement of girls and the role played by an education system shaped by gender inequalities in society. Research has revealed how girls in the past were often disadvantaged in schools and education by the official and hidden curricula and by the attitudes of teachers and pupils. It has been found that boys in the past received more of the teachers’ time, interest and attention, as it was commonly thought that the males of society would be the breadwinners and therefore girls would not need or use education at the same level as boys. It was widely assumed in the 1970s that a man would look after the whole family, wife and children included, and that women would therefore not be required to work. Since that time, however, evidence shows that girls are outperforming boys at GCSE level, which suggests that major changes in attitudes and teaching methods have occurred since the 1970s. Results now show that girls statistically perform better in the later stages of secondary school, as girls’ GCSE achievements are well above those of boys. A statistical bulletin published by the DFE in May 1995 shows that the percentage of girls with five or more GCSEs at grades A–C increased from 45.8% in 1992/93 to 47.8% in 1993/94, whilst the percentage for boys rose from 36.8% in 1992/93 to 39.1% in 1993/94. 
In my opinion this major change in academic results has occurred due to changing attitudes in society. This change in attitude, in my opinion, resulted from women’s protests for equal rights, which influenced the government and local authorities into creating laws and acts stating that females should be treated exactly the same as males in society. These laws and acts, stating that women should have the same opportunities as men, relate to education and also to workplaces that were predominantly male. This change in attitude has played a key role in enabling girls to fulfil their potential with greater ease than before. National projects such as Girls into Science and Technology (GIST) and Girls and Technology Education (GATE) have also encouraged girls to enter areas of education which have traditionally been perceived as ‘male territory’. These initiatives have drawn teachers’ attention to the way science is taught in school and have focused on the importance of making this traditionally male-dominated subject more ‘girl-friendly’. These initiatives, changes of attitude, and teaching methods improved to suit the needs of girls have resulted in females excelling in all subjects and producing better exam results than boys. Figures now show that this success at GCSE level has contributed to two out of three women being in the labour force, 60% of them full-time, and forecasters predict that by the year 2000 more women will be working than men. Figures prepared for the Equal Opportunities Commission show that 300,000 traditionally ‘male’ jobs in engineering, building and manufacturing will be lost, while 500,000 new ‘female’ jobs in service industries and information technology will be created. 
This growth in the female workforce has limited the jobs available to males and created a negative attitude, as males come to believe there will be less chance of employment after education. Research by Harris in 1993 into the attitudes of 16-year-olds from predominantly working-class backgrounds towards schoolwork, homework and careers confirms that many boys are achieving below their potential. In my opinion this has resulted from the initiatives meant to encourage girls in certain subjects. I believe this because, in my opinion, boys have become less interested in lessons as many subjects, such as English, have become more girl-orientated, which has led to girls becoming more interested and boys less so. This problem could be rectified by creating single-sex schools where teachers would use work, in all subjects, that would interest and involve boys. Another factor that I believe has demoralised boys is the vast decline in traditional male jobs. This may explain why many boys are not interested in education and are under-performing. In my opinion, addressing this would result in males and females becoming more equal academically. However, this vast gap between girls and boys at GCSE level is not reflected in results at A-level and in higher education. Research shows that while girls perform better at GCSE level, they tend to fall behind later, being less likely than boys to get the three A-levels required for university entry and less likely to get into higher education. Despite the general pattern of girls outperforming boys, many problems remain for girls. Girls tend to take different subjects from boys, which in turn influences future career choices. At GCSE level it is evident that girls are more likely to take arts subjects, whereas boys are more likely to take science and technology. This trend is even more pronounced at A-level and above. 
I believe this occurs due to old attitudes: it was thought that males would benefit more from these types of subjects, as in earlier times the males of society were assumed to be the ‘breadwinners’. In my opinion one of the greatest factors that has created this academic gap between boys and girls is the attitude of teachers towards each sex. Sociological research has stated that the way teachers treat and respond to different groups of pupils is a major cause of boys’ under-achievement. Evidence shows that teachers and other members of staff are not as strict with boys as they are with girls. Teachers are more likely to extend deadlines for work, to have lower expectations of boys, to be more tolerant of disruptive, unruly behaviour from boys in the classroom and to accept poorly presented work. However, I believe this is not the case in further education, where the expectations placed on boys are the same as those placed on girls. I believe this because, in my opinion, boys’ attitudes become more adult in further education. I also believe that the strict deadlines placed on boys in further education push them to their full potential, as there is a greater urgency to produce work of the highest standard. From the above evidence I can conclude that there are several explanations of male underachievement. Some of the evidence has focused on family influences: for example, in earlier times girls developed organisational skills faster and better than males, as they were needed and expected to help in the everyday running of the house, whether living with their parents or with their husbands. Multiple tasks were required of them, such as cooking, cleaning and the upbringing of children. Nowadays this family influence has helped the women of society to organise their schoolwork better than the males, who were never required to develop such organisational skills outside school. 
The role of teachers and the school is another explanation of male underachievement, as the evidence above states that girls and boys are treated differently in school by teachers. Teachers are not as strict with boys as they are with girls: they are more likely to extend deadlines for work, to have lower expectations of boys, to be more tolerant of disruptive, unruly behaviour from boys in the classroom and to accept poorly presented work. This means boys are not pushed to fulfil their academic potential. Other explanations of male underachievement focus on the impact of wider societal changes and on the peer-group pressure placed on males by other males not to work; in extreme cases some develop almost an ‘anti-education’ culture. Another factor behind this academic gap is that girls develop faster than boys, so at GCSE level girls possess a more mature attitude towards school and exams, whereas by the time it comes to A-level or higher education the males catch up with, or overtake, the females in being responsible and focused. From this I can also conclude that unless there is a vast change in the education system at GCSE level, the educational gap between girls and boys will continue to increase to the point that females become the dominant sex.
<urn:uuid:0bf4c00a-9688-4f0d-ba2a-3c5140602702>
CC-MAIN-2020-05
https://westvirginiaangerclass.com/gender-and-educational-attainment/
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250594101.10/warc/CC-MAIN-20200119010920-20200119034920-00151.warc.gz
en
0.981405
1,786
3.53125
4
9
1,790
ENGLISH
1
WHEN George Price died in January 1975, his funeral in London was attended by five homeless men: dishevelled, smelly and cold. Alongside them were Bill Hamilton and John Maynard Smith, both distinguished British evolutionary biologists. All seven men had come to mourn an American scientist who helped to unpick the riddle of why people should ever be kind to one another, who had chosen to give away his clothes, his possessions and his home, and who, when his generosity was exhausted, slashed his own throat with a pair of scissors, aged 52. Ever since Charles Darwin had published his theory of evolution in 1859, scientists have pondered whether it can explain the existence of altruism: behaviour that decreases an individual's fitness but which increases the average fitness of the group to which he belongs. Such benevolence is not unique to humans but exists also in complex insect societies. Bees, for example, live in colonies headed by a queen and populated by sterile workers. One reading of Darwin's theory says that, because the workers do not breed, evolution should result in their elimination. Yet this is not what happens in nature. In the 1960s, Hamilton proposed that evolution acts on characteristics that favour the survival of close relatives of a certain individual. The bee colonies that survive are those in which sterile workers (which are daughters of the queen) provide the “fittest” service to their mother. Each worker thus strives to favour the reproductive success of the queen, even at the price of her own reproductive failure. Price wanted to describe mathematically how a genetic predisposition to altruism could evolve. He devised a formula, now called the Price equation, that describes how characteristics that can, in some cases, prove disadvantageous, nevertheless persist in the population. 
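The review names the Price equation without writing it out. As a sketch only — this is the standard modern statement of the result, not a formula quoted in the article — it reads:

```latex
% Price equation: change in the mean value \bar{z} of a trait
% (e.g. a genetic predisposition to altruism) over one generation.
% w_i = fitness of individual i, z_i = its trait value,
% \bar{w} = mean fitness, \Delta z_i = change in the trait
% between parent i and its offspring.
\bar{w}\,\Delta\bar{z} \;=\; \operatorname{Cov}(w_i, z_i) \;+\; \operatorname{E}\!\bigl(w_i\,\Delta z_i\bigr)
```

The covariance term captures selection — a trait that lowers its bearer's fitness, as altruism does for the altruist, makes this term negative — while the expectation term captures transmission between generations; the trait can persist whenever the second term offsets the first. This is the sense in which adjusting the variables yields worlds where kindness spreads or dies out.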
By tinkering with the variables, he was able to describe populations in which kindness was widespread, everyone benefited and altruism was passed down the generations, and other, more brutal worlds, where charity was abused and kindness died out. Ultimately, Price ended up in such a place. Oren Harman's account of his life traces his early years, including a stint at the University of Chicago, where he worked on detecting radiation as his colleagues toiled to produce the first atomic pile. It bounces between his many interests: Price trained as a chemist but worked on electronic transistors at Bell Labs before going into computer-aided design. Then a generous payment from his health insurance for a thyroid tumour enabled him to abandon his wife and two young daughters and move to London in 1967. There he hooked up with Hamilton and derived the equation for which he is famed. At the same time, his interest in altruism blossomed into something less kin-based and more practical: he began to seek out needy strangers. At one stage, he had four homeless men staying in his flat, while he slept in his office. As he became increasingly unwell, both physically and mentally, he redoubled his efforts to help the poor, moving into a dirty squat where, one freezing night, he committed suicide. As Mr Harman so vividly describes, Price ultimately became one of the vagabonds he had set out to save. This article appeared in the Books and arts section of the print edition under the headline "Selflessness of strangers"
<urn:uuid:296e517b-b1f5-4811-9d22-38f55d2e9555>
CC-MAIN-2020-05
https://www.economist.com/books-and-arts/2010/05/20/selflessness-of-strangers
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250594391.21/warc/CC-MAIN-20200119093733-20200119121733-00545.warc.gz
en
0.984471
673
3.34375
3
1
687
ENGLISH
1
How appropriate is the term ‘cultural revolution’ to describe the events of ‘the long sixties’ (c. 1958 – c. 1974)? This discussion draws on three disciplines represented in Block 6 — History, History of Science and Religious Studies — and concerns changes in ideas and values: people’s attitudes and behaviour, and views of authority, race, family and personal relationships. Arthur Marwick [i] discusses the definition of the ‘cultural revolution’ that took place in ‘the Sixties’ as one that did not take the form of a political or economic revolution. In his book Age of Extremes, Eric Hobsbawm [ii] structured the twentieth century into three periods, with ‘the Sixties’ incorporated in ‘the Golden Age’ (1945–1973). Arthur Marwick [iii], a historian, further periodised ‘the Sixties’ as 1958–1973. However, ‘the Sixties’ was not a worldwide phenomenon, because it mainly happened in the United States, the United Kingdom and areas of Europe; Eastern Europe, Africa and much of Asia were more than likely not affected. To understand whether a cultural revolution took place or not, we need to understand what caused ‘the Sixties’. It was, to name but one area, a period of extensive change in people’s values and ideas. Extracts from ‘Mini-Renaissance’ [iv] reveal that ‘Young people suddenly had an important voice; they were being listened to, followed even… ‘ Jim Haynes [v], a leading figure in ‘counter-cultural’ activities, explained that ‘What we were doing in the colourful clothes and long hair in the sixties was telling everybody that we were tolerant, we were all having fun… ‘ After the Second World War everyone had high hopes of social change, in which issues like civil rights for black Americans would improve. 
However, these hopes were thwarted, as was pointed out in Martin Luther King’s letter [vi]: ‘… but it is even more unfortunate that the city’s white power structure left the Negro community with no alternative… ‘ Very many movements we associate with ‘the Sixties’ were born out of this dammed-up frustration from the nineteen-fifties. And who were the protestors? There was a ‘baby boom’ after the Second World War, so by the nineteen-sixties there was a large presence of affluent teenagers in America, and it would appear that the majority of these young people became the protestors. Daring films in the cinemas sanctioned their daring behaviour. ‘Liberated behaviour’ was further increased by the taking of the Pill, where formerly there had been constraint. Many questions were raised in ‘the Sixties’, and one important one was the role of women. Betty Friedan [vii] explained that ‘When women do not need to live through their husbands and children, men will not fear the love and strength of women… ‘ The discontent which women faced during the fifties was to undergo some serious changes during ‘the Sixties’. Why were there so few women in science? This was yet another question women were asking in ‘the Sixties’. A survey published in 1965 [viii] gave figures for the percentage of women employed in various fields of science and engineering. The startling finding was that only about 10% of people working in science were women. Many women asked why they should not be able to participate as actively as men did. Their frustrations were heightened by the knowledge that even if they were highly skilled, it would be extremely difficult for them to remain active members of the scientific workforce. This was because a) they would probably leave because of pregnancy and b) after the Second World War the United States government laid great stress on women’s domestic role in order to encourage them to stay at home (so that men could take up their place again in the workforce). 
This stigma was carried over into the workplace where, as was discussed in an article for the Women’s Group [ix] in Science for the People magazine, ‘they were limited by being placed in subordinate positions, rarely being given their own labs or first authorship [x] on papers, and, the most glaring inequity, being paid less than their male counterparts for equal work. ‘ It was also argued that women see the world differently from men. In the nineteen-sixties no woman graduated from university with a doctorate in primatology at all. Through the influence of the feminist movements of this time, however, by the nineteen-nineties the tables had turned. Many women entering the field have insisted that the earlier analysis of primates was male-biased. It was thought that male monkeys were the dominant ones, and man therefore concluded that he was rightfully the superior person and his counterpart subservient. Careful research has shown that anything a male can do a female can do too. Jeanne Altmann in her studies was struck by the ability of the female primate to do several things at once.
<urn:uuid:ea0cbef6-fb09-40b4-a18b-f9119d8e51f2>
CC-MAIN-2020-05
https://graceplaceofwillmar.org/mini-renaissance/
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250591431.4/warc/CC-MAIN-20200117234621-20200118022621-00288.warc.gz
en
0.985295
1,053
3.390625
3
3
How appropriate is the term ‘cultural revolution’ to describe the events of ‘the long sixties’ (c. 1958 – c. 1974)? This discussion draws on three disciplines represented in Block 6 – History, History of Science and Religious Studies – and considers changes in ideas and values: people’s attitudes and behaviour, and views of authority, race, family and personal relationships. Arthur Marwick [i] discusses the definition of the ‘cultural revolution’ that took place in ‘the Sixties’ as one that did not take the form of a political or economic revolution. In his book Age of Extremes, Eric Hobsbawm [ii] structured the twentieth century into three periods, incorporating ‘the Sixties’ into ‘the Golden Age’ (1945 – 1973). Marwick, a historian, further periodised ‘the Sixties’ as 1958 – 1973. [iii] However, ‘the Sixties’ was not a worldwide phenomenon: it happened mainly in the United States, the United Kingdom and parts of Europe, while Eastern Europe, Africa and much of Asia were more than likely unaffected. To understand whether a cultural revolution took place, we first need to ask: what caused ‘the Sixties’? It was, among other things, a period of extensive change in people’s values and ideas. Extracts from ‘Mini-Renaissance’ [iv] reveal that ‘Young people suddenly had an important voice; they were being listened to, followed even… ‘ Jim Haynes [v], a leading figure in ‘counter-cultural’ activities, explained that ‘What we were doing in the colourful clothes and long hair in the sixties was telling everybody that we were tolerant, we were all having fun… ‘ After the Second World War everyone had high hopes of social change, including improvements in civil rights for black Americans. 
However, these hopes were thwarted, as Martin Luther King [vi] pointed out in his letter: ‘… but it is even more unfortunate that the city’s white power structure left the Negro community with no alternative… ‘ Many of the movements we associate with ‘the Sixties’ were born out of this dammed-up frustration from the nineteen-fifties. And who were the protestors? There was a ‘baby boom’ after the Second World War, so by the nineteen-sixties there was a large presence of affluent teenagers in America, and it would appear that the majority of the protestors came from among these young people. Daring films in the cinemas sanctioned their daring behaviour, and ‘liberated behaviour’ was further encouraged by the arrival of the Pill, where formerly there had been constraint. Many questions were raised in ‘the Sixties’, and an important one concerned the role of women. Betty Friedan [vii] explained that, ‘When women do not need to live through their husbands and children, men will not fear the love and strength of women… ‘ The discontent which women faced during the fifties was to undergo some serious changes during ‘the Sixties’. Why were there so few women in science? This was yet another question women were asking in ‘the Sixties’. A survey published in 1965 [viii] gave figures for the percentage of women employed in various fields of science and engineering; the startling finding was that only about 10% of people working in science were women. Many women asked why they should not be able to participate as actively as men did. Their frustration was heightened by the knowledge that even highly skilled women would find it extremely difficult to remain active members of the scientific workforce. This was because a) they would probably leave because of pregnancy, and b) after the Second World War the United States government laid great stress on women’s domestic role in order to encourage them to stay at home (so that men could take up their places in the workforce again). 
This stigma was carried over into the workplace where, as was discussed in an article by the Women’s Group in Science for the People magazine [ix], ‘they were limited by being placed in subordinate positions, rarely being given their own labs or first authorship [x] on papers, and, the most glaring inequity, being paid less than their male counterparts for equal work. ‘ It was also argued that women see the world differently from men. In the nineteen-sixties no woman at all graduated from university with a doctorate in primatology; through the influence of the feminist movements, however, by the nineteen-nineties the tables had turned. Many women entering the field have insisted that earlier analyses of the primates were male-biased: it was thought that male monkeys were the dominant ones, and man therefore concluded that he was rightfully the superior partner and his counterpart subservient. Careful research has since shown that anything a male primate can do a female can do too. In her studies, Jeanne Altmann was struck by the ability of the female primate to do several things at once.
A fire in the Iroquois Theater in Chicago, Illinois, kills more than 600 people on December 30, 1903. It was the deadliest theater fire in U.S. history. Blocked fire exits and the lack of a fire-safety plan caused most of the deaths. The Iroquois Theater, designed by Benjamin Marshall in a Renaissance style, was highly luxurious and had been deemed fireproof upon its opening in 1903. In fact, George Williams, Chicago’s building commissioner, and fire inspector Ed Laughlin looked over the theater in November 1903 and declared that it was “fireproof beyond all doubt.” They also noted its 30 exits, 27 of which were double doors. However, at the same time, William Clendenin, the editor of Fireproof magazine, also inspected the Iroquois and wrote a scathing editorial about its fire dangers, pointing out that there was a great deal of wood trim, no fire alarm and no sprinkler system over the stage. During the matinee performance of December 30, while a full house was watching Eddie Foy star in Mr. Bluebeard, 27 of the theater’s 30 exits were locked. In addition, stage manager Bill Carlton went out front to watch the show with the 2,000 patrons while the other stage hands left the theater and went out for a drink. It was a spotlight operator who first noticed that one of the calcium lights seemed to have sparked a fire backstage. The cluttered area was full of fire fuel: wooden stage props and oily rags. When the actors became aware of the fire, they scattered backstage; Foy later returned and tried to calm the audience, telling them to stay seated. An asbestos curtain, meant to confine the fire, was to be lowered, but when it wouldn’t come fully down a panic began; the curtain later turned out to be made of paper, so it wouldn’t have helped in any case. Soon, all the lights inside the theater went out and there were stampedes near the open exits. When the back door was opened, the shift of air caused a fireball to roar through the backstage area. 
The teenage ushers working the theater fled immediately, forgetting to open the locked emergency exit doors. The few doors that could be forced open were four feet above the sidewalk, which slowed down the exiting process. Most of the 591 people who died were seated in the balconies. There were no fire escapes or ladders to assist them and some took their chances and jumped. The bodies were piled six deep near the narrow balcony exits. In fact, some people were knocked down by the falling bodies and were eventually pulled out alive from under burned victims. In the aftermath of the disaster, Williams was later charged and convicted of misfeasance. Chicago’s mayor was also indicted, though the charges didn’t stick. The theater owner was convicted of manslaughter due to the poor safety provisions; the conviction was later appealed and reversed. In fact, the only person to serve any jail time in relation to this disaster was a nearby saloon owner who had robbed the dead bodies while his establishment served as a makeshift morgue following the fire.
Source: http://www.history.com/this-day-in-history/fire-breaks-out-in-chicago-theater
Florence Nightingale is famous for her work as a nurse during the Crimean War, as a hospital reformer and as a humanitarian. What is less well known about this British woman, however, is her love of mathematics, especially statistics. Named after her birthplace, Nightingale was born in the Villa Colombaia in Florence, Italy, on 12th May 1820 and was raised by her parents, William Edward Nightingale and his wife Frances; she had an older sister named Parthenope. Florence was brought up mostly in Derbyshire, England, and received a thorough classical education from her father; she loved her lessons and loved to study. Thanks to her father, Florence read the classics, Descartes, Aristotle, the Bible and works on political matters, and he also taught her Greek, Latin, French, German and Italian. In 1840 she begged her parents to let her study mathematics instead of "worsted work and practising quadrilles". Her mother Frances did not approve, and her father thought it would be better if she studied subjects more appropriate for a woman, but after many arguments her parents gave her permission to be tutored in mathematics. She studied with Sylvester, who developed the theory of invariants, and was said to be his best pupil; together they studied arithmetic, geometry and algebra. Florence was interested in social issues and felt a need to help. She had the idea of gaining some medical experience, but her family was completely against it: in those times nursing was not considered a suitable profession for a well-educated woman, because of the lack of training and nurses' reputation for being ignorant and promiscuous. In 1849 she went abroad to study the European hospital system, and in 1850 she began training in nursing at the Institute of Saint Vincent de Paul in Alexandria, Egypt, a hospital run by the Roman Catholic Church. In March 1854 the Crimean War began, and reports soon began appearing in newspapers about the disgraceful conditions being endured by sick and wounded British soldiers. 
Florence volunteered at once and was eventually given permission to take a group of thirty-eight nurses to Turkey. She found the conditions in the army hospital in Scutari appalling: the men were kept in rooms without blankets or decent food, and, unwashed, they were still wearing army uniforms that were "stiff with dirt and gore". In these conditions it was not surprising that in army hospitals war wounds accounted for only one death in six. Military officers and doctors objected to Florence's views on reforming military hospitals; they interpreted her comments as an attack on their professionalism, and she was made to feel unwelcome, receiving very little help from the military. In 1856 Florence Nightingale returned to England as a national heroine. She had been deeply shocked by the lack of hygiene and elementary care the men received in the British Army, and therefore decided to begin a campaign to improve the quality of nursing in military hospitals. Using her education in statistics and mathematics, she illustrated the need for sanitary reform in all military hospitals. While pressing her case, Florence gained the attention of Queen Victoria and Prince Albert, as well as that of the Prime Minister, Lord Palmerston. Florence founded the Nightingale School and Home for Nurses at Saint Thomas's Hospital in London, whose opening marked the beginning of professional education in nursing. For most of the remainder of her life Nightingale was confined to bed by an illness contracted in the Crimea, which prevented her from continuing her own work as a nurse. The illness did not stop her campaigning to improve health standards, however: she published 200 books, reports and pamphlets. One of these publications was a book titled Notes on Nursing (1860), the first textbook specifically for use in the teaching of nurses; it was translated into many languages. 
In later life Florence Nightingale suffered from poor health, and in 1895 she went blind. Soon afterwards the loss of other faculties meant that she had to receive full-time nursing care. Although a complete invalid, she lived another fifteen years before her death in London on 13th August 1910.
Source: https://essaypride.com/ex/florences-views-on-reforming-military-hospitals-fd98e
Samurai (侍, /ˈsæmʊraɪ/) were the hereditary military nobility and officer caste of medieval and early-modern Japan from the 12th century to their abolition in the 1870s. They were the well-paid retainers of the daimyo (the great feudal landholders). They had high prestige and special privileges, such as wearing two swords. They cultivated the bushido codes of martial virtues, indifference to pain and unflinching loyalty, engaging in many local battles. During the peaceful Edo era (1603 to 1868) they became the stewards and chamberlains of the daimyo estates, gaining managerial experience and education. In the 1870s they made up 5% of the population. The Meiji Restoration ended their feudal roles, and they moved into professional and entrepreneurial roles. Their memory and weaponry remain prominent in Japanese popular culture.
Source: http://yomoya.info/3434/10880-japanese-samurai-warrior-armor.html
Alzheimer’s Disease can be prevented
New study shows how to prevent Alzheimer’s disease
Our whole population is living longer, and as we do, the fear of dementia and Alzheimer’s grows with each passing day. Until recently the only way we could definitively diagnose Alzheimer’s was through an autopsy after death: the evidence was in what we call beta-amyloid plaques, and bundles of these were considered positive proof. A recently published study has shown that this theory is completely wrong and without any medical basis. New technology today uses a PET (positron emission tomography) scan, which uses radiation to detect the presence of beta-amyloid plaques in the brain while the patient is still alive. The new study examined 14,000 residents of a retirement community, going back to the early 1980s. These people were continually tested for all sorts of diseases, including Alzheimer’s, and a large majority of the residents turned out to have large bundles of beta-amyloid plaques in their brains but no symptoms of either dementia or Alzheimer’s. How could this be possible? The researchers had to determine whether there was another cause of Alzheimer’s, and whether it could be prevented. Dr. Claudia Kawas of the University of California, Irvine, armed with a six-million-dollar grant from the National Institutes of Health, studied the files of the 14,000 patients and checked on those still alive every six months. The study was originally designed to observe and record the lifestyles of the patients who lived the longest and stayed the healthiest into their middle-to-late nineties, and of those who lived past 100. What Dr. Kawas and her colleagues found was that the cause of Alzheimer’s was not beta-amyloid plaques in the brain, but mini-strokes caused by LOW blood pressure. You read that correctly. 
While every physician is concerned about high blood pressure and puts patients on multiple medications to bring it down, it turns out that the body needs higher blood pressure as it ages. As you age, your arteries are not as elastic and pliable as they were when you were young. More pressure is needed to push the blood through those thin vessels and into the very fine capillaries, and if your blood pressure is too low you will suffer hundreds of mini-strokes in the brain of which you are not even aware. Your bloodstream carries oxygen to your brain, and a temporary stop in blood flow produces a mini-stroke. When Dr. Kawas examined the brains of people who had taken medications that lower blood pressure, she found many areas where cells had died off and there was no activity. Patients may have hundreds of these episodes without feeling a thing, and yet it was these patients who suffered dementia and were even labelled Alzheimer’s patients. It was the use of the PET scan that proved this result beyond any doubt: the majority of these patients did not have any amyloid plaques in their brains at all, yet had severe forms of dementia. Your blood pressure is measured in millimeters of mercury. A perfect blood pressure is considered to be 120 over 80. The top (higher) number, known as the systolic pressure, measures the pressure in the arteries when the heart beats (the heart muscle is contracting); the lower number, known as the diastolic pressure, measures the pressure in the arteries between beats, when the heart muscle is at rest and refilling with blood. 120 over 80 is a great number for young people, but the guidelines move up to 135 over 85 as you age. However, this new study showed that between the ages of 70 and 100 you were better off letting your blood pressure gradually rise, to ensure that sufficient oxygen was supplied to the brain. 
It was the medicated people who suffered the most mini-strokes and the worst cases of dementia and Alzheimer’s, while those who did not take medication seemed to age gracefully and keep all their faculties intact. Unfortunately this study is brand new, and 99 per cent of family physicians are still trying to lower your blood pressure in the hope of preventing heart attacks or strokes. The new study suggests going the natural way, without the use of a lot of medications. This does not mean that if you have very high blood pressure, such as 190 over 110, you should let it go untreated: with numbers that large you must get it down to a safe level, though of course not so low that your brain is deprived of oxygen. It also means that the drugs used to treat Alzheimer’s patients are useless, because they are designed to prevent the accumulation of amyloid plaques in the brain. No wonder the results with these drugs have been so bad and the side effects so horrible. It also explains why, in all the years I worked as a pharmacist, the patients I saw taking multiple medications for blood pressure and heart disease always seemed to have the highest incidence of dementia. More good news from the old-age study: as I mentioned earlier, the original premise of this study was to find out why some people live longer than others and stay healthier into their nineties. Some things are obvious; those who exercised lived longer than those who did not. The interesting thing about exercise was that as little as 15 minutes a day made the difference between a shorter life and a long one; in fact, those who exercised more frequently and for shorter periods lived longer than those who exercised for two or three hours at a time. The study showed that being obese at any age is unhealthy; however, Dr. Kawas found that older people who were moderately overweight or of average weight lived longer than people who were underweight. 
Apparently it is not good to be skinny when you are old, and it is quite natural to put on a few pounds as you age. In the study, people who drank up to two drinks a day had a 10 to 15 per cent reduced risk of death compared to non-drinkers. It didn’t matter whether they drank red wine, white wine, beer or any other kind of alcoholic beverage: no matter what you drink, you will live longer as long as you drink in moderation and do not binge drink. I would love to say that the study showed that people taking vitamins lived longer, but the evidence was inconclusive. That was because the study started in the eighties, before vitamins and supplements were very popular; although most of the older people in the community were taking a variety of supplements for their health, no conclusion could be drawn because most of that use occurred in the last 10 to 15 years of the study. The study also showed that involving yourself in social activities increased longevity. Those who played board games and cards, attended book clubs and generally spent time with others in social settings lived longer: for every hour spent on such activities, longevity increased by a day. That seems like a good investment. The secret of longevity seems quite pleasant. Exercise a little each day, about 15 minutes at a time. Socialize with your friends and have one or two drinks a day. Let your blood pressure rise naturally, but not too high, and don’t worry if you put on a few extra pounds. Although it was not mentioned in the study, this lifestyle in old age seems to me to be relatively stress-free, and I think that is a huge factor in growing old and aging well.
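The guideline figures quoted above can be put into a small sketch. This is illustrative only and not medical advice: the function name and the thresholds wired into it are simply the article's own numbers (120/80 for younger adults, 135/85 with age, and 190/110 as the example of pressure that must always be treated), not any clinical standard.

```python
# Illustrative sketch of the article's blood-pressure figures (not medical advice).
# Readings are in mmHg; classify_bp is a hypothetical helper name.

def classify_bp(systolic: int, diastolic: int, older_adult: bool = False) -> str:
    """Compare a reading with the guideline numbers quoted in the article."""
    # The article's example of pressure that must be brought down regardless of age.
    if systolic >= 190 or diastolic >= 110:
        return "very high"
    # 120/80 for young people; the guideline "moves up" to 135/85 with age.
    sys_limit, dia_limit = (135, 85) if older_adult else (120, 80)
    if systolic <= sys_limit and diastolic <= dia_limit:
        return "within guideline"
    return "above guideline"

print(classify_bp(118, 78))                    # a young adult at the "perfect" 120/80 mark
print(classify_bp(130, 82, older_adult=True))  # acceptable under the age-adjusted limit
print(classify_bp(192, 112))                   # the article's must-treat example
```

A reading of 130 over 82 lands differently depending on age here, which is the article's point: the same numbers that exceed the young-adult guideline sit comfortably inside the age-adjusted one.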
<urn:uuid:ac7c5207-e101-48e4-9f8f-bf850ea0e81d>
CC-MAIN-2020-05
http://www.barryshealthnews.com/?p=681
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250609478.50/warc/CC-MAIN-20200123071220-20200123100220-00079.warc.gz
en
0.981656
1,528
3.28125
3
[ -0.4019734859466553, 0.27780941128730774, 0.1772029846906662, 0.042689789086580276, 0.24659785628318787, 0.4278123378753662, 0.34675371646881104, 0.20731143653392792, -0.01984795182943344, -0.2174038290977478, 0.21365272998809814, -0.0509975403547287, -0.03675314038991928, 0.42287686467170...
6
Alzheimer’s Disease can be prevented New study shows how to prevent Alzheimer’s disease Our whole population is living longer and as we do the fear of dementia and Alzheimer’s grows with each passing day. Up until recently the only way we could definitively diagnose Alzheimer’s was through an autopsy after death. The evidence was in what we called beta-amyloid plaques and bundles of these were considered positive proof. A new study just recently published has shown that this theory is completely wrong and without any medical basis. New technology today uses a PET (positive emission tomography) scan which uses radiation that can detect the presence of beta-amyloid plaques in the brain while the patient is still alive. The new study examined 14,000 residents of a retirement community going back into the early 1980’s. They continually tested these people for all sorts of diseases including Alzheimer’s and found that that a large majority of the residents did have large bundles of beta-amyloid plaques in their brains but had no symptoms of either dementia or Alzheimer’s. How could this be possible? They had to determine if there was another cause of Alzheimer’s and could it be prevented. Dr.Claudia Kawas of the University of California, Irvine, armed with a 6 million dollar grant from the National Institutes of Health studied the files of 14,000 patients and checked on the ones that were still alive every six months. The study was originally designed to observe and record the lifestyles of the patients that lived the longest and stayed the healthiest into their middle to late nineties and those that lived to over 100 years old. What Dr. Kawas and her colleagues found was that cause of Alzheimer’s was not beta-amyloid plaques in the brain, but mini-strokes caused by LOW blood pressure. You read that correctly. 
While every physician is so concerned about high blood pressure and putting patients on multiple medications to bring it down, it turns out that the body needs to have higher blood pressure as it ages. As you age your arteries are not as elastic and pliable as they were when you were young. More pressure is needed to push the blood through those very thin blood vessels such as your arteries and then into the very fine capillaries and if your blood pressure is too low you will suffer hundreds of mini strokes in the brain of which you are not even aware. Your bloodstream carries the oxygen to your brain and if there is a temporary stop in blood flow you get a mini stroke. When Dr. Kawas examined the brains of those people who took medications that lower blood pressure, she found many areas where cells had died off and their was no activity. Patients may have hundreds of these episodes without feeling a thing and yet it was these patients who suffered dementia and were even labelled as Alzheimer patients. It was the use of the PET scan that proved this result beyond any doubt. The majority of these patients did not have any amyloid plaques in their brain at all but had severe forms of dementia. Your blood pressure is measured in millimeters of mercury. A perfect blood pressure is considered to be 120 over 80. What this means is that the top number or the higher number, better known as the systolic pressure measures the blood pressure in the arteries when the heart beats (heart muscle is contracting). The lower number is known as the diastolic pressure and measures the pressure in the arteries between beats or when the heart muscle is at rest and refills with blood. The 120 over 80 is a great number for young people but the guidelines move up as you age into 135 over 85. However this new study showed that as you aged between 70 and 100 you were better off to let your blood pressure gradually rise to ensure that sufficient amounts of oxygen were supplied to the brain. 
It was the medicated people who suffered the most mini-strokes and the worst cases of dementia and Alzheimer’s. Those who did not take medication seemed to age gracefully and keep all their faculties intact. Unfortunately, this study is brand new, and 99 per cent of family physicians are still trying to lower your blood pressure in the hope of preventing heart attacks or strokes. The new study suggests going the natural way, without the use of a lot of medications. This does not mean that if you have very high blood pressure, such as 190 over 110, you should let it go untreated. With numbers that large you must get it down to a safe level, but of course not so low that your brain is deprived of oxygen. This also means that all the drugs used to treat Alzheimer’s patients are useless, because they are designed to prevent the accumulation of amyloid plaques in the brain. No wonder the results with these drugs have been so bad, and the side effects so horrible. It also explains why, in all the years I worked as a pharmacist, the patients who took multiple medications for blood pressure and heart disease always seemed to have the highest incidence of dementia.

More good news from the old age study

As I mentioned earlier, the original premise of this study was to find out why some people live longer than others and stay healthier into their nineties. Some things are obvious: those who exercised lived longer than those who did not. The interesting thing about exercise was that as little as 15 minutes a day made the difference between a shorter life and a long life. In fact, those who exercised more frequently and for shorter periods lived longer than those who exercised for two or three hours at a time. The study showed that being obese at any age is unhealthy; however, Dr. Kawas found that older people who were moderately overweight or of average weight lived longer than people who were underweight. 
Apparently it’s not good to be skinny when you are old, and it is quite natural to put on a few pounds as you age. In the study, people who drank up to two drinks a day had a 10 to 15 per cent lower risk of death compared with non-drinkers. It didn’t matter whether they drank red wine, white wine, beer, or any other kind of alcoholic beverage: no matter what you drink, you will live longer as long as you drink in moderation and do not binge drink. I would love to say that the study showed that people taking vitamins lived longer, but the evidence was inconclusive. The study started in the eighties, before vitamins and supplements were very popular, and although most of the older people in the community were taking a variety of supplements for their health, no conclusion could be drawn because most of that supplement use occurred in the last 10 to 15 years of the study. The study also showed that involving yourself in social activities increased longevity. Those who played board games and cards, attended book clubs, and generally spent time with others in social settings lived longer. For every hour spent on such activities, longevity increased by a day. That seems like a good investment. The secret of longevity seems quite simple: exercise a little each day, about 15 minutes at a time; socialize with your friends; have one or two drinks a day; let your blood pressure rise naturally, but not too high; and don’t worry if you put on a few extra pounds. Although it was not mentioned in the study, this lifestyle in old age seems to me relatively stress-free, and I think that is a huge factor in growing old and aging well.
Malcolm X (19 May 1925-21 Feb. 1965), African-American religious and political leader also known as el-Hajj Malik el-Shabazz, was born Malcolm Little in Omaha, Nebraska, the son of Earl Little and Louise (also Louisa) Norton, both activists in the Universal Negro Improvement Association established by Marcus Garvey. Earl Little, a Georgia-born itinerant Baptist preacher, encountered considerable racial harassment because of his black nationalist views. He moved his family several times before settling in Michigan, purchasing a home in 1929 on the outskirts of East Lansing, where Malcolm Little spent his childhood. In 1931 the elder Little was run over by a streetcar and died. Although police concluded that the death was accidental, the victim's friends and relatives suspected that he had been murdered by a local white supremacist group. This incident led to a severe decline in the family's economic fortunes and contributed to Louise Little's mental deterioration. In January 1939 she was declared legally insane and committed to a Michigan mental asylum, where she remained until 1963. Although Malcolm Little excelled academically in grammar school and was popular among classmates at the predominately white schools, he also became embittered toward white authority figures. In his autobiography he recalled quitting school after a teacher warned that his desire to become a lawyer was not a "realistic goal for a nigger." As his mother's mental health deteriorated and he became increasingly incorrigible, welfare officials intervened, placing him in several reform schools and foster homes. In 1941 he left Michigan to live in Boston with his half sister, Ella Collins. In Boston and New York during the early 1940s, Malcolm held a variety of railroad jobs while also becoming increasingly involved in criminal activities such as peddling illegal drugs and numbers running. At this time he was often called Detroit Red because of his reddish hair. 
Arrested in 1946 for larceny as well as breaking and entering, he was sent to prison in February 1946. While in Concord Reformatory in Massachusetts, Malcolm X responded to the urgings of his brother Reginald and became a follower of Elijah Muhammad (formerly Robert Poole), leader of the Temple of Islam (later Nation of Islam--often called the Black Muslims), a small black nationalist Islamic sect. Attracted to the religious group's racial doctrines, which categorized whites as "devils," he began reading extensively about world history and politics, particularly concerning African slavery and the oppression of black people in America. After he was paroled from prison in August 1952, he became Minister Malcolm X, using the surname assigned to him in place of the African name that had been taken from his slave ancestors. Malcolm X quickly became Elijah Muhammad's most effective minister, bringing large numbers of new recruits into the group during the 1950s and early 1960s. By 1954 he had become minister of New York Temple No. 7, and he later helped establish Islamic temples in other cities. In 1957 he became the Nation of Islam's national representative, a position of influence second only to that of Elijah Muhammad. In January 1958 he married Betty X (Sanders), also a Muslim; the two had six daughters. Malcolm's forceful, cogent oratory attracted considerable publicity and a large personal following among discontented African Americans. In his speeches he urged black people to separate from whites and win their freedom "by any means necessary." In 1957, after New York police beat and jailed Nation of Islam member Hinton Johnson, Malcolm X mobilized supporters to confront police officials and secure medical treatment. A 1959 television documentary on the Nation of Islam called The Hate That Hate Produced further increased Malcolm's notoriety among whites. 
In 1959 he traveled to Europe and the Middle East on behalf of Elijah Muhammad, and in 1961 he served as Muhammad's emissary at a secret Atlanta meeting seeking an accommodation with the Ku Klux Klan. The following year he participated in protest meetings prompted by the killing of a black Muslim during a police raid on a Los Angeles mosque. By 1963 he had become a frequent guest on radio and television programs and was the most well known figure in the Nation of Islam. Malcolm X was particularly harsh in his criticisms of the nonviolent strategy to achieve civil rights reforms advocated by Martin Luther King, Jr. His letters seeking King's participation in public forums were generally ignored by King. During a November 1963 address at the Northern Negro Grass Roots Leadership Conference in Detroit, Malcolm derided the notion that African Americans could achieve freedom nonviolently. "The only revolution in which the goal is loving your enemy is the Negro revolution," he announced. "Revolution is bloody, revolution is hostile, revolution knows no compromise, revolution overturns and destroys everything that gets in its way." Malcolm also charged that King and other leaders of the recently held March on Washington had taken over the event, with the help of white liberals, in order to subvert its militancy. "And as they took it over, it lost its militancy. It ceased to be angry, it ceased to be hot, it ceased to be uncompromising," he insisted. Despite his caustic criticisms of King, however, Malcolm nevertheless identified himself with the grass-roots leaders of the southern civil rights protest movement. His desire to move from rhetorical to political militancy led him to become increasingly dissatisfied with Elijah Muhammad's apolitical stance. As he later explained in his autobiography, "It could be heard increasingly in the Negro communities: 'Those Muslims talk tough, but they never do anything, unless somebody bothers Muslims.' 
" Malcolm's disillusionment with Elijah Muhammad resulted not only from political differences but also from his personal dismay when he discovered that the religious leader had fathered illegitimate children. Other members of the Nation of Islam began to resent Malcolm's growing prominence and to suspect that he intended to lay claim to leadership of the group. When Malcolm X remarked that President John Kennedy's assassination in November 1963 was a case of the "chickens coming home to roost," Elijah Muhammad used the opportunity to ban his increasingly popular minister from speaking in public. Despite this effort to silence him, Malcolm X continued to attract public attention during 1964. He counseled boxer Cassius Clay, who publicly announced, shortly after winning the heavyweight boxing title, that he had become a member of the Nation of Islam and adopted the name Muhammad Ali. In March 1964 Malcolm announced that he was breaking with the Nation of Islam to form his own group, Muslim Mosque, Inc. The theological and ideological gulf between Malcolm and Elijah Muhammad widened during a month-long trip to Africa and the Middle East. During a pilgrimage to Mecca on 20 April 1964 Malcolm reported that seeing Muslims of all colors worshiping together caused him to reject the view that all whites were devils. Repudiating the racial theology of the Nation of Islam, he moved toward orthodox Islam as practiced outside the group. He also traveled to Egypt, Lebanon, Nigeria, Ghana, Senegal, and Morocco, meeting with political activists and national leaders, including Ghanaian president Kwame Nkrumah. After returning to the United States on 21 May, Malcolm announced that he had adopted a Muslim name, el-Hajj Malik el-Shabazz, and that he was forming a new political group, the Organization of Afro-American Unity (OAAU), to bring together all elements of the African-American freedom struggle. 
Determined to unify African Americans, Malcolm sought to strengthen his ties with the more militant factions of the civil rights movement. Although he continued to reject King's nonviolent, integrationist approach, he had a brief, cordial encounter with King on 26 March 1964 as the latter left a press conference at the U.S. Capitol. The following month, at a Cleveland symposium sponsored by the Congress of Racial Equality, Malcolm X delivered one of his most notable speeches, "The Ballot or the Bullet," in which he urged black people to submerge their differences "and realize that it is best for us to first see that we have the same problem, a common problem--a problem that will make you catch hell whether you're a Baptist, or a Methodist, or a Muslim, or a nationalist." When he traveled again to Africa during the summer of 1964 to attend the Organization of African Unity Summit Conference, he was able to discuss his unity plans at an impromptu meeting in Nairobi with leaders of the Student Nonviolent Coordinating Committee. After returning to the United States in November, he invited Fannie Lou Hamer and other members of the Mississippi Freedom Democratic Party to be guests of honor at an OAAU meeting held the following month in Harlem. Early in February 1965 he traveled to Alabama to address gatherings of young activists involved in a voting rights campaign. He tried to meet with King during this trip, but the civil rights leader was in jail; instead Malcolm met with Coretta Scott King, telling her that he did not intend to make life more difficult for her husband. "If white people realize what the alternative is, perhaps they will be more willing to hear Dr. King," he explained. Even as he strengthened his ties with civil rights activists, however, Malcolm acquired many new enemies. The U.S. government saw him as a subversive, and the Federal Bureau of Investigation initiated efforts to undermine his influence. 
In addition, some of his former Nation of Islam colleagues, including Louis X (later Louis Farrakhan), condemned him as a traitor for publicly criticizing Elijah Muhammad. The Nation of Islam attempted to evict him from the home he occupied in Queens, New York. On 14 February 1965 Malcolm's home was firebombed; although he and his family escaped unharmed, the perpetrators were never apprehended. On 21 February 1965 members of the Nation of Islam shot and killed Malcolm as he was beginning a speech at the Audubon Ballroom in New York City. On 27 February more than 1,500 people attended his funeral service held in Harlem. Although three men were later convicted in 1966 and sentenced to life terms, one of those involved, Thomas Hagan, filed an affidavit in 1977 insisting that his actual accomplices were never apprehended. After his death, Malcolm's views reached an even larger audience than during his life. The Autobiography of Malcolm X, written with the assistance of Alex Haley, became a bestselling book following its publication in 1965. During subsequent years other books appeared containing texts of many of his speeches, including Malcolm X Speaks (1965), The End of White World Supremacy: Four Speeches (1971), and February 1965: The Final Speeches (1992). In 1994 Orlando Bagwell and Judy Richardson produced a major documentary, Malcolm X: Make It Plain. His words and image also exerted a lasting influence on African-American popular culture, as evidenced in the hip-hop or rap music of the late twentieth century and in director Spike Lee's film biography, Malcolm X (1992).
Despite two attempts to conquer Britain, Caesar ultimately returned home empty-handed

In the late summer of 55 BC Julius Caesar stood on the north coast of France and looked out over the Channel. Some 30 miles across the water lay an island which, according to travellers' tales, was rich in pearls, lead, gold, and tin. However, Caesar's interest in Britain was dictated not so much by a desire to exploit her mineral wealth as by the strategic position of the island. He could clearly see that Britain posed a backdoor threat to his latest and greatest conquest, France, whose subjugation Caesar had now enforced after eight years' hard campaigning. During those years the Celts of Britain had aided their Gallic kinsmen against Caesar, and he judged that until Britain was his, the north coast of France would always be vulnerable to surprise attack. Caesar, however, was aware that there was little time left to complete a British invasion before winter brought campaigning to a halt; not time enough, in fact, to mount the usual Roman form of attack, which called for long-term tactics, infiltrating enemy territory and sapping morale through propaganda and subversion. There was no time either for proper reconnaissance of the island, or for gathering information about the nature and size of the country, its harbours and the methods of fighting used by its inhabitants. Caesar had already tried to extract this information from the Veneti, a tribe living in Brittany who traded regularly with the British. But the Veneti had refused to talk. Their recent defeat by the Romans had been marked by the massacre of their nobility and the sale into slavery of most of their people, and Caesar's questions only prompted them to warn the Celts of Britain that Rome's greatest general was now interested in their land. 
Caesar's reputation in Britain was well known, and the Celts knew they would have little chance against the magnificently equipped Roman Army unless their defense was carefully planned. While they armed in secret, they also began to play for time, sending representatives to Caesar at Boulogne ostensibly to offer their submission to Rome. The Celts knew that Caesar would not doubt the sincerity of this; arrogant and accustomed to success as he was, he took this submission as his natural right. The Celts returned to Britain accompanied by Caesar's ambassador, Commius, King of the Atrebates, one of the Gallic tribes. With Commius, Caesar sent 30 horsemen, who had instructions to 'visit as many of the tribes as possible, to persuade them to place themselves under the protection of Rome, and to announce that Caesar himself would shortly be arriving.' Caesar arrived within a few weeks, on an early autumn morning. He came with 80 transports and the X and VII Legions, but without his cavalry, whose ships had been trapped in France by savage Channel winds. As Caesar approached the White Cliffs of Dover, he found an impressive sight awaiting him. On the clifftops stood rank upon rank of Celts, waiting, Caesar had no doubt, to pay homage to himself and his legions. It was only when the Roman ships came closer to the shore that Caesar saw this was no welcoming party: the British ranks were bristling with weapons. The Roman galleys sailed northeast towards Deal, and the Celts walked and rode along the clifftops, pacing the ships. It was an unnerving sight for the would-be invaders, and by the time the galleys were as close to the beach as their size would allow, even the courageous X Legion, Caesar's favourite, was apprehensive. Quite uncharacteristically, these legionaries hesitated for several minutes before obeying the order to jump into the waist-high water. Their hesitation was soon justified. 
The men were still wading towards the shore, weighed down by their arms and the heavy mailed leather jerkins they wore, when the British horsemen came riding out into the surf, swinging their swords and shouting battle cries. Behind the horsemen, on the beach, stood more Britons armed with stones and javelins. Bombarded from above and slipping on the shingle, some of the Romans fell into the water. Enough reached the beach, however, to form up in line and charge their assailants, and with the menacing line of Roman javelins now advancing on them, the Celts turned and fled. It was fortunate for them that Caesar, lacking his cavalry, could order no pursuit. The Britons had now tested the strength and determination of the Romans, and had found them to be considerable. They decided therefore to play for time once again, and the following day sent a deputation to Caesar offering apologies for their hostility. With the arrival of the British chieftains who swore loyalty to Caesar, the general once again began to hope that Britain would prove an easy conquest.

The Celtic ceasefire

The Celts' goodwill, however, soon vanished when an unexpected but powerful ally came to their aid: the British weather. About a week after Caesar's arrival, the ships carrying his cavalry appeared on the horizon. Almost at once, a fierce storm blew up, tossing the ships about on the water, snapping their masts and tearing their sails to shreds. As the fury of the gale mounted, the ships were driven back towards France, and by the time darkness came, all had disappeared from sight. The bleak dawn that followed revealed a beach littered with the wreckage of Caesar's transports. All that remained at anchor was a pitiful row of storm-battered hulks. As the Romans surveyed the appalling scene, the morale of the Celts rose once more. The British chieftains began to slip away from the camp. Peasants were rounded up, war chariots made ready, arms burnished and sharpened. 
Now that the Romans seemed marooned on their unfriendly island, the Britons were once more preparing to fight them. The Romans, however, were far from helpless. Roman legionaries were not only superb fighters, they were skilful engineers as well, and this would not be the first time they had repaired ships by using the wreckage of those more badly damaged. They were even able to forge the nails that held the timbers together. While the men of the X Legion began this repair work, their colleagues of the VII went foraging for food. From their dense oak forests the Britons watched the Romans begin to reap their barley fields, waited till the task absorbed them and then rushed out of the trees, yelling war cries and brandishing spears. Some distance away in the Roman camp, sentries saw a huge rising cloud of dust. Immediately Caesar himself and a handful of troops stormed out of the camp and ran towards the fields. At their approach the Britons fled back into the forest. The next few days brought more heavy rain, but on this occasion the weather worked to the Romans' advantage. It kept the Britons away long enough for them to finish repairing some of their ships and send them to Boulogne to fetch more materials. However, when the downpour at last abated, the Britons staged another lightning raid. The Romans drove them back to their forest hideouts, but by this time Caesar had lost patience with so capricious an enemy, The following evening he packed his troops into the remaining galleys and sailed back to France. He had spent less than three weeks in Britain. Caesar's second assault Caesar did not record his feelings about the failure of his 55 BC invasion, but he was careful to send a report to the Senate in Rome painting a favourable picture of what had, in reality, been a near disaster. As a result, the Senate voted a 20-day period of thanksgiving for Caesar's 'exploit.' 
To explain its lack of success, Caesar intimated that his expedition had been a mere dress rehearsal for a full-scale assault, planned for the following year. Convinced now that a new 'province' would soon be added to the Roman Empire, a motley group of opportunists, treasure-seekers, and adventurers joined Caesar's second invasion force. This time he took with him five legions (25,000 men) and 2,000 cavalry. He also embarked an elephant--probably the first ever to be seen in Britain. The Roman fleet of 800 ships arrived off the Kent coast in the summer of 54 BC to find the landing beach deserted. The newcomers, unaware of the events of the previous summer, supposed that the mere sight of the Roman galleys had frightened the Celts away. Caesar knew better. He guessed, correctly, that the Britons had decided to wage guerilla warfare on the Romans, a plan well suited to their inferior weapons and tactics. A pitched battle, which Caesar knew the Britons could not win, was what he now desired most. Read more: 3 of the best British mysteries Caesar sent scouts to round up a few prisoners, and from them he learned that the Britons were about ten miles away. It was nearly midnight, but Caesar set off immediately and marched through the moonlit forests and marshes of Kent towards Canterbury. There was a brief skirmish near the banks of the river Stour, but as soon as the Romans began to attack in earnest, the Britons disappeared into the trees. The further the Romans advanced, the further the Britons retreated, drawing the invaders deeper and deeper into the forest. Once again, the weather came to the Britons' aid. No sooner had the Romans sighted the British rearguard, than a messenger came running up to Caesar with the news that a gale in the Channel had wrecked his ships, plucking them from their moorings and smashing them down upon the shore. 
A disappointed and angry Caesar was obliged to abandon the pursuit of his elusive enemy and return to the beach to survey the damage. Forty ships had been completely destroyed. Those less badly damaged were dragged up on the beach and for ten days the Romans worked around the clock to repair them. That done, Caesar ordered his men to dig themselves in behind earthen ramparts and wait for the Britons to attack in force. The Britons let them wait. They had now overcome petty rivalries in their own camp and had united under one leader, Cassivellaunus, King of the Catuvellauni tribe. He was content now to nibble at the Romans, by sending out raiding parties and staging a few ambushes, knowing that sooner or later, Caesar would have to take the initiative. Summer was fast fading into autumn when Caesar at last lost patience and marched from his fortified camp towards the Thames. The Romans arrived at the only crossing place to find that the Britons had barricaded it by driving stakes into the riverbed. The obstacle was overcome when the Romans clothed their elephant in an armor of iron scales and placed on its back a tower full of archers and slingers. The great beast lumbered into the Thames, with a shower of arrows and stones pouring down from the tower. The terrified Britons bolted for the protection of the trees and refused to come out, except to make a few hit-and-run forays, which did them little good. The Four Kings of Kent Now the unmistakable smell of autumn was in the air and Caesar, aware that time was running out, resorted to subversive tactics. He had in his camp the son of a British chieftain recently defeated by Cassivellaunus. When Caesar promised to restore this young man to his stolen kingdom, some of the smaller tribes deserted their leader. Cassivellaunus, in his growing isolation, persuaded the four kings of Kent to attack Caesar's base camp and so draw the Romans away to defend it. 
The plan failed, but Caesar eagerly seized his chance when Cassivellaunus asked for a truce. Caesar negotiated a treaty imperiously, almost as if he had won a great victory. Cassivelaunus promised to abide by it, but Caesar, impatient now to be gone, took no precautions to ensure that he did so. All Caesar wanted was to get away from this inhospitable island, from its abominable weather, and its cunning inhabitants. Autumn gales were already blowing round the coast and the winds were frothing up dangerously choppy seas when the Roman ships weighed anchor and sailed for France. Read more: The real story of King Edward II's husband Julius Caesar never returned to Britain. The island was left undisturbed for nearly a century, until AD 43 when the Emperor Claudius ordered the invasion that succeeded where that of Rome's greatest general had so conspicuously failed.
Despite two attempts to conquer Britain, Caesar ultimately returned home empty-handed.

In the late summer of 55 BC Julius Caesar stood on the north coast of France and looked out over the Channel. Some 30 miles across the water lay an island which, according to travellers' tales, was rich in pearls, lead, gold, and tin. However, Caesar's interest in Britain was dictated not so much by a desire to exploit her mineral wealth as by the strategic position of the island. He could clearly see that Britain posed a backdoor threat to his latest and greatest conquest (France), whose subjugation Caesar had now enforced after eight years' hard campaigning. During those years the Celts of Britain had aided their Gallic kinsmen against Caesar, and he judged that until Britain was his, the north coast of France would always be vulnerable to surprise attack.

Caesar, however, was aware that little time remained to complete a British invasion before winter brought campaigning to a halt; not time enough, in fact, to mount the usual Roman form of attack, which called for long-term tactics, infiltrating enemy territory and sapping morale through propaganda and subversion. There was no time either for proper reconnaissance of the island, or for gathering information about the nature and size of the country, its harbours and the methods of fighting used by its inhabitants. Caesar had already tried to extract this information from the Veneti, a tribe living in Brittany who traded regularly with the British. But the Veneti had refused to talk. Their recent defeat by the Romans had been marked by the massacre of their nobility and the sale into slavery of most of their people, and Caesar's questions only prompted them to warn the Celts of Britain that Rome's greatest general was now interested in their land.
Caesar's reputation in Britain was well known, and the Celts knew they would have little chance against the magnificently equipped Roman Army unless their defence was carefully planned. While they armed in secret, they also began to play for time, sending representatives to Caesar at Boulogne ostensibly to offer their submission to Rome. The Celts knew that Caesar would not doubt the sincerity of this; arrogant and accustomed to success as he was, he took this submission as his natural right.

The Celts returned to Britain accompanied by Caesar's ambassador, Commius, King of the Atrebates, one of the Gallic tribes. With Commius, Caesar sent 30 horsemen, who had instructions to 'visit as many of the tribes as possible, to persuade them to place themselves under the protection of Rome, and to announce that Caesar himself would shortly be arriving.'

Caesar arrived within a few weeks, on an early autumn morning. He came with 80 transports and the X and VII Legions, but without his cavalry, whose ships had been trapped in France by savage Channel winds. As Caesar approached the White Cliffs of Dover, he found an impressive sight awaiting him. On the clifftops stood rank upon rank of Celts, waiting, Caesar had no doubt, to pay homage to himself and his legions. It was only when the Roman ships came closer to the shore that Caesar saw this was no welcoming party: the British ranks were bristling with weapons. The Roman galleys sailed northeast towards Deal, and the Celts walked and rode along the clifftops, pacing the ships. It was an unnerving sight for the would-be invaders, and by the time the galleys were as close to the beach as their size would allow, even the courageous X Legion, Caesar's favourite, was apprehensive. Quite uncharacteristically, these legionaries hesitated for several minutes before obeying the order to jump into the waist-high water. Their hesitation was soon justified.
The men were still wading towards the shore, weighed down by their arms and the heavy mailed leather jerkins they wore, when the British horsemen came riding out into the surf, swinging their swords and shouting battle cries. Behind the horsemen, on the beach, stood more Britons armed with stones and javelins. Bombarded from above and slipping on the shingle, some of the Romans fell into the water. Enough reached the beach, however, to form up in line and charge their assailants, and with the menacing line of Roman javelins now advancing on them, the Celts turned and fled. It was fortunate for them that Caesar, lacking his cavalry, could order no pursuit. The Britons had now tested the strength and determination of the Romans, and had found them to be considerable. They decided therefore to play for time once again, and the following day sent a deputation to Caesar offering apologies for their hostility. With the arrival of the British chieftains who swore loyalty to Caesar, the general once again began to hope that Britain would prove an easy conquest.

The Celtic ceasefire

The Celts' goodwill, however, was soon seen to vanish when an unexpected but powerful ally came to their aid: the British weather. About a week after Caesar's arrival, the ships carrying his cavalry appeared on the horizon. Almost at once, a fierce storm blew up, tossing the ships about on the water, snapping their masts and tearing their sails to shreds. As the fury of the gale mounted, the ships were driven back towards France, and by the time darkness came, all had disappeared from sight. The bleak dawn that followed revealed a beach littered with the wreckage of Caesar's transports. All that remained at anchor was a pitiful row of storm-battered hulks. As the Romans surveyed the appalling scene, the morale of the Celts rose once more. The British chieftains began to slip away from the camp. Peasants were rounded up, war chariots made ready, arms burnished and sharpened.
Now that the Romans seemed marooned on their unfriendly island, the Britons were once more preparing to fight them. The Romans, however, were far from helpless. Roman legionaries were not only superb fighters but skilful engineers as well, and this would not be the first time they had repaired ships using the wreckage of those more badly damaged. They were even able to forge the nails that held the timbers together. While the men of the X Legion began this repair work, their colleagues of the VII went foraging for food. From their dense oak forests the Britons watched the Romans begin to reap their barley fields, waited till the task absorbed them, and then rushed out of the trees, yelling war cries and brandishing spears. Some distance away in the Roman camp, sentries saw a huge rising cloud of dust. Immediately Caesar himself and a handful of troops stormed out of the camp and ran towards the fields. At their approach the Britons fled back into the forest. The next few days brought more heavy rain, but on this occasion the weather worked to the Romans' advantage. It kept the Britons away long enough for them to finish repairing some of their ships and send them to Boulogne to fetch more materials. However, when the downpour at last abated, the Britons staged another lightning raid. The Romans drove them back to their forest hideouts, but by this time Caesar had lost patience with so capricious an enemy. The following evening he packed his troops into the remaining galleys and sailed back to France. He had spent less than three weeks in Britain.

Caesar's second assault

Caesar did not record his feelings about the failure of his 55 BC invasion, but he was careful to send a report to the Senate in Rome painting a favourable picture of what had, in reality, been a near disaster. As a result, the Senate voted a 20-day period of thanksgiving for Caesar's 'exploit.'
To explain its lack of success, Caesar intimated that his expedition had been a mere dress rehearsal for a full-scale assault planned for the following year. Convinced now that a new 'province' would soon be added to the Roman Empire, a motley group of opportunists, treasure-seekers, and adventurers joined Caesar's second invasion force. This time he took with him five legions (25,000 men) and 2,000 cavalry. He also embarked an elephant, probably the first ever to be seen in Britain. The Roman fleet of 800 ships arrived off the Kent coast in the summer of 54 BC to find the landing beach deserted. The newcomers, unaware of the events of the previous summer, supposed that the mere sight of the Roman galleys had frightened the Celts away. Caesar knew better. He guessed, correctly, that the Britons had decided to wage guerrilla warfare on the Romans, a plan well suited to their inferior weapons and tactics. A pitched battle, which Caesar knew the Britons could not win, was what he now desired most.

Caesar sent scouts to round up a few prisoners, and from them he learned that the Britons were about ten miles away. It was nearly midnight, but Caesar set off immediately and marched through the moonlit forests and marshes of Kent towards Canterbury. There was a brief skirmish near the banks of the river Stour, but as soon as the Romans began to attack in earnest, the Britons disappeared into the trees. The further the Romans advanced, the further the Britons retreated, drawing the invaders deeper and deeper into the forest. Once again, the weather came to the Britons' aid. No sooner had the Romans sighted the British rearguard than a messenger came running up to Caesar with the news that a gale in the Channel had wrecked his ships, plucking them from their moorings and smashing them down upon the shore.
A disappointed and angry Caesar was obliged to abandon the pursuit of his elusive enemy and return to the beach to survey the damage. Forty ships had been completely destroyed. Those less badly damaged were dragged up on the beach, and for ten days the Romans worked around the clock to repair them. That done, Caesar ordered his men to dig themselves in behind earthen ramparts and wait for the Britons to attack in force. The Britons let them wait. They had now overcome petty rivalries in their own camp and had united under one leader, Cassivellaunus, King of the Catuvellauni tribe. He was content now to nibble at the Romans by sending out raiding parties and staging a few ambushes, knowing that sooner or later Caesar would have to take the initiative. Summer was fast fading into autumn when Caesar at last lost patience and marched from his fortified camp towards the Thames. The Romans arrived at the only crossing place to find that the Britons had barricaded it by driving stakes into the riverbed. The obstacle was overcome when the Romans clothed their elephant in an armour of iron scales and placed on its back a tower full of archers and slingers. The great beast lumbered into the Thames, with a shower of arrows and stones pouring down from the tower. The terrified Britons bolted for the protection of the trees and refused to come out, except to make a few hit-and-run forays, which did them little good.

The Four Kings of Kent

Now the unmistakable smell of autumn was in the air and Caesar, aware that time was running out, resorted to subversive tactics. He had in his camp the son of a British chieftain recently defeated by Cassivellaunus. When Caesar promised to restore this young man to his stolen kingdom, some of the smaller tribes deserted their leader. Cassivellaunus, in his growing isolation, persuaded the four kings of Kent to attack Caesar's base camp and so draw the Romans away to defend it.
The plan failed, but Caesar eagerly seized his chance when Cassivellaunus asked for a truce. Caesar negotiated a treaty imperiously, almost as if he had won a great victory. Cassivellaunus promised to abide by it, but Caesar, impatient now to be gone, took no precautions to ensure that he did so. All Caesar wanted was to get away from this inhospitable island, from its abominable weather and its cunning inhabitants. Autumn gales were already blowing round the coast and the winds were frothing up dangerously choppy seas when the Roman ships weighed anchor and sailed for France.

Julius Caesar never returned to Britain. The island was left undisturbed for nearly a century, until AD 43, when the Emperor Claudius ordered the invasion that succeeded where that of Rome's greatest general had so conspicuously failed.
After building up a successful, widespread organisation throughout Christendom, why did the Church outlaw and destroy the Knights Templar? Who was behind the downfall, and what happened to the devoted remnants of the Order?

In the mid-12th century, the Muslim world became more united under effective rulers like Saladin, while there was dissension and division in Christendom. The Knights Templar were occasionally at odds with two other Christian military Orders, the Teutonic Knights and the Knights Hospitaller. The Templars were involved in a few unsuccessful campaigns, including the decisive Battle of Hattin in 1187, after which Jerusalem fell, the Temple of Solomon was reclaimed as the Al-Aqsa Mosque, and the Holy Land came under the control of Saladin. After a brief recapture by the Holy Roman Emperor Frederick II between 1229 and 1244, Jerusalem was not to return to Western control until 1917. The Templars were forced to relocate their headquarters to cities further north, such as Acre, but c. 1303 they lost their final foothold in the Holy Land when the island of Arwad fell to the Egyptians. With their mission in the Holy Land lost, the Knights Templar began to lose support. They still managed businesses throughout Europe and the Near East, such as farms, vineyards and their banking network. In 1305, Pope Clement V sent letters to the Grand Masters of the Templars and Hospitallers to discuss merging the two Orders. Neither liked the idea, but both eventually agreed to meet. The Templar Grand Master, Jacques de Molay, arrived first, in early 1307. However, whilst he waited, King Philip IV of France, who was in debt to the Templars, trumped up criminal charges based on claims made by an ousted Templar. At dawn on Friday 13 October 1307, Philip acted against the Templars, and scores of them were arrested in France. Philip put pressure on Clement, and two papal bulls were subsequently issued. Templars were arrested throughout Europe and their assets seized.
Many were burnt at the stake, and most of their assets were given to the Hospitallers. The Knights Templar Order was officially abolished on 22 March 1312. However, the Portuguese king, Denis I, refused to pursue and persecute the former knights, and offered them protection. In 1319, the former Knights Templar Order in Portugal was reconstituted as the Military Order of Christ. Denis negotiated with Pope Clement's successor, John XXII, for recognition of the new Order and its inheritance of the Templars' assets and properties. Today this small remnant of the former Christendom-wide network still exists, with the President of the Portuguese Republic acting as Grand Master.
Explore the presentation of revenge in 'Hamlet'

Revenge is a key theme in Hamlet. It is not only essential to understanding Hamlet's character; it forms the structure for the whole play, supporting and overlapping the other important themes that arise. Though it is Hamlet's revenge that forms the basis of the story, tied into it is the vengeance of Laertes and Fortinbras, whose situations in many ways mirror Hamlet's own. By juxtaposing these avengers, Shakespeare draws attention to their different approaches to the problem of revenge and how they resolve it.

The idea of revenge is first introduced by the appearance of the ghost in Act 1, Scene 5, and linked to it is the theme of hell and the afterlife. At the end of this scene, Hamlet is irreversibly bound to revenge for the duration of the play: 'Speak, I am bound to hear'; 'So art thou to revenge'. The ghost appears with the sole aim of using his son to obtain revenge on his brother, and so every word he speaks is designed to enrage Hamlet and stir in him a desire for vengeance. He uses very emotive language to exaggerate the enormity of the crime, and he concentrates Hamlet's attention on the treachery of Claudius. His description of the murder itself demonises Claudius and contains many references to original sin: 'the serpent that did sting thy father's life now wears his crown.' Hamlet, who has been brought up with absolute notions of good and evil, is susceptible to these religious references: 'O all you host of heaven! O earth! And shall I couple hell?' It is ironic that the ghost refers to his own torment, trapped in purgatory, in order to demonstrate to Hamlet the injustice of the situation, yet this serves only to warn Hamlet of the possible consequences of revenge. Instead of being enraged, Hamlet is now wary of acting rashly or without proof, as doing so could place him in a similar situation to his father.
The other revengers in the play do not have this wariness; they act immediately, without considering the spiritual consequences, and it is unclear whether Hamlet would have had a similar attitude had he not been inadvertently alerted to this danger by Old Hamlet's ghost. Though Hamlet's immediate reaction to news of his father's murder is one of anger and a desire for action, by the end of the scene his desire for revenge is already blunted, for a number of reasons. Unlike Laertes and Fortinbras, Hamlet receives the information of his father's murder from a secret and unreliable source, which means that not only is he unsure of the truth, he is forced to act out his revenge in secret. Throughout the play, Hamlet frustrates the audience with his lack of action, especially as all around him his contemporaries are visibly taking their own revenge.

Fortinbras is in a similar situation to Hamlet, as his father had been murdered by Old Hamlet and his land taken. The land itself is worthless and Fortinbras stands to lose more than he can gain; yet, like Hamlet, for him it is a matter of honour. Both are exacting revenge for something that nobody else cares for or remembers: a dead king for whom nobody grieves and a patch of worthless land. Part of Hamlet's dilemma is the moral question of whether his desire for revenge is worth disrupting and endangering the lives of all those around him: 'whether 'tis nobler in the mind to suffer the slings and arrows of outrageous fortune, or to take arms against a sea of troubles and by opposing end them'. However, unlike Hamlet, Fortinbras does not pause to contemplate the idea of revenge; he acts on it, having 'sharked up a list of lawless resolutes', and marches on Denmark. The difference in their characters is obvious; Fortinbras' character matches his name, 'strong in arm'. He is a man of action, not of words; he has a strong presence and a commanding attitude which demands obedience: 'Go captain, from me greet the Danish king'; 'I will do't my lord'.
Fortinbras' situation is infinitely less complex than Hamlet's own; the boundaries between good and evil, personal and public, right and wrong are, for him, clearly defined. He is able to act openly, uninfluenced by friends and family. Hamlet, on the other hand, is surrounded by people who have obligations to both himself and the king, and is therefore unsure of whom to trust. Hamlet's dilemma is founded on this: any action he takes carries with it risks and possible consequences which could destroy the foundation of his very existence, so he hesitates and does nothing, all the while hating himself for his inaction, since dread 'makes us rather bear those ills we have than fly to others that we know not of'. The problem for Hamlet is that the murder is too close to home, so he is unable to define the boundaries between personal and public. He cannot publicly confront Claudius without proof, because he risks losing his claim to the throne, alienating his friends and family, and being exiled from Denmark: it would be seen as an attempt by the prince to regain the throne rather than as a son avenging his father's murder. On top of this, Hamlet hopes to avoid jeopardising his relationship with his mother, yet at the same time he wants revenge on her for her betrayal.

In order to fully understand Hamlet's psyche, and therefore the reasoning behind his actions, it is important to understand how religion affected all aspects of life in Elizabethan times. It was believed that a person who was able to confess his sins before death would be absolved and therefore go to heaven, but that if a person were unable to do this, his soul would be condemned to purgatory until he was able to confess and repent. Old Hamlet's soul is in purgatory, and Hamlet wants Claudius to suffer the same fate: 'a villain kills my father, and for that, I, his sole son, do this same villain send to heaven.
Why, this is hire and salary, not revenge.' For this reason Hamlet has to wait for the opportune moment to kill Claudius: 'when he is drunk asleep, or in his rage, at game, a-swearing, or about some act that has no relish of salvation in it'. However, the other problem which religion creates is that of Hamlet's own afterlife. If murder for revenge is wrong, then by killing Claudius Hamlet condemns his own soul along with that of Claudius. On the other hand, Hamlet is honour-bound to exact revenge for his father's murder, and the consequences of not doing so could be even more drastic. Even suicide offers no solution, as 'the dread of something after death, the undiscovered country from whose bourn no traveller returns, puzzles the will, and makes us rather bear those ills we have than fly to others we know not of'.

Hamlet's indecisiveness is not just a result of his uncertainty about the consequences his actions will have. He is in emotional turmoil at this point in the play, feeling betrayed and rejected by those on whom he has relied so far in his life. His anger and frustration at his mother's behaviour is amplified by her lack of grief, and his desire for revenge at the start of the play is fuelled mainly by his own grief and a sense of injustice. His anger towards Claudius diminishes as he is distracted from revenge by more immediate concerns, such as his relationships with Ophelia and with his mother.

Part of Hamlet's feeling of isolation stems from what he sees as betrayal by his friends Rosencrantz and Guildenstern and his lover Ophelia. Hamlet's critical relationship with Claudius forces all three to take sides and decide to whom they owe the strongest allegiance. Ophelia's father Polonius, Claudius' right-hand man, instructs her to shun Hamlet and, as his dependant, she is forced to obey him. Women were viewed as property in Shakespearean times, and without a male protector her future prospects were slim.
Also, the emphasis placed on family duty and loyalty was far greater, so to disobey her father would be tantamount to treason. Rosencrantz and Guildenstern were given a direct order from their king, so to disobey would actually have been treason. Added to this was their ignorance of Hamlets situation due to both Hamlet and Claudius’ deceit, which meant that they were unsympathetic with Hamlets mental instability and obsession with old Hamlets death.Hamlet refuses to recognise the impossible situation his friends were placed in, and resents them for abandoning him when he needs them most, even though it is his feud with Claudius that has forced them to into it. Feeling betrayed, he has no compunctions in using them to further his own gains. All three are, ultimately, fatalities of Hamlets vendetta against Claudius, as Hamlet brings about the deaths of Rosencrantz and Guildenstern and drives Ophelia to madness and suicide. Ophelia especially is very much a victim, as in obeying her father she loses Hamlet, and when Hamlet kills Polonious she loses him as well. With Laertes away, she has no-one left to protect her and is very much alone.In many ways, Hamlet himself is a victim of revenge, as he used as a tool by his father, to instigate revenge against old Hamlets killer. By placing this obligation on Hamlet, on top of all his emotional instability, Old Hamlet effectively pushes his son over the edge and renders him incapable of decisiveness. It is unsurprising that Hamlet is unable to take revenge or in fact make any significant decisions, as he is under considerable emotional and mental strain. Laertes is in a similar situation, as Hamlet his friend has murdered his father and driven his sister to madness. 
His vulnerable state of mind makes it easy for Claudius to use him as a tool against Hamlet, so the two friends become instruments in the power struggle between the two brothers, a struggle which crosses the divide between life and death.Laertes’ situation resembles Hamlet in other ways. They are joined by their love for Ophelia, Hamlet as a lover and Laertes as a brother. When Laertes returns to find his father murdered, he faces the same dilemma that Hamlet originally had in that, as far as he knew, the king of Denmark had murdered his father. Unlike Hamlet who promptly chose to employ deceit in order to combat Claudius’s deceit, when Laertes discovers this he immediately confronts Claudius. By doing this he achieves his revenge far sooner than Hamlet, but consequently becomes a tool for Claudius against Hamlet. These two revengers differ in their approach to revenge, but ultimately they come to the same end. They both fall victim to the corruption that surrounds the court of Denmark, with Claudius at the centre. Claudius’ use of deceit throughout the play hides the truth under a veil of dishonesty. Claudius uses other people as tools to achieve his aims, so if they fail he escapes the brunt. He uses Polonious, he uses the king of Norway against Fortinbras, and finally he uses Laertes against Hamlet himself. His corrupting influence means that nobody in Denmarck knows the truth, and Hamlets only attempt to break this veil of deceit causes the death of Polonious instead of Claudius. In act 3 scene 3, Shakespeare uses the curtain concealing Polonious as a metaphor for the corruption surrounding Denmark, making it impossible for Hamlet to take revenge as he is unaware of the truth. Though Hamlet tries to cut through the curtain, he fails and ends up killing the wrong man. 
This shows him that it is no good trying to confront the problem, he must remove the cloak of deceit and reveal Claudius for what he truly is before he can take his revenge.Though Hamlet tries to get around this problem by being deceitful himself, and Laertes tries to confront the problem face on, both end up being used as weapons in a fight that kills them both.The ending of the play is very satisfying despite, or perhaps because of, the deaths of nearly all the characters. For a neat ending, it was necessary that all the characters achieve their revenge, and as there were so many intertwining strands of revenge, it was inevitable that a large proportion of characters would be killed. The play ends with a new beginning, as the corruption at the heart of Denmark dies with Claudius and Hamlet. Hamlet succeeded in taking revenge on Claudius and revealing the truth about his character, and Laertes succeeded in killing Hamlet but died in the process. All this clears the way for Fortinbras, who we see is far more suited to leadership than the indecisive Hamlet.Fortinbras was more successful in his revenge than Hamlet and Laertes for a number of reasons. He is not held back by the dilemma that freezes Hamlet; of having to choose between betraying his fathers trust or losing the throne and alienating everyone he loves. Hamlet is held back by his proximity to Claudius and the situation, whereas Fortinbras is free to act uninfluenced by the people around him. Another factor in Fortinbras’ favour is that, unlike both Hamlet and Laertes, Fortinbras made the decision to take revenge alone, so it was entirely his responsibility. Revenge has to be nurtured in Hamlet and Laertes, and both are used as tools in the ongoing feud between the two brothers. Fortinbras is a man of action, and doesn’t waste time pondering the philosophy behind the revenge mentality, as Hamlet does. 
And unlike Laertes, he plans and organises his revenge, he doesn’t rush straight into confrontation unprepared. In fact, he represents the best qualities of both of them, so it is fitting that it is he who emerges with not only his life, but the throne of Denmark to go with it. Cite this Explore the presentation of revenge in ‘Hamlet’ Essay Explore the presentation of revenge in ‘Hamlet’ Essay. (2018, Feb 08). Retrieved from https://graduateway.com/explore-the-presentation-of-revenge-in-hamlet/
Explore the presentation of revenge in ‘Hamlet’

Revenge is a key theme in Hamlet. It is not only essential to understanding Hamlet’s character; it forms the structure for the whole play, supporting and overlapping other important themes that arise. Though it is Hamlet’s revenge that forms the basis for the story, tied into this is the vengeance of Laertes and Fortinbras, whose situations in many ways mirror Hamlet’s own. By juxtaposing these avengers, Shakespeare draws attention to their different approaches to the problem of revenge and how they resolve it.

The idea of revenge is first introduced by the appearance of the ghost in Act 1 Scene 5, and linked to this is the theme of hell and the afterlife. At the end of this scene, Hamlet is irreversibly bound to revenge for the duration of the play: ‘Speak, I am bound to hear’; ‘So art thou to revenge’. The ghost appears with the sole aim of using his son to obtain revenge on his brother, and so every word he speaks is designed to enrage Hamlet and stir in him a desire for vengeance. He uses very emotive language to exaggerate the enormity of the crime, and he concentrates Hamlet’s attention on the treachery of Claudius. His description of the murder itself demonises Claudius and contains many references to original sin: ‘the serpent that did sting thy father’s life now wears his crown.’ Hamlet, who has been brought up with absolute notions of good and evil, is susceptible to these religious references: ‘O all you host of heaven! O earth! And shall I couple hell?’

It is ironic that the ghost refers to his own torment, trapped in purgatory, in order to demonstrate to Hamlet the injustice of the situation, yet this serves only to warn Hamlet of the possible consequences of revenge. Instead of enraging him, it makes Hamlet wary of acting rashly or without proof, as doing so could place him in a similar situation to his father. 
The other revengers in the play do not have this wariness; they act immediately, without considering the spiritual consequences, and it is unclear whether Hamlet would have had a similar attitude had he not been inadvertently alerted to this danger by old Hamlet’s ghost.

Though Hamlet’s immediate reaction to news of his father’s murder is one of anger and a desire for action, by the end of the scene his desire for revenge is already blunted, for a number of reasons. Unlike Laertes and Fortinbras, Hamlet receives the information of his father’s murder from a secret and unreliable source, which means that not only is he unsure of the truth, he is forced to act out his revenge in secret. Throughout the play, Hamlet frustrates the audience with his lack of action, especially as all around him his contemporaries are visibly taking their own revenge. Fortinbras is in a similar situation to Hamlet, as his father had been murdered by old Hamlet and his land taken. The land itself is worthless and Fortinbras stands to lose more than he can gain; yet, as for Hamlet, it is a matter of honour. Both are exacting revenge for something that nobody else cares for or remembers: a dead king for whom nobody grieves, and a patch of worthless land. Part of Hamlet’s dilemma is the moral question of whether his desire for revenge is worth disrupting and endangering the lives of all those around him: ‘whether ’tis nobler in the mind to suffer the slings and arrows of outrageous fortune, or to take arms against a sea of troubles and by opposing end them’. However, unlike Hamlet, Fortinbras does not pause to contemplate the idea of revenge; he acts on it, ‘sharked up a list of lawless resolutes’ and marched on Denmark. The difference in their characters is obvious; Fortinbras’ character matches his name, ‘strong in arm’. He is a man of action, not of words; he has a strong presence and a commanding attitude which demands obedience: ‘Go captain, from me greet the Danish king’; ‘I will do’t, my lord’. 
Fortinbras’ situation is infinitely less complex than Hamlet’s own; the boundaries between good and evil, personal and public, right and wrong are, for him, clearly defined. He is able to act openly, uninfluenced by friends and family. Hamlet, on the other hand, is surrounded by people who have obligations both to himself and to the king, and is therefore unsure of whom to trust.

Hamlet’s dilemma is founded on this: any action he takes carries with it risks and possible consequences which could destroy the foundation of his very existence, so he hesitates and does nothing, all the while hating himself for his inaction; ‘makes us rather bear those ills we have than fly to others that we know not of’. The problem for Hamlet is that the murder is too close to home, so he is unable to define the boundaries between personal and public. He cannot publicly confront Claudius without proof because he risks losing his claim to the throne, alienating his friends and family and being exiled from Denmark, as it would be seen as an attempt by the prince to regain the throne rather than a son avenging his father’s murder. On top of this, Hamlet hopes to avoid jeopardising his relationship with his mother, but at the same time he wants revenge on her for her betrayal.

In order to fully understand Hamlet’s psyche, and therefore the reasoning behind his actions, it is important to understand how religion affected all aspects of life in Elizabethan times. It was believed that a person who was able to confess his sins before death would be absolved and therefore go to heaven, but if a person were unable to do this, their soul would be condemned to purgatory until they were able to confess and repent. Old Hamlet’s soul is in purgatory, and Hamlet wants Claudius to suffer the same fate: ‘a villain kills my father, and for that, I, his sole son, do this same villain send to heaven. 
Why, this is hire and salary, not revenge.’ For this reason Hamlet has to wait for the opportune moment to kill Claudius: ‘when he is drunk asleep, or in his rage, at game, a-swearing, or about some act that has no relish of salvation in it’. However, the other problem which religion creates is that of Hamlet’s own afterlife. If murder for revenge is wrong, then by killing Claudius, Hamlet condemns his own soul along with that of Claudius. On the other hand, Hamlet is honour-bound to exact revenge for his father’s murder, and the consequences of not doing so could be even more drastic. Even suicide offers no solution, as ‘the dread of something after death, the undiscovered country from whose bourn no traveller returns, puzzles the will, and makes us rather bear those ills we have than fly to others we know not of’.

Hamlet’s indecisiveness is not just a result of his uncertainty about the consequences his actions will have. He is in emotional turmoil at this point in the play, feeling betrayed and rejected by those on whom he had relied so far in his life. His anger and frustration at his mother’s behaviour is amplified by her lack of grief, and his desire for revenge at the start of the play is mainly fuelled by his own grief and a sense of injustice. His anger towards Claudius diminishes as he is distracted from revenge by more immediate concerns, such as his relationships with Ophelia and with his mother.

Part of Hamlet’s feeling of isolation stems from what he sees as betrayal by his friends, Rosencrantz and Guildenstern, and his lover Ophelia. Hamlet’s critical relationship with Claudius forces all three to take sides and decide to whom they owe the strongest allegiance. Ophelia’s father Polonius, Claudius’ right-hand man, instructs her to shun Hamlet and, as his dependant, she is forced to obey him. Women were viewed as property in Shakespearean times, and without a male protector her future prospects were slim. 
Also, the emphasis placed on family duty and loyalty was far greater, so to disobey her father would be tantamount to treason. Rosencrantz and Guildenstern were given a direct order from their king, so to disobey would actually have been treason. Added to this was their ignorance of Hamlet’s situation, owing to both Hamlet’s and Claudius’ deceit, which meant that they were unsympathetic to Hamlet’s mental instability and obsession with old Hamlet’s death.

Hamlet refuses to recognise the impossible situation his friends were placed in, and resents them for abandoning him when he needs them most, even though it is his feud with Claudius that has forced them into it. Feeling betrayed, he has no compunction in using them to further his own gains. All three are, ultimately, fatalities of Hamlet’s vendetta against Claudius, as Hamlet brings about the deaths of Rosencrantz and Guildenstern and drives Ophelia to madness and suicide. Ophelia especially is very much a victim: in obeying her father she loses Hamlet, and when Hamlet kills Polonius she loses her father as well. With Laertes away, she has no one left to protect her and is very much alone.

In many ways, Hamlet himself is a victim of revenge, as he is used as a tool by his father to instigate revenge against old Hamlet’s killer. By placing this obligation on Hamlet, on top of all his emotional instability, old Hamlet effectively pushes his son over the edge and renders him incapable of decisiveness. It is unsurprising that Hamlet is unable to take revenge, or indeed make any significant decisions, as he is under considerable emotional and mental strain. Laertes is in a similar situation, as his friend Hamlet has murdered his father and driven his sister to madness. 
His vulnerable state of mind makes it easy for Claudius to use him as a tool against Hamlet, so the two friends become instruments in the power struggle between the two brothers, a struggle which crosses the divide between life and death.

Laertes’ situation resembles Hamlet’s in other ways. They are joined by their love for Ophelia, Hamlet as a lover and Laertes as a brother. When Laertes returns to find his father murdered, he faces the same dilemma that Hamlet originally had, in that, as far as he knew, the King of Denmark had murdered his father. Unlike Hamlet, who promptly chose to employ deceit in order to combat Claudius’ deceit, when Laertes discovers this he immediately confronts Claudius. By doing this he achieves his revenge far sooner than Hamlet, but consequently becomes a tool for Claudius against Hamlet. These two revengers differ in their approach to revenge, but ultimately they come to the same end. They both fall victim to the corruption that surrounds the court of Denmark, with Claudius at its centre. Claudius’ use of deceit throughout the play hides the truth under a veil of dishonesty. Claudius uses other people as tools to achieve his aims, so that if they fail he escapes the brunt: he uses Polonius, he uses the King of Norway against Fortinbras, and finally he uses Laertes against Hamlet himself. His corrupting influence means that nobody in Denmark knows the truth, and Hamlet’s only attempt to break this veil of deceit causes the death of Polonius instead of Claudius. In Act 3 Scene 4, Shakespeare uses the curtain concealing Polonius as a metaphor for the corruption surrounding Denmark, making it impossible for Hamlet to take revenge as he is unaware of the truth. Though Hamlet tries to cut through the curtain, he fails and ends up killing the wrong man. 
This shows him that it is no good trying to confront the problem directly; he must remove the cloak of deceit and reveal Claudius for what he truly is before he can take his revenge. Though Hamlet tries to get around this problem by being deceitful himself, and Laertes tries to confront the problem face on, both end up being used as weapons in a fight that kills them both.

The ending of the play is very satisfying despite, or perhaps because of, the deaths of nearly all the characters. For a neat ending it was necessary that all the characters achieve their revenge, and as there were so many intertwining strands of revenge, it was inevitable that a large proportion of the characters would be killed. The play ends with a new beginning, as the corruption at the heart of Denmark dies with Claudius and Hamlet. Hamlet succeeded in taking revenge on Claudius and revealing the truth about his character, and Laertes succeeded in killing Hamlet but died in the process. All this clears the way for Fortinbras, who, we see, is far more suited to leadership than the indecisive Hamlet.

Fortinbras was more successful in his revenge than Hamlet and Laertes for a number of reasons. He is not held back by the dilemma that freezes Hamlet: having to choose between betraying his father’s trust or losing the throne and alienating everyone he loves. Hamlet is held back by his proximity to Claudius and the situation, whereas Fortinbras is free to act uninfluenced by the people around him. Another factor in Fortinbras’ favour is that, unlike both Hamlet and Laertes, Fortinbras made the decision to take revenge alone, so it was entirely his responsibility. Revenge has to be nurtured in Hamlet and Laertes, and both are used as tools in the ongoing feud between the two brothers. Fortinbras is a man of action, and doesn’t waste time pondering the philosophy behind the revenge mentality, as Hamlet does. 
And unlike Laertes, he plans and organises his revenge; he doesn’t rush straight into confrontation unprepared. In fact, he represents the best qualities of both of them, so it is fitting that it is he who emerges with not only his life but the throne of Denmark to go with it.

Explore the presentation of revenge in ‘Hamlet’. (2018, Feb 08). Retrieved from https://graduateway.com/explore-the-presentation-of-revenge-in-hamlet/
The Great War began in 1914 and ended in 1918. The war was triggered by one small incident but brought about by increasing tension among the nations of Europe, which included nationalism, imperialism, militarism and the alliance system. A series of crises only heightened the tension between nations, and when Archduke Ferdinand was assassinated, war was declared soon after. Pride and strong feeling in one's nation was widespread at the beginning of the 20th century. Nationalists believed that the needs of other nations were not as important as their own. Nationalists were aggressive and stubbornly refused to forgive other nations when they felt their own nation had been offended; this added tension among the nations of Europe. Another long-term cause was imperialism. Imperialism was when a nation wanted to take over colonies and build an empire. The European nations had been taking over colonies since the 15th century, and from 1870 there was fierce competition over which countries took over colonies around the world. Britain, Germany and France almost went to war over disagreements in North Africa. Italy felt animosity towards France because the French prevented Italy from setting up colonies. Many other countries went to war, or came close to war, because of the decreasing number of colonies available. Imperialism caused tension among the nations because as each nation gained a colony, that colony became dedicated to helping its motherland in any event of war. Militarism was the belief that one's country should be well armed and that military means could be used to achieve nationalist aims. Militarism was an influential force throughout Europe. The Great Powers competed to build up their armed forces and supplies of weapons, and this added to the escalating tension, fear and anger between the countries. 
There were many alliance agreements between the Great Powers before WW1 broke out, and the alliance system became a long-term cause of war because it divided Europe into two armed camps: Germany, Austria and Italy became allies, and France, Russia and Britain joined forces.
Edward took part in the events upon which Shakespeare, five hundred years later, founded his famous tragedy of "Macbeth." There lived in Scotland during his reign an ambitious nobleman named Macbeth, who invited Duncan, the King of Scotland, to his castle and murdered him. He tried to make it appear that the murder had been committed by Duncan's attendants, and he caused the king's son and heir, Prince Malcolm, to flee from the land. He then made himself King of Scotland. Malcolm hastened to England and appealed to King Edward for help. When the king was told the number of soldiers Malcolm would probably need, he gave orders for double that number to march into Scotland. Malcolm with this support attacked Macbeth, and after several well-fought battles drove the usurper from Scotland and took possession of the throne. Edward did a great deal during his reign to aid the cause of Christianity. He rebuilt the ancient Westminster Abbey in London and erected churches and monasteries in different parts of England. Edward was long supposed to have made many just laws, and years after his death the English people, when suffering from bad government, would exclaim, "Oh, for the good laws and customs of Edward the Confessor!" What he really did was to have the old laws faithfully carried out. He died in 1066 and was buried in Westminster Abbey.
Janša’s duties were fairly simple, but important nonetheless. He kept the bees in the imperial gardens, but his main task was to travel around the land presenting his bee observations, and he had plenty of them. He came to change the size and shape of the hive, meaning hives could be stacked upon each other like blocks. He also used his experience as a painter and decorated the fronts of hives, which were previously bland and uninspiring. He wrote two books in German during his work at the court, entitled ‘Discussion on Beekeeping’ and ‘A Full Guide to Beekeeping’. His bee lectures were famous throughout the lands, and he popularised the method of smoking bees out of their hives for the honey. He died in Vienna in 1773, of typhus. His work was influential enough to be considered the only resource for those in the Austrian empire who studied apiculture following his death, and he is considered one of the fathers of European apiculture. The 19th century saw further developments in apiculture, and although the 20th century would see human activity push the bee towards eradication, the art is still practiced today. Slovenia is the only country that officially protects its national bee, no less, and Janša would probably be very happy with this fact.
The British came to India as traders in the seventeenth century, but from trading they soon became the imperial rulers of the sub-continent. India became a jewel in the British crown. Are you interested to know how British rule benefited India? Read on to learn about it in detail.

Beginning of the Raj

The Battle of Plassey in 1757 marked the start of the British Raj, which endured for almost 200 years, till 1947. Many Indians are of the opinion that the Raj brought little good to the country. They opine that the British milked India dry for their own benefit and left only by compulsion. Perhaps this is true to an extent, but the Raj ushered in benefits that have survived the passage of time.

Benefits of the Raj

When the British came to India, the Hindus were an oppressed lot. In addition, they were compelled to pay the jizya tax. The fact is that the Hindus were second-class citizens in their own land, and the Muslim rulers who came from Central Asia ruled with an iron hand. Muslim rule was marked by mass conversions to Islam and the destruction of Hindu temples. The British Raj placed the Hindus on an equal footing with the Muslims, and they were no longer second-class citizens. The Hindus could now profess their religion and worship their gods. The jizya tax was abolished. The Hindus also developed far more than the Muslims in the fields of education and commerce.

Reforms by the British

Hindu religion had fallen prey to obscure and abhorrent practices. Thus practices like child marriage, sati (burning of the wife on the husband's pyre), untouchability and thuggery (dacoity) had spread. The Raj took on these evils headlong and wiped them from the map of Hindustan. The Raj also ushered in a period when law and order was restored and the people could live and work freely. 
Many Englishmen came to India with zeal, a sense of mission and a love of the land. They helped in the task of preserving the ancient Indian heritage; the famous temples of Khajuraho were conserved, and infrastructure for development, such as roads and railways, was created. These benefited the people in a big way. The Raj gave India the framework of a civil administration and a unified army. India slowly moved forward under the Raj. There is no doubt that the English were the rulers and lived a royal life. But from the tea gardens of Assam to the cotton mills of Ahmedabad, the mark of development can be traced to the Raj. Lastly, the Raj brought in the concept of India as one nation. This had never happened before, as even the empires of Ashoka and Aurangzeb never covered the entire subcontinent. With the advent of the twenty-first century, a look back at the period of British rule will show it as a romantic period.
History of Selly Oak Hospital
Date: 22 January 2020

Selly Oak Hospital's birth as an institution was not an event of national pride or cutting-edge medical innovation. Its remarkable story is one of gradual evolution from local poor law workhouse and infirmary to modern hospital treating complex cases of injury and disease. Through the history of one local institution we get a fascinating insight into aspects of the social and medical history of our country. The first buildings on the site of Selly Oak Hospital were those of the King's Norton Union Workhouse, featured in the image below. It was a place for the care of the poor and was one of many workhouses constructed throughout the country following the introduction of the Poor Law Amendment Act of 1834. This act replaced the earlier system of poor relief, dating from 1601. The rising costs of poor relief had become a national problem and the new act sought to address this. Throughout the country, parishes were formed into larger unions with the power to raise money from rates on property to pay for the poor. King's Norton Poor Law Union was formed from the parishes of Harborne, Edgbaston, King's Norton, Northfield and Beoley. Each of these five parishes had individual workhouses. These were replaced in 1872 by the new, much larger one at Selly Oak. It was built to accommodate 200 pauper inmates. Central supervision by the Poor Law Commissioners in London ensured that all workhouses were administered similarly by a set of rules and regulations. How humanely these were interpreted depended entirely upon each local board of poor law guardians, who were local worthies. They were elected annually and gave their services voluntarily. The aim of the Poor Law Amendment Act was to deny any form of relief except through admission to the workhouse.
Generally it was assumed that the able-bodied poor could find work, and if they didn't then they should be forced to work within the confines of the workhouse. It was thought that if conditions in the workhouse were really bad then the poor would be deterred from seeking relief. However, by the late 19th century it became apparent that the majority of workhouse inmates were the most vulnerable people in society: the young, the old, the chronically sick and the mentally ill. Various Acts of Parliament ruled that separate provision should be made for children and the mentally ill. The sick poor were to be accommodated in separate infirmary blocks. These were often built adjacent to the workhouses and were the forerunners of many great hospitals of today. At Selly Oak, a separate infirmary was built in 1897 at a cost of £52,000. It was the subject of much heated debate as the original estimate had been £18,000. It was a light, clean and practical building, and generally a source of much pride. The guardians took great care and gathered information from other infirmaries to ensure that the final design, put out to a competition and won by Mr. Daniel Arkell, was up-to-date and modern. The infirmary accommodated about 250 patients in eight Nightingale wards and smaller side wards and rooms. There was also provision for maternity cases. Between the two main pavilions were a central administration block, kitchens, a laundry, a water tower, doctors' rooms and a telephone exchange. There was no operating theatre or mortuary and, in the workhouse tradition, the internal walls were not plastered, painted brick being considered good enough for the sick paupers. The workhouse and infirmary were separated by a high dividing wall and were run as separate establishments. The population of the King's Norton Union increased dramatically, and in 1907 extensions to the infirmary and the workhouse made provision for the growing numbers of poor people.
This doubled the size of the main hospital building. The Woodlands Nurses' Home was built at the same time to accommodate forty nurses. A small operating room was added to the infirmary. The map below shows the development of the Selly Oak site from 1884 to 1916. There was a resident nursing staff of eight trained nurses and nineteen probationers who were supervised by the Matron. She also had responsibility for the resident female servants. The Steward managed the infirmary, governed the male servants, kept the accounts, ordered provisions, and recorded births and deaths. There was a Senior Medical Officer who attended three times a week between 11:00 and 13:00. A Resident Medical Officer attended at both the infirmary and the workhouse. In 1911, King's Norton – no longer a rural area – left Worcestershire and became part of the City of Birmingham. The Birmingham Union was formed from the unions of King's Norton, Aston and Birmingham. The King's Norton Workhouse Infirmary was renamed Selly Oak Hospital. Over the next two decades, facilities improved with the addition of an operating theatre, plastering of internal walls, and the introduction of physiotherapy, pathological and X-ray services. By 1929 there were seven full-time members of the medical staff, and the medical residence was built at this time. Attitudes to the poor changed gradually and measures to relieve poverty, such as old age pensions and National Insurance, were introduced before the First World War. By 1930, the administrative structure of the Poor Law was finally dismantled. Selly Oak Hospital and the workhouse, renamed Selly Oak House, came under the administration of Birmingham City Council. Selly Oak House was administered separately and used for the care of the elderly chronically sick. Selly Oak Hospital continued to grow, new operating theatres were added in 1931, and the biochemistry and pathology laboratories opened in 1934. 
Nurses had been trained at Selly Oak since 1897, but it wasn't until 1942 that the School of Nursing was opened. In 1948, when the National Health Service was introduced, Selly Oak Hospital and Selly Oak House were amalgamated. Since then many changes to the site have resulted in the institution we see today. The image below shows an aerial view of Selly Oak Hospital in 1952. By Valerie Richards Valerie Richards (formerly Arthur) worked as a rheumatology nurse specialist at Selly Oak Hospital before retiring some years ago. She is the author of a forthcoming book on the history of Selly Oak Hospital.
Framed Ships’ Passports and Sea Letters About Mediterranean Passports The Mediterranean Passport, commonly called a ship’s passport, was created after the United States concluded a treaty with Algiers in 1795. During the early years of independence, America was one of several nations paying tribute to the Barbary states in exchange for the ability to sail and conduct business in the Mediterranean area without interference. This treaty provided American-owned vessels with a “Passport” that would be recognized by Algeria and later by other Barbary states through similar treaties. These Passports were to be issued only to vessels that were completely owned by citizens of the United States, and were intended to serve as additional evidence of official nationality. In June 1796, a Federal law was passed which required the Secretary of State to prepare a form for the Passport and submit it to the President for approval. The result was a document modeled after a similar British form, called a Mediterranean Pass, which England had employed for the same purpose. The American version was a printed document, on vellum, that measured approximately 15 x 11 inches. Centered in the upper half were two engravings, one below the other (some early examples had a single large engraving of a lighthouse with a ship at anchor across the entire top quarter of the document). Signatures of the President of the United States, Secretary of State, and Customs Collector appear in the lower right-hand corner. The United States seal is in the lower left-hand corner…After they were printed, the Passports were cut along the waved line and the top portion sent to the U.S. Consuls along the Barbary coast. The Consuls subsequently provided copies to the corsairs, whose commanders were instructed to let all vessels proceed whose passes fit the scalloped tops. Every American vessel sailing in this area was to have a Mediterranean Passport as part of its papers.
The penalty for sailing without one was $200.00. The master requested the document from the collector and paid a fee of ten dollars. A bond was also required to ensure that the Passport was used in accordance with the conditions under which it was obtained, and was canceled when the document was forfeited. New Passports were not required for each succeeding foreign voyage, but a Passport could not be transferred to another vessel, and it was to be returned to the port of original issue if the ship was wrecked or sold. Mediterranean Passports were received by the various customs districts pre-signed by the President and Secretary of State. The Collector could then insert the vessel’s name and tonnage, master’s name, number of crew members, and the number of guns mounted on the vessel, into the appropriate blanks and sign the document…Unused and outdated Passports were supposed to be returned to the Treasury Department, after first being canceled by cutting holes through the seals. About Sea Letters Unlike the Mediterranean Passport, the Sea Letter does not appear to have had any formal establishment, but rather acquired validity through years of maritime use. The term “Sea Letter” has been used to describe any document issued by a government or monarch to one of its merchant fleet, which established proof of nationality and guaranteed protection for the vessel and her owners . . . The 1822 edition of The Merchants and Shipmaster’s Assistant described the Sea Letter as a document which “specifies the nature of the cargo and the place of destination,” and says that it was only required for vessels bound to the Southern Hemisphere. It further “indicated that . . . this paper is not so necessary as the passport, because that, in most particulars, supplies its place . . .” In 1859 the document was defined as part of the ship’s papers when bound on a foreign voyage, . . .
is written in four languages, the French, Spanish, English, and Dutch, and is only necessary for vessels bound round Cape Horn and the Cape of Good Hope. Like the Mediterranean Passport, the Sea Letter was a remarkably standardized document, which changed little during the time that it was used. Usually printed on heavy grade paper, approximately 16 x 20 inches in size, the first Sea Letters carried only three languages instead of four. However, they soon became known as “Four Language Sea Letters.” The statement within the document conveys in part that the vessel described is owned entirely by American citizens, and requests that all “Prudent Lords, Kings, Republics, Princes, Dukes, Earls, Barons, Lords, Burgomasters, Schepens, Consullors…” etc., treat the vessel and her crew with fairness and respect. The signatures of the President of the United States, the Secretary of State, and the customs collector appear, usually in the middle portion of the document. The United States seal is present, while customs and consular stamps or seals are frequently in evidence. Sea Letters are mentioned in the formative maritime legislation forged by the new Federal government. Like passports, they provided additional evidence of ownership and nationality, but the criteria by which a shipmaster utilized one document over the other is not completely clear. It was explained at the time that both documents were “rendered necessary or expedient by reason of treaties with foreign powers,” a statement which suggests that certain nations required a particular document because of existing agreements with the United States. …The Sea Letter was valid for only a single voyage, and a bond does not seem to have been required. Neither was it to be returned to the collector when the voyage was completed.
Indications are that, as the years progressed, Sea Letters were being used more often by whaling ships than by merchant vessels, perhaps because American whalers fished in areas where this document was preferred as proof of national origin. By providing a statement of American property, signed by the President of the United States, the Mediterranean Passport and the Sea Letter were intended to confirm our status as a neutral nation, when international conflict put added dangers on America’s commerce at sea. By mid-century, however, much of what had previously threatened our shipping was being neutralized by the expanding power of the United States. In 1831, Congress eliminated the fee required for obtaining a Mediterranean Passport. It was argued at the time that the revenue arising from that source, and the protection which it provided, were no longer objects of any importance. As our merchant fleet became more secure, fewer ship owners and shipmasters considered these documents as necessary to guarantee their rights and safety in foreign lands. Both pieces were considered important parts of a ship’s papers in the 1800s. They were kept aboard ship during the voyage and deposited, along with the Registry Certificate, with the appropriate U.S. consular authority anytime the vessel was in a foreign port. The Mediterranean Passport had disappeared from use by 1860, while the Sea Letter was still in evidence several years later. Today both pieces are considered to be important documents in any maritime collection. However, they are also highly valued by autograph collectors and investors, which keeps many fine pieces in private hands. Of note, the passports were actually issued by the Collectors of Customs in the various ports. With the slowness of transportation, it was impossible for the President and Secretary of State to sign documents in a timely manner for specific ships. 
Therefore, blank passports were signed by the President and the Secretary of State and sealed with the Great Seal of the United States. Then groups of these signed documents were transported to the ports, where they would be issued as needed by the Collectors of Customs. As a check, they were usually notarized at the time of issuance, and the document carried both the date of issuance and the signature and seal of the notary. This process occasionally resulted in the posthumous issuance of a passport after the signing President had died. [Excerpt from American Maritime Documents 1776-1860 by Douglas L. Stein.]
<urn:uuid:3f7674f8-a041-4628-b63d-195110b4ebc6>
CC-MAIN-2020-05
https://www.law.lsu.edu/maritimeart/letters/
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251681412.74/warc/CC-MAIN-20200125191854-20200125221854-00338.warc.gz
en
0.983975
1,655
3.296875
3
[ -0.10738368332386017, 0.12332072108983994, -0.003936308901757002, -0.2975127100944519, -0.47397011518478394, -0.11802075058221817, 0.21852481365203857, 0.3491944372653961, 0.08630922436714172, 0.0304343793541193, 0.2738068699836731, -0.13900861144065857, -0.16641145944595337, 0.28206455707...
9
Framed Ships’ Passports and Sea Letters About Mediterranean Passports The Mediterranean Passport, commonly called a ship’s passport, was created after the United States concluded a treaty with Algiers in 1795. During the early years of independence, America was one of several nations paying tribute to the Barbary states in exchange for the ability to sail and conduct business in the Mediterranean area without interference. This treaty provided American-owned vessels with a “Passport” that would be recognized by Algeria and later by other Barbary states through similar treaties. These Passports were to be issued only to vessels that were completely owned by citizens of the United States, and were intended to serve as additional evidence of official nationality. In June 1796, a Federal law was passed which required the Secretary of State to prepare a form for the Passport and submit it to the President for approval. The result was a document modeled after a similar British form, called a Mediterranean Pass, which England had employed for the same purpose. The American version was a printed document, on vellum, that measured approximately 15 inches X 11 inches. Centered in the upper half were two engravings, one below the other (some early examples had a single large engraving of a lighthouse with a ship at anchor across the entire top quarter of the document). Signatures of the President of the United States, Secretary of State, and Customs Collector appear in the lower right-hand corner. The United States seal is in the lower left-hand corner…After they were printed, the Passports were cut along the waved line and the top portion sent to the U.S. Consuls along the Barbary coast. The Consuls subsequently provided copies to the corsairs, whose commanders were instructed to let all vessels proceed, who had passes that fit the scalloped tops. Every American vessel sailing in this area was to have a Mediterranean Passport as part of its papers. 
The penalty for sailing without one was $200.00. The master requested the document from the collector and paid a fee of ten dollars. A bond was also required to insure that the Passport was used in accordance with the conditions under which it was obtained, and was canceled when the document was forfeited. New Passports were not required for each succeeding foreign voyage, but it could not be transferred to another vessel, and it was to be returned to the port of original issue if the ship was wrecked or sold. Mediterranean Passports were received by the various customs districts pre-signed by the President and Secretary of State. The Collector could then insert the vessel’s name and tonnage, master’s name, number of crew members, and the number of guns mounted on the vessel, into the appropriate clanks and sign the document…Unused and outdated Passports were supposed to be returned to the Treasury Department, after first being canceled by cutting holes through the seals. About Sea Letters Unlike the Mediterranean Passport, the Sea Letter does not appear to have had any formal establishment, but rather acquired validity through years of maritime use. The term “Sea Letter” has been used to describe any document issued by a government or monarch to one of its merchant fleet, which established proof of nationality and guaranteed protection for the vessel and her owners . . . The 1822 edition of The Merchants and Shipmaster’s Assistant described the Sea Letter as a document which “specifies the nature of the cargo and the place of destination,” and says that is was only required for vessels bound to the Southern Hemisphere. It further “indicated that . . . this paper is not so necessary as the passport, because that, in most particulars, supplies its place . . .” In 1859 the document was defined as part of the ship’s papers when bound on a foreign voyage, . . . 
is written in four languages, the French, Spanish, English, and Dutch, and is only necessary for vessels bound round Cape Horn and the Cape of Good Hope. Like the Mediterranean Passport, the Sea Letter was a remarkably standardized document, which changed little during the time that it was used. Usually printed on heavy grade paper, approximately 16 inches x 20 inches in size, the first Sea Letters carried only three languages instead of four. However, they soon became known as “Four Language Sea Letters.” The statement within the document conveys in part that the vessel described is owned entirely by American citizens, and requests that all “Prudent Lords, Kings, Republics, Princes, Dukes, Earls, Barons, Lords, Burgomasters, Schepens, Counsellors…” etc., treat the vessel and her crew with fairness and respect. The signatures of the President of the United States, the Secretary of State, and the customs collector appear, usually in the middle portion of the document. The United States seal is present, while customs and consular stamps or seals are frequently in evidence. Sea Letters are mentioned in the formative maritime legislation forged by the new Federal government. Like passports, they provided additional evidence of ownership and nationality, but the criteria by which a shipmaster utilized one document over the other are not completely clear. It was explained at the time that both documents were “rendered necessary or expedient by reason of treaties with foreign powers,” a statement which suggests that certain nations required a particular document because of existing agreements with the United States. …The Sea Letter was valid for only a single voyage, and a bond does not seem to have been required. Neither was it to be returned to the collector when the voyage was completed. 
Indications are that, as the years progressed, Sea Letters were being used more often by whaling ships than by merchant vessels, perhaps because American whalers fished in areas where this document was preferred as proof of national origin. By providing a statement of American property, signed by the President of the United States, the Mediterranean Passport and the Sea Letter were intended to confirm our status as a neutral nation when international conflict added dangers to America’s commerce at sea. By mid-century, however, much of what had previously threatened our shipping was being neutralized by the expanding power of the United States. In 1831, Congress eliminated the fee required for obtaining a Mediterranean Passport. It was argued at the time that the revenue arising from that source, and the protection which the document provided, were no longer objects of any importance. As our merchant fleet became more secure, fewer ship owners and shipmasters considered these documents necessary to guarantee their rights and safety in foreign lands. Both pieces were considered important parts of a ship’s papers in the 1800s. They were kept aboard ship during the voyage and deposited, along with the Registry Certificate, with the appropriate U.S. consular authority whenever the vessel was in a foreign port. The Mediterranean Passport had disappeared from use by 1860, while the Sea Letter was still in evidence several years later. Today both pieces are considered important documents in any maritime collection. However, they are also highly valued by autograph collectors and investors, which keeps many fine pieces in private hands. Of note, the passports were actually issued by the Collectors of Customs in the various ports. With the slowness of transportation, it was impossible for the President and Secretary of State to sign documents in a timely manner for specific ships. 
Therefore, blank passports were signed by the President and the Secretary of State and sealed with the Great Seal of the United States. Then groups of these signed documents were transported to the ports, where they would be issued as needed by the Collectors of Customs. As a check, they were usually notarized at the time of issuance, and the document carried both the date of issuance and the signature and seal of the notary. This process occasionally resulted in the posthumous issuance of a passport after the signing President had died. [Excerpt from American Maritime Documents 1776-1860 by Douglas L. Stein.]
Harriet Tubman was born to enslaved parents in Dorchester County, Maryland, and originally named Araminta Harriet Ross. Her mother, Harriet “Old Rit” Green, was owned by Mary Pattison Brodess. Her father, Ben (Old Ben) Ross, was owned by Anthony Thompson, who eventually married Mary Brodess. Araminta, or “Minty,” was one of nine children. While the year of Araminta’s birth is unknown, it probably occurred between 1820 and 1825. Minty’s early life was full of hardship. Mary Brodess’ son Edward sold three of her sisters to distant plantations, severing the family. When a trader from Georgia approached Brodess about buying Old Rit’s youngest son, Moses, Old Rit successfully resisted the further fracturing of her family, setting a powerful example for her young daughter. Physical violence was a part of daily life for Tubman and her family. The violence she suffered early in life caused permanent physical injuries. Harriet later recounted a particular day when she was lashed five times before breakfast. She carried the scars for the rest of her life. The most severe injury occurred when Tubman was fifteen. Sent to a dry-goods store for supplies, she encountered a slave who had left the fields without permission. The man’s overseer demanded that Tubman help restrain the runaway. When Harriet refused, the overseer threw a two-pound weight that struck her in the head. Tubman endured seizures, severe headaches and narcoleptic episodes for the rest of her life. She also experienced intense dream states, which she classified as religious experiences. The line between freedom and slavery was hazy for Tubman and her family. Harriet Tubman’s father, Old Ben, was freed from slavery at the age of 45, as stipulated in the will of a previous owner. Nonetheless, Ben had few options but to continue working as a timber estimator and foreman for his former owners. 
Although similar manumission stipulations applied to Old Rit and her children, the individuals who owned the family chose not to free them. Despite his free status, Ben had little power to challenge their decision. By the time Harriet reached adulthood, around half of the African-American people on the eastern shore of Maryland were free. It was not unusual for a family to include both free and enslaved people, as did Tubman’s immediate family. In 1844, Harriet married a free black man named John Tubman. Little is known about John Tubman or his marriage to Harriet. Any children they might have had would have been considered enslaved, since the mother’s status dictated that of any offspring. Araminta changed her name to Harriet around the time of her marriage, possibly to honor her mother. Harriet Tubman escaped from slavery in 1849, fleeing to Philadelphia. Tubman decided to escape following a bout of illness and the death of her owner in 1849. Tubman feared that her family would be further severed, and feared for her own fate as a sickly slave of low economic value. She initially left Maryland with two of her brothers, Ben and Henry, on September 17, 1849. A notice published in the Cambridge Democrat offered a $300 reward for the return of Araminta (Minty), Harry and Ben. Once they had left, Tubman’s brothers had second thoughts and returned to the plantation. Harriet had no plans to remain in bondage. Seeing her brothers safely home, she soon set off alone for Pennsylvania. Tubman made use of the network known as the Underground Railroad to travel nearly 90 miles to Philadelphia. She crossed into the free state of Pennsylvania with a feeling of relief and awe, and recalled later: “When I found I had crossed that line, I looked at my hands to see if I was the same person. 
There was such a glory over everything; the sun came like gold through the trees, and over the fields, and I felt like I was in Heaven.” Rather than remaining in the safety of the North, Tubman made it her mission to rescue her family and others living in slavery. In December 1850, Tubman received a warning that her niece Kessiah was going to be sold, along with her two young children. Kessiah’s husband, a free black man named John Bowley, made the winning bid for his wife at an auction in Baltimore. Harriet then helped the entire family make the journey to Philadelphia. This was the first of many trips by Tubman, who earned the nickname “Moses” for her leadership. Over time, she was able to guide her parents, several siblings and about 60 others to freedom. One family member who declined to make the journey was Harriet’s husband, John, who preferred to stay in Maryland with his new wife. The dynamics of escaping slavery changed in 1850, with the passage of the Fugitive Slave Law. This law stated that escaped slaves could be captured in the North and returned to slavery, leading to the abduction of former slaves and free blacks living in Free States. Law enforcement officials in the North were compelled to aid in the capture of slaves, regardless of their personal principles. In response to the law, Tubman re-routed the Underground Railroad to Canada, which prohibited slavery categorically. In December 1851, Tubman guided a group of 11 fugitives northward. There is evidence to suggest that the party stopped at the home of abolitionist and former slave Frederick Douglass. In April 1858, Tubman was introduced to the abolitionist John Brown, who advocated the use of violence to disrupt and destroy the institution of slavery. Tubman shared Brown’s goals and at least tolerated his methods. Tubman claimed to have had a prophetic vision of Brown before they met. 
When Brown began recruiting supporters for an attack on slaveholders at Harper’s Ferry, he turned to “General Tubman” for help. After Brown’s subsequent execution, Tubman praised him as a martyr. Harriet Tubman remained active during the Civil War. Working for the Union Army as a cook and nurse, Tubman quickly became an armed scout and spy. The first woman to lead an armed expedition in the war, she guided the Combahee River Raid, which liberated more than 700 slaves in South Carolina. In early 1859, abolitionist Senator William H. Seward sold Tubman a small piece of land on the outskirts of Auburn, New York. The land in Auburn became a haven for Tubman’s family and friends. Tubman spent the years following the war on this property, tending to her family and others who had taken up residence there. In 1869, she married a Civil War veteran named Nelson Davis. In 1874, Harriet and Nelson adopted a baby girl named Gertie. Despite Harriet’s fame and reputation, she was never financially secure. Tubman’s friends and supporters were able to raise some funds to support her. One admirer, Sarah H. Bradford, wrote a biography entitled Scenes in the Life of Harriet Tubman, with the proceeds going to Tubman and her family. Harriet continued to give freely in spite of her economic woes. In 1903, she donated a parcel of her land to the African Methodist Episcopal Church in Auburn. The Harriet Tubman Home for the Aged opened on this site in 1908. As Tubman aged, the head injuries sustained early in her life became more painful and disruptive. She underwent brain surgery at Boston’s Massachusetts General Hospital to alleviate the pains and “buzzing” she experienced regularly. Tubman was eventually admitted into the rest home named in her honor. Surrounded by friends and family members, Harriet Tubman died of pneumonia in 1913. Harriet Tubman, widely known and well-respected while she was alive, became an American icon in the years after she died. 
A survey at the end of the 20th century named her as one of the most famous civilians in American history before the Civil War, behind only Betsy Ross and Paul Revere. She continues to inspire generations of Americans struggling for civil rights with her bravery and bold action. When she died, Tubman was buried with military honors at Fort Hill Cemetery in Auburn. The city commemorated her life with a plaque on the courthouse. Tubman was celebrated in many other ways throughout the nation in the 20th century. Dozens of schools were named in her honor, and both the Harriet Tubman Home in Auburn and the Harriet Tubman Museum in Cambridge serve as monuments to her life. Harriet Tubman. (2014). The Biography Channel website.
Andrew Jackson and the Trail of Tears The Long, Bitter Trail: Andrew Jackson and the Indians was written by Anthony F.C. Wallace. The main argument of his book is that Andrew Jackson had a direct effect on the mistreatment and removal of the Native Americans from their homelands to Indian Territory. It was a trail of blood and a trail of death, but ultimately it became known as the “Trail of Tears.” Throughout his two terms as President, Jackson used his power unjustly. As a man from the frontier state of Tennessee and a leader in the Indian wars, Jackson loathed the Native Americans. True to form, Jackson found a way to misuse his power to eliminate the Native Americans. In May 1830, President Andrew Jackson signed into law the Indian Removal Act. This act required all tribes east of the Mississippi River to leave their lands and travel to reservations in the Oklahoma Territory on the Great Plains. This was done because of the pressure of white settlers who wanted to take over the lands on which the Indians had lived. White settlers were already emigrating to America, the East Coast was burdened with new arrivals and becoming vastly populated, and President Andrew Jackson and the government had to find a way to move people to the West to make room. In 1830, a new Georgia state law placed the Cherokees under the jurisdiction of state rather than federal law. This meant that the Indians now had little, if any, protection against the white settlers who desired their land. However, when the Cherokees brought their case to the Supreme Court, they were told that they could not sue on the basis that they were not a foreign nation; the Court instead described the Cherokees as a “domestic dependent nation.” In 1832, though, in a second case, the Supreme Court ruled that the Cherokees were entitled to federal protection against the state. However, Jackson essentially refused to enforce the decision. 
By this, Jackson implied that he had more power than anyone else and could enforce the law himself. This is yet another way in which Jackson abused his presidential power in order to produce a favorable result that complied with his own beliefs. The Indian Removal Act forced all Indian tribes to move west of the Mississippi River. The Choctaw were the first tribe to leave from the southeast. Three years later the Chickasaw joined them. The Creeks were forced off their land in 1836. In the spring of 1838, the Cherokee became the last of the great southeastern nations to leave their eastern lands. In 1838 and 1839, the United States Army removed the Cherokee people by force, rounding them up in dragnets and holding them in wooden stockades, except for a few hundred who hid in the mountains of North Carolina. The Cherokees could take only what they could easily carry. The items that a few did take were often ordered to be left behind along the way. People were driven off their land at bayonet or gunpoint. Many of the old and the children died on the road due to the pace, exposure and bad food. They traveled on foot, sometimes without shoes or moccasins, or by horse or covered wagon. Transportation was given only to those who could pay for it. Their clothing was thin and their bedding was light. There was not much medical attention because the journey along the trail took so long. What food supplies were given had been rejected by the whites; rotten beef and vegetables were the main provisions. The journey brought many deaths. Approximately four thousand of the thirteen thousand Cherokees died on their way due to exposure to the bitter cold, disease, and starvation. This trail became known as the “Trail of Tears.” The hardships of the Indian Nations were due to the signed Indian Removal Act that resulted in the Trail of Tears. Anthony F.C. 
Wallace believed that Jackson’s personal emotions toward the Indian Nations directly contributed to the pain and suffering that the Indians had to endure throughout the “Trail of Tears.” Wallace’s facts and points of view are credible because he is well-known.
America turned to domestic isolation and social conservatism because of the Red Scare. The Red Scare cut back free speech, as the hysteria caused many to want to eliminate the communists. Some states made it illegal to advocate overthrowing the government. From 1920 to 1921 about 800,000 Europeans, known as the New Immigrants, flooded into the US. Because of this, Congress passed the Emergency Quota Act of 1921, which limited immigration from each European country to 3 percent of that nationality already living in the US. Soon after, the Immigration Act of 1924 was passed, cutting the 3 percent to 2 percent. This act also ended all Japanese immigration. The US was anti-Europe, and in this case it decided to isolate itself from Europe. 2. Immigration became a major issue in the US. Europeans began flooding into the US, and many Americans had a problem with it. The KKK reemerged, this time claiming to fight for America. Congress passed numerous Immigration Acts, which nearly ruled out immigration. Prohibition was a long-fought cause that women supported. The Prohibition law, however, only led people to continue violating it. Prohibition led to the rise of gangs, and with gangs came murder and massacre. Blacks saw the happier side of things and began making music and entertaining others. 3. Many thought immigration would hurt the country. In the 1920s, after WWI, people were anti-Europe. Immigration was only a problem now because the immigrants were coming from Europe. Immigrants were believed not to be trustworthy. The KKK claimed that immigrants were responsible for crime. They also believed that immigrants brought with them ideals of communism. Immigrants were seen as a great threat to society. 4. America did not like immigrants to begin with. The pressure put on them to “Americanize” was not true to America. Europeans were part of a different culture. In a world with cultural pluralism, every culture would be accepting of one another. Everyone would understand what it is to be from a different culture. 
Everyone would accept the different laws and codes. 5. The Red Scare brought on the fear in Americans over communism. Anti-foreignism sprouted after Sacco and Vanzetti were captured and executed. It continued to thrive after foreigners began flocking to the US. The Ku Klux Klan was formed and they were anti-foreign, anti-Catholic, anti-black, anti-internationalist, and anti-revolutionists. They felt that immigrants would never be loyal and they were responsible for most crimes. Prohibition caused tension among most everyone. It was mostly supported by women which also brought about gender discrimination. 6. Henry Ford had perfected the assembly line and was producing an automobile every 10 seconds. Automobiles provided more freedom, luxury, and privacy. Advertising became popular and used ploys such as persuasion, seduction, and sex to appeal to the people. In turn they hoped this would sell merchandise. As hoped people bought cars. Two new, but dangerous, techniques were used. This included installment plans or credit to buy things. Both ways plunged many people into debt. 7. Movies became a pastime of the people. The first movies featured nudity which shocked the public. In turn they forced codes of censorship to be placed on these kinds of movies. A new group, labeled erotic and inappropriate, was called flappers. They danced to jazz and risque dances at that. This became known as a sexual revolution. Sigmund Freud said that sexual repression was responsible was society’s ills. African American development was of the same cultural movement. They produced music such a bee-bopping and made it possible for the flappers to be able to dance. 8. Before WWI, people were very idealistic in what they could achieve. After the war, people learned that not everything could be perfect. They began to get caught up in music, movies, and cars. There was also a new sense of nationalism. The KKK emerged again, but only to deal with foreign issues. 
The Red Scare had people wanting get rid of anyone who questioned the government. This was a time for people to care about themselves and less about the small things.
What was the first musical instrument? If you guessed the human voice, or hands used as percussion, you are perhaps right. But when did man actually start creating instruments specifically for the purpose of music? Believe it or not, 45,000 years ago! In caves along Germany's Danube River, archeologists have discovered flutes carved out of bird bones and mammoth tusks. With five finger-holes and a V-shaped mouth, these prehistoric flutes look very similar to their modern-day versions. In these very same caves, researchers have found cave art and the earliest known statue of a woman made with ivory. Who were these artists? Radiocarbon dating of bone samples found in the same layer as the musical instruments reveals that these settlements belong to the Aurignacian culture of the Upper Paleolithic period. The Stone Age, when man was a hunter-gatherer, is divided into three periods, the last of which is the Upper Paleolithic. It lasted from 45,000 to 10,000 years ago, after which we saw the rise of agriculture as man settled down into communities. There were many cultures during this Upper Paleolithic period, one of which was the Aurignacian. They lived in small pockets in present-day Germany, France, Austria and Spain. While this culture saw advances in tool-making, its characteristic feature was a sudden explosion in the arts. The spectacular cave paintings that we wrote about here are from this period.
The rise of the flute
Researchers speculate that flutes may have been used in hunting and ancient rituals, but most importantly, they would have brought people together around Stone Age campfires. The earliest mention of the flute is in a Chinese poem from the 9th century B.C. Flutes were also used by the Sumerians and Egyptians thousands of years ago. Following the fall of the Roman Empire, flutes almost disappeared from Europe until the Crusades brought Europeans into contact with the Arabs. It is believed that the flute was reintroduced into western Europe around the 12th century, when the first recorded usage of the word 'flute' appeared in France. Flutes were one of the most popular instruments of the Italian musical scene throughout the 16th century, and it is said King Henry VIII had a large collection of them. The flutes of today owe their design to Theobald Boehm, a goldsmith and flute-maker who changed the spacing between finger-holes to generate precise notes. If you are familiar with a wind instrument or are in your school band, you realize how difficult it is to play the flute. The fact that it was invented 45,000 years ago is mind-blowing! Courtesy Science Daily, Wikipedia
Teotihuacan was in touch with other Mesoamerican civilizations, and at the height of its influence between 100 and 650 AD it was the largest city in the Americas and one of the largest in the world. It is unclear who the builders of the city were, and what relation they had to the peoples that followed; it is possible they were related to the Nahua or Totonac peoples. It is also unclear why the city was abandoned. There are several theories, including foreign invasion, a civil war, an ecological catastrophe, or some combination of all three. The Aztecs, who reached the height of their power about a thousand years later, held Teotihuacan in reverence. The site of Teotihuacan is located about forty kilometers from the site of the Aztec capital, and the Aztecs claimed to be descendants of the Teotihuacans. That may or may not be true, but the Teotihuacans had a huge influence on the later Aztec culture. The name Teotihuacan comes from the Aztec language and means ‘the birthplace of the gods’; the Aztecs believed it was the location of the creation of the universe. But a recent paper outlines how the influence of this ancient culture on the Aztecs was not limited to their cultural beliefs: it also affected the urban design of their capital city, and that original design was unparalleled. Most ancient cities throughout Mesoamerica followed the same planning principles and included the same kinds of buildings. Each city usually had a well-planned central area, which included temples, a royal palace, a ballcourt, and a plaza, surrounded by a much more chaotic (in terms of planning) residential area. Teotihuacan most likely had no royal palace, no ballcourt, and no such central area. It was much larger than the cities before it, its residential areas were much better planned than its predecessors', and it had an innovation unique in world history: the apartment compound.
Buildings with one entrance that contained many households had been rare before the Industrial Revolution, and those that did exist were for the poor. Teotihuacan's were spacious and comfortable. “Teotihuacan stood alone as the only city using a new and very different set of planning principles, and its apartment compounds represent a unique form of urban residence not just in Mesoamerica but in world urban history,” said Michael E. Smith. All of these features were unique in Central America before and after, until the Aztecs drew inspiration for their capital Tenochtitlan from Teotihuacan, using many of the same features.
Africa is home to 59 million orphans. Nelson is one of them. What is an orphan — or how exactly do we define orphan? As we follow Nelson’s journey, we will see multiple definitions of this oft-misunderstood term.
A Turn for the Worse
Meet Nelson Mandela. Not the iconic South African leader and Nobel Peace Prize Laureate, but a budding student of business at Kenyatta University, one of the leading public universities in Kenya. Nelson is the third of five children in a family that was blessed to have both parents. Both mother and father were employed, and the family lived in a suburb of Nairobi, the capital city. Nelson grew up in a loving family — until life took a sharp turn for the worse. Nelson tells us, “My father was attacked by thugs. He was shot and killed on his way from work.” The loss left Nelson and his family devastated. At this point in his life, he and his siblings were what we categorize as single orphans. A single orphan is a child living with the loss of one parent, with one surviving parent. More specifically, Nelson was what we call a paternal orphan; if a single orphan loses his mother, he is referred to as a maternal orphan. Although in the United States we often think an orphan is a child both of whose parents have died, UNICEF defines an orphan as “a child under 18 years of age who has lost one or both parents to any cause of death.”
Life at a Standstill
Nelson recalls the early days after his father died: “We grieved our father, but my mom was shattered and she could not come to terms with it. She slid into depression. After one month in deep depression, my mother suffered a severe stroke.” His mom passed away, leaving her five young children in despair. “The loss of both parents in that space of time was a big shock to us. I felt like my life had come to a standstill.” When we think of the term orphan, we most often think of children who have lost both parents.
After their mother’s death, Nelson and his siblings moved from single orphanhood to the most common definition of orphan: total orphans, sometimes also called double orphans. Nelson’s oldest sibling faced a difficult challenge: he was now a head-of-household orphan. At only 11 years old, Nelson’s oldest brother was in charge of caring for his four younger siblings, ages 9, 7, 5 and 2.
Moving Into a New Chapter
Nelson and his siblings had to adapt quickly to their new circumstances. Their grandparents took them in, which involved a move from the city to rural, western Kenya, 400 kilometers away. Their strength waning with age, the grandparents were not prepared to handle five grandchildren on their own. Nelson and his siblings are examples of something fairly typical in African culture: if both parents die and other family members are able, the family will take in the orphaned children as their own rather than have them enter an orphanage. Nelson’s grandparents were indeed able to take them in, but due to their frail condition they were unable to provide parental-level care. This introduces another type of orphan: the virtual or social orphan. Unlike Nelson, virtual or social orphans may still have living parents; however, the parents are unable to take care of their children for one reason or another. For example, they may have deserted the family or are in jail. No matter the label, an orphan is a child deprived of parental care and protection. But no orphan need be outside of God’s special attention. He is the ultimate caretaker of orphans. Throughout Scripture we see His promises. In John 14:18, Jesus says, “I will not leave you as orphans; I will come to you.” In Psalm 68, He promises that He is “a father to the fatherless.” God did not forget Nelson and his siblings.
Fortunately for Nelson, he moved to an area where a local church partners with Compassion to run a church-based child development center. The situation at home was dire, and sponsorship at the local Compassion-assisted child development center was an answer to prayer. “I joined the Mahaya Child Development Center and the social worker there became very close to me. I developed a connection with him. I would meet him every Saturday and, after the program, we would talk about life.” Nelson drew strength from a loving God, his courageous-but-aging grandparents, and a generous worker at his child development center who invested in him. After his secondary-school education, Nelson was admitted to the university and received financial support to continue his education. “When I joined university, I met Mr. Kimando, one of my lecturers at the university. He openly professed his faith in Jesus. I admired him.” Nelson approached the professor and formed a rapport with him. He is now mentored by Mr. Kimando, and also participates in a male mentorship module dubbed Boyz to Men. It is administered by Transform Kenya, an organization that seeks to equip young men with godly principles. “My perspective about growing up into a man has reformed, especially since I did not grow up with my father. I now have a better understanding of my role as a man and a future father.” God did not abandon Nelson the orphan. And through the care of ministry and university mentors, this young man is on pace to become a world changer. Nelson’s story and photos compiled by Silas Irungu, Field Communications Specialist. This article was originally published Oct 29, 2012.
On the 3rd December 2010, Dr William Albert Bruce Cooper, Surgeon Lieutenant (Retired) Royal Navy Volunteer Reserve, died aged 96 years. He was the last surviving member of a heroic group who were part of a top-secret WWII plan called Operation Tracer. The operation remained a secret long after the war ended. Operation Tracer originated during the Second World War as a plan for a number of military personnel to be sealed in man-made chambers on Gibraltar, on Malta and in Aden in the event of the capture of those locations by enemy forces. The occupants would be provisioned for at least one year, and their task would have been to transmit by radio the movements of enemy shipping passing through the Gibraltar Strait and leaving Algeciras, passing the island of Malta, and entering and leaving the Red Sea. The plan was so secret that only rumour persisted after the war, and it was not until the discovery of a chamber beneath Lord Airey’s Battery on Gibraltar in 1997, and the finding of papers from Naval Intelligence at the Public Record Office, Kew (England), that Operation Tracer could be confirmed. Details that have emerged so far indicate that Operation Tracer was conceived at the end of the summer of 1941, at a time when Allied expectations of victory were at their lowest. On the 26th June 1940, during the Second World War, France surrendered to Germany. Hitler expected Britain, faced with the prospect of imminent invasion, to sue for peace but, as we know, Churchill had other ideas. On the 16th July 1940 Hitler issued an order to his High Command prefaced with the words, 'I have decided to prepare a landing operation against Britain and, if necessary, carry it out'. The plan was given the name ‘Operation Sealion’. Hitler made four demands that had to be met before the invasion could take place.
The British Navy had to be sufficiently engaged in the North Sea and the Mediterranean that it could not intervene, the British Air Force had to be destroyed, British coastal defences had to be obliterated, and British submarine action against the landing forces had to be prevented by the laying of mines at both ends of the Straits of Dover. Despite valiant attempts by the Luftwaffe and the Kriegsmarine, none of those conditions was ever met, and as early as September 1940 Hitler became convinced that a seaborne operation across the English Channel was doomed. On the 17th September 1940, he postponed the operation. Hitler then adopted a different policy, that of laying siege to Britain by strangling her supply lines. Important sources of food and raw materials originated in Britain's Empire in the Far East and made their way via the Suez Canal and the Mediterranean through the Gibraltar Strait to the Atlantic, and so to Britain. Three strategic points on this journey were the bottlenecks of the southern entrance to the Red Sea west of Aden, the Gibraltar Strait and the island of Malta. The German High Command called this concept the ‘Peripheral Strategy’, and it depended for its success on Germany having control of Gibraltar. By the 12th July 1940, four days before the significant order to initiate Sealion (which may say something about Hitler's conviction that an invasion across the English Channel was possible), the initial planning document for the invasion of Gibraltar via Spain had been prepared. Hitler called this plan ‘Operation Felix’. Operation Felix depended on Franco allowing German troops to transit through Spain, which he repeatedly refused to allow. Then, towards the end of 1940, events started to conspire against Hitler. The invasion plan called for the use of huge numbers of troops and artillery to ensure its success, due to the natural fortifications and defences already in place on Gibraltar.
In October 1940 Italy attacked Greece and eventually had to be supported by its ally Germany. Hitler, meanwhile, planned an invasion of Russia to commence in the summer of 1941. The Allies were starting to exert pressure on German forces in Libya, East Africa and Greece. By December 1940 the available German forces were either already committed or earmarked for action elsewhere, and Operation Felix was put on hold. The German build-up of the 4.5 million troops required to invade Russia took from the spring of 1941 to the actual invasion, which started on the 22nd June. By December 1941 German troops were in sight of Moscow, their furthest advance. From that point until the end of the war Russia steadily pushed the Axis troops back to Berlin, and the Eastern Front became a constant drain on Hitler's resources; during the summer of 1941, however, the situation there must have looked very bleak to the Allies. Meanwhile, in North Africa, the Italians had invaded Egypt in September 1940, intending to capture the Suez Canal. By December 1940, however, they were being pushed back, and they went on to suffer a crushing defeat. Hitler sent Rommel and the Afrika Korps to North Africa in February 1941. Hitler was now fighting his war on two fronts and heavily committed on both. The Allies lost virtually all the ground they had taken from the Italian troops, and by April Axis troops were again occupying part of Egypt, poised to capture the canal zone. A stalemate then ensued until November 1941 whilst both sides reorganised and rearmed. Two offensives during this period, Operation Brevity and Operation Battleaxe, both designed to push the German troops out of Egypt, failed. In North Africa, then, the situation was not looking good for the Allies in the summer of 1941. On Gibraltar, the postponement of Operation Felix gave Britain the chance to further fortify the Rock and extend the existing tunnels.
Under cover of this work, an existing section of tunnel near Lord Airey's Battery was adapted to create a room within the rock just 44 feet long, 16 feet wide and 8 feet high. At one end was a water tank with a capacity of 10,000 gallons, and just off the main room was a small radio room. Steeply sloping tunnels led to an opening in the east cliff overlooking the Mediterranean, outside of which is a 22-foot-long ledge that cannot be seen from above due to an overhang of rock. A similar tunnel led to the west cliff, from which, through a slit in the man-made water catchment just 2 cm by 20 cm, there was a good view of the harbour at Algeciras. Six men were to have occupied this space for up to one year. The radio was to have been powered from a battery charged by a generator attached to a bicycle, part of which was found in 1997. The radio signal would have been sent via an 18-foot aerial that was to have been extended into the open air. Part of this mechanism was also found. In 1941, Dr Cooper was on shore leave in the UK when he was approached by George Murray Levick. Levick had been a member of the support crew on Captain Scott's ill-fated expedition to the Antarctic and had survived an eight-month ordeal, including an entire winter holed up in a snow cave, before reaching Cape Evans. He had been brought out of retirement by the Admiralty to serve as a consultant on survival in harsh conditions. All Cooper was told was that he would be volunteering for a very hazardous mission. Only after he had accepted, and recommended another physician, Arthur Milner, was he given further details. Four other men were similarly recruited. Initial training took place at Romney Marsh before the men were shipped out to Gibraltar. Trials in the 'stay behind tunnels' started in January 1942, supervised by Colonel Gambier-Parry, a radio expert from MI6.
In March a Lieutenant White arrived in Gibraltar, following a signal to Commander Gibraltar asking for full co-operation and reminding everyone that the ultimate success of Operation Tracer depended on 100% security. Surgeon-Lieutenants Cooper and Milner, both of the RNVR, then arrived that summer on the instructions of the First Sea Lord. The entire six-man team was in place by September 1942, ready to occupy the 'stay behind tunnels'. Dr Cooper became the physician for the dockyard and censor of soldiers' letters. Of course, nobody had informed the Transport Officer on Gibraltar of his dual role, and on one occasion Cooper was almost sent to sea. Secrecy was essential. Whilst on Gibraltar, Dr Cooper lived at the Rock Hotel. He would enter the front doors dressed in his Surgeon Lieutenant's uniform and exit through the back in a sergeant's uniform to go up the Rock and continue his training in the WWII tunnels. Fortunately, the team was never called upon to carry out its duties, and after one year it was stood down. In October 2008 the last remaining member of that party, Dr Bruce Cooper, then 93 years old, returned to Gibraltar. He was able to confirm that the chamber discovered in 1997 was the secret chamber he and his five companions would have occupied. The visit to Gibraltar was organised by documentary film producer Martin Nuza (Gold Productions Studios) with the assistance of Jim Crone (discovergibraltar.com) as part of Mr Nuza's latest film project, specifically about Operation Tracer. Nick has lived and worked in Andalucia for over 20 years. He and his partner, Julie Evans, have travelled extensively and dug deep into the history and culture, producing authoritative articles on all aspects of the region. Nick has written four books about Andalucia and writes articles for other websites and blogs.
© Visit-Andalucia 2019
A BRIEF HISTORY OF DERRY
By Tim Lambert

Derry is an ancient settlement. Its name is believed to be derived from the Gaelic word doire, meaning a grove of oak trees. From the 6th century AD onwards there was a monastery in Derry. (Tradition says St Columba founded it.) In time a settlement grew up by the monastery. However, for centuries, Derry was a rather small settlement. It did not become truly important until the 17th century. In 1566 Derry was captured by the English. However, they did not hold it for long. In 1567 a gunpowder store exploded and the English departed. The English captured Derry again in 1600. A new town was founded at Derry in 1603. King James gave a charter founding the new town, and a large number of merchants and tradesmen settled there. However, this first new town was destroyed by Cahir O'Doherty and his men in 1608. Nevertheless, a second new town was created soon afterwards. King James confiscated large amounts of land from the Irish. He then settled large numbers of Scots and English people in Ulster to try to create a loyal population in the area. As part of the Ulster Plantation, several new towns were created. King James invited the merchants of London to help him settle English Protestants in Northern Ireland. They agreed to build a new town at Derry. It was to have 200 houses and a population of about 1,000. The new town was called Londonderry. It was given a charter in 1613 and had a mayor and corporation. In 1617 it gained a grammar school. By 1630 the population of Londonderry was probably about 1,000. Streets in the new town were laid out in a rectangular pattern. In the years 1613-1618 walls were built around Londonderry, and St Columb's Cathedral was built in 1633. During the Irish rebellion of 1641 Derry was besieged, but the Irish were unable to capture it. In 1649, during the civil wars between the king and parliament, Derry was besieged by royalists for 20 weeks, but again the city did not fall.
The most famous siege of Londonderry took place in 1689. In 1688 the Catholic king James II was deposed. However, the Lord Deputy of Ireland, the Earl of Tyrconnell, stayed loyal to James, as did most of Ireland. Londonderry was one of the few places that remained loyal to the new king, William III. A Catholic army attempted to enter Londonderry. On 7 December 1688, 13 apprentice boys shut the Ferryquay Gate against them. As a result, Protestants fled to the town, swelling its population. In March 1689 James landed at Kinsale in an attempt to regain his throne. The siege of Londonderry began in April 1689. Since they were not strong enough to take the town by storm, the besiegers tried to starve the defenders into submission. Conditions inside the city grew worse and worse. There was a terrible shortage of food, and the defenders were reduced to eating horse meat and tallow. Disease also broke out. Nevertheless, the defenders held firm. In June three ships arrived from England, carrying supplies. However, for several weeks they were unable to reach the city, as James's men had erected a wooden boom across the Foyle. Eventually, on 28 July, one of the ships, the Mountjoy, broke the boom and the city was relieved. Three days later the besiegers realized the game was up and they left.

DERRY IN THE 18th CENTURY

In 1704 an Act of Parliament stated that only Anglicans could hold office in Ireland. Presbyterians were excluded. Partly as a result of this measure, many Presbyterians emigrated from Derry to North America in the early 18th century. Despite this, Derry grew larger in the 18th century and suburbs appeared outside the walls. Boom Hall was built in the 1770s at the point where the boom crossed the river during the siege. A number of new buildings were erected in Derry in the 18th century. The Irish Society House was built in 1764. Long Tower Church was built in 1784-86. Bishop's Gate was rebuilt in 1789. From about 1750 a linen industry grew up in Derry.
Until the end of the 18th century there was only a ferry across the River Foyle. In 1789-91 a wooden bridge was built. This greatly boosted trade and industry in Derry. Meanwhile, The Derry Journal began in 1772.

DERRY IN THE 19th CENTURY

In 1821, at the time of the first Irish census, Derry had a population of 9,313. It grew rapidly during the 19th century and had reached a population of 40,000 by its end. In the early 19th century large numbers of Catholics came to Derry from the countryside looking for work. The Courthouse was built in 1813. Derry workhouse opened in 1840, and the railway reached Derry in 1845. Magee College was founded in 1865 to train men for the Presbyterian ministry. St Columb's College was founded in 1879. In 1863 another bridge, this one of steel, was erected across the Foyle. Carlisle Bridge, as it was called, was demolished in 1933. St Augustine's Church was built in 1872. St Eugene's Cathedral was built in 1873. Its spire was added in 1902. Derry Guildhall opened in 1890. It burned in 1908 and was rebuilt. It was bombed in 1972, then refurbished. Meanwhile, in 1831 a man named William Scott began making shirts in Derry. From the 1850s the shirt-making trade in Derry boomed, and by the 1870s shirt making was the main industry in the town. There was also a shipbuilding industry in 19th century Derry. Meanwhile the port of Derry prospered. During the 19th century many emigrants from Ireland to North America left from Derry. Today they are remembered by an 'emigrants' sculpture.

DERRY IN THE 20th CENTURY

Brooke Park opened to the public in 1901. A War Memorial was erected in Derry in 1927, and Our Lady of Lourdes Church was built in 1976. Craigavon Bridge was built in 1933 to replace Carlisle Bridge. Foyle Bridge was built in 1984. In 1932 Amelia Earhart, the first woman to fly the Atlantic, landed at Ballyarnett. In 1997 the United Technologies Automotive factory closed.
This was a severe blow to the city. During World War II Derry was a major naval base. There were also air bases around the city. Large numbers of American and Canadian servicemen were stationed in the city. From the late 1940s a public housing estate was created at Creggan. In the 1960s the council demolished slums in Derry. On 5 October 1968 the Northern Ireland Civil Rights Movement attempted to hold a march in Derry. However, the Northern Ireland government banned the march, and when it went ahead it was broken up by the RUC in Duke Street. The Battle of the Bogside occurred in August 1969. Tension between Catholics and Protestants had been building for some time, and it eventually erupted into violence. On 12 August 1969 the annual Apprentice Boys march was routed past the Catholic Bogside area. As the Apprentice Boys marched past, there were clashes between the RUC and Catholic civilians. There followed three days of rioting, which ended when the British army was sent in. In 1972 came the tragic event known as 'Bloody Sunday'. On 30 January 1972 the Derry Civil Rights Association was holding a march through the town when the British 1st Parachute Regiment opened fire, killing 14 people. Today Derry is a flourishing city. Foyle Valley Railway Museum opened in 1990. The Tower Museum opened in 1992. Foyleside Shopping Centre opened in 1995. Rath Mor Centre also opened in 1995. Derry Visitor Centre and Convention Bureau opened in 1997.

DERRY IN THE 21st CENTURY

In 2001 Creggan Indigenous Enterprise Park opened. Millennium Forum opened in 2001. Creggan Country Park opened in 2003. Today the population of Derry is 83,000.
Here's a great story of someone using their noggin to figure out something about nature. For a long time people have been fascinated by comets passing by the earth. However, nobody knew how far away they were. Aristotle, for example, figured that comets were in the upper atmosphere, just a few miles high. Other ancient thinkers suspected this was wrong. How could you tell? Can you think of a way? Imagine that you don't have any modern instruments to use. No Hubble Telescope, no airplanes, no radar. What might you do? A Danish astronomer named Tycho Brahe figured this one out. In 1577 everyone was talking about a comet that was then in the sky. Since a lot of people had seen it, Brahe traveled around gathering reports. He figured that if the comet was only as high as the upper atmosphere, it should appear to be in different parts of the sky to different observers. That's because moving around under the comet would change your perspective. The stars are so far away that we can't change our perspective on them, so Brahe realized he could use them as a reference. He found that everyone said the comet was in the same part of the sky on the same day. This showed that comets are much farther away than the upper atmosphere. Brahe decided they were even farther than the moon! That was a bold claim in his day, but his observations proved him right. Now we know that comets are much, much farther away than anyone thought, and that they only pass by our planet while orbiting the sun.
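Brahe's reasoning is easy to put into rough numbers. The sketch below is only an illustration with made-up figures (it ignores Earth's curvature and assumes two observers at the ends of a baseline sighting an object directly above its midpoint), but it shows why the test works: a nearby object shifts wildly against the stars, a distant one barely at all.

```python
import math

def parallax_deg(baseline_km, distance_km):
    """Angular shift, in degrees, between the sightlines of two observers
    at the ends of a baseline, looking at an object above its midpoint."""
    return math.degrees(2 * math.atan((baseline_km / 2) / distance_km))

# If the comet were in the "upper atmosphere" (say 60 km up), observers
# 500 km apart would see it in wildly different parts of the sky.
print(parallax_deg(500, 60))        # well over 100 degrees

# At the Moon's distance (about 384,400 km) the shift collapses to a few
# arcminutes, right at the edge of careful naked-eye measurement.
print(parallax_deg(500, 384_400))   # under a tenth of a degree
```

Since Brahe's scattered observers all reported the comet in the same place against the stars, even the lunar-distance shift was absent, which is exactly why he concluded the comet lay beyond the Moon.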
<urn:uuid:9f7929cf-9b1b-447a-9faf-dc1e7b29b2f1>
CC-MAIN-2020-05
https://indianapublicmedia.org/amomentofscience/how-far.php
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250599789.45/warc/CC-MAIN-20200120195035-20200120224035-00217.warc.gz
en
0.988988
324
3.984375
4
[ 0.08152329921722412, -0.11775778979063034, -0.280794620513916, -0.3153409957885742, 0.11809340119361877, 0.026830589398741722, 0.6584518551826477, 0.18928495049476624, -0.08508464694023132, 0.14505472779273987, 0.2864665389060974, -0.27474910020828247, 0.11617371439933777, 0.10449354350566...
2
323
ENGLISH
1
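The reasoning in the comet story above comes down to a parallax estimate: the farther away an object is, the smaller its apparent shift against the background stars between two observation points. Here is a minimal sketch in Python; the baseline and distances are assumed, illustrative figures, not Brahe's actual data:

```python
import math

def parallax_shift_deg(baseline_km: float, distance_km: float) -> float:
    """Apparent angular shift (in degrees) of an object at distance_km,
    as seen by two observers separated by baseline_km."""
    return math.degrees(math.atan(baseline_km / distance_km))

# Two observers roughly 1,000 km apart (an assumed figure).
baseline = 1_000.0

# If the comet sat in the upper atmosphere, say ~100 km up,
# the shift would be enormous and impossible to miss:
atmospheric = parallax_shift_deg(baseline, 100.0)

# If it lay beyond the Moon, ~400,000 km away, the shift would be
# a small fraction of a degree -- below what Brahe could detect:
beyond_moon = parallax_shift_deg(baseline, 400_000.0)

print(f"atmospheric: {atmospheric:.1f} deg, beyond the Moon: {beyond_moon:.3f} deg")
```

Because observers across Europe reported the comet in the same part of the sky, the shift had to be tiny, which rules out the nearby (atmospheric) case.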
17 May Sandro Botticelli Died: 17 May 1510 In: Chiesa di San Salvatore, Borgo Ognissanti, Firenze, FI, Italia He was an Italian painter of the Early Renaissance. He belonged to the Florentine School under the patronage of Lorenzo de' Medici, a movement that Giorgio Vasari would characterize less than a hundred years later in his Vita of Botticelli as a "golden age". Botticelli's posthumous reputation suffered until the late 19th century; since then, his work has been seen to represent the linear grace of Early Renaissance painting. He and his workshop were especially known for their Madonna and Child paintings, many in the round tondo shape. Botticelli's best-known works are The Birth of Venus and Primavera, both in the Uffizi in Florence. He lived all his life in the same neighbourhood of Florence, with probably his only significant time elsewhere being the months he spent painting in Pisa in 1474 and the Sistine Chapel in Rome in 1481–82. He was an independent master throughout the 1470s, growing in mastery and reputation, and the 1480s were his most successful decade, when all his large mythological paintings were done, along with many of his best Madonnas. By the 1490s his style had become more personal and to some extent mannered, and he could be seen as moving in a direction opposite to that of Leonardo da Vinci (seven years his junior) and a new generation of painters creating the High Renaissance style, as Botticelli returned in some ways to the Gothic style. He has been described as "an outsider in the mainstream of Italian painting", who had a limited interest in many of the developments most associated with Quattrocento painting, such as the realistic depiction of human anatomy, perspective, and landscape, and the use of direct borrowings from classical art.
<urn:uuid:67a75347-4490-478d-83f4-9190749f2266>
CC-MAIN-2020-05
https://memoryou.it/persona/sandro-botticelli/
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250608062.57/warc/CC-MAIN-20200123011418-20200123040418-00472.warc.gz
en
0.983979
409
3.40625
3
[ -0.24080389738082886, 0.19934256374835968, 0.22349686920642853, 0.2386811524629593, -0.5248271226882935, 0.22187745571136475, 0.019139330834150314, 0.47509267926216125, -0.11942360550165176, 0.0398375429213047, -0.14871221780776978, -0.2275446206331253, 0.27202972769737244, 0.3895696103572...
1
419
ENGLISH
1
A cooking smoker, used to cook meats, works by allowing the warm air filled with smoke to rise to the meat, while the cooler air sinks closer to the fire. What type of heat transfer is responsible for smoking the meat? A student was conducting an experiment in which a heater was placed on one side of a large box and a thermometer on the other side. The student removed all the air from the box, turned on the heater, and recorded the change in temperature. What was the student's model designed to show? This appliance dries clothes primarily by converting — A teacher rubbed a match against a piece of sandpaper. The match started to burn. Which statement best describes the energy changes that occurred? During a warm summer day, a car became extremely hot. When a student went to open the car door, he burned his fingers. What two forms of energy were responsible for the student burning his fingers? What transfer of heat is responsible for cooking the chicken? Which of these actions, on their own, will cause ice to change to water? Remember: heat flows from hot to cold. When an egg is boiled in a pot of water on the stove, there is more than one kind of heat transfer. Heat is transferred from the stove to the pot directly since they are physically touching. Heat is also transferred from the pot to the egg, carried through the water. What two kinds of heat transfer are represented? Which of the following is an example of conduction? Four steel blocks were heated to different temperatures and stacked together as shown. In which direction will heat be transferred by conduction?
<urn:uuid:150768c3-c55e-4e88-832e-bdb6a2fd28d4>
CC-MAIN-2020-05
https://quizizz.com/admin/quiz/5b9ea3aa71177a0019c99fa5
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251684146.65/warc/CC-MAIN-20200126013015-20200126043015-00038.warc.gz
en
0.981766
328
3.765625
4
[ -0.0034126662649214268, 0.34299421310424805, 0.3200457692146301, 0.12004338204860687, 0.2633267343044281, -0.1819233000278473, 0.6639946699142456, -0.0800694152712822, -0.23738962411880493, -0.16980087757110596, -0.23517560958862305, -0.4521969258785248, -0.37992915511131287, 0.22327630221...
1
320
ENGLISH
1
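Several of the quiz items above hinge on conduction and on the rule that heat flows from hot to cold. For steady-state conduction through a slab, Fourier's law gives a flux proportional to the temperature difference. A minimal sketch in Python, with assumed, illustrative values for the material constant and temperatures:

```python
def conductive_flux(k: float, t_hot: float, t_cold: float, thickness: float) -> float:
    """Steady-state conductive heat flux (W/m^2) through a slab of the
    given thickness (m), from the hot face toward the cold face.
    Fourier's law: q = k * (T_hot - T_cold) / L."""
    return k * (t_hot - t_cold) / thickness

# Assumed figures: a 2 cm steel plate with k ~ 50 W/(m*K),
# one face at 150 C and the other at 30 C.
q = conductive_flux(k=50.0, t_hot=150.0, t_cold=30.0, thickness=0.02)
print(f"{q:.0f} W/m^2 flows from the hot face toward the cold face")
```

A positive flux means heat flows toward the cold face; swapping the two temperatures flips the sign, which is the "hot to cold" direction rule the stacked-blocks question tests.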
8. Why are sailors’ pants flared? Flared pants were first described in 1813 as part of a sailors’ uniform. Researchers say that their shape was significant. The trouser leg could be rolled up easily, allowing sailors to wash the deck. What’s more, when a boat approached the shore, sailors could get off and still keep their pants dry. Sailors were always at risk of falling in the water, and the shape of the pants allowed them to take them off quickly, without even having to take off their shoes. Later there was no more need for this uniform, and flares became a part of people’s everyday outfits, coming into and going out of fashion from time to time. 7. Why are London telephone boxes red? The first telephone boxes appeared in London in 1920. They were made of concrete, so they were a cream color, and only the door was red. In 1924, a contest to design a new kiosk was held, and Giles Gilbert Scott won. But his project was altered a little: the kiosks were made of iron, not steel, and the color was changed from grey to red so that people could spot the telephone boxes on the streets. Later this color became really useful due to London’s fog (including smog from industrial enterprises). During the Great Smog of London of 1952, the city practically shut down and events were cancelled, since people couldn’t see the stage in theatres or the screen in cinemas. But red telephone boxes were still clearly visible in these conditions. 6. Why are door handles made of brass in public places? Many people know that silver can disinfect water, but we probably haven’t thought about the reason for this feature. The thing is, the ions of some metals (silver, mercury, zinc, copper, lead, gold, and some others) produce something called the oligodynamic effect: they are toxic to mold, viruses, and other microorganisms.
This is the reason why door handles made of brass (an alloy of copper and zinc) are easy to clean and stay relatively free of germs, even in public places. 5. Why are sailors’ shirts striped? Initially, wearing striped clothes was unacceptable for sailors; only prisoners, sick people, and women of the night wore such clothing. In 1858, Napoleon III allowed the wearing of striped shirts in the French navy. It’s said that stripes help identify a person on deck and help to find a person if they fall in the water. By the way, the first swimsuits were also striped. 4. Why do soccer referees use red and yellow cards? In 1966, during a match between Argentina and England, because of a language barrier, Argentine soccer player Antonio Rattin didn’t understand (or didn’t want to understand) German referee Rudolf Kreitlein’s words. The player was sent off, but remained on the field for about 9 minutes before leaving. The English players couldn’t understand some warnings either, and spectators couldn’t figure out what was happening on the field. After this incident, Ken Aston, head of World Cup referees, created a clearer system of penalties, suggesting the use of red and yellow cards. 3. Where did the pockets on flight jackets’ sleeves come from? In 1955, the MA-1 jacket was released; it is a predecessor of modern flight jackets. Initially, these jackets were designed for pilots of heavy bombers. Designers supplied the jacket with a “service” pocket on the sleeve, which pilots really liked. It was useful because they could put their keys or a cigarette pack in there (that’s why the pocket was also called a “cigarette pocket”). 2. Why do trench coats have shoulder tabs? The trench coat appeared in 1901 as an alternative to soldiers’ heavy overcoats, so some of its details had a practical use. The storm flap on the chest was designed to protect a soldier’s shoulder from the rubbing of a rifle strap. Modern trench coats often have shoulder tabs.
Initially, their function was to keep the cartridge bag strap from sliding down and to prevent scuff marks. Since a rifle was always carried on the same shoulder, there was usually only one shoulder tab. 1. Why are there holes in men’s shoes? The brogue, a style of shoe that is perforated, was first used by Irish and Scottish cattlemen in the 17th century. They worked on marshland, so they made small holes in their shoes so that the shoes could dry faster. Later, those perforations became decorative, and brogue shoes became popular among noble people. By the way, these shoes were worn in the countryside and considered informal. Thanks to Edward VII, people started wearing brogues everywhere: the King liked to wear them while playing golf and spending time in the city. Why don’t men fasten their suit’s bottom button? Suits didn’t appear until the second half of the 19th century. At the end of the 19th century, men started wearing informal suits while horseback riding or spending time in the countryside, and it was more comfortable for them to have the lower button unfastened. It’s also said that this feature turned into a tradition thanks to British King Edward VII. He was a fashion icon and made unfastened lower buttons, tweed hats, black ties (instead of white ones), brogues, and other things really popular. Why do we call daytime TV series “soap operas”? When soap operas appeared, TVs were not yet in every house, but everyone had a radio. So there were audio serials about love, and housewives loved to listen to them while doing their household duties. There were commercial breaks, which advertised the products of Procter & Gamble, Colgate-Palmolive, and other soap manufacturers. Certain serials became tightly associated with cleaning products, and in the 1930s the American press invented the term “soap opera.” In 1940, these audio serials made up about 90% of all daytime broadcast programming.
<urn:uuid:1157e136-f442-4b2a-8a25-49948a1cf66f>
CC-MAIN-2020-05
http://www.campusrock.sg/celebritytalk/10-reasons-why-ordinary-things-look-the-way-they-do
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250620381.59/warc/CC-MAIN-20200124130719-20200124155719-00173.warc.gz
en
0.982319
1,325
3.34375
3
[ -0.18875297904014587, 0.07622401416301727, 0.5243277549743652, 0.13694624602794647, -0.1927836835384369, 0.0029528732411563396, 0.5493900179862976, 0.3616012930870056, -0.19750578701496124, -0.1890660673379898, -0.1206352561712265, 0.029421288520097733, -0.18207359313964844, 0.258858621120...
3
1,286
ENGLISH
1
Monticello, whose name translates from Italian as “little mountain”, is third U.S. President Thomas Jefferson’s plantation estate. The renowned Founding Father’s property near Charlottesville, Virginia, as well as its workers, holds a significant place in U.S. history to this day. The Thomas Jefferson Foundation, which has preserved and maintains the 5,000-acre property, notes that “to understand Jefferson, one must understand Monticello; it can be seen as his autobiographical statement.” The iconic property is a national landmark and has been the subject of considerable interest among American and international scholars. Whilst there is a plethora of documentation regarding Jefferson’s primary home, academics and archaeologists alike have pushed to delve deeper into the original use of the property and the activities which took place within its grounds. The initiative was not taken for nothing, however; a discovery was made which has left historians astounded, both in regard to the livelihoods of the people who lived there and in regard to what it meant for the history of the United States. Let’s take a look at what they found! A President’s Plantation As Thomas Jefferson’s primary residence, Monticello was his home before he moved to the White House in 1801. Today, the estate is preserved by the Thomas Jefferson Foundation. Open to the public, it is regarded as a historical landmark. Construction of the expansive estate commenced in 1768, and its renowned sprawling grounds are well documented and well known. Fun fact: an image of the plantation’s main house appears on the reverse of the U.S. nickel! Intense research and constant study of the property had not yielded a find quite as fruitful as the one recently uncovered by historians. A mystery which had baffled historians and political scholars for years seemed to be hiding in plain sight.
The Controversy Surrounding Monticello Born in 1743, Jefferson began building Monticello at 26 years old. Having inherited the land from his father, he initially intended to use it to cultivate wheat and tobacco. Charlottesville, named after British Queen Charlotte of Mecklenburg-Strelitz, is an area characterised by hot, humid summers and mild winters, which made it suitable for growing these crops. As is the case with many family-held plantations in America, Monticello is not alone in having its own dark past. Jefferson used free workers along with servants and enslaved labourers to construct the plantation house, and he also had hundreds of slaves working and living at Monticello. Whilst he consistently spoke out against the chains of slavery and worked to end the practice, he had a secret of his own. Many individuals to this day find his dark secret a difficult pill to swallow; however, the discovery made in 2017 by a group of archaeologists sheds light on a matter which was previously unresolved. A Complicated Legacy The principal author of the Declaration of Independence, Jefferson is regarded as one of the most prominent figures in American history. One of the U.S.’ visionary Founding Fathers, he, somewhat ironically, penned the famous line “All men are created equal.” Why ironic, you ask? Jefferson, despite the airs and graces he upheld, owned some 600 African-American slaves over the course of his adult life. As such, he left a legacy which reflected his duality: his political life and his personal life. The discovery made in 2017 during an excavation on the estate contributes to history’s understanding and opinion of Jefferson, causing many to re-evaluate his contribution. An Enigmatic Figure Among the 600 slaves is a key figure in the Jefferson mystery. Enter Sally Hemings. For the most part, she remains a puzzling figure; however, historians cannot discount her involvement in Jefferson’s life.
Naturally, her story piqued the interest of historians, and has continued to do so for over a century. Almost 200 years after her death, the discovery brought new insights into who she was and the events which took place during her time at the Monticello estate. Who Was Sally Hemings? Born in 1773 to a planter and slave trader named John Wayles, who was also the father of Martha Jefferson, Sally was the half-sister of Thomas Jefferson’s wife. As a child, Sally, her siblings, and her mother came into Martha’s possession as part of her inheritance from her father. An enslaved woman of mixed race, she held an important place in Jefferson’s life. She was owned by Jefferson, and the historical consensus is that Hemings was the mother of several of his children. Because Hemings was enslaved, the children she bore were legally considered slaves. The historical question of whether Jefferson fathered Hemings’ children is the subject of what is known as the Jefferson-Hemings controversy. Much investigation, and historic analysis of DNA, found a match between the Jefferson male line and a descendant of Hemings’ youngest son, Eston. It has since been alleged that Jefferson was the father of all of her children. We’re scratching our heads; which version of events is true? Before She Was a Subject of National Intrigue The youngest of six siblings, and 25 years younger than her half-sister Martha, Hemings and her siblings grew up at Monticello. Trained and put to work as servants, the children were “spared”, holding positions considered better than the conditions of others, such as the labourers in the fields. During her youth she was reportedly quite the plain Jane, but years later it was her destiny to become a figure who was scrutinised, eventually becoming a household name. She would be described as the former President’s “mistress”; however, she was not even that: she was his property. Her story is not one of glamour and pomp; rather, it is the life of a slave whose wellbeing was bound up with that of her owner.
A Trail of Clues Hemings unfortunately knew little of life outside of slavery; she was kept until Jefferson’s death in 1826. Whilst she lived her final years freely, the details of her time at Monticello are largely unknown. However, the keen eyes and scouring of documentation by scholars and historians unveiled a series of clues which has led to a better understanding of Hemings’ importance and historical significance. At this stage, we’re wondering what it was about Hemings that caught Jefferson’s eye; was it her lovely face, her physique, her pleasant character? One of the only existing documents describing her appearance, written by the blacksmith Isaac Granger Jefferson, gives us partial clues. According to his memoirs, Hemings was “mighty near white…very handsome, long straight hair down her back.” Whilst this description suggests why Hemings took Jefferson’s fancy, a few crucial elements are still missing from the Hemings-Jefferson puzzle. Painting a Picture As a slave, she would never have sat for a portrait; however, historians have constructed an image of her based on the descriptions documented. According to Jefferson’s grandson, Thomas Jefferson Randolph, she was “light coloured and decidedly good looking.” As for her role on the plantation, historians have noted her duties included working as a seamstress as well as a chambermaid. Perhaps surprisingly, the diligent and meticulous Jefferson, whilst keeping detailed ledgers of finances and births in his records of Monticello, left not a shred of documentation on Hemings. Whilst her face would be forever etched into Jefferson’s memory, and may show in the faces of her children, it will forever be a mystery what Sally actually looked like. The French Connection Jefferson was widowed at 39 years of age. Two years later, in 1784, he took his eldest daughter Martha to Paris.
He then sent for his youngest daughter, the 9-year-old Mary, who was accompanied by the 15-year-old Hemings. The future President served as the U.S. envoy to France; it was during these years that Sally’s life was to change forever. Hemings’ brother James also accompanied the Jeffersons to Europe as their personal chef. In France at the time, slavery was prohibited, and both Sally and her brother could have petitioned for freedom and lived in France as free people. If she returned to Virginia with Jefferson, it would be as a slave. She nevertheless agreed to return to the United States, a decision bound up with the secret she carried. What Happens in Paris, Doesn’t Stay in Paris Ah, Paris…the city of love. Not the city of teenage pregnancy. It was in Paris that historians agree Jefferson began a sexual relationship with the young Hemings. He was in his mid-40s; she was barely 16. It was at this time that, according to Hemings’ son Madison, Hemings became pregnant by Jefferson. The pair returned to the U.S. in 1789, and it seems the child she bore was not the only one that would call Jefferson “father.” Sally went on to have six children following her return from Europe, and reports from the time suggest that they were indeed all Jefferson’s, owing to their strong resemblance to their father. The relationship was kept extremely discreet; any sort of relations with a slave would have been scandalous, particularly against the name of a man running for the position of President. It was not until years later that the facts would come to light and the controversy would come to the fore. Come the spring of 1802, the “Jefferson-Hemings controversy” was born. One of Jefferson’s opponents, James T. Callender, published a report which smeared his reputation, after reports of several light-skinned slaves at the Monticello plantation.
Jefferson never denied the allegation publicly, nor did he divulge the father of Hemings’ children in his detailed “Farm Book.” However, his family attempted to hush the story in later years, denying Jefferson’s hand in the controversy. The children he allegedly fathered who survived into adulthood were freed once they were of age, which all but confirmed the rumours that he was indeed their biological father. His family, and some historians, nevertheless continued to vehemently deny the paternity allegations. It was not until roughly 150 years later, when historians began reanalysing the evidence, that a new piece of information would subvert the accepted truth. After 150 Years of Uncertainty… American historian Annette Gordon-Reed published a book in 1997 which analysed the Jefferson-Hemings controversy and the flaws in the “accepted truth.” Her scrutiny of the historiography of the saga found that 19th-century historians had merely accepted assumptions without further investigation. They dismissed the Hemings family’s testimony as “oral history”, deeming the Jefferson family’s testimony the only truth. The story propagated by the Jeffersons was that the father of Hemings’ children was Peter Carr. However, the 1998 DNA analysis showed there was no match between the Carr line and the Hemings descendant who was tested. The breakthrough, you ask? There was a match between the Jefferson male line and Eston Hemings’ descendant! Eston was Sally’s youngest son, and his DNA was the link which shed light on the astounding controversy, showing that the Carr story was a fabrication and strongly indicating that Thomas Jefferson did indeed have intimate relations with a slave. Not just once, either. Two decades later, archaeologists were to discover a long-hidden secret that provided an even bigger revelation of her life.
A Monumental Discovery For over 90 years, Monticello has been lovingly maintained and restored by the Thomas Jefferson Foundation, and it is frequently subject to the probing of historians, archaeologists and the general public alike. In 2017, that probing bore fruit. During a dig, archaeologists engaged in restoration work discovered a piece of the puzzle which had eluded them for quite some time: the concealed living quarters of Sally Hemings! Their excavation initially set out to uncover the original layout of the Monticello plantation’s South Wing, but they stumbled across something much more exciting. Despite several decades’ worth of work on the estate, the room had remained untouched and undiscovered. Something which had slipped through the fingers of researchers had finally made itself known, and the discovery set the nation aflame once again with headlines about the Jefferson-Hemings controversy. Hidden in Time It was extremely fortunate that the archaeological team came across the room at all, as the South Pavilion of the estate had been subject to a large number of changes, both during and after Jefferson’s lifetime. A museum had been constructed, and many people had passed through (and above) the hidden living quarters. You may wonder: how did a whole room simply disappear? In 1941, the installation of a modern bathroom concealed the room, completely covering any trace of an opening. In the 1960s, the bathroom underwent a renovation to accommodate the increasing number of guests at Monticello, yet even these changes did not reveal Hemings’ long-lost living quarters. The clue which alerted archaeologists, and motivated them to dig deeper (literally), came to them in a most surprising fashion. A Historic Hint It was while analysing the history of Monticello that historians came across a surviving document written by one of Thomas Jefferson’s grandsons. 
The source revealed that Sally Hemings’ room was in fact located in the South Wing of the former main house. Whilst historians were sceptical at first, and knew not to take the word as gospel, it raised questions which led them to reconsider the modern restroom addition, and subsequently, to dig. With each turn in this tale, archaeologists and historians uncovered new artefacts and missing clues which piece together the history of Monticello and of its inhabitants. Taking heed of the clues left by Jefferson’s grandson, the archaeologists proceeded to demolish the men’s bathroom, sieving the dirt for fragments. Their digging was not for nothing; they eventually discovered Sally Hemings’ 14-foot living quarters. Among their finds were original brick floors from the early 1800s, a brick hearth and fireplace, as well as a fixture suitable for holding a stove. The detail which really flabbergasted archaeologists, however, was the room’s proximity to Jefferson’s private bedroom: it was located directly adjacent. It seems there was more truth to the controversy than initially imagined, a dark secret which, after over 170 years, was finally to see the light. What It Means Historians and archaeologists alike see the proximity of Hemings’ room to Jefferson’s as a tell-tale sign that he was indeed the father of her children. The discovery of the room, together with the DNA results, provides near-certain proof of their intimate relationship. What this meant was that a man who supposedly upheld justice as President was just as flawed, and kept secrets just as readily, as any other man. 
Fraser Neiman, director of archaeology at Monticello, remarked that “this room is a real connection to the past.” He went on to say that as they dug deeper, “we are uncovering and discovering and we’re finding many, many artefacts.” The room did not only expose a secret; it also filled in gaps for many historians, answering questions which had been asked time and time again yet never answered. How Enslaved People Were Living The room also highlighted the difference between the Hemings family’s lives and those of the other slaves. Gardiner Hallock, director of restoration for Jefferson’s home, noted that “the discovery gives us a sense of how enslaved people were living. Some of Sally’s children may have been born in this room.” “It is important because it shows Sally as a human being - a mother, daughter and sister - and brings out the relationships in her life.” The paradox of liberty rests with Jefferson: a man who cried out for the freedom of all not only kept slaves, but kept one as a sexual slave. Not the most flattering secret to expose, particularly one regarding a U.S. President! It is speculated that Sally’s decision to return from Paris was owing to Jefferson’s promise that the children she bore would be freed once they came of age (21 years old). Perhaps not surprisingly, the Hemingses were the only family Jefferson freed among the slaves he kept (aside from three very lucky others). A Window into the Past? Uncovering Sally Hemings’ room also revealed that she enjoyed a standard of living above that of the other slaves who lived at Monticello. Regardless, she was still a slave, and was treated as such, though there were some indicators which shed light on her own living conditions. Historians note that Hemings’ room was dark and dingy, with no windows whatsoever to let in natural light; the conditions would have been uncomfortable. 
Some historians have suggested that building the bathroom above her quarters was calculated: an attempt to cover up Sally and her secret, which was considered a great insult, not only to Jefferson’s legacy, but to her own. Long after her death, however, her story was to be known to all. Revealing the Truth Following the discovery, historians and the committee of the Thomas Jefferson Foundation sought to restore Sally Hemings’ room for public display, with an expected opening during 2018. The space is designed to be exhibited with period furniture as well as artefacts excavated on the property; such pieces include fine ceramics and bone toothbrushes! Where previously the secrets of the estate were kept under lock and key, the $35-million Mountaintop Project at Monticello has made a bold effort to create more transparency: to tell the stories of both the free and enslaved people who inhabited the estate. In recent years, tours have been offered which focus solely on the Hemings family, and the reception has been overwhelmingly positive. Monticello spokeswoman Mia Magruder Damman notes that “for the first time at Monticello, we have a physical space dedicated to Sally Hemings and her life.” The significance of this discovery, and the ability to pay respect to her life, is extraordinary, as it “connects the entire African-American arc at Monticello.” The discovery did three things: it answered questions, it clarified rumours, and it gave insight into the daily activities of Monticello, as well as the human interactions there. The estate’s curators are now working around the clock to incorporate her life, rightfully, into Jefferson’s story, and to dismiss the notion that she was merely his mistress, his “concubine.” But we are not quite finished; there is more to the story, and these further facts hold even greater significance. 
Outside of the Mystery The Monticello estate is seemingly done avoiding Jefferson’s relationship with Hemings, with a new exhibit shedding light on the realities of slavery, as well as the truth behind Hemings. The discovery of Hemings’ room also allowed the public to see her real, human side. Historian Niya Bates remarked that the room would “portray her outside of the mystery” - no longer a topic of debate and speculation, but the living, breathing woman she was. The exhibit seeks to bring life to a woman who was constantly linked to the drama of Jefferson’s life, not to mention terrible rumours and scandalous gossip. “She was a mother, a sister, an ancestor for her descendants, and [the room’s presentation] will really just shape her as a person and give her a presence outside of the wonder of their relationship,” Bates stated. Before the room was discovered, Sally’s name was never mentioned, and tours skimmed briefly over Jefferson’s love life, merely noting that he was widowed relatively young. Remembering Sally’s Name With its newfound focus on the realities for the majority of the people who lived and worked there, not just the wealthy owners, Monticello’s change of course departs from its original public portrayal. Retired historian Lucia “Cinder” Stanton began working at the Monticello estate in 1968, and recalls that during her time there, Sally’s name was never mentioned; it was Monticello’s dirty little secret. Back in the 60s it would have been scandalous to unveil such a secret, even with rumours swirling around; as such, little was said about the Hemings family at all. It was not until the 250th anniversary of Jefferson’s birth in 1993 that the tours began to include stories of the slaves who worked and lived on the estate. Despite this giant leap forward in uncovering the truth of their livelihoods, it would take many more years for another fact to come to light. 
This fact would bring the descendants of the slaves to visit the property their ancestors once called home. Remembering Mulberry Row Enter Mulberry Row, the dynamic, industrial hub of Jefferson’s grand enterprise. This famed street was the centre of work and domestic life for many people; between 1770 and 1831, when Monticello was sold, the row comprised some 20 buildings. Monticello unveiled the restoration of Mulberry Row in 2015, displaying a series of reconstructed dwellings from the plantation street. The unveiling welcomed over 100 descendants of slave families, with an emotional tree-planting memorial ceremony taking place in their ancestors’ honour. This was only the beginning of commemorative efforts, as recognition of the significance of these people’s lives grew, along with their connection to their modern successors. A More Comprehensive Account Once the room was discovered, it opened the floodgates for a much-needed retelling of the Monticello estate’s history. Just as the tide had turned with Mulberry Row, recounting at last the story of the majority who lived and worked there, so it was for Sally Hemings. Curators of the estate decided to incorporate her room and life into a fuller account of Monticello and its people, not just its wealthy, well-known owners. The breakthrough acknowledged a dark yet important part of history which Americans needed to be aware of; it has, however, also had a complicated impact on some. Hemings’ distant relatives have mixed responses to their ancestor’s legacy, and particularly her relationship with American president Thomas Jefferson. A Descendant’s View Whilst the discovery of Hemings’ room gave closure to some, it also gave answers which were less than satisfactory. 
Gayle Jessup White, a distant niece of Sally Hemings, notes that “as an African-American descendant, I have mixed feelings - Thomas Jefferson was a slaveholder.” White, who works as Monticello’s Community Engagement Officer, is within her rights to feel uneasy, descended as she is from both a U.S. president and the enslaved people he owned. The social gulf was so great that it leads one to believe Jefferson conveniently held Hemings as property and did not trouble himself with the consequences of his desires. As an African-American woman, White appreciates the work of the Thomas Jefferson Foundation, as “for too long our history has been ignored.” Indeed, the discovery shed light on the real truth behind Monticello, and suggested that this sort of arrangement may have been more widespread than initially believed. “Some people still don’t want to admit that the Civil War was fought over slavery. We need to face history head-on and face the blemish of slavery and that’s what we’re doing at Monticello.” White is not alone; joined by her colleagues, she seeks to unveil more truths about the property and its history. With its dark past, Monticello was never embraced by the majority of the local African-American community, owing to Jefferson’s mixed messages regarding slavery: on the one hand he was a champion of justice who wished to abolish the institution, yet he kept some 600 slaves of his own over his lifetime. “I find that some people are receptive to the message and some are resistant,” she said. 
“But our message is that we want the under-served communities and communities of colour to become partners with us.” Whilst White acknowledges that there is much more work to be done in spreading the stories of their ancestors, “anecdotally we have seen an uptick in African-Americans visiting Monticello, so I know we’re making progress.” It remains to be seen whether the community will fully embrace Monticello; however, it cannot be doubted that the estate is undeniably part of the history of the African-American people. Despite the answers provided by the finding of Hemings’ room, a number of questions still require further enquiry. For all the historical analysis of Monticello, its records and documents, the history of the former plantation remains mysterious in its own ways. Whilst Jefferson kept detailed records and logged the lives of his hundreds of slaves, very few artefacts remain; a scarce few individual photos of people from some of the families are all that are left at present. The descendants, and the curators of Monticello’s museum, have since undertaken several ventures which have revealed more remarkable information about these slaves. At last, justice for those who had seemingly lost their place in the history books. The Hemings Family Tree It seems that Sally’s name is not the only one to have made a significant contribution to the United States. Her family tree includes a number of descendants who also carried Jefferson’s genes, an impressive, wide-reaching lineage which can be traced to the present day. In 2008 historian Annette Gordon-Reed published her book The Hemingses of Monticello: An American Family, which provides wonderful insight into the lives of slaves at the time. 
Gordon-Reed views the slaves through an analytical lens, recounting the history of generations of the Hemings family based on surviving legal records, diaries, farm logs, newspapers, archives, correspondence and even oral history. Life After Monticello Madison Hemings, one of Sally’s sons, said that his mother’s first child died soon after her return from Paris with Jefferson. The records which Jefferson kept confirm this story, and also show that Hemings had six children after her return to the U.S. Of the six, four survived into adulthood: Madison, Eston, Beverley and Harriet. With time, all except Madison chose to live within white society in the North. Madison’s memoir is critical in furthering his mother’s story, and that of his siblings. According to Madison, his siblings Beverley and Harriet both married affluent white Washingtonians and lived within DC’s white community. Madison and Eston, on the other hand, both married free women of colour in Virginia. Eston perhaps made the most surprising choice of all: changing his surname to Jefferson, acknowledging the U.S. President as his biological father. An Influential Lineage Hemings’ sons went on to enjoy success in adulthood, with multiple descendants taking up arms and fighting on the Union’s side in the bloody Civil War. Sally Hemings’ family tree expanded to include several grandchildren and great-grandchildren, who carried on the family legacy. It seems that politics was in the DNA of the offspring of Jefferson and Hemings: their great-grandson, Frederick Madison Roberts, became the first person of black ancestry elected to public office on the West Coast of the United States, serving for over 20 years in the California State Assembly. But this was not all for the Jefferson-Hemings descendants. 
In 1993, Monticello historians made an effort to glean more information from the descendants of those enslaved at the estate. Over 200 interviews were conducted, with the goal of collecting personal accounts of the African-American families who lived at Jefferson’s Virginia plantation, as told by their descendants. This oral history project was furthered in recent years, reaching a peak with a 2016 public summit titled “Memory, Mourning, Mobilization: Legacies of Slavery and Freedom in America.” The summit opened with a bold, chilling statement: “My ancestors were enslaved at Monticello. Generations of people bound to the earth, by blood and by law.” The gathering indicated just how many families had been impacted by the plantation, and in turn, by Thomas Jefferson. Finally, those who had been enslaved were given a voice, to tell their story, albeit hundreds of years later. Most important is the adjustment in the narrative told to the general public. Curious about the scandal and mystery surrounding the expansive grounds of Monticello, over half a million people visit the estate. You would hope all these visitors are told the most realistic version of events! The gradual shift now portrays a more holistic story, with the details of slaves’ lives, once glossed over, now brought to light. Tom Nash, one of the expert guides at Monticello, made a candid remark to his visitors: “this is a spectacular view from this mountaintop. But not for the enslaved people who worked these fields. This was a tough job and some of them - even young boys 10 to 16 years old - felt the whip.” Whilst these days Monticello is green pastures and sprawling lawns, it was not enjoyed that way hundreds of years ago. Conditions for the enslaved were harsh, even cruel; these people were considered sub-human, and whilst perhaps afforded better living conditions than many slaves in the U.S., they were still treated in a manner which was almost intolerable. 
‘No Such Thing as a Good Slave Owner’ Nash, constantly in the firing line of the public’s probing, shares some of the wide range of questions thrown at him. “Why did some slaves want to pass for white when they were freed?” one tourist asked, while another questioned: “Why did Jefferson own slaves and write that all men are created equal?” Nash’s answer reflects the realities of the time: “Working in the fields was not a happy time. There were long days on the plantation. Enslaved people worked from sun-up to sundown six days a week. There was no such thing as a good slave owner.” It doesn’t get much clearer than that; any slave was still just that: a slave. The one thing these people yearned for was dangled in front of them yet never remotely within their grasp: freedom. And the man who was supposedly able to grant it guarded his secret jealously. July 2017 saw Monticello’s 55th annual Independence Day celebration, and while the memory of its history may still linger in the minds of the descendants of the enslaved, the event was held not just to celebrate the estate, but the memory of those who had experienced, or been touched by, the events of the plantation. Seventy people from 30 countries streamed in from all corners of the globe to attend, and in doing so became naturalised citizens of the United States. The ceremony brings together those affected and unites them, creating a sense of belonging. The United States, and the world along with it, continues to recognise the complexities of American history, working harder than ever to acknowledge the contribution, and often the sacrifice, of those who were not free as you and I are today. Jefferson Wasn’t the Only One Whilst it is easy to point the finger at Jefferson as a leader who went back on his word of creating a freer, more equal America, he was not the only prominent U.S. figure with a history of slave ownership. 
As historians scour the documentation on the impressive line-up of presidents, it has been found that twelve leaders of the United States were slave owners at some point in their lives. Of those twelve, eight owned slaves whilst they held office! Despite the United States’ Declaration of Independence being founded on the principle that “all men are created equal,” there was a glaring hole in this statement. The participation of these Founding Fathers in slave ownership highlights a fatal flaw in America’s history, an astonishing contradiction forever ingrained in the nation’s past. Early Years of the Republic Although there were paradoxical and conflicting views on the institution of slavery, four of the first five presidents of the United States were slave owners! A nation supposedly built on equality and freedom rested on a huge lie, which tested the integrity of the nation and its leaders. The “father of the country,” George Washington, is among the four. Over 300 slaves lived on the first President’s Mount Vernon plantation, and this number grew. Despite his use of slave labour, Washington was singular in that he chose to free his personal slaves: when his will was read, it called for their freedom upon his wife Martha’s death. Martha, however, decided to free a large number of them earlier, releasing them only a year after he had passed away. Despite the prevalence of slave-holding presidents in the early years of the nation’s history, John Adams, the second President of the United States, proves an exception. He was the first resident of the White House, and whilst slave labourers did work to construct the iconic residence, Adams himself never owned slaves. He was considered to hold “moderate” views on slavery and chose to heed the message of the Declaration of Independence. 
Like his father before him, Adams’s son John Quincy, the sixth U.S. President, also did not hold slaves during his lifetime. In his final years, including those when he no longer held office, the younger Adams sought to oppose the institution of slavery and to spread the message of freedom for all, regardless of race. Presidents After Jefferson As we have found, slave labourers were not used only on Jefferson’s plantation; some worked at Mount Vernon as well as at the fabled White House. Though Jefferson once referred to slavery as an “assemblage of horrors,” he was not the last President to be a slave owner. James Madison, James Monroe and Andrew Jackson also participated in the institution, as did the eighth President, Martin Van Buren. These Presidents often claimed to oppose the expansion of slavery, yet could hardly be considered abolitionists; perhaps they enjoyed the benefits of slave ownership too much to give it up. Surprisingly, the last two Presidents to own slaves were both men associated with Abraham Lincoln; let’s have a final look at who these men were! Before Lincoln, a number of other prominent figures held slaves during their time in office, including John Tyler, James Polk and Zachary Taylor. The last president to have personally owned a slave was Ulysses S. Grant, who served two terms, between 1869 and 1877. The former general of the Union Army had kept a single black slave named William Jones, whom he freed, noting later that slavery was “a stain to the Union (that) people had once been bought and sold like cattle.” In the fashion of the time, it was considered perfectly acceptable to own slaves. However, a growing movement, given impetus by Abraham Lincoln, was sure to overturn this archaic institution. Lincoln’s Emancipation Proclamation paved the way for the passing of the 13th Amendment, which ended slavery. 
The amendment was controversial at the time; Andrew Johnson, Lincoln’s right-hand man and himself a slave owner, is even said to have lobbied against his own President! In 1863, the 16th U.S. President had freed almost 3 million enslaved people with his Emancipation Proclamation. Slavery in America was officially abolished two years later, with the adoption of the famous 13th Amendment.
He then sent for his youngest daughter, the 9-year-old Mary, who was accompanied by the 15-year-old Hemings. The future President was serving as the U.S. envoy to France, and it was during these years that Sally's life was to change forever. Hemings' brother James also accompanied the Jeffersons to Europe as their personal chef. Slavery was prohibited in France at the time, and both Sally and her brother could have petitioned for freedom and lived there as free people. If she returned to Virginia with Jefferson, it would be as a slave. Yet she agreed to return to the United States, for a reason bound up with the secret she carried.

What Happens in Paris, Doesn't Stay in Paris

Ah, Paris…the city of love. Not the city of teenage pregnancy. It was in Paris, historians agree, that Jefferson began a sexual relationship with the young Hemings. He was in his mid-40s; she was barely 16. It was at this time, according to Hemings' son Madison, that Hemings became pregnant by Jefferson. The pair returned to the U.S. in 1789, and it seems that the child she bore was not the only one that would call Jefferson "father." Sally went on to have six children following her return from Europe, and reports from the time suggest they were all Jefferson's, owing to their strong resemblance to their father. The relationship was kept extremely discreet; any sort of relations with a slave would have been scandalous, particularly against the name of a man running for President. It was not until over 20 years later that the facts came to light and the controversy came to the fore. In the spring of 1802, after 20 years, the "Jefferson-Hemings controversy" was born. One of Jefferson's opponents, James T. Callender, published a report smearing his reputation, after accounts of several light-skinned slaves at the Monticello plantation.
Jefferson never denied the allegation publicly, nor did he name the father of Hemings' children in his detailed "Farm Book." His family, however, attempted to hush the story in later years, denying Jefferson's hand in the affair. The children he allegedly fathered who survived into adulthood were freed once they came of age, which all but confirmed the rumours that he was indeed their biological father. His family, and many historians long after, nevertheless vehemently denied the paternity allegations. It was not until roughly 150 years later, when historians began reanalysing the evidence, that a new piece of information would subvert the accepted truth.

After 150 Years of Uncertainty…

The American historian Annette Gordon-Reed published a book in 1997 analysing the Jefferson-Hemings controversy and the flaws in the "accepted truth." Her scrutiny of the historiography found that 19th-century historians had merely accepted assumptions without further investigation. They dismissed the Hemings family's testimony as "oral history," deeming the Jefferson family's testimony the only truth. The story propagated by the Jeffersons was that the father of Hemings' children was Peter Carr. However, the 1998 DNA analysis showed no match between the Carr line and the Hemings descendant who was tested. The breakthrough? There was a match between the Jefferson male line and a descendant of Eston Hemings. Eston was Sally's youngest son, and his line's DNA was the link which shed light on the controversy, showing that the Carr story was a fabrication and strongly indicating that Thomas Jefferson did indeed have intimate relations with a slave. Not just once, either. Two decades later, archaeologists would uncover a long-hidden secret that provided an even bigger revelation about her life.
A Monumental Discovery

For over 90 years, Monticello has been lovingly maintained and restored by the Thomas Jefferson Foundation. It is frequently subject to the probing of historians, archaeologists and the general public alike. In 2017, that probing proved fruitful. During a dig undertaken as part of restoration efforts, archaeologists uncovered a piece of the puzzle that had eluded them for quite some time: the concealed living quarters of Sally Hemings. The excavation had initially set out to uncover the original layout of the plantation's South Wing, but the team stumbled across something much more exciting. Despite several decades' worth of work on the site, the room had remained untouched and undiscovered. Something that had slipped through the fingers of scholars had finally made itself known, a discovery which set the nation aflame once again with headlines about the Jefferson-Hemings controversy.

Hidden in Time

It was extremely fortunate that the archaeological team came across the room at all, as the South Wing of the estate had been subject to a large number of changes, both during and after Jefferson's lifetime. A museum had been constructed, and many people had passed through (and above) the hidden living quarters. How did a whole room simply disappear? In 1941, the installation of a modern bathroom concealed the room, completely covering any trace of an opening. In the 1960s, the bathroom underwent a renovation to accommodate the growing number of guests at Monticello. Even then, the changes did not reveal Hemings' long-lost living quarters. The clue which alerted archaeologists, and motivated them to dig deeper (literally), came to them in a most surprising fashion.

A Historic Hint

It was during analysis of the history of Monticello that historians came across a surviving document written by one of Thomas Jefferson's grandsons.
The source revealed that Sally Hemings' room was in fact located in the South Wing of the former main house. Whilst historians were sceptical at first, knowing not to take the word as gospel, it raised questions which led them to consider the modern restroom addition, and subsequently, to dig. With each turn in this tale, archaeologists and historians uncovered new artefacts and missing clues that piece together the history of Monticello and its inhabitants. During the excavation, they unearthed a number of relics, all pointing to one thing. Taking heed of the grandson's clues, the archaeologists demolished the men's bathroom, sieving the dirt for fragments and clues to the mystery. Their digging was not for nothing; they eventually discovered Sally Hemings' 14-foot living quarters. Among their finds were original brick floors from the early 1800s, a brick hearth and fireplace, and a fixture suitable for holding a stove. But what really flabbergasted the archaeologists was the room's proximity to Jefferson's private bedroom: it was located directly adjacent. It seems there was more truth to the controversy than initially imagined, a dark secret which, after over 170 years, was finally to see the light.

What It Means

Historians and archaeologists alike see the proximity of Hemings' room to Jefferson's as a tell-tale sign that he was indeed the father of her children. The discovery of the room, together with the DNA results, provides compelling evidence of their intimate relationship. It meant that a man who supposedly upheld justice as President was as capable of hiding secrets as any other man.
Fraser Neiman, director of archaeology at Monticello, remarked that "this room is a real connection to the past." He went on to say that as they dug deeper, "we are uncovering and discovering and we're finding many, many artefacts." The room not only uncovered the secret but also filled in gaps for many historians, answering questions which had been asked time and time again, yet never answered.

How Enslaved People Were Living

The room also highlighted the difference between the Hemings family's lives as slaves and those of the others. Gardiner Hallock, director of restoration for Jefferson's home, noted that "the discovery gives us a sense of how enslaved people were living. Some of Sally's children may have been born in this room." "It is important because it shows Sally as a human being: a mother, daughter and sister, and brings out the relationships in her life." The paradox of liberty rests with Jefferson: a man who cried out for the freedom of all not only kept slaves but kept one as a sexual slave. Not the most flattering secret to expose, particularly one regarding a U.S. President! It is speculated that Sally's decision to return from Paris was owing to Jefferson's promise that the children she bore would be freed once they came of age (21 years old). Perhaps not surprisingly, the Hemingses were the only family Jefferson freed among the slaves he kept (aside from three very lucky others).

A Window into the Past?

Uncovering Sally Hemings' room also revealed that she enjoyed a standard of living above that of the other slaves who lived at Monticello. Regardless, she was still a slave, and was treated as such, though there were some indicators which shed light on her own living conditions. Historians note that Hemings' room was dark and dingy, with no windows whatsoever to let in natural light; the conditions would have been uncomfortable.
Some historians have raised the possibility that building the bathroom above her quarters was calculated: an attempt to cover up Sally and her secret, which was considered a great insult not only to Jefferson's legacy but to her own. Long after her death, however, her story would be known to all.

Revealing the Truth

Following the discovery, historians and the committee of the Thomas Jefferson Foundation sought to restore Sally Hemings' room for public display, with an expected opening during 2018. The space is designed to exhibit period furniture as well as artefacts excavated on the property, including fine ceramics and bone toothbrushes. Where previously the secrets of the estate were kept under lock and key, the $35-million Mountaintop Project at Monticello has made a bold effort to create more transparency: to tell the stories of both the free and the enslaved people who inhabited the estate. In recent years, tours have been offered which focus solely on the Hemings family, to an overwhelmingly positive reception. Monticello spokeswoman Mia Magruder Damman notes that "for the first time at Monticello, we have a physical space dedicated to Sally Hemings and her life." The significance of the discovery, and the ability to pay respect to her life, is extraordinary, as it "connects the entire African-American arc at Monticello." The discovery did three things: it answered questions, it clarified rumours, and it gave insight into the daily activities and human interactions of Monticello. The estate's curators are now working around the clock to incorporate her life, rightfully, into Jefferson's story, and to dismiss the notion that she was merely his mistress, his "concubine." But we are not quite finished; there is more to the story, and these further facts hold even greater significance.
Outside of the Mystery

The Monticello estate has seemingly finished avoiding Jefferson's relationship with Hemings, with a new exhibit shedding light on the realities of slavery as well as the truth behind Hemings. The discovery of Hemings' room also allowed the public to see her real, human side. The historian Niya Bates remarked that the room would "portray her outside of the mystery": no longer a topic of debate and speculation, but the living, breathing woman she was. The exhibit seeks to bring life to a woman who was constantly linked to the drama of Jefferson's life, not to mention terrible rumours and scandalous gossip. "She was a mother, a sister, an ancestor for her descendants, and [the room's presentation] will really just shape her as a person and give her a presence outside of the wonder of their relationship," Bates stated. Before the room was discovered, Sally's name was never mentioned, and tours skimmed briefly over Jefferson's love life, merely noting that he was widowed as a relatively young man.

Remembering Sally's Name

With its newfound focus on the realities for the majority of the people who lived and worked there, not just the wealthy owners, Monticello's change of course departs from its original portrayal to the public. The retired historian Lucia "Cinder" Stanton began working at the Monticello estate in 1968, and recalls that during her time there, Sally's name was never mentioned; it was Monticello's dirty little secret. Back in the 1960s it would have been scandalous to unveil such a secret, even with rumours whirring around; as such, little was said about the Hemings family at all. It was not until the 250th anniversary of Jefferson's birth in 1993 that the tours began to include stories of the slaves who worked and lived on the estate. Despite this giant leap forward in uncovering the truth of the livelihoods of those who lived there, it would take many more years for another fact to come to light.
This fact would bring the descendants of the slaves to visit the property their ancestors once called home.

Remembering Mulberry Row

Enter Mulberry Row, the dynamic, industrial hub of Jefferson's grand enterprise. This famed street was the centre of work and domestic life for many people; between 1770 and 1831, when Monticello was sold, the row comprised some 20 buildings. Monticello unveiled the restoration of Mulberry Row in 2015, displaying a series of reconstructions of dwellings from the plantation street. The unveiling welcomed over 100 descendants of slave families, with an emotional tree-planting memorial ceremony taking place in their ancestors' honour. This was only the beginning of the commemorative efforts, as the significance of these lives, and their connection to modern successors, continued to grow.

A More Comprehensive Account

Once the room was discovered, it opened the floodgates for a much-needed retelling of the Monticello estate's history. Just as the tide had turned with Mulberry Row, recounting for once the story of the majority who lived and worked there, so it was for Sally Hemings. The estate's curators decided to incorporate her room and life into a fuller account of Monticello and its people, not just its wealthy, well-known owners. The breakthrough acknowledged a dark yet important part of history which Americans needed to be aware of; it has, however, also had a more complicated impact on some. Hemings' distant relatives hold mixed feelings about their ancestor's legacy, and particularly her relationship with the American president Thomas Jefferson.

A Descendant's View

Whilst the discovery of Hemings' room gave closure to some, it also gave answers which were less than satisfactory.
Gayle Jessup White, a distant niece of Sally Hemings, notes that "as an African-American descendant, I have mixed feelings – Thomas Jefferson was a slaveholder." White, who works as Monticello's Community Engagement Officer, has every right to feel uneasy, descended as she is from both a U.S. president and an enslaved woman. The social gulf between the two was so great that it suggests Jefferson conveniently held Hemings as property and did not trouble himself with the consequences of his desires. As an African-American woman, White appreciates the work of the Thomas Jefferson Foundation, as "for too long our history has been ignored." Indeed, the discovery shed light on the real truth behind Monticello, and suggested that this sort of arrangement may have been more widespread than initially believed. "Some people still don't want to admit that the Civil War was fought over slavery. We need to face history head-on and face the blemish of slavery and that's what we're doing at Monticello." White is not alone; joined by her colleagues, she seeks to unveil more truths about the property and its history. Given its dark past, Monticello was never embraced by the majority of the local African-American community, owing to Jefferson's mixed messages regarding slavery: on one hand he was a champion of justice who wished to abolish the institution, yet he kept some 600 slaves of his own over his lifetime. "I find that some people are receptive to the message and some are resistant," she said.
"But our message is that we want the under-served communities and communities of colour to become partners with us." Whilst White acknowledges that there is much more work to be done in spreading the stories of their ancestors, "anecdotally we have seen an uptick in African-Americans visiting Monticello, so I know we're making progress." It remains to be seen whether the community will fully embrace Monticello; it cannot be doubted, however, that it is part of the history of the African-American people. Despite the answers provided by the finding of Hemings' room, a number of questions still require further enquiry. Despite the extensive historical analysis of Monticello's records and documents, the history of the former plantation remains mysterious in its own ways. Whilst Jefferson kept detailed records and logged the lives of his hundreds of slaves, very few artefacts remain; a scarce few individual photos of people from some of the families are all that are left at present. The descendants, and the curators of Monticello's museum, have since undertaken several ventures which have revealed more remarkable information about these slaves. At last, justice for those who had seemingly lost their place in the history books.

The Hemings Family Tree

It seems that Sally's name is not the only one in her family to have made a significant contribution to the United States. Her family tree includes a number of descendants who also carried Jefferson's genes, an impressive, wide-reaching lineage which can be traced to the present day. In 2008, the historian Annette Gordon-Reed published her book The Hemingses of Monticello: An American Family, which provides wonderful insight into the lives of slaves at the time.
Gordon-Reed views the slaves through an analytical lens; she recounts the history of generations of the Hemings family based on surviving legal records, diaries, farm logs, newspapers, archives, correspondence and even oral history.

Life After Monticello

Madison Hemings, one of Sally's sons, said that his mother's first child died soon after her return from Paris with Jefferson. The records Jefferson kept confirm this story, and also show that Hemings had six children after her return to the U.S. Of the six, four survived into adulthood: Madison, Eston, Beverley and Harriet. In time, all except Madison chose to live within white society in the North. Madison's memoir is critical in furthering his mother's story and that of his siblings. According to Madison, his siblings Beverley and Harriet both married affluent Washingtonians and lived within DC's white community, while Madison and Eston married free women of colour in Virginia. Eston perhaps made the most surprising choice of all: changing his surname to Jefferson, to acknowledge the U.S. President as his biological father.

An Influential Lineage

Hemings' sons went on to enjoy success in adulthood, with several of their children taking up arms and fighting on the Union's side in the bloody Civil War. Sally Hemings' family tree expanded to include several grandchildren and great-grandchildren, who carried on the family legacy. It seems that politics was in the DNA of Jefferson's offspring: his and Hemings' great-grandson, Frederick Madison Roberts, became the first person of black ancestry elected to office on the West Coast of the United States, serving for over 20 years in the California State Assembly. Yet this was not all for the Jefferson-Hemings descendants.
In 1993, Monticello historians made an effort to glean more information from the descendants of the enslaved at the estate. Over 200 interviews were conducted, with the goal of collecting personal accounts of the African-American families who lived at Jefferson's Virginia plantation, from their descendants. This oral history project continued in recent years, reaching a peak with a 2016 public summit titled "Memory, Mourning, Mobilization: Legacies of Slavery and Freedom in America." The summit opened with a bold, chilling statement: "My ancestors were enslaved at Monticello. Generations of people bound to the earth, by blood and by law." The gathering indicated just how many families had been impacted by the plantation, and in turn by Thomas Jefferson. Finally, those who had been enslaved were given a voice to tell their story, albeit hundreds of years later. Most important of all is the adjustment in the narrative told to the general public. Curious about the scandal and mystery surrounding the expansive grounds of Monticello, over half a million visitors come to the estate; you would hope all these people are told the most realistic version of events. The gradual shift now portrays a more holistic story, with the details of slaves' lives, once glossed over, now brought to light. Tom Nash, one of the expert guides at Monticello, made this candid remark to his visitors: "This is a spectacular view from this mountaintop. But not for the enslaved people who worked these fields. This was a tough job and some of them – even young boys 10 to 16 years old – felt the whip." Whilst these days Monticello is green pastures and sprawling lawns, it was not enjoyed that way hundreds of years ago. Conditions for the enslaved were harsh, even cruel; these people were considered sub-human, and whilst perhaps afforded better living conditions than many slaves in the U.S., they were still treated in a manner that was almost intolerable.
'No Such Thing as a Good Slave Owner'

Nash, constantly in the firing line of the public's probing questions, shares some of the wide range thrown at him. "Why did some slaves want to pass for white when they were freed?" one tourist asked, while another questioned: "Why did Jefferson own slaves and write that all men are created equal?" Nash's answer reflects the realities of the time: "Working in the fields was not a happy time. There were long days on the plantation. Enslaved people worked from sun-up to sundown six days a week. There was no such thing as a good slave owner." It doesn't get much clearer than that; any slave was still just that: a slave. The one thing these people yearned for was dangled in front of them yet never remotely within their grasp: freedom. And the man who was supposedly able to grant it to them guarded his secret jealously. July 2017 saw Monticello's 55th annual Independence Day celebration, and while the memory of the estate's history may still linger in the minds of the descendants of the enslaved, a celebration was held: not just of the estate, but of the memory of those who had experienced, or been touched by, the events of the plantation. Seventy people from 30 countries streamed in from all corners of the globe to attend the event, and in doing so became naturalised citizens of the United States. The occasion brought together those affected and united them, creating a sense of belonging. The United States, and the world along with it, continues to recognise the complexities of American history, working harder than ever to acknowledge the contribution, and often the sacrifice, of those who were not free as you or I are today.

Jefferson Wasn't the Only One

Whilst it is easy to point the finger at Jefferson as a leader who went back on his word of creating a freer, more equal America, he was not the only prominent U.S. figure with a history of slave ownership.
As historians scour documentation and evidence about the impressive line-up of presidents, it has been found that twelve leaders of the United States were slave owners at some point in their lives. Of those twelve, eight owned slaves whilst they held office! Despite the United States' Declaration of Independence being founded on the principle that "all men are created equal," there was a glaring hole in that statement. The links of these Founding Fathers to slave ownership highlight a fatal flaw in America's history, an astonishing contradiction forever ingrained in the nation's past.

Early Years of the Republic

Although there were paradoxical and conflicting views on the institution of slavery, four of the first five presidents of the United States were slave owners! A nation supposedly built on equality and freedom rested on a great contradiction, one which tested the integrity of the nation and its leaders. The "father of the country," George Washington, is among the four. Over 300 slaves lived on the first President's Mount Vernon plantation, and this number grew. Washington was singular in that he chose to free his personal slaves: when his will was read, it called for them to be freed upon his wife Martha's death. Martha, however, decided to free a large number of them earlier, releasing them only a year after he had passed away. Despite the prevalence of slave-holding presidents in the early years of the nation's history, John Adams, the second President of the United States, proved an exception. He was the first resident of the White House, and whilst slave labourers did work to construct the iconic residence, Adams himself never owned slaves. He was considered to hold "moderate" views on slavery and chose to heed the message of the Declaration of Independence.
Like his father before him, Adams's son John Quincy, the sixth U.S. President, also did not hold slaves during his lifetime. In his final years, and in the years when he did not hold office, he sought to oppose the institution of slavery and spread the message of freedom for all, regardless of race.

Presidents After Jefferson

As we have seen, slave labourers were not used only on Jefferson's plantation; some worked at Mount Vernon as well as on the fabled White House. Though Jefferson once referred to slavery as an "assemblage of horrors," he was not the last President to be a slave owner. James Madison, James Monroe and Andrew Jackson also participated in the institution, as did the eighth President, Martin Van Buren. These Presidents often said they opposed the expansion of slavery, yet could hardly be considered abolitionists; perhaps they enjoyed the benefits of owning slaves too much to give them up. Surprisingly, the last two Presidents to own slaves were both men associated with Abraham Lincoln. Let's have a final look at who these men were! Before Lincoln, a number of other prominent figures held slaves during office, including John Tyler, James Polk and Zachary Taylor. The last president to personally own a slave was Ulysses S. Grant. Serving two terms, between 1869 and 1877, the former general of the Union Army had kept a single black slave named William Jones; yet even he granted Jones his freedom, noting later that slavery was "a stain to the Union (that) people had once been bought and sold like cattle." Owning slaves was, in the fashion of the time, considered perfectly acceptable. However, a growing movement, given impetus by Abraham Lincoln, was sure to overturn this archaic institution. Lincoln's Emancipation Proclamation paved the way for the 13th Amendment to end slavery.
The measure was controversial at the time; Andrew Johnson, Lincoln's right-hand man and himself a slave owner, even lobbied against his own President! In 1863, the 16th U.S. President freed almost 3 million enslaved people with his Emancipation Proclamation, and slavery in America was officially abolished two years later, with the adoption of the famous 13th Amendment.
Increase funding for the military? Repeal, replace or leave the Affordable Care Act alone? Defund Meals on Wheels for the elderly? Cut back on school meal funding? Every week of this new presidential administration has raised unsettling questions. As Congress wrestles with such issues it might help to look back at why Congress passed the National School Lunch Act in 1946. Individual school lunch programs emerged in the 1930s as communities grappled with the fact that many children in school did not have enough to eat as their families struggled during the Depression. Communities came together to provide school lunches not because studies said children might do better in school if they were fed. They simply saw it as a moral imperative to not let children go hungry. If a community, or sometimes a state, organized sufficiently, children could be fed. Even so, during World War II the military found that not all men who were drafted or volunteered for service were able to serve due to medical conditions that could have been prevented with good nutrition. So, as the war came to a conclusion, Congress organized a national school lunch program as a way of ensuring that those needed for future fighting forces would not be precluded from service due to lack of nutrition. Well-fed children were not seen so much as a moral imperative but as a way of bolstering the national strength and protection. At the same time, the program organized to ensure that farmers’ surplus crops could be pressed into use for school lunches. Farmers got subsidies for crops that would otherwise have gone to waste. Children were helped, the military was helped, farmers were helped. In the intervening years, the school meal program has grown. Children in families living in poverty were made eligible for free or reduced-cost school meals. Breakfasts, after-school meals and summertime meals are now offered at some schools. 
Some of those decisions were made based on studies showing that when children are not hungry, they do better in school. Tweaks to the program in the past decade focused on getting more nutritious foods into school meals, educating children about the benefits of good nutrition, and trying to help them start healthy eating habits that could serve them for a lifetime. Some people criticize the school meal program as an entitlement, and others as the "nanny state" banning some foods and encouraging consumption of others as a way of dictating to people what they should eat. Perhaps in the issue of school meals, and in other issues as well, it's time for Congress to go back to looking at what problems need to be addressed in society and how government can arrange win-win solutions. Those who feel the need to build up America's fighting forces should keep in mind how health care, education and nutrition all help build a fit society better ready to take on the task of national defense. School meal programs — in fact all government programs — could be streamlined for efficiency. They could be examined to ensure they're still serving the purposes for which they were started or that they're addressing new problems that have arisen. School meal programs aren't welfare run amok. They're rooted in common sense. In this issue and so many others, it's time for Congress to once again use common sense, find common ground, and try to make this a country that does what's best for the common man, woman and child.
The ratification process for the Constitution of Europe stalled in 2005. The constitution was established through a European Union treaty signed in Rome in 2004 and was intended to make a community originally designed for six founding members in the 1950s more workable with a membership of 25 disparate countries. Governments that were faced with selling the document to a heavily skeptical electorate, such as that in the U.K., claimed that the treaty did not amount to a large extension of the EU’s powers and was little more than a “tidying-up exercise.” Meanwhile, many pro-integration political leaders in France and Germany billed it as a significant move toward the full “political union” to which they had always aspired. The document, which would supersede all previous community treaties (except the so-called Euratom Treaty, which established the European Atomic Energy Community), contained several significant—and highly controversial—changes to the structure and functioning of the 25-member European Union. To come into force the new constitutional treaty had to be ratified by all 25 member states either through referenda or by votes in the national parliaments. Its rejection in France and The Netherlands therefore meant that it had to be abandoned for the foreseeable future, though proponents insisted that the constitution was not dead. Until agreement on a new set of rules was reached—and no alternative had been announced as of year’s end—the EU would have to work under the existing treaty rules. Many of the failed constitution’s advocates argued that this situation would mean ineffectual decision making and would leave the EU less effective than it should be in international affairs.
As we celebrate Dr. Martin Luther King Jr. Day this coming Monday, many know him for his work on civil rights and his famous oratory, particularly the “I Have a Dream” speech, delivered at the Lincoln Memorial in Washington, D.C., in August 1963. However, Dr. King’s legacy is much greater than this. There are many interesting facts about his life, as well as about the holiday that commemorates his name. Here are some lesser-known facts about Dr. King Jr. and the holiday. Dr. King was very young. That’s correct: many of Dr. King’s accolades came while he was relatively young. In fact, he received the Nobel Peace Prize at the age of 35, the youngest man to win the prize. Tragically, he was assassinated in 1968, when he was 39. Dr. King was very intelligent. Dr. King skipped the 9th and 12th grades and entered Morehouse College at the age of 15. He obtained two undergraduate degrees: one in sociology from Morehouse, the other in divinity from Crozer Theological Seminary. He also obtained a Ph.D. in systematic theology from Boston University. The attacker in Dr. King’s earlier assassination attempt is reportedly still alive. In September 1958, a 42-year-old black woman named Izola Ware Curry attempted to stab Dr. King to death with a letter opener. After the stabbing, Curry was taken into custody and found incompetent to stand trial on assault charges. She was later diagnosed with paranoid schizophrenia and committed to the Matteawan State Hospital for the criminally insane, according to the Martin Luther King, Jr. Research and Education Institute. According to a documentary called When Harlem Saved A King, Ms. Curry is alleged to still be alive, although she was born in 1916. The road to creating Martin Luther King Jr. Day was long and difficult. In 1968, the first legislation to make King’s birthday a federal holiday was introduced by U.S. Rep. John Conyers Jr. of Michigan.
The bill was finally signed into law in November 1983, and the first official holiday was observed on the third Monday of January in 1986. In 1994, Congress designated the Martin Luther King Jr. federal holiday as a national day of service, led by the Corporation for National and Community Service. The holiday falls on the third Monday in January, although January 15th is Dr. King’s actual birthday. On May 2, 2000, South Carolina governor Jim Hodges signed a bill making Martin Luther King, Jr.’s birthday an official state holiday; South Carolina was the last state to recognize the day as a paid holiday for all state employees. Dr. King received great acclaim both during his lifetime and after his assassination. Because of his long and strong support for justice, civil rights and peace, Dr. King was arrested over 30 times. Yet, during his lifetime, he was awarded at least 50 honorary degrees from colleges and universities for this struggle. Now, there are more than 900 streets named after him in the United States. There are even streets and centers in other nations, such as the Martin Luther King Center in Havana, Cuba, named in his honor. For further information about Dr. King and the MLK holiday, visit The King Center website or the Corporation for National & Community Service.
Pumpkins come from North America, and scientists believe they have been grown there for at least 7,500 years. The name ‘pumpkin’ originated from the Greek word ‘pepon’, which means ‘a large melon’. It was changed by the French into ‘pompon’, and the English changed it to ‘pumpion’. The word ‘pumpkin’ was created by American colonists. Native Americans had used pumpkin as a staple in their diets for centuries before the colonists’ arrival. They also used pumpkin seeds for food and medicine. White settlers also included the vegetable in their diets, as it was tasty, nutritious and easy to grow. Pumpkins are particularly popular around Halloween, when they are harvested and used to carve jack-o-lanterns. However, originally other vegetables, such as potatoes, beets and turnips, were used to make them, because pumpkins were not known in Europe at that time! The practice originated from an Irish legend about a man nicknamed ‘Stingy Jack’. According to the story, Stingy Jack invited the devil to have a drink with him. True to his name, Jack didn’t want to pay for his drink, so he asked the devil to turn himself into a coin that Jack could use to pay for their drinks. When the devil did so, Jack decided to keep the money and put it into his pocket, together with a silver cross – which stopped the devil from returning to his original form. Jack freed the devil on the condition that he would not bother him for one year and that he would not claim his soul after Jack’s death. The following year Jack tricked the devil into climbing a tree to pick a piece of fruit. While the devil was up in the tree, the sly man carved a sign of the cross into the tree’s bark so that the devil could not come down until he promised Jack not to bother him for another ten years. When Jack died, God did not allow him into heaven. The devil kept his promise not to claim Jack’s soul and did not allow him into hell either.
Instead, he sent Jack off into the dark night with only a burning coal to light his way. Jack put the coal into a carved-out turnip and has been roaming the Earth with it ever since. The Irish called the ghost ‘Jack of the Lantern’, and then ‘Jack O’Lantern’. In Ireland, Scotland and England, people started making their own versions of Jack’s lanterns by carving scary faces into turnips, potatoes or large beets and placing them in windows or near doors to frighten away Stingy Jack and other evil spirits. Irish immigrants brought the jack-o-lantern tradition to the United States; however, the original vegetables were soon replaced with pumpkins, which turned out to be perfect for the purpose. The tradition later spread to many other parts of the world, including Poland!
Moses held out his arm toward the sky, and thick darkness descended upon all the land of Egypt for three days. There were two main reasons for the plague of darkness. First, to hide from the Egyptians the fact that many Jews, who were unworthy of being freed, died during this plague. We are told that four-fifths of the Jewish population died, roughly 12 million Jews. (It is interesting to note that there were four times as many women as men in Egypt.) Second, to allow the Jews, who were unaffected by the plague of darkness, to roam freely within the Egyptians’ homes and locate their valuables, which they would later ask to “borrow.” The Egyptians were deeply impressed that the Jews didn’t take advantage of the plague to loot them.
The songs, musical instruments and performers who entertained all classes of society, from drinkers in alehouses through to the aristocracy in the royal courts. In an age long before radio and TV, music was one of the most popular entertainments. Medieval people enjoyed listening to or taking part in music, from the bawdy songs sung in medieval alehouses to the sophisticated performances enjoyed by members of the King’s court. The Medieval Musician A medieval musician who worked for wealthy households was a prized member of society, and someone who could expect to be handsomely rewarded. Only the very richest households could afford to keep permanent musicians. Most musicians were hired as and when they were needed. Records for English courts in the fourteenth century show regular payments were made to male musicians, particularly during the reign of Edward II, for performances at feasts and at court. Such payments were single instances, and were probably made to travelling musicians, who made their living moving from place to place. Musical Instruments of the Middle Ages The type of musical instrument used at an event depended on the resources of the person hiring the musicians and also on the musicians themselves. Most single travelling musicians would carry only the lightest of instruments, such as a fiddle or flute, suitable for transport on the road in all weathers. Musicians who were employed at a royal court would have access to lutes, tabors, clarions and even bagpipes. Many wind instruments, such as clarions and trumpets, also did duty as a means of sounding an alarm or heralding the arrival of an important visitor. Musical Entertainment in Medieval Times At the top end of the social scale, a lord’s feast would be enjoyed to the accompaniment of a group of musicians, who played according to the lord’s wishes.
In a less formal setting, people might gather in a village square to watch a performance by travelling minstrels, tossing coins into a hat at the end of the performance. In an alehouse or barn, the celebrations that often accompanied seasonal festivals such as harvest, Yuletide or Halloween would include music or dancing, using whatever resources were available. Bells, hurdy-gurdys (a stringed instrument driven by a wheel) and drums could all be taken out of storage for feast days and used to provide entertainment. In the Middle Ages, many well-known songs centred on the themes of courtly love and romance. The ideal of chivalry was popular across all levels of society, and many travelling minstrels and troubadours had visited the Near East, bringing Arab influences and ideals back through Europe with them.
William Henry Johnson is noted as potentially being the first African American to practice law in the United States. Johnson was born into slavery in Richmond, Virginia, on July 16, 1811. He was the property of Andrew Johnson. Small in stature, William Henry Johnson became a jockey. Although it is clear that Macon Allen was the first Black person to formally practice law, Johnson qualified for the bar in 1842. However, he was not sworn in until 1865. In 1859, Johnson tried a divorce case in Providence, Rhode Island, and in 1864 he also tried a criminal case on Cape Cod. Johnson was also appointed by Massachusetts Governor John Andrew as a Justice of the Peace in the New Bedford area from 1860 to 1863, making him one of the first black judicial appointees in the nation. Johnson’s most notable case was that of a 13-year-old boy named Charles Cuffee, who was charged with and convicted of murder in New Bedford, Massachusetts, in 1870. Johnson (also known as Squire Johnson) was asked by the district attorney of Bristol County to represent the young boy. Charles Cuffee was related to one of the most famous black sailing merchants of that time, Paul Cuffee, who was wealthy enough to purchase the freedom of various family members. Squire Johnson also defended liquor dealers of New Bedford and was known to have been retained by impoverished whites in the area who were often whiskey smugglers. Eventually, Johnson tried criminal and civil cases as far away as New York and New Hampshire. Newspaper accounts of the era praised him for his courtroom skills even when he lost.
Medgar Evers was the field secretary of the NAACP and a major figure in civil rights history. Evers paid the ultimate price for his commitment to the cause of civil rights when he was murdered on June 12th, 1963. Medgar Evers was born in 1925 in Decatur, Mississippi. Mississippi in the mid-1920s epitomised white attitudes to African Americans in the South. Few black children went to school, segregation existed in just about all aspects of life, and most African Americans there could expect only the most menial of jobs. Lynching was used to keep blacks ‘in their place’. The KKK was strong in Mississippi, and where the KKK existed, African Americans learned to live in fear of doing anything other than what was expected of them by the dominant white community. Evers grew up in this environment. As with all black youths in Decatur, Evers experienced racial abuse from an early age. In later years he recollected how a family friend was lynched in the town for answering back to a white woman. Everyone in Decatur apparently knew who did the killing, but no-one was ever charged and nothing was ever said in public about it. The dead man’s bloodied clothes were left in public, presumably as a warning to other African Americans about the consequences of such behaviour. “Every Negro in town was supposed to get the message from those clothes and I can see those clothes now in my mind’s eye. But nothing was said in public. No sermons in church. No news. No protest. It was as if this man just dissolved except for his bloody clothes.” – Evers. Despite the many obstacles put in the way of an African American receiving a decent education, Evers got his high school diploma by walking twelve miles to school and twelve miles back each weekday. During World War Two, he joined the American Army and was honourably discharged in 1946. Evers returned from a Europe that had been freed from tyranny.
After going through this experience, he decided that the South should be the same – free from tyranny. Ironically for Mississippi, Evers had no problems registering to vote for the 1948 election. However, as the vote neared, his family was subjected to more and more threats. When voting day arrived, Evers and his brother Charlie found that about 200 white men blocked their way to the polling station. They never got to vote. Instead, both young men joined the NAACP, and Medgar became a very active member. He combined this work with studying at Alcorn A&M College in Lorman, Mississippi, where he graduated in business administration in 1952. While at college, Evers married Myrlie Beasley. After graduating, Evers became an insurance salesman and had a comfortable lifestyle. However, in 1954, while his father lay ill in hospital, Evers witnessed an attempted lynching. His father had been placed in the ‘Negro Ward’ in the basement of the hospital. In an effort to get some fresh air, Evers went outside, where he saw that a large mob of whites had gathered, demanding that an injured black man be brought outside for them. His crime? He had fought with a white man in the town of Union. After the man was shot in the leg, the police had brought him to hospital, and the mob gathered outside. “It seemed that this (racism) would never change. It was that way for my daddy, it was that way for me and it looked as though it would be that way for my children. I was so mad that I just stood there trembling and tears rolled down my cheeks.” – Evers. After this incident, Evers quit his job in insurance and went to work for the NAACP full-time. He quickly rose to become a field secretary within Mississippi. Evers became one of the best known and most vocal members of the NAACP in the state. He moved to the state capital, Jackson, to be nearer to other civil rights leaders. However, his work gained him many enemies.
His children were taught to throw themselves to the floor if they heard any strange noises outside. Evers received numerous threats over the phone, and shortly before his death his house was fire-bombed. “We lived with death as a constant companion 24 hours a day. Medgar knew what he was doing, and he knew what the risks were. He just decided that he had to do what he had to do. But I knew at some point that he would be taken from me.” – Myrlie Evers. Regardless of the threats, Evers carried on working – especially with voter registration. On June 11th, 1963, President John F. Kennedy addressed the nation on civil rights and stated that there would be federal support to push forward integration. Evers had worked all day and returned home late that night. As he got out of his car, he was shot in the back; he died fifty minutes later in hospital. “We both knew he was going to die. Medgar did not want to be a martyr. But if he had to die to get us that far, he was willing to do it.” – Myrlie Evers. Byron de la Beckwith was arrested for the murder. His rifle had been found near the shooting and he had been seen by some youths in the vicinity of Evers’s house. His car was also positively identified. However, others stated at his trial that Beckwith had been seen 60 miles away at the very time of the shooting and therefore could not have been the killer. Beckwith was tried twice for the murder (in 1964 and 1965) but was not convicted. However, he was re-arrested for the murder in 1991 and found guilty. Sentenced to life in prison, Beckwith died in prison aged 80.
<urn:uuid:f1d88b3e-1bea-48fa-a4b9-b1ea5b0abb49>
CC-MAIN-2020-05
https://www.historylearningsite.co.uk/the-civil-rights-movement-in-america-1945-to-1968/medgar-evers/
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250592261.1/warc/CC-MAIN-20200118052321-20200118080321-00170.warc.gz
en
0.993054
1,209
3.921875
4
[ -0.3910362720489502, 0.23405180871486664, -0.16535170376300812, -0.05843309313058853, 0.2946304678916931, 0.10405583679676056, 0.1364908665418625, -0.14699548482894897, -0.11786364018917084, 0.34123653173446655, 0.23170015215873718, 0.34236571192741394, -0.10489623993635178, 0.169983923435...
10
1,229
ENGLISH
1
In his seminal book, Why We Can’t Wait, the Reverend Dr. Martin Luther King, Jr. wrote about the inspired life of Crispus Attucks, saying, “He is one of the most important figures in African-American history, not for what he did for his own race but for what he did for all oppressed people everywhere. He is a reminder that the African-American heritage is not only African but American and it is a heritage that begins with the beginning of America.” Attucks was one of the Boston Patriots to die during the Boston Massacre on March 5, 1770. Not much is known about Attucks, but most historians agree that he was of mixed African and Native American descent. It appears that Attucks was engaged in the maritime industries of New England and had some experience as a sailor. As tension between Great Britain and her American colonies erupted in 1765 with Parliament’s passing of the Stamp Act, Great Britain felt compelled to send troops to occupy Boston, the hotbed of colonial resistance. The lone sentry at the Custom House was attacked by a vociferous mob who threw stones, snowballs, chunks of ice and wood at him. Fearing for his life, he called to the nearby garrison for reinforcements. Captain Thomas Preston and seven soldiers joined the sentry at the Custom House, but the crowd only grew larger. As the crowd threw chunks of ice and clubs at the soldiers, one found its mark and knocked a British soldier to the ground. He stood back up, yelled and fired his musket into the crowd. Immediately all the other British soldiers opened fire in a ragged volley. Five men fell dead; the first among them was Attucks, with two musket balls in his chest. A large funeral was held in Boston, and the five victims of the “Boston Massacre” were buried together in a common grave in Boston’s Old Granary Burying Ground.
In the 19th century, Attucks became a symbol of the abolitionist movement, and his image and story were used to demonstrate his patriotic virtues. Abolitionists like William C. Nell and Frederick Douglass extolled Crispus Attucks as the first martyr in the cause of American liberty and used his memory to garner support to end slavery in America and attain equal rights for African Americans. In the 20th century, Attucks continued to be celebrated as a major African American historical figure: musician Stevie Wonder wrote a song during the American Revolution Bicentennial that mentioned Crispus Attucks, and a commemorative postage stamp was also issued in his honor. Though little is known of Crispus Attucks’ life, his death continues to serve as a reminder that African Americans took an active role in the path to American independence.
<urn:uuid:ffba1e58-1032-4aa0-ad4d-45fd1519133a>
CC-MAIN-2020-05
https://www.battlefields.org/learn/biographies/crispus-attucks
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250613416.54/warc/CC-MAIN-20200123191130-20200123220130-00357.warc.gz
en
0.985961
604
4.09375
4
[ -0.17386749386787415, 0.5165879130363464, 0.522845983505249, -0.20299838483333588, -0.4278202950954437, -0.03084377385675907, 0.17530173063278198, 0.04318402707576752, -0.07796551287174225, 0.007240059785544872, -0.05297791585326195, -0.0838405191898346, -0.2201453149318695, -0.09051670879...
3
600
ENGLISH
1
In 1966, the national Black Panther Party was created. Its platform and ideals resonated with blacks across the country, especially in the inner cities of the North. The Panthers were able to organize and unite these communities, and this alarmed the federal government, which instituted many controversial, illegal programs of harassment, infiltration, and instigation that led to the deaths of many Panthers. From their inception, the Black Panthers were treated with contempt. The Panthers wrote out a platform called “What We Want, What We Believe.” Their ideas and methods appealed greatly to blacks. The past few years had seen the civil rights struggle rise, and had left many blacks with the feeling that not enough was being accomplished. Many blacks shared the Panthers’ view that violence was needed to defend themselves until true equality could be achieved. Aside from being radical, the Panthers did things that helped the community. They set up breakfast programs and helped people clean up their neighborhoods. The Black Panthers gave many urban black communities a sense of unity and identity that they hadn’t had before. The Panthers’ violence alarmed the government. In March of 1968, the Panther newspaper printed this warning to police: “Halt in the name of humanity! You shall make no more war on unarmed people. You will not kill another black person and walk on the streets of the black community to gloat about it and sneer at the defenseless relatives for your victims. From now on, when you murder a black person in this Babylon or Babylons, you may as well give it up because we will get your ass and God can’t hide you.”1 This gave the government cause for alarm, and it stepped up its “efforts” accordingly. The government went to great lengths to keep up the status quo. It began campaigns of disinformation against the Panthers in order to stop any support for them. The Panthers were continuously harassed by police.
Panthers were followed and arrested on minor, sometimes fabricated charges. For example, in Oakland, California, the headquarters of the Panthers, police would randomly arrest Panthers. In 1967, the FBI arrested 21 Black Panthers for “conspiring” to blow up department stores and botanical gardens in New York.2 It was not only local law enforcement that tried to destroy the Panthers; the FBI was very actively involved. The FBI had begun using its COINTELPRO program against the Black Panthers in November 1968. It had many agents working to surveil, harass and infiltrate the group. One of the first major actions the FBI undertook was to create a violent confrontation between the Panthers and the US organization. The FBI used different methods, such as sending satirical cartoons to members of the Panthers under the pretence that they were from US. These cartoons served to further agitate the already volatile situation. An FBI agent said of the cartoons: “The BPP members… strongly objected to being made fun of in cartoons being distributed by the US organization (FBI cartoons in actuality)… informant has advised on several occasions that the cartoons are really shaking up the BPP.”3 Later on, the FBI forged a Panther’s name and sent a letter to another group of Panthers. This letter was intended to spark more hatred and confrontation between the two groups, which it did. The FBI’s efforts continued and were escalated. Its work with the Black Panthers came to an end on a cold December morning in 1969. The FBI had gathered a large amount of information on the leader of the Chicago Black Panthers, Fred Hampton. Through its sources within the Panthers, it knew the layout of Fred’s apartment and when he would be there. At 4:45 in the morning, fourteen police burst through the door and began shooting up the interior of the apartment. The police wounded four people and killed two.
Soon after, the Illinois state’s attorney issued a statement that it was the Black Panthers who had mounted the attack on the police, who had been “carrying on a search for illegal weapons”. Flint Taylor wrote of the state’s attorney’s statement: “Taylor had a story that Fred was up and firing away at the police in the back part of the apartment. Well, the bed that he was sleeping on had blood all over it – at the head and other places. So obviously, that totally disproved the theory that Fred was up, about, and firing away.”4 Upon later investigation, it was discovered that the Panthers had fired only one shot out of the hundred or so that were fired. It was also discovered that the police had fabricated evidence to make it appear as if the Panthers had fired upon the police. In conclusion, the Black Panthers united the black communities within the inner cities of the United States. This unity threatened the control the government had over these people. The government used illegal and unethical methods in order to destroy the Black Panthers, and its deception led directly to the deaths of several Panthers. The Black Panthers moved on, though, and stayed strong, opening their first overseas office in Algiers in 1970 – just another step in the quest for civil rights.
<urn:uuid:bf92d839-e3d5-4165-8c81-ff530fc259cc>
CC-MAIN-2020-05
https://guidedcollective.com/black-panthers-idea/
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251694908.82/warc/CC-MAIN-20200127051112-20200127081112-00148.warc.gz
en
0.980904
1,048
3.515625
4
[ -0.41682982444763184, 0.17630383372306824, -0.19826170802116394, 0.11531329154968262, -0.042053647339344025, 0.268321692943573, 0.2073582410812378, 0.2222558557987213, -0.28634414076805115, 0.3887469172477722, 0.3690711557865143, 0.22592946887016296, 0.17641696333885193, 0.2609558403491974...
4
1,043
ENGLISH
1
This topic guide will help you work with the topic of Donald Trump. The guide is mainly intended for use in English class, but it may also be relevant for other school subjects such as Social Studies or History. The guide is designed to give you a good overview of both Donald Trump's presidential campaign and his presidency. You can also find specific suggestions for texts to use as reference points, as well as ideas for further thematic perspectives. This topic guide was last updated on December 19, 2019. When the businessman and reality star Donald J. Trump announced that he would enter the 2016 presidential race as a Republican candidate, few people took him seriously. Most political analysts felt that his chances were very poor because of his lack of political experience, his extreme political views and his highly controversial rhetoric. However, Trump became increasingly popular among the American people, and he eventually overtook all the other Republican candidates and secured the nomination to become the party's official candidate. Even then, most analysts believed that his Democratic opponent Hillary Clinton would be victorious in the end, but Trump secured some of the most important states and thereby won the election. His victory came as a surprise to many, as his campaign was plagued by a number of scandals. Trump was sworn in as President in January 2017. During his campaign, some of his most important promises related to fighting illegal immigration, abolishing Barack Obama's health reform, reducing restrictions on corporations, and generally improving the US economy. Some of these promises proved difficult to realise, however. Despite the Republicans' complete control of Congress at the beginning of Trump's presidency, they never managed to settle on an alternative to Obama's health reform, and Trump's early attempts to block immigration and travel from a number of Muslim countries also faced several legal challenges.
Just like his presidential campaign, Trump's presidency has been haunted by an unusually high number of political and personal scandals. For example, suspicions about Russian interference in the 2016 election quickly started to emerge, with some evidence pointing to illegal collusion between Trump's campaign staff and the Russian government in an effort to discredit Hillary Clinton and secure Trump's victory. These suspicions led to a large-scale FBI investigation, which has already led to several arrests of people in Trump's inner circle. Trump has often been criticised for his aggressive rhetoric, where he tries to undermine his opponents by calling them names, constantly doubting their claims, or changing the subject when he is faced with personal accusations. He also has a very loose relationship with facts, and often labels stories he disagrees with as "fake news", without presenting any kind of evidence or argument. He is generally sceptical towards the mainstream media (such as TV networks like CNN or newspapers like The New York Times), and has even referred to such media as "the enemy of the people". Globally, Trump's presidency has led to a decline in the international popularity of the US - especially in Europe and in the other countries of the Americas. Nevertheless, Trump's message is also gaining global popularity, which can be seen in a general tendency towards more nationalistic policies in many parts of the world. For example, the UK's decision to leave the EU has often been compared to Trump's election - both can be viewed as symptoms of popular frustrations about the realities of the globalised world, and a desire to keep one's country safe against foreign powers.
December 4, 2019 report: Dogs found able to perceive slight changes in human spoken words

A team of researchers from the University of Sussex, the Defence Science and Technology Laboratory and the University of Lyon has found that dogs are able to detect minor differences in spoken human words. In their paper published in the journal Biology Letters, the group describes experiments they carried out with pet dogs and what they learned about the dogs' ability to hear slight differences in human language.

Most people know that dogs can be trained to understand some words spoken by humans—sit, beg and stay are some familiar examples. But it has been assumed that dogs do not really follow or even listen to regular human conversation, because they are not able to understand what is being said. In their new effort, the researchers found that dogs are able to notice when they hear words that they have not heard before.

The researchers came to this conclusion by carrying out an experiment that involved taking video of dogs as they listened to human voices through a speaker. In all, the researchers recorded 42 dogs of different breeds as they listened to words emanating from the speaker. Just six words were spoken, all single-syllable, non-command words. Many of the words were close in pronunciation, such as "hid," "had" or "who'd," to test whether the dogs could hear and react to the differences. The words were prerecorded by several male and female volunteers speaking with different accents, to determine whether that might throw the dogs off.

The team reports that all of the dogs reacted to the voices coming from the speaker—at least initially. They turned their heads quickly to the source and focused on it for several seconds. But then the dogs became accustomed to the words and responded less to what was said—at least until they heard a new word. When that happened, the dogs snapped to attention again, demonstrating that they could hear the difference between "sit" and "sat," for example.

The researchers found that it did not matter if the speaker's gender or dialect changed; the dogs still responded in the same way. They suggest this indicates that the dogs were capable of recognizing English words whether or not they understood their meaning. © 2019 Science X Network
Bright fall foliage, warm pumpkin spice lattes, and spooky Halloween costumes all come to mind when thinking about October. And those cute newborns sporting ghost and ghoul costumes have a little-known history of living a long life. Researchers from the University of Chicago compared data on people born between 1880 and 1895 who lived to be 100 or older with data on their siblings and spouses. The study found that people born in October are more likely to survive to 100 than those born in April, and that people born in September and November also have higher chances of living a long life. Those born in March, May, and July, however, produced 40 percent fewer centenarians than other months. Another study, on people born in the Northern Hemisphere, likewise found that fall babies lived longer than spring babies.

Why are October babies more likely to live to 100? There are a few theories behind this research. One idea is that these babies were less exposed to certain seasonal illnesses, since they avoided the extreme high and low temperatures of summer and winter, according to Nesochi Okeke-Igbokwe, MD, a physician and health expert. “Perhaps being born in the fall month of October created somewhat of a protective effect against exposure to seasonal illnesses that may ultimately impact one’s longevity,” Okeke-Igbokwe says. Still, researchers aren’t entirely sure why this is a trend and are still theorizing.

What impacts how long you live? There are plenty of other environmental factors that affect your chances of becoming a centenarian. Dietary habits, your level of physical activity, and avoidance of toxic habits like smoking are just a few things that may contribute to longevity. “Essentially, there are a host of factors that may come into play that would make it more or less likely to reach the age of 100,” Okeke-Igbokwe says.
There are a few simple rules to follow if you want to live to 100.
Originally, bearers of coats of arms were knights who could be called up for military duty. A knight’s rank was not readily apparent from his shield: in the reign of Edward I, the heraldry of these individuals does not appear to have been any different from that of their social superiors. King Edward's three lions passant guardant or on a field of gules (three gold lions, down on all fours on a red shield) was no more elaborate (or simple) than that of his enemy William Wallace, gules, a lion rampant argent (red, with a white lion up on its hind legs), or that of Robert the Bruce, a saltire and chief of gules on a field of argent (a white shield bearing a large red Saint Andrew's cross, with a red band across the top).

The Rolls of Arms, which were painstakingly created by the Heralds of the time, were long narrow strips of parchment on which were written lists of the names and titles of the knights and squires, as well as full descriptions of their armorial insignia. The exact circumstances under which the rolls were created are unknown, but their accuracy and veracity have been proven beyond doubt by careful and repeated comparison with seals and other documents from the time period. It is obvious from the similarity in description between the rolls that the early Heralds of the time of Edward I had framed some system for the regulation of their work, and this is what raised their art form to a science. The Heralds of the time had decided upon certain terms and rules for describing heraldic devices and figures, and had established laws to direct the granting, the assuming, and the bearing of arms.
The American colonists, on the eve of the Revolution, were very concrete in their identity as well as their unity. The colonists had endured many years of far-off governance by the mother country, as well as intercolonial problems that could only be solved by coming together as one close-knit colonial unit. The colonists had made their decision: they were not going to be governed by the far-off, tyrannical mother country of England, and they were going to come together as one to defend their beliefs.

The colonies had been exposed to many instances in which they had to deal with harsh suppression imposed on them by England. The Proclamation Act of 1763 was the first, and more would follow: the Quartering Act of 1765, the Stamp Act of 1765, and later the Townshend Acts. Mather Byles posed the question in his publication "…which is better, to be ruled by one tyrant three thousand miles away, or by three thousand tyrants not a mile away." He simply, but strongly, makes the point that the colonists were becoming tired of being governed from some far-off land.

Because of this harsh suppression, colonists began to strengthen their belief that governance by England was not going to meet their needs without provoking extreme controversy. In the Declaration of the Causes of Taking Up Arms in July of 1775, it was obvious that the colonists were not initially looking for separation, but because of England's harshness, they had no choice: "…we assure them that we mean not to dissolve that union which has so long and so happily subsisted between us, and which we sincerely wish to see restored…" Colonists who shared these beliefs would begin to strengthen their colonial unity.

The Revolution was the most prominent event in which it was necessary for all the colonists to be united for the same cause. The cartoon in the Pennsylvania Gazette in 1754, drawn b…
Gautama Buddha was the figure on whose teachings Buddhism was founded. He is believed to have lived and taught mostly in the eastern part of Ancient India, sometime between the sixth and fourth centuries BC. He is recognised by Buddhists as an enlightened teacher who shared his insights to help sentient beings end the cycle of rebirth and suffering. Accounts of his life, discourses, and monastic rules are believed by Buddhists to have been summarised after his death and memorised by his followers. Various collections of teachings attributed to him were passed down by oral tradition, and were not committed to writing until approximately 400 years later.

Gandhara is the ancient name of a region in northwest Pakistan, bounded on the west by the Hindu Kush mountain range and to the north by the foothills of the Himalayas. Buddhism probably reached Gandhara as early as the third century BC, and this relief is a fine example of the increasing popularity enjoyed by the belief system. To discover more about Gandharan Buddhas, please visit our relevant blog post: Understanding Gandharan Buddha Poses and Postures.
Mastodons were large elephant-like animals that have been extinct for about 9,000 years. Their remains have been found all over North America. They are often confused with the woolly mammoth, but they were shorter in height, longer, and heavier than the mammoth; they had straighter tusks and did not have a hump on their heads like the mammoth. Unlike mammoths, which usually lived in open areas, mastodons seem to have preferred forested and swampy areas. They ate leaves, twigs, cones, grasses, swamp plants and mosses; the remains of one mastodon had nearly 250 liters of plant material in its stomach. Mastodons were various shades of brown, with long guard hair over a fine woolly layer. Some pieces of hair, from 7 cm to 18 cm long, were found beside a skeleton in the northern United States. Scimitar cats preyed on young mastodons, and Paleo-Indians hunted both the young and the adults; all parts of the mastodon would have been used.
Warren Booth and his colleagues at the University of Tulsa in Oklahoma have used genetics to unravel the origin of bed bugs. They discovered that there are two lineages in Europe, lines so divergent that they have nearly split into two species. What's more shocking is that their origin lies with bats. The new findings, published in the journal Molecular Ecology, provide the first genetic evidence that bats were the ancestral hosts of the bed bugs that plague human residences today.

According to exterminators, Americans spent around $446 million getting rid of bed bugs in 2013, and the bed bug business increased 18% last year alone. Bed bugs have been around for centuries, and they have been involved with humans for about as long. References to bed bugs in ancient Egyptian literature have been documented, and archaeologists have also discovered fossilized bed bugs thought to be around 3,500 years old. A single pregnant female can infest an entire apartment building; bed bugs can go through many rounds of inbreeding with no detrimental effects. All they need are human hosts to satisfy their thirst for blood. Bed bug infestations are difficult to treat: it is estimated that 90% of common bed bugs have developed a mutation that makes them resistant to the insecticides known as pyrethroids, which had previously been used to kill them.

Booth's team sampled hundreds of bed bugs from human and bat dwellings in 13 countries in and around Europe. An analysis of their DNA showed no sign of “gene flow occurring between the human and bat bed bugs, even though some bats lived in churches or attics and could therefore have come into human contact.” “The bat lineage probably dates back to when bats and humans once shared caves,” says Booth.

There are two types of people, Booth says: "The type that have had bed bugs and the people that will still get them. We're living in a time where they're becoming much more common."
As the British and the colonists fought the Seven Years War against the French and Indians, the colonists were slowly building up resentment of British rule; there had already been several uprisings against the colonial governments. When the war ended in British victory, the crown issued the Proclamation of 1763, which stated that the land west of the Appalachians was to be "reserved" for the Native American population. The colonists were confused and outraged, and the ambitious social elites were now eager to direct that anger against the English, since the French were no longer a threat. However, the social elite was a minuscule percentage of the colonial population. As documented in city tax lists, the top 5% of Boston's taxpayers controlled 49% of the city's taxable assets. The lower classes then began to use town meetings to express their grievances. Men like James Otis and Samuel Adams from the upper classes formed the Boston Caucus and, through their motivational speaking, molded and mobilized the laboring class. After the Stamp Act of 1765, Britain's attempt to tax the colonists to pay for the Seven Years War, the lower class stormed and destroyed merchant homes to level the distinction between rich and poor: a hundred lower-class men had to suffer for the extravagance of one upper-class man. They demanded more political democracy, in which the working class could participate in making policy. In the 1776 elections for Pennsylvania's constitutional convention, a Privates Committee urged voters to oppose rich men in the convention. Even in the countryside there were similar conflicts of rich against poor. Several riots in the New York and New Jersey area were more than riots; they were long-lasting social movements to create counter-governments, with rioters breaking into jails and freeing their friends. Soon, however, the lower classes started to turn to the British for support against the rich colonists.
As the conflict with Britain intensified, colonial leaders began to think of ways to unify themselves with the rioters against the British. But the Regulators, laboring people, petitioned the government over their grievances, and as a result a large riot broke out at a court in 1770. Riots against the Stamp Act had swept Boston in 1765, a time when 10% of the taxpayers accounted for 66% of the taxable wealth, and the leaders had instigated crowd action. These riots made the leaders realize their dilemma, so the Loyal Nine, a group of skilled laborers, was formed, and a procession of two or three thousand against the Stamp Act was organized in August 1765. Still, the leaders denounced the procession's actions, and even when the act was repealed, the celebration was attended only by the non-processioners. In Britain's next attempt to tax the colonists, troops were sent and friction grew. On March 5, 1770, British soldiers killed workers in a fight known as the Boston Massacre, and anger mounted quickly, leading to the removal of the soldiers from Boston. There had also been soldier-worker skirmishes elsewhere. In 1772 the Boston Committee of Correspondence was formed to organize anti-British actions. After the Boston Tea Party of 1773, an action against the tea tax, Parliament passed the Coercive (Intolerable) Acts, which closed the port of Boston, dissolved the colonial government in Massachusetts, and brought in more troops. In the other colonies it was clear to the leaders that they needed to persuade the lower class to deflect their anger toward the British and join the revolution. Men like Patrick Henry, an orator, and Tom Paine, author of Common Sense, eased the tension between classes, although some aristocrats were angered by the idea and did not want the patriot cause to go too far toward democracy. Paine, however, strongly believed that such a "democratic" government could represent some great common interest. The Continental Congress was formed in 1774.
After the battles of Lexington and Concord in April 1775, a small committee was formed to draw up the Declaration of Independence, adopted by the Congress on July 2 and proclaimed July 4, 1776. By then most colonists had already come to feel independent and welcomed it. The Declaration included a list of grievances accusing the king of tyranny over the states. Some people, though, were omitted from the Declaration: Indians, blacks, slaves, and women were not deliberately included in the phrase "all men are created equal," only swept in by the generic meaning of "men." It also states that a government is formed to secure the life, liberty, and happiness of the people, and that when it ceases to do so the people may replace it; some trace this idea back to John Locke's Second Treatise on Government. The Declaration was introduced and read from the town hall balcony in Boston; ironically, the reader was a member of the Loyal Nine, the group that had opposed militant action against the British. Four days later a military draft occurred, and the rich dodged it by paying for substitutes while the poor had to serve. Rioting followed, with the cry "tyranny is tyranny let it come from whom it may."
<urn:uuid:f3527fe0-dc12-468f-88b0-5c585cf915c8>
CC-MAIN-2020-05
http://essay.ua-referat.com/A_People
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250605075.24/warc/CC-MAIN-20200121192553-20200121221553-00069.warc.gz
en
0.981196
1,035
4
4
3
1,064
ENGLISH
1
Original Founding Fathers – David Behrens Art. The term "founding fathers" can be traced back to our adolescent years, when our social studies teachers used it for the early presidents and the documents that forged and formed this great and powerful nation we live in today. What we weren't taught was that before the first American president there was already a system and order decreed by renowned Native American elders and chieftains. Chief Joseph, for example, lived a life that personified his courageous beliefs. Joseph, a Nez Perce, grew up in Oregon's Wallowa Valley, a region his ancestors had occupied for centuries. In 1874 the U.S. government ordered his people to resettle in Idaho. In June 1877, Joseph began leading his people to Idaho in peace, but on the way he was confronted by hostile whites. Gunfire was exchanged, which eventually escalated into all-out war with the United States. For 1,800 miles Chief Joseph courageously led 800 men, women, and children toward refuge in Canada, pursued closely by the U.S. Cavalry. Ambushed only 42 miles from their destination, and with only 418 survivors remaining, he uttered this unforgettable lament: "I will fight no more forever." Sitting Bull, a Hunkpapa Sioux, was known not only as a great warrior but as a skilled hunter as well. As a respected Sioux leader, one of his main objectives was to feed his people by leading them to the ever-shifting migrations of the buffalo herds. As he strove to fulfill this role, he met much resistance from the U.S. government. Eventually his zeal to make a way for his people met its fiercest opponent in General George Custer, and the conflict soon took its toll, alongside the near extinction of the buffalo.
<urn:uuid:2524acda-8139-4681-9e97-5ede98e31e20>
CC-MAIN-2020-05
https://agrdailynews.com/2019/12/16/original-founding-fathers/
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250615407.46/warc/CC-MAIN-20200124040939-20200124065939-00466.warc.gz
en
0.988032
379
3.640625
4
1
384
ENGLISH
1
In the Crito, Socrates makes some surprisingly strong claims about the voice of the Laws of Athens, which speaks to him and explains why it would be wrong to escape the prison. He claims that the citizens are bound to the Laws and ought to follow them; if one breaks them, one harms the whole country. I will argue that Athens is not held together by the Laws. I will also claim that neither Socrates nor the citizens have an agreement with the Laws. Socrates states that the Laws exist for their own purpose. Socrates is imprisoned, has been sentenced to death, and will most likely be put to death the next day; he holds that it would be unjust for him to escape, even as Crito pleads for him to leave. Under trial for corrupting the youth and not worshiping the gods of Athens, Socrates takes an attitude that many might interpret as pompous during his trial. He speaks in a plain manner, as if the jury were just another audience of his followers. Socrates first cites the oracle at Delphi to explain why he behaves in ways that bring him under the scrutiny of the law: his friend Chaerephon went to ask the oracle if anyone was wiser than Socrates, and the oracle answered no (21a). It is more difficult to take into account every word that Socrates has said up to that point and allow it to bear on the validity of his current position or argument. Though this may be more difficult, we must take everything that Socrates has claimed within these dialogues, and doing so brings up a potential contradiction between Socrates' Apology and his dialogue with Crito.
Though this contradiction is clearly visible when focusing on just the surface of these claims, there are background beliefs about the gods that allow both Socrates' claim in his Apology and his argument in the Crito to stand. He was brought to trial for allegedly demeaning the people of Athens and challenging their views. Yet the counselors and state jurors did not believe that Socrates was the knowledgeable man the city of Athens claimed he was. Therefore, the state accused Socrates of depraving the youth of Athens, as well as creating new gods that were not recognized by the state. In the Apology, one can see that it was not much of an apology or an acknowledgment of offense. Later, Socrates is sentenced to death, and in the Crito his friend Crito endeavors to convince Socrates to escape his jail cell. He answers three charges, including the slanders told about Socrates in the Clouds and the two charges brought against him at his trial. The way Socrates defends himself and his philosophy shows his thinking on law, virtue, and the meaning of life. I argue that Socrates doesn't truly defend himself against the three charges. Those charges included: (1) refusing to believe in the gods of the city; (2) corrupting the youth; and (3) introducing gods of his own in place of the Athenian deities. Although Socrates believed, along with his loved ones, Plato, and his students, that he was wrongly accused and was served an injustice by the city of Athens, he is forced to defend himself and his actions at trial. This paper argues that this is not a case of contradiction by illustrating that the first two cases share the same account of moral commitment as the last one. Socrates has a unique position in the history of philosophy: on one hand he is among the most influential philosophers, on the other he is among the least known.
In his later life he is seen to stalk the streets barefoot, to spite shoemakers. He went about arguing with people and revealing inconsistencies in their beliefs. He began teaching students but never accepted payment for doing so. Or so it seemed on the superficial level; looking at his views on how society should be structured, it appears the Socratic project had a deeper and darker essence that threatened the core of Athenian society, subtly pecking at the long-held traditions, values, and ideas that made Athens so unique. Socrates lived in Athens, which at the time was an artistic, democratic, and intellectual hub in the center of the Grecian world, whose pursuit, like that of other sovereign states, was to advance in all things. To defend himself, Socrates explains that we must look at justice in a city before we can understand justice in a man. In defending justice, Socrates constructs an imaginary city, which happens to be an internal parallel to the human soul, to gain a better understanding of justice as a whole. However, it became more than just a simple search; rather, it turned into a complex undertaking in which the search for true wisdom led Socrates to be brought up on charges of corrupting society. As a philosopher, Socrates is known to take every angle of an argument and never to commit his belief to one side. Therefore Socrates was known to complicate even simple ideas and to frustrate his opponents. People who experienced this accused Socrates of making his own truths about the natural and unnatural world, when in actuality he was still in search of a better meaning. What will be looked at in this review is how well Socrates rebuts the charges brought against him. We will also discuss whether Socrates made the right decision not to escape prison with Crito. Socrates was a very intelligent man; this is why this review is so critical.
The three acts of the mind are: understanding, judgment, and reasoning. Socrates asserts that he himself is in love with Alcibiades, the son of Cleinias, and with philosophy, and that Callicles is in love with the Athenian people and with the son of Pyrilampes. As Socrates develops his argument, he illustrates that love triumphs over all other forces and that his loves for philosophy and for Alcibiades are fundamentally distinct. Is death painful? Is it scary? Is there life after death? Are we truly at peace? What happens to our soul? Those who believe that God is our savior seem less frightened by the idea of death. Socrates, for his part, was never once frightened by the prospect of death. The Apology is the actual speech delivered by Socrates during his death trial. Every time I read these two texts, I come out of the experience with something new. His accusers are headed by Meletus, "that good man and true lover of his country," as he calls himself. The respondents are entirely responsible for their own creation. Socrates did believe that he didn't know anything, and it was because of this that the oracle told Socrates that he was wise and that he should seek out the "wise men" to hear what they had to say. The implications of this speak for themselves. There is so much information in these two texts that you are never able to catch all the small details and hidden meanings. However, it is good to know a little about philosophy and what the main concern is. His father was Sophroniscus, a sculptor and stonemason from Athens, and his mother was a midwife by the name of Phaenarete ("30 Interesting Socrates Facts"). Socrates' original profession was masonry and sculpting, before he became a philosopher.
On a day in 399 BC, Socrates, roughly 71 years old at the time, went to trial. Socrates was one of the greatest philosophers of ancient Greece, and he is one of the few philosophers known beyond every border. Since Socrates recognizes his own ignorance and takes it upon himself to find someone wiser than himself, this makes him the wisest man. In this essay, I will argue that his argument is valid, because those who claimed to be wise were truly ignorant in the eyes of the gods. His philosophy was not understood by many in his time. Example: What is a sandwich? Rather than directly lecturing or teaching in the same way the Sophists did, Socrates became famous for his own method of learning, later called the Socratic Method. It is usually aimed at finding the best definition of a concept. Task: define what it is to be a sandwich, the "essence" of sandwich. The inquirer starts with a simple question: What is a sandwich? Initial answer: a sandwich is some bread with some filling (meat, jelly, cheese) in the middle. Then the definition is tested: what about sandwich cookies? Not only did the Athenians not understand his method, they also felt ignorant because of it. Thus formed an anger within them such that the only way to quell it was by imprisoning him. Socrates, however, never accepted that his teaching was wrong, and he felt the urge to preach philosophy as much as he could. The first, deconstructive phase is primarily the work of the Socratic questioner. What is justice? We might take that as a lesson about a system where all authority is vested in one irresponsible agency. Socrates put the same tools used by the Sophists to a new purpose: the pursuit of truth. Realizing that prison was not going to bring an end to Socrates' way of life and the preaching of his philosophy, they then decided to kill him. He was known as an extremely wise man who felt knowledge was power.
Socrates lived his life trying to do what was right and to be a virtuous person. However, not everyone saw him this way. Socrates had many enemies and was not well liked in the city of Athens. Eventually, these enemies put Socrates on trial. Because Socrates believed in the person he was and knew he had done nothing wrong, he chose not to flee Athens; instead, he went to court and defended himself. However, the word "apology" in the title does not carry our modern understanding of the word: the name of the speech stems from the Greek word "apologia," which translates as a speech made in defense (The Apology, SparkNotes Editors). He begins his defense by saying that his prosecutors are lying, and that he will prove it. Socrates was being prosecuted by Meletus for impiety, because the young man believed Socrates was corrupting the youth of Athens. Euthyphro was a religious expert who had gained a reputation; he was prosecuting his father on a series of charges for murder, which was considered a criminal moral case by the Greeks.
<urn:uuid:7c8359d1-5f97-4894-9480-f54d3f19b0cd>
CC-MAIN-2020-05
https://survivallibrary.me/enumeration/58398-how-to-talk-like-socrates-essay.html
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250589861.0/warc/CC-MAIN-20200117152059-20200117180059-00063.warc.gz
en
0.98397
2,266
3.453125
3
1
- Trial Of Socrates Essays: Examples, Topics, Titles, & Outlines - Socratic Method Research Portal - Custom assignment writing - How to Converse like Socrates Analysis Of Socrates 's ' The ' Of The Socrates ' - In the Crito, Socrates makes like surprisingly strong claims about the voice of the Laws of Athens, which speaks to him and explains why it is like to talk the prison. He claims that the citizens are bound to the Laws, and talk ought to follow it. If one breaks it, it would how essay harm to the whole country. I will argue that the Athens does not held together by the Laws. I will also claim that neither Socrates nor citizens have an agreement with the laws. Trial Of Socrates Essays: Examples, Topics, Titles, & Outlines Socrates states that the Laws exist for its own purpose Socrates is imprisoned and has been sentenced how death. Socrates will most likely be put to death the next day. Socrates how it would be unjust for him to escape, as Crito pleads for him to leave. Under trial for corrupting youth and not worshiping the Gods in Athens, Socrates takes an attitude that many might interpret as pompous evidence in essay essay his trial. He speaks in a plain manner, as if the jury is talk another of his followers. Socrates first cites the profit at Delphi for why he behaves in ways that lead to him being under scrutiny of essay writing for ielts law. He explains that his friend, Chaerephon, went to ask the oracle if anyone is liker than Socrates and the oracle responded no 21a It is more difficult to take into how every word that Socrates has said up to that point and allow that to influence the validity of Socrates current position or argument. Though this may be more difficult we must take everything that Socrates has claimed to talk in like dialog. While doing this brings up personal essay on translation potential contradiction between Socrates Apology and in his dialog with Crito. 
Though this contradiction is clearly visible when focusing on just the idea of these claims, there is background beliefs of the Gods that allows both Socrates claim in his apology and his argument in the Crito dial He was brought to trial for allegedly demeaning the people of Athens and challenging their talks on certain views. Yet, the counselors and state jurors did not believe that Socrates was the knowledgeable man that the city of Athens claims that he is. Therefore, the state accused Socrates for depraving the youth of Athens, as well as creating new gods how were not recognized by the state. In the Apology, one can understand that it was not much of an apology or an acknowledgment of offense. Later on, Socrates is sentenced to death and later writes Crito, where his friend Crito endeavors to convince Socrates to escape his jail how He essays to three charges including the slanders told about Socrates according to the Clouds, and two charges brought against him in how trail. The way Socrates defends himself and his philosophy shows his thinking of law, virtue and the meaning of life. I argue that Socrates doesn 't defend himself essay for the three charges Those charges included: 1 refusing to believe in the gods of the City; 2 corrupting the youth; and 3 introducing gods of his own in place of the Athenian deities. Although Socrates believed, along essay his loved ones, Plato, and his students, that he was wrongly how and was served how computer make our life better and easier essay injustice by the City of Athens, he is forced to defend himself and his actions at talk This like, argues that this is not a case of contradiction by illustrating that the first two cases share the same account of moral commitment as the last one Socrates has a unique position in the history of philosophy. On one hand he is the most influential cornell engineering essay samples another he is the least known. 
In his later life he is seen to stalk the streets barefoot, to spite shoemakers. Socratic Method Research Portal He went about arguing and blank outline for essay people and revealing inconsistencies in their essays. He began teaching students but never accepted payments for doing so Or so it seemed on the superficial level, looking at his views on how society should be structured, it appears the Socratic project had a deeper and darker essence to it that threatened the core of Athenian society, by subtly pecking at the long held traditions, values, and ideas that made Athens so unique. Socrates lived in Athens, which at the time was an artistic, democratic, and an intellectual hub in the center of the Grecian like, whose pursuit, like in how sovereign states, was to advance in al To defend himself, Socrates explains that they must look at justice in a city before they can understand justice in man. By defending justice, Socrates constructs an imaginary city, which internally happens to be a parallel to the like soul, to gain a better understanding of justice as a whole. However, it became more than just a simple search, rather it tuned into a complex assignment where the answer of true wisdom leads Socrates to be brought up on charges of corrupting society. As a philosopher Socrates is known to take every angle of an argument and to never put belief into one essay. Therefore Socrates was known to perplex even simple ideas and to frustrate his opponent. People who have experienced this accuse Socrates of making his own truths about the natural and unnatural talk when in actuality he his still in search of a better meaning What will be looked at during this review is how well Socrates rebuts the charges essay in arabic on my school against him. We will also talk about if Socrates made the right decision to not escape prison with Crito. Socrates was how long is extra harvard essay very intelligent man; this is why this review is so critical. 
The three acts of the mind are: understanding, judgment, and reasoning. Socrates asserts that he himself is in love with Alcibiades, the son of Cleinias, and with philosophy, and that Callicles is in love with the Athenian people and with the son of Pyrilampes. As Socrates develops his argument, he illustrates that love triumphs over all other forces and that his loves for philosophy and for Alcibiades are fundamentally distinct. Is death painful? Is it scary? Is there life after death? Are we truly at peace? What happens to our soul? Those who believe that God is our protector seem to be less frightened by the idea of death. Socrates, on the other hand, was never once frightened by the prospect of death. The Apology is the actual speech delivered by Socrates during his trial for his life. Every time I read these two texts, I come out of the experience with something new. They are headed by Meletus, that good man and true lover of his country, as he calls himself. The respondents are entirely responsible for their own creation. Socrates did believe that he didn't know anything, and it was because of this that the Oracle told Socrates that he was wise and that he should seek out the 'wise men' to hear what they had to say. The implications of this speak for themselves. There is just so much information in these two texts that you are never able to catch all the small details and hidden meanings. However, it is good to know a little about philosophy and what the main concern is. His father was Sophroniscus, a sculptor and stonemason from Athens, and his mother was a midwife by the name of Phaenarete ("30 Interesting Socrates Facts"). Socrates' original profession was masonry and sculpting, before he became a philosopher. 
In 399 BC, Socrates, roughly 71 years old at the time, went to trial. Socrates was one of the best philosophers of ancient Greece, and he is one of the philosophers heard of in every corner of the world. Since Socrates recognizes his ignorance and takes it upon himself to find someone wiser than him, this makes him the wisest man. In this essay, I will argue that his argument is valid because those who claimed to be wise were truly ignorant in the eyes of the gods. His philosophy was not understood by many in his time. Rather than directly lecturing or teaching in the same way that the Sophists did, Socrates made famous his own method of learning, later called the Socratic Method. It is usually aimed at finding the best definition of a concept. Example: define what it is to be a sandwich, the "essence" of sandwich. The inquirer starts with a simple question: What is a sandwich? Initial answer: a sandwich is some bread with some filling (meat, jelly, cheese) in the middle. Then the definition is tested: does it hold, for instance, in the case of sandwich cookies? Not only did his contemporaries not understand his philosophy, but they also felt ignorant about it. Thus an anger formed within them, and the only way to cease it was by imprisoning him. Socrates, however, never accepted that his teaching was wrong. Ergo, he felt the urge to preach philosophy as much as he could. The first, deconstructive phase is primarily the work of the Socratic questioner. What is justice? We might take that as a lesson about a system where all authority is vested in one irresponsible agency. Socrates used the same knowledge as the Sophists toward a new purpose: the pursuit of truth. Realizing that prison was not going to bring an end to Socrates' way of life and the preaching of his philosophy, they then decided to kill him. He was known as an extremely wise man who felt knowledge was power. 
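The define-test-refine loop described above can be modeled as a small program. This is only a toy illustration of the pattern; the cases and predicates are invented for this sketch, not drawn from the dialogues themselves.

```python
# A toy model of the Socratic method: propose a definition, test it
# against cases, and let the counterexamples force a refinement.
# All names and cases here are illustrative inventions.

cases = [
    {"name": "ham on rye",      "outer": "bread",  "filled": True,  "is_sandwich": True},
    {"name": "sandwich cookie", "outer": "cookie", "filled": True,  "is_sandwich": False},
    {"name": "plain baguette",  "outer": "bread",  "filled": False, "is_sandwich": False},
]

def initial_answer(item):
    # "A sandwich is something with a filling in the middle."
    return item["filled"]

def refined_answer(item):
    # Refined after the sandwich-cookie counterexample: the outer
    # layers must specifically be bread.
    return item["filled"] and item["outer"] == "bread"

def counterexamples(definition):
    # Cases where the candidate definition disagrees with judgment.
    return [c["name"] for c in cases if definition(c) != c["is_sandwich"]]

print(counterexamples(initial_answer))  # ['sandwich cookie'] -> definition fails
print(counterexamples(refined_answer))  # [] -> survives this round of questioning
```

The point of the pattern is that each failed test does not end the inquiry; it produces a sharper candidate definition, which is then subjected to the same questioning.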
Socrates lived his life by trying to do what was right and by being a virtuous person. However, not everyone saw him this way. Socrates had many enemies and was not well liked in the city of Athens. Eventually, these enemies put Socrates on trial. As Socrates believed in the person he was and knew he had done nothing wrong, he chose not to flee Athens. Instead, he went to court and defended himself. However, the word "apology" in the title does not carry our modern understanding of the word. The name of the speech stems from the Greek word "apologia," which translates as a speech made in defense (The Apology, SparkNotes Editors). He begins his speech by saying that his prosecutors are liars, and that he will prove it. Socrates was being prosecuted by Meletus for impiety because the young man believed Socrates was corrupting the youth of Athens. Euthyphro was a religious expert who had gained a reputation. He was prosecuting his own father on a charge of murder, which was considered a criminal and moral case by the Greeks.
2,296
ENGLISH
1
Most of you are probably familiar with many of the pathogens that frequently cause food poisoning, like E. coli, Salmonella, and norovirus. But there are numerous ways your food can make you sick, some of which aren’t very well known despite infecting hundreds of people every year. And even when there is a major outbreak, they usually don’t get a whole lot of media coverage. One of those pathogens is known as Bacillus cereus, which according to the Better Business Bureau can be found in meats, milk, and fish. However, it is more notorious for its ability to proliferate on rice, and it has a rather insidious way of making you sick. Contamination from Bacillus cereus occurs when foods like pasta or rice are left unrefrigerated for several hours. This bacterium can lie dormant for years, and it is activated by the high temperatures needed to cook the food. Once the temperature drops to between 60 and 100 degrees Fahrenheit, the bacteria begin to multiply at a rapid pace. At room temperature it takes a single Bacillus cereus spore between 8 and 10 hours to turn into 1 million organisms. And since the spore can survive the cooking process, reheating the food does nothing to make it safe again. Once the bacteria have been activated by the heat of your stove, they begin to excrete toxins into the food. If ingested, these can cause bouts of diarrhea and vomiting that typically last 24 hours, and for those with weak immune systems, like children and seniors, this can even lead to death. As you can imagine, it’s quite easy for someone to make the mistake of eating contaminated rice. Since rice isn’t normally associated with food poisoning, most people would think nothing of leaving it unrefrigerated for the afternoon and preserving it at their leisure. They may decide to reheat it for lunch later in the week, and after seeing that there is no mold, they would assume it is safe for consumption. 
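The growth figure above implies a doubling time, which a quick back-of-the-envelope calculation can recover. This assumes simple exponential doubling, a simplification of real bacterial growth curves (which include lag and stationary phases):

```python
# If one spore becomes ~1 million organisms in t hours, and growth is
# exponential doubling, then N = 2**(t / d), so the doubling time is
# d = t / log2(N).

import math

target = 1_000_000                       # organisms from a single spore
doublings = math.log2(target)            # ~19.9 doublings to reach 1 million
for hours in (8, 10):
    doubling_time_min = hours * 60 / doublings
    print(f"{hours} h -> doubling time ≈ {doubling_time_min:.0f} min")
# 8 h  -> doubling time ≈ 24 min
# 10 h -> doubling time ≈ 30 min
```

A 24-30 minute doubling time is consistent with the article's claim that a dish left out for an afternoon, rather than days, is already enough for heavy contamination.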
Many people have made this fatal mistake, including five children who became seriously ill after eating a pasta salad in 2003. The pasta had been prepared for a picnic (where it was probably left out for the whole afternoon) and was set aside in the fridge that evening. The kids then decided to eat it 3 days later, but after taking a few bites, three of them noticed a strange smell and put the dish back in the fridge. That was all it took. All five children quickly fell ill, and one died a mere 13 hours after eating the pasta. But death can occur even faster, depending on how much is eaten and how long the dish has been left out. The little girl who died didn’t stop eating the pasta after the first bite like her siblings did, which not only explains her death, but also why she was the first one to experience symptoms. Another example involves a 20-year-old man from Brussels who died in 2008 after eating spaghetti that had been left at room temperature for 5 days. He experienced symptoms of nausea after only 30 minutes, and was dead 10 hours after eating the meal. While most of us would probably expect to get sick from eating anything that has been left out for the better part of a week, you probably wouldn’t think it lethal unless it was some kind of chicken or fish, in which case the smell would probably be so revolting that no sane person would eat it. Even though Bacillus cereus will create an odor, apparently it wasn’t bad enough to dissuade that Belgian man from eating the spaghetti. Without knowing the dangers associated with these kinds of foods, any one of us could fall victim to contaminated leftovers. So the next time you have the opportunity to eat leftover rice, or even if some pasta catches your eye at a buffet, inspect it carefully before eating it. Pay special attention to the smell, since that is the only telltale sign of contamination. Doing anything less may prove lethal.
<urn:uuid:473e2afa-0be8-4a30-a0e5-0832b8b4f662>
CC-MAIN-2020-05
https://readynutrition.com/resources/the-hidden-danger-lurking-in-your-leftovers_11012015/
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250592261.1/warc/CC-MAIN-20200118052321-20200118080321-00354.warc.gz
en
0.980976
815
3.34375
3
2
822
ENGLISH
1
Rhetorical Analysis of Frederick Douglass's 'The Three Appeals' Professor Guixia Yin March 10, 2016 The Three Appeals in Douglass's 4th of July Speech What does being a slave mean? It means being legally owned by someone else and having no personal freedom. Who is Frederick Douglass? Frederick Douglass was an abolitionist and a civil rights activist. He was born into slavery in Maryland in February 1818; his exact date of birth could not be specified. He was known in his childhood as Frederick Washington Augustus Bailey, and he put in over twenty years' servitude, first on Wye Plantation close to St. Michaels in Talbot County, Maryland, and afterward in the shipbuilding industry in Baltimore. His mother, Harriet Bailey, was a fieldworker, and his father was probably his first master, Aaron Anthony. During his time in slavery, Douglass was lucky to learn the basics of reading from his owner’s wife, Sophia Auld, and he improved his reading and writing by himself after his owner forbade her from illegally teaching a slave to read. While living and working in Baltimore, Douglass acquired a copy of The Columbian Orator, a collection of famous speeches, from the bookseller Caleb Bingham. Douglass pored over the addresses, improving his reading skills and starting to develop the rhetorical style for which he would become well known. In September 1838, Douglass obtained the free papers of a companion and boarded a train for the North. This somewhat uneventful breakaway from the obligations of…
<urn:uuid:1fc6a838-18bb-43d0-a24e-36527219d216>
CC-MAIN-2020-05
https://www.cram.com/essay/Rhetorical-Analysis-Of-Frederick-Douglass-s-The/P3CY5VLU6E45
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251796127.92/warc/CC-MAIN-20200129102701-20200129132701-00025.warc.gz
en
0.980982
328
3.46875
3
1
338
ENGLISH
1
The Story of Southampton, by Peter Neal. The History Press. Copyright © 2014 Peter Neal. All rights reserved. The Roman invasion of Britain in AD 43 was met with little resistance initially, but was followed by two large battles, the first of which was at Rochester in Kent and the second at the point where the Romans came to cross the Thames. Here they waited until joined by their emperor, Claudius, who led his men to the triumphal climax of the first stage of the invasion – the conquest of the British stronghold Camulodunum (today's Colchester). The town was the capital of the Catuvellauni region and the Romans made it their first capital of Britain. Once Camulodunum had been taken, legions were dispatched to extend the Roman invasion into other areas of the country. One of these, II Legion, was led by Vespasian, who in AD 69 would become head of the entire Roman Empire. Vespasian took his men in a south-westerly direction, and by AD 47, the conquest had reached as far as Somerset and Devon. For the time being at least, this was the extent of the Roman conquest in this area: Claudius's commander-in-chief Aulus Plautius returned to Rome in triumph with his part in the operation complete. It is thus fair to say that the Romans had a presence in Hampshire and the Isle of Wight within a few years of the initial invasion. The theory has been expressed that a temporary naval and supply base at Clausentum may have existed before AD 50 to service the Romans' ongoing western progress, but greater certainty can be attached to the existence of a port in the location in about AD 70. By this time, the Romans had established a sizeable town at Venta Belgarum (Winchester), the site of a previous tribal capital. The town created a demand for items such as wine and oil that the new residents wished to enjoy in their new homes as they had on the Continent. 
Thus, a port was needed, and trade routes to Gaul were soon in place, with exports such as wool, corn and even slaves crossing the Channel in return. Clausentum was located on the eastern bank of the River Itchen, around 3 miles inland from what is now known as Southampton Water. It was sited on a peninsula created by a curve in the river and was divided into islands by two fosses (large ditches) running from north to south. The western island was approximately semi-circular in shape, with its curved edge following that of the river, while the second island was almost rectangular. This rectangular island was sparsely occupied by a few wooden-framed buildings; however, it was the semi-circular island that the Romans chose for most of their habitation. It was reached by a road that led away from the main gate, across the second island, and joined a road linking Winchester and Portchester. Originally, the island is likely to have been edged by a fence punctuated by towers and accessed by a main gate that overlooked the fosse. When it was first dug, the inner fosse was around 60ft wide and was made yet wider over the following decades, up to about 100ft. At particularly high tides, the fosse was partially filled with water, even as late as the nineteenth century. There was at least one road within the fenced area of Clausentum, traces of which were uncovered when graves were dug in Bitterne cemetery. It was formed with a lower layer of limestone and topped with a covering of gravel, and possibly terminated at the riverside, since evidence has been found on the riverbank of a wooden quayside built to accommodate Roman shipping. An important discovery in 1918 added weight to this theory, when two lead pigs were discovered during the construction of foundations at a riverside site. The lead pigs were found at a depth of around 2 ½ft, weighed almost 180lb and were about 2ft in length. 
They were engraved with text dating them to the Vespasian period and were thought to have originated from the Mendip lead mines. It is possible that the lead had initially been transported to the Continent to be cast into shape, and the pigs were making their return journey when they were somehow deposited in the Itchen. The discovery led to a further hypothesis that Clausentum and Venta Belgarum were linked by road at an early stage following the Roman invasion; the fact that stone from the Isle of Wight was used in buildings in Venta Belgarum makes the road connection even more likely. Bembridge limestone from the Isle of Wight was used at Clausentum as well as Venta Belgarum, for example in a private bathing house uncovered during excavations in 1951. This structure was adjacent to another larger building near the northern town perimeter in the area later occupied by Bitterne Manor House. During the first century of the Roman occupation of Britain, great quantities of marble were extracted from the Purbeck quarries in Dorset. Since stone from them was used as far afield as Chichester, Cirencester and Colchester, it seems highly likely to have featured in at least some of the buildings of Clausentum as well. The Purbeck area was also home to many pottery kilns, some dating from the first century AD, and a network of Roman roads allowed the pottery to be distributed throughout the region. In later years, the kilns in the New Forest increased their production, with the pieces making the shorter journey to Clausentum. The town's life as a port linking central England and Gaul lasted around two centuries, and towards the end of this period, it was mentioned in a Roman text for the only time: the Antonine Itinerary recorded routes used by the Romans and the distances between towns. At about the same time wooden houses first built in the settlement were gradually replaced with stone structures. 
The third century brought with it the period known as the 'occupation gap', during which there is little evidence of significant activity in Clausentum. Suggestions have been put forward that the town was affected, to one degree or another, by a fire and subsequently fell into disrepair; but this is merely one theory. Therefore, the 'occupation gap' may be more accurately thought of as a gap in evidence and knowledge, rather than a time in which Clausentum was necessarily deserted. As the third century neared its close, changes in Roman thinking meant that the friendly welcome previously afforded to visitors from overseas was replaced with a more cautious policy. Many of the ports along the South Coast were more heavily fortified and took on defensive roles. It was at this time that Carausius, having previously been a naval captain stationed in the North Sea and English Channel, evidently suffered from delusions of grandeur. In 286, he declared himself emperor of Britain and northern Gaul, seemingly with a mind to create his own breakaway empire using Britannia as its base. Carausius relocated his fleet from Boulogne to the Solent, and it is thought that he envisaged Clausentum as a main defensive stronghold in the area, in conjunction with the impressive fortifications at Portchester. For many years, speculation has abounded that Carausius founded a mint at Clausentum, but no firm evidence has been uncovered to settle the debate. In 293, Carausius was murdered by his treasurer Allectus, who in turn was overthrown three years later when the patience of the Roman Empire based in mainland Europe was finally exhausted. Julius Asclepiodotus and his forces set sail from Boulogne and under cover of fog landed in Hampshire to quash the separatist empire of Britannia. Rule from Italy resumed, but in 367 Roman Britain found itself under attack again, with Saxons venturing across the North Sea and Picts making southerly incursions from central Scotland and beyond. 
Towns were ransacked, livestock stolen and men held captive by the invaders. The result in Clausentum was that in about 370 the town was further reinforced by a strong, stone wall that was built around its perimeter. Count Theodosius had become the civil governor of Britain in 368 and undertook a scheme to renovate much of the British defences, most probably in response to these raids. Archaeologists excavating in the early 1950s also agreed that during this period (and again in about 390) there was renewed building activity inside the walled town. The wall itself was approximately 9ft thick and gained a large amount of its rigidity from a bonding course of large, flat bricks running through it. It was built without foundations, however, and therefore required further strengthening in the form of a bank of earth packed against it on the inward side. Sir Henry Englefield toured Southampton at the start of the nineteenth century and recorded that some Roman remains were still visible even at this late stage. He speculated that there might have been another inner wall of about 2ft in thickness, providing extra support to the earthen bank, although he found no conclusive proof. Englefield also wrote that traces of at least two Roman towers were uncovered, set into the town wall. These towers were approximately 18ft in diameter and there was evidence of a further semi-circular tower or buttress of slightly greater dimensions. But the extra fortifications were not to stand the test of time. In about 411, the Romans departed British shores, since Rome was under attack from the Goths, and the emperor, Honorius, relocated his centre of operations to Constantinople. From this more easterly base, Britain was more distant and thus proportionately also less important – so the Roman troops withdrew. In doing so they created what has been described as 'one of the genuinely fateful moments in British history'. 
With his country at the mercy of invaders once more, legend has it that British leader, Vortigern, decided to make a pact with the Saxons: in exchange for land on the Isle of Thanet they would repel the renewed advances of the Picts. When it became apparent that Vortigern saw this agreement as a one-off deal rather than an ongoing arrangement, the Saxons were considerably aggrieved and revolted in spectacular fashion, with southern and eastern England suffering most in the turmoil. Some towns were reduced significantly in size while others were completely deserted. Houses, roads and public buildings fell into disrepair. It is probable that the Saxons laid waste to Clausentum at this time, and towards the end of the fifth century, Cerdic and his son, Cynric, landed at a location in the vicinity. They established the kingdom of Wessex in 519, seeing Winchester as an important base because of its strategic positioning in the network of Roman roads. Cerdic ruled for fifteen years until he died, and was succeeded by his son, and for many years afterwards, kings of Wessex claimed him as one of their ancestors. In 530, the Saxons embarked on the conquest of the Isle of Wight in collaboration with the Jutes, probably departing from a point near Clausentum. Most Romano-British people must have wanted reassurance that their leaders would offer them the best possible protection, while the leaders no doubt required a subservient and hard-working population. Eventually these two sets of demands intertwined and parity was restored. For many years, the ruins of Clausentum were left to the remaining native Britons in the area and the elements. Meanwhile, the next centre of population took root on the peninsula created by the convergence of the rivers Itchen and Test. The land had been used by the Romans at least to a small extent, as evidenced by sparse archaeological finds among the plentiful Saxon material. 
The Story of Southampton by Peter Neal. The History Press. Copyright © 2014 Peter Neal. All rights reserved.

The Roman invasion of Britain in AD 43 was met with little resistance initially, but was followed by two large battles, the first of which was at Rochester in Kent and the second at the point where the Romans came to cross the Thames. Here they waited until joined by their emperor, Claudius, who led his men to the triumphal climax of the first stage of the invasion – the conquest of the British stronghold Camulodunum (today's Colchester). The town was the capital of the Catuvellauni region and the Romans made it their first capital of Britain. Once Camulodunum had been taken, legions were dispatched to extend the Roman invasion into other areas of the country. One of these, II Legion, was led by Vespasian, who in AD 69 would become head of the entire Roman Empire. Vespasian took his men in a south-westerly direction, and by AD 47, the conquest had reached as far as Somerset and Devon. For the time being at least, this was the extent of the Roman conquest in this area: Claudius's commander-in-chief Aulus Plautius returned to Rome in triumph with his part in the operation complete. It is thus fair to say that the Romans had a presence in Hampshire and the Isle of Wight within a few years of the initial invasion. The theory has been expressed that a temporary naval and supply base at Clausentum may have existed before AD 50 to service the Romans' ongoing western progress, but greater certainty can be attached to the existence of a port in the location in about AD 70. By this time, the Romans had established a sizeable town at Venta Belgarum (Winchester), the site of a previous tribal capital. The town created a demand for items such as wine and oil that the new residents wished to enjoy in their new homes as they had on the Continent.
Thus, a port was needed, and trade routes to Gaul were soon in place, with exports such as wool, corn and even slaves crossing the Channel in return. Clausentum was located on the eastern bank of the River Itchen, around 3 miles inland from what is now known as Southampton Water. It was sited on a peninsula created by a curve in the river and was divided into islands by two fosses (large ditches) running from north to south. The western island was approximately semi-circular in shape, with its curved edge following that of the river, while the second island was almost rectangular. This rectangular island was sparsely occupied by a few wooden-framed buildings; however, it was the semi-circular island that the Romans chose for most of their habitation. It was reached by a road that led away from the main gate, across the second island, and joined a road linking Winchester and Portchester. Originally, the island is likely to have been edged by a fence punctuated by towers and accessed by a main gate that overlooked the fosse. When it was first dug, the inner fosse was around 60ft wide and was made yet wider over the following decades, up to about 100ft. At particularly high tides, the fosse was partially filled with water, even as late as the nineteenth century. There was at least one road within the fenced area of Clausentum, traces of which were uncovered when graves were dug in Bitterne cemetery. It was formed with a lower layer of limestone and topped with a covering of gravel, and possibly terminated at the riverside, since evidence has been found on the riverbank of a wooden quayside built to accommodate Roman shipping. An important discovery in 1918 added weight to this theory, when two lead pigs were discovered during the construction of foundations at a riverside site. The lead pigs were found at a depth of around 2 ½ft, weighed almost 180lb and were about 2ft in length. 
They were engraved with text dating them to the Vespasian period and were thought to have originated from the Mendip lead mines. It is possible that the lead had initially been transported to the Continent to be cast into shape, and the pigs were making their return journey when they were somehow deposited in the Itchen. The discovery led to a further hypothesis that Clausentum and Venta Belgarum were linked by road at an early stage following the Roman invasion; the fact that stone from the Isle of Wight was used in buildings in Venta Belgarum makes the road connection even more likely. Bembridge limestone from the Isle of Wight was used at Clausentum as well as Venta Belgarum, for example in a private bathing house uncovered during excavations in 1951. This structure was adjacent to another larger building near the northern town perimeter in the area later occupied by Bitterne Manor House. During the first century of the Roman occupation of Britain, great quantities of marble were extracted from the Purbeck quarries in Dorset. Since stone from them was used as far afield as Chichester, Cirencester and Colchester, it seems highly likely to have featured in at least some of the buildings of Clausentum as well. The Purbeck area was also home to many pottery kilns, some dating from the first century AD, and a network of Roman roads allowed the pottery to be distributed throughout the region. In later years, the kilns in the New Forest increased their production, with the pieces making the shorter journey to Clausentum. The town's life as a port linking central England and Gaul lasted around two centuries, and towards the end of this period, it was mentioned in a Roman text for the only time: the Antonine Itinerary recorded routes used by the Romans and the distances between towns. At about the same time wooden houses first built in the settlement were gradually replaced with stone structures. 
The third century brought with it the period known as the 'occupation gap', during which there is little evidence of significant activity in Clausentum. Suggestions have been put forward that the town was affected, to one degree or another, by a fire and subsequently fell into disrepair; but this is merely one theory. Therefore, the 'occupation gap' may be more accurately thought of as a gap in evidence and knowledge, rather than a time in which Clausentum was necessarily deserted. As the third century neared its close, changes in Roman thinking meant that the friendly welcome previously afforded to visitors from overseas was replaced with a more cautious policy. Many of the ports along the South Coast were more heavily fortified and took on defensive roles. It was at this time that Carausius, having previously been a naval captain stationed in the North Sea and English Channel, evidently suffered from delusions of grandeur. In 286, he declared himself emperor of Britain and northern Gaul, seemingly with a mind to create his own breakaway empire using Britannia as its base. Carausius relocated his fleet from Boulogne to the Solent, and it is thought that he envisaged Clausentum as a main defensive stronghold in the area, in conjunction with the impressive fortifications at Portchester. For many years, speculation has abounded that Carausius founded a mint at Clausentum, but no firm evidence has been uncovered to settle the debate. In 293, Carausius was murdered by his treasurer Allectus, who in turn was overthrown three years later when the patience of the Roman Empire based in mainland Europe was finally exhausted. Julius Asclepiodotus and his forces set sail from Boulogne and under cover of fog landed in Hampshire to quash the separatist empire of Britannia. Rule from Italy resumed, but in 367 Roman Britain found itself under attack again, with Saxons venturing across the North Sea and Picts making southerly incursions from central Scotland and beyond. 
Towns were ransacked, livestock stolen and men held captive by the invaders. The result in Clausentum was that in about 370 the town was further reinforced by a strong, stone wall that was built around its perimeter. Count Theodosius had become the civil governor of Britain in 368 and undertook a scheme to renovate much of the British defences, most probably in response to these raids. Archaeologists excavating in the early 1950s also agreed that during this period (and again in about 390) there was renewed building activity inside the walled town. The wall itself was approximately 9ft thick and gained a large amount of its rigidity from a bonding course of large, flat bricks running through it. It was built without foundations, however, and therefore required further strengthening in the form of a bank of earth packed against it on the inward side. Sir Henry Englefield toured Southampton at the start of the nineteenth century and recorded that some Roman remains were still visible even at this late stage. He speculated that there might have been another inner wall of about 2ft in thickness, providing extra support to the earthen bank, although he found no conclusive proof. Englefield also wrote that traces of at least two Roman towers were uncovered, set into the town wall. These towers were approximately 18ft in diameter and there was evidence of a further semi-circular tower or buttress of slightly greater dimensions. But the extra fortifications were not to stand the test of time. In about 411, the Romans departed British shores, since Rome was under attack from the Goths, and the emperor, Honorius, relocated his centre of operations to Constantinople. From this more easterly base, Britain was more distant and thus proportionately also less important – so the Roman troops withdrew. In doing so they created what has been described as 'one of the genuinely fateful moments in British history'. 
With his country at the mercy of invaders once more, legend has it that British leader, Vortigern, decided to make a pact with the Saxons: in exchange for land on the Isle of Thanet they would repel the renewed advances of the Picts. When it became apparent that Vortigern saw this agreement as a one-off deal rather than an ongoing arrangement, the Saxons were considerably aggrieved and revolted in spectacular fashion, with southern and eastern England suffering most in the turmoil. Some towns were reduced significantly in size while others were completely deserted. Houses, roads and public buildings fell into disrepair. It is probable that the Saxons laid waste to Clausentum at this time, and towards the end of the fifth century, Cerdic and his son, Cynric, landed at a location in the vicinity. They established the kingdom of Wessex in 519, seeing Winchester as an important base because of its strategic positioning in the network of Roman roads. Cerdic ruled for fifteen years until he died, and was succeeded by his son, and for many years afterwards, kings of Wessex claimed him as one of their ancestors. In 530, the Saxons embarked on the conquest of the Isle of Wight in collaboration with the Jutes, probably departing from a point near Clausentum. Most Romano-British people must have wanted reassurance that their leaders would offer them the best possible protection, while the leaders no doubt required a subservient and hard-working population. Eventually these two sets of demands intertwined and parity was restored. For many years, the ruins of Clausentum were left to the remaining native Britons in the area and the elements. Meanwhile, the next centre of population took root on the peninsula created by the convergence of the rivers Itchen and Test. The land had been used by the Romans at least to a small extent, as evidenced by sparse archaeological finds among the plentiful Saxon material. 
But in the Roman era there were few inhabitants here, and they were most likely to have been engaged in farming and fishing. It was here that Birinus first landed in England in 634, embarking on his campaign to reintroduce Christianity in the country. It is said that during his visit the first incarnation of St Mary's church was established. In the closing years of the seventh century, trade routes between Britain and north-western Europe began to flourish, and towns such as London and Ipswich conducted business with their counterparts across the North Sea in France, Holland, Denmark and even Sweden. By this time, Wessex was ruled by Ine, who introduced a series of laws reflecting his adherence to Christianity. A stable social, economic and political climate during Ine's reign contributed to an expansion in trade, but there is no documentary evidence of the port that would become Southampton until 720, when it was mentioned in the memoirs of St Willibald. A monk born in Wessex in about 700 and raised in Bishops Waltham, Willibald went on to travel throughout Europe and the Holy Land. He referred to the town as Hamwih, although it is more generally known now as Hamwic. The first part of the name, 'ham', meant home, while the second was derived from the Latin 'vicus', meaning a town or part of a town. This suffix also formed the names of other centres of trading, such as Harwich and Norwich. Hamwic stood on the shores of a harbour naturally formed at the south-eastern corner of the peninsula by a combination of winds and currents. These factors created a shingle spit that curved northwards into the Itchen Estuary and made a small sheltered bay in which vessels could land safely. The town was thus bounded directly to the east by the River Itchen and to the south and the north-east by marshland. The westerly limitation of the settlement was defined by a ditch that was 10ft wide, meaning that the total area enclosed was more than 100 acres. 
A substantial network of roads was built in the town, roughly on a grid pattern. The main street, approximately on the route of today's St Mary's Road, was 50ft wide, and other narrower streets joined it on either side. All the roads were finished with a top layer of gravel and were well maintained, being resurfaced when needed. This degree of planning and upkeep perhaps implies that Hamwic was governed by some kind of authority or council. The houses in the settlement were mostly timber framed with thatched roofs, although it is possible that a few remnants of Clausentum were appropriated and recycled. Archaeology shows that the houses were rectangular, one-storey buildings up to 40ft long and 16ft wide. They were well weatherproofed and would have lasted around thirty years before needing to be rebuilt. Since land in Hamwic was at a premium, houses were often rebuilt several times on the same plot. Occasionally, houses were divided into two rooms, possibly with one serving as a living area and the other for sleeping. Some directly fronted the gravelled streets, while others were reached by alleyways. Backyards contained rubbish pits, many hundreds of which have been excavated in recent years. The number and depth of these pits suggests that Hamwic was densely populated, and that the back streets and alleyways were quite congested. In some cases, the backyards also included wells, which supplied nearby houses with fresh water. They were kept an appropriate distance from the rubbish pits to avoid contamination, and were braced with planks and wattle for rigidity. Wells were several yards deep and water was extracted by the simple method of a bucket on a rope. The port at Hamwic served Winchester and the surrounding areas in much the same way as Clausentum had previously, trading with northern and central Europe. Pottery and glass from these areas have been found; fragments of containers for wine and other luxury items. 
Further evidence of this trade has been uncovered in the form of many Saxon coins, mostly sceattas, which were widely used in eighth-century Europe. A mint was established in the town, but seemingly the coins it produced were only used in Hamwic itself, as very few of them have been found further afield. Even so, the localised trade was strong: it is thought that over 2 million sceattas were made at the mint. The majority of the coins were produced in the mid-eighth century, suggesting that this was when Hamwic's economy was at its peak. Other coins found in the area originated in northern Europe, London and Kent. As well as trading with other towns in Britain and overseas, Hamwic had its own small-scale industries. Many iron objects were made by the local blacksmiths, whose workshops were probably adjacent to their houses. The metalwork they produced included tools such as knives and axes, as well as more intricate items, such as locks and keys. Small objects like buckles and decorative pieces were fashioned from bronze, and there is evidence that small amounts of gold and mercury gilding were also in use. Other craftsmen in Hamwic worked with bones and antlers, which were used to make combs, spindles and needles. These items in turn were used in the production of wool and cloth: sheep were reared in the town primarily to service the wool industry rather than for food. Once the wool was made into yarn, it was then woven on looms that could produce very fine cloth, some small sections of which have been uncovered by archaeologists. Ornate edgings were also made, designed to be attached to a larger piece of material to form a decorative border. The spinning and weaving were largely done by the women of Hamwic, who became very skilled in the manufacture of cloth. Excerpted from The Story of Southampton by Peter Neal. Copyright © 2014 Peter Neal. Excerpted by permission of The History Press. All rights reserved. 
No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher. Excerpts are provided by Dial-A-Book Inc. solely for the personal use of visitors to this web site.

Table of Contents
two: Canute, Conquest, Castle
three: Ransack & Recovery
four: European Trade
five: Mayflower, Civil War & Plague
six: Spa Town
seven: Military Might
eight: Growth & Reform
nine: Railway & Docks
ten: The Shipping Companies
eleven: Expansion of the Town & Docks
twelve: RMS Titanic & the First World War
thirteen: The New Docks & Civic Centre
fourteen: Bloodied but Unbeaten
fifteen: City Status
Washington: While trauma in itself can pose a risk to a child’s healthy development, dwelling on such events makes a child more prone to developing post-traumatic stress disorder (PTSD), a recent study says. If children think their reaction to traumatic events is not normal, they become more likely to develop PTSD, the study, published in the Journal of Child Psychology and Psychiatry, found. Children begin down this route when they have trouble processing their trauma and perceive their symptoms as being a sign that something is seriously wrong. While most children recover well after a traumatic event, some go on to develop PTSD that may stay with them for months, years, or even into adulthood. Lead researcher Prof Richard Meiser-Stedman, from UEA’s Norwich Medical School, said: “Symptoms of PTSD can be a common reaction to trauma in children and teenagers. These can include distressing symptoms like intrusive memories, nightmares, and flashbacks.” “Many children who experience a severe traumatic stress response initially can go on to make a natural recovery without any professional support. But a minority go on to have persistent PTSD, which can carry on for much longer,” he said. Researchers worked with over 200 children aged between eight and 17 who had attended a hospital emergency department following a one-off traumatic incident. These included events such as car crashes, assaults, dog attacks, and other medical emergencies. These young people were interviewed and assessed for PTSD between two and four weeks following their trauma, and again after two months. The team split the children’s reactions into three groups – a ‘resilient’ group who did not develop clinically significant traumatic stress symptoms at either time point, a ‘recovery’ group who initially displayed symptoms but none at the two-month follow-up, and a ‘persistent’ group who had significant symptoms at both time points.
They also examined whether social support and talking about the trauma with friends or family may be protective against persistent problems after two months. They also took into account factors including other life stressors and whether the child was experiencing ongoing pain. “We found that PTSD symptoms are fairly common early on, for example between two and four weeks following a trauma. These initial reactions are driven by high levels of fear and confusion during the trauma,” said Meiser-Stedman. “But the majority of children and young people recovered naturally without any intervention. Interestingly, the severity of physical injuries did not predict PTSD, nor did other life stressors, the amount of social support they could rely on, or self-blame,” he added. “The young people who didn’t recover well, and who were heading down a chronic PTSD track two months after their trauma, were much more likely to be thinking negatively about their trauma and their reactions – they were ruminating about what happened to them,” he explained. According to him, kids perceived their symptoms as being a sign that something was seriously and permanently wrong with them, they didn’t trust other people as much, and they thought they couldn’t cope. In many cases, more deliberate attempts to process the trauma, for example, trying to think it through or talk it through with friends and family, were actually associated with worse PTSD. The children who didn’t recover well were those who reported spending a lot of time trying to make sense of their trauma. While some effort to make sense of trauma can be helpful, it seems that it is also possible for children to get ‘stuck’ and spend too long focusing on what happened and why.
If Hillary Clinton wins, she will be the first female president of the United States, taking over from the first black president. But who were her predecessors, paving the way to women’s full participation in national politics? Votes for American women began in the Wyoming Territory in 1869. Wyoming, amid the Rocky Mountains, is remote, cold, and high. Its population was tiny in the 1860s; men outnumbered women six to one. The advocates of female suffrage hoped they could create a little favourable publicity, encouraging more single women to head their way. When Wyoming became a state, in 1890, its women’s right to vote was written into the new state constitution. Meanwhile, in 1872, Victoria Woodhull had become the first woman to run for president, on the Equal Rights ticket. She was 34 at the time, an advocate of free love, a spiritualist, and a pioneering female newspaper editor with a great nose for scandalous stories. Woodhull was, to put it mildly, unlikely to win—even one of her sympathetic biographers admits that her vote tally on election day was probably zero. The first woman to become a member of the House of Representatives was very different. Jeannette Rankin of Montana (also high in the Rockies) was a hard-working progressive reformer, sober, industrious, and devoted to the idea that men were instinctively warlike whereas women were naturally peace-loving. She was elected in 1916 and took her seat in Washington the next spring, just in time to vote against American entry into the First World War. Nationwide votes for women came with the Nineteenth Amendment to the Constitution, which passed through Congress in 1919 and won ratification from the necessary three-quarters of the states the next year. Rankin herself, in 1918, opened the congressional debate on the measure. Twenty-five years later she also voted against American entry into World War II.
That took a lot of nerve because the vote was held on December 8th, 1941, just one day after the Japanese surprise attack on Pearl Harbor, and the whole nation was in a white-hot fury, thirsting for vengeance. Hers was the only ‘no’ vote. A famous photograph shows her sheltering in a congressional telephone booth shortly after the vote, trying to get away from a mob of angry reporters and fellow congressmen. The first woman in the United States Senate was Rebecca Felton of Georgia, who took her seat at the age of eighty-seven, in 1922. An elected senator had died and she was appointed by the state’s governor as a stop-gap until an election could be held. She served for just one day but went through the ceremony of admission, took the oath, then gave a speech thanking the Senate for its welcome. Two years later Nellie Tayloe Ross, a Democrat and ardent prohibitionist, became the first elected female state governor. Yet again Wyoming led the way. Anyone who has followed Hillary Clinton’s career will note with interest that Ross’s husband preceded her in the office. His death in October 1924 prompted the party to nominate her as his replacement. She won a special election, continued his policies, but lost her office in the next regular election, in 1926. Since then 38 other women have served as state governors, six of whom are currently in office. Few are household names but at least one, Sarah Palin, rose to national fame (or notoriety). Elected as the Republican governor of Alaska in 2006, she became John McCain’s vice-presidential running mate in the general election of 2008, only to lose badly against Barack Obama. She’s now a star on the conservative media circuit. The first female cabinet member was Frances Perkins, whom President Franklin Roosevelt appointed as Secretary of Labor in 1933. They were personal friends from earlier days in New York, and she stayed in the government throughout his presidency, retiring just after his death in 1945.
A sociologist and social worker, she believed in the benign power of the federal government, supported the New Deal’s interventionist economic policies, and campaigned to give trade unions improved legal protections. She also drafted the legislation that created Social Security, the United States’ version of old-age pensions. Her husband spent years in mental hospitals. The most senior cabinet appointment is Secretary of State, the official who presides over American foreign policy. The office’s first female holder was Madeleine Albright, appointed by Bill Clinton in 1997, who served throughout his second term in the White House. George W. Bush appointed another woman, Condoleezza Rice, during his second term, and of course Hillary Clinton assumed the role during the first Obama administration. The rights and wrongs of her conduct in that office are among the issues currently roiling the presidential campaign. Incidentally, how does this American chronology compare with that of Britain? Well, we had Queen Matilda for a few months back in 1141, and plenty of other queens before the United States even came into existence. In that sense we’re far ahead. In electoral politics too, Britain elected Margaret Thatcher as prime minister way back in 1979, when Hillary (now nearly seventy) was in her early thirties. The first woman to sit in the House of Commons, however, was herself American. It was Nancy Astor, who had been born and raised in Virginia, settling in England only in her mid-twenties. John Singer Sargent’s gorgeous portrait of her (1909) shows a dazzling beauty. Her husband, Waldorf Astor, was another American who had settled down in England, won a seat in Parliament, but had to give it up when his father’s death conveyed him into the House of Lords. She won a by-election in 1919 and held her seat from then right through the twenties, thirties, and World War II, finally relinquishing it in 1945, when she was 66. 
She favoured appeasement of Hitler in the 1930s and is famous for sharp, witty exchanges with Winston Churchill, who deplored the policy. American women, in other words, have been important to women’s participation in politics on both sides of the Atlantic. The outcome in November will show whether any office remains out of reach to female candidates.
Marriage. Tahitians disapproved of marriage between close consanguineal kin, but how close was never made clear. However, marriage was not permitted between those of differing social classes. Therefore, children resulting from a sexual relationship between partners of differing classes were killed upon birth. In the eighteenth century young couples were required to obtain the permission of their parents before marriage, and among the chiefly class early betrothal was said to be the norm and concubinage was common. Marriage ceremonies, when present, consisted of prayers at a marae. There appeared to be no fixed residency requirement, and divorce was by common consent. Domestic Unit. The nuclear family was the dominant unit. Inheritance. The firstborn son became the head of the family at birth and succeeded to his father's name, lands, and title, if any. The father then served as the child's regent until he came of age. In the event of the firstborn dying, the next son succeeded him. There is some indication that in the absence of male offspring, an oldest daughter might be the inheritor. Socialization. Children were raised permissively by their parents, although those of the chiefly class were given a degree of education through teachers of that class. Men and women ate separately, and there was a variety of restrictions regarding who might prepare another's meal.
May 18, 2014 Professor Rhonda Cottingham Even though the preferred communication between most adults is verbal, there are other ways of communicating that are largely unspoken. One of the most common is non-verbal communication: the unspoken cues we convey through eye contact (or the lack of it), hand gestures, and posture. Try to recall the last time that you watched a speaker at a conference or merely a co-worker presenting something at work. Did you notice if they used their hands during the presentation? Did you notice if they made eye contact with you or anyone during this time? Did you notice if they had confidence or lacked confidence in the topic they were presenting? Did they convince you that they had a product you needed, or did they cover all the information in their speech to persuade you to use a product? Answers to these questions can say a lot about their process of communication. Many times you will notice that they make direct eye contact with you, or they have a select few in the audience with whom they choose to make eye contact. During this eye contact, whether it is directed at you or someone else, you can see from looking at their eyes if they are relaxed and seem comfortable and confident in their words. Generally during this eye contact you can also tell if they seem to be a bit nervous or unsure. If they can’t make eye contact and are looking around the room, at the ceiling, or at the floor, this lack of eye contact is a direct display of that person not being confident in what they are presenting to you. It can also…
It is urged that common people representing other common people can best represent the will of the people, and they are the best to tell the government what it cannot do and what the people will not stand. This is tantamount to what the body politic itself could have done had the people decided the problems themselves. The Chameleon type is the representative who does exactly what his electors tell him to do, nothing more and nothing less. He should change his views as the chameleon changes its colour. This type of representation is also known as the telephone type of representation. According to this view, a representative is the deputy or agent of the people who elected him, and he speaks as his masters desire. He exercises little independent judgment except in the process of trying to discern what his constituents want. He is not expected to make any alteration or modification in the terms of his instructions without the express authority of his electors. In fact, he has no wishes or will of his own as a representative. This type of representation is also known as instructed representation and was generally the accepted theory of representation in the early stages. In a federation, members representing the constituent States in the Upper House of the federal legislature were deemed ambassadors of the States they represented. It was, accordingly, the inherent right of the States to instruct them about the attitude and stand they were to take on different problems before the legislature and the manner in which they would vote on a particular issue. But the modern theory of representation outright rejects the idea of instructed representation. Laski regards it as wholly false. Lieber considers it “unwarranted, inconsistent and unconstitutional.” Intelligent instruction, it is maintained, is not available. It is altogether impossible to ascertain the real and genuine will of the electors.
If it may be assumed that intelligent instruction can be made available, even then, it is impossible for the representatives to refer all the problems with which they are confronted to their electors for instruction. Promptness in legislation is as necessary as deliberation itself. If representatives are required to consult their constituents item by item, the entire legislative activity of the State is sure to come to a standstill. Moreover, legislation is a difficult process as it involves many technicalities. Many things come to the knowledge of the representatives only on the floor of the House and they adjust their views there and then as the conditions or circumstances advisedly permit. It is, therefore, unwise to bind them in advance with instructions and pledges, or that they should change their views on the behest of their constituents as and when they want and as often as they desire. The electors have, undoubtedly, the right to get the fullest expression of the general attitude of their representatives. They are also entitled to know their views on all current problems. They may reasonably ask for their explanation on any question of their decision. But the representatives cannot and should not subordinate their judgment to the will of the electors. If a representative is to appeal to his electorate on every point in order to get their verdict, the representative ceases to have either morals or personality. Nor can he keep abreast of events and the needs of his country when he knows that he may be thwarted at every step and with as many instructions as there are voters. The instructions given may not only be conflicting, but diametrically opposed to each other. This is not the purpose of representation and representative democracy. The legislative assembly consisting of the chameleon type of representatives has no coherent voice, no maturity and no stability and firmness in the transaction of the business before it. 
When all representatives speak in deference to the wishes of their own constituents, the legislature is not a forum of discussion. It is a Babel of tongues. The statesman type of representative finds its classic definition in the words of Edmund Burke. He said, nearly two centuries ago, “Your representative owes you, not his industry only, but his judgment; and he betrays, instead of serving you, if he sacrifices it to your opinion.” The representative must respect the views of his constituents; he should endeavour to redress their grievances, feel their pulse, and act accordingly. But he must not sacrifice his independence of judgment or narrow his horizon of approach to various problems. He should look at all problems from the national rather than from a local viewpoint. Burke also gave a true analysis of the relationship between the electors and their representatives. “The Parliament,” he declared, “is not a Congress of ambassadors from different and hostile interests, which interests each must maintain as an agent and advocate against other agents and advocates. But parliament is a deliberative assembly of one nation, with one interest, that of the whole, where not local purposes, not local prejudices, ought to guide, but the general good resulting from the general reason of the whole. You choose a member, indeed, but once you have chosen him, he is not a member of Bristol, but he is a member of parliament.” A national assembly is an embodiment of national interests. Burke tried to emphasise: find the best man to represent you, a man in whom you would have full faith and confidence as your representative, but once you have elected him depend upon him to use his judgment about what is best. The concept of the statesman or uninstructed type of representation is based on two important facts.
The first is that most people are not well enough informed about problems confronting the government to make decisions, and, secondly, that, even if they were, the process of decision making is so difficult and complex as to preclude the people as a whole from exercising good judgment on isolated issues. If instruction is to be the basis of representation, able and conscientious men can hardly be expected to serve in legislatures where they are expected to say only what pleases their electors. They will keep themselves away from such a farce of a representative institution rather than serve in it. The services of great, talented and experienced statesmen would, thus, be lost to the nation. The fourth type of representative is the party-member type. Elections are now contested by political parties rather than individuals. The voters vote for a party and its programme. It is, accordingly, necessary that the representative should rigidly live up to his party label even if he is to surrender his independence of judgment as well as his dependence upon the judgment of his constituents. The theory is that the political party is the only real vehicle of representative democracy and for the accomplishment of a political programme. It is the party that selects candidates to contest an election and campaigns to win it, thus constituting the majority to form the government and to implement its policies. If it is in Opposition, it must oppose the party in power, criticise its policies and expose it to the electorate in order to win their support and to win elections. Whatever role the party plays, it is nothing without the unity, solidarity and disciplined duty of the representatives elected on the party ticket. They must swim and sink together.
If a representative elected on the ticket of a particular party decides to change his party label, political morality demands that he should submit himself for re-election on the ticket of the party to which he now owes allegiance. “Clearly, he is not entitled,” as Laski has said, “to get elected as a free trader and to vote at once for a protective tariff.” The consensus of opinion now is that there is much to be said in support of the party-member type of representative. A representative democracy is unthinkable without political parties. A reasonably fixed legislative tenure provides a sufficient guarantee to the constituents to judge the party by what it did for them. No political party can to any dangerous extent afford to misrepresent the feelings of its constituents. When the party is judged by the constituents at the general election and people vote for its programme, the unity of the party demands that members elected on its tickets must act in unison as disciplined adherents. Without such a code of conduct representative democracy cannot succeed.
<urn:uuid:ad97e443-e8fd-4760-870c-a26c2babb60e>
CC-MAIN-2020-05
https://gemmarketingsolutions.com/essay-on-4-types-of-representatives/
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250592394.9/warc/CC-MAIN-20200118081234-20200118105234-00013.warc.gz
en
0.980945
1,640
3.453125
3
[ -0.406690776348114, -0.15302807092666626, -0.010819436982274055, -0.2098943293094635, -0.18349802494049072, 0.2502804398536682, 0.5356013774871826, -0.018857542425394058, -0.23510897159576416, -0.2537384629249573, 0.008528722450137138, -0.12032898515462875, 0.1110890805721283, 0.3724950551...
2
It is urged that common people representing other common people can best represent the will of the people and they are the best to tell the government what it cannot do and what the people will not stand. This is tantamount to what actually the body-politic could have done if they were to decide the problems themselves. The Chameleon type is the representative who does what exactly his electors tell him to do, nothing more, and nothing less. He should change his views as the chameleon changes his colour. This type of representation is also known as the telephone type of representation. According to this view, a representative is the deputy or agent of the people who elected him and he speaks as his master’s desire it. He exercises little independent judgment except in the process of trying to discern what his constituents want. He is not expected to make any alteration or modification in the terms of his instructions without the express authority of his electors. In fact, he has no wishes or will of his own as a representative. This type of representation is also known as instructed representation and was generally the accepted theory of representation in the early stages. In a federation, members representing the constituent States in the Upper House of the federal legislature were deemed as ambassadors of the States they represented. It was, accordingly, the inherent right of the States to instruct them about the attitude and stand they were to take on different problems before the legislature and the manner in which they would vote on a particular issue. But the modern theory of representation outright rejects the idea of instructed representation. Laski regards it as wholly false. Lieber considers it “unwarranted, inconsistent and unconstitutional.” Intelligent instruction, it is maintained, is not available. It is altogether impossible to ascertain the real and genuine will of the electors. 
If it may be assumed that intelligent instruction can be made available, even then, it is impossible for the representatives to refer all the problems with which they are confronted to their electors for instruction. Promptness in legislation is as necessary as deliberation itself. If representatives are required to consult their constituents item by item, the entire legislative activity of the State is sure to come to a standstill. Moreover, legislation is a difficult process as it involves many technicalities. Many things come to the knowledge of the representatives only on the floor of the House and they adjust their views there and then as the conditions or circumstances advisedly permit. It is, therefore, unwise to bind them in advance with instructions and pledges, or that they should change their views on the behest of their constituents as and when they want and as often as they desire. The electors have, undoubtedly, the right to get the fullest expression of the general attitude of their representatives. They are also entitled to know their views on all current problems. They may reasonably ask for their explanation on any question of their decision. But the representatives cannot and should not subordinate their judgment to the will of the electors. If a representative is to appeal to his electorate on every point in order to get their verdict, the representative ceases to have either morals or personality. Nor can he keep abreast of events and the needs of his country when he knows that he may be thwarted at every step and with as many instructions as there are voters. The instructions given may not only be conflicting, but diametrically opposed to each other. This is not the purpose of representation and representative democracy. The legislative assembly consisting of the chameleon type of representatives has no coherent voice, no maturity and no stability and firmness in the transaction of the business before it. 
When all representatives speak in deference to the wishes of their own constituents, the legislature is not a forum of discussion. It is Babel of tongues. The statesman type of representative finds its classic definition in the words of Edmund Burke. He said, nearly two centuries ago, “Your representative owes for not his industry only, but his judgment, and he betrays instead of serving you if he sacrifices it to your opinion.” The representative must respect the view of his constituents, he should endeavour to redress their grievances and feel their pulse and act accordingly. But he must not sacrifice his independence of judgment and narrow his horizon of approach to various problems. He should look at all problems from the national rather than from a local viewpoint. Burke also gave a true analysis of the relationship between the electors and their representatives. “The Parliament,” he declared, “is not a Congress of ambassadors from different and hostile interests, which interests each must maintain as an agent and advocate against other agents and advocates. But parliament is a deliberative assembly of one nation, with one interest, that of the whole where not local purposes, not local prejudices, ought to guide, but the general good resulting from the general reason of the whole. You choose a member, indeed, but once you have chosen him, he is not a member of Bristol, but he is a member of parliament.” A national assembly is an embodiment of national interests. Burke tried to emphasise: find the best man to represent you, a man in whom you would have full faith and confidence as your representative, but once you have elected him depend upon him to use his judgment about what is best. The concept of statesman or uninstructed type of representation is based on two important facts. 
The first is that most people are not well enough informed about problems confronting the government to make decisions, and, secondly, that, even if they were, the process of decision making is so difficult and complex as to preclude the people as a whole from exercising a good judgment on isolated issues. If instruction is to be the basis of representation, able and conscientious men can hardly be expected to serve in legislatures where they are expected to say only what it pleases their electors. They will keep themselves away from such a farce of representative institution rather than to serve therein. The services of great, talented and experienced statesmen would, thus, be lost to the nation. The fourth type of representative is the party-member type. Elections are now contested by political parties rather than individuals. The voters vote for a party and its programme. It is, accordingly, necessary that the representative should rigidly live up to his party label even if he is to surrender his independence of judgment as well as dependence upon the judgment of his constituents. The theory is that political party is the only real vehicle of representative democracy and for the accomplishment of political programme. It is the party that selects candidates to contest an election and campaigns to win it and, thus, constituting the majority to form the government and to implement its policies. If it is in Opposition, it must oppose the party in power, criticise its policies and expose it to the electorate in order to win their support and to win elections. In whatever role the party is, it is nothing without the unity, solidarity and disciplined duty of the representatives elected on the party ticket. They must swim and sink together. 
If a representative elected on the ticket of a particular party decides to change his party label, political morality demands that he should submit himself for re-election on the ticket of the party to which he now owes allegiance. “Clearly, he is not entitled,” as Laski has said, “to get elected as a free trader and to vote at once for a protective tariff.” The consensus of opinion now is that there is much to be said in support of the party-member type of representative. A representative democracy is unthinkable without political parties. A reasonably fixed legislative tenure provides a sufficient guarantee to the constituents that they can judge the party by what it did for them. No political party can afford to misrepresent, to any dangerous extent, the feelings of its constituents. When the party is judged by the constituents at the general election and the people vote for its programme, the unity of the party demands that members elected on its ticket act in unison as disciplined adherents. Without such a code of conduct representative democracy cannot succeed.
Operation Downfall was the name given to the planned invasion of Japan. It was divided into two parts – Operation Olympic and Operation Coronet. By mid-1945, it was apparent that the collapse of Japan was near, and the Allies had to plan for the invasion of the Japanese mainland – something they knew would be very costly in terms of lives lost. American military commanders – Douglas MacArthur, Chester Nimitz, Ernest King, William Leahy, Hap Arnold and George Marshall – were given the task of planning the invasion. Inter-service rivalry did occur, as both the army and the navy wanted one of ‘their men’ to be supreme commander of the planning. Eventually the navy accepted that MacArthur was to have total control if the invasion took place. The planning proceeded without taking the atomic bomb into consideration, as so few knew of its existence. The Americans faced one very serious problem. They knew for sure that the Japanese would defend their territory with zeal and that American casualties would be high – probably too high for the American public to accept. The fanaticism that had been shown by the kamikazes would almost certainly be encountered in Japan, and the Americans had to plan for this. There was plenty of evidence to indicate that any invasion of the Japanese mainland would be very bloody for all concerned. The complexity of such an attack also led the two branches of the US military to develop different ideas as to what the best plan should be. The navy believed that a blockade supported by an air campaign would suffice. It wanted to use air bases in China and Korea to launch bombing raids against key cities in Japan. The army believed that such a campaign would take too long and that the morale of the American public might suffer as a result. It supported an invasion that would go to the heart of Japan – Tokyo. The army got its way. 
It quickly became apparent that any invasion of Japan would present huge difficulties. There were very few beaches that could be used as a landing place, and the Japanese knew this. Both sides knew that only the beaches in Kyushu and the beaches at Kanto, near Tokyo, could support a huge amphibious landing. The Japanese took the appropriate defensive measures in both areas. The Americans had planned to land in Kyushu first and use it as a base for planes to attack other targets in Japan. These planes would then be used to support the landings at Kanto. As there were so few places to land a massive amphibious force, the Japanese guessed as early as 1944 where such landings would take place. The actual invasion of Kyushu was known to be fraught with danger. Therefore, there were those in the American military who advocated the use of chemical weapons on the Japanese defenders. The use of poisonous gas had been outlawed by the Geneva Protocol, but neither America nor Japan had ratified it. As Japan had used poisonous gas in its attack on China, there were some in the US military who felt it was perfectly justified to use it on the Japanese. The Japanese did fear a gas attack, and records show that senior military figures in Japan wanted to ensure that, if there was a gas attack, the Japanese response would not make matters worse. American intelligence had known for a while that Japan was in no fit state to respond to a gas attack in kind. The main concern for the Americans was the potential for huge casualty rates. Nearly every senior officer involved in the planning did his own research regarding American casualties, based on the experience America had fighting the Japanese since Pearl Harbour. The Joint Chiefs of Staff estimated that Olympic alone would cost 456,000 men, including 109,000 killed. 
Including Coronet, it was estimated that America would suffer 1.2 million casualties, with 267,000 deaths. Staff working for Chester Nimitz calculated that the first 30 days of Olympic alone would cost 49,000 men. MacArthur’s staff concluded that America would suffer 125,000 casualties after 120 days, a figure later reduced to 105,000 after his staff subtracted the wounded men who could return to battle. General Marshall, in conference with President Truman, estimated 31,000 casualties in the 30 days after landing in Kyushu. Admiral Leahy estimated that the invasion would cost 268,000 casualties. Personnel at the Navy Department estimated that total American losses would be between 1.7 and 4 million, with 400,000 to 800,000 deaths. The same department estimated that there would be up to 10 million Japanese casualties. The ‘Los Angeles Times’ estimated that America would suffer up to 1 million casualties. Whichever figures were used, it was accepted that America would lose a very large number of men. This was one of the reasons why President Truman authorised the use of the atomic bomb in an effort to get Japan to surrender. On August 6th, ‘Little Boy’ was dropped on Hiroshima, and on August 9th, ‘Fat Man’ was dropped on Nagasaki. On September 2nd, Japan surrendered, and America and her allies were spared the task of invading Japan with the massive casualties it was projected to entail.
Source: https://www.historylearningsite.co.uk/world-war-two/the-pacific-war-1941-to-1945/operation-downfall/
Widely known for their business and professional activities, which include commerce, money-lending, and medical and clerical services, Asians have never been fully appreciated for their role in Kenya’s colonial agriculture. Their industry earned them the sobriquet dukawallahs, and their activities supported both settler and African agriculture. Few Asians, though, became farmers, or shambawallahs, in their own right. Asians mainly occupied the middleman position in Kenya’s economy. They bought or bulked African-grown crops such as maize, millet, cotton, groundnuts and sesame in their shops, which also served as godowns, and sold them to European export firms. They then bought manufactured imports from European firms and sold them to Africans. They also established industries that were essential to European and African agriculture. For instance, they established and ran ginneries close to where cotton was grown and produced cotton lint for export. They extended credit to European and African farmers because European commercial banks failed to satisfy farmers’ demands for capital, and Africans were prohibited from borrowing more than Sh100 from the banks. Asians are believed to have visited the East African coast as early as 1000 BC as participants in the trans-Indian Ocean trade. However, their presence increased dramatically during the Omani conquest of the coast, the commencement of the long-distance trade in slaves and ivory, and the production of cloves in Zanzibar. Asians acted as bankers who financed the trade. Larger numbers arrived at the commencement of the construction of the Kisumu-Mombasa Railway in the 1890s, as the Imperial British East African Company recruited them for the work as indentured labour. They were then allowed to stay on to maintain the railway and to start businesses. Soon after establishing themselves, Asians demanded to be allocated land like the European settlers. 
After much dispute involving the British governments in London and India and the colonial state in Kenya, it was decided that Asians be given land in areas of low elevation between the coast and Kiu, and between Fort Ternan and Lake Victoria. They rejected the former areas and settled around Kibos, a railway station close to the railway terminal in Kisumu. Four Punjabi immigrant families were the first to settle at Kibos. They commenced the growing of cotton, sesame and linseed and also kept livestock. Soon after the First World War, they were joined by more Asian immigrants, including Ismailis. LITTLE CAPITAL INVESTMENT By the early 1930s, the land the Asian population occupied around Kibos totalled close to 11,000 acres. By 1963, Asians owned 60 farms, which ranged from 50 to 1,000 acres, and the Asian settlement now extended to Muhoroni. Although the Asian farmers had earlier planted cotton, linseed and sesame, from the late 1920s they changed to sugar cane growing, aping their compatriots in Uganda, where the crop was doing quite well. The Punjabis in Kibos grew sugar cane mostly for the production of jaggery, a type of unrefined sugar called ‘gur’ in Hindi. The Asian farmers preferred using their cane for making jaggery for two major reasons. First, they were already familiar with sugar growing and the manufacture of jaggery, as this was common practice in India. Second, jaggery’s uses were many. Indians in Kenya mixed jaggery with groundnuts, sesame and condensed milk to make traditional desserts and candies, as was done back home. They also used it to prepare alcoholic drinks, a practice Africans borrowed from them to make chang’aa or Nubian gin, as it was the Nubians who lived near Kibos who were the expert brewers. Indians and Europeans also used jaggery for sweetening tea and coffee, and Indians used the product as a laxative. Moreover, molasses, a by-product of jaggery-making, was mixed with hay and used as cattle feed. 
So, there was a ready local market for jaggery, as demand for it was high among Asians, Africans and European settlers. Finally, growing sugar cane and making jaggery needed little capital investment. Cane took six to eight months to mature, which meant that the supply would be continuous throughout the year if planting was well-spaced. Asians used simple crushers to squeeze out the sugary juice. The crushers came from India and were manually operated with the help of the locals, who were also farm labourers. Teams of oxen provided the power to drive the crushers. The sugary juice obtained was then boiled in a flat metal container until it crystallised into a yellowish dough, which was transferred into conical containers to cool and solidify. The jaggery took the shape of the containers, which was the measure used to fix the product’s price. REQUIRED BIGGER CAPITAL It was for these reasons that the Indian pioneer farmers in what is today called the sugar-belt specialised in growing the sweetener and producing jaggery. Most outstanding among the sugar cane farmers were Hasham Jamal and Devjibhai Kamalshi Hindocha. Jamal had earlier worked for Allidina Visram, the famous Indian entrepreneur whose commercial empire extended across the length and breadth of East Africa. Jamal, who became a large-scale businessman in his own right, later decided to venture into agriculture. He bought 200 acres in Muhoroni and planted sugar cane to produce jaggery, which he sold to the Indian population in nearby Kisumu and other towns in the country, and to European settler farmers in Fort Ternan, Uasin Gishu and further afield. It was because jaggery paid so well that another Asian, Prahlad Singh, bought a European sugar factory at Kibwezi, among the Akamba, and produced jaggery instead. Hindocha ventured into growing sugar cane and also built a factory. Unlike jaggery production, this required bigger capital, which was out of reach for small-scale farmers. 
Earlier, he had been a partner of the Madhvanis, with shares in Vithaldas Harridas and Company, which operated in both Uganda and Kenya. When this company was dissolved in 1947, two years after the Second World War, Hindocha bought Miwani Sugar Mills Limited and the 4,500-acre sugar cane plantation that fed it. He later bought more land adjoining the farm and increased its size to 15,000 acres. He had a labour force of 4,200 people and was able to produce over 20,000 tonnes of sugar. He thus became a large-scale grower and manufacturer of sugar, which he sold locally and outside the country. After Kenya’s independence, Miwani Sugar Mills was transformed into a parastatal and renamed Miwani Sugar Company. It initially ran efficiently but soon ran into the same perennial problems as the other mills in the country. The reasons for this are a story for another day. Prof Ndege teaches at Moi University and can be reached at [email protected]
Source: https://www.nation.co.ke/business/seedsofgold/On-sugar-cane-growing-and-jaggery-making/2301238-5384956-dtfxwh/index.html
"The Massacre of the Innocents", painted by Pacecco de Rosa during the 1600s, depicts a scene from the Bible in which we see the moment when soldiers were sent out by King Herod to kill every child in the region to end the rumors of a child prophesied to rule the kingdom. The baby whom he was looking for was none other than Jesus Christ. And upon careful observation, one can see that among all of the chaos occurring in this painting, there is one mother and child who do not seem to be frightened like the others, so one may conclude that this is Mary and her son, Jesus. However, this couple is actually Mary's cousin, Elizabeth, and her son, John the Baptist. Their presence in the painting is important, because they are the two main subjects, yet, ironically, they, at first, are the least noticeable. Their coloring is the least dynamic compared to the other figures, and they are located farther back in space than most of the figures as well. One may say they are the calm in the center of the storm, because we see that neither of them are being attacked, nor do they seem scared or stressed in anyway. This is due to the fact that de Rosa wanted to be true to the story, which stated that Elizabeth and John the Baptist were saved from this De Rosa's motive for painting this piece is quite simple. He lived during the time of the Protestant Reformation, which then led to the Church's Counter-Reformation. Basically, the Catholic Church commissioned a number of painters, sculptors and architects, as well as many other artists, to create works of art that were appealing enough to encourage the community to return to the Church. Consequently, Pacecco de Rosa was one of these artists, thus explaining why he chose this for his subject matter. The painting is approximately six feet tall by ten feet wide. With this in mind, we see that each figure is about life size, if not slightly larger. De Rosa painted with oils on canvas,... Please join StudyMode to read the full document
Source: https://www.studymode.com/essays/Massacre-Of-The-Innocents-63598390.html
Mary Cassatt was an American impressionist painter who depicted the lives of women, chiefly the intimate bond between mother and child. Degas and Pissarro would later become her mentors and fellow painters. She began studying art seriously at the age of 15, at a time when only around twenty percent of all art students were female. Unlike many of the other female students, she was determined to make art her career, rather than just a social skill. Disappointed with her art education in the United States, she moved to Paris to study under private tutors, with her mother and family friends traveling with her as chaperones. In Europe, Cassatt’s paintings were better received, increasing her prospects; she exhibited in the Salon of 1872 and sold a painting. She exhibited every year at the Paris Salon until 1877, when all her works were rejected. Distraught at her rejection, she turned to the Impressionists, who welcomed her with open arms. Deciding early in her career that marriage was not an option, Cassatt never married, and she spent much of her time with her sister Lydia until Lydia's death in 1882, which left Mary unable to work for a short time. As her career progressed, her critical reputation grew, and she was often touted, along with Degas, as one of the best exhibitors at the Impressionist exhibitions. She was awarded the French Legion of Honor in 1906.
- What are the reasons which have made rationing necessary? - How is it done? How has it worked so far? - What classes benefit from rationing? - It can be defeated unless all cooperate. The end of the Second World War saw a world shortage of certain kinds of food. Nations had been destroying instead of producing and, with so many men serving in the forces, large areas had been allowed to go out of cultivation. The situation in the Indo-Pak sub-continent had been affected by the fact that Burma and Malaya, being seats of war, had ceased to export rice, and consequently large sections of the population were deprived of their vital food. In addition, the fact that India and Pakistan are now independent and must balance their budgets has forced us to cut down imports from foreign countries as much as possible. When there is a shortage of a certain food, it matters very much whether that food is an important and vital article of diet or a luxury. Rice and wheat in Pakistan, and meat in Great Britain, are necessary to the peoples concerned. When there is only a small amount of rice, not enough to give every person all he wants, the only thing to do is to ration it. That is, to arrange methods by which everybody receives a share: not so much as he wants, but still a certain amount. If this were not done, there would be a rise in prices as people competed for the limited supplies. Rich people would get plenty of rice, and the poor would get none. A system of rationing, in which every person is registered with a certain shopkeeper and gets a fair share every week at a normal price, is the only way to meet a national shortage. If there were no such system, rice, kerosene oil, flour and ghee would all become luxuries unobtainable by the poor. Even long after the war, the British people were short of food. Bread there is in plenty, and fish can be caught around the coasts to yield a normal supply to all.
But meat is largely an import from the Argentine, and butter and cheese have to come from Canada and New Zealand; eggs are scarce, since only small imports can be got from Denmark and Holland. So every person still has a ration book, and draws a small regular ration of butter and cheese, less than a man could eat in one day. But everybody gets a share, and it is the same for all. The rationing of flour, rice and kerosene oil was looked on with suspicion in this sub-continent. Petrol, too, had to be doled out according to need. There is no means of avoiding this. Black-market dealing has arisen in all countries. We all know of it in India; and in London, fortunes have been made by unscrupulous persons. No system of rationing can be entirely successful without the honest cooperation of all the people. As human nature is weak, there will everywhere be unscrupulous persons out to make money by selling goods illegally, at a high price, to rich people. Those who buy from them are equally guilty and should be punished. It is an unpatriotic act either to sell or buy in this way. All should loyally support the Government in its efforts to see that there is a fair sharing out of goods which are scarce.
During the Renaissance many games were played by both children and adults. Some of these games had been around for centuries and are still played by modern society. "Dwyle Flunking" was played in pubs by adults, as well as on the street by children. The modern version of "Bilboquet" is called "Cup and Ball," an inexpensive toy used by many children. "Get the Hat" is not unlike "Keep Away." Get the Hat Like American football, but without rules or goals, the point of this game is to get the hat from the other team. This Elizabethan game begins when a hat is dropped between the two teams. Both teams run for the hat, and when one team gets it the other team chases after them to capture it back. Bilboquet was played in 16th-century France, although its origins are unknown. A ball or ring is tied to a handle with a piece of string. The handle has either a cup or a hook. The object is to move the handle quickly in order to get the ball to land in the cup, or the ring to land on the hook. The child who catches the most balls or rings is the winner. Think of a piñata without the candy. A bag was filled with a stinky liquid and suspended above the players. The child who was "it" used a long stick to break the bag and try to get the other players wet. Whoever got hit first became "it," and a bag was filled again. Popular in England, this game could be played with water balloons instead of stinky liquids. Games That are Still Played Hopscotch was invented by Roman soldiers as a pastime while they were stationed at Hadrian's Wall. During the Renaissance hopscotch, leapfrog and tag were played by both children and adults.
Ancient Greece life after death In ancient Greece the continued existence of the dead depended on their constant remembrance by the living. The after-life, for the ancient Greeks, consisted of a grey and dreary world in the time of Homer (8th century BCE) and, most famously, we have the scene from Homer's Odyssey in which Odysseus meets the spirit of the great warrior Achilles in the nether-world where Achilles tells him he would rather be a landless slave on earth than a king in the underworld. By the time of Plato, however (4th century BCE) the after-life had changed in character so that souls were better rewarded for their pains once they had left the earth; but only in so much as the living kept their memory alive. The Land of the Dead The afterlife was known as Hades and was a grey world ruled by the Lord of the Dead, also known as Hades. Within this misty realm, however, were different planes of existence the dead could inhabit. If they had lived a good life and were remembered by the living they could enjoy the sunny pleasures of Elysium; if they were wicked then they fell into the darker pits of Tartarus while, if they were forgotten, they wandered eternally in the bleakness of the land of Hades. While both Elysium and Tartarus existed in the time of the writer Hesiod (contemporary of Homer) they were not understood then in the same way they came to be. In Plato's dialogue of The Phaedo, Socrates delineates the various plateaus of the after-life and makes it clear that the soul who, in life, devotes itself to the Good is rewarded in the beyond with a much more pleasant existence than those who indulged their appetites and lived only for the pleasures the world has to offer. 
As most people, then as now, viewed their lost loved ones as paragons of human virtue (whether they were or not, in fact), it was considered one's duty to the dead to remember them well, regardless of the life they had lived and the mistakes they had made, and thereby provide them with continued existence in Elysium. This remembrance was not considered a matter of personal choice but, rather, an important part of what the Greeks knew as Eusebia. Piety in Ancient Greece We translate the Greek word 'Eusebia' today as 'piety', but eusebia was much more than that: it was one's duty to oneself, to others and to the gods, which kept society on track and made clear one's place in the community. Socrates, for example, was executed by the city-state of Athens after having been convicted of impiety for allegedly corrupting the youth of Athens and speaking against the established gods. However unjust we may consider Socrates' end today, he would, in fact, have been guilty of impiety in that he encouraged the youth of Athens, by his own example, to question their elders and social superiors. This behavior would have been considered impious in that the youth were not acting in accordance with eusebia, i.e. they were forgetting their place and obligations in society. Eusebia & the After-life In the same way that one had to remember one's duty toward others in one's life, one also had to remember one's duty to those who had departed life. If one forgot to honor and remember the dead, one was considered impious and, while this particular breach of social conduct was not punished as severely as Socrates' breach, it was certainly frowned upon. Today, should one consider the tombstones of the ancient Greeks - whether in a museum or just below the Acropolis in Athens - one finds stones with comfortable, common scenes depicted: a husband sitting at table as his wife brings him his evening meal, a man being greeted by his dogs upon returning home.
These simple scenes were not merely depictions of moments the deceased enjoyed in life; they were meant to remind the living viscerally of who that person was in life, of who that person still was now in death, and to spark the light of continued remembrance in order that the 'dead' should live in bliss eternally. In ancient Greece death was defeated, not by the gods, but by the human agency of memory.
First U.S. Self-Adhesive Stamp On November 15, 1974, the USPS issued its first experimental self-adhesive stamp. Throughout the 20th century, US postage evolved through a number of significant innovations such as the use of the rotary press and phosphorescent tagging. However, while these innovations may have gone largely unnoticed by the general public, one of the greatest postal innovations of the century was the introduction of self-adhesive stamps. Though common today, they had a rocky start. In 1974, the USPS began working on its first self-adhesive stamp. The Bureau of Engraving and Printing produced the stamps on their Andreotti press and leased additional machinery from companies that produced self-stick labels. The stamps were die-cut, stripped, rouletted, and cut into finished panes. The stamps also had crossed center slits to prevent them from being removed from envelopes and reused. Additionally, the stamps had rounded corners and were produced on a backing paper (or liner). Unlike today’s self-adhesive stamps, these stamps didn’t touch each other, and instead had lines of backing paper in between them. On the edge of each sheet were 10 self-adhesive tabs with plate numbers and a variety of phrases including “Self Sticking Stamps,” “Remove from Backing,” and “Do Not Moisten.” The Christmas stamp, picturing the weather vane from the top of Mount Vernon, was issued on November 15, 1974, in New York City. Unfortunately, both the USPS and collectors would soon deem the experiment a failure. For the USPS, production of the stamp was too expensive and crosscuts didn’t prevent them from being reused. Years later, collectors would discover that the rubber-based adhesive created brown spots on the stamps and this adhesive would also stain the covers. Because of all these issues, the USPS gave up on self-adhesives for 15 years. Then in 1989, they decided to try again. 
This time they used an acrylic-based adhesive and produced 18-stamp convertible booklets and strips of 18 for affixing machines. The stamps went on sale on November 19, 1989, in Virginia Beach, Virginia, to coincide with the annual VAPEX stamp show. However, the stamps themselves were only distributed to 15 cities for a 30-day test period. Customers in those cities were then given a questionnaire asking how they liked the stamps. Unfortunately, they were unpopular. But this was likely because there was a 50¢ premium added to the booklets to cover the higher production costs. This issue was also deemed a failure. Not ready to give up, the USPS tried again the following year. This time they printed the stamps on plastic instead of paper and they were issued in sheets the same size and thickness of paper currency for sale in select ATMs in Seattle. There was no additional premium added to these stamps and they were considered a success. The USPS then expanded the program, but the next stamps would be printed on paper because of complaints they had received from paper recyclers. The experiments continued and then in 1992, the USPS issued its first nationally distributed self-adhesives since 1974, the 29¢ Eagle and Shield stamps. They issued their first self-adhesive commemorative in 1996, honoring Tennessee Statehood. The number of self-adhesives grew over the years and by 2002, almost all US stamps were issued self-adhesive.
Johann Sebastian Bach Johann Sebastian Bach (31 March 1685 in Eisenach – 28 July 1750 in Leipzig; pronounced BAHK) was a German composer and organist. He lived in the last part of the Baroque period. He is most famous for his works Toccata and Fugue in D minor, the St Matthew Passion, the St John Passion, the Mass in B minor, and the Brandenburg Concertos. He spent several years working at the courts of noblemen. There he wrote most of his chamber music and orchestral music. Most of his life, however, he worked in a church, where he was expected to write church music. Bach wrote almost every kind of music except opera. During the last part of his life most composers were writing in a new style called the Classical style, but Bach always wrote in the Baroque style. That made some people at the time think he was old-fashioned, but today we know that his work is the very best of Baroque music. Along with Mozart and Beethoven, Bach is regarded as one of the greatest composers who have ever lived. Early life Bach came from a highly musical family. His father, Johann Ambrosius Bach, was a trumpeter at the court of Saxe-Eisenach. Many of his relatives were professional musicians of some sort: violinists and town musicians, organists, Cantors (directors of music in a church), court musicians and Kapellmeisters (directors of music at a royal court). Most of them played several instruments. Of Johann Sebastian's own twenty children, several became quite famous composers, especially Carl Philipp Emanuel Bach (1714–1788), Johann Christian Bach (1735–1782), Johann Christoph Friedrich Bach and Wilhelm Friedemann Bach (1710–1784). When he was fifteen, he went to the small town of Lüneburg. At first he sang treble in the choir and was said to have a very fine treble voice, but his voice very soon got lower, so he made himself useful playing instruments. He learned by listening to famous organists like Reincken (1623–1722) and Dietrich Buxtehude (1637–1707).
Bach got his first job in 1703 in Arnstadt. It was a well-paid job for a young man of 18. There was a new organ in the church, and Bach already knew a lot about organ building as well as being a brilliant organist. The church asked him to examine the new organ, and then offered him a job. Bach spent four years as organist there and composed some organ works. Unfortunately, the congregation were not musical enough to like them; they did not understand the ornamental notes he added to the hymn tunes. Bach got rather fed up with the priests who were always complaining about it, so he resigned and took another job in Mühlhausen, not far away. After a year there, he gave up that job and went to a big town called Weimar. Weimar years (1708–1717) Johann Sebastian was made organist to the Duke of Saxe-Weimar. At the Duke's court there was a chapel with an organ, and Bach composed many of his great organ works at this time. He became very famous as an organist and was invited to play in other big churches and to give advice on organ building. He was extremely good at improvisation. On one occasion he was in Dresden at the same time as a French organist named Louis Marchand. There was going to be a competition between the two men to see who was better at improvisation. Bach was practicing the day before and Marchand heard him. Marchand realized that Bach would win, so he left. In 1714 the Duke made Bach Konzertmeister (concertmaster), a job that paid more money. He had to write cantatas for church services. In 1717 he was offered a job in the town of Cöthen, where he would earn an even better salary. The Duke was angry and did not want him to go, but Bach insisted, so the Duke put Bach in prison for a month. In the end he had to let the musician go. Cöthen (1717–1723) At Cöthen, Bach worked for Prince Leopold. The Prince was very musical and a wonderful man to work for.
Bach was Kapellmeister (director of music) and was treated well. The organ was not very good and was not used much, so Bach did not write any organ music during this period. The Prince had an orchestra, and Bach was in charge of it. Nearly all Bach's orchestral works were written in Cöthen: the Brandenburg Concertos, the violin concertos, the orchestral suites, the solo music for violin and for cello, and a lot of keyboard music for harpsichord or clavichord. During 1719, the great composer George Frideric Handel, who had moved to England, came to Germany to visit his mother. Bach wanted to meet Handel, who was only 30 km away, but these two famous musicians never met. Handel wanted to spend his limited time in Germany with his mother, who was old and frail, knowing that it would be the last time he would see her. Bach's first wife, Maria Barbara Bach, died in 1720. The couple had had seven children. Soon afterwards, he married Anna Magdalena, with whom he had another thirteen children. However, several of his children died young. Leipzig (1723–1750) In 1723 Bach moved to Leipzig to take the job of Cantor at the St Thomas Church, a very large church in the town. As Cantor he was in charge of all the music, both at St Thomas and at another church nearby. He also had to compose music for the town. It was an excellent job, more secure than being at a court, and the schools were good for his sons. Bach stayed in Leipzig until his death. He loved his job most of the time and worked very hard. He composed many cantatas for the church services. These services were very long, lasting about three hours. Many of the cantatas he wrote last about 30 minutes, and a cantata was just one part of a service! He had assistants to play the organ; Bach himself directed the choir and the orchestra. There were probably 16 singers in the choir and 18 players in the orchestra. He wrote the St Matthew Passion and the St John Passion.
Both these works, which are very long, tell the story of Jesus dying on the cross. They are among the most famous pieces of music ever written. He also wrote cantatas for special occasions such as weddings or funerals. Life was not always easy, and sometimes there were arguments with the people who ruled the church. The sub-deacon wanted to choose some of the hymns, but this was the Cantor's job. Bach was a sensible man, and he managed to get his way without making enemies. On another occasion he argued with the headmaster of the school (Bach had to do some teaching at the church school) about who was allowed to choose the choir section leaders. This actually went to court, and Bach won the case. Bach often made journeys to other towns. In 1747 he visited the court of the Prussian king Frederick the Great near Berlin. The king, a music lover, gave Bach a theme to improvise on at the harpsichord. Bach sat down and improvised a fugue using this theme. Later Bach wrote a very long composition for flute, violin and harpsichord with cello accompaniment, in many movements, all based on this theme. At the end, the theme is heard in five of the six voices. Bach called it The Musical Offering and sent it to the king. Bach wrote many fugues; eventually he decided to write a collection called The Art of Fugue. His plan was to publish it, but he died before he could finish it (his son later published it in his honor, as Bach's last published piece). In the last year or two of his life, he became blind in spite of two eye operations. In the 19th century more people became interested in Bach, and many of his works were published more than a hundred years after his death. References - Wolff, Christoph. "Bach 7: Johann Sebastian Bach". Grove Music Online. Oxford University Press.
Retrieved 2 May 2016. (subscription required)
<urn:uuid:3220552f-8525-4f2c-b683-68b56767f036>
CC-MAIN-2020-05
https://simple.wikipedia.org/wiki/Johann_Sebastian_Bach
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251690095.81/warc/CC-MAIN-20200126165718-20200126195718-00143.warc.gz
en
0.990701
1,918
3.5
4
[ 0.06018466502428055, 0.24290364980697632, 0.36512690782546997, -0.23850283026695251, -0.41611355543136597, -0.15869292616844177, -0.3059355318546295, -0.1883229911327362, -0.24900540709495544, -0.5251510143280029, -0.2623351216316223, 0.10836850106716156, -0.0400957353413105, 0.17285270988...
1
1,933
ENGLISH
1
Race as a social construct means that judging people based on their color, or racism, benefits some groups at the expense of others. Race is something people learn; it is not biological. We are the ones who created the idea of race, because it does not exist in nature. There is a striking example in White Man's Burden: when Thomas (a black man) went to the bank and asked to withdraw money, a white teller told him, "We are closed." Thomas then asked to speak with the manager of the bank. He did so because he was one hundred percent sure the manager would be a black man, and because he wanted to deal with someone of high status rather than a lower-level employee. Paternalism is interference with a group that is harmful but is justified by claiming it is in the best interest of the group being interfered with. For example, after black people finished their fashion show, they brought out poor white children. Even though helping these poor whites is a good thing, this kind of help is not genuine because it is done just in front of the media. White people really want to get good jobs, but black people do not help them with that. This shows that, in the film, black people are the upper class and white people are the lower class, and white people always need black people's help. Stratification is the placement of groups within a society, where one's position is typically not earned. When people are born rich and inherit companies from their parents or grandparents, that is stratification. The example of this is the Career Center in the movie, when Pinnock (a white man) was looking for a job after he lost his. The point here is that all the workers were black, because black people have the power to control almost everything in this country.
Also, all the people who were looking for jobs were white, because they have no power to find jobs by themselves and will get better jobs if they work under black people's commands. Institutional discrimination is the unequal distribution of rights or opportunities to individuals or social groups that results from the normal operations of society, as when an institution acts against someone because of his or her color. The example of this is when Pinnock lost his job because he saw Thomas's wife. Even though Pinnock saw her by mistake, he lost his job because his color was different. Thomas asked Pinnock's manager to fire him because Thomas was uncomfortable with him. Of course, that was not the real reason; race was the most powerful factor in that moment. Also, when Pinnock went to talk with Thomas, Thomas's response was, "Tell him I can't do anything about him losing his job." This film is a powerful example of racism, but with the rules reversed: all white people are the lower class and black people are the upper class. It reverses the real life of people who live in the USA, or maybe in the entire world.
<urn:uuid:9c883eef-e6f5-4932-a6e3-34f4022230b2>
CC-MAIN-2020-05
https://eduzaurus.com/free-essay-samples/white-mans-burden-a-study-of-the-theme-of-color-discrimination/
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251696046.73/warc/CC-MAIN-20200127081933-20200127111933-00084.warc.gz
en
0.983089
767
3.53125
4
[ 0.07451213896274567, 0.31744661927223206, -0.282890260219574, 0.1077318862080574, -0.23745936155319214, 0.11227589845657349, 0.4428054094314575, -0.015784211456775665, -0.1172151044011116, -0.02126898244023323, 0.4797687828540802, -0.1289183348417282, -0.14001770317554474, 0.33225733041763...
2
738
ENGLISH
1
In 1980, he noticed a small discrepancy between the Doppler shifts he expected to receive based on his algorithm and the actual, measured shifts of the radio signals coming from the spacecraft. Their expected and actual motions weren't quite matching up. As they moved outward against the gravitational pull of the sun and planets, the spacecraft were, of course, slowing down. But the problem was they were slowing down too much. Each year, both of the spacecraft were a few hundred miles farther behind where they should have been on their respective paths, according to the algorithm. That isn't much in the context of space travel, to be sure, but it isn't trivial either. The constant, extra acceleration amounted to 8.74 × 10⁻¹⁰ m/s², directed toward the sun: a factor ten billion times smaller than the acceleration due to gravity at Earth's surface, but still, undeniably, there.
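The two figures quoted in this passage can be cross-checked with a few lines of arithmetic: a constant extra acceleration of 8.74 × 10⁻¹⁰ m/s², integrated over one year, does indeed amount to a position deficit of a few hundred miles, and it is indeed about ten billion times weaker than surface gravity. A minimal sketch (the one-year window and the start-from-rest kinematics are illustrative assumptions, not the actual Doppler-fitting procedure):

```python
# Cross-check of the Pioneer anomaly figures quoted above.
A_P = 8.74e-10            # anomalous acceleration, m/s^2
YEAR = 365.25 * 86400     # seconds in a Julian year
METERS_PER_MILE = 1609.344
G_SURFACE = 9.81          # acceleration due to gravity at Earth's surface, m/s^2

# Displacement from a constant acceleration acting over time t: d = a * t^2 / 2
deficit_m = 0.5 * A_P * YEAR**2
deficit_miles = deficit_m / METERS_PER_MILE

print(f"Per-year deficit: {deficit_miles:.0f} miles")   # about 270 miles
print(f"Ratio to surface gravity: {G_SURFACE / A_P:.1e}")  # about 1.1e10
```

Both numbers land where the passage says they should: roughly 270 miles per year ("a few hundred miles") and a ratio of about 1.1 × 10¹⁰ ("ten billion times smaller").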
<urn:uuid:cd112129-526c-4aab-a87a-4e19cad5e853>
CC-MAIN-2020-05
https://www.popsci.com/pioneeranomaly/
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250599718.13/warc/CC-MAIN-20200120165335-20200120194335-00121.warc.gz
en
0.98562
177
3.734375
4
[ -0.6161928176879883, -0.024974606931209564, 0.1055380254983902, -0.2760733962059021, -0.15279757976531982, -0.15871088206768036, -0.23205064237117767, 0.1582590639591217, 0.5709295272827148, 0.007011045701801777, 0.6877326965332031, 0.024278812110424042, -0.08648499101400375, 0.08894176781...
1
186
ENGLISH
1
In Chaucer’s The Canterbury Tales, two characters who are alike in their professions but very different in their lifestyles are the Monk and the Parson. They may both have jobs that involve serving the Lord, but they follow their paths very differently, and yet each feels that he is doing things the way they should be done. In these ways, and more, they can be considered very alike or very different individuals. Both the Monk and the Parson are followers of God and are supposed to be strict believers in their religion. The Parson sticks to this belief very strongly. He follows every rule and edict to a T, and he expects nothing less from those who follow him within his parish. He is also a firm believer that one should practice what one preaches, and therefore he is an extremely devout man who takes no pleasure from the material world. The Monk belongs to a different class of religious persons of that time period. Being a monk, he is supposed to follow the order of St. Benedict and stay in a monastery. He is supposed to devote his life to prayer and to working the earth to grow crops and benefit those in need. However, this monk feels that his time is better put to use in other pursuits, such as hunting. He is an avid hunter and also an avid eater. He is a very rotund man, having eaten more than his fair share of meat, and it shows. In general he could be described as a fat and happy individual with little regard for rules. While they may be very different…
<urn:uuid:33a6c1a9-8d9f-4ed7-835d-599158a7b6d9>
CC-MAIN-2020-05
https://www.majortests.com/essay/Religion-And-Monk-603147.html
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250592394.9/warc/CC-MAIN-20200118081234-20200118105234-00275.warc.gz
en
0.99041
320
3.34375
3
[ 0.023537078872323036, 0.39746376872062683, -0.36873194575309753, -0.15952080488204956, -0.469839870929718, -0.17725025117397308, 0.6129847764968872, -0.3009326159954071, 0.1313180774450302, 0.047394298017024994, -0.4439833164215088, -0.27856454253196716, 0.5212489366531372, -0.111121080815...
1
316
ENGLISH
1
The Lutheran Church was established in South Australia in 1838 by German emigrants from Prussia who came because of religious persecution. Although this persecution ceased in the mid-1840s, many more Germans followed, seeking the better life that the first migrants reported to them. Settlements were established at Klemzig, Hahndorf, Lobethal and in the Barossa Valley. Some 20,000 German Lutherans migrated to South Australia between 1838 and 1860. With the expansion of settlement, the German Lutherans began to spread out across the state in search of larger landholdings. In their settlements, they soon built churches and schools. German Lutherans also came to Victoria from the 1840s onwards and established the Lutheran Church in the Melbourne district. Some Germans moved from South Australia to Victoria, first to the Hamilton district in the 1850s and then to the Wimmera in the 1860s and 1870s. In the 1860s, Lutheran families moved from South Australia to the southern region of New South Wales as land became available for selection. As a result, the Riverina became the main area for the Lutheran Church in New South Wales. German migration to Queensland began in large numbers in the 1860s. Their places of origin in Germany were different from those which produced the earlier migrants to southern Australia. Because of the distance from South Australia, separate Lutheran Churches were established in Victoria and Queensland. Only a small number of Lutheran congregations were established in Tasmania and Western Australia. As a result, 45 per cent of all Lutherans in Australia today are found in South Australia. Queensland has 25 per cent, Victoria 15 per cent, and the remaining 15 per cent in New South Wales, Western Australia and Tasmania. The Lutheran Church was predominantly a rural church and it remained so for over 100 years. With the growth of cities from the 1950s and the recent rural decline, there has been a steady rise in urban congregations. 
German continued to be the language of many Lutheran homes for up to three or four generations. Similarly, German was the language of the Lutheran Church, in both its worship and its business. In the early 1900s moves were made to introduce English, and this was hastened by the outbreak of World War I. There was a transition period in the 1920s and 1930s, and after World War II only English was used. The early Lutheran Church in Australia was unfortunately marked by division. The first pastors, August Kavel and G D Fritzsche, disagreed on a number of matters and in 1846 established separate churches. Further division led to more separate churches being formed. Victoria established its own church, and Queensland had two Lutheran churches. As a result, in the early 1900s there were eight separate Lutheran churches, plus some independent Lutheran pastors. In the 20th century efforts were made to bring unity, and in 1921 five churches joined together. Another joined in 1926. The final union, in 1966, created the present-day Lutheran Church of Australia. Although three Australian-trained pastors graduated in 1855, most of the pastors in the 1800s came from Europe, especially from the theological seminaries of Hermannsburg and Neuendettelsau in Germany and Basle in Switzerland. From the 1880s the church sought pastors from the US (Missouri Synod or Iowa Synod). From the early 1900s, the church began training pastors in Australia at Concordia College and Immanuel College. The provision of education for their children was a priority for the early Lutherans. Many congregational primary schools were started in the 1800s. During World War I the schools in South Australia were closed by an Act of Parliament; however, they gradually reopened after the war. Secondary colleges were also started in the 1890s. In the 1970s and 1980s there was a rapid expansion in the Lutheran school system, and numerous primary and secondary schools were established, especially in Queensland.
The Lutheran Church has been very much involved in mission work among Aboriginal people. Early efforts at Adelaide and Brisbane were short-lived. In the 1860s a mission was started on Cooper's Creek in South Australia, but it survived for only about 50 years. In the 1870s the Finke River Mission was started at Hermannsburg in Central Australia, and it still continues. In the 1880s the Hope Vale Mission was started in northern Queensland, as was the New Guinea Mission. In South Australia, the Koonibba Mission was started in 1901 and the Yalata Mission in 1956. The preaching of the gospel continues in these areas today. One special feature of Lutheran missions has been the use of local Indigenous languages.
To Improve Comprehension:
1.) VISUALIZE: Students need to make constant pictures in their heads of what they are reading. They can explain what they see, draw pictures, or act out things to let you know what they are picturing as they read.
2.) QUESTION: Students need to look through a text before they read and come up with questions about what they want to know from the reading. As they read, they should continue to ask questions of themselves, especially when they do not understand something. You should also ask them questions to see if they are understanding.
3.) SUMMARIZE: Students need to stop and talk about what they read. This may be after a paragraph, a page, or several pages. They should be able to tell you the main idea of what they just read. It is very important that they stop and think about their reading.
4.) CONNECT: Students need to make connections when they read. There may be parts in their reading that remind them of something in their personal lives, something from another text, or something in the world around them. These are important connections to make!
5.) PREDICT: Students should constantly be predicting as they read. They should use what they know to make good guesses about what they think will happen next in a story. They can then see if their predictions were right or wrong and why.
The Shiva Purana is a famous Hindu religious text belonging to the Purana genre of Sanskrit texts in Hinduism. It is an integral part of the Shaivism literature corpus. The Shiva Purana is primarily dedicated to Lord Shiva and Goddess Parvati; however, it also references and reveres all gods. This post shares with you details about the Shiva Purana. The original manuscript of the Shiva Purana consisted of 100,000 verses set out in twelve Samhitas (books). It is said to have been written by Romaharshana, a disciple of Sage Vyasa belonging to the Suta class. Its surviving manuscripts exist in many different versions with differing content. For example, one major version has seven books; another has six books; and a third version, traced to the medieval Bengal region of the Indian subcontinent, has two large sections, namely the “Purva-Khanda” and the “Uttara-Khanda.” However, like other Puranas, the Shiva Purana existed as a living text that was occasionally edited, recast, and revised over a long period. It is estimated that the oldest surviving version of the text was likely composed around the 10th to 11th century CE, while some of its chapters were likely composed after the 14th century. The Shiva Purana contains several chapters centered on Shiva's cosmology, mythology, and relationship with the gods, as well as yoga, ethics, pilgrimage sites, bhakti, rivers, geography and other topics. The Shiva Purana also offers significant insights into Advaita Vedanta philosophy. The text is an important source of historical information on the theology of Shaivism in the 2nd millennium CE. Who is Shiva? Shiva, who is also called Mahadeva or Bholenath, is one of the main deities of Hinduism. Shiva is considered the “Destroyer” within the Trimurti, which also includes Brahma and Vishnu. According to the Shaivism tradition, Shiva is the supreme being who creates, protects, and transforms the Universe.
In fact, Hindu scriptures contain both benevolent and fearsome depictions of Lord Shiva. In his benevolent aspects, Shiva is described as an omniscient Yogi who leads the life of an ascetic on Mount Kailash. He is also depicted as a householder, with Parvati as his wife and Ganesh and Kartikeya as his two children. In his fierce aspects, however, Shiva is portrayed as slaying demons. Shiva is also known as “Adiyogi” and is regarded as the patron god of yoga, meditation, and the arts. Shiva is shown with a serpent around his neck. He wears the crescent Moon, and the holy Ganga River flows from his matted hair. He bears a third eye on his forehead and holds the Trishul, or trident, as his weapon. He also holds the Damaru drum. He is usually worshipped in the iconic form of the Shiva “Lingam.” Hindus call Shiva “Parabrahman,” which means nothingness. Shiva is portrayed as omnipresent, omnipotent, and even present in the form of one's consciousness. In his Nataraja form, Shiva is worshipped in a human figure; however, he is usually worshipped in the Lingam form. The Sanskrit word Shiva means auspicious, gracious, kind, benevolent, and friendly. Here, the root word “Si” means “in whom all things lay” and “Va” signifies “embodiment of grace.” The Vedas portray Shiva as Rudra, the auspicious one who liberates the soul from the bondage of life and death. Shiva is known by numerous names, such as Vishwanath, Mahadeva, Maheshwar, Shankara, Shambu, Rudra, Neelakantha, Trilokinatha, Hara, Devendra, and Ghrneshwar (lord of compassion). Who wrote the Shiva Purana? The Shiva Purana is attributed to Romaharshana, a disciple of Sage Vyasa belonging to the Suta class. What is a Shivling, according to the Shiv Puran? A Shiva Lingam is an abstract or aniconic representation of Lord Shiva. It is a simple cylinder set inside a yoni and placed on a disc-shaped platform.
It is regarded as a form of spiritual iconography. As per the Shiva Purana, the Shiva Lingam is described as the beginning-less and endless cosmic pillar of fire, the cause of all causes. Lord Shiva is shown emerging from the lingam in the form of a cosmic pillar of fire, proving his superiority over the other gods, Brahma and Vishnu. So, the Shiva Lingam symbolizes the infinite nature of Shiva. Famous Stories from Shiva Purana Here, we narrate one of the most famous stories in the Shiva Purana. The story goes as follows: Once, Lord Vishnu was napping on the serpent king Sheshanaga, with Goddess Lakshmi serving him along with his attendants. It so happened that Lord Brahma came to see him. He grew angry with Lord Vishnu because the latter did not get up to salute him. He argued with Lord Vishnu and claimed to be the protector of the world. However, Lord Vishnu told Brahma that the whole universe was situated within him, and that Brahma himself had emerged from the lotus seated in his navel; so Brahma was his son. They continued to argue with each other, and finally they were ready to fight. Lord Brahma sat on his swan and Lord Vishnu on his Garuda, and they began to fight each other. It was a terrible fight, as both showered deadly weapons on each other. All the Devas witnessed the terrible fight and decided to approach Lord Shiva to end this terrible war between Brahma and Vishnu. So, they went to Kailash, the abode of Lord Shiva. There they saw Lord Shiva sitting in the company of Goddess Uma. All the Devas bowed down before Lord Shiva. Lord Shiva told the Devas that he already knew about the fight between Brahma and Vishnu. Lord Shiva reached the battlefield where the terrible war was going on. He assumed the form of a huge column of fire and stood between them. This column of fire had neither beginning nor end. So, Brahma and Vishnu decided to find its beginning and end.
Lord Vishnu took the form of a boar and headed downwards to find the bottom of the column of fire, while Lord Brahma took the form of a swan and headed upwards in search of its top. Lord Vishnu was unable to find the root of the column and returned to the battleground. Brahma, however, went on flying to prove his supremacy. Lord Shiva laughed to see this struggle of Brahma's, and the Ketaki flower fell from his head. Brahma then asked Ketaki to lie for him, saying that he had found the end of the fire column. Brahma went down and told Vishnu that he had found the top of the column of fire, and Ketaki gave false testimony in support. Then Vishnu bowed before Brahma and said to him, “Oh! Brahma, you are greater than me.” This made Lord Shiva turn red with anger. He returned to his real form and scolded Brahma for lying, as the column had no end. He opened his third eye, and a ferocious being emerged from it: Kala Bhairava, who chopped off Brahma's head. Lord Shiva then told Brahma that no one would worship him, as he had lied before Vishnu. Lord Shiva told Vishnu that because he had followed the path of truth, he would be worshipped as “Satya Narayan,” and his devotees would perform the Satya Narayan Puja on Poornima. Lord Shiva then returned to his abode, Kailash. Here, we narrate another interesting story from the Shiva Purana. This story describes the birth of the Graha (planet) Mangal (Mars). According to the Shiva Purana, Mangal was born out of a drop of Lord Shiva's sweat. It so happened that after the death of Sati, Lord Shiva went into a state of deep Samadhi. When he opened his eyes, a drop of his sweat fell to the ground. The sweat drop took the form of a child and started to cry. Mother Earth took the form of a woman, held the child in her hands and calmed him down. Lord Shiva then told Goddess Earth that this red-colored baby would be called Mangal, and that she would have to bear the role of his mother.
He will always be near your position in the solar system. So, our ancient Rishis knew much about the solar system. Now, it is scientifically proven that Mars is a red-colored planet and is near to the earth.
Signs of death according to Shiv Puran
The Shiva Purana describes a number of signs that indicate the approaching death of a person. Lord Shiva told Goddess Parvati about the following signs of death:
- A person cannot see his or her shadow when one month of life remains
- If the tongue of a person suddenly swells and the teeth fill with pus, then death is very close
- If the tongue, mouth, ears, eyes, and nose become hard like stone, then the person has only a month left in this world
- If a person cannot see any color except black, then death will happen very soon
- If a person begins to see the sun, moon, and sky as red, then death is very close
- When a person dreams of an owl, then death is very near
- If a person's left hand keeps twitching, then death is very near
- If a person cannot locate the Dhruva star in the sky, then the person will die within six months at most
- If a person cannot see his or her reflection in water, a mirror, or oil, then death will come soon
- If a person is suddenly surrounded by blue flies, then he or she will die within a month
- If a crow, vulture, or pigeon sits on someone's head, then death is near
- If a person's color turns pale yellow, then death will occur shortly
- When a person cannot see the light of the sun, moon, stars, and fire, then the person will die within six months
12 Jyotirlinga according to Shiva Purana
The 12 Jyotirlinga, according to Shiva Purana, are as follows:
- Kedarnath in the Himalayas
- Bhima Shankar in Dakinya
- Viswesvara in Varanasi
- Triambakeshwar on the banks of River Gautami
- Somnath in Saurashtra
- Mallikarjuna in Sri Sailam
- Mahakaal in Ujjain
- Amareswara at Omkara
- Vaidyanath in Chitha Bhumi
- Nagesa at Daruka
- Rameshwara in Setu Bandhanan
Can anyone read Shiva Purana at home? Not only the Shiva Purana but all other Puranas as well are sacred religious texts of Hinduism. So, you can certainly keep any of the Hindu Purana literature (including the Shiva Purana) at home, as they are pure and pious religious documents of life at large. The Shiva Purana is a holy religious text that should not only be kept at home but also read and understood well. It goes on to enhance your understanding of life. All these religious texts, including the Shiva Purana, tell the truth of life, which may not be sweet to you. So, you should be prepared to face the truth. The Shiva Purana should be placed in a clean, neat, and sanctified place in your home. You should read it slowly so that you can digest its sayings. It will help you grow as a person. So, now you should have clarity on whether you can read the Shiva Purana at home.
Shiva Purana Book
The Shiva Purana is available in a Kindle Edition, and many publishers have come up with numerous print editions. For example, Geeta Press Gorakhpur has published a hardcover Shiva Purana in four volumes. It is affordably priced and available at all leading online stores such as Amazon. With this, we have come to the end of this post on the Shiva Purana. We hope that you have found this article useful and interesting. Thanks for visiting. We welcome your comments and suggestions. Please share the post across major social networking channels.
<urn:uuid:3e77f215-642b-41fe-be64-2173ecfe900e>
CC-MAIN-2020-05
https://www.hindutsav.com/shiva-purana/
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250591763.20/warc/CC-MAIN-20200118023429-20200118051429-00057.warc.gz
en
0.980809
2,794
3.53125
4
[ 0.09754529595375061, 0.03776609152555466, -0.5667887926101685, -0.3094409704208374, -0.4802462160587311, -0.19670544564723969, -0.11982612311840057, 0.2933046817779541, 0.38806992769241333, 0.3994593918323517, -0.24817465245723724, -0.7828297019004822, -0.0192024614661932, -0.1077804863452...
13
The Shiva Purana is a very famous Hindu religious text belonging to the Purana genre of Sanskrit Texts in Hinduism. It is an integral part of the Shaivism literature corpus. The Shiva Purana is primarily dedicated to Lord Shiva and Goddess Parvati. However, it also references and reveres all gods. This post shares with you details about Shiva Purana. The original manuscript of Shiva Purana consisted of 100,000 verses that were set out in twelve Samhitas (books). It has been written by Romaharshana, who was the disciple of Sage Vyasa, belonging to the Suta class. Its surviving manuscripts have been found in many different versions and content. For example, one major version has seven books; another version has six books, and the third version is traced to the medieval Bengal region of the Indian Subcontinent that has two large sections, namely the “Purva-Khanda” and the “Uttara Khanda.” However, like other Puranas, Shiva Purana also existed as a living text that was occasionally edited, recast, as well as revised over a long period. It is estimated that the oldest manuscript of surviving texts had been likely composed around 10th to 11th century CE. Some of its chapters were likely composed after the 14th century. The Shiva Purana contains several chapters that are centered on Shiva cosmology, mythology, and relationship with Gods, Yoga, Ethics, Pilgrimage Sites, Bhakti, Rivers, as well as Geography and other topics. The Shiva Purana also throws significant insights on Advaita Vedanta philosophy. The text proves an important source of historical information on the theology behind Shaivism around 2nd-millennium CE. Who is Shiva? Shiva, who is also called Mahadeva or Bholenath, is one of the main deities of Hinduism. Shiva is considered as the “Destroyer” within the Trimurti that includes Brahma and Vishnu. According to the Shaivism tradition, Shiva is considered to be one of the supreme beings who creates, protects, as well as transforms the Universe. 
In fact, Hindu scriptures have both benevolent and fearsome depictions of Lord Shiva. As far as his benevolent aspects are concerned, Shiva has been described as an omniscient Yogi who leads the life of an ascetic on Mount Kailash. He is also depicted as a householder having Parvati as his wife, and Ganesh and Kartikeya as his two children. However, in his fierce aspects, Shiva is portrayed as slaying the demons. Shiva is also known as “Adiyogi” and is regarded as the patron God of yoga, meditation, and arts. Shiva is shown with a serpent around his neck. He adorns the crescent Moon, and the holy Ganga River flows from his matted hair. He adorns the third eye on his forehead and holds the Trishul or Trident as his weapon. He also holds the Damuru drum. He is usually worshipped in the iconic form of Shiva “Lingam.” The Hindus call Shiva as “Parabhrahman” which means nothingness. Shiva is portrayed to be omnipresent, omnipotent, and even present in the form of one’s consciousness. In his Nataraja form, Shiva is worshipped in a human figure format. However, he is usually worshipped in the Lingam figure. The Sanskrit word, Shiva, means auspicious, gracious, kind, benevolent, and friendly. Here, the root word “Si” means “in whom all things lay” and “Va” signifies “embodiment of grace.” The Vedas portray Shiva as Rudra who is the auspicious one and liberates the soul from the bondage of life and death. Shiva is known by numerous names such as Vishwanath, Mahadeva, Maheshwar, Shankara, Shambu, Rudra, Neelakantha, Trilokinatha, Hara, Devendra, and Ghrneshwar (lord of compassion). You may like Lord Shiva Mantras Who wrote Shiva Purana? The Shiva Purana was written by Romaharshana, who was the disciple of Sage Vyasa, belonging to Suta class. What is Shivling, according to Shiv Puran? A Shiva Lingam is an abstract or aniconic representation of Lord Shiva. It is a simple cylinder that is set inside a yoni and placed within a disc-shaped platform. 
It is regarded as a form of spiritual iconography. As per the Shiva Purana, the Shiva Lingam has been described as the beginning-less and endless cosmic pillar of fire. It is the cause of all causes. Lord Shiva is shown as emerging from the lingam in the form of a cosmic pillar of fire that proves his superiority over other gods such as Brahma and Vishnu. So, the Shiva Lingam symbolizes the infinite nature of Shiva. Famous Stories from Shiva Purana Here, we narrate one of the most famous stories in Shiva Purana. This story goes as follows: Once, Lord Vishnu was having a nap on the serpent king Sheshanaga. Goddess Lakshmi was serving him along with his attendants. It so happened that Lord Brahma came to see him. He went angry with Lord Vishnu as the later did not get up and saluted him. He argued with Lord Vishnu and said that He was the protector of the world. However, Lord Vishnu told Brahma that the whole universe is situated within him. He told Brahma that he had emerged from the navel seated in the lotus. So, Brahma was his son. They continued to argue with each other, and finally, they were ready to fight. Lord Brahma sat on his Swan, and Lord Vishnu sat on his Garuda and started to fight with each other. It was a terrible fight as both showered deadly weapons on each other. All the Devas witnessed the terrible fight and decided to approach Lord Shiva to end this terrible war between Brahma and Vishnu. So, they went to Kailash, the abode of Lord Shiva. There they saw Lord Shiva sitting in the company of Goddess Uma. All the Devas bowed down before Lord Shiva. Now, Lord Shiva told the Devas that he already knew about the fight between Brahma and Vishnu. Lord Shiva reached the battlefield where the terrible war was going on. He assumed the form of huge column fire and stood in between them. This column of fire had neither beginning nor any end. So, Brahma and Vishnu decided to find its beginning and end. 
Lord Vishnu attained the form of a Boar and headed downwards to find the end of the column of fire. However, Lord Brahma attained the form of a Swan and headed upwards in search of the end of the column of fire. Now, Lord Vishnu was not able to find the root of the column and returned to the battleground. However, Brahma went on flying to prove his supremacy. Lord Shiva laughed to see this struggle of Brahma, and the Ketaki flower fell from his head. Then Brahma asked Ketaki to lie for him that he had found the end of the fire column. Now, Brahma went down and told Vishnu that he had found the beginning of the column of fire. Ketaki gave its proof. Then, Vishnu nodded before Brahma and said to him, “Oh! Brahma, you are greater than me.” This made Lord Shiva turn red with anger. He came to his real form and scolded Brahma that he was lying as there was no beginning to this column. He opened his third eye, and a ferocious being emerged from it. He was Kala Bhairava, who chopped the Brahma’s head. Now, Lord Shiva told Brahma that no one would worship him as he had lied before Vishnu. Lord Shiva told Vishnu that as he had followed the path of truth, he would be worshipped as “Satya Narayan,” and your devotees would perform Satya Narayan Puja on Poornima. Now, Lord Shiva returned to his abode Kailash. Here, we narrate another interesting story from Shiva Purana. This story describes the birth of Mangal (Mars), Graha. According to the Shiva Purana, Mangal was born out of Lord Shiva’s sweatdrop. It so happened that after the death of Sati, Lord Shiva went into the state of deep Samadhi. When he opened his eyes, his sweatdrop fell down. His sweat drop took the form of a child and started to cry. The mother earth took the form of a woman and held the child in his hands and calmed him down. Now, Lord Shiva told Goddess Earth that this red colored baby would be called Mangal, and you would have to bear the role of his mother. 
He will always be near your position in the solar system. So, our ancient Rishis knew much about the solar system. Now, it is scientifically proven that Mars is a red-colored planet and is near to the earth. Signs of death according to Shiv Puran Shiva Purana describes eleven signs that indicate the death of a person. Lord Shiva told Goddess Parvati about the following signs of death: - A person cannot see his or her shadow if one month is left in a death - If the tongue of a person gets swollen suddenly and teeth are filled with puss, then it means that death is very close - If the tongue, mouth, ear, eyes, and nose become hard like a stone, then it means that the person has a month only to leave this world - If a person is not able to see any color except black color, then it means that death will happen very soon - If a person begins to see the sun, moon, and sky as red, then it means that death is very close - When a person dreams of an owl, then it indicates that death is very near - If a person left-hand goes on twitching, then it signals that death is very near - If a person is not able to locate the Dhruva star in the sky, then it means that the person will die within six months maximum - If a person is not able to view his or her reflection in the water, mirror, and oil, then it means that death is going to happen soon - If a person gets suddenly surrounded by blue flies, then it means that he or she will die within a month - If a crow, vulture, or pigeon sits on someone’s head, then it means that death is near - If the color of a person turns into pale yellow, then it is indicative that death will occur shortly - When a person is not able to see the light of the sun, moon, stars, and fire, then it means that the person will die within six months 12 Jyotirlinga according to Shiva Purana The 12 Jyotirlinga, according to Shiva Purana, are as follows: - Kedarnath in the Himalayas - Bhima Shankar in Dakinya - Viswesvara in Varanasi - Triambakeshwar on the 
banks of River Gautami - Somnath in Saurashtra - Mallikarjuna in Sri Sailam - Mahakaal in Ujjain - Amareswara at Omkara - Vaidyanath in Chitha Bhumi - Nagesa at Daruka - Rameshwara in Setu Bandhanan Can anyone read Shiva Purana at home? Not only Shiva Purana but also all other Puranas as well are sacred religious texts of Hinduism. So, you can certainly keep any of the Hindu Purana literature (including Shiva Purana) at home as they are pure and pious religious documents of life at large. The Shiva Purana is a holy religious text that should not only be kept at home but read and understood well. It goes on to enhance your understanding of life. All these religious texts, including Shiva Purana, tell the truth of life, which may not be sweet to you. So, you should be prepared to face the truth. The Shiva Purana should be placed in a clean, neat, and sanctified place in your home. You should read them slowly so that you can digest their sayings. It will help to resolve you as a person. So, now you should have got clarity on the subject of whether you should read Shiva Purana at home. Shiva Purana Book The Shiva Purana Book is available in Kindle Edition. Many other publishers have also come up with numerous editions of Shiva Purana. For example, Geetapress Gorakhpur has come up with Hardcover Shiva Purana in four volumes. It is cheaply priced and available at all leading online stores such as Amazon. With this, we have come to the end of this post on Shiva Purana. We hope that you have found this article useful and interesting. Thanks for visiting. We welcome your comments and suggestions. Please share the post across major social networking channels.
Michael Collins played a major part in Ireland’s history after 1916. Collins had been involved in the Easter Uprising in 1916, but he played a relatively low-key part. It was after the Uprising that Collins made his mark, leading to the treaty of 1921 that gave Ireland dominion status within the British Empire. Michael Collins was born in October 1890 in County Cork. This area was a heartland of the Fenian movement. His father, also called Michael, instilled in his son a love of Irish poetry and ballads. At school, Michael was taught by a teacher called Denis Lyons, who belonged to the Irish Republican Brotherhood, and the village blacksmith, James Santry, was a Fenian. He told the young Michael stories of Irish patriotism, and in such an environment Michael grew up with a strong sense of pride in Ireland and of being Irish. When he was 15, Collins emigrated to London. He worked as a clerk for the Post Office and lived within the large Irish community in London. This community was never absorbed into London’s society. There were many people in London who felt that the Irish undercut the wages paid to other workers, and many in the Irish community felt ostracised. While in London, Collins joined Sinn Fein and the Gaelic League, and in 1909 he became a member of the Irish Republican Brotherhood. In 1916, Collins returned to Ireland to take part in the Uprising in Dublin. He fought alongside others in the General Post Office. He played a relatively minor part and was not one of the leaders who were court-martialled.
(Image: the inside of the General Post Office after the surrender.)
Collins was sent to Richmond Barracks and then to Frongoch internment camp in Wales. He was released in December 1916 and immediately went back to Ireland. His goal now was to revitalise the campaign to get independence for Ireland. 
Collins was elected to the executive committee of Sinn Fein and led a violent campaign against anything that represented British authority in Ireland – primarily the Royal Irish Constabulary (RIC) and the Army. The murder of RIC officers brought a tit-for-tat policy from the British. Ireland, post-World War One, was a dangerous country to be in. The more killings were carried out by Collins and the men he led in the newly formed Irish Republican Army (IRA), the more the British responded in kind. The notorious Black and Tans and the ‘Auxies’ were used by the British Army to spread fear throughout Ireland (though primarily in the south and west). Violence led to more violence on both sides. On November 21st, 1920, the IRA killed 14 British intelligence officers. In reprisal, the British Army sent armoured vehicles onto the pitch at Croke Park, where people were watching a football match, and opened fire on the crowd. Twelve people were killed. In May 1921, the IRA set fire to the Custom House in Dublin – one of the symbols of Britain’s authority in Ireland. However, many of those in the Dublin IRA were captured as a result of this action. The British Prime Minister, David Lloyd George, was given some blunt advice by his military commanders in Ireland: “Go all out or get out” – meaning that the army should be allowed to do as it wished to resolve the problem, or, if this was not acceptable at a political level, the British should pull out of Ireland, as the army was in an unwinnable position as matters stood. Eamonn de Valera, considered the leading republican politician in Ireland, sent Collins to London in October 1921 to negotiate a treaty. It was generally recognised by both sides that the situation as it stood in Ireland could not be allowed to continue. The difficult negotiations took three months before the treaty was signed by Collins and Arthur Griffith. 
In December 1921, it was agreed that Ireland should have dominion status within the British Empire; i.e. Ireland could govern itself but would remain within the British Empire. The six northern counties were allowed to contract out of the treaty and remain part of the United Kingdom. To Collins, the treaty was simply the start of a process that, in his eyes, would lead to full independence for what was now the Irish Free State. Collins is said to have commented when he signed the treaty: “I tell you, I have signed my death warrant.” There were many in the south who believed that Collins had betrayed the republican movement. These people, including de Valera, wanted an independent and united Ireland. Some believed that Collins had sold out to the British government. Few seemed to realise that Collins was not a politician and that he had been put into a situation in which he had no experience. He was up against British politicians who were experienced in delicate negotiations. Some have argued that de Valera deliberately put Collins in this situation, knowing that if he came back with an unacceptable treaty it would seriously damage Collins’s reputation and weaken whatever political kudos he had in Ireland – thereby removing any potential threat he may have posed to de Valera at a political level. It is known that Collins did not feel he had the necessary knowledge and experience to get what was wanted, and he asked de Valera to send others instead of him. Some, such as Countess Markievicz, openly called Collins a traitor to the cause. The Dáil accepted the treaty by just seven votes. This, in itself, seemed a justification of what Collins had set out to achieve. Arthur Griffith replaced de Valera as president of the Dáil, and Collins was appointed chairman of the provisional government which would take over Ireland once the British had left. 
Those who did not support the treaty fell back on violence and a civil war took place in Ireland from April 1922 to May 1923. The IRA split into the ‘Regulars’ (those who supported the treaty) and the ‘Irregulars’ (those who did not). On August 22nd, 1922, Collins journeyed to County Cork. He was due to meet troops of the new Irish Army. His car was ambushed at a place called Beal na mBlath and Collins was shot dead. To this day, no-one is completely sure what happened or who killed him. No-one else was killed in the ambush. Collins’ body lay in state in Dublin for three days and thousands paid their respects. Thousands also lined the streets for his funeral procession.
Definition of Wheeze
To breathe hard, and with an audible piping or whistling sound, as persons affected with asthma. | A piping or whistling sound caused by difficult respiration. | An ordinary whisper exaggerated so as to produce the hoarse sound known as the "stage whisper"; a forcible whisper with some admixture of tone.

How to use Wheeze in a Sentence?
1. Mrs. Hearty began to shake and wheeze with laughter, and Millie stood looking at Bindle.
2. He was a glutton, and stuffed himself so at meals that he did little but choke and wheeze through the latter half of them.
3. He did not finish the sentence before the engine suddenly stopped with a sort of wheeze and groan which showed something was wrong.
4. When he tried to laugh, his lips trembled convulsively and the only noise produced was a hoarse wheeze like the blowing of bellows.
5. I plainly heard the wheeze of blood in its throat, and the sound, like a death-rattle, affected me powerfully.
6. He did nothing but wheeze for a good minute, and when he spoke it was with insinuating civility, in his best English.
7. He speaks in a bass voice, with a prolonged rattle and wheeze in his throat, like an old-fashioned clock, which buzzes before it strikes.
8. There was no necessity to stay in camp because one man happened to wheeze and cough, he said, and anyway, he could do that just as well when they were moving.
9. I opened my mouth, and instead of the usual vibrating words of love and compliment, there came forth a faint wheeze such as a baby with croup might emit.
1) What were the aims of the League? The aims of the League were to: improve living and working conditions; encourage cooperation in business and trade; discourage aggression; and encourage disarmament.

2) What happened to Wilson when he returned to the USA after signing the Treaty of Versailles? Wilson was too sick to take part in the 1920 election. Warren Harding, who was the opposite of Wilson in almost every way, won the presidency and decided that the USA would not be part of the League of Nations, because he wanted the USA to stay isolationist. Wilson was a Democrat and Harding was a Republican.

3) Why did German immigrants in the USA not want to join the League? The German immigrants associated the League with the Treaty of Versailles, which punished Germany.

4) What economic reason did the USA give to stay out of the League? The USA argued that it could suffer economically: it would have to stop trading with any country that was behaving aggressively, and it would have to pay for troops.

5) How did Americans feel about imperialism in Europe? They were anti-empire; they did not like European empires at all and wanted to focus on making their own country stronger and more powerful.

6) Why did Poland invade Vilna? Why did the League not act on it? Poland invaded Vilna because it had territorial disputes with Lithuania, and the League did not act because France and Britain were not prepared to act.

7) Why was Upper Silesia an important region for Poland and Germany? Upper Silesia was important for two reasons: it was rich in minerals, and it lay on the border between Germany and Poland.

8) How did the League solve the conflict in Vilna? The League did not solve it. The League tried to persuade the Poles to withdraw, but they did not obey.

9) What did the League decide to do about the Aaland Islands? When Sweden and Finland began threatening to go to war, both of them accepted the League's ruling that the islands belonged to Finland.

10) Why did Mussolini invade Greece in the Corfu conflict? The dispute concerned the border between Greece and Albania. The Conference of Ambassadors was given this job, and it appointed an Italian general called Tellini to supervise it. On 27 August, while they were surveying the Greek side of the frontier area, Tellini and his team were ambushed and killed. The Italian leader Mussolini was furious and blamed the Greek government for the murders. On 29 August he demanded that it pay compensation to Italy and execute the murderers. The Greeks, however, had no idea who the murderers were. On 31 August Mussolini bombarded and then occupied the Greek island of Corfu.

11) Why was the League criticised over the resolution of the Corfu conflict? Because it showed that anyone could do what they wanted if they were backed up by Britain and France.

12) How did the Geneva Protocol weaken the League? While Britain was holding a general election, a new government was chosen that refused the protocol, and so the Geneva Protocol weakened the League instead of strengthening it.

13) Why did Greece invade Bulgaria in 1925? In 1925 Greece invaded Bulgaria because of an incident on the border in which Greek soldiers were killed.

14) Why did Greece complain that the League "seemed to have one rule for the large states (such as Italy) and another for the small ones"? Because it was not fair to the smaller states: the same League appeared to treat large and small states differently.
Trimmed stone was widely used in Europe for all types of buildings. In some areas, such as northern Germany, quarries were extremely rare because suitable stone was not available, so building materials had to be transported over long distances. In the north granite was available, but for large projects that stone was too hard to work into shape, and the transportation costs could easily exceed the material costs. This was one of the reasons why people did not use expensively crafted stone but clay bricks, which were widely available. Due to glacier movements in earlier times, parts of northern Germany, such as the Mecklenburg lake district (Mecklenburgische Seenplatte), are covered with boulders: stones "cut" and "shaped" by the force and weight of moving glaciers, as in a mill. People built with boulders, but because of their roundish shape large quantities of mortar were needed, and the stones had to be collected and selected to fit together into a wall. Bricks have been in use for a very long time, but only from the 12th century onwards were moulds notably used to produce bricks of a uniform size on a large scale. With the use of bricks, the Brick Gothic style developed in some areas. The basic concepts and shapes of the "usual" Gothic style remained, but some adaptations had to be made in the details (e.g. fewer decorations, less lacework, close attention to the quality of each brick used in highly stressed places, and the use of colored bricks). For a long time in history the use of bricks was unusual, but in the 12th century it reappeared. Bricks are comparatively small, lightweight units that were easier to transport than stone; nevertheless, brickworks were situated close to the site of large projects. Bricks could be produced in different sizes (depending on the moulds used), but for churches only a few sizes were used (it was much easier to handle only 2-5 sizes). 
The typical brick shape is a rectangular solid whose longer side is a bit more than double the width, so that two bricks placed crosswise in the course above span the same length including the butt joint (see sketch below). In ancient times bricks were not standardized, so each building site had its own brick sizes. The mean brick size ranged from 28 × 15 × 9 cm to 30 × 14 × 10 cm (1 foot = 0.33 yards = 30.48 centimeters = 12 inches), and the mortar gaps were usually 1.5 cm [3a]. The dimensions of the bricks in some ways determined the length and thickness of the walls; where possible, people avoided cutting bricks into pieces or using another brick size. To fire the dried clay bricks, they were stacked with some space around each brick, the gaps were filled with coal, and the whole stack was covered and set on fire. Firing took several days (up to two weeks) and may be compared to the production of coking coal rather than a "big blaze". The brick quality varied strongly depending on the heat development and the brick's position in the stack. (The mean brick size is a result of that, too: if the bricks were too big, they would not be fired consistently inside.) Using clay that contained lime could reduce the required temperature; overheated bricks did not bond well with mortar, while bricks that were not heated enough had pores that were too big and absorbed water (they were not weatherproof, especially under icy conditions). So the reject rate (Ausschußquote) was very high. Bricks of high quality could be recognized by their color, and only these could be used in highly stressed places. To increase weather resistance, bricks were sometimes glazed (glasiert); this also allowed different colors, but it was expensive.
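The crosswise rule described above can be checked with a little arithmetic. The following is a minimal sketch (the function name and the use of the 1.5 cm joint are illustrative assumptions, not from the source):

```python
# Crosswise rule: one brick laid lengthwise should span the same distance
# as two bricks laid crosswise plus one mortar joint.
JOINT_CM = 1.5  # the typical mortar gap quoted above

def brick_length(width_cm, joint_cm=JOINT_CM):
    """Brick length implied by the crosswise rule: L = 2 * W + joint."""
    return 2 * width_cm + joint_cm

# A 14 cm wide brick implies a length of 29.5 cm, which fits the
# quoted 30 x 14 x 10 cm bricks well.
print(brick_length(14))
```

For the 15 cm width the rule would give 31.5 cm rather than the quoted 28 cm, which illustrates how loosely the rule was applied before bricks were standardized.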
<urn:uuid:0409a178-acd6-4ea9-ad07-772ad861d17d>
CC-MAIN-2020-05
http://www.kirchenbau-mittelalter.de/en/baustoffe/
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250613416.54/warc/CC-MAIN-20200123191130-20200123220130-00342.warc.gz
en
0.981708
782
3.765625
4
[ -0.005387926008552313, 0.26899224519729614, 0.1601693332195282, 0.07620202004909515, -0.32435640692710876, 0.3449685573577881, -0.2548954486846924, 0.10602572560310364, -0.21703588962554932, 0.02073548547923565, -0.2075621634721756, -0.5505277514457703, 0.05829297751188278, 0.3574717044830...
8
Largely trimmed stone was used in Europe for any type of buildings. In some areas like in the north of Germany quarries were extremely rare because suitable stones were not available so building materials had to be transported for large distances. In the north granite was available but for large projects that stone was too hard to bring it into shape. The transportation costs could easily exceed the material costs. So this was one of the reasons why people didn’t use expensive crafted stone but clay / bricks that was largely available. Due to glacier movements in early times parts of northern Germany like e.g. the Mecklenburg lake district (mecklenburgische Seenplatte) are covered with ground boulder stone. The stones were „cut“ and „shaped“ by the forces and weight of moving glaciers like in a mill. People used boulder but due to its roundish shape large quantities of mortar were needed. The stones had to be collected and selected to fit together into a wall. Bricks are in use for a very long time, but not before the 12th century dishes or forms were notably used to produce bricks in the same size on large scale. With the use of bricks in some areas the brick gothic developed. The basic concepts and shapes of the „usual“ gothic style remained but in details some adoptions had to be done (e.g. less decorations, less lacework, a very close look at the quality of each brick stone if used in highly stressed places, the use of colored bricks). For a long time in history the use of bricks was unusual but in the 12th century it reappeared. The bricks are comparatively small, lightweight units that were easier to transport than stone. Nevertheless brickworks were situated closely to the site of large „projects“. The bricks could be produced in different sizes (depending on the used forms or dishes) but for churches only a few sizes were used (it was a lot easier to handle only 2-5 sizes). 
The typical brick shape is a rectangular solid, with its longer side a bit longer than double the width so that two crosswise placed bricks above have the same length including the butt joint (see sketch below). In ancient times bricks were not standarized so each building site had its own brick sizes. The mean brick size was between 28 × 15 × 9 cm³ up to 30 × 14 × 10 cm³ (1 foot = 0,33 yards = 30,48 centimeters = 12 inches). The gaps were usually 1,5cm [3a]. The dimensions of the bricks set in some ways the length / thickness of the walls; usually people avoided to cut bricks into pieces or use another brick size if possible. In order to burn the dried clay bricks they were stacked with some space around each brick, the gaps were filled with coal, the whole thing was covered and set on fire. It took several days (up to two weeks) and may be compared to the production of coking coal instead of a „big blaze“. The brick quality varied strongly depending on the heat development and the brick position. (The mean brick size is a result of that too, if the bricks were too big they wouldn’t be consistent inside). The use of clay that contains lime could reduce the needed temperature; overheated bricks did not combine well with mortar; bricks that were not heated enough had too big pores and absorbed water (were not weatherproof especially under icy conditions). So the junk quota (Ausschußquote) was very high. Bricks of high quality could be recognized by their color and only these could be used in highly stressed places. In order to increase the resistance to weather brickssometimes were enameled (glasieren); that also allowed different colors, too, but was expensive.
The high jump event dates back to the 1800s, when it was created in Scotland. Over the years, strategies for the high jump have changed many times: if one jumper discovers a new method of jumping, other athletes copy it so that they don't miss out on any new advantage. In the event, athletes attempt to jump over a horizontal bar and must clear it without knocking it down. If an athlete grazes the bar but it stays up, the jump counts as successful. After each athlete takes their attempt, the bar is raised and the next round of attempts begins. The amount the bar is raised varies by competition, but a typical increment is 2 inches. Typically, jumpers are eliminated if they fail to clear the bar three consecutive times, though rules like this can vary by league or competition. The winner is the athlete who clears the bar at the greatest height. The high jump appeals to a very specific type of athlete: competitors must be very nimble to clear the bar, and it also helps greatly to be tall and have strong legs. They share many of the same traits as hurdlers. The first Summer Olympics, held in 1896, featured the high jump, and it has been in every Summer Olympics since. The high jump is also notable for women, as it was the only Summer Olympic sport open to women from 1928 to 1948.
2nd Street from Market, looking south, 1866; Rincon Hill rises at the end. The word "Rincon" means "inside corner" in Spanish. Before the 1860s the area surrounding Rincon Center was a cove that extended to what is now First Street; Rincon is the name given to the hill at the inside corner of the cove. In the early 1850s, wealthy pioneers built large homes on the crest of Rincon Hill, chosen for its views of the city. Noted author Bret Harte lived on Silver Street. Recent excavations in the area also provide evidence of Chinese entrepreneurship in the 1880s, when they set up small businesses, including laundries and small supply stores, to serve the needs of the boarding-house residents and others working at the docks on the south side of the hill. Some of these artifacts can be seen in the Rincon Center display cases. The Chinese also had a small fishing village in this area, which disappeared along with South Beach when the area was filled in the late 1860s. In 1868, Second Street was cut across the hill, which made it no longer a desirable place for the wealthy, who moved to Nob Hill and other neighborhoods to the north. Homes were turned into rooming houses, and many warehouses and hospitals, including Irish, German, French, British, Italian and Swiss hospitals, were built. With the development of the city, however, parts of Rincon Hill were cut down and much of this property was lost. The fire of 1906 wiped out the remaining vestiges of the formerly wealthy neighborhood; what still exists of the hill is largely hidden beneath the entrance to the Bay Bridge. Rincon Hill had also been the bastion of the industrial and shipping bosses. As they began to move out and maritime unions grew increasingly strong, union leaders began to live in that area and in SOMA in general.
In their successful heyday, they built several large union halls, and then, sadly, watched their influence fade from the 1950s until today, when there is largely no maritime industry at all. - Northern Calif. Coalition on Immigrant Rights