Koo Stark Kathleen Norris Stark (born April 26, 1956), better known as Koo Stark, is an American photographer and actress, known for her relationship with Prince Andrew. As a photographer, she continues to hold solo exhibitions. She is also a patron of the Julia Margaret Cameron Trust. Stark was born in New York. Her parents were Wilbur Stark (1912–1995), a writer and producer, and Kathi Norris (1919–2005), a writer and television presenter in New York City. She is the youngest of three children, the others being Pamela (born 1946) and Brad (born 1952). At the time of her birth, the family was living in Manhattan. Her grandfather, Edwin Earl Norris (1876–1957) was a cabinetmaker and musician, playing the French horn and the viola in the Newark Symphony Orchestra. Her mother's family were Presbyterians. After a divorce in the 1960s, her mother remarried. Koo Stark attended the Hewitt School in New York and the Glendower Preparatory School in Kensington, London. After training at a stage school, she embarked on an acting career. Her first film role was in the comedy "All I Want Is You... and You... and You..." (1974), produced by her father. In 1975 she appeared in "Las adolescentes" (The Adolescents), opposite Anthony Andrews, and starred in an episode of "Shades of Greene". Also that year she had an uncredited role as a bridesmaid in "The Rocky Horror Picture Show". Her best remembered performance is the lead role in the avant-garde film "Emily" (1976), directed by Henry Herbert, 17th Earl of Pembroke. Uncertain whether to accept the part, Stark did so on the advice of Graham Greene, with whom she had worked the year before. Of working with her in "Emily", actor Victor Spinetti later wrote "I found Koo Stark to be an enchanting girl and terribly bright and interesting". She also appeared in "Cruel Passion" (1977), a film based on the novel "Justine". 
Around the same time, she played the part of Camie Marstrap in "Star Wars" (1977); the scenes in which she appeared were cut from the film before its original release, but can be seen in "" (1998). Stark also began to work as a fashion model, particularly for Norman Parkinson. In February 1981, she was at the National Theatre as an understudy in the Edward Albee play "Who's Afraid of Virginia Woolf?" She appeared in the comedy "Eat the Rich" (1987), and then featured in "Timeslides", an episode of the sci-fi show "Red Dwarf" (1989), playing Lady Sabrina Mulholland-Jjones, the fiancée of a more successful Dave Lister in a parallel universe. In September 1987, she returned to the stage, taking the part of Vera Claybourne in Agatha Christie's "And Then There Were None" at the Duke of York's Theatre. The "London Theatre Record" posed the question "Why has a girl so obviously three-dimensional chosen a part so obviously two-dimensional?" She played Miss Scarlett in the 1991 series of "Cluedo", succeeding Toyah Willcox and befriending Rula Lenska. Stark has worked as a photographer since the 1980s, and may have been the first person to turn the tables on the pursuing paparazzi by taking photos of them. Prince Andrew has told how in 1983 a photographic printer, Gene Nocon, invited Stark to take photographs of people taking photos of her, for his exhibition, "Personal Points of View", planned for October. She persuaded Nocon to include Andrew's work as well. Her early photographs led to a book deal, for which she took lessons from Norman Parkinson. She travelled to Tobago, where he lived, and he became her mentor. Her book "Contrasts" (1985) included about a hundred of her photographs. She went on to study the work of leading photographers, including Angus McBean, whom she met and photographed, developing her interests in photography to include reportage, portraits, landscapes, still life, and other work. 
The book "Contrasts" was launched at Hamiltons Gallery, London, in September 1985, at an exhibition of the same name. In 1994, the Gallery Bar at the Grosvenor House Hotel in Park Lane hosted an exhibition called 'The Stark Image', forty photographs by Stark, including several previously unpublished. In 1998, her work was featured at the Como Lario in Holbein Place, Belgravia. In July 2001 she had an exhibition called 'Stark Images' at the Fruitmarket Gallery in Edinburgh, duplicated from June to July 2001 at Dimbola Lodge on the Isle of Wight. A solo exhibition of portraits was held at the Winter Gardens, Ventnor, from September to October 2010, and another at Dimbola Lodge from February to April 2011. On 22 April 1987, a charity auction at Christie's, St James's, for the Campaign to Protect Rural England, featured signed work by David Bailey, Patrick Lichfield, Don McCullin, Terence Donovan, Fay Godwin, Heather Angel, Clive Arrowsmith, Linda McCartney, Koo Stark, and fifteen others. Views by Stark, including some of Kirby Muxloe Castle, were in G. H. Davies's "England's Glory" (1987), a CPRE book launched at the same time. Pictures by Stark have appeared in "Country Life" and other magazines. Several of her portraits are in the National Portrait Gallery, and her work is also in the collections of the Victoria and Albert Museum, both in London. A Leica user, Stark has said her camera transcends mere function and is a personal friend. A solo exhibition hosted by the Leica gallery in Mayfair in May 2017 was entitled "Kintsugi", a Japanese word for a way of renovating things that have been broken. Stark explained the title: "Kintsugi is a way of learning to see individual beauty, and to appreciate the value of experience and honesty. It is the antithesis of digital, airbrushed, Photoshop-homogenised 'beauty'." In August the exhibition was repeated in Manchester, to mark the opening of a new Leica store there. 
Stark met Prince Andrew in February 1981, and they were close for some two years, before and after his active service in the Falklands War. Tina Brown has claimed that this was Andrew's only serious love affair. In October 1982 they took a holiday together on the island of Mustique. According to Lady Colin Campbell, Andrew was in love, and the Queen was "much taken with the elegant, intelligent, and discreet Koo". However, in 1983, after 18 months of dating, they split up under pressure from the Queen. In 1988, Stark brought a successful libel action against "The Mail on Sunday" over an untrue story headed 'Koo dated Andy after she wed'. In 1989, "The Spectator" reported that she had received £300,000 from one newspaper "for years of inaccurate persecution" and was also collecting money from others. In 1997, Prince Andrew became the godfather of Stark's daughter, and in 2015, when the Prince was accused by Virginia Roberts over the Jeffrey Epstein connection, Stark came to his defence, stating that he was a good man and that she could help to rebut the claims. Stark married Tim Jefferies, manager of a photographic gallery, in August 1984, at St Saviour's, Hampstead, with the minister, Christopher Neil-Smith, commenting that "It was such a quiet affair you wouldn't have known it was happening." They stayed together for a year and later divorced. About 1993, Stark was hit by a taxi in Old Compton Street, losing two teeth and suffering a deep wound to her forehead where her camera struck her. The accident left her temporarily disfigured, but the wound eventually closed, leaving a small scar just under the hairline. She has been a practising Buddhist since meeting the Dalai Lama. She was later engaged to Warren Walker, an American banker, but he cancelled their wedding before the birth of their daughter, Tatiana, in May 1997. In 2002, Stark was diagnosed with breast cancer and underwent a double mastectomy and chemotherapy, causing her to lose her hair for a time. 
In another libel action in 2007, Stark won an apology and substantial damages from "Zoo Weekly" magazine, which had described her as a porn star. She commented "I am relieved that my name has been cleared of this false, highly damaging and serious allegation which has been proved to be completely untrue." In 2011, "The Daily Telegraph" called her an early Kate Middleton prototype and suggested that if she had not appeared in the film "Emily" early in her career she might have gone on to become the Duchess of York. In February 2015, the "Daily Mail" printed an apology and retraction of the claim it had printed in June 2013 that Stark had stolen a valuable painting from Warren Walker. Stark continues to live in London and is a member of the Chelsea Arts Club. She is a Patron of the Julia Margaret Cameron Trust.
https://en.wikipedia.org/wiki?curid=17284
Kliment Voroshilov Kliment Yefremovich Voroshilov (Ukrainian: "Klyment Okhrimovyč Vorošylov"; 4 February 1881 – 2 December 1969), popularly known as Klim Voroshilov, was a prominent Soviet military officer and politician during the Stalin era. He was one of the original five Marshals of the Soviet Union (the highest military rank of the Soviet Union), along with Chief of the General Staff of the Red Army Alexander Yegorov and three senior commanders, Vasily Blyukher, Semyon Budyonny, and Mikhail Tukhachevsky. Voroshilov was born in the settlement of Verkhnyeye, Bakhmut uyezd, Yekaterinoslav Governorate, Russian Empire (now part of Lysychansk city in Luhansk Oblast, Ukraine), into a railway worker's family of Russian ethnicity. According to the Soviet Major General Petro Grigorenko, Voroshilov himself alluded to the heritage of his birth-country, Ukraine, and to the previous family name of "Voroshilo". Voroshilov joined the Bolshevik faction of the Russian Social Democratic Labour Party in 1905. He headed the Petrograd Police during 1917 and 1918. Following the Russian Revolution of 1917, Voroshilov became a member of the Ukrainian Council of People's Commissars and Commissar for Internal Affairs along with Vasiliy Averin. He was well known for aiding Joseph Stalin in the Revolutionary Military Council (led by Leon Trotsky), having become closely associated with Stalin during the Red Army's 1918 defense of Tsaritsyn. Voroshilov was active as a commander of the Southern Front during the Russian Civil War and the Polish–Soviet War while with the 1st Cavalry Army. As Political Commissar serving co-equally with Stalin, Voroshilov was responsible for the morale of the 1st Cavalry Army, which was composed chiefly of peasants from southern Russia. Voroshilov served as a member of the Central Committee from his election in 1921 until 1961. 
In 1925, after the death of Mikhail Frunze, Voroshilov was appointed People's Commissar for Military and Navy Affairs and Chairman of the Revolutionary Military Council of the USSR, a post he held until 1934. His main accomplishment in this period was to move key Soviet war industries east of the Urals, so that the Soviet Union could strategically retreat while keeping its manufacturing capability intact. Frunze's political position adhered to that of the Troika (Grigory Zinoviev, Lev Kamenev, Stalin), but Stalin preferred to have a close, personal ally in charge (as opposed to Frunze, a "Zinovievite"). Frunze was urged by a group of Stalin's hand-picked doctors to have surgery to treat an old stomach ulcer, despite previous doctors' recommendations to avoid surgery and Frunze's own unwillingness. He died on the operating table of a massive overdose of chloroform, an anaesthetic. Voroshilov became a full member of the newly formed Politburo in 1926, remaining a member until 1960. Voroshilov was appointed People's Commissar (Minister) for Defence in 1934 and a Marshal of the Soviet Union in 1935. He played a central role in Stalin's Great Purge of the 1930s, denouncing many of his own military colleagues and subordinates when asked to do so by Stalin. He wrote personal letters to exiled former Soviet officers and diplomats such as commissar Mikhail Ostrovsky, asking them to return voluntarily to the Soviet Union and falsely reassuring them that they would not face retribution from authorities. Voroshilov personally signed 185 documented execution lists, fourth among the Soviet leadership after Molotov, Stalin and Kaganovich. Voroshilov did not personally share the paranoia towards upper-class elements of the officer corps. He openly declared that the saboteurs in the Red Army were few in number and tried, sometimes successfully, to save the lives of officers such as Lukin, who would later serve with distinction during World War Two, and Sokolov-Strakhov. 
He had no problem denouncing officers he disliked, such as Tukhachevsky. Despite taking part in the purging of many "mechanisers" (supporters of the wide use of tanks rather than cavalry) from the Red Army, Voroshilov became convinced that reliance on cavalry should be decreased and that more modern arms should receive higher priority. Marshal Budyonny tried to recruit him to his cause of protecting the status of cavalry in the Red Army, but Voroshilov openly declared his intention to do the opposite. After the 1936 summer manoeuvres he praised the army's combined arms warfare capabilities, as well as the high quality of the officers and their ability to take the initiative. However, his full report also pointed out problems in the Red Army as a whole, among them insufficient communication, ineffective staffs, poor cooperation between arms, and the rudimentary command structure in tank units and other modern arms. When the Great Purge ended, the high command undertook reforms intended to bring the theory of what the Red Army should be (for example, the deep operations doctrine) and its real state closer together. The politically appointed commanders of the post-purge Red Army saw that the army was not fit to carry out deep operations-style warfare. Voroshilov and Kulik were among the instigators of these reforms, which benefited the Red Army, although they themselves proved unable to put them into practice. One of these reforms was a reorganization of Red Army field units that inadvertently left the army's organization in a far less advanced state than it had been in 1936; the concept was conceived by Kulik but put into practice by Voroshilov. 
When territorial units were abolished, Voroshilov noted that among the reasons for disbanding them was their inability to train conscripts in the use of modern technology. He had openly proclaimed that the system was inadequate in an era in which imperialist powers (such as Germany) were expanding the capabilities of their armies. The territorial units had been very unpopular, not only with Voroshilov but with the Red Army leadership as a whole. They were hopelessly ineffective: territorial conscript Alexey Grigorovich Maslov noted that he never fired a shot during his training, and the units only underwent real training during the one month a year when experienced veterans returned. Between 1941 and 1944, during World War II, Voroshilov was a member of the State Defense Committee. Voroshilov commanded Soviet troops during the Winter War from November 1939 to January 1940, but, due to poor Soviet planning and Voroshilov's incompetence as a general, the Red Army suffered about 320,000 casualties compared to 70,000 Finnish casualties. When the leadership gathered at Stalin's dacha at Kuntsevo, Stalin shouted at Voroshilov for the losses; Voroshilov replied in kind, blaming the failure on Stalin for eliminating the Red Army's best generals in his purges. Voroshilov followed this retort by smashing a platter of roast suckling pig on the table. Nikita Khrushchev said it was the only time he ever witnessed such an outburst. Voroshilov was nonetheless made the scapegoat for the initial failures in Finland and was later replaced as Defense Commissar by Semyon Timoshenko. He was then made Deputy Premier responsible for cultural matters. Voroshilov initially argued that thousands of Polish army officers captured in September 1939 should be released, but he later signed the order for their execution in the Katyn massacre of 1940. 
After the German invasion of the Soviet Union in June 1941, Voroshilov became commander of the short-lived Northwestern Direction (July to August 1941), controlling several fronts. In September 1941 he commanded the Leningrad Front. Working alongside Andrei Zhdanov as German advances threatened to cut off Leningrad, he displayed considerable personal bravery in defiance of heavy shelling at Ivanovskoye; at one point, armed only with a pistol, he rallied retreating troops and personally led a counter-attack against German tanks. However, the style of counterattack he launched had long since been abandoned by strategists and drew mostly contempt from his military colleagues; he failed to prevent the Germans from surrounding Leningrad, and on 8 September 1941 he was dismissed from his post and replaced by the far abler Georgy Zhukov. Stalin had a political need for popular wartime leaders, however, and Voroshilov remained an important figurehead. Between 1945 and 1947, Voroshilov supervised the establishment of the communist regime in postwar Hungary. He attributed the poor showing of the Hungarian Communist Party in the October 1945 Budapest municipal elections to the number of Jews in leadership positions, arguing that it was "detrimental to the party that its leaders are not of Hungarian origin". In 1952, Voroshilov was appointed a member of the Presidium of the Communist Party of the Soviet Union. Stalin's death on 5 March 1953 prompted major changes in the Soviet leadership. On 15 March 1953, Voroshilov was approved as Chairman of the Presidium of the Supreme Soviet (i.e., the head of state), with Nikita Khrushchev as First Secretary of the Communist Party and Georgy Malenkov as Premier of the Soviet Union. Voroshilov, Malenkov, and Khrushchev brought about the 26 June 1953 arrest of Lavrenty Beria. 
After Khrushchev removed most of the old Stalinists like Molotov and Malenkov from the party, Voroshilov's career began to fade. On 7 May 1960, the Supreme Soviet of the Soviet Union granted Voroshilov's request for retirement and elected Leonid Brezhnev Chairman of the Presidium of the Supreme Soviet (the head of state). The Central Committee also relieved him of his duties as a member of the Party Presidium (as the Politburo had been called since 1952) on 16 July 1960. In October 1961, his political defeat was completed at the 22nd Party Congress, when he was excluded from election to the Central Committee. Following Khrushchev's fall from power, Soviet leader Brezhnev brought Voroshilov out of retirement into a figurehead political post. Voroshilov was re-elected to the Central Committee in 1966 and was awarded a second medal of Hero of the Soviet Union in 1968. He died in 1969 in Moscow and was buried in the Kremlin Wall Necropolis. Voroshilov was married to Ekaterina Voroshilova, born Golda Gorbman, who came from a Jewish Ukrainian family from Mardarovka. She changed her name when she converted to Orthodox Christianity in order to be allowed to marry Voroshilov. They met while both were exiled in Arkhangelsk, where Ekaterina had been sent in 1906. While both were serving on the Tsaritsyn Front in 1918, where Ekaterina was helping orphans, they adopted a four-year-old orphan boy whom they named Petya. They also adopted the children of Mikhail Frunze following his death in 1925. During Stalin's rule they lived in the Kremlin at the Horse Guards. Molotov described his personality in 1974: "Voroshilov was nice, but only at certain times. He always stood for the political line of the party, because he was from the working class, a common man, a very good orator. He was clean, yes. And he was personally devoted to Stalin. But his devotion was not very strong. 
However, in this period he advocated for Stalin very actively, supported him in everything, though not entirely sure about everything. It also affected their relationship. This is a very complex issue. This must be taken into account to understand why Stalin treated him critically and did not invite him to all our conversations. At least not the private ones. But he came by himself. Stalin frowned. Under Khrushchev, Voroshilov behaved badly." The KV (Kliment Voroshilov) series of tanks, used in World War II, was named after him. Two towns were also named after him: Voroshilovgrad in Ukraine (now changed back to the historical Luhansk) and Voroshilov in the Soviet Far East (now renamed Ussuriysk, after the Ussuri River), as was the General Staff Academy in Moscow. Stavropol was called Voroshilovsk from 1935 to 1943.
https://en.wikipedia.org/wiki?curid=17289
Kristi Yamaguchi Kristine Tsuya "Kristi" Yamaguchi (born July 12, 1971) is an American former figure skater. In ladies' singles, Yamaguchi is the 1992 Olympic champion, a two-time World champion (1991 and 1992), and the 1992 U.S. champion. As a pairs skater with Rudy Galindo, she is the 1988 World Junior champion and a two-time national champion (1989 and 1990). In December 2005, she was inducted into the U.S. Olympic Hall of Fame. In 2008, Yamaguchi became the celebrity champion in the sixth season of "Dancing with the Stars". Yamaguchi was born on July 12, 1971, in Hayward, California, to Jim Yamaguchi, a dentist, and Carole (née Doi), a medical secretary. Yamaguchi is Sansei (a third-generation descendant of Japanese emigrants). Her paternal grandparents and maternal great-grandparents emigrated to the United States from Japan, originating from Wakayama Prefecture and Saga Prefecture. Yamaguchi's grandparents were sent to an internment camp during World War II, where her mother was born. Her maternal grandfather, George A. Doi, was in the U.S. Army and fought in Germany and France during World War II while his family was interned at the Heart Mountain and Amache camps. Research done in 2010 by Harvard Professor Henry Louis Gates, Jr. for the PBS series "Faces of America" showed that Yamaguchi's heritage can be traced back to Wakayama and Saga prefectures in Japan and that her paternal grandfather, Tatsuichi Yamaguchi, emigrated to Hawaii in 1899. Yamaguchi and her siblings, Brett and Lori, grew up in Fremont, California. In order to accommodate her training schedule, Yamaguchi was home-schooled for her first two years of high school, but attended Mission San Jose High School for her junior and senior years, where she graduated. Yamaguchi began skating and taking ballet lessons as a child, as physical therapy for her club feet. With Rudy Galindo she won the junior title at the U.S. championships in 1986. 
Two years later, Yamaguchi won the singles and, with Galindo, the pairs titles at the 1988 World Junior Championships; Galindo had won the 1987 World Junior Championship in singles. In 1989 Yamaguchi and Galindo won the senior pairs title at the U.S. Championships. They won the title again in 1990. As a pairs team, Yamaguchi and Galindo were unusual in that they were both accomplished singles skaters, which allowed them to consistently perform difficult elements like side by side triple flip jumps, more difficult than the side by side jumps performed by current top international pairs teams. They also jumped and spun in opposite directions, Yamaguchi counter-clockwise and Galindo clockwise, which gave them an unusual look on the ice. In 1990, Yamaguchi decided to focus solely on singles. Galindo went on to have a successful singles career as well, winning the 1996 U.S. championships and the 1996 World bronze medal. Yamaguchi won her first major international gold medal in figure skating at the 1990 Goodwill Games. In 1991, Yamaguchi moved to Edmonton, Alberta, to train with coach Christy Ness. There, she took psychology courses at the University of Alberta. The same year Yamaguchi placed second to Tonya Harding at the U.S. championships, her third consecutive silver medal at Nationals. The following month in Munich, Germany, Yamaguchi won the 1991 World Championships. That year, the American ladies team, consisting of Yamaguchi, Harding and Nancy Kerrigan, became the only national ladies team to have its members sweep the Worlds podium. In 1992, Yamaguchi won her first U.S. title and earned a spot at the 1992 Winter Olympics in Albertville, France. Joining her on the U.S. team were again Kerrigan and Harding. While competitors Harding and Japan's Midori Ito were consistently landing the difficult triple axel jump in competition, Yamaguchi instead focused on her artistry and her triple-triple combinations in hopes of becoming a more well-rounded skater. 
Both Harding and Ito fell on their triple axels at the Olympics (though Ito successfully landed the jump later on in her long program after missing the first time), allowing Yamaguchi to win the gold, despite errors in her free program, including putting a hand to the ice on a triple loop and a double salchow instead of a planned triple. She later explained her mindset during the long program: "You just do your best and forget the rest." Yamaguchi went on to successfully defend her World title that same year. Yamaguchi turned professional after the 1991–92 competitive season. She toured for many years with Stars on Ice and also participated in the pro competition circuit. In 1996, Yamaguchi established the Always Dream Foundation for children. The goal of the foundation is to provide funding for after school programs, computers, back-to-school clothes for underprivileged children, and summer camps for kids with disabilities. Commenting in 2009, she explained her inspiration for the project: "I was inspired by the Make-A-Wish foundation to make a positive difference in children's lives. We've been helping out various children's organizations, which is rewarding. Our latest project is a playground designed so that kids of all abilities can play side by side. That's our focus now." Currently her Always Dream Foundation is focused on early childhood literacy, with the mission statement "Empowering Children to reach their dreams through education and inspiration." ADF has partnered with "Raising a Reader" to launch a reading program in schools throughout California and eventually nationwide. The foundation is also providing a language arts program, "Footsteps to Brilliance", to kindergarten and first grade. Both programs integrate innovative technology into the classrooms. Yamaguchi is the author of "Always Dream", "Pure Gold", and "Figure Skating for Dummies". 
In 2011, she published an award-winning children's book, "Dream Big, Little Pig", illustrated by Tim Bowers, which was #2 on the New York Times bestseller list and received the Gelett Burgess Children's Book Award; a portion of the proceeds went to the Always Dream Foundation to support early childhood literacy programs. A sequel, "It's a Big World Little Pig", was scheduled to be published March 6, 2012. Yamaguchi made a fitness video with the California Raisins in 1993 called "Hip to be Fit: The California Raisins and Kristi Yamaguchi". She has appeared as herself on "Everybody Loves Raymond" and in "", "Frosted Pink", and the Disney Channel original movie "Go Figure". Yamaguchi has also performed in numerous television skating specials, including the Disney special "Aladdin on Ice", in which she played Princess Jasmine. In 2006 Yamaguchi was the host of the WE tv series "Skating's Next Star", created and produced by Major League Figure Skating. Yamaguchi was a local commentator on figure skating for San Jose TV station KNTV (NBC 11) during the 2006 Winter Olympics. On May 20, 2008, Yamaguchi became the champion of the sixth season of ABC's reality program "Dancing with the Stars", in which she was paired with Mark Ballas, defeating finalist couple Jason Taylor and Edyta Śliwińska. Yamaguchi made a special appearance in the finale of the sixteenth season, where she danced alongside Dorothy Hamill. Yamaguchi received the Inspiration Award at the 2008 Asian Excellence Awards. Two days after her "Dancing with the Stars" win, she received the 2008 Sonja Henie Award from the Professional Skaters Association. Among her other awards are the Thurman Munson Award, the Women's Sports Foundation Flo Hyman Award, the Heisman Humanitarian Award, and the Great Sports Legends Award. She is also a member of the U.S. Olympic Committee Olympic Hall of Fame, World Skating Hall of Fame, and the US Figure Skating Hall of Fame. 
In 2010, Yamaguchi worked as a daily NBC Olympics skating broadcast analyst on NBC's Universal Sports Network. During the 2010 Winter Olympics, she was also a special correspondent for the Today Show. In early 2012, Yamaguchi created a women's activewear line, focused on function, comfort, and style, intended to empower women to look and feel good. The lifestyle brand is called Tsu.ya by Kristi Yamaguchi. Tsu.ya donates a portion of its proceeds to support early childhood literacy through Yamaguchi's Always Dream Foundation. In November 2017, Yamaguchi returned in week eight of the 25th season of "Dancing with the Stars" to perform a trio jazz routine with Lindsey Stirling and her professional partner Mark Ballas. On July 8, 2000, she married Bret Hedican, a professional hockey player she met at the 1992 Winter Olympics when he played for Team USA. After their wedding, Yamaguchi and Hedican resided in Raleigh, North Carolina, where Hedican played for the Carolina Hurricanes NHL team and won his only Stanley Cup in 2006. He played for one year with the Anaheim Ducks, and they now live in Alamo, California, in Northern California, with their two daughters, Keara Kiyomi (born 2003) and Emma Yoshiko (born 2005).
https://en.wikipedia.org/wiki?curid=17291
Krzysztof Penderecki Krzysztof Eugeniusz Penderecki (; 23 November 1933 – 29 March 2020) was a Polish composer and conductor. Among his best known works are "Threnody to the Victims of Hiroshima", Symphony No. 3, his "St. Luke Passion", "Polish Requiem", "Anaklasis" and "Utrenja". Penderecki composed four operas, eight symphonies and other orchestral pieces, a variety of instrumental concertos, choral settings of mainly religious texts, as well as chamber and instrumental works. Born in Dębica to a lawyer, Penderecki studied music at Jagiellonian University and the Academy of Music in Kraków. After graduating from the Academy, he became a teacher there and began his career as a composer in 1959 during the Warsaw Autumn festival. His "Threnody to the Victims of Hiroshima" for string orchestra and the choral work "St. Luke Passion" have received popular acclaim. His first opera, "The Devils of Loudun", was not immediately successful. Beginning in the mid-1970s, Penderecki's composing style changed, with his first violin concerto focusing on the semitone and the tritone. His choral work "Polish Requiem" was written in the 1980s and expanded in 1993 and 2005. Penderecki won many prestigious awards, including the Prix Italia in 1967 and 1968; four Grammy Awards in 1987, 1998 (twice), and 2017; the Wolf Prize in Arts in 1987; and the University of Louisville Grawemeyer Award for Music Composition in 1992. In 2012, Sean Michaels of "The Guardian" called him 'arguably Poland's greatest living composer'. Penderecki was born on 23 November 1933 in Dębica, the son of Zofia and Tadeusz Penderecki, a lawyer. Penderecki's grandfather, Robert Berger, was a highly talented painter and director of the local bank at the time of Penderecki's birth; Robert's father Johann, a German Protestant, moved to Dębica from Breslau (now Wrocław) in the mid-19th century. Out of love for his wife, he subsequently converted to Catholicism. 
Penderecki's grandmother Stefania was an Armenian from Stanislau in Austria-Hungary (present-day Ivano-Frankivsk in Western Ukraine). Penderecki used to go to the Armenian Church in Kraków with her. He was the youngest of three siblings; his sister, Barbara, was married to a mining engineer, and his older brother, Janusz, was studying law and medicine at the time of his birth. Tadeusz was a violinist and also played piano. In 1939, the Second World War broke out, and Penderecki's family moved out of their apartment as the Ministry of Food was to operate there. After the war, Penderecki began attending grammar school in 1946. He began studying the violin under Stanisław Darłak, Dębica's military bandmaster who organized an orchestra for the local music society after the war. Upon graduating from grammar school, Penderecki moved to Kraków in 1951, where he attended Jagiellonian University. He studied violin with Stanisław Tawroszewicz and music theory with Franciszek Skołyszewski. In 1954, Penderecki entered the Academy of Music in Kraków and, having finished his studies on violin after his first year, focused entirely on composition. Penderecki's main teacher there was Artur Malawski, a composer known for his choral and orchestral works, as well as chamber music and songs. After Malawski's death in 1957, Penderecki took further lessons with Stanisław Wiechowicz, a composer primarily known for his choral works. At the time, the 1956 overthrow of Stalinism in Poland lifted strict cultural censorship and opened the door to a wave of creativity. On graduating from the Academy of Music in Kraków in 1958, Penderecki took up a teaching post at the Academy. His early works show the influence of Anton Webern and Pierre Boulez (Penderecki was also influenced by Igor Stravinsky). 
Penderecki's international recognition began in 1959 at the Warsaw Autumn with the premieres of the works "Strophen", "Psalms of David", and "Emanations", but the piece that truly brought him to international attention was "Threnody to the Victims of Hiroshima", written in 1960 for 52 string instruments. In it, he makes use of extended instrumental techniques (for example, playing behind the bridge, bowing on the tailpiece). There are many novel textures in the work, which makes extensive use of tone clusters. He originally titled the work "8' 37"", but decided to dedicate it to the victims of Hiroshima. "Fluorescences" followed a year later; it increases the orchestral density with more wind and brass, and an enormous percussion section of 32 instruments for six players, including a Mexican güiro, typewriters, gongs and other unusual instruments. The piece was composed for the Donaueschingen Festival of contemporary music of 1962, and its performance was regarded as provocative and controversial. Even the score appeared revolutionary: the form of graphic notation that Penderecki had developed rejected the familiar look of notes on a staff, instead representing music as morphing sounds. His intentions at this stage were quite Cagean: 'All I'm interested in is liberating sound beyond all tradition'. Another noteworthy piece of this period is the "Canon" for 52 strings and 2 tapes. It is in a similar style to other pieces of the late 1950s in its use of sound masses dramatically juxtaposed with traditional means, although the use of standard techniques or idioms is often disguised or distorted. 
The "Canon" evokes the choral tradition: the composer has the players sing, with the performance indication "bocca chiusa" (with closed mouth), at various points, while also using the 52 'voices' of the string orchestra to play massed glissandi and harmonics, which one of the tapes records for playback later in the piece. It was performed at the Warsaw Autumn Festival in 1962 and caused a riot, although curiously the rioters were young music students rather than older concertgoers. The large-scale "St. Luke Passion" (1963–66) brought Penderecki further popular acclaim, not least because it was devoutly religious, yet written in an avant-garde musical language, and composed within Communist Eastern Europe. Western audiences saw it as a snub to the Soviet authorities. Various musical styles can be seen in the piece. The experimental textures, such as were employed in the "Threnody", are balanced by the work's Baroque form and the occasional use of more traditional harmonic and melodic writing. Penderecki makes use of serialism in this piece, and one of the tone rows he uses includes the BACH motif, which acts as a bridge between the conventional and more experimental elements. The Stabat Mater section toward the end of the piece concludes on a simple chord of D major, and this gesture is repeated at the very end of the work, which finishes on a triumphant E major chord. These are the only tonal harmonies in the work, and both come as a surprise to the listener; Penderecki's use of tonal triads such as these remains a controversial aspect of the work. Penderecki continued to write pieces that explored the sacred in music. In the early 1970s he wrote a Dies irae, a Magnificat, and Canticum Canticorum Salomonis (Song of Songs) for chorus and orchestra. 
Penderecki's preoccupation with sound culminated in "De Natura Sonoris I", which frequently calls upon the orchestra to use non-standard playing techniques to produce original sounds and colours. A sequel, "De Natura Sonoris II", was composed in 1971: with its more limited orchestra, it incorporates more elements of post-Romanticism than its predecessor. This foreshadowed Penderecki's renunciation of the avant-garde in the mid-1970s, although both pieces feature dramatic glissandos, dense clusters, use of harmonics, and unusual instruments (the musical saw features in the second piece). In 1968 Penderecki received the State Prize, 1st class. During the jubilee of the People's Republic of Poland he received the Commander's Cross (1974) and the Knight's Cross of the Order of Polonia Restituta (1964). Towards the end of the decade, Penderecki received a commission to write for the twenty-fifth anniversary of the founding of the United Nations. The result was "Kosmogonia", a twenty-minute piece for three soloists (soprano, tenor, bass), mixed choir and orchestra. The Los Angeles Philharmonic premiered the piece on 24 October 1970 with Zubin Mehta as conductor and Robert Nagy as tenor. The piece uses texts from the ancient writers Sophocles and Ovid, in addition to contemporary statements from Soviet and American astronauts, to musically explore the idea of the cosmos. In the mid-1970s, while he was a professor at the Yale School of Music, Penderecki's style began to change. The Violin Concerto No. 1 largely leaves behind the dense tone clusters with which he had been associated, and instead focuses on two melodic intervals: the semitone and the tritone. This direction continued with the Symphony No. 2, "Christmas" (1980), which is harmonically and melodically quite straightforward. It makes frequent use of the tune of the Christmas carol "Silent Night". 
Penderecki explained this shift by stating that he had come to feel that the experimentation of the avant-garde had gone too far from the expressive, non-formal qualities of Western music: 'The avant-garde gave one an illusion of universalism. The musical world of Stockhausen, Nono, Boulez and Cage was for us, the young – hemmed in by the aesthetics of socialist realism, then the official canon in our country – a liberation...I was quick to realise however, that this novelty, this experimentation, and formal speculation, is more destructive than constructive; I realised the Utopian quality of its Promethean tone'. Penderecki concluded that he was 'saved from the avant-garde snare of formalism by a return to tradition'. Penderecki wrote relatively little chamber music, but his compositions for smaller ensembles range in date from the start of his career to the end, reflecting the changes his style of writing underwent. In 1975 the Lyric Opera of Chicago asked him to write a work to commemorate the US Bicentennial in 1976; this became the opera "Paradise Lost". Delays to the project, however, meant it did not see its premiere until 1978. The music continued to illustrate Penderecki's move away from avant-garde techniques: it is tonal music, and the composer explained: 'This is not music by the angry young man I used to be'. In 1980, Penderecki was commissioned by Solidarity to compose a piece to accompany the unveiling of a statue at the Gdańsk shipyards to commemorate those killed in anti-government riots there in 1970. Penderecki responded with "Lacrimosa", which he later expanded into one of the best-known works of his later period, the "Polish Requiem" (1980–84, 1993, 2005). Later, he tended towards more traditionally conceived tonal constructs, as heard in works such as the Cello Concerto No. 
2 and the Credo, which received the Grammy Award for best choral performance for the world-premiere recording made by the Oregon Bach Festival, which had commissioned the piece. The same year, Penderecki was awarded the Prince of Asturias Prize in Spain, one of the highest honours given in Spain to individuals, entities or organizations from around the world who make notable achievements in the sciences, arts, humanities, or public affairs. Invited by Walter Fink, he was the eleventh composer featured in the annual Komponistenporträt of the Rheingau Musik Festival in 2001. He conducted the Credo on the occasion of the 70th birthday of Helmuth Rilling, on 29 May 2003. Penderecki received an honorary doctorate from Seoul National University, Korea, in 2005 and from the University of Münster, Germany, in 2006. His notable students include Chester Biscardi and Walter Mays. In celebration of his 75th birthday, he conducted three of his works at the Rheingau Musik Festival in 2008, among them the Ciaccona from the "Polish Requiem". In 2010, he worked on an opera based on Racine's Phèdre, planned for 2014 but never realized, and expressed his wish to write a ninth symphony. In 2014, he was engaged in the creation of a choral work to coincide with the Armenian Genocide centennial. In 2018, he conducted the Credo in Kyiv at the 29th Kyiv Music Fest, marking the centenary of Polish independence. Penderecki had three children: a daughter, Beata, with pianist Barbara Penderecka ("née" Graca; married 1954, later divorced), and a son, Łukasz (b. 1966), and daughter, Dominika (b. 1971), with his second wife, Elżbieta Penderecka ("née" Solecka), whom he married on 19 December 1965. He lived in the Kraków suburb of Wola Justowska. On 29 March 2020, Penderecki died at his home in Kraków, Poland, after a long illness. In 1979, a bronze bust by artist Marian Konieczny honouring Penderecki was unveiled in The Gallery of Composers' Portraits at the Pomeranian Philharmonic in Bydgoszcz. 
His monument is also located on the Celebrity Alley at the Scout Square ("Skwer Harcerski") in Kielce. The main-belt asteroid 21059 Penderecki is named in his honor. Radiohead guitarist Jonny Greenwood is noted for his admiration of the Polish composer's work. He visited Penderecki in 2012 and wrote a work for strings, "48 Responses to Polymorphia", which was conducted by Penderecki himself in various performances throughout Europe. Penderecki's compositions include operas, symphonies and choral works, as well as chamber and instrumental music. Some of Penderecki's music has been adapted for film soundtracks. "The Exorcist" (1973) features his String Quartet and "Kanon For Orchestra and Tape", along with fragments of the Cello Concerto and "The Devils of Loudun". Writing about "The Exorcist", the film critic for "The New Republic" wrote that 'even the music is faultless, most of it by Krzysztof Penderecki, who at last is where he belongs'. Stanley Kubrick's "The Shining" (1980) features six pieces of Penderecki's music: "Utrenja II: Ewangelia", "Utrenja II: Kanon Paschy", "The Awakening of Jacob", "De Natura Sonoris No. 1", "De Natura Sonoris No. 2" and "Polymorphia". David Lynch has used Penderecki's music in the soundtracks of the films "Wild at Heart" (1990), "Inland Empire" (2006), and the TV series "Twin Peaks" (2017). In the film "Fearless" (1993) by Peter Weir, "Polymorphia" was again used for an intense plane crash scene, seen from the point of view of the passenger played by Jeff Bridges. Penderecki's "Threnody to the Victims of Hiroshima" was also used during one of the final sequences of the film "Children of Men" (2006). Penderecki composed music for Andrzej Wajda's 2007 Academy Award-nominated film "Katyń", while Martin Scorsese's "Shutter Island" (2010) featured his Symphony No. 3 and "Fluorescences". 
Some of Penderecki's oeuvre also inspired music by Jonny Greenwood of Radiohead, which later appeared in Greenwood's score for "There Will Be Blood", a 2007 Paul Thomas Anderson film. Penderecki was an honorary doctor and honorary professor of several universities: Georgetown University, Washington, D.C., University of Glasgow, Moscow Tchaikovsky Conservatory, Fryderyk Chopin Music Academy in Warsaw, Seoul National University, the Universities of Rochester, Bordeaux, Leuven, Belgrade, Madrid and Poznan, St. Olaf College (Northfield, Minnesota), Duquesne University, Pontifical Catholic University of Peru, University of Pittsburgh, University of St. Petersburg, Beijing Conservatory, Yale University and the Westfälische Wilhelms-Universität in Münster (Westphalia) (2006, Faculty of Arts). He was an honorary member of the following academies and music societies: the Royal Academy of Music (London), Accademia Nazionale di Santa Cecilia (Rome), Royal Swedish Academy of Music (Stockholm), Academy of Arts (London), Academia Nacional de Bellas Artes (Buenos Aires), the Society of Friends of Music in Vienna, the Academy of Arts in Berlin, the Académie Internationale de Philosophie et de l'Art in Bern, and the Académie Nationale des Sciences, Belles-lettres et Arts in Bordeaux. In 2009, he became an honorary citizen of the city of Bydgoszcz.
https://en.wikipedia.org/wiki?curid=17292
Krugerrand The Krugerrand (; ) is a South African coin, first minted on 3 July 1967 to help market South African gold and produced by Rand Refinery and the South African Mint. The name is a compound of "Paul Kruger", the former President of the South African Republic (depicted on the obverse), and "rand", the South African unit of currency. On the reverse side of the Krugerrand is a springbok, South Africa's national animal. By 1980 the Krugerrand accounted for more than 90% of the global gold coin market and was the number one choice for investors buying gold. However, during the 1970s and 1980s, Krugerrands fell out of favor as some Western countries forbade import of the Krugerrand because of its association with the apartheid government of South Africa. Although gold Krugerrand coins have no face value, they are considered legal tender in South Africa by the South African Reserve Bank Act (SARBA) of 1989. In 2017, the Rand Refinery began minting silver versions, which have the same overall design as the gold coin. The Krugerrand was introduced in 1967 as a vehicle for private ownership of gold. It was minted in a copper-gold alloy more durable than pure gold. By 1980 the Krugerrand accounted for 90% of the global gold coin market. That year, South Africa introduced three smaller coins with a half troy ounce, quarter ounce, and tenth ounce of gold. Economic sanctions against South Africa for its policy of apartheid made the Krugerrand an illegal import in many Western countries during the 1970s and 1980s. The United States, which had historically been the largest market for the coin, banned imports in 1985; the previous year, over US$600 million of Krugerrands had been marketed in that country. Most sanctions ended in 1991, after the South African government took steps toward ending its apartheid policy. Production levels of Krugerrands have varied significantly since the coin's introduction. From 1967 to 1969, around 40,000 coins were minted each year. 
In 1970, the number rose to over 200,000 coins. More than one million coins were produced in 1974, and in 1978 a total of six million were produced. The production dropped to 23,277 coins in 1998 and then increased again, although not reaching previous levels. Over 50 million ounces of gold Krugerrand coins have been sold since production started in 1967. During the bull market in gold of the 1970s, the gold Krugerrand quickly became the primary choice for gold investors worldwide. Between 1974 and 1985, it is estimated that 22 million gold Krugerrand coins were imported into the United States alone. This huge success of the Krugerrand encouraged other gold-producing countries to mint and issue gold bullion coins of their own, including the Canadian Gold Maple Leaf in 1979, the Chinese Gold Panda in 1982, the American Gold Eagle in 1986, the Australian Nugget in 1987, and the British Britannia coin in 1987. Private mints have also attempted minting gold and silver bullion "rounds" (the term "coin" denotes legal currency) in the style of the Krugerrand. The rounds often depict Paul Kruger and a springbok antelope, some even blatantly copying the design of the Krugerrands themselves, though the inscriptions are altered. These bullion rounds are not offered by the South African Mint or the Government of South Africa, and are therefore not official, have no legal tender value, and cannot technically be considered coins. The Krugerrand is 32.77 mm in diameter and 2.84 mm thick, with a gross weight of 33.93 g (1.0909 troy ounces). It is minted from gold alloy that is 91.67% pure (22 karats), so the coin contains one troy ounce (31.10 g) of gold. The remaining 8.33% of the coin's weight (2.83 g) is copper (an alloy known historically as crown gold, which has long been used for British gold sovereigns), which gives the Krugerrand a more orange appearance than silver-alloyed gold coins. Copper alloy coins are harder and more durable, so they can resist scratches and dents. 
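The alloy arithmetic above can be checked directly. A minimal sketch in Python (the gram-per-troy-ounce conversion is the standard 31.1035; everything else follows from the 22-karat figure in the text):

```python
# Krugerrand alloy arithmetic: a one-ounce bullion coin contains exactly
# one troy ounce of fine gold, alloyed to 22 karats (91.67% purity).
TROY_OUNCE_G = 31.1035   # grams per troy ounce
PURITY = 22 / 24         # 22 karat = 91.67% fine

gross_weight = TROY_OUNCE_G / PURITY          # total coin weight in grams
copper_weight = gross_weight - TROY_OUNCE_G   # balance of the alloy is copper

print(f"gross weight: {gross_weight:.2f} g")  # 33.93 g, matching the stated spec
print(f"copper:       {copper_weight:.2f} g") # 2.83 g
```

Dividing the fine-gold weight by the purity, rather than multiplying, is the key step: the purity describes the gold's share of the finished coin, not the reverse.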
The coin is so named because the obverse, designed by Otto Schultz, bears the face of Boer statesman Paul Kruger, four-term president of the old South African Republic. The reverse depicts a springbok, the national animal of South Africa. The image was designed by Coert Steynberg and was previously used on the reverse of the earlier South African five-shilling coin. The name "South Africa" and the gold content are inscribed in both Afrikaans and English. Since September 1980, Krugerrands have also been available in three additional sizes, containing 1/2 oz, 1/4 oz and 1/10 oz of gold. The word "Krugerrand" is a registered trademark owned by Rand Refinery Limited, of Germiston. The South African Mint Company produces limited edition proof Krugerrands intended to be collectors' items rather than bullion investments. These coins are priced above bullion value, although non-proof Krugerrands also carry a premium above gold bullion value. They can be distinguished from the bullion Krugerrands by the number of serrations on the edge of the coin: proof coins have 220 edge serrations, while bullion coins have 160. 2017 marked the 50th year of issuance (1967–2017) and, to commemorate the anniversary, the South African Mint produced "Premium Uncirculated" versions in gold (.916 or 22 carat) and, for the first time, also in platinum (.999 fine) and silver (.999 fine). The issue limit for these commemorative platinum, gold and silver coins was 2,017 for platinum, 5,000 for gold and 1,000,000 for silver. The commemorative issues are distinguished by a '50' privy mark above the springbok design on the reverse for the platinum and silver issues, and to the right of the springbok design on the gold issues. In addition to the "Premium Uncirculated" issue, 15,000 silver "Proof" Krugerrands were also issued, as well as "Proof" Krugerrands in gold and platinum. 
The South African Reserve Bank restricts the exportation of Krugerrands by a South African resident to a non-resident to a maximum of R30,000 (about US$2,100 or €1,870 as of June 2018). Visitors to South Africa can export up to 15 coins by declaring them to the South African Revenue Service. In the 21st century, Krugerrands have received media attention in the United States after anonymous donors left the valuable coins in the Salvation Army's annual "Christmas Kettle" collection kettles in various cities around the country.
https://en.wikipedia.org/wiki?curid=17293
Karl Böttiger Karl August Böttiger (8 June 1760 – 17 November 1835) was a German archaeologist and classicist, and a prominent member of the literary and artistic circles in Weimar and Jena. Böttiger was born in Reichenbach, in the kingdom of Saxony, and educated at Schulpforta and Leipzig. Under the influence of Johann Gottfried Herder, he served as headmaster of the gymnasium and consistorial councillor in Weimar from 1790 to 1804. For the remaining 31 years of his life, he resided at Dresden as director of the Museum of Antiquities, and was active as a journalist and public lecturer. As a schoolmaster, he had published a considerable number of pedagogic and philological programs. In 1810, Böttiger and the Swiss painter Heinrich Meyer released a monograph on the painting in the Vatican known as the "Aldobrandini marriage". His archaeological works, mainly produced at Dresden, fall into three groups. The first is private antiquities, best represented by his "Sabina, or morning scenes in the dressing room of a wealthy Roman lady" (1803, 2 vols.; 2nd ed., 1806), which was translated into French and served as a model for Wilhelm Adolf Becker's "Gallus" and "Charicles". The second is the Greek theatre, in which Böttiger had been interested since his time as a drama critic in Weimar; his unfavorable review of August Wilhelm Schlegel's "Ion" was withdrawn at the request of Goethe. It was mainly as a schoolmaster in Weimar that he wrote his papers on the distribution of the parts, on the masks and dresses, and on the machinery of the ancient stage, as well as a dissertation on the masks of the Furies in 1801. The third is the domain of ancient art and mythology; his work in this area was popular but, according to some 20th-century critics, superficial. His accomplishments in Dresden brought him to the notice of the court of the Kingdom of Saxony, and he served as Aulic councillor to the kings of Saxony. 
Böttiger supplied the descriptive letter-press to the 1797 German edition of Tischbein's reproductions from William Hamilton's second collection of Greek vases, and thus introduced the study of Greek vase-painting into Germany. He published lectures on the history of ancient sculpture in 1806 and of painting in 1811, and edited the three volumes of an archaeological periodical called "Amalthea" from 1820 to 1825, which included contributions from the most eminent classical archaeologists of the day. In 1832 Böttiger was elected a member of the French Institute. He died in Dresden. His pupil, the German classicist Karl Julius Sillig, edited many of Böttiger's works after his death. Two medals were commissioned in his honour: one on the occasion of his 70th birthday in 1830, and the other after his death. His son, Karl Wilhelm Böttiger (15 August 1790 – 26 November 1862; not to be confused with the Swedish writer Carl Wilhelm Böttiger), was a historian and biographer of his father. He wrote "Karl August Böttiger. Eine biographische Skizze", a biographical sketch (Leipzig, 1837). From his father's papers, he edited the posthumous work "Litterarische Zustände und Zeitgenossen" (Literary circumstances and contemporaries, 2 vols., Leipzig, 1838). Karl Wilhelm Böttiger contributed the history of Saxony to Heeren and Ukert's "Europäische Staatengeschichte", and his "Allgemeine Geschichte für Schule und Haus" (Universal history for school and home) and "Deutsche Geschichte für Schule und Haus" (German history for school and home) passed through many editions. From 1821 until his death he was professor of history in Erlangen.
https://en.wikipedia.org/wiki?curid=17296
Karl Ferdinand Braun Karl Ferdinand Braun (6 June 1850 – 20 April 1918) was a German electrical engineer, inventor, physicist and Nobel laureate in physics. Braun contributed significantly to the development of radio and television technology: he shared the 1909 Nobel Prize in Physics with Guglielmo Marconi "for their contributions to the development of wireless telegraphy". Braun was born in Fulda, Germany, was educated at the University of Marburg, and received a Ph.D. from the University of Berlin in 1872. In 1874, he discovered that a point-contact semiconductor rectifies alternating current. He became director of the Physical Institute and professor of physics at the University of Strassburg in 1895. In 1897, he built the first cathode-ray tube (CRT) and the first cathode-ray tube oscilloscope. The CRT became the cornerstone of fully electronic television. In the early 21st century, flat-screen technologies such as liquid-crystal display (LCD), light-emitting diode (LED) and plasma displays began to replace CRTs in both television sets and computer monitors. The CRT is still called the "Braun tube" in German-speaking countries ("Braunsche Röhre") and other countries such as Korea (브라운관: "Buraun-kwan") and Japan (ブラウン管: "Buraun-kan"). During the development of radio, he also worked on wireless telegraphy. In 1897, Braun joined the line of wireless pioneers. His major contributions were the introduction of a closed tuned circuit in the generating part of the transmitter, its separation from the radiating part (the antenna) by means of inductive coupling, and later on the usage of crystals for receiving purposes. Around 1898, he invented a crystal detector. Wireless telegraphy claimed Dr. Braun's full attention in 1898, and for many years after that he applied himself almost exclusively to the task of solving its problems. Dr. 
Braun had written extensively on wireless subjects and was well known through his many contributions to the Electrician and other scientific journals. In 1899, he applied for the patent "Wireless electro transmission of signals over surfaces". Also in 1899, he is said to have applied for a patent on "Electro telegraphy by means of condensers and induction coils". Pioneers working on wireless devices eventually came to a limit of distance they could cover. Connecting the antenna directly to the spark gap produced only a heavily damped pulse train; there were only a few cycles before the oscillations ceased. Braun's circuit afforded a much longer sustained oscillation because the energy encountered fewer losses as it swung between coil and Leyden jars. By means of inductive antenna coupling, the radiator was better matched to the generator. The resulting stronger, narrower-bandwidth signals bridged a much longer distance. Braun invented the phased array antenna in 1905. He described in his Nobel Prize lecture how he carefully arranged three antennas to transmit a directional signal. This invention led to the development of radar, smart antennas, and MIMO. Marconi used Braun's patents, including his British patent on tuning, in many of his own tuning patents, and would later admit to Braun himself that he had "borrowed" portions of Braun's work. In 1909, Braun shared the Nobel Prize for physics with Marconi for "contributions to the development of wireless telegraphy." The prize awarded to Braun in 1909 depicts this design. Braun experimented at first at the University of Strasbourg. Before long he bridged a distance of 42 km to the city of Mutzig. In spring 1899, Braun, accompanied by his colleagues Cantor and Zenneck, went to Cuxhaven to continue their experiments at the North Sea. On 24 September 1900, radio telegraphy signals were exchanged regularly with the island of Heligoland over a distance of 62 km. 
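Braun's closed tuned circuit is, in modern terms, an LC resonator: the frequency of the energy swinging between coil and Leyden jars follows the Thomson formula f = 1/(2π√(LC)). A minimal sketch of that calculation (the component values are illustrative assumptions, not measurements of Braun's apparatus):

```python
import math

def resonant_frequency(inductance_h: float, capacitance_f: float) -> float:
    """Thomson formula for an ideal LC circuit: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Illustrative values: a 10 uH coil with a 1 nF Leyden-jar-style capacitor.
f = resonant_frequency(10e-6, 1e-9)
print(f"{f / 1e6:.2f} MHz")  # 1.59 MHz
```

A low-loss resonator of this kind rings for many more cycles than a spark gap discharging straight into the antenna, which is why Braun's arrangement produced the longer sustained oscillations described above.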
Light vessels in the river Elbe and a coast station at Cuxhaven commenced a regular radio telegraph service. Braun went to the United States at the beginning of World War I (before the U.S. had entered the war) to help defend the German wireless station at Sayville, New York, against attacks by the British-controlled Marconi Corporation. After the US entered the war, Braun was detained, but could move freely within Brooklyn, New York. Braun died in his house in Brooklyn, before the war ended in 1918. In 1987 the Society for Information Display created the Karl Ferdinand Braun Prize, awarded for an outstanding technical achievement in display technology.
https://en.wikipedia.org/wiki?curid=17297
Khunjerab Pass Khunjerab Pass (; ) is a high mountain pass in the Karakoram Mountains, in a strategic position on the northern border of Pakistan (Gilgit–Baltistan's Hunza and Nagar Districts) and on the southwest border of China (Xinjiang). Its elevation is 4,693 metres (15,397 ft). Its name is derived from two words of the local Wakhi language: "khun" means blood and "jerab" means a creek coming from a spring or waterfall. The Khunjerab Pass is the highest paved international border crossing in the world and the highest point on the Karakoram Highway. The roadway across the pass was completed in 1982 and has superseded the unpaved Mintaka and Kilik Passes as the primary passage across the Karakoram Range. The choice of Khunjerab Pass for the Karakoram Highway was made in 1966: China, citing the fact that the Mintaka Pass would be more susceptible to air strikes, recommended the steeper Khunjerab Pass instead. On the Pakistani-administered side, the pass is reached via the National Park station and checkpoint in Dih and the customs and immigration post in Sost, on the road from Gilgit and Islamabad. On the Chinese side, the pass is the southwest terminus of China National Highway 314 (G314), which continues to Tashkurgan, Kashgar and Urumqi. The Chinese port of entry is located along the road in Tashkurgan County. The relatively flat pass is often snow-covered during the winter season and as a consequence is generally closed for heavy vehicles from November 30 to May 1 and for all vehicles from December 30 to April 1. The reconstructed Karakoram Highway passes through Khunjerab Pass. Since June 1, 2006, there has been a daily bus service across the boundary from Gilgit to Kashgar, Xinjiang. This is one of the international borders where left-hand traffic (Pakistan-administered Gilgit-Baltistan) changes to right-hand traffic (China) and vice versa. The Pakistani side features the highest ATM in the world, administered by the National Bank of Pakistan and 1LINK. 
In 2007, consultants were hired to evaluate the construction of a railway through this pass to connect China with transport in Pakistani-administered Gilgit-Baltistan. A feasibility study started in November 2009 for a line connecting Havelian in Pakistan with Kashgar in Xinjiang. However, no progress has been made since, and the project is not part of the current CPEC plan.
https://en.wikipedia.org/wiki?curid=17299
Kazimir Malevich Kazimir Severinovich Malevich (February 23 [O.S. February 11], 1879 – May 15, 1935) was a Russian avant-garde artist and art theorist, whose pioneering work and writing had a profound influence on the development of non-objective, or abstract, art in the 20th century. Born in Kiev to an ethnic Polish family, Malevich developed the concept of Suprematism, which sought a form of expression that moved as far as possible from the world of natural forms (objectivity) and subject matter in order to access "the supremacy of pure feeling" and spirituality. Malevich is considered to be part of the Ukrainian avant-garde (together with Alexander Archipenko, Vladimir Tatlin, Sonia Delaunay, Aleksandra Ekster, and David Burliuk), which was shaped by Ukrainian-born artists who worked first in Ukraine and later over a geographical span between Europe and America. Early on, Malevich worked in a variety of styles, quickly assimilating the movements of Impressionism, Symbolism and Fauvism, and, after visiting Paris in 1912, Cubism. Gradually simplifying his style, he developed an approach with key works consisting of pure geometric forms and their relationships to one another, set against minimal grounds. His "Black Square" (1915), a black square on white, represented the most radically abstract painting known to have been created so far and drew "an uncrossable line (…) between old art and new art"; "Suprematist Composition: White on White" (1918), a barely differentiated off-white square superimposed on an off-white ground, would take his ideal of pure abstraction to its logical conclusion. In addition to his paintings, Malevich laid down his theories in writing, such as "From Cubism and Futurism to Suprematism" (1915) and "The Non-Objective World: The Manifesto of Suprematism" (1926). Malevich's trajectory in many ways mirrored the tumult of the decades surrounding the October Revolution (O.S.) in 1917. 
In its immediate aftermath, vanguard movements such as Suprematism and Vladimir Tatlin's Constructivism were encouraged by Trotskyite factions in the government. Malevich held several prominent teaching positions and received a solo show at the Sixteenth State Exhibition in Moscow in 1919. His recognition spread to the West with solo exhibitions in Warsaw and Berlin in 1927. From 1928 to 1930 he taught at the Kiev Art Institute alongside Alexander Bogomazov, Victor Palmov and Vladimir Tatlin, and published his articles in the Kharkiv magazine Nova Generatsia (New Generation). But the onset of repression against the intelligentsia in Ukraine forced Malevich to return to modern-day Saint Petersburg. From the beginning of the 1930s, modern art was falling out of favor with the new government of Joseph Stalin. Malevich soon lost his teaching position, his artworks and manuscripts were confiscated, and he was banned from making art. In 1930, he was imprisoned for two months due to suspicions raised by his trip to Poland and Germany. Forced to abandon abstraction, he painted in a representational style in the years before his death from cancer in 1935, at the age of 56. Nonetheless, his art and his writing influenced contemporaries such as El Lissitzky, Lyubov Popova and Alexander Rodchenko, as well as generations of later abstract artists, such as Ad Reinhardt and the Minimalists. He was celebrated posthumously in major exhibits at the Museum of Modern Art (1936), the Guggenheim Museum (1973) and the Stedelijk Museum in Amsterdam (1989), which has a large collection of his work. In the 1990s, the ownership claims of museums to many Malevich works began to be disputed by his heirs. Kazimir Malevich was born Kazimierz Malewicz to a Polish family who settled near Kiev in Kiev Governorate of the Russian Empire during the partitions of Poland. His parents, Ludwika and Seweryn Malewicz, were Roman Catholic like most ethnic Poles, though his father attended Orthodox services as well. 
They both had fled from the former eastern territories of the Commonwealth (present-day Kopyl Region of Belarus) to Kiev in the aftermath of the failed Polish January Uprising of 1863 against the tsarist army. His native language was Polish, but he also spoke Russian, as well as Ukrainian due to his childhood surroundings. Malevich would later write a series of articles in Ukrainian about art. Kazimir's father managed a sugar factory. Kazimir was the first of fourteen children, only nine of whom survived into adulthood. His family moved often and he spent most of his childhood in the villages of modern-day Ukraine, amidst sugar-beet plantations, far from centers of culture. Until age twelve he knew nothing of professional artists, although art had surrounded him in childhood. He delighted in peasant embroidery, and in decorated walls and stoves. He was able to paint in the peasant style. He studied drawing in Kiev from 1895 to 1896. From 1896 to 1904 Kazimir Malevich lived in Kursk. In 1904, after the death of his father, he moved to Moscow. He studied at the Moscow School of Painting, Sculpture, and Architecture from 1904 to 1910 and in the studio of Fedor Rerberg in Moscow. In 1911 he participated in the second exhibition of the group "Soyuz Molodyozhi" (Union of Youth) in St. Petersburg, together with Vladimir Tatlin; in 1912 the group held its third exhibition, which included works by Aleksandra Ekster, Tatlin, and others. In the same year he participated in an exhibition by the collective "Donkey's Tail" in Moscow. By that time his works were influenced by Natalia Goncharova and Mikhail Larionov, Russian avant-garde painters who were particularly interested in Russian folk art called "lubok". Malevich described himself as painting in a "Cubo-Futurist" style in 1912. In March 1913 a major exhibition of Aristarkh Lentulov's paintings opened in Moscow. 
The effect of this exhibition was comparable to that of the Paul Cézanne exhibition in Paris in 1907, as all the main Russian avant-garde artists of the time (including Malevich) immediately absorbed the cubist principles and began using them in their works. That same year, the Cubo-Futurist opera "Victory Over the Sun", with Malevich's stage designs, was a great success. In 1914 Malevich exhibited his works in the "Salon des Indépendants" in Paris together with Alexander Archipenko, Sonia Delaunay, Aleksandra Ekster, and Vadim Meller, among others. Malevich also co-illustrated, with Pavel Filonov, "Selected Poems with Postscript, 1907–1914" by Velimir Khlebnikov and another work by Khlebnikov in 1914 titled "Roar! Gauntlets, 1908–1914", with Vladimir Burliuk. Later in that same year he created a series of lithographs in support of Russia's entry into WWI. These prints, accompanied by captions by Vladimir Mayakovsky and published by the Moscow-based publishing house Segodniashnii Lubok (Contemporary Lubok), on the one hand show the influence of traditional folk art, but on the other are characterised by solid blocks of pure colours juxtaposed in compositionally evocative ways that anticipate his Suprematist work. In 1911 Brocard & Co. produced an eau de cologne called "Severny". Malevich conceived the advertisement and the design of the perfume bottle, a craquelure-glass iceberg topped with a polar bear, a design that remained in use through the mid-1920s. In 1915, Malevich laid down the foundations of Suprematism when he published his manifesto, "From Cubism to Suprematism". In 1915–1916 he worked with other Suprematist artists in a peasant/artisan co-operative in Skoptsi and Verbovka village. In 1916–1917 he participated in exhibitions of the Jack of Diamonds group in Moscow together with Nathan Altman, David Burliuk, Aleksandra Ekster and others. Famous examples of his Suprematist works include "Black Square" (1915) and "White On White" (1918). 
Malevich exhibited his first "Black Square", now at the Tretyakov Gallery in Moscow, at the Last Futurist Exhibition 0,10 in Petrograd (Saint Petersburg) in 1915. A black square placed against the sun had appeared for the first time in the 1913 scenic designs for the Futurist opera "Victory over the Sun". The second "Black Square" was painted around 1923. Some believe that the third "Black Square" (also at the Tretyakov Gallery) was painted in 1929 for Malevich's solo exhibition, because of the poor condition of the 1915 square. One more "Black Square", the smallest and probably the last, may have been intended as a diptych together with the "Red Square" (though of smaller size) for the exhibition Artists of the RSFSR: 15 Years, held in Leningrad (1932). The two squares, Black and Red, were the centerpiece of the show. This last square, despite the author's note "1913" on the reverse, is believed to have been created in the late twenties or early thirties, for there are no earlier mentions of it. In 1918, Malevich created the stage designs for Vladimir Mayakovsky's play "Mystery-Bouffe", produced by Vsevolod Meyerhold. He was interested in aerial photography and aviation, which led him to abstractions inspired by or derived from aerial landscapes. Some Ukrainian authors argue that Malevich's Suprematism is rooted in traditional Ukrainian culture. After the October Revolution (1917), Malevich became a member of the Collegium on the Arts of Narkompros, the Commission for the Protection of Monuments and the Museums Commission (all from 1918–1919). He taught at the Vitebsk Practical Art School in Belarus (1919–1922) alongside Marc Chagall, the Leningrad Academy of Arts (1922–1927), the Kiev Art Institute (1928–1930), and the House of the Arts in Leningrad (1930). He wrote the book "The World as Non-Objectivity", which was published in Munich in 1926 and translated into English in 1959. In it, he outlines his Suprematist theories. 
In 1923, Malevich was appointed director of the Petrograd State Institute of Artistic Culture, which was forced to close in 1926 after a Communist party newspaper called it "a government-supported monastery" rife with "counterrevolutionary sermonizing and artistic debauchery." The Soviet state was by then heavily promoting an idealized, propagandistic style of art called Socialist Realism—a style Malevich had spent his entire career repudiating. Nevertheless, he swam with the current, and was quietly tolerated by the Communists. In 1927, Malevich traveled to Warsaw, where he was given a hero's welcome. There he met with artists and former students Władysław Strzemiński and Katarzyna Kobro, whose own movement, Unism, was highly influenced by Malevich. He held his first foreign exhibit in the Hotel Polonia Palace. From there the painter ventured on to Berlin and Munich for a retrospective which finally brought him international recognition. He arranged to leave most of the paintings behind when he returned to the Soviet Union. Malevich's assumption that a shift in the attitudes of the Soviet authorities toward the modernist art movement would take place after the death of Vladimir Lenin and Leon Trotsky's fall from power was proven correct within a few years, when the government of Joseph Stalin turned against forms of abstraction, considering them a type of "bourgeois" art that could not express social realities. As a consequence, many of his works were confiscated and he was banned from creating and exhibiting similar art. In autumn 1930 he was arrested and interrogated by the secret police in Leningrad, accused of Polish espionage, and threatened with execution. He was released from imprisonment in early December. Critics derided Malevich's art as a negation of everything good and pure: love of life and love of nature. The Westernizer artist and art historian Alexandre Benois was one such critic. 
Malevich responded that art can advance and develop for art's sake alone, saying that "art does not need us, and it never did". When Malevich died of cancer at the age of fifty-six, in Leningrad on 15 May 1935, his friends and disciples buried his ashes in a grave marked with a black square. They did not fulfill his stated wish to have the grave topped with an "architekton"—one of his skyscraper-like maquettes of abstract forms, equipped with a telescope through which visitors were to gaze at Jupiter. On his deathbed Malevich had been exhibited with the "Black Square" above him, and mourners at his funeral rally were permitted to wave a banner bearing a black square. Malevich had asked to be buried under an oak tree on the outskirts of Nemchinovka, a place to which he felt a special bond. His ashes were sent to Nemchinovka and buried in a field near his dacha. Nikolai Suetin, a friend of Malevich's and a fellow artist, designed a white cube with a black square to mark the burial site. The memorial was destroyed during World War II. The city of Leningrad bestowed a pension on Malevich's mother and daughter. In 2013, an apartment block was built over the tomb and burial site of Kazimir Malevich. Another nearby monument to Malevich, put up in 1988, is now also situated on the grounds of a gated community. Malevich's family was one of the millions of Poles who lived within the Russian Empire following the Partitions of Poland. Kazimir Malevich was born near Kiev, on lands that had previously been part of the Polish-Lithuanian Commonwealth, to parents who were ethnic Poles. Both Polish and Russian were native languages of Malevich, who would sign his artwork in the Polish form of his name as "Kazimierz Malewicz". In a visa application to travel to France, Malewicz claimed "Polish" as his nationality. 
French art historian Andrei Nakov, who re-established Malevich's birth year as 1879 (and not 1878), has argued for restoration of the Polish spelling of Malevich's name. In 1985 Polish performance artist Zbigniew Warpechowski performed "Citizenship for a Pure Feeling of Kazimierz Malewicz" as an homage to the artist and a critique of the Polish authorities, who had refused to grant Polish citizenship to Kazimir Malevich. In 2013, Malevich's family in New York City and fans founded the not-for-profit "The Rectangular Circle of Friends of Kazimierz Malewicz", whose dedicated goal is to promote awareness of Kazimir's Polish ethnicity. A Russian art historian gained access to the artist's criminal case and found that in some documents Malevich specified his nationality as Ukrainian. Alfred H. Barr Jr. included several of Malevich's paintings in the groundbreaking exhibition "Cubism and Abstract Art" at the Museum of Modern Art in New York in 1936. In 1939, the Museum of Non-Objective Painting opened in New York; its founder, Solomon R. Guggenheim—an early and passionate collector of the Russian avant-garde—was inspired by the same aesthetic ideals and spiritual quest that exemplified Malevich's art. The first U.S. retrospective of Malevich's work, in 1973 at the Solomon R. Guggenheim Museum, provoked a flood of interest and further intensified his impact on postwar American and European artists. However, most of Malevich's work and the story of the Russian avant-garde remained under lock and key until Glasnost. In 1989, the Stedelijk Museum in Amsterdam held the West's first large-scale Malevich retrospective, including the paintings it owned and works from the collection of Russian art critic Nikolai Khardzhiev. Malevich's works are held in several major art museums, including the State Tretyakov Gallery in Moscow, and in New York, the Museum of Modern Art and the Guggenheim Museum. 
The Stedelijk Museum in Amsterdam owns 24 Malevich paintings, more than any other museum outside of Russia. Another major collection of Malevich works is held by the State Museum of Contemporary Art in Thessaloniki. "Black Square", the fourth version of his magnum opus, painted in the 1920s, was discovered in 1993 in Samara and purchased by Inkombank for US$250,000. In April 2002 the painting was auctioned for an equivalent of US$1 million. The purchase was financed by the Russian philanthropist Vladimir Potanin, who donated funds to the Russian Ministry of Culture, and ultimately, to the State Hermitage Museum collection. According to the Hermitage website, this was the largest private contribution to state art museums since the October Revolution. In 2008 the Stedelijk Museum restituted to the heirs of Malevich's family five works from a group that Malevich had left behind in Berlin and that the museum had acquired in 1958, in exchange for undisputed title to the remaining pictures. On 3 November 2008 one of these works, the 1916 "Suprematist Composition", set a world record both for a Russian work of art and for any work sold at auction that year, selling at Sotheby's in New York City for just over US$60 million (surpassing the artist's previous record of US$17 million, set in 2000). In May 2018, the same 1916 "Suprematist Composition" sold at Christie's New York for over US$85 million (including fees), a record auction price for a Russian work of art. Malevich's life inspires many references featuring events and the paintings as players. The smuggling of Malevich paintings out of Russia is a key to the plot line of writer Martin Cruz Smith's thriller "Red Square". Noah Charney's novel "The Art Thief" tells the story of two stolen Malevich "White on White" paintings, and discusses the implications of Malevich's radical Suprematist compositions on the art world. 
British artist Keith Coventry has used Malevich's paintings to comment on modernism, notably in his "Estate Paintings". Malevich's work also is featured prominently in the Lars von Trier film "Melancholia". At the Closing Ceremony of the Sochi 2014 Olympic Winter Games, Malevich's visual themes were featured (via projections) in a section on 20th-century Russian modern art. † Also known as "Red Square: Painterly Realism of a Peasant Woman in Two Dimensions". †† Also known as "Black Square and Red Square: Painterly Realism of a Boy with a Knapsack - Color Masses in the Fourth Dimension".
https://en.wikipedia.org/wiki?curid=17300
Kakinomoto no Hitomaro Kakinomoto no Hitomaro (柿本 人麻呂 or 柿本 人麿; – ) was a Japanese "waka" poet and aristocrat of the late Asuka period. He was the most prominent of the poets included in the "Man'yōshū", the oldest "waka" anthology, but apart from what can be gleaned from hints in the "Man'yōshū", the details of his life are largely uncertain. He was born to the Kakinomoto clan, based in Yamato Province, probably in the 650s, and likely died in Iwami Province around 709. He served as court poet to Empress Jitō, creating many works praising the imperial family, and is best remembered for his elegies for various imperial princes. He also composed well-regarded travel poems. He is ranked as one of the Thirty-six Poetry Immortals. Ōtomo no Yakamochi, the presumed compiler of the "Man'yōshū", and Ki no Tsurayuki, the principal compiler of the "Kokin Wakashū", praised Hitomaro as "Sanshi no Mon" (山柿の門) and "Uta no Hijiri" (歌の聖) respectively. From the Heian period on, he was often called Hito-maru (人丸). He has come to be revered as a god of poetry and scholarship, and is considered one of the four greatest poets in Japanese history, along with Fujiwara no Teika, Sōgi and Bashō. The sole early source for the life of the poet Kakinomoto no Hitomaro is the "Man'yōshū". His name does not appear in any of the official court documents, perhaps on account of his low rank. Hitomaro was born into the Kakinomoto clan, an offshoot of the ancient Wani clan. Centred in the northeastern part of the Nara Basin, the Wani clan had furnished many imperial consorts in the fourth through sixth centuries, and extended their influence from Yamato Province to Yamashiro, Ōmi, Tanba and Harima provinces. Many of their clan traditions (including genealogies, songs, and tales) are preserved in the "Nihon Shoki" and, especially, the "Kojiki". The Kakinomoto clan were headquartered in either Shinjō, Nara or, perhaps more likely, the Ichinomoto area of Tenri, Nara. 
The main Wani clan were also based in this area, so the Kakinomoto clan may have had a particularly close relationship with their parent clan. According to the "Shinsen Shōjiroku", the clan's name derives from the persimmon ("kaki") tree that grew on their land during the reign of Emperor Bidatsu. The Kakinomoto clan had their hereditary title promoted from Omi to Ason in the eleventh month (see "Japanese calendar") of 684. According to the "Nihon Shoki", Kakinomoto no Saru, the probable head of the clan, had been among ten people appointed to a rank equivalent to Junior Fifth Rank in the twelfth month of 681. These facts lead Watase to conjecture that the Kakinomoto clan may have had some literary success in the court of Emperor Tenmu. According to the "Shoku Nihongi", Saru died in 708, having attained the Junior Fourth Rank, Lower Grade. There are several theories regarding the relationship of this Kakinomoto no Saru to the poet Hitomaro, including that the former was the latter's father, brother or uncle, or that they were the same person. The last theory has been advanced by Takeshi Umehara, but has little supporting evidence. While the other theories cannot be confirmed, it is certain that they were members of the same clan (probably close relatives), and were active at the same time. It is likely that each had a significant effect on the other through their concurrent activity at court. The year in which Hitomaro was born is not known, nor can much be said with certainty about any aspects of his life beyond his poetic activities. Watase tentatively takes Hitomaro as being 21 years old (by Japanese reckoning) between 673 and 675, which would put his birth between 653 and 655. The earliest dated work attributed to him in the "Man'yōshū" is his Tanabata poem ("Man'yōshū" 2033), composed in the ninth year of Emperor Tenmu's reign (680). 
The content of this poem reveals an awareness of the mythology that, according to the preface to the "Kojiki" (completed in 712), had begun to be compiled during Tenmu's reign. Watase also observes that Hitomaro's having composed a Tanabata poem means that he was probably attending Tanabata gatherings during this period. A significant number of poems in the "Kakinomoto no Ason Hitomaro Kashū" were apparently recorded by Hitomaro before 690, and are characteristic of court poetry, leading to the conclusion that he was active at court from the early part of Emperor Tenmu's reign. From this point he was active in recording and composing love poems at court. Watase speculates that Hitomaro came to court in response to an imperial edict in 673. Based on Hitomaro's poetic activities during Empress Jitō's reign, there are a few possibilities for where Hitomaro was serving at Tenmu's court. Watase presents three principal theories: first under the empress-consort Princess Uno-no-sarara (who later became Empress Jitō); second under Crown Prince Kusakabe; third in the palace of Prince Osakabe. Hitomaro acted as a court poet during the reigns of Empress Jitō and Emperor Monmu. In the fourth month of 689, Prince Kusakabe died, and Hitomaro composed an elegy commemorating the prince. He also composed an elegy for Princess Asuka, who died in the fourth month of 700, and a poem commemorating an imperial visit to Kii Province. His poetic composition flourished during the period in which Empress Jitō was active (both during her reign and after her retirement). He composed poetry for numerous members of the imperial family, including the empress, Prince Kusakabe, Prince Karu, Prince Takechi, Prince Osakabe, Prince Naga, Prince Yuge, Prince Toneri, Princess Hatsusebe and Princess Asuka. 
He apparently composed poetry in Yamato Province (his home), Yamashiro Province and Ōmi Province in the north, Kii Province in the south, Shikoku, Kyūshū and the Seto Inland Sea in the west, as well as Iwami Province in the northwest. Susumu Nakanishi remarks that the fact that he apparently did not compose elegies for emperors themselves, and that most of his poems centre around princes and princesses, indicates that he was probably a writer affiliated with the literary circles that formed around these junior members of the imperial family. The ordering of poems, and their headnotes, in volume 2 of the "Man'yōshū" implies that Hitomaro died shortly before the moving of the capital to Nara in 710. He would have been in Iwami Province, at the Sixth Rank or lower. The date, site and manner of his death are a matter of scholarly debate, due to some contradictory details gleaned from poems attributed to Hitomaro and his wife Yosami. Taking Watase's rough dates, he would have been in his mid-fifties in 709, when Watase speculates he died. Mokichi Saitō postulated that Hitomaro died in an epidemic that swept Iwami and Izumo provinces in 707. Hitomaro's final poem gives the strong impression that he met his death in the mountains. Saitō was convinced he had located the site of the Kamoyama of the above poem and erected a monument there, but two poems by Yosami that immediately follow the above in the "Man'yōshū" suggest otherwise, as they mention "shells" (貝 "kai") and a "Stone River" (石川 "Ishikawa"), neither of which seems likely in the context of Saitō's Kamoyama. The above-quoted translation is based on Saitō's interpretation of "kai" as referring to a "ravine" (峡). Other scholars take the presence of "shells" as meaning Hitomaro died near the mouth of a river where it meets the sea. (This interpretation would give the translation "Alas! 
he lies buried, men say, / With the shells of the Stone River.") There is no river named "Ishikawa" near the present Kamoyama; Saitō explained this as "Ishikawa" perhaps being an archaic name for the upper part of another river. An unknown member of the Tajihi clan wrote a response to Yosami in the persona of Hitomaro, very clearly connecting Hitomaro's death to the sea. Hitomaro was a court poet during the reigns of Empress Jitō and Emperor Monmu, with most of his dateable poems coming from the last decade or so of the seventh century. He apparently left a private collection, the so-called "Kakinomoto no Ason Hitomaro Kashū", which does not survive as an independent work but was cited extensively by the compilers of the "Man'yōshū". Eighteen "chōka" and 67 "tanka" (of which 36 are envoys to his long poems) are directly attributed to him in the "Man'yōshū". All are located in the first four books of the collection. Of these, six "chōka" and 29 "tanka" are classified as "zōka" (miscellaneous poems), three "chōka" and 13 "tanka" as "sōmon" (mutual exchanges of love poetry), and nine "chōka" and 25 "tanka" as "banka" (elegies). Notably, he contributed "chōka" to all three categories, and composed a large number of "banka". From this breakdown it can be said that Hitomaro's poetry was primarily about affairs of the court, but that he also showed a marked preference for poems on travel. In addition to the 85 poems directly attributed to Hitomaro by the "Man'yōshū", two "chōka" and three "tanka" in books 3 and 9 are said to be traditionally attributed to Hitomaro. Additionally, there is one Hitomaro "tanka" in book 15 said to have been recited in 736 by an envoy sent to Silla. Including these "traditional" Hitomaro poems, that gives 20 "chōka" and 71 "tanka". It is quite possible that a significant number of these poems were incorrectly attributed to Hitomaro by tradition. 
In addition to Hitomaro's own compositions, there are also many poems said to have been recorded by him in his personal collection, the "Kakinomoto no Ason Hitomaro Kashū" (柿本朝臣人麿歌集). The "Hitomaro Kashū" included 333 "tanka", 35 "sedōka", and two "chōka". This adds up to a total figure of close to 500 poems directly associated with Hitomaro. Hitomaro is known for his solemn and mournful elegies for members of the imperial family, whom he described in his courtly poems as "gods" and "children of the sun". He incorporated elements of the national mythology seen in the "Kojiki" and "Nihon Shoki", as well as historical narrative, into his poetry. While he is known for his poems praising the imperial family, his poetry is also filled with human sensitivity and a new, fresh "folkiness". His lament for the Ōmi capital is noted for its vivid, sentimental descriptions of the ruins, while his elegy for Prince Takechi powerfully evokes the Jinshin War. His Yoshino poems splendidly praise the natural scenery and the divinity of the Japanese islands, and his Iwami exchange vividly describes the powerful emotions of being separated from the woman he loved. His romantic poems convey honest emotions, and his travel poems exquisitely describe the mood of the courtiers on these trips. He shed tears even for the deaths of unknown commoners on country paths and of court women whose names he did not know. Watase credits him with the creation of an ancient lyricism that expressed human sentiment and sincere emotion across his poems of both praise and mourning. There is evidence that Hitomaro exerted direct influence on the poetry composed during his own time. For example, poems 171 through 193 of Book 1 of the "Man'yōshū" bear similarities to his work. It is generally accepted that the court poets of the following generation (the so-called "third period" of "Man'yō" poetry), including Yamabe no Akahito, were influenced by Hitomaro's courtly poems. 
Ōtomo no Yakamochi, a poet of the "fourth period" who probably had a hand in the final compilation of the collection, held Hitomaro in high regard, praising him as "Sanshi no Mon" (山柿の門). As discussed above, the death of Hitomaro appears to have already taken on some legendary characteristics. In his Japanese preface to the tenth-century "Kokin Wakashū", Ki no Tsurayuki referred to Hitomaro as "Uta no Hijiri" ("Saint of Poetry"). In the Heian period the practice of Hitomaru-eigu (人丸影供) also gained currency, showing that Hitomaro had already begun to be apotheosized. Hitomaro's divine status continued to grow in the Kamakura and Muromachi periods. The Edo-period scholars Keichū and Kamo no Mabuchi tended to reject the various legends about Hitomaro. In Akashi, Hyōgo Prefecture, there is a Kakinomoto Shrine dedicated to him, commemorating an early Heian belief that Hitomaro's spirit came to rest in Akashi, an area the historical Hitomaro probably visited multiple times. Hitomaro is today ranked, along with Fujiwara no Teika, Sōgi and Bashō, as one of the four greatest poets in Japanese history.
https://en.wikipedia.org/wiki?curid=17303
Karl Ernst von Baer Karl Ernst Ritter von Baer Edler von Huthorn ( – ) was a Baltic German scientist and explorer. Baer is also known in Russia as Karl Maksímovich Ber (). Baer was a naturalist, biologist, geologist, meteorologist, geographer, and a founding father of embryology. He was an explorer of European Russia and Scandinavia. He was a member of the Russian Academy of Sciences, a co-founder of the Russian Geographical Society, and the first president of the Russian Entomological Society. Karl Ernst von Baer was born into the Baltic German noble Baer family () at Piep Manor (), Jerwen County, Governorate of Estonia (in present-day Lääne-Viru County, Estonia), as a knight by birthright. His family was of Westphalian origin, from Osnabrück. He spent his early childhood at Lasila manor, Estonia. He was educated at the Knight and Cathedral School in Reval (Tallinn) and the Imperial University of Dorpat (Tartu), both of which he found lacking in the quality of their education. In 1812, during his time at the university, he was sent to Riga to aid the city after Napoleon's armies had laid siege to it. As he attempted to help the sick and wounded, he realized that his education at Dorpat had been inadequate, and upon his graduation, he notified his father that he would need to go abroad to "finish" his education. In his autobiography, he turned his discontent with his education at Dorpat into a lengthy appraisal of education in general, which came to dominate the content of the book. After leaving Tartu, he continued his education in Berlin, Vienna, and Würzburg, where Ignaz Döllinger introduced him to the new field of embryology. In 1817, he became a professor at Königsberg University (Kaliningrad), full professor of zoology in 1821, and professor of anatomy in 1826. In 1829, he taught briefly in St Petersburg, but returned to Königsberg. 
In 1834, Baer moved back to St Petersburg and joined the St Petersburg Academy of Sciences, first in zoology (1834–46) and then in comparative anatomy and physiology (1846–62). His interests while there were anatomy, ichthyology, ethnography, anthropology, and geography. While embryology had occupied his attention in Königsberg, in Russia von Baer engaged in a great deal of field research, including the exploration of the island of Novaya Zemlya. The last years of his life (1867–76) were spent in Dorpat, where he became a major critic of Charles Darwin. Von Baer studied the embryonic development of animals, discovering the blastula stage of development and the notochord. Together with Heinz Christian Pander, and building on the work of Caspar Friedrich Wolff, he described the germ layer theory of development (ectoderm, mesoderm, and endoderm) as a principle in a variety of species, laying the foundation for comparative embryology in the book "Über Entwickelungsgeschichte der Thiere" (1828). In 1826, Baer discovered the mammalian ovum, and in 1827 he completed the study "Ovi Mammalium et Hominis genesi" for the St Petersburg Academy of Science (published at Leipzig), becoming the first person to observe a mammalian ovum. (The human ovum itself was first described by Edgar Allen in 1928.) Only in 1876 did Oscar Hertwig prove that fertilization is due to the fusion of an egg and a sperm cell. Von Baer formulated what became known as Baer's laws of embryology. From his studies of comparative embryology, Baer had believed in the transmutation of species, but later in his career he rejected the theory of natural selection proposed by Charles Darwin. He produced an early phylogenetic tree revealing the ontogeny and phylogeny of vertebrate embryos. 
In the fifth edition of "On the Origin of Species", published in 1869, Charles Darwin added a "Historical Sketch" giving due credit to naturalists who had preceded him in publishing the opinion that species undergo modification, and that the existing forms of life have descended by true generation from pre-existing forms. Baer believed in a teleological force in nature which directed evolution (orthogenesis). The term Baer's law is also applied to the unconfirmed proposition that in the Northern Hemisphere, erosion occurs mostly on the right banks of rivers, and in the Southern Hemisphere on the left banks. In its fuller formulation, which Baer himself never stated, the erosion of rivers depends on the direction of flow as well. For example, in the Northern Hemisphere, a section of river flowing in a north–south direction would, according to the theory, erode its right bank due to the Coriolis effect, while in an east–west section there would be no preference. However, this was repudiated by Albert Einstein's tea leaf paradox. Baer was interested in the northern part of Russia, and explored Novaya Zemlya in 1837, collecting biological specimens. Other travels led him to the Caspian Sea, the North Cape, and Lapland. He was one of the founders of the Russian Geographical Society. He was a pioneer in studying biological time – the perception of time in different organisms. In 1849, he was elected a foreign honorary member of the American Academy of Arts and Sciences. He was elected a foreign member of the Royal Swedish Academy of Sciences in 1850. He was the president of the Estonian Naturalists' Society in 1869–1876, and was a co-founder and first president of the Russian Entomological Society. In 1875, he became a foreign member of the Royal Netherlands Academy of Arts and Sciences. A statue honouring him can be found on Toome Hill in Tartu, as well as at Lasila manor, Estonia, and at the Zoological Museum in St Petersburg, Russia.
Before the Estonian conversion to the euro, the 2-kroon bank note bore his portrait. Baer Island in the Kara Sea was named after Karl Ernst von Baer for his important contributions to the research of arctic meteorology between 1830 and 1840. A duck, Baer's pochard, was also named after him.
https://en.wikipedia.org/wiki?curid=17304
Kentucky and Virginia Resolutions The Virginia and Kentucky Resolutions were political statements drafted in 1798 and 1799, in which the Kentucky and Virginia legislatures took the position that the federal Alien and Sedition Acts were unconstitutional. The resolutions argued that the states had the right and the duty to declare as unconstitutional those acts of Congress that were not authorized by the Constitution. In doing so, they argued for states' rights and strict constructionism of the Constitution. The Kentucky and Virginia Resolutions of 1798 were written secretly by Vice President Thomas Jefferson and James Madison, respectively. The principles stated in the resolutions became known as the "Principles of '98". Adherents argued that the states could judge the constitutionality of central government laws and decrees. The Kentucky Resolutions of 1798 argued that each individual state has the power to declare that federal laws are unconstitutional and void. The Kentucky Resolution of 1799 added that when the states determine that a law is unconstitutional, nullification by the states is the proper remedy. The Virginia Resolutions of 1798 refer to "interposition" to express the idea that the states have a right to "interpose" to prevent harm caused by unconstitutional laws. The Virginia Resolutions contemplated joint action by the states. The Resolutions have been controversial since their passage, eliciting disapproval from ten state legislatures. Ron Chernow assessed the theoretical damage of the resolutions as "deep and lasting ... a recipe for disunion". George Washington was so appalled by them that he told Patrick Henry that if "systematically and pertinaciously pursued", they would "dissolve the union or produce coercion". Their influence reverberated right up to the Civil War and beyond. In the years leading up to the Nullification Crisis, the resolutions divided Jeffersonian democrats, with states' rights proponents such as John C.
Calhoun supporting the Principles of '98 and President Andrew Jackson opposing them. Years later, the passage of the Fugitive Slave Act of 1850 led anti-slavery activists to quote the Resolutions to support their calls on Northern states to nullify what they considered unconstitutional enforcement of the law. The resolutions opposed the federal Alien and Sedition Acts, which extended the powers of the federal government. They argued that the Constitution was a "compact" or agreement among the states. Therefore, the federal government had no right to exercise powers not specifically delegated to it. If the federal government assumed such powers, its acts could be declared unconstitutional by the states. So, states could decide the constitutionality of laws passed by Congress. Kentucky's Resolution 1 stated: That the several states composing the United States of America are not united on the principle of unlimited submission to their general government; but that, by compact, under the style and title of a Constitution for the United States, and of amendments thereto, they constituted a general government for special purposes, delegated to that government certain definite powers, reserving, each state to itself, the residuary mass of right to their own self-government; and that whensoever the general government assumes undelegated powers, its acts are unauthoritative, void, and of no force; that to this compact each state acceded as a state, and is an integral party, its co-States forming, as to itself, the other party; that this government, created by this compact, was not made the exclusive or final judge of the extent of the powers delegated to itself, since that would have made its discretion, and not the Constitution, the measure of its powers; but that, as in all other cases of compact among powers having no common judge, each party has an equal right to judge for itself, as well of infractions as of the mode and measure of redress. 
A key provision of the Kentucky Resolutions was Resolution 2, which denied Congress more than a few penal powers by arguing that Congress had no authority to punish crimes other than those specifically named in the Constitution. The Alien and Sedition Acts were asserted to be unconstitutional, and therefore void, because they dealt with crimes not mentioned in the Constitution: That the Constitution of the United States, having delegated to Congress a power to punish treason, counterfeiting the securities and current coin of the United States, piracies, and felonies committed on the high seas, and offenses against the law of nations, and no other crimes, whatsoever; and it being true as a general principle, and one of the amendments to the Constitution having also declared, that "the powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people," therefore the act of Congress, passed on the 14th day of July, 1798, and intitled "An Act in addition to the act intitled An Act for the punishment of certain crimes against the United States," as also the act passed by them on the—day of June, 1798, intitled "An Act to punish frauds committed on the bank of the United States," (and all their other acts which assume to create, define, or punish crimes, other than those so enumerated in the Constitution,) are altogether void, and of no force whatsoever. The Virginia Resolution of 1798 also relied on the compact theory and asserted that the states have the right to determine whether actions of the federal government exceed constitutional limits. 
The Virginia Resolution introduced the idea that the states may "interpose" when the federal government acts unconstitutionally, in their opinion: That this Assembly doth explicitly and peremptorily declare, that it views the powers of the federal government as resulting from the compact to which the states are parties, as limited by the plain sense and intention of the instrument constituting that compact, as no further valid than they are authorized by the grants enumerated in that compact; and that, in case of a deliberate, palpable, and dangerous exercise of other powers, not granted by the said compact, the states, who are parties thereto, have the right, and are in duty bound, to interpose, for arresting the progress of the evil, and for maintaining, within their respective limits, the authorities, rights and liberties, appertaining to them. There were two sets of Kentucky Resolutions. The Kentucky state legislature passed the first resolution on November 16, 1798, and the second on December 3, 1799. Jefferson wrote the 1798 Resolutions. The author of the 1799 Resolutions is not known with certainty. Both resolutions were stewarded through the legislature by John Breckinridge, who was mistakenly believed to have been their author. James Madison wrote the Virginia Resolution. The Virginia state legislature passed it on December 24, 1798. The Kentucky Resolutions of 1798 stated that acts of the national government beyond the scope of its constitutional powers are "unauthoritative, void, and of no force". While Jefferson's draft of the 1798 Resolutions had claimed that each state has a right of "nullification" of unconstitutional laws, that language did not appear in the final form of those Resolutions. Rather than purporting to nullify the Alien and Sedition Acts, the 1798 Resolutions called on the other states to join Kentucky "in declaring these acts void and of no force" and "in requesting their repeal at the next session of Congress".
The Kentucky Resolutions of 1799 were written to respond to the states who had rejected the 1798 Resolutions. The 1799 Resolutions used the term "nullification", which had been deleted from Jefferson's draft of the 1798 Resolutions, resolving: "That the several states who formed [the Constitution], being sovereign and independent, have the unquestionable right to judge of its infraction; and, That a nullification, by those sovereignties, of all unauthorized acts done under color of that instrument, is the rightful remedy." The 1799 Resolutions did not assert that Kentucky would unilaterally refuse to enforce the Alien and Sedition Acts. Rather, the 1799 Resolutions declared that Kentucky "will bow to the laws of the Union" but would continue "to oppose in a constitutional manner" the Alien and Sedition Acts. The 1799 Resolutions concluded by stating that Kentucky was entering its "solemn protest" against those Acts. The Virginia Resolution did not refer to "nullification", but instead used the idea of "interposition" by the states. The Resolution stated that when the national government acts beyond the scope of the Constitution, the states "have the right, and are in duty bound, to interpose, for arresting the progress of the evil, and for maintaining, within their respective limits, the authorities, rights and liberties, appertaining to them". The Virginia Resolution did not indicate what form this "interposition" might take or what effect it would have. The Virginia Resolutions appealed to the other states for agreement and cooperation. Numerous scholars (including Koch and Ammon) have noted that Madison had the words "void, and of no force or effect" excised from the Virginia Resolutions before adoption. Madison later explained that he did this because an individual state does not have the right to declare a federal law null and void. 
Rather, Madison explained that "interposition" involved a collective action of the states, not a refusal by an individual state to enforce federal law, and that the deletion of the words "void, and of no force or effect" was intended to make clear that no individual state could nullify federal law. The Kentucky Resolutions of 1799, while claiming the right of nullification, did not assert that individual states could exercise that right. Rather, nullification was described as an action to be taken by "the several states" who formed the Constitution. The Kentucky Resolutions thus ended up proposing joint action, as did the Virginia Resolution. The Resolutions joined the foundational beliefs of Jefferson's party and were used as party documents in the 1800 election. As they had been shepherded to passage in the Virginia House of Delegates by John Taylor of Caroline, they became part of the heritage of the "Old Republicans". Taylor rejoiced in what the House of Delegates had made of Madison's draft: it had read the claim that the Alien and Sedition Acts were unconstitutional as meaning that they had "no force or effect" in Virginia – that is, that they were void. Future Virginia Governor and U.S. Secretary of War James Barbour concluded that "unconstitutional" included "void, and of no force or effect", and that Madison's textual change did not affect the meaning. Madison himself strongly denied this reading of the Resolution. The long-term importance of the Resolutions lies not in their attack on the Alien and Sedition Acts, but rather in their strong statements of states' rights theory, which led to the rather different concepts of nullification and interposition. The resolutions were submitted to the other states for approval, but with no success. Seven states formally responded to Kentucky and Virginia by rejecting the Resolutions and three other states passed resolutions expressing disapproval, with the other four states taking no action. 
No other state affirmed the resolutions. At least six states responded to the Resolutions by taking the position that the constitutionality of acts of Congress is a question for the federal courts, not the state legislatures. For example, Vermont's resolution stated: "It belongs not to state legislatures to decide on the constitutionality of laws made by the general government; this power being exclusively vested in the judiciary courts of the Union." In New Hampshire, newspapers treated them as military threats and replied with foreshadowings of civil war. "We think it highly probable that Virginia and Kentucky will be sadly disappointed in their infernal plan of exciting insurrections and tumults," proclaimed one. The state legislature's unanimous reply was blunt. Alexander Hamilton, then building up the army, suggested sending it into Virginia, on some "obvious pretext". Measures would be taken, Hamilton hinted to an ally in Congress, "to act upon the laws and put Virginia to the Test of resistance". At the Virginia General Assembly, delegate John Mathews was said to have objected to the passing of the resolutions by "tearing them into pieces and trampling them underfoot." In January 1800, the Virginia General Assembly passed the Report of 1800, a document written by Madison to respond to criticism of the Virginia Resolution by other states. The Report of 1800 reviewed and affirmed each part of the Virginia Resolution, affirming that the states have the right to declare that a federal action is unconstitutional. The Report went on to assert that a declaration of unconstitutionality by a state would be an expression of opinion, without legal effect. The purpose of such a declaration, said Madison, was to mobilize public opinion and to elicit cooperation from other states.
Madison indicated that the power to make binding constitutional determinations remained in the federal courts. Madison then argued that a state, after declaring a federal law unconstitutional, could take action by communicating with other states, attempting to enlist their support, petitioning Congress to repeal the law in question, introducing amendments to the Constitution in Congress, or calling a constitutional convention. However, in the same document Madison explicitly argued that the states retain the ultimate power to decide about the constitutionality of the federal laws, in "extreme cases" such as the Alien and Sedition Acts. The Supreme Court can decide in the last resort only in those cases which pertain to the acts of other branches of the federal government, but cannot take over the ultimate decision-making power from the states, which are the "sovereign parties" in the Constitutional compact. According to Madison, states could override not only acts of Congress but also decisions of the Supreme Court. Madison later strongly denied that individual states have the right to nullify federal law. Although the New England states rejected the Kentucky and Virginia Resolutions in 1798–99, several years later, the state governments of Massachusetts, Connecticut, and Rhode Island threatened to ignore the Embargo Act of 1807 based on the authority of states to stand up to laws deemed by those states to be unconstitutional. Rhode Island justified its position on the embargo act based on the explicit language of interposition. However, none of these states actually passed a resolution nullifying the Embargo Act. Instead, they challenged it in court, appealed to Congress for its repeal, and proposed several constitutional amendments. Several years later, Massachusetts and Connecticut asserted their right to test constitutionality when instructed to send their militias to defend the coast during the War of 1812.
Connecticut and Massachusetts questioned another embargo passed in 1813. Both states objected, with the Massachusetts legislature, or General Court, issuing a formal statement of protest. Massachusetts and Connecticut, along with representatives of some other New England states, held a convention in 1814 that issued a statement asserting the right of interposition. But the statement did not attempt to nullify federal law. Rather, it made an appeal to Congress to provide for the defense of New England and proposed several constitutional amendments. During the "nullification crisis" of 1828–1833, South Carolina passed an ordinance purporting to nullify two federal tariff laws. South Carolina asserted that the Tariff of 1828 and the Tariff of 1832 were beyond the authority of the Constitution, and therefore were "null, void, and no law, nor binding upon this State, its officers or citizens". Andrew Jackson issued a proclamation against the doctrine of nullification, stating: "I consider ... the power to annul a law of the United States, assumed by one State, incompatible with the existence of the Union, contradicted expressly by the letter of the Constitution, unauthorized by its spirit, inconsistent with every principle on which it was founded, and destructive of the great object for which it was formed." He also denied the right to secede: "The Constitution ... forms a government not a league. ... To say that any State may at pleasure secede from the Union is to say that the United States is not a nation." James Madison also opposed South Carolina's position on nullification. Madison argued that he had never intended his Virginia Resolution to suggest that each individual state had the power to nullify an act of Congress. Madison wrote: "But it follows, from no view of the subject, that a nullification of a law of the U. S.
can as is now contended, belong rightfully to a single State, as one of the parties to the Constitution; the State not ceasing to avow its adherence to the Constitution. A plainer contradiction in terms, or a more fatal inlet to anarchy, cannot be imagined." Madison explained that when the Virginia Legislature passed the Virginia Resolution, the "interposition" it contemplated was "a concurring and cooperating interposition of the States, not that of a single State. ... [T]he Legislature expressly disclaimed the idea that a declaration of a State, that a law of the U. S. was unconstitutional, had the effect of annulling the law." Madison went on to argue that the purpose of the Virginia Resolution had been to elicit cooperation by the other states in seeking change through means provided in the Constitution, such as amendment. The Supreme Court rejected the compact theory in several nineteenth-century cases, undermining the basis for the Kentucky and Virginia resolutions. In cases such as "Martin v. Hunter's Lessee", "McCulloch v. Maryland", and "Texas v. White", the Court asserted that the Constitution was established directly by the people, rather than being a compact among the states. Abraham Lincoln also rejected the compact theory, saying the Constitution was a binding contract among the states and no contract can be changed unilaterally by one party. In 1954, the Supreme Court decided "Brown v. Board of Education", which ruled that segregated schools violate the Constitution. Many people in southern states strongly opposed the "Brown" decision. James J. Kilpatrick, an editor of the "Richmond News Leader", wrote a series of editorials urging "massive resistance" to integration of the schools. Kilpatrick, relying on the Virginia Resolution, revived the idea of interposition by the states as a constitutional basis for resisting federal government action.
A number of southern states, including Arkansas, Louisiana, Virginia, and Florida, subsequently passed interposition and nullification laws in an effort to prevent integration of their schools. In the case of "Cooper v. Aaron", the Supreme Court unanimously rejected Arkansas' effort to use nullification and interposition. The Supreme Court held that under the Supremacy Clause, federal law was controlling and the states did not have the power to evade the application of federal law. The Court specifically rejected the contention that Arkansas' legislature and governor had the power to nullify the "Brown" decision. In a similar case arising from Louisiana's interposition act, "Bush v. Orleans Parish School Board", the Supreme Court affirmed the decision of a federal district court that rejected interposition. The district court stated: "The conclusion is clear that interposition is not a constitutional doctrine. If taken seriously, it is illegal defiance of constitutional authority. Otherwise, 'it amounted to no more than a protest, an escape valve through which the legislators blew off steam to relieve their tensions.' ... However solemn or spirited, interposition resolutions have no legal efficacy." Merrill Peterson, Jefferson's otherwise very favorable biographer, emphasizes the negative long-term impact of the Resolutions, calling them "dangerous" and a product of "hysteria". Jefferson's biographer Dumas Malone argued that the Kentucky resolution might have gotten Jefferson impeached for treason, had his actions become known at the time. In writing the Kentucky Resolutions, Jefferson warned that, "unless arrested at the threshold", the Alien and Sedition Acts would "necessarily drive these states into revolution and blood." Historian Ron Chernow says of this "he wasn't calling for peaceful protests or civil disobedience: he was calling for outright rebellion, if needed, against the federal government of which he was vice president."
Jefferson "thus set forth a radical doctrine of states' rights that effectively undermined the constitution." Chernow argues that neither Jefferson nor Madison sensed that they had sponsored measures as inimical as the Alien and Sedition Acts themselves. Historian Garry Wills argued "Their nullification effort, if others had picked it up, would have been a greater threat to freedom than the misguided [alien and sedition] laws, which were soon rendered feckless by ridicule and electoral pressure". The theoretical damage of the Kentucky and Virginia resolutions was "deep and lasting, and was a recipe for disunion". George Washington was so appalled by them that he told Patrick Henry that if "systematically and pertinaciously pursued", they would "dissolve the union or produce coercion". The influence of Jefferson's doctrine of states' rights reverberated right up to the Civil War and beyond. Future president James Garfield, at the close of the Civil War, said that Jefferson's Kentucky Resolution "contained the germ of nullification and secession, and we are today reaping the fruits".
https://en.wikipedia.org/wiki?curid=17306
Keystone Cops The Keystone Cops (often spelled "Keystone Kops") are fictional, humorously incompetent policemen featured in silent film slapstick comedies produced by Mack Sennett for his Keystone Film Company between 1912 and 1917. The idea for the Keystone Cops came from Hank Mann, who also played police chief Tehiezel in the first film before being replaced by Ford Sterling. Their first film was "Hoffmeyer's Legacy" (1912) but their popularity stemmed from the 1913 short "The Bangville Police" starring Mabel Normand. As early as 1914, Sennett shifted the Keystone Cops from starring roles to background ensemble in support of comedians such as Charlie Chaplin and Fatty Arbuckle. The Keystone Cops served as supporting players for Marie Dressler, Mabel Normand, and Chaplin in the first full-length Sennett comedy feature "Tillie's Punctured Romance" (1914); "Mabel's New Hero" (1913) with Normand and Arbuckle; "Making a Living" (1914) with Chaplin in his first pre-Tramp screen appearance; "In the Clutches of the Gang" (1914) with Normand, Arbuckle, and Al St. John; and "Wished on Mabel" (1915) with Arbuckle and Normand, among others. Comic actors Chester Conklin, Jimmy Finlayson, and Ford Sterling were also Keystone Cops, as was director Del Lord. The original Keystone Cops were George Jeske, Bobby Dunn, Mack Riley, Charles Avery, Slim Summerville, Edgar Kennedy, and Hank Mann. In 2010, the lost short "A Thief Catcher" was discovered at an antique sale in Michigan. It was filmed in 1914 and stars Ford Sterling, Mack Swain, Edgar Kennedy, and Al St. John and includes a previously unknown appearance of Charlie Chaplin as a Keystone Kop. Mack Sennett continued to use the Keystone Cops intermittently through the 1920s, but their popularity had waned by the time that sound films arrived. 
In 1935, director Ralph Staub staged a revival of the Sennett gang for his Warner Brothers short subject "Keystone Hotel", featuring a re-creation of the Kops clutching at their hats, leaping in the air in surprise, running energetically in any direction, and taking extreme pratfalls. The Staub version of the Keystone Cops became a template for later re-creations. 20th Century Fox's 1939 film "Hollywood Cavalcade" had Buster Keaton in a Keystone chase scene. "Abbott and Costello Meet the Keystone Kops" (1955) included a lengthy chase scene, showcasing a group of stuntmen dressed as Sennett's squad. (Two original Keystone Cops appeared in this film: Heinie Conklin as an elderly studio guard and Hank Mann as a prop man. Sennett also made a cameo appearance as himself.) Richard Lester's "A Hard Day's Night" (1964) has a scene, reminiscent of the Keystone Cops, in which police chase the Beatles around the streets. Mel Brooks directed a car chase scene in the Keystone Cops' style in his comedy film "Silent Movie" (1976). The name has since been used to criticize any group for its mistakes, particularly if the mistakes happened after a great deal of energy and activity, or for a lack of coordination among the members. For example, in criticizing the Department of Homeland Security's response to Hurricane Katrina, Senator Joseph Lieberman claimed that emergency workers under DHS chief Michael Chertoff "ran around like Keystone Kops, uncertain about what they were supposed to do or uncertain how to do it." In sport, the term has come into common usage by television commentators, particularly in the United Kingdom and Ireland. The rugby commentator Liam Toland uses the term to describe a team's incompetent performance on the pitch. The phrase "Keystone cops defending" has become a catchphrase for describing a situation in an English football match where a defensive error or a series of defensive errors leads to a goal.
The term was also used in American football commentary to describe the play of the New York Jets against the New England Patriots in the 2012 "Buttfumble" game, with sportscaster Cris Collinsworth declaring "This is the Keystone Cops", after the Jets gave up 21 points in 51 seconds. According to Dave Filoni, supervising director of the animated television series "", the look of the police droid is based on the appearance of the Keystone Kops. The 1983 video game "Keystone Kapers", released for the Atari 2600, 5200, MSX and ColecoVision by Activision, featured Keystone Kop Officer Kelly.
https://en.wikipedia.org/wiki?curid=17307
Koenigsegg Koenigsegg Automotive AB () is a manufacturer of high-performance sports cars, based in Ängelholm, Skåne County, Sweden. The company was founded in 1994 in Sweden by Christian von Koenigsegg, with the intention of producing a "world-class" sports car. Many years of development and testing led to the CC8S, the company's first street-legal production car, which was introduced in 2002. In 2006, Koenigsegg began production of the CCX, which uses an engine created in-house especially for the car. The goal was to develop a car homologated for use worldwide, particularly in the United States, whose strict regulations did not allow the import of earlier Koenigsegg models. In March 2009, the CCXR was listed by "Forbes" as one of "the world's most beautiful cars". In December 2010, the Agera won the BBC Top Gear Hypercar of the Year Award. Apart from developing, manufacturing and selling the Koenigsegg line of sports cars, Koenigsegg is also involved in "green technology" development programmes, beginning with the CCXR ("Flower Power") flex-fuel sports car and continuing through the present with the Jesko. Koenigsegg is also active in development programmes for plug-in electric car systems and next-generation reciprocating engine technologies. Koenigsegg has also developed a camless piston engine, which found its first application in the Regera, introduced in 2015. Koenigsegg develops and produces most of the main systems, subsystems and components needed for their cars in-house instead of relying on subcontractors. In January 2019, Koenigsegg sold a 20% stake in the company to the Swedish electric car manufacturer National Electric Vehicle Sweden (NEVS), for . The initial design of the CC was penned by Christian von Koenigsegg. Industrial designer David Crafoord realised the sketches as a 1:5 scale model. This model was later scaled up in order to create the base plug for the initial Koenigsegg prototype, which was finished in 1996.
Over the following years, the prototype went through extensive testing and several new prototypes were built. The prototypes initially used an Audi V8 engine, but after the engine supply contract fell through, the next candidate was the flat-12 race engine developed by Motori Moderni for the Scuderia Coloni Formula One team; this engine had been raced under the Subaru badge in the 1990 season. Subaru 1235 engines were to be purchased and modified for use in the CC, but the deal fell through when the founder of Motori Moderni died, sending the company into bankruptcy. Eventually Koenigsegg developed its own engine based on the Ford Modular architecture, which was used in its early sports cars. Later on, Koenigsegg developed their own engines from scratch, including control systems and transmissions, which is very unusual for a small sports car producer. Christian von Koenigsegg got the idea to build his own car after watching the Norwegian stop-motion animated movie "Pinchcliffe Grand Prix" in his youth. He took his first steps in the world of business in his early 20s running a trading company called "Alpraaz" in Stockholm, Sweden. Alpraaz exported food from Europe to the developing world. The success of this venture gave von Koenigsegg the necessary financial standing to launch his chosen career as a car manufacturer. Initially, Koenigsegg Automotive was based in Olofström. In 1997, the company needed larger production facilities and moved to a farm just outside Ängelholm. On 22 February 2003, one of the production facilities caught fire and was badly damaged. Koenigsegg then acquired an abandoned airfield to use as its new factory, and in late 2003, one of the two large fighter-jet hangars and an office building were converted into a car factory. Since then, the company has been located near the still-active Ängelholm airport, where clients can arrive by private jet.
Koenigsegg controls and uses the former military runway for shakedown runs of production cars and high-speed testing. The Koenigsegg badge was designed in 1994 by Jacob Låftman, based on the heraldic coat of arms of the Koenigsegg family. The shield has been the family's coat of arms since the 12th century, when a family member was knighted by the Germany-based Holy Roman Empire. After moving into the abandoned airfield, which once housed the Swedish air force's "Johan Röd" squadron, Koenigsegg adopted, as a tribute to the squadron, the "ghost symbol" that the squadron had carried on its Saab AJS37 Viggen aircraft (the squadron also used the English phrase "The show must go on" on its aircraft). The badge is seen on models built in the factory converted from the squadron's hangar. On 12 June 2009, the media reported that Koenigsegg Group, consisting of Koenigsegg Automotive AB, Christian von Koenigsegg, Bård Eker and a group of investors led by Mark Bishop, had signed a letter of intent with Saab to take over the brand from General Motors. General Motors confirmed on June 16 that it had chosen Koenigsegg Group as the buyer of Saab Automobile. The deal, set to close 30 September 2009, included financing from the European Investment Bank, guaranteed by the Swedish government. By comparison, in 2008 Koenigsegg with its staff of 45 produced 18 cars at an average price of ; Saab employed 3,400 workers and made more than 93,000 cars. General Motors announced on 18 August that the deal had been signed, although certain financing details remained to be completed. On 9 September 2009, Koenigsegg announced that BAIC was going to join as a minority stakeholder in Koenigsegg. In November 2009, Koenigsegg decided not to finalise the purchase of Saab and left the negotiations, citing uncertainty over the timing of the takeover's completion. 
A Koenigsegg CC prototype was first publicised in 1996, while the full carbon-fibre production prototype, with white paintwork, was finally unveiled at the 2000 Paris Motor Show. The first customer took delivery of a red CC8S in 2002 at the Geneva Auto Show, and four more cars were built that year. Koenigsegg was established in Asia later that year with a premiere at the Seoul Auto Show. In 2004, the new CCR, essentially a higher-performance variant of the CC8S, was unveiled at the Geneva Auto Show; only 14 were produced. In 2006, Koenigsegg introduced the CCX, a new model developed to meet worldwide regulations for road use. This meant the car had to go through extensive development in order to meet the latest and most stringent safety and emission standards that the world's authorities demanded; Koenigsegg had to, for example, develop its own engines and other related technologies. Furthermore, Koenigsegg is the only low-volume sports car manufacturer to pass the new European pedestrian impact tests. Just after Koenigsegg passed this test, the requirement was deemed too complicated for compliance by low-volume manufacturers, so it is now unnecessary to meet these regulations if the production volume of a given model is less than 10,000 cars annually. In 2007, Koenigsegg premiered the CCXR, a biofuel/flex-fuel version of the CCX. The car features a modified engine, fuel system, and engine management system that enable it to run on normal gasoline or ethanol, and on any mixture of these two fuels. Ethanol has a higher octane rating than regular fuel and has an internal cooling effect on the combustion chamber, which allows for increased performance. In 2009, Koenigsegg released information about a special-edition car called the "Trevita", of which three were planned to be made but only two were finished due to technical problems. 
The "Trevita", which translates into English as "three whites", has a body made entirely of Koenigsegg's proprietary material consisting of diamond-coated carbon fibre. The "Trevita" is based on the CCXR, and therefore has a power output of when running on biofuel. In 2010 Koenigsegg released information at the 2010 Geneva Motor Show about a new model called the Agera, which translates into English as "take action/act". The Agera features a Koenigsegg developed 5.0-litre V8 engine coupled with variable turbo geometry turbochargers having a power output of , mated to a newly developed 7-speed dual clutch transmission. The Agera's design follows a clear lineage from the previous Koenigsegg sports cars, but adds many special new features, such as a wider front track, new styling and aerodynamic features, and a new interior; including a new lighting technique called "Ghost Light" by the manufacturer which consists of microscopic holes to hide the interior lighting until it is turned on, which then shines through what appears to be solid aluminium. Production of the Agera ended in July 2018 after being in production for eight years when two of the three final edition cars were presented to their customers. At the 2015 Geneva Motor Show, Koenigsegg presented a new model named the "Regera", which translates into English as to "reign" or "rule". The Regera uses the Koenigsegg Direct Drive (KDD) transmission. Below , motive power is by two electric motors on the rear wheels and the internal combustion engine (ICE) is disconnected. Above , the ICE is connected by a fixed ratio transmission with no gearbox, torque vectoring by the previously mentioned electric motors and boosted by a third electric motor attached to the driveshaft. Koenigsegg initially based its engine on a V8 engine block from Ford Racing. These engines powered the initial run of the CC monikered cars. 
The block for the V8 in the CCX (Competition Coupe Ten, to celebrate ten years of the company) was cast for Koenigsegg by Grainger & Worrall of the UK, who also cast the block for the Agera's 5.0-litre engine. In late 2018, Koenigsegg showed potential customers in Australia the replacement of the Agera via VR. Teaser sketches were released by the company at the same time. Initially, the model was rumoured to be called "Ragnarok", but the public unveiling of the car at the 2019 Geneva Motor Show revealed the name to be Jesko, after the founder's father, Jesko von Koenigsegg. The Jesko uses a development of the 5.0-litre V8 engine used in the Agera, which has a power output of on normal gasoline and a power output of and of torque at 5,100 rpm on E85 biofuel. The engine is mated to a 9-speed multi-clutch transmission with seven clutches, called the "Light Speed Transmission" (LST) by the manufacturer; the focus of this transmission is faster shift times. The car will come in either a high-downforce, track-oriented variant or a low-drag, high-speed Absolut variant. On 28 February 2005, at 12:08 pm local time, in Nardò, Italy, the CCR broke the Guinness World Record for the fastest production car in the world, having attained on the Nardò Ring (a circular track of circumference), breaking the record previously held by the McLaren F1. It held the record until September 2005, when the Bugatti Veyron broke the record by attaining a speed of , proven both by "Car and Driver" and "Top Gear". Both of the records set by Bugatti and McLaren were set on Volkswagen's own test track Ehra-Lessien, which features a straight. In 2008 the German magazine "sport auto" conducted a test for production cars, with the CCX winning the event in a total time of . The CCX also accelerated from 0–200 km/h in 9.3 seconds. In September 2011, the Agera R broke the Guinness World Record for 0–300 km/h with a time of just 14.53 seconds and a 0–300–0 km/h time of 21.19 seconds. 
Koenigsegg improved this record with the One:1 on 8 June 2015. It attained 0–300 km/h in 11.92 seconds and 0–300–0 km/h in 17.95 seconds (a 3.24-second improvement over the 2011 Koenigsegg Agera R record); it also attained 0–322 km/h (0–200 mph) in 14.328 seconds and 0–322–0 km/h in 20.71 seconds. On 1 October 2017, an Agera RS set an unofficial 0–400–0 km/h record with a time of 36.44 seconds. The record was set at the Vandel Airfield in Denmark and broke the record of 42 seconds set by the Bugatti Chiron a few weeks prior. On 4 November 2017, an Agera RS set a new record for the world's fastest production car with an average speed of . The record-breaking run was done on a closed section of Nevada State Route 160 in Pahrump, Nevada, United States. On the same day the team also beat its own 0–400–0 km/h record set a few weeks prior (33.29 seconds compared to the old record of 36.44 seconds). It was later confirmed via the instrumentation that the car topped out at 457.94 km/h (284.55 mph). On 23 September 2019, Koenigsegg set a new 0–400–0 km/h world record when a Koenigsegg Regera completed the run in 31.49 seconds. This was 1.8 seconds faster than Koenigsegg's previously unbeaten record, set by the Agera RS in 2017.
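The quoted improvements, and the km/h-to-mph conversion of the top-speed figure, can be verified directly from the times stated above:

```python
# Cross-checking the record figures quoted in the text.

# 0-300-0 km/h: One:1 (2015) vs Agera R (2011)
agera_r, one_to_one = 21.19, 17.95          # seconds
print(round(agera_r - one_to_one, 2))       # 3.24 s improvement, as stated

# 0-400-0 km/h: Agera RS second run vs first, and Regera (2019) vs Agera RS
first_run, second_run, regera = 36.44, 33.29, 31.49
print(round(first_run - second_run, 2))     # 3.15 s improvement
print(round(second_run - regera, 2))        # 1.8 s faster, as stated

# Top-speed unit check: 457.94 km/h expressed in mph (1 mile = 1.609344 km)
print(round(457.94 / 1.609344, 2))          # 284.55 mph, matching the text
```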
https://en.wikipedia.org/wiki?curid=17310
Kaliningrad Oblast Kaliningrad Oblast (, "Kaliningradskaya oblast"), often referred to as the Kaliningrad Region in English, or simply Kaliningrad, is a federal subject of the Russian Federation and an exclave of it, located on the coast of the Baltic Sea. Its constitutional status is equal to each of the other 85 federal subjects. Its administrative center is the city of Kaliningrad, formerly known as Königsberg. Baltiysk, in the oblast, is the only Baltic port in the Russian Federation that remains ice-free in winter. According to the 2010 census, it had a population of 941,873. The oblast is a semi-exclave, bordered by Poland to the south and Lithuania to the north and east, and the Baltic Sea to the west. It is impossible to travel overland between the oblast and Russia proper without passing through the territory of another country. The territory was formerly the northern part of the Prussian province of East Prussia, the southern part of which is today part of the Warmian-Masurian Voivodeship in Poland. With the defeat of Nazi Germany in 1945 in the Second World War, the territory was annexed by the Soviet Union as part of the Russian SFSR. Following the post-war flight and expulsion of the Germans, the territory was populated with citizens from the Soviet Union. Today only a small number of ethnic Germans remain; most of the several thousand who live in the oblast are recent immigrants from other parts of the former Soviet Union. Early in the 21st century, the economy of Kaliningrad Oblast became one of the best-performing economies in Russia. This was helped by a low manufacturing tax rate related to its status as the Yantar "Special Economic Zone" (SEZ). At one time, one in three televisions manufactured in Russia came from Kaliningrad. The territory's population was one of the few in Russia that was expected to show strong growth after the collapse of the USSR. 
During the Middle Ages, the territory of what is now the Kaliningrad Oblast was inhabited by tribes of Old Prussians (Sambians) in the western part and by Lithuanians in the eastern part. The tribes were divided by the rivers Pregolya and Łyna. The Teutonic Knights conquered the region and established a monastic state. On the foundations of a destroyed Prussian settlement known as Tvanksta, the Order founded the city of Königsberg (modern Kaliningrad). Germans resettled the territory and assimilated the indigenous Old Prussians. The Lithuanian-inhabited areas became known as Lithuania Minor. The old Baltic languages died out around the 17th century, their speakers having been assimilated and Germanised. In 1525, Grand Master Albert of Brandenburg secularized the Prussian branch of the Teutonic Order and established himself as the sovereign of the Duchy of Prussia. The duchy was nominally a fief of the Polish crown. It later merged with the Margraviate of Brandenburg. Königsberg was the duchy's capital from 1525 until 1701. As the centre of Prussia moved westward, the position of the capital became too peripheral and Berlin became the new Prussian capital city. During the Seven Years' War the region was occupied by the Russian Empire. It was reorganized into the Province of East Prussia within the Kingdom of Prussia in 1773. The territory of the Kaliningrad Oblast lies in the northern part of East Prussia. In 1817, East Prussia had 796,204 Protestants, 120,123 Roman Catholics, 2,389 Jews, and 864 Mennonites. In 1824, shortly before its merger with West Prussia, the population of East Prussia was 1,080,000 people. Of that number, according to Karl Andree, Germans were slightly more than half, while 280,000 (~26%) were ethnically Polish and 200,000 (~19%) were ethnically Lithuanian. As of 1819 there were also 20,000-strong ethnic Curonian and Latvian minorities, as well as 2,400 Jews, according to Georg Hassel. 
Similar numbers are given by August von Haxthausen in his 1839 book, with a breakdown by county. However, the majority of East Prussian Polish and Lithuanian inhabitants were Lutherans, not Roman Catholics like their ethnic kinsmen across the border in the Russian Empire. Only in southern Warmia (German: Ermland) did Catholic Poles – the so-called Warmiaks (not to be confused with the predominantly Protestant Masurians) – comprise the majority of the population, numbering 26,067 people (~81%) in the county of Allenstein (Polish: Olsztyn) in 1837. Another minority in 19th-century East Prussia were ethnic Russian Old Believers, also known as Philipponnen; their main town was Eckersdorf (Wojnowo). East Prussia was an important centre of German culture. Many important figures, such as Immanuel Kant and E. T. A. Hoffmann, came from this region. Despite being heavily damaged during World War II and thereafter, the cities of the oblast still contain examples of German architecture. The Jugendstil style showcases the rich German history and cultural importance of the area. By the early 20th century, Lithuanians formed a majority only in rural parts of the north-eastern corner of East Prussia (Memelland and Lithuania Minor). The same was true of the Latvian-speaking Kursenieki, who had settled the coast of East Prussia between Gdańsk and Klaipėda. The rest of the area, with the exception of the Slavic Masurians in southern Prussia, was overwhelmingly German-speaking. The Memel Territory (Klaipėda region), formerly part of north-eastern East Prussia as well as Lithuania Minor, was annexed by Lithuania in 1923. In 1938, Nazi Germany radically changed about a third of the place names in this area, replacing Old Prussian and Lithuanian names with newly invented German names. On August 29, 1944, Soviet troops reached the border of East Prussia. By January 1945, they had taken all of East Prussia except for the area around Königsberg. Many inhabitants fled west at this time. 
During the last days of the war, over two million people fled before the Red Army and were evacuated by sea. Under the terms of the Potsdam Agreement, the city became part of the Soviet Union pending the final determination of territorial questions at a peace settlement. This final determination took place on September 12, 1990, with the signing of the Treaty on the Final Settlement with Respect to Germany. The excerpt pertaining to the partition of East Prussia, including the area surrounding Königsberg, is as follows (note that Königsberg is spelt "Koenigsberg" in the original document): VI. CITY OF KOENIGSBERG AND THE ADJACENT AREA The Conference examined a proposal by the Soviet Government that pending the final determination of territorial questions at the peace settlement, the section of the western frontier of the Union of Soviet Socialist Republics which is adjacent to the Baltic Sea should pass from a point on the eastern shore of the Bay of Danzig to the east, north of Braunsberg – Goldap, to the meeting point of the frontiers of Lithuania, the Polish Republic and East Prussia. The Conference has agreed in principle to the proposal of the Soviet Government concerning the ultimate transfer to the Soviet Union of the city of Koenigsberg and the area adjacent to it as described above, subject to expert examination of the actual frontier. The President of the United States and the British Prime Minister have declared that they will support the proposal of the Conference at the forthcoming peace settlement. Königsberg was renamed Kaliningrad in 1946 in memory of Chairman of the Presidium of the Supreme Soviet of the USSR Mikhail Kalinin. The remaining German population was forcibly expelled between 1947 and 1948. The conquered territory was populated with citizens of the Soviet Union, mostly ethnic Russians but to a lesser extent also Ukrainians and Belarusians. The German language was replaced with the Russian language. 
In 1950, there were 1,165,000 inhabitants, which was only half the number of the pre-war population. The city was rebuilt during the Cold War. The territory became strategically important as the headquarters of the Soviet Baltic Fleet, as the port is ice-free in winter unlike Saint Petersburg (then Leningrad). Consequently, the city was closed to foreign visitors. In 1957, an agreement was signed and later came into force which delimited the border between Poland and the Soviet Union. The region was added as a semi-exclave to the Russian SFSR; since 1946 it has been known as the Kaliningrad Oblast. According to some historians, Stalin created it as an oblast separate from the Lithuanian SSR because it further separated the Baltic states from the West. The names of the cities, towns, rivers and other geographical features were changed to Russian names. The area was administered by the planning committee of the Lithuanian SSR, although it had its own Communist Party committee. However, the leadership of the Lithuanian SSR (especially Antanas Sniečkus) refused to annex the territory. In 2010, the German magazine "Der Spiegel" published a report claiming that Kaliningrad had been offered to Germany in 1990 (against payment), but this was denied by Mikhail Gorbachev. Kaliningrad's isolation was exacerbated by the collapse of the Soviet Union in 1991 when Lithuania became an independent country and even more when both Poland and Lithuania became members of NATO and subsequently the European Union in 2004. Since the dissolution of the Soviet Union and the independence of the Baltic states, Kaliningrad Oblast has been separated from the rest of Russia by other countries instead of by other Soviet republics. Neighboring nations imposed strict border controls when they joined the European Union. All military and civilian land links between the region and the rest of Russia have to pass through members of NATO and the EU. 
Russian proposals for visa-free travel between the EU and Kaliningrad have so far been rejected by the EU. Travel arrangements based on the "Facilitated Transit Document" (FTD) and "Facilitated Rail Transit Document" (FRTD) have been made. On January 12, 1996, Kaliningrad Oblast, alongside Sverdlovsk Oblast, became one of the first oblasts of Russia to sign a power-sharing treaty with the federal government, granting it autonomy. However, this agreement was abolished on May 31, 2002. The territory's economic situation was badly affected by its geographic isolation and by the significant reduction in the size of the Russian military garrison, which had previously been one of the major employers and had helped the local economy. After 1991, some ethnic Germans, such as Volga Germans from other parts of Russia and Kazakhstan, immigrated to the area, especially after Germany raised the requirements for people from the former Soviet Union to be accepted as ethnic Germans with a "right of return". These Germans are overwhelmingly Russian-speaking and as such were rejected for resettlement within Germany under Germany's new rules. A similar migration by Poles from the lands of the former Soviet Union to the Kaliningrad Oblast occurred at this time as well. The situation has begun to change, albeit slowly. Germany, Lithuania, and Poland have renewed contact with Kaliningrad Oblast through town twinning and other projects. This has helped to promote interest in the history and culture of the East Prussian and Lietuvininkai communities. In July 2005, the 750-year jubilee of the city was widely celebrated. In July 2007, Russian First Deputy Prime Minister Sergei Ivanov declared that if US-controlled missile defense systems were deployed in Poland, then nuclear weapons might be deployed in Kaliningrad. On November 5, 2008, Russian leader Dmitry Medvedev said that installing missiles in Kaliningrad was almost a certainty. 
These plans were suspended in January 2009, but implemented in October 2016. In 2011, a long-range Voronezh radar was commissioned to monitor missile launches within about 6,000 km. It is situated in the settlement of Pionersky (formerly German "Neukuhren") in Kaliningrad Oblast. Kaliningrad is the only Russian Baltic Sea port that is ice-free all year round and hence plays an important role in the maintenance of the Baltic Fleet. As a semi-exclave of Russia, it is surrounded by Poland (Pomeranian Voivodeship and Warmian-Masurian Voivodeship), Lithuania (Klaipėda County, Marijampolė County and Tauragė County) and the Baltic Sea. Its largest river is the Pregolya. It starts as a confluence of the Instruch and the Angrapa and drains into the Baltic Sea through the Vistula Lagoon. Its length is 123 km (76 mi) under the name Pregolya, or 292 km (181 mi) including the Angrapa. The current governor (since 2017) of Kaliningrad Oblast is Anton Alikhanov. The latest elections to the region's legislative body, the 40-seat Kaliningrad Oblast Duma, were held in September 2016. According to the 2010 Census, the population of the oblast was 941,873, down from 955,281 recorded in the 2002 Census. The 1989 Census recorded 871,283 inhabitants. Kaliningrad Oblast was the fourth most densely populated federal subject in Russia, with 62.5 persons/km2 (162 persons/sq mi). Population-wise, the oblast is thoroughly Russian and Russophone in character, with almost none of the pre–World War II German, Lithuanian (Lietuvininkai), Latvian-speaking Kursenieki, or Polish population remaining in today's Kaliningrad Oblast. However, after 1991, some ethnic Germans and Poles immigrated to the area from Kazakhstan, Russia, and other parts of the former Soviet Union. 
The 2010 Census also recorded the ethnic composition of the oblast. According to a 2012 survey, 34 per cent of the population of Kaliningrad Oblast declare themselves to be "spiritual but not religious", 30.9 per cent adhere to the Russian Orthodox Church, 22 per cent are atheist, and 11.1 per cent follow other religions or did not give an answer to the question; 1 per cent are unaffiliated generic Christians and 1 per cent adhere to the Catholic Church. Until 1945, the region was overwhelmingly Lutheran, with a small number of Catholics and Jews. The state church of Prussia was dominant in the region. Although it had been both Reformed and Lutheran since 1817, there was an overwhelming Lutheran majority and very few Reformed adherents in East Prussia. For some years after the fall of the Soviet Union, Kaliningrad Oblast was one of the most militarized areas of the Russian Federation, and the density of military installations was the highest in Europe, as much of the Soviet equipment pulled out of Eastern Europe was left there. As of 2009, there were 11,600 Russian ground troops based in the oblast, plus additional naval and air force personnel; military personnel thus amount to less than 2% of the oblast's population. Kaliningrad is the headquarters of the Russian Baltic Fleet, together with Chernyakhovsk (air base), Donskoye (air base) and Kaliningrad Chkalovsk (naval air base). "The Washington Times" wrote on January 3, 2001, citing anonymous intelligence reports, that Russia had transferred tactical nuclear weapons into a military base in Kaliningrad for the first time since the end of the Cold War. Russian top-level military leaders denied those claims. A Pentagon spokesperson said that such a deployment would violate the Russian pledge to remove nuclear weapons from the Baltics. In 1991 and 1992, Russia and the United States announced a non-binding agreement to reduce arsenals of tactical nuclear weapons. 
On November 5, 2008, Russian President Dmitry Medvedev said that Russia would deploy Iskander missiles in the oblast "as a response to U.S. plans for basing missile defense missiles in Poland," adding that the country also deployed equipment to electronically hamper the operation of future U.S. missile facilities in Poland and the Czech Republic. However, on January 28, 2009, a Russian defense official stated that the deployment of short-range missiles in Kaliningrad Oblast would cease due to "perceived changes in the attitude of the United States government towards the Russian Federation," following the election of United States President Barack Obama. In September 2009, Russia fully scrapped plans to send short-range missiles into the Kaliningrad Oblast in response to Obama's decision to cancel the missile defense system. In November 2011, Dmitry Medvedev issued another stern warning that Russia would deploy new missiles aimed at U.S. missile defense sites in Europe if the U.S. went ahead with the planned shield. Then in 2012, Russia chose Kaliningrad as the second region (after Moscow) to deploy the S-400 (SAM) missile system. Subsequently, the Russian newspaper "Izvestia" reported in December 2013 that the short-range Iskander-M 9K720 operational-tactical missile systems had been commissioned by the Western Military District's missile and artillery forces at about the same time. In 2017, the nominal GDP of Kaliningrad Oblast was US$7 billion, equivalent to US$7,000 per capita. The oblast derives an economic advantage from its geographic position as an ice-free port and its proximity to the European Union. It also has the world's largest deposits of amber. The region has developed its tourism infrastructure and promotes attractions such as the Curonian Spit. To address the oblast's high rate of unemployment, in 1996 the Russian authorities granted the oblast a special economic status with tax incentives that were intended to attract investors. 
The oblast's economy has since benefited substantially and in recent years experienced a boom. A US$45 million airport terminal has been opened, and the European Commission provides funds for business projects under its special program for the region. Trade with EU countries has increased, and economic output has grown. According to official statistics, the Gross Regional Product in 2006 was 115 billion roubles. GRP per capita in 2007 was 155,669 roubles. Car and truck assembly (GM, BMW, Kia, Yuejin), and production of auto parts, are major industries in Kaliningrad Oblast. There are shipbuilding facilities in Kaliningrad and Sovetsk. Food processing is a mature industry in the region. OKB Fakel, a world leader in the field of Hall thruster development, as well as a leading Russian developer and manufacturer of electric propulsion systems, is based in Neman. The company employs 960 people. General Satellite (GS) is the biggest employer in the city of Gusev, producing satellite receivers, cardboard packaging, nanomaterials, and other products. Kaliningrad Oblast possesses more than 90 per cent of the world's amber deposits. Until recently raw amber was exported for processing to other countries, but in 2013 the Russian government banned the export of raw amber in order to boost the amber processing industry in Russia. There are small oil reservoirs beneath the Baltic Sea not far from Kaliningrad's shore. Small-scale offshore exploration started in 2004. Poland, Lithuania, and some local NGOs voiced concerns about possible environmental effects. Fishing is an important regional industry, with big fishing ports in Kaliningrad and Pionersky (formerly Neukuhren) and smaller ones in Svetly and Rybachy. Average yearly power consumption in the Kaliningrad Oblast was 3.5 terawatt-hours in 2004, with local power generation providing just 0.235 terawatt-hours. The balance of energy needs was imported from neighboring countries. 
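The 2004 energy figures above directly imply the size of the imported balance; a quick check:

```python
# Kaliningrad Oblast energy balance, 2004, from the figures in the text.
consumption_twh = 3.5    # average yearly consumption
local_gen_twh = 0.235    # local generation

imported_twh = consumption_twh - local_gen_twh
local_share_pct = local_gen_twh / consumption_twh * 100

print(round(imported_twh, 3))      # 3.265 TWh had to be imported
print(round(local_share_pct, 1))   # local generation covered only ~6.7%
```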
A new Kaliningrad power station was built in 2005, covering 50% of the oblast's energy needs. A second part of this station was built in 2010, making the oblast independent of electricity imports. Two nuclear power reactors were later under construction in the eastern part of the region, but the project has since been abandoned.
https://en.wikipedia.org/wiki?curid=17311
Kenneth MacAlpin Kenneth MacAlpin (, ; 810 – 13 February 858), known in most modern regnal lists as Kenneth I, was a king of the Picts who, according to national myth, was the first king of Scots. He was thus later known by the posthumous nickname of , "The Conqueror". He became the apex and eponym of a dynasty—sometimes called Clann Chináeda—that ruled Scotland from the ninth to the early eleventh century. The Kenneth of myth, conqueror of the Picts and founder of the Kingdom of Alba, was born in the centuries after the real Kenneth died. In the reign of Kenneth II (), when the Chronicle of the Kings of Alba was compiled, the annalist wrote: In the 15th century, Andrew of Wyntoun's "Orygynale Cronykil of Scotland", a history in verse, added little to the account in the Chronicle: When the humanist scholar George Buchanan wrote his history ' in the 1570s, a great deal of lurid detail had been added to the story. Buchanan included an account of how Kenneth's father had been murdered by the Picts and a detailed, and entirely unsupported, account of how Kenneth avenged him and conquered the Picts. Buchanan was not as credulous as many, and he did not include the tale of MacAlpin's treason, a story from Gerald of Wales, who reused a tale of Saxon treachery at a feast in Geoffrey of Monmouth's inventive '. Later 19th-century historians, such as William Forbes Skene, brought new standards of accuracy to early Scottish history, while Celticists, such as Whitley Stokes and Kuno Meyer, cast a critical eye over Welsh and Irish sources. As a result, much of the misleading and vivid detail was removed from the scholarly account of events, even if it remained in the popular accounts. 
In place of a conquest of the Picts, the idea of Pictish matrilineal succession, mentioned by Bede and apparently the only way to make sense of the list of Kings of the Picts found in the Pictish Chronicle, advanced the view that Kenneth was a Gael, and a king of , who had inherited the throne of Pictland through a Pictish mother. Other Gaels, such as and , the sons of Fergus, were identified among the Pictish king lists, as were Angles such as Talorcen son of Eanfrith, and Britons such as Bridei son of Beli. Later historians would reject parts of the picture of Kenneth produced by Skene and subsequent historians, while accepting others. The medievalist Alex Woolf, interviewed by "The Scotsman" in 2004, is quoted as saying: Many other historians could be quoted in terms similar to Woolf. A feasible synopsis of the emerging consensus may be put forward: the kingships of Gaels and Picts underwent a process of gradual fusion, starting with Kenneth and rounded off in the reign of Constantine II. The Pictish institution of kingship provided the basis for merger with the Gaelic Alpin dynasty. The meeting of King Constantine and Bishop at the "Hill of Belief" near the (formerly Pictish) royal city of Scone in 906 cemented the rights and duties of Picts on an equal basis with those of Gaels (""). Hence the change in styling from "King of the Picts" to "King of ". The legacy of Gaelic as the first national language of Scotland does not obscure the foundational process in the establishment of the Scottish kingdom of Alba. Kenneth's origins are uncertain, as are his ties, if any, to previous kings of the Picts or . Among the genealogies contained in the Rawlinson B 502 manuscript, dating from around 1130, is the supposed descent of Malcolm II of Scotland. Medieval genealogies are unreliable sources, but many historians still accept Kenneth's descent from the established , or at the very least from some unknown minor sept of the . 
The manuscript provides the following ancestry for Kenneth:... son of son of son of son of son of son of son of son of son of son of ... Leaving aside the shadowy kings before son of , the genealogy is certainly flawed insofar as , who died , could not reasonably be the son of , who was killed . The conventional account would insert two generations between and : , father of , who died , and his father . Although later traditions provided details of his reign and death, Kenneth's father Alpin is not listed among the kings in the , which provides the following sequence of kings leading up to Kenneth:
Naoi m-bliadhna Cusaintin chain,
a naoi Aongusa ar Albain,
cethre bliadhna Aodha áin,
is a tri déug Eoghanáin.
Tríocha bliadhain Cionaoith chruaidh,
The nine years of the fair,
The nine of over ,
The four years of the noble,
And the thirteen of .
The thirty years of the hardy,
It is supposed that these kings are the Constantine son of Fergus and his brother (Angus II), who have already been mentioned, 's son (), as well as the obscure , but this sequence is considered doubtful if the list is intended to represent kings of , as it should if Kenneth were king there. That Kenneth was a Gael is not widely rejected, but modern historiography distinguishes between Kenneth as a Gael by culture and/or in ancestry, and Kenneth as a king of Gaelic . Kings of the Picts before him, from son of , his brother as well as son of Fergus and his presumed descendants were all at least partly Gaelicised. The idea that the Gaelic names of Pictish kings in Irish annals represented translations of Pictish ones was challenged by the discovery of the inscription "", the latinised name of the Pictish king son of Fergus, on the Dupplin Cross. Other evidence, such as that furnished by place-names, suggests the spread of Gaelic culture through western Pictland in the centuries before Kenneth. 
For example, Atholl, a name used in the "Annals of Ulster" for the year 739, has been thought to be "New Ireland", and Argyll derives from , the land of the "eastern Gaels". Compared with the many questions on his origins, Kenneth's ascent to power and subsequent reign can be dealt with simply. Kenneth's rise can be placed in the context of the recent end of the previous dynasty, which had dominated for two or four generations. This followed the death of king son of of , his brother , "and others almost innumerable" in battle against the Vikings in 839. The resulting succession crisis seems, if the Pictish Chronicle king-lists have any validity, to have resulted in at least four would-be kings warring for supreme power. Kenneth's reign is dated from 843, but it was probably not until 848 that he defeated the last of his rivals for power. The Pictish Chronicle claims that he was king in for two years before becoming Pictish king in 843, but this is not generally accepted. An alternative tradition, particularly prevalent in the 17th and 18th centuries, held that his reign began in 834 and ended in 863; depictions of Kenneth from that period give his reign as either 834–863 or 843–863. In 849, Kenneth had relics of Columba, which may have included the Monymusk Reliquary, transferred from Iona to Dunkeld. Other than these bare facts, the Chronicle of the Kings of Alba reports that he invaded six times, captured Melrose and burnt Dunbar, and also that Vikings laid waste to Pictland, reaching far into the interior. The "Annals of the Four Masters", not generally a good source on Scottish matters, do make mention of Kenneth, although what should be made of the report is unclear: , chief of , went to Alba, to strengthen the , at the request of Kenneth MacAlpin. The reign of Kenneth also saw an increased degree of Norse settlement in the outlying areas of modern Scotland. 
Shetland, Orkney, Caithness, Sutherland, the Western Isles and the Isle of Man, and part of Ross were settled; the links between Kenneth's kingdom and Ireland were weakened, those with southern England and the continent almost broken. In the face of this, Kenneth and his successors were forced to consolidate their position in their kingdom, and the union between the Picts and the Gaels, already progressing for several centuries, began to strengthen. By the time of Donald II, the kings would be called kings neither of the Gaels nor of the Scots but of "". Kenneth died from a tumour on 13 February 858 at the palace of "", perhaps near Scone. The annals report the death as that of the "king of the Picts", not the "king of Alba". The title "king of Alba" is not used until the time of Kenneth's grandsons, Donald II () and Constantine II (). The "Fragmentary Annals of Ireland" quote a verse lamenting Kenneth's death: Because with many troops lives no longer / there is weeping in every house; / there is no king of his worth under heaven / as far as the borders of Rome. The Irish annal "Ireland's Battle with the Foreigners" refers to him as "High King of Alba". Kenneth left at least two sons, Constantine and , who were later kings, and at least two daughters. One daughter married , king of Strathclyde, being the result of this marriage. Kenneth's daughter married two important Irish kings of the . Her first husband was of the . , ancestor of the O'Neill, was the son of this marriage. Her second husband was of . As the wife and mother of kings, when died in 913, her death was reported by the Annals of Ulster, an unusual thing for the male-centred chronicles of the age.
https://en.wikipedia.org/wiki?curid=17313
Khandi Alexander Khandi Alexander (born September 4, 1957) is an American dancer, choreographer and actress. She began her career as a dancer in the 1980s and was a choreographer for Whitney Houston's world tour from 1988 to 1992. During the 1990s, Alexander appeared in a number of films, including "CB4" (1993), "What's Love Got to Do with It" (1993), "Sugar Hill" (1994), and "There's Something About Mary" (1998). She starred as Catherine Duke in the NBC sitcom "NewsRadio" from 1995 to 1998. She also had a major recurring role in the NBC medical drama "ER" (1995–2001) as Jackie Robbins, sister to Dr. Peter Benton. Alexander also received critical acclaim for her leading performance in the HBO miniseries "The Corner" in 2000. From 2002 to 2009, Alexander starred as Dr. Alexx Woods in the CBS police procedural series "". From 2010 to 2013 she starred as LaDonna Batiste-Williams in the HBO drama "Treme". Later in 2013, she joined the cast of the ABC drama "Scandal" as Maya Lewis, Olivia Pope's mother, for which she received a Primetime Emmy Award nomination in 2015. Alexander also received a Critics' Choice Television Award nomination for playing Bessie Smith's sister in the 2015 HBO film "Bessie". Khandi Alexander was born in Jacksonville, Florida, the daughter of Alverina Yavonna (Masters), an opera and jazz singer, and Henry Roland Alexander, who owned a construction company. She was raised in Queens, New York, and was educated at Queensborough Community College. She appeared on Broadway, starring in "Chicago", Bob Fosse's "Dancin'", and "Dreamgirls". She was a choreographer for Whitney Houston's world tour from 1988 to 1992, and also appeared as a dancer in Natalie Cole's video for "Pink Cadillac" in 1988. Alexander began her acting career in the mid-1980s. She made her television debut on the 1985 sketch-comedy show "FTV". 
Since the early 1990s, Alexander has concentrated on film and TV, playing supporting roles in several movies, including "CB4", "Joshua Tree", "What's Love Got to Do with It", "Poetic Justice", and "Sugar Hill". In 1995, Alexander was cast as Catherine Duke on the NBC comedy series "NewsRadio". She stayed with the show until season 4 episode 7, "Catherine Moves On", then returned for a final appearance in the season 5 premiere episode, "Bill Moves On". She played the recurring character of Jackie Robbins in the medical drama series "ER". Alexander has made a number of guest appearances on other television shows, including "", "NYPD Blue", "Third Watch", "Cosby", "Better Off Ted", "La Femme Nikita", and "Body of Proof". In 2000, Alexander won critical acclaim for her performance as Fran Boyd, a mother addicted to drugs, in the Emmy Award-winning HBO miniseries "The Corner". She later appeared in the films "Emmett's Mark" and "Dark Blue", and starred opposite Rob Lowe in the Lifetime television movie "Perfect Strangers". From 2002 through 2008, she portrayed the character of Alexx Woods, a medical examiner, in the CBS police drama "". Alexander left "CSI: Miami" shortly before the end of the 2007–2008 season. Her final appearance aired on May 5, 2008. On February 2, 2009, she returned to the role of Alexx Woods for a guest appearance in the episode "". She returned again as Alexx Woods in guest appearances in the episodes "Out of Time" on September 21, 2009 and "" on October 19, 2009. In fall 2008, Alexander was cast as a lead character in the HBO drama pilot "Treme", which premiered on April 11, 2010. She played a bar owner in a neighborhood of New Orleans affected by Hurricane Katrina. She received critical acclaim for her performance in the show. The award-winning series, created by David Simon, ran for four seasons, ending in 2013. 
She was later cast in Shonda Rhimes's drama series "Scandal" as the mother of Kerry Washington's character. In 2015, she was nominated for a Primetime Emmy Award for Outstanding Guest Actress in a Drama Series for her performance. In 2014, Alexander was cast as the older sister of Queen Latifah's title character in the HBO film "Bessie", about the iconic blues singer Bessie Smith. She was nominated for a Critics' Choice Television Award for Best Supporting Actress in a Movie/Miniseries.
https://en.wikipedia.org/wiki?curid=17314
Klaus Fuchs Klaus Emil Julius Fuchs (29 December 1911 – 28 January 1988) was a German theoretical physicist and atomic spy who supplied information from the American, British, and Canadian Manhattan Project to the Soviet Union during and shortly after World War II. While at the Los Alamos National Laboratory, Fuchs was responsible for many significant theoretical calculations relating to the first nuclear weapons and, later, early models of the hydrogen bomb. After his conviction in 1950, he served nine years in prison in the United Kingdom and then moved to East Germany where he resumed his career as a physicist and scientific leader. The son of a Lutheran pastor, Fuchs attended the University of Leipzig, where his father was a professor of theology, and became involved in student politics, joining the student branch of the Social Democratic Party of Germany (SPD), and the "Reichsbanner Schwarz-Rot-Gold", the SPD's paramilitary organisation. He was expelled from the SPD in 1932, and joined the Communist Party of Germany (KPD). He went into hiding after the 1933 Reichstag fire, and fled to the United Kingdom, where he received his PhD from the University of Bristol under the supervision of Nevill Mott, and his DSc from the University of Edinburgh, where he worked as an assistant to Max Born. After the Second World War broke out in Europe, he was interned in the Isle of Man, and later in Canada. After he returned to Britain in 1941, he became an assistant to Rudolf Peierls, working on "Tube Alloys"—the British atomic bomb project. He began passing information on the project to the Soviet Union through Ruth Kuczynski, codenamed "Sonia", a German communist and a major in Soviet Military Intelligence who had worked with Richard Sorge's spy ring in the Far East. In 1943, Fuchs and Peierls went to Columbia University, in New York City, to work on the Manhattan Project. 
In August 1944, Fuchs joined the Theoretical Physics Division at the Los Alamos Laboratory, working under Hans Bethe. His chief area of expertise was the problem of implosion, necessary for the development of the plutonium bomb. After the war, he returned to the UK and worked at the Atomic Energy Research Establishment at Harwell as head of the Theoretical Physics Division. In January 1950, Fuchs confessed that he was a spy. A British court sentenced him to fourteen years' imprisonment and stripped him of his British citizenship. He was released in 1959, after serving nine years, and emigrated to the German Democratic Republic (East Germany), where he was elected to the Academy of Sciences and became a member of the Socialist Unity Party of Germany (SED) central committee. He was later appointed deputy director of the Institute for Nuclear Research in Rossendorf, where he served until he retired in 1979. Klaus Emil Julius Fuchs was born in Rüsselsheim, Grand Duchy of Hesse, on 29 December 1911, the third of four children of a Lutheran pastor, Emil Fuchs, and his wife Else Wagner. He had an older brother Gerhard, an older sister Elisabeth, and a younger sister, Kristel. The family moved to Eisenach, where Fuchs attended the "Gymnasium", and took his "Abitur". At school, Fuchs and his siblings were taunted over his father's unpopular political views, which they came to share. They became known as the "red foxes", Fuchs being the German word for fox. Fuchs entered the University of Leipzig in 1930, where his father was a professor of theology. He became involved in student politics, joining the student branch of the Social Democratic Party of Germany (SPD), a party that his father had joined in 1912, and the "Reichsbanner Schwarz-Rot-Gold", the party's paramilitary organisation. 
His father took up a new position as professor of religion at the Pedagogical Academy in Kiel, and in the autumn Fuchs transferred to the University of Kiel, which his brother Gerhard and sister Elisabeth also attended. Fuchs continued his studies in mathematics and physics at the university. In October 1931, his mother committed suicide by drinking hydrochloric acid. The family later discovered that his maternal grandmother had also taken her own life. In the March 1932 German presidential election, the SPD supported Paul von Hindenburg for President, fearing that a split vote would hand the job to the National Socialist German Workers' Party (NSDAP) candidate, Adolf Hitler. However, when the Communist Party of Germany (KPD) ran its own candidate, Ernst Thälmann, Fuchs offered to speak for him, and was expelled from the SPD. That year Fuchs and all three of his siblings joined the KPD. Fuchs and his brother Gerhard were active speakers at public meetings, and occasionally attempted to disrupt NSDAP gatherings. At one such gathering, Fuchs was beaten up and thrown into the river. When Hitler became Chancellor of Germany in January 1933, Fuchs decided to leave Kiel, where the NSDAP was particularly strong and he was a well-known KPD member. He enrolled at the Kaiser Wilhelm Institute for Physics in Berlin. On 28 February, he took an early train to Berlin for a KPD meeting there. On the train, he read about the Reichstag fire in a newspaper. Fuchs correctly assumed that opposition parties would be blamed for the fire, and quietly removed his hammer and sickle lapel pin. The KPD meeting in Berlin was held in secret. Fellow party members urged him to continue his studies in another country. He went into hiding for five months in the apartment of a fellow party member. In August 1933, he attended an anti-fascist conference in Paris chaired by Henri Barbusse, where he met an English couple, Ronald and Jessie Gunn, who invited Fuchs to stay with them in Clapton, Somerset. 
He was expelled from the Kaiser Wilhelm Institute in October 1933. Fuchs arrived in England on 24 September 1933. Jessie Gunn was a member of the Wills family, the heirs to Imperial Tobacco and benefactors of the University of Bristol. She arranged for Fuchs to meet Nevill Mott, Bristol's professor of physics, and he agreed to take Fuchs on as a research assistant. Fuchs earned his Ph.D. in physics there in 1937. A paper on "A Quantum Mechanical Calculation of the Elastic Constants of Monovalent Metals" was published in the "Proceedings of the Royal Society" in 1936. By this time, Mott had a number of German refugees working for him, and lacked positions for them all. He did not think that Fuchs would make much of a teacher, so he arranged a research post for Fuchs, at the University of Edinburgh working under Max Born, who was himself a German refugee. Fuchs published papers with Born on "The Statistical Mechanics of Condensing Systems" and "On Fluctuations in Electromagnetic radiation" in the "Proceedings of the Royal Society". He also received a Doctorate in Science degree from Edinburgh. Fuchs proudly mailed copies back to his father in Germany. In Germany, Emil had been dismissed from his academic post, and, disillusioned with the Lutheran Church's support of the NSDAP, had become a Quaker in 1933. He was arrested for speaking out against the government, but was only held for one month. Elisabeth married a fellow communist, Gustav Kittowski, with whom she had a child they named Klaus. Elisabeth and Kittowski were arrested in 1933, and sentenced to 18 months imprisonment, but were freed at Christmas. Gerhard and his wife Karin were arrested in 1934, and spent the next two years in prison. Gerhard, Karin, Elisabeth and Kittowski established a car rental agency in Berlin, which they used to smuggle Jews and opponents of the government out of Germany. 
After Emil was arrested in 1933, Kristel fled to Zurich, where she studied education and psychology at the University of Zurich. She returned to Berlin in 1934, where she too worked at the car rental agency. In 1936, Emil arranged with Quaker friends in the United States for Kristel to attend Swarthmore College there. She visited Fuchs in England "en route" to America, where she eventually married an American communist, Robert Heineman, and settled in Cambridge, Massachusetts. She became a permanent resident in the United States in May 1938. In 1936, Kittowski and Elisabeth were arrested again, and the rental cars were impounded. Gerhard and Karin fled to Czechoslovakia. Elisabeth was released and went to live with Emil, while Kittowski, sentenced to six years, later escaped from prison and also made his way to Czechoslovakia. In August 1939, Elisabeth committed suicide by throwing herself from a train, leaving Emil to raise young Klaus. Fuchs applied to become a British citizen in August 1939, but his application had not been processed before the Second World War broke out in Europe in September 1939. There was a classification system for enemy aliens, but Born provided Fuchs with a reference that said that he had been a member of the SPD from 1930 to 1932, and an anti-Nazi. There matters stood until June 1940, when the police arrived and took Fuchs into custody. He was first interned on the Isle of Man and then, in July, he was sent to an internment camp in Sherbrooke, Quebec, Canada. During his internment in 1940, he continued to work and published four more papers with Born: "The Mass Centre in Relativity", "Reciprocity, Part II: Scalar Wave Functions", "Reciprocity, Part III: Reciprocal Wave Functions" and "Reciprocity, Part IV: Spinor Wave Functions", and one by himself, "On the Statistical Method in Nuclear Theory". While interned in Quebec, he joined a communist discussion group led by Hans Kahle. 
Kahle was a KPD member who had fought in the Spanish Civil War. After fleeing to Britain with his family, Kahle had helped Jürgen Kuczynski organise the KPD in Britain. Kristel arranged for Israel Halperin, the brother-in-law of a friend of hers, Wendell H. Furry, to bring Fuchs some magazines. Max Born lobbied for his release. On Christmas Day 1940, Fuchs and Kahle were among the first group of internees to board a ship to return to Britain. Fuchs returned to Edinburgh in January, and resumed working for Born. In May 1941, he was approached by Rudolf Peierls of the University of Birmingham to work on the "Tube Alloys" programme – the British atomic bomb research project. Despite wartime restrictions, he was granted British citizenship on 7 August 1942 and signed an Official Secrets Act declaration form. As accommodation was scarce in wartime Birmingham, he stayed with Rudolf and Genia Peierls. Fuchs and Peierls did some important work together, which included a fundamental paper about isotope separation. Soon after, Fuchs contacted Jürgen Kuczynski, who was now teaching at the London School of Economics. Kuczynski put him in contact with Simon Davidovitch Kremer (codename: "Alexander"), the secretary to the military attaché at the Soviet Union's embassy, who worked for the GRU (Russian: "Главное Разведывательное Управление"), the Red Army's foreign military intelligence directorate. After three meetings, Fuchs was teamed up with a courier so he would not have to find excuses to travel to London. She was Ruth Kuczynski (codename: "Sonia"), the sister of Jürgen Kuczynski. She was a German communist, a major in Soviet Military Intelligence and an experienced agent who had worked with Richard Sorge's spy ring in the Far East. In late 1943, Fuchs (codename: "Rest"; he became "Charles" in May 1944) transferred along with Peierls to Columbia University, in New York City, to work on gaseous diffusion as a means of uranium enrichment for the Manhattan Project. 
Although Fuchs was "an asset" of GRU in Britain, his "control" was transferred to the NKGB (Russian: "Народный Kомиссариат Государственной Безопасности"), the Soviet Union's civilian intelligence organisation, when he moved to New York. He spent Christmas 1943 with Kristel and her family in Cambridge. He was contacted by Harry Gold (codename: "Raymond"), an NKGB agent in early 1944. From August 1944, Fuchs worked in the Theoretical Physics Division at the Los Alamos Laboratory, under Hans Bethe. His chief area of expertise was the problem of imploding the fissionable core of the plutonium bomb. At one point, Fuchs did calculation work that Edward Teller had refused to do because of lack of interest. He was the author of techniques (such as the still-used Fuchs-Nordheim method) for calculating the energy of a fissile assembly that goes highly prompt critical, and his report on blast waves is still considered a classic. Fuchs was one of the many Los Alamos scientists present at the Trinity test in July 1945. In April 1946 he attended a conference at Los Alamos that discussed the possibility of a thermonuclear weapon; one month later, he filed a patent with John von Neumann, describing a method to initiate fusion in a thermonuclear weapon with an implosion trigger. Bethe considered Fuchs "one of the most valuable men in my division" and "one of the best theoretical physicists we had." Fuchs, who was known as "Karl" rather than "Klaus" at Los Alamos, dated grade school teachers Evelyn Kline and Jean Parker. He befriended Richard Feynman. Fuchs and Peierls were the only members of the British Mission to Los Alamos who owned cars, and Fuchs lent his Buick to Feynman so Feynman could visit his dying wife in hospital in Albuquerque. Klaus Fuchs's main courier was Harry Gold. 
Allen Weinstein, the author of "The Haunted Wood: Soviet Espionage in America" (1999), has pointed out: "The NKVD had chosen Gold, an experienced group handler, as Fuchs' contact on the grounds that it was safer than having him meet directly with a Russian operative, but Semyon Semyonov was ultimately responsible for the Fuchs relationship." Gold reported after his first meeting with Klaus Fuchs: At the request of Norris Bradbury, who had replaced Robert Oppenheimer as director of the Los Alamos Laboratory in October 1945, Fuchs stayed on at the laboratory into 1946 to help with preparations for the Operation Crossroads weapons tests. The US Atomic Energy Act of 1946 (McMahon Act) prohibited the transfer of information on nuclear research to any foreign country, including Britain, without explicit official authority, and Fuchs supplied highly classified U.S. information to nuclear scientists in Britain as well as to his Soviet contacts. British official files on Fuchs were still being withheld. He was highly regarded as a scientist by the British, who wanted him to return to the United Kingdom to work on Britain's post-war nuclear weapons programme. He returned in August 1946 and became the head of the Theoretical Physics Division at the Atomic Energy Research Establishment at Harwell. From late 1947 to May 1949 he gave Alexander Feklisov, his Soviet case officer, the principal theoretical outline for creating a hydrogen bomb and the initial drafts for its development as the work progressed in England and America. Meeting with Feklisov six times, he provided the results of the test at Eniwetok Atoll of uranium and plutonium bombs and the key data on production of uranium-235. 
Also in 1947, Fuchs attended a conference of the Combined Policy Committee (CPC), a committee created to facilitate exchange of atomic secrets at the highest levels of governments of the United States, United Kingdom and Canada; Donald Maclean, another Soviet spy, was also in attendance as British co-secretary of CPC. By September 1949, information from the Venona project indicated to GCHQ that Fuchs was a spy, but the British intelligence services were wary of indicating the source of their information. The Soviets had broken off contact with him in February. Fuchs may have been subsequently tipped off by Kim Philby. In October 1949, Fuchs approached Henry Arnold, the head of security at Harwell, with the news that his father had been given a chair at the University of Leipzig in East Germany. Under interrogation by MI5 officer William Skardon at an informal meeting in December 1949, Fuchs initially denied being a spy and was not detained. In January 1950, Fuchs arranged another interview with Skardon and voluntarily confessed that he was a spy. Three days later, he also directed a statement more technical in content to Michael Perrin, the deputy controller of atomic energy within the Ministry of Supply. Fuchs told interrogators that the NKGB had acquired an agent in Berkeley, California, who had informed the Soviet Union about electromagnetic separation research of uranium-235 in 1942 or earlier. Fuchs's statements to British and American intelligence agencies were used to implicate Harry Gold, a key witness in the trials of David Greenglass and Julius and Ethel Rosenberg in the United States. Fuchs later testified that he passed detailed information on the project to the Soviet Union through courier Harry Gold in 1945, and further information about Edward Teller's unworkable "Super" design for a hydrogen bomb in 1946 and 1947. Hans Bethe once said that Klaus Fuchs was the only physicist he knew who truly changed history. 
The head of the Soviet project, Lavrenti Beria, distrusted foreign intelligence by default and used it as a third-party check rather than giving it directly to the scientists, so it is unknown whether Fuchs's fission information had a substantial effect. Considering that the pace of the Soviet program was set primarily by the amount of uranium they could procure, it is hard for scholars to accurately judge how much time this saved. According to "On a Field of Red", a history of the Comintern (Communist International) by Anthony Cave Brown and Charles B. MacDonald, Fuchs's greatest contribution to the Soviets may have been disclosing how uranium could be processed for use in a bomb. Fuchs gave Gold technical information in January 1945 that was acquired only after two years of experimentation at a cost of $400 million. Fuchs also disclosed the amount of uranium or plutonium the Americans planned to use in each atomic bomb. Whether the information Fuchs passed relating to the hydrogen bomb would have been useful is still debated. Most scholars agree with Hans Bethe's 1952 assessment, which concluded that by the time Fuchs left the thermonuclear program in mid-1946, too little was known about the mechanism of the hydrogen bomb for his information to be useful to the Soviet Union. The successful Teller-Ulam design was not devised until 1951. Soviet physicists later noted that they could see as well as the Americans eventually did that the early designs by Fuchs and Edward Teller were useless. Later archival work by Soviet physicist German Goncharov suggested that while Fuchs's early work did not help Soviet efforts towards the hydrogen bomb, it was closer to the final correct solution than anyone recognized at the time—and indeed spurred Soviet research into useful problems that eventually provided the correct answer. 
In any case, it seems clear that Fuchs could not have just given the Soviets the "secret" to the hydrogen bomb, since he did not actually know it himself. It is likely that Fuchs's espionage led the U.S. to cancel a 1950 Anglo-American plan to give Britain American-made atomic bombs. He was prosecuted by Sir Hartley Shawcross, and was convicted on 1 March 1950 of four counts of breaking the Official Secrets Act by "...communicating information to a potential enemy." After a trial lasting less than 90 minutes based on his confession, Lord Goddard sentenced him to fourteen years' imprisonment, the maximum for espionage, because the Soviet Union was classed as an ally at the time. In December 1950, he was stripped of his British citizenship. The head of the British H-bomb project, Sir William Penney, visited Fuchs in prison in 1952. While imprisoned, he was friendly with Irish Republican Army prisoner Seamus Murphy, with whom he played chess and whom he helped to escape. Some suggested that Fuchs had turned IRA leader Cathal Goulding into a Marxist, but Murphy denied this, saying "Fuchs never tried to turn anyone – it was hard to get a word out of him!". Fuchs was released on 23 June 1959, after serving nine years and four months of his sentence (as then required in England, where long-term prisoners were entitled by law to one-third off for good behaviour in prison) at Wakefield Prison, and promptly emigrated to the German Democratic Republic (GDR). On arrival at Berlin Schönefeld Airport in the GDR, Fuchs was met by Grete (Margarete) Keilson, a friend from his years as a student communist. They were married on 9 September 1959. In the GDR, Fuchs continued his scientific career and achieved considerable prominence as a leader of research. 
He became a member of the SED central committee in 1967, and in 1972 was elected to the Academy of Sciences where from 1974 to 1978 he was the head of the research area of physics, nuclear and materials science; he was then appointed deputy director of the Institute for Nuclear Research in Rossendorf, where he served until he retired in 1979. From 1984, Fuchs was head of the scientific councils for energetic basic research and for fundamentals of microelectronics. He received the Patriotic Order of Merit, the Order of Karl Marx and the National Prize of East Germany. A tutorial Fuchs gave to Qian Sanqiang and other Chinese physicists helped them to develop the first Chinese atomic bomb, the "596", which was tested five years later—according to Thomas Reed and Daniel Stillman, the authors of "The Nuclear Express: A Political History of the Bomb and Its Proliferation" (2009). Three historians of nuclear weapons history, Robert S. Norris, Jeremy Bernstein, and Peter D. Zimmerman, challenged this particular assertion as "unsubstantiated conjecture" and asserted that "The Nuclear Express" is "an ambitious but deeply flawed book". Fuchs died in Berlin on 28 January 1988. He was cremated and his ashes buried in the "Pergolenweg" of the Socialists' Memorial in Berlin's Friedrichsfelde Cemetery. A documentary film about Fuchs, "Väter der tausend Sonnen" (Father of a Thousand Suns) was released in 1990.
https://en.wikipedia.org/wiki?curid=17317
Konstantin Stanislavski Konstantin Sergeievich Stanislavski (né Alexeiev; 17 January 1863 – 7 August 1938) was a seminal Russian theatre practitioner. He was widely recognised as an outstanding character actor, and the many productions that he directed garnered him a reputation as one of the leading theatre directors of his generation. His principal fame and influence, however, rests on his 'system' of actor training, preparation, and rehearsal technique. Stanislavski (his stage name) performed and directed as an amateur until the age of 33, when he co-founded the world-famous Moscow Art Theatre (MAT) company with Vladimir Nemirovich-Danchenko, following a legendary 18-hour discussion. Its influential tours of Europe (1906) and the US (1923–24) and its landmark productions of "The Seagull" (1898) and "Hamlet" (1911–12) established his reputation and opened new possibilities for the art of the theatre. By means of the MAT, Stanislavski was instrumental in promoting the new Russian drama of his day—principally the work of Anton Chekhov, Maxim Gorky, and Mikhail Bulgakov—to audiences in Moscow and around the world; he also staged acclaimed productions of a wide range of classical Russian and European plays. He collaborated with the director and designer Edward Gordon Craig and was formative in the development of several other major practitioners, including Vsevolod Meyerhold (whom Stanislavski considered his "sole heir in the theatre"), Yevgeny Vakhtangov, and Michael Chekhov. At the MAT's 30-year anniversary celebrations in 1928, a massive heart attack on-stage put an end to his acting career (though he waited until the curtain fell before seeking medical assistance). He continued to direct, teach, and write about acting until his death a few weeks before the publication of the first volume of his life's great work, the acting manual "An Actor's Work" (1938). 
He was awarded the Order of the Red Banner and the Order of Lenin and was one of the first to be granted the title of People's Artist of the USSR. Stanislavski wrote that "there is nothing more tedious than an actor's biography" and that "actors should be banned from talking about themselves". At the request of a US publisher, however, he reluctantly agreed to write his autobiography, "My Life in Art" (first published in English in 1924 and in a revised, Russian-language edition in 1926), though its account of his artistic development is not always accurate. Two English-language biographies have been published: David Magarshack's "Stanislavsky: A Life" (1950) and Jean Benedetti's "Stanislavski: His Life and Art" (1988, revised and expanded 1999). Stanislavski subjected his acting and direction to a rigorous process of artistic self-analysis and reflection. His 'system' of acting developed out of his persistent efforts to remove the blocks that he encountered in his performances, beginning with a major crisis in 1906. He produced his early work using an external, director-centred technique that strove for an organic unity of all its elements—in each production he planned the interpretation of every role, blocking, and the "mise en scène" in detail in advance. He also introduced into the production process a period of discussion and detailed analysis of the play by the cast. Despite the success that this approach brought, particularly with his Naturalistic stagings of the plays of Anton Chekhov and Maxim Gorky, Stanislavski remained dissatisfied. Both his struggles with Chekhov's drama (out of which his notion of subtext emerged) and his experiments with Symbolism encouraged a greater attention to "inner action" and a more intensive investigation of the actor's process. He began to develop the more actor-centred techniques of "psychological realism" and his focus shifted from his productions to rehearsal process and pedagogy. 
He pioneered the use of theatre studios as laboratories in which to innovate actor training and to experiment with new forms of theatre. Stanislavski organised his techniques into a coherent, systematic methodology, which built on three major strands of influence: (1) the director-centred, unified aesthetic and disciplined, ensemble approach of the Meiningen company; (2) the actor-centred realism of the Maly Theatre; and (3) the Naturalistic staging of Antoine and the independent theatre movement. The 'system' cultivates what Stanislavski calls the "art of experiencing" (to which he contrasts the "art of representation"). It mobilises the actor's conscious thought and will in order to activate other, less-controllable psychological processes—such as emotional experience and subconscious behaviour—sympathetically and indirectly. In rehearsal, the actor searches for inner motives to justify action and the definition of what the character seeks to achieve at any given moment (a "task"). Stanislavski's earliest reference to his 'system' appears in 1909, the same year that he first incorporated it into his rehearsal process. The MAT adopted it as its official rehearsal method in 1911. Later, Stanislavski further elaborated the 'system' with a more physically grounded rehearsal process that came to be known as the "Method of Physical Action". Minimising at-the-table discussions, he now encouraged an "active analysis", in which the sequence of dramatic situations is improvised. "The best analysis of a play", Stanislavski argued, "is to take action in the given circumstances." Just as the First Studio, led by his assistant and close friend Leopold Sulerzhitsky, had provided the forum in which he developed his initial ideas for the 'system' during the 1910s, he hoped to secure his final legacy by opening another studio in 1935, in which the Method of Physical Action would be taught. 
The Opera-Dramatic Studio embodied the most complete implementation of the training exercises described in his manuals. Meanwhile, the transmission of his earlier work via the students of the First Studio was revolutionising acting in the West. With the arrival of Socialist realism in the USSR, the MAT and Stanislavski's 'system' were enthroned as exemplary models. Stanislavski had a privileged youth, growing up in one of the richest families in Russia, the Alekseievs. He was born Konstantin Sergeievich Alexeiev—he adopted the stage name "Stanislavski" in 1884 to keep his performance activities secret from his parents. Until the communist revolution in 1917, Stanislavski often used his inherited wealth to fund his experiments in acting and directing. His family's discouragement meant that he appeared only as an amateur until he was thirty-three. As a child, Stanislavski was interested in the circus, the ballet, and puppetry. Later, his family's two private theatres provided a forum for his theatrical impulses. After his debut performance at one in 1877, he started what would become a lifelong series of notebooks filled with critical observations on his acting, aphorisms, and problems—it was from this habit of self-analysis and critique that Stanislavski's 'system' later emerged. Stanislavski chose not to attend university, preferring to work in the family business. Increasingly interested in "experiencing the role", Stanislavski experimented with maintaining a characterisation in real life. In 1884, he began vocal training under Fyodor Komissarzhevsky, with whom he also explored the coordination of body and voice. A year later, Stanislavski briefly studied at the Moscow Theatre School but, disappointed with its approach, he left after little more than two weeks. 
Instead, he devoted particular attention to the performances of the Maly Theatre, the home of Russian psychological realism (as developed in the 19th century by Alexander Pushkin, Nikolai Gogol and Mikhail Shchepkin). Shchepkin's legacy included a disciplined, ensemble approach, extensive rehearsals, and the use of careful observation, self-knowledge, imagination, and emotion as the cornerstones of the craft. Stanislavski called the Maly his 'university'. One of Shchepkin's students, Glikeriya Fedotova, taught Stanislavski; she instilled in him the rejection of inspiration as the basis of the actor's art, stressed the importance of training and discipline, and encouraged the practice of responsive interaction with other actors that Stanislavski came to call "communication". As well as the artists of the Maly, performances given by foreign stars influenced Stanislavski. The effortless, emotive, and clear playing of the Italian Ernesto Rossi, who performed major Shakespearean tragic protagonists in Moscow in 1877, particularly impressed him. So too did Tommaso Salvini's 1882 performance of Othello. By now well known as an amateur actor, at the age of twenty-five Stanislavski co-founded a Society of Art and Literature. Under its auspices, he performed in plays by Molière, Schiller, Pushkin, and Ostrovsky, as well as gaining his first experiences as a director. He became interested in the aesthetic theories of Vissarion Belinsky, from whom he took his conception of the role of the artist. Stanislavski married Maria Lilina (the stage name of Maria Petrovna Perevostchikova). Their first child, Xenia, died of pneumonia in May 1890, less than two months after she was born; they had two more children, a daughter, Kira, and a son, Igor. In January 1893, Stanislavski's father died. 
In February 1891, Stanislavski directed Leo Tolstoy's "The Fruits of Enlightenment" for the Society of Art and Literature, in what he later described as his first fully independent directorial work. But it was not until 1893 that he first met the great realist novelist and playwright, who became another important influence on him. Five years later, the MAT would be his response to Tolstoy's demand for simplicity, directness, and accessibility in art. Stanislavski's directorial methods at this time were closely modelled on the disciplined, autocratic approach of Ludwig Chronegk, the director of the Meiningen Ensemble. In "My Life in Art" (1924), Stanislavski described this approach as one in which the director is "forced to work without the help of the actor". From 1894 onwards, Stanislavski began to assemble detailed prompt-books that included a directorial commentary on the entire play and from which not even the smallest detail was allowed to deviate. Whereas the Ensemble's effects tended toward the grandiose, Stanislavski introduced lyrical elaborations through the "mise en scène" that dramatised more mundane and ordinary elements of life, in keeping with Belinsky's ideas about the "poetry of the real". By means of his rigid and detailed control of all theatrical elements, including the strict choreography of the actors' every gesture, in Stanislavski's words "the inner kernel of the play was revealed by itself". Analysing the Society's production of "Othello" (1896), Jean Benedetti observes that: Stanislavski uses the theatre and its technical possibilities as an instrument of expression, a language, in its own right. The dramatic meaning is in the staging itself. [...] 
He went through the whole play in a completely different way, not relying on the text as such, with quotes from important speeches, not providing a 'literary' explanation, but speaking in terms of the play's dynamic, its action, the thoughts and feelings of the protagonists, the world in which they lived. His account flowed uninterruptedly from moment to moment. Benedetti argues that Stanislavski's task at this stage was to unite the realistic tradition of the creative actor inherited from Shchepkin and Gogol with the director-centred, organically unified Naturalistic aesthetic of the Meiningen approach. That synthesis would emerge eventually, but only in the wake of Stanislavski's directorial struggles with Symbolist theatre and an artistic crisis in his work as an actor. "The task of our generation", Stanislavski wrote as he was about to found the Moscow Art Theatre and begin his professional life in the theatre, is "to liberate art from outmoded tradition, from tired cliché and to give greater freedom to imagination and creative ability." Stanislavski's historic meeting with Vladimir Nemirovich-Danchenko led to the creation of what was called initially the "Moscow Public-Accessible Theatre", but which came to be known as the Moscow Art Theatre (MAT). Their eighteen-hour-long discussion has acquired a legendary status in the history of theatre. Nemirovich was a successful playwright, critic, theatre director, and acting teacher at the Philharmonic school who, like Stanislavski, was committed to the idea of a popular theatre. Their abilities complemented one another: Stanislavski brought his directorial talent for creating vivid stage images and selecting significant details; Nemirovich, his talent for dramatic and literary analysis, his professional expertise, and his ability to manage a theatre. 
Stanislavski later compared their discussions to the Treaty of Versailles, so wide-ranging was their scope; they agreed on the conventional practices they wished to abandon and, on the basis of the working method they found they had in common, defined the policy of their new theatre. Stanislavski and Nemirovich planned a professional company with an ensemble ethos that discouraged individual vanity; they would create a realistic theatre of international renown, with popular prices for seats, whose organically unified aesthetic would bring together the techniques of the Meiningen Ensemble and those of André Antoine's Théâtre Libre (which Stanislavski had seen during trips to Paris). Nemirovich assumed that Stanislavski would fund the theatre as a privately owned business, but Stanislavski insisted on a limited, joint stock company. Viktor Simov, whom Stanislavski had met in 1896, was engaged as the company's principal designer. In his opening speech on the first day of rehearsals, Stanislavski stressed the "social character" of their collective undertaking. In an atmosphere more like a university than a theatre, as Stanislavski described it, the company was introduced to his working method of extensive reading and research and detailed rehearsals in which the action was defined at the table before being explored physically. Stanislavski's lifelong relationship with Vsevolod Meyerhold began during these rehearsals; by the end of June, Meyerhold was so impressed with Stanislavski's directorial skills that he declared him a genius. The lasting significance of Stanislavski's early work at the MAT lies in its development of a Naturalistic performance mode. In 1898, Stanislavski co-directed with Nemirovich the first of his productions of the work of Anton Chekhov. 
The MAT production of "The Seagull" was a crucial milestone for the fledgling company that has been described as "one of the greatest events in the history of Russian theatre and one of the greatest new developments in the history of world drama." Despite its 80 hours of rehearsal—a considerable length by the standards of the conventional practice of the day—Stanislavski felt it was under-rehearsed. The production's success was due to the fidelity of its delicate representation of everyday life, its intimate, ensemble playing, and the resonance of its mood of despondent uncertainty with the psychological disposition of the Russian intelligentsia of the time. Stanislavski went on to direct the successful premières of Chekhov's other major plays: "Uncle Vanya" in 1899 (in which he played Astrov), "Three Sisters" in 1901 (playing Vershinin), and "The Cherry Orchard" in 1904 (playing Gaev). Stanislavski's encounter with Chekhov's drama proved crucial to the creative development of both men. His ensemble approach and attention to the psychological realities of its characters revived Chekhov's interest in writing for the stage, while Chekhov's unwillingness to explain or expand on the text forced Stanislavski to dig beneath its surface in ways that were new in theatre. In response to Stanislavski's encouragement, Maxim Gorky promised to launch his playwriting career with the MAT. In 1902, Stanislavski directed the première productions of the first two of Gorky's plays, "The Philistines" and "The Lower Depths". As part of the rehearsal preparations for the latter, Stanislavski took the company to visit Khitrov Market, where they talked to its down-and-outs and soaked up its atmosphere of destitution. Stanislavski based his characterisation of Satin on an ex-officer he met there, who had fallen into poverty through gambling. 
"The Lower Depths" was a triumph that matched the production of "The Seagull" four years earlier, though Stanislavski regarded his own performance as external and mechanical. The productions of "The Cherry Orchard" and "The Lower Depths" remained in the MAT's repertoire for decades. Along with Chekhov and Gorky, the drama of Henrik Ibsen formed an important part of Stanislavski's work at this time—in its first two decades, the MAT staged more plays by Ibsen than any other playwright. In its first decade, Stanislavski directed "Hedda Gabler" (in which he played Løvborg), "An Enemy of the People" (playing Dr Stockmann, his favourite role), "The Wild Duck", and "Ghosts". "More's the pity I was not a Scandinavian and never saw how Ibsen was played in Scandinavia," Stanislavski wrote, because "those who have been there tell me that he is interpreted as simply, as true to life, as we play Chekhov". He also staged other important Naturalistic works, including Gerhart Hauptmann's "Drayman Henschel", "Lonely People", and "Michael Kramer" and Leo Tolstoy's "The Power of Darkness". In 1904, Stanislavski finally acted on a suggestion made by Chekhov two years earlier that he stage several one-act plays by Maurice Maeterlinck, the Belgian Symbolist. Despite his enthusiasm, however, Stanislavski struggled to realise a theatrical approach to the static, lyrical dramas. When the triple bill consisting of "The Blind", "Intruder", and "Interior" opened, the experiment was deemed a failure. Meyerhold, prompted by Stanislavski's positive response to his new ideas about Symbolist theatre, proposed that they form a "theatre studio" (a term which he invented) that would function as "a laboratory for the experiments of more or less experienced actors." 
The Theatre-Studio aimed to develop Meyerhold's aesthetic ideas into new theatrical forms that would return the MAT to the forefront of the avant-garde and Stanislavski's socially conscious ideas for a network of "people's theatres" that would reform Russian theatrical culture as a whole. Central to Meyerhold's approach was the use of improvisation to develop the performances. When the studio presented a work-in-progress, Stanislavski was encouraged; when performed in a fully equipped theatre in Moscow, however, it was regarded as a failure and the studio folded. Meyerhold drew an important lesson: "one must first educate a new actor and only then put new tasks before him", he wrote, adding that "Stanislavski, too, came to such a conclusion." Reflecting in 1908 on the Theatre-Studio's demise, Stanislavski wrote that "our theatre found its future among its ruins." Nemirovich disapproved of what he described as the malign influence of Meyerhold on Stanislavski's work at this time. Stanislavski engaged two important new collaborators in 1905: Liubov Gurevich became his literary advisor and Leopold Sulerzhitsky became his personal assistant. Stanislavski revised his interpretation of the role of Trigorin (and Meyerhold reprised his role as Konstantin) when the MAT revived its production of Chekhov's "The Seagull". This was the year of the abortive revolution in Russia. Stanislavski signed a protest against the violence of the secret police, Cossack troops, and the right-wing extremist paramilitary "Black Hundreds", which was submitted to the Duma. Rehearsals for the MAT's production of Aleksandr Griboyedov's classic verse comedy "Woe from Wit" were interrupted by gun-battles on the streets outside. Stanislavski and Nemirovich closed the theatre and embarked on the company's first tour outside of Russia. 
The MAT's first European tour began in Berlin, where they played to an audience that included Max Reinhardt, Gerhart Hauptmann, Arthur Schnitzler, and Eleonora Duse. "It's as though we were the revelation", Stanislavski wrote of the rapturous acclaim they received. The success of the tour provided financial security for the company, garnered an international reputation for their work, and made a significant impact on European theatre. The tour also provoked a major artistic crisis for Stanislavski that had a significant impact on "his" future direction. From his attempts to resolve this crisis, his 'system' would eventually emerge. Sometime in March 1906—Jean Benedetti suggests that it was during "An Enemy of the People"—Stanislavski became aware that he was acting without a flow of inner impulses and feelings and that as a consequence his performance had become mechanical. He spent June and July in Finland on holiday, where he studied, wrote, and reflected. With his notebooks on his own experience from 1889 onwards, he attempted to analyse "the foundation stones of our art" and the actor's creative process in particular. He began to formulate a psychological approach to controlling the actor's process in a "Manual on Dramatic Art". Stanislavski's activities began to move in a very different direction: his productions became opportunities for research, he was more interested in the process of rehearsal than its product, and his attention shifted away from the MAT towards its satellite projects—the theatre studios—in which he would develop his 'system'. On his return to Moscow, he explored his new psychological approach in his production of Knut Hamsun's Symbolist play "The Drama of Life". Nemirovich was particularly hostile to his new methods and their relationship continued to deteriorate in this period. 
In a statement, Stanislavski marked a significant shift in his directorial method and stressed the crucial contribution he now expected from a creative actor: The committee is wrong if it thinks that the director's preparatory work in the study is necessary, as previously, when he alone decided the whole plan and all the details of the production, wrote the "mise en scène" and answered all the actors' questions for them. The director is no longer king, as before, when the actor possessed no clear individuality. [...] It is essential to understand this—rehearsals are divided into two stages: the first stage is one of experiment when the cast helps the director, the second is creating the performance when the director helps the cast. Stanislavski's preparations for Maeterlinck's "The Blue Bird" (which was to become his most famous production to date) included improvisations and other exercises to stimulate the actors' imaginations; Nemirovich described one in which the cast imitated various animals. In rehearsals he sought ways to encourage his actors' will to create afresh in every performance. He focused on the search for inner motives to justify action and the definition of what the characters are seeking to achieve at any given moment (what he would come to call their "task"). This use of the actor's conscious thought and will was designed to activate other, less-controllable psychological processes—such as emotional experience and subconscious behaviour—sympathetically and indirectly. Noting the importance to great actors' performances of their ability to remain relaxed, he discovered that he could abolish physical tension by focusing his attention on the specific action that the play demanded; when his concentration wavered, his tension returned. "What fascinates me most", Stanislavski wrote in May 1908, "is the rhythm of feelings, the development of affective memory and the psycho-physiology of the creative process." 
His interest in the creative use of the actor's personal experiences was spurred by a chance conversation in Germany in July that led him to the work of French psychologist Théodule-Armand Ribot. Ribot's "affective memory" contributed to the technique that Stanislavski would come to call "emotion memory". Together these elements formed a new vocabulary with which he explored a "return to realism" in a production of Gogol's "The Government Inspector" as soon as "The Blue Bird" had opened. At a theatre conference, Stanislavski delivered a paper on his emerging 'system' that stressed the role of his techniques of the "magic if" (which encourages the actor to respond to the fictional circumstances of the play "as if" they were real) and emotion memory. He developed his ideas about three trends in the history of acting, which were to appear eventually in the opening chapters of "An Actor's Work": "stock-in-trade" acting, the art of representation, and the art of experiencing (his own approach). Stanislavski's production of "A Month in the Country" (1909) was a watershed in his artistic development. Breaking the MAT's tradition of open rehearsals, he prepared Turgenev's play in private. They began with a discussion of what he would come to call the "through-line" for the characters (their emotional development and the way they change over the course of the play). This production is the earliest recorded instance of his practice of analysing the action of the script into discrete "bits". At this stage in the development of his approach, Stanislavski's technique was to identify the emotional state contained in the psychological experience of the character during each bit and, through the use of the actor's emotion memory, to forge a subjective connection to it. Only after two months of rehearsals were the actors permitted to physicalise the text. Stanislavski insisted that they should play the actions that their discussions around the table had identified. 
He assumed, at this point in his experiments, that once a particular emotional state had been realised in a physical action, the actor's repetition of that action would evoke the desired emotion. As with his experiments in "The Drama of Life", they also explored non-verbal communication, whereby scenes were rehearsed as "silent "études"" with actors interacting "only with their eyes". The production's success when it opened in December 1909 seemed to prove the validity of his new methodology. Late in 1910, Gorky invited Stanislavski to join him in Capri, where they discussed actor training and Stanislavski's emerging "grammar". Inspired by a popular theatre performance in Naples that employed the techniques of the "commedia dell'arte", Gorky suggested that they form a company, modelled on the medieval strolling players, in which a playwright and group of young actors would devise new plays together by means of improvisation. Stanislavski would develop this use of improvisation in his work with his First Studio. In his treatment of the classics, Stanislavski believed that it was legitimate for actors and directors to ignore the playwright's intentions for a play's staging. One of his most important productions—a collaboration with Edward Gordon Craig on a production of "Hamlet"—became a landmark of 20th-century theatrical modernism. Stanislavski hoped to prove that his recently developed 'system' for creating internally justified, realistic acting could meet the formal demands of a classic play. Craig envisioned a Symbolist monodrama in which every aspect of production would be subjugated to the protagonist: it would present a dream-like vision as seen through Hamlet's eyes. Despite these contrasting approaches, the two practitioners did share some artistic assumptions; the 'system' had developed out of Stanislavski's experiments with Symbolist drama, which had shifted his attention from a Naturalistic external surface to the characters' subtextual, inner world. 
Both had stressed the importance of achieving a unity of all theatrical elements in their work. Their production attracted enthusiastic and unprecedented worldwide attention for the theatre, placing it "on the cultural map for Western Europe", and it has come to be regarded as a seminal event that revolutionised the staging of Shakespeare's plays. It became "one of the most famous and passionately discussed productions in the history of the modern stage." Increasingly absorbed by his teaching, in 1913 Stanislavski held open rehearsals for his production of Molière's "The Imaginary Invalid" as a demonstration of the 'system'. As with his production of "Hamlet" and his next, Goldoni's "The Mistress of the Inn", he was keen to assay his 'system' in the crucible of a classical text. He began to inflect his technique of dividing the action of the play into bits with an emphasis on improvisation; he would progress from analysis, through free improvisation, to the language of the text: I divide the work into "large bits" clarifying the nature of each bit. Then, immediately, in my own words, I play each bit, observing all the curves. Then I go through the experiences of each bit ten times or so with its curves (not in a fixed way, not being consistent). Then I follow the successive bits in the book. And finally, I make the transition, imperceptibly, to the experiences as expressed in the actual words of the part. Stanislavski's struggles with both the Molière and Goldoni comedies revealed the importance of an appropriate definition of what he calls a character's "super-task" (the core problem that unites and subordinates the character's moment-to-moment tasks). This impacted particularly on the actors' ability to serve the plays' genre, because an unsatisfactory definition produced tragic rather than comic performances. 
Other European classics directed by Stanislavski include: Shakespeare's "The Merchant of Venice", "Twelfth Night", and "Othello", an unfinished production of Molière's "Tartuffe", and Beaumarchais's "The Marriage of Figaro". Other classics of the Russian theatre directed by Stanislavski include: several plays by Ivan Turgenev, Griboyedov's "Woe from Wit", Gogol's "The Government Inspector", and plays by Tolstoy, Ostrovsky, and Pushkin. Following the success of his production of "A Month in the Country", Stanislavski made repeated requests to the board of the MAT for proper facilities to pursue his pedagogical work with young actors. Gorky encouraged him not to found a drama school to teach inexperienced beginners, but rather—following the example of the Theatre-Studio of 1905—to create a studio for research and experiment that would train young professionals. Stanislavski created the First Studio. Its founding members included Yevgeny Vakhtangov, Michael Chekhov, Richard Boleslavsky, and Maria Ouspenskaya, all of whom would exert a considerable influence on the subsequent history of theatre. Stanislavski selected Suler (as Gorky had nicknamed Sulerzhitsky) to lead the studio. In a focused, intense atmosphere, their work emphasised experimentation, improvisation, and self-discovery. Following Gorky's suggestions about devising new plays through improvisation, they searched for "the creative process common to authors, actors and directors". Stanislavski created the Second Studio of the MAT in 1916, in response to a production of Zinaida Gippius' "The Green Ring" that a group of young actors had prepared independently. With a greater focus on pedagogical work than the First Studio, the Second Studio provided the environment in which Stanislavski developed the training techniques that would form the basis for his manual "An Actor's Work" (1938). 
A significant influence on the development of the 'system' came from Stanislavski's experience teaching and directing at his Opera Studio, which was founded in 1918. He hoped that the successful application of his 'system' to opera, with its inescapable conventionality and artifice, would demonstrate the universality of his approach to performance and unite the work of Mikhail Shchepkin and Feodor Chaliapin. From this experience Stanislavski's notion of "tempo-rhythm" emerged. He invited Serge Wolkonsky to teach diction and Lev Pospekhin to teach expressive movement and dance and attended both of their classes as a student. Stanislavski spent the summer of 1914 in Marienbad where, as he had in 1906, he researched the history of theatre and theories of acting in order to clarify the discoveries that his practical experiments had produced. When the First World War broke out, Stanislavski was in Munich. "It seemed to me", he wrote of the atmosphere at the train station in an article detailing his experiences, "that death was hovering everywhere." The train was stopped at Immenstadt, where German soldiers denounced him as a Russian spy. Held in a room at the station with a large crowd with "the faces of wild beasts" baying at its windows, Stanislavski believed he was to be executed. He remembered that he was carrying an official document that mentioned having played to Kaiser Wilhelm during their tour of 1906 that, when he showed it to the officers, produced a change of attitude towards his group. They were placed on a slow train to Kempten. 
Gurevich later related how during the journey Stanislavski surprised her when he whispered that: [E]vents of recent days had given him a clear impression of the superficiality of all that was called human culture, bourgeois culture, that a completely different kind of life was needed, where all needs were reduced to the minimum, where there was work—real artistic work—on behalf of the people, for those who had not yet been consumed by this bourgeois culture. In Kempten they were again ordered into one of the station's rooms, where Stanislavski overheard the German soldiers complain of a lack of ammunition; it was only this, he understood, that prevented their execution. The following morning they were placed on a train and eventually returned to Russia via Switzerland and France. Turning to the classics of Russian theatre, the MAT revived Griboyedov's comedy "Woe from Wit" and planned to stage three of Pushkin's "little tragedies" in early 1915. Stanislavski continued to develop his 'system', explaining at an open rehearsal for "Woe from Wit" his concept of the state of "I am being". This term marks the stage in the rehearsal process when the distinction between actor and character blurs (producing the "actor/role"), subconscious behaviour takes the lead, and the actor feels fully present in the dramatic moment. He stressed that achieving this state required a focus on action ("What would I do if ...") rather than emotion ("How would I feel if ..."): "You must ask the kinds of questions that lead to dynamic action." Instead of forcing emotion, he explained, actors should notice what is happening, attend to their relationships with the other actors, and try to understand "through the senses" the fictional world that surrounds them. 
When he prepared for his role in Pushkin's "Mozart and Salieri", Stanislavski created a biography for Salieri in which he imagined the character's memories of each incident mentioned in the play, his relationships with the other people involved, and the circumstances that had impacted on Salieri's life. When he attempted to render all of this detail in performance, however, the subtext overwhelmed the text; overladen with heavy pauses, Pushkin's verse was fragmented to the point of incomprehensibility. His struggles with this role prompted him to attend more closely to the structure and dynamics of language in drama; to that end, he studied Serge Wolkonsky's "The Expressive Word" (1913). The French theatre practitioner Jacques Copeau contacted Stanislavski in October 1916. As a result of his conversations with Edward Gordon Craig, Copeau had come to believe that his work at the Théâtre du Vieux-Colombier shared a common approach with Stanislavski's investigations at the MAT. On , Stanislavski's assistant and closest friend, Leopold Sulerzhitsky, died from chronic nephritis. Reflecting on their relationship in 1931, Stanislavski said that Suler had understood him completely and that no one, since, had replaced him. Stanislavski welcomed the February Revolution of 1917 and its overthrow of the absolute monarchy as a "miraculous liberation of Russia". With the October Revolution later in the year, the MAT closed for a few weeks and the First Studio was occupied by revolutionaries. Stanislavski thought that the social upheavals presented an opportunity to realise his long-standing ambitions to establish a Russian popular theatre that would provide, as the title of an essay he prepared that year put it, "The Aesthetic Education of the Popular Masses". 
Vladimir Lenin, who became a frequent visitor to the MAT after the revolution, praised Stanislavski as "a real artist" and indicated that, in his opinion, Stanislavski's approach was "the direction the theatre should take." The revolutions of that year brought about an abrupt change in Stanislavski's finances when his factories were nationalised, which left his wage from the MAT as his only source of income. On 29 August 1918 Stanislavski, along with several others from the MAT, was arrested by the Cheka, though he was released the following day. During the years of the Civil War, Stanislavski concentrated on teaching his 'system', directing (both at the MAT and its studios), and bringing performances of the classics to new audiences (such as factory workers and the Red Army). Several articles on Stanislavski and his 'system' were published, but none were written by him. On 5 March 1921, Stanislavski was evicted from his large house on Carriage Row, where he had lived since 1903. Following the personal intervention of Lenin (prompted by Anatoly Lunacharsky), Stanislavski was re-housed at 6 Leontievski Lane, not far from the MAT. He was to live there until his death in 1938. On 29 May 1922, Stanislavski's favourite pupil, the director Yevgeny Vakhtangov, died of cancer. In the wake of the temporary withdrawal of the state subsidy to the MAT that came with the New Economic Policy in 1921, Stanislavski and Nemirovich planned a tour to Europe and the US to augment the company's finances. The tour began in Berlin, where Stanislavski arrived on 18 September 1922, and proceeded to Prague, Zagreb, and Paris, where he was welcomed at the station by Jacques Hébertot, Aurélien Lugné-Poë, and Jacques Copeau. In Paris, he also met André Antoine, Louis Jouvet, Isadora Duncan, Firmin Gémier, and Harley Granville-Barker. 
He discussed with Copeau the possibility of establishing an international theatre studio and attended performances by Ermete Zacconi, whose control of his performance, economic expressivity, and ability both to "experience" and "represent" the role impressed him. The company sailed to New York and arrived on 4 January 1923. When reporters asked about their repertoire, Stanislavski explained that "America wants to see what Europe already knows." David Belasco, Sergei Rachmaninoff, and Feodor Chaliapin attended the opening night performance. Thanks in part to a vigorous publicity campaign that the American producer, Morris Gest, orchestrated, the tour garnered substantial critical praise, although it was not a financial success. As actors (among whom was the young Lee Strasberg) flocked to the performances to learn from the company, the tour made a substantial contribution to the development of American acting. Richard Boleslavsky presented a series of lectures on Stanislavski's 'system' (which were eventually published as "Acting: The First Six Lessons" in 1933). A performance of "Three Sisters" on 31 March 1923 concluded the season in New York, after which they travelled to Chicago, Philadelphia, and Boston. At the request of a US publisher, Stanislavski reluctantly agreed to write his autobiography, "My Life in Art", since his proposals for an account of the 'system' or a history of the MAT and its approach had been rejected. He returned to Europe during the summer where he worked on the book and, in September, began rehearsals for a second tour. The company returned to New York on 7 November and went on to perform in Philadelphia, Boston, New Haven, Hartford, Washington, D.C., Brooklyn, Newark, Pittsburgh, Chicago, and Detroit. On 20 March 1924, Stanislavski met President Calvin Coolidge at the White House. They were introduced by a translator, Elizabeth Hapgood, with whom he would later collaborate on "An Actor Prepares". The company left the US on 17 May 1924. 
On his return to Moscow in August 1924, Stanislavski began with the help of Gurevich to make substantial revisions to his autobiography, in preparation for a definitive Russian-language edition, which was published in September 1926. He continued to act, reprising the role of Astrov in a new production of "Uncle Vanya" (his performance of which was described as "staggering"). With Nemirovich away touring with his Music Studio, Stanislavski led the MAT for two years, during which time the company thrived. With a company fully versed in his 'system', Stanislavski's work on Mikhail Bulgakov's "The Days of the Turbins" focused on the tempo-rhythm of the production's dramatic structure and the through-lines of action for the individual characters and the play as a whole. "See everything in terms of action" he advised them. Aware of the disapproval of Bulgakov felt by the Repertory Committee ("Glavrepertkom") of the People's Commissariat for Education, Stanislavski threatened to close the theatre if the play was banned. Despite substantial hostility from the press, the production was a box-office success. In an attempt to render a classic play relevant to a contemporary Soviet audience, Stanislavski re-located the action in his fast and free-flowing production of Pierre Beaumarchais' 18th-century comedy "The Marriage of Figaro" to pre-Revolutionary France and emphasised the democratic point of view of Figaro and Susanna, in preference to that of the aristocratic Count Almaviva. His working methods contributed innovations to the 'system': the analysis of scenes in terms of concrete physical tasks and the use of the "line of the day" for each character. 
In preference to the tightly controlled, Meiningen-inspired scoring of the "mise en scène" with which he had choreographed crowd scenes in his early years, he now worked in terms of broad physical tasks: actors responded truthfully to the circumstances of scenes with sequences of improvised adaptations that attempted to solve concrete, physical problems. For the "line of the day," an actor elaborates in detail the events that supposedly occur to the character 'off-stage', in order to form a continuum of experience (the "line" of the character's life that day) that helps to justify his or her behaviour 'on-stage'. This means that the actor develops a relationship to where (as a character) he has just come from and to where he intends to go when leaving the scene. The production was a great success, garnering ten curtain calls on opening night. Thanks to its cohesive unity and rhythmic qualities, it is recognised as one of Stanislavski's major achievements. With a performance of extracts from its major productions—including the first act of "Three Sisters" in which Stanislavski played Vershinin—the MAT celebrated its 30-year jubilee on 29 October 1928. While performing, Stanislavski suffered a massive heart attack, but he continued until the curtain call, after which he collapsed. With that, his acting career came to an end. While on holiday in August 1926, Stanislavski began to develop what would become "An Actor's Work", his manual for actors written in the form of a fictional student's diary. Ideally, Stanislavski felt, it would consist of two volumes: the first would detail the actor's inner experiencing and outer, physical embodiment; the second would address rehearsal processes. Since the Soviet publishers used a format that would have made the first volume unwieldy, however, in practice this became three volumes—inner experiencing, outer characterisation, and rehearsal—each of which would be published separately, as it became ready. 
The danger that such an arrangement would obscure the mutual interdependence of these parts in the 'system' as a whole would be avoided, Stanislavski hoped, by means of an initial overview that would stress their integration in his psycho-physical approach; as it turned out, however, he never wrote the overview and many English-language readers came to confuse the first volume on psychological processes—published in a heavily abridged version in the US as "An Actor Prepares" (1936)—with the 'system' as a whole. The two editors—Hapgood with the American edition and Gurevich with the Russian—made conflicting demands on Stanislavski. Gurevich became increasingly concerned that splitting "An Actor's Work" into two books would not only encourage misunderstandings of the unity and mutual implication of the psychological and physical aspects of the 'system', but would also give its Soviet critics grounds on which to attack it: "to accuse you of dualism, spiritualism, idealism, etc." Frustrated with Stanislavski's tendency to tinker with details in preference to addressing more important missing sections, in May 1932 she terminated her involvement. Hapgood echoed Gurevich's frustration. In 1933, Stanislavski worked on the second half of "An Actor's Work". By 1935, a version of the first volume was ready for publication in America, to which the publishers made significant abridgements. A significantly different and far more complete Russian edition, "An Actor's Work on Himself, Part I", was not published until 1938, just after Stanislavski's death. The second part of "An Actor's Work on Himself" was published in the Soviet Union in 1948; an English-language variant, "Building a Character", was published a year later. The third volume, "An Actor's Work on a Role", was published in the Soviet Union in 1957; its nearest English-language equivalent, "Creating a Role", was published in 1961. 
The differences between the Russian and English-language editions of volumes two and three were even greater than those of the first volume. In 2008, an English-language translation of the complete Russian edition of "An Actor's Work" was published, with one of "An Actor's Work on a Role" following in 2010. While recuperating in Nice at the end of 1929, Stanislavski began a production plan for Shakespeare's "Othello". Hoping to use this as the basis for "An Actor's Work on a Role", his plan offers the earliest exposition of the rehearsal process that became known as his Method of Physical Action. He first explored this approach practically in his work on "Three Sisters" and "Carmen" in 1934 and "Molière" in 1935. In contrast to his earlier method of working on a play—which involved extensive readings and analysis around a table before any attempt to physicalise its action—Stanislavski now encouraged his actors to explore the action through its "active analysis". He felt that too much discussion in the early stages of rehearsal confused and inhibited the actors. Instead, focusing on the simplest physical actions, they improvised the sequence of dramatic situations given in the play. "The best analysis of a play", he argued, "is to take action in the given circumstances." If the actor justified and committed to the truth of the actions (which are easier to shape and control than emotional responses), Stanislavski reasoned, they would evoke truthful thoughts and feelings. Stanislavski's attitude to the use of emotion memory in rehearsals (as distinct from its use in actor training) had shifted over the years. Ideally, he felt, an instinctive identification with a character's situation should arouse an emotional response. The use of emotion memory in lieu of that had demonstrated a propensity for encouraging self-indulgence or hysteria in the actor. Its direct approach to feeling, Stanislavski felt, more often produced a block than the desired expression. 
Instead, an indirect approach to the subconscious via a focus on actions (supported by a commitment to the given circumstances and imaginative "Magic Ifs") was a more reliable means of luring the appropriate emotional response. This shift in approach corresponded both with an increased attention to the structure and dynamic of the play as a whole and with a greater prominence given to the distinction between the planning of a role and its performance. In performance the actor is aware of only one step at a time, Stanislavski reasoned, but this focus risks the loss of the overall dynamic of a role in the welter of moment-to-moment detail. Consequently, the actor must also adopt a different point of view in order to plan the role in relation to its dramatic structure; this might involve adjusting the performance by holding back at certain moments and playing full out at others. A sense of the whole thereby informs the playing of each episode. Borrowing a term from Henry Irving, Stanislavski called this the "perspective of the role". Every afternoon for five weeks during the summer of 1934 in Paris, Stanislavski worked with the American actress Stella Adler, who had sought his assistance with the blocks she had confronted in her performances. Given the emphasis that emotion memory had received in New York, Adler was surprised to find that Stanislavski rejected the technique except as a last resort. The news that this was Stanislavski's approach would have significant repercussions in the US; Lee Strasberg angrily rejected it and refused to modify his version of the 'system'. Following his heart attack in 1928, for the last decade of his life Stanislavski conducted most of his work writing, directing rehearsals, and teaching in his home on Leontievski Lane. In line with Joseph Stalin's policy of "isolation and preservation" towards certain internationally famous cultural figures, Stanislavski lived in a state of internal exile in Moscow. 
This protected him from the worst excesses of Stalin's "Great Terror". A number of articles critical of the terminology of Stanislavski's 'system' appeared in the run-up to a RAPP conference in early 1931, at which the attacks continued. The 'system' stood accused of philosophical idealism, of a-historicism, of disguising social and political problems under ethical and moral terms, and of "biological psychologism" (or "the suggestion of fixed qualities in nature"). In the wake of the first congress of the USSR Union of Writers (chaired by Maxim Gorky in August 1934), however, Socialist realism was established as the official party line in aesthetic matters. While the new policy would have disastrous consequences for the Soviet avant-garde, the MAT and Stanislavski's 'system' were enthroned as exemplary models. Given the difficulties he had with completing his manual for actors, Stanislavski decided that he needed to found a new studio if he was to ensure his legacy. "Our school will produce not just individuals," he wrote, "but a whole company". In June 1935, he began to instruct a group of teachers in the training techniques of the 'system' and the rehearsal processes of the Method of Physical Action. Twenty students (out of 3,500 auditionees) were accepted for the dramatic section of the Opera-Dramatic Studio, where classes began on 15 November. Stanislavski arranged a curriculum of four years of study that focused exclusively on technique and method—two years of the work detailed later in "An Actor's Work" and two of that in "An Actor's Work on a Role". Once the students were acquainted with the training techniques of the first two years, Stanislavski selected "Hamlet" and "Romeo and Juliet" for their work on roles. He worked with the students in March and April 1937, focusing on their sequences of physical actions, on establishing their through-lines of action, and on rehearsing scenes anew in terms of the actors' tasks. 
By June 1938 the students were ready for their first public showing, at which they performed a selection of scenes to a small number of spectators. The Opera-Dramatic Studio embodied the most complete implementation of the training exercises that Stanislavski described in his manuals. From late 1936 onwards, Stanislavski began to meet regularly with Vsevolod Meyerhold, with whom he discussed the possibility of developing a common theatrical language. In 1938, they made plans to work together on a production and discussed a synthesis of Stanislavski's Method of Physical Action and Meyerhold's biomechanical training. On 8 March, Meyerhold took over the rehearsals for "Rigoletto", the staging of which he completed after Stanislavski's death. On his death-bed Stanislavski declared to Yuri Bakhrushin that Meyerhold was "my sole heir in the theatre—here or anywhere else". Stalin's police tortured and killed Meyerhold in February 1940. Stanislavski died in his home at 3:45pm on 7 August 1938, having probably suffered another heart attack five days earlier. Thousands of people attended his funeral. Three weeks after his death his widow, Lilina, received an advance copy of the Russian-language edition of the first volume of "An Actor's Work"—the "labour of his life", as she called it. Stanislavski was buried in the Novodevichy Cemetery in Moscow, not far from the grave of Anton Chekhov.
https://en.wikipedia.org/wiki?curid=17318
K cell K cell may refer to:
https://en.wikipedia.org/wiki?curid=17319
Khartoum Khartoum or Khartum is the capital of Sudan. With a population of 5,274,321, its metropolitan area is the largest in Sudan, the sixth-largest in Africa, the second-largest in North Africa, and the fourth-largest in the Arab world. Khartoum is located at the confluence of the White Nile, flowing north from Lake Victoria, and the Blue Nile, flowing west from Lake Tana in Ethiopia. The location where the two Niles meet is known as "al-Mogran" or "al-Muqran" (English: "The Confluence"). From there, the Nile continues to flow north towards Egypt and the Mediterranean Sea. Divided by these two parts of the Nile, Khartoum is a tripartite metropolis with an estimated overall population of over five million people, consisting of Khartoum proper, linked by bridges to Khartoum North and to Omdurman to the west. Khartoum was founded in 1821 as part of Ottoman Egypt, north of the ancient city of Soba. The Siege of Khartoum in 1884 led to the capture of the city by Mahdist forces and a massacre of the defending Anglo-Egyptian garrison. It was reoccupied by British forces in 1898 and served as the seat of the Anglo-Egyptian Sudan government until 1956, when the city became the capital of an independent Sudan. The city has continued to experience unrest in modern times. Three hostages were killed during the Attack on the Saudi Embassy in Khartoum in 1973. The Justice and Equality Movement engaged in combat with Sudanese government forces in the city in 2008 as part of the War in Darfur. The Khartoum massacre occurred in 2019 during the Sudanese Revolution. Khartoum is an economic and trade centre in Northern Africa, with rail lines from Port Sudan and El-Obeid. It is served by Khartoum International Airport, with another airport, Khartoum New International Airport, currently under construction. 
Several national and cultural institutions are located in Khartoum and its metropolitan area, including the National Museum of Sudan, the Khalifa House Museum, the University of Khartoum, and the Sudan University of Science and Technology. The origin of the word "Khartoum" is uncertain. One theory argues that it is derived from an Arabic word meaning "trunk" or "hose", probably referring to the narrow strip of land extending between the Blue and White Niles. Dinka scholars argue that the name derives from the Dinka "khier-tuom" (as pronounced in various Dinka dialects), translating to "place where rivers meet". This is supported by historical accounts which place the Dinka homeland in central Sudan (around present-day Khartoum) as recently as the 13th-17th centuries A.D. Captain J.A. Grant, who reached Khartoum in 1863 with Captain Speke's expedition, thought the name was most probably derived from the Arabic word for "safflower" ("Carthamus tinctorius"), which was cultivated extensively in Egypt for its oil, used as fuel. Some scholars speculate that the word derives from a Nubian word meaning "the abode of Atum", Atum being the Nubian and Egyptian god of creation. Other Beja scholars suggest "Khartoum" is derived from a Beja word meaning "meeting". In 1821, Khartoum was established north of the ancient city of Soba by Ibrahim Pasha, the son of Egypt's ruler, Muhammad Ali Pasha, who had just incorporated Sudan into his realm. Originally, Khartoum served as an outpost for the Egyptian Army, but the settlement quickly grew into a regional centre of trade. It also became a focal point for the slave trade. Later, it became the administrative centre and official capital of Sudan. On 13 March 1884, troops loyal to the Mahdi Muhammad Ahmad started a siege of Khartoum against defenders led by British General Charles George Gordon. The siege ended in a massacre of the Anglo-Egyptian garrison when, on 26 January 1885, the heavily damaged city fell to the Mahdists. 
On 2 September 1898, Omdurman was the scene of the bloody Battle of Omdurman, during which British forces under Herbert Kitchener defeated the Mahdist forces defending the city. In 1973, the city was the site of an anomalous hostage crisis in which members of Black September held 10 hostages at the Saudi Arabian embassy, five of them diplomats. The US ambassador, the US deputy ambassador, and the Belgian "chargé d'affaires" were murdered. The remaining hostages were released. A 1973 United States Department of State document, declassified in 2006, concluded: "The Khartoum operation was planned and carried out with the full knowledge and personal approval of Yasser Arafat." In 1977, the first oil pipeline between Khartoum and the Port of Sudan was completed. Throughout the 1970s and 1980s, Khartoum was the destination for hundreds of thousands of refugees fleeing conflicts in neighboring nations such as Chad, Eritrea, Ethiopia, and Uganda. Many Eritrean and Ethiopian refugees assimilated into society, while others settled in large slums on the outskirts of the city. Since the mid-1980s, large numbers of refugees from South Sudan and Darfur fleeing the violence of the Second Sudanese Civil War and the Darfur conflict have settled around Khartoum. In 1991, Osama bin Laden purchased a house in the affluent al-Riyadh neighborhood of the city and another in Soba. He lived there until 1996, when he was banished from the country. Following the 1998 U.S. embassy bombings, the United States accused bin Laden's al-Qaeda group and, on 20 August, launched cruise missile attacks on the al-Shifa pharmaceutical factory in Khartoum North. The destruction of the factory produced diplomatic tension between the U.S. and Sudan. The factory ruins are now a tourist attraction. In November 1991, the government of President Omar al-Bashir sought to remove half the population from the city. 
The residents, deemed "squatters", were mostly southern Sudanese who the government feared could be potential rebel sympathizers. Around 425,000 people were placed in five "Peace Camps" in the desert an hour's drive from Khartoum. The camps were watched over by heavily armed security guards, many relief agencies were banned from assisting, and "the nearest food was at a market four miles away, a vast journey in the desert heat." Many residents were reduced to having only burlap sacks as housing. The intentional displacement was part of a large urban renewal plan backed by the housing minister, Sharaf Bannaga. The sudden death of SPLA head and vice-president of Sudan, John Garang, at the end of July 2005 was followed by three days of violent riots in the capital. The riots finally died down after southern Sudanese politicians and tribal leaders sent strong messages to the rioters. The situation could have been much more dire; even so, the death toll was at least 24, as youths from southern Sudan attacked northern Sudanese and clashed with security forces. The Organisation of African Unity summit of 18–22 July 1978 was held in Khartoum, during which Sudan was awarded the OAU presidency. The African Union summit of 16–24 January 2006 was held in Khartoum. The fourth Arab League summit was held in Khartoum on 29 August 1967. The Arab League summit of 28–29 March 2006 was also held in Khartoum, during which the Arab League awarded Sudan the Arab League presidency. On 10 May 2008, the Darfur rebel group Justice and Equality Movement moved into the city, where they engaged in heavy fighting with Sudanese government forces. Their soldiers included minors, and their goal was to topple Omar al-Bashir's government, though the Sudanese government succeeded in beating back the assault. On 23 October 2012, an explosion at the Yarmouk munitions factory killed two people and injured another person. 
The Sudanese government has claimed that the explosion was the result of an Israeli airstrike. On 3 June 2019, Khartoum was the site of the Khartoum massacre, in which over 100 dissidents were killed (the government said 61), hundreds more were injured, and 70 women were raped by the Rapid Support Forces (RSF) in order to forcibly disperse peaceful protests calling for civilian government. On 1 July 2020, activists demanded that al-Zibar Basha street in Khartoum be renamed. Al-Zubayr Rahma Mansur was a slave trader, and al-Zibar Basha street leads to the military base where the 2019 Khartoum massacre took place. Khartoum is located in the middle of the populated areas in Sudan, at almost the northeast center of the country, between 15 and 16 degrees latitude north and between 31 and 32 degrees longitude east. Khartoum marks the convergence of the White Nile and the Blue Nile, where they join to form the bottom of the leaning-S shape of the main Nile as it zigzags through northern Sudan into Egypt at Lake Nasser. Khartoum is relatively flat, at elevation , as the Nile flows northeast past Omdurman to Shendi, at elevation about away. Khartoum features a hot desert climate (Köppen climate classification "BWh") with a dry season occurring during winter, typical of the Saharo-Sahelian zone, which marks the progressive passage between the Sahara Desert's vast arid areas and the Sahel's vast semi-arid areas. The climate is extremely dry for most of the year, with about eight months when average rainfall is lower than . The very long dry season is itself divided into a hot, very dry season between November and February and a very hot, dry season between March and May. During this part of the year, hot, dry continental trade winds from the deserts, such as the harmattan (a northerly or northeasterly wind), sweep over the region; the weather is stable and very dry. 
The very irregular, very brief rainy season lasts about one month, with the maximum rainfall recorded in August, at about . The rainy season is characterized by a seasonal reversal of wind regimes, when the Intertropical Convergence Zone moves northwards. Average annual rainfall is very low, at only of precipitation. Khartoum records on average six days with or more and 19 days with or more of rainfall. The highest temperatures occur during two periods of the year: the first in the late dry season, when average high temperatures consistently exceed from April to June, and the second in the early dry season, when average high temperatures exceed in September and October. Khartoum is one of the hottest major cities on Earth, with annual mean temperatures hovering around . The city also has hot winters: in no month does the average monthly high temperature fall below . This is something not seen in other major cities with hot desert climates, such as Riyadh, Baghdad, and Phoenix. Temperatures cool off enough during the night, with Khartoum's lowest average low temperature of the year just above . After the signing of the historic Comprehensive Peace Agreement between the government of Sudan and the Sudan People's Liberation Movement (SPLM), the Government of Sudan began a massive development project. In 2007, the biggest projects in Khartoum were the Al-Mogran Development Project, two five-star hotels, a new airport, the Mac Nimir Bridge (finished in October 2007), and the Tuti Bridge that links Khartoum to Tuti Island. In the 21st century, Khartoum developed on the basis of Sudan's oil wealth (although the independence of South Sudan in 2011 affected the economy of Sudan negatively). The center of the city has tree-lined streets. Khartoum has the highest concentration of economic activity in the country. 
This has changed as major economic developments take place in other parts of the country, like oil exploration in the South, the Giad Industrial Complex in Al Jazirah state, the White Nile Sugar Project in Central Sudan, and the Merowe Dam in the North. Among the city's industries are printing, glass manufacturing, food processing, and textiles. Petroleum products are now produced in the far north of Khartoum state, providing fuel and jobs for the city. One of Sudan's largest refineries is located in northern Khartoum. The Souq Al Arabi is Khartoum's largest open-air market. The "souq" is spread over several blocks in the center of Khartoum proper, just south of the Great Mosque (Mesjid al-Kabir) and the minibus station. It is divided into separate sections, including one focused entirely on gold. Al Qasr Street and Al Jamhoriyah Street are considered the most famous high streets in Khartoum State. Afra Mall is located in the southern suburb of Arkeweet. The Afra Mall has a supermarket, retail outlets, coffee shops, a bowling alley, movie theaters, and a children's playground. In 2011, Sudan opened the hotel section and part of the food court of the new Corinthia Hotel Tower. The mall and shopping section is still under construction. Khartoum is the main location for most of Sudan's top educational bodies. There are four main levels of education. The education system in Sudan went through many changes in the late 1980s and early 1990s. Khartoum is home to the largest airport in Sudan, Khartoum International Airport. It is the main hub for Sudan Airways, Sudan's main carrier. The airport was planned for the southern outskirts of the city, but with Khartoum's rapid growth and consequent urban sprawl, the airport is still located in the heart of the city. 
Bridges over the Blue Nile connect Khartoum to Khartoum North; bridges over the White Nile connect Khartoum to Omdurman; other bridges connect Tuti Island. Khartoum has rail lines from Wadi Halfa, Port Sudan on the Red Sea, and El Obeid, all operated by Sudan Railways. Some lines also extend to parts of South Sudan. The architecture of Khartoum cannot be identified by one style or even two styles; it is as diverse as its culture, where 597 different cultural groups meet. Sudan was home to numerous ancient civilizations, such as the Kingdom of Kush, Kerma, Nobatia, Alodia, Makuria, Meroë, and others, most of which flourished along the Nile. During the pre-dynastic period, Nubia and Nagadan Upper Egypt were identical, simultaneously evolving systems of Pharaonic kingship by 3300 BC. In response to the worldwide deterioration of the environment and the increase in pollution levels, there has been a strong movement towards sustainable architecture across the globe. This movement has received attention and concern from governments as well as the private sector. In the past decades, Sudan has seen a huge surge in infrastructure and technology, which has led to many new and innovative building concepts, ideas, and construction techniques. There is now a constant flow of new projects, leading to a new, transformed, modernised form of architecture. The city's places of worship are predominantly Muslim mosques. There are also Christian churches and temples: the Roman Catholic Archdiocese of Khartoum (Catholic Church), the Sudan Interior Church (Baptist World Alliance), and the Presbyterian Church in Sudan (World Communion of Reformed Churches). The largest museum in Sudan is the National Museum of Sudan. Founded in 1971, it contains works from different epochs of Sudanese history. 
Among the exhibits are two Egyptian temples of Buhen and Semna, originally built by Queen Hatshepsut and Pharaoh Tuthmosis III, respectively, but relocated to Khartoum upon the flooding of Lake Nasser. The Republican Palace Museum, opened in 2000, is located in the former Anglican All Saints' cathedral on Sharia al-Jama'a, next to the historical Presidential Palace. The Ethnographic Museum is located on Sharia al-Jama'a, close to the Mac Nimir Bridge. Khartoum is home to a small botanical garden, in the Mogran district of the city. Khartoum is home to several clubs such as the Blue Nile Sailing Club, the German Club, the Greek Hotel, the Coptic Club, the Syrian Club and the International Club. There are also two football clubs situated in Khartoum – Al Khartoum SC and Al Ahli Khartoum.
https://en.wikipedia.org/wiki?curid=17320
Alpha-Ketoglutaric acid α-Ketoglutaric acid (2-oxoglutaric acid) is one of two ketone derivatives of glutaric acid. The term "ketoglutaric acid," when not further qualified, almost always refers to the alpha variant. β-Ketoglutaric acid differs only in the position of the ketone functional group, and is much less common. Its anion, α-ketoglutarate (also called 2-oxoglutarate), is an important biological compound. It is the keto acid produced by deamination of glutamate, and is an intermediate in the Krebs cycle. The enzyme alanine transaminase reversibly converts α-ketoglutarate and L-alanine to L-glutamate and pyruvate. α-Ketoglutarate is a key intermediate in the Krebs cycle, coming after isocitrate and before succinyl-CoA. Anaplerotic reactions can replenish the cycle at this juncture by synthesizing α-ketoglutarate through transamination of glutamate, or through the action of glutamate dehydrogenase on glutamate. Glutamine is synthesized from glutamate by glutamine synthetase, which uses adenosine triphosphate to form glutamyl phosphate; this intermediate is attacked by ammonia acting as a nucleophile, giving glutamine and inorganic phosphate. Proline, arginine, and (in some organisms) lysine are also synthesized from glutamate, with further steps and enzymes facilitating the reactions. Another function is to combine with nitrogen released in cells, thereby preventing nitrogen overload. α-Ketoglutarate is one of the most important nitrogen transporters in metabolic pathways. The amino groups of amino acids are attached to it (by transamination) and carried to the liver, where the urea cycle takes place. α-Ketoglutarate is transaminated, along with glutamine, to form the excitatory neurotransmitter glutamate. Glutamate can then be decarboxylated (requiring vitamin B6) into the inhibitory neurotransmitter gamma-aminobutyric acid. 
High ammonia and/or high nitrogen levels have been reported with high protein intake, excessive aluminum exposure, Reye's syndrome, cirrhosis, and urea cycle disorders; α-ketoglutarate plays a role in the detoxification of ammonia in the brain. Acting as a co-substrate for α-ketoglutarate-dependent hydroxylases, it also plays an important role in oxidation reactions involving molecular oxygen. Molecular oxygen (O2) directly oxidizes many compounds to produce useful products in an organism, such as antibiotics, in reactions catalyzed by oxygenases. In many oxygenases, α-ketoglutarate assists the reaction by being oxidized together with the main substrate. EGLN1, one of the α-ketoglutarate-dependent oxygenases, is an O2 sensor, informing the organism of the oxygen level in its environment. In combination with molecular oxygen, α-ketoglutarate is one of the requirements for the hydroxylation of proline to hydroxyproline in the production of type 1 collagen. α-Ketoglutarate, which is released by several cell types, decreases the levels of hydrogen peroxide, being itself depleted and converted to succinate in cell culture media. One study linked α-ketoglutarate to significantly increased lifespan in nematode worms. Another study showed that α-ketoglutarate promotes TH1 differentiation, and that depletion of glutamine (and thereby of its metabolite α-ketoglutarate) favors Treg (regulatory T-cell) differentiation; this might play a role in skewing the balance in favor of Tregs under the amino acid deprivation that can be seen in the tumor microenvironment. α-Ketoglutarate can be produced by several routes, and can in turn be used to produce a range of other compounds.
https://en.wikipedia.org/wiki?curid=17322
Keynesian economics Keynesian economics (sometimes Keynesianism; named for the economist John Maynard Keynes) comprises various macroeconomic theories about how, in the short run – and especially during recessions – economic output is strongly influenced by aggregate demand (total spending in the economy). In the Keynesian view, aggregate demand does not necessarily equal the productive capacity of the economy; instead, it is influenced by a host of factors and sometimes behaves erratically, affecting production, employment, and inflation. Keynesian economics developed during and after the Great Depression from the ideas presented by Keynes in his 1936 book, "The General Theory of Employment, Interest and Money". Keynes contrasted his approach with the aggregate supply-focused classical economics that preceded his book. The interpretations of Keynes that followed are contentious, and several schools of economic thought claim his legacy. Keynesian economics served as the standard economic model in the developed nations during the later part of the Great Depression, World War II, and the post-war economic expansion (1945–1973), though it lost some influence following the oil shock and resulting stagflation of the 1970s. The advent of the financial crisis of 2007–08 caused a resurgence in Keynesian thought, which continues as new Keynesian economics. Keynesian economists generally argue that, as aggregate demand is volatile and unstable, a market economy often experiences inefficient macroeconomic outcomes in the form of economic recessions (when demand is low) and inflation (when demand is high), and that these can be mitigated by economic policy responses – in particular, monetary policy actions by the central bank and fiscal policy actions by the government – which can help stabilize output over the business cycle. 
Keynesian economists generally advocate a managed market economy – predominantly private sector, but with an active role for government intervention during recessions and depressions. Macroeconomics is the study of the factors applying to an economy as a whole, such as the overall price level, the interest rate, and the level of employment (or equivalently, of income/output measured in real terms). The classical tradition of partial equilibrium theory had been to split the economy into separate markets, each of whose equilibrium conditions could be stated as a single equation determining a single variable. The theoretical apparatus of supply and demand curves developed by Fleeming Jenkin and Alfred Marshall provided a unified mathematical basis for this approach, which the Lausanne School generalized to general equilibrium theory. For macroeconomics the relevant partial theories were: the Quantity theory of money determining the price level, the classical theory of the interest rate, and for employment the condition referred to by Keynes as the "first postulate of classical economics" stating that the wage is equal to the marginal product, which is a direct application of the marginalist principles developed during the nineteenth century (see "The General Theory"). Keynes sought to supplant all three aspects of the classical theory. Although Keynes's work was crystallized and given impetus by the advent of the Great Depression, it was part of a long-running debate within economics over the existence and nature of general gluts. A number of the policies Keynes advocated to address the Great Depression (notably government deficit spending at times of low private investment or consumption), and many of the theoretical ideas he proposed (effective demand, the multiplier, the paradox of thrift), had been advanced by various authors in the 19th and early 20th centuries. 
Keynes's unique contribution was to provide a "general theory" of these, which proved acceptable to the economic establishment. Intellectual precursors of Keynesian economics were the underconsumption theories associated with John Law, Thomas Malthus, the Birmingham School of Thomas Attwood, and the American economists William Trufant Foster and Waddill Catchings, who were influential in the 1920s and 1930s. Underconsumptionists were, like Keynes after them, concerned with the failure of aggregate demand to attain potential output, calling this "underconsumption" (focusing on the demand side) rather than "overproduction" (which would focus on the supply side), and advocating economic interventionism. Keynes specifically discussed underconsumption (which he wrote "under-consumption") in the "General Theory", in Chapter 22, Section IV and Chapter 23, Section VII. Numerous concepts were developed earlier and independently of Keynes by the Stockholm school during the 1930s; these accomplishments were described in a 1937 article, published in response to the 1936 "General Theory", sharing the Swedish discoveries. The paradox of thrift was stated in 1892 by John M. Robertson in his "The Fallacy of Saving", in earlier forms by mercantilist economists since the 16th century, and similar sentiments date to antiquity. In 1923 Keynes published his first contribution to economic theory, "A Tract on Monetary Reform", whose point of view is classical but incorporates ideas that later played a part in the "General Theory". In particular, looking at the hyperinflation in European economies, he drew attention to the opportunity cost of holding money (identified with inflation rather than interest) and its influence on the velocity of circulation. 
In 1930 he published "A Treatise on Money", intended as a comprehensive treatment of its subject "which would confirm his stature as a serious academic scholar, rather than just as the author of stinging polemics", and marking a large step in the direction of his later views. In it, he attributes unemployment to wage stickiness and treats saving and investment as governed by independent decisions: the former varying positively with the interest rate, the latter negatively. The velocity of circulation is expressed as a function of the rate of interest. He interpreted his treatment of liquidity as implying a purely monetary theory of interest. Keynes's younger colleagues of the Cambridge Circus and Ralph Hawtrey believed that his arguments implicitly assumed full employment, and this influenced the direction of his subsequent work. During 1933, he wrote essays on various economic topics "all of which are cast in terms of movement of output as a whole". At the time Keynes wrote the "General Theory", it had been a tenet of mainstream economic thought that the economy would automatically revert to a state of general equilibrium: it had been assumed that, because the needs of consumers are always greater than the capacity of the producers to satisfy those needs, everything that is produced would eventually be consumed once the appropriate price was found for it. This perception is reflected in Say's law and in the writings of David Ricardo, which state that individuals produce so that they can either consume what they have manufactured or sell their output so that they can buy someone else's output. This argument rests upon the assumption that if a surplus of goods or services exists, they would naturally drop in price to the point where they would be consumed. 
Given the backdrop of high and persistent unemployment during the Great Depression, Keynes argued that there was no guarantee that the goods that individuals produce would be met with adequate effective demand, and periods of high unemployment could be expected, especially when the economy was contracting in size. He saw the economy as unable to maintain itself at full employment automatically, and believed that it was necessary for the government to step in and put purchasing power into the hands of the working population through government spending. Thus, according to Keynesian theory, some individually rational microeconomic-level actions such as not investing savings in the goods and services produced by the economy, if taken collectively by a large proportion of individuals and firms, can lead to outcomes wherein the economy operates below its potential output and growth rate. Prior to Keynes, a situation in which aggregate demand for goods and services did not meet supply was referred to by classical economists as a "general glut", although there was disagreement among them as to whether a general glut was possible. Keynes argued that when a glut occurred, it was the over-reaction of producers and the laying off of workers that led to a fall in demand and perpetuated the problem. Keynesians therefore advocate an active stabilization policy to reduce the amplitude of the business cycle, which they rank among the most serious of economic problems. According to the theory, government spending can be used to increase aggregate demand, thus increasing economic activity, reducing unemployment and deflation. The Liberal Party fought the 1929 General Election on a promise to "reduce levels of unemployment to normal within one year by utilising the stagnant labour force in vast schemes of national development". 
David Lloyd George launched his campaign in March with a policy document, "We can cure unemployment," which tentatively claimed that, "Public works would lead to a second round of spending as the workers spent their wages." Two months later Keynes, then nearing completion of his "Treatise on money", and Hubert Henderson collaborated on a political pamphlet seeking to "provide academically respectable economic arguments" for Lloyd George's policies. It was titled "Can Lloyd George do it?" and endorsed the claim that "greater trade activity would make for greater trade activity ... with a cumulative effect". This became the mechanism of the "ratio" published by Richard Kahn in his 1931 paper "The relation of home investment to unemployment", described by Alvin Hansen as "one of the great landmarks of economic analysis". The "ratio" was soon rechristened the "multiplier" at Keynes's suggestion. The multiplier of Kahn's paper is based on a respending mechanism familiar nowadays from textbooks. Samuelson puts it as follows: Let’s suppose that I hire unemployed resources to build a $1000 woodshed. My carpenters and lumber producers will get an extra $1000 of income... If they all have a marginal propensity to consume of 2/3, they will now spend $666.67 on new consumption goods. The producers of these goods will now have extra incomes... they in turn will spend $444.44 ... Thus an endless chain of "secondary consumption respending"  is set in motion by my "primary"  investment of $1000. Samuelson's treatment closely follows Joan Robinson's account of 1937 and is the main channel by which the multiplier has influenced Keynesian theory. It differs significantly from Kahn's paper and even more from Keynes's book. The designation of the initial spending as "investment" and the employment-creating respending as "consumption" echoes Kahn faithfully, though he gives no reason why initial consumption or subsequent investment respending shouldn't have exactly the same effects. 
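Samuelson's respending chain can be checked numerically. The short sketch below is a minimal illustration (the function name is ours, not Samuelson's); it uses the figures of the quoted woodshed example – an initial $1000 of investment and a marginal propensity to consume of 2/3 – and compares the summed rounds of respending with the closed-form multiplier 1/(1 − MPC):

```python
def respending_total(initial: float, mpc: float, rounds: int = 200) -> float:
    """Sum the respending chain initial * (1 + mpc + mpc**2 + ...)."""
    return sum(initial * mpc**n for n in range(rounds))

initial, mpc = 1000.0, 2.0 / 3.0
total = respending_total(initial, mpc)      # 1000 + 666.67 + 444.44 + ...
closed_form = initial / (1.0 - mpc)         # geometric-series limit: 3000
print(round(total, 2), round(closed_form, 2))
```

The successive rounds ($1000, $666.67, $444.44, ...) form a geometric series converging to $3000, i.e. a multiplier of 3; the larger the propensity to consume, the larger the multiplier.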
Henry Hazlitt, who considered Keynes as much a culprit as Kahn and Samuelson, wrote that ... ... in connection with the multiplier (and indeed most of the time) what Keynes is referring to as "investment" really means "any addition to spending for any purpose"... The word "investment" is being used in a Pickwickian, or Keynesian, sense. Kahn envisaged money as being passed from hand to hand, creating employment at each step, until it came to rest in a "cul-de-sac"  (Hansen's term was "leakage"); the only "culs-de-sac"  he acknowledged were imports and hoarding, although he also said that a rise in prices might dilute the multiplier effect. Jens Warming recognised that personal saving had to be considered, treating it as a "leakage" (p. 214) while recognising on p. 217 that it might in fact be invested. The textbook multiplier gives the impression that making society richer is the easiest thing in the world: the government just needs to spend more. In Kahn's paper, it is harder. For him, the initial expenditure must not be a diversion of funds from other uses, but an increase in the total expenditure: something impossible – if understood in real terms – under the classical theory that the level of expenditure is limited by the economy's income/output. On page 174, Kahn rejects the claim that the effect of public works is at the expense of expenditure elsewhere, admitting that this might arise if the revenue is raised by taxation, but says that other available means have no such consequences. As an example, he suggests that the money may be raised by borrowing from banks, since ... ... it is always within the power of the banking system to advance to the Government the cost of the roads without in any way affecting the flow of investment along the normal channels. This assumes that banks are free to create resources to answer any demand. But Kahn adds that ... ... no such hypothesis is really necessary. 
For it will be demonstrated later on that, "pari passu"  with the building of roads, funds are released from various sources at precisely the rate that is required to pay the cost of the roads. The demonstration relies on "Mr Meade's relation" (due to James Meade) asserting that the total amount of money that disappears into "culs-de-sac"  is equal to the original outlay, which in Kahn's words "should bring relief and consolation to those who are worried about the monetary sources" (p. 189). A respending multiplier had been proposed earlier by Hawtrey in a 1928 Treasury memorandum ("with imports as the only leakage"), but the idea was discarded in his own subsequent writings. Soon afterwards the Australian economist Lyndhurst Giblin published a multiplier analysis in a 1930 lecture (again with imports as the only leakage). The idea itself was much older. Some Dutch mercantilists had believed in an infinite multiplier for military expenditure (assuming no import "leakage"), since ... ... a war could support itself for an unlimited period if only money remained in the country ... For if money itself is "consumed", this simply means that it passes into someone else's possession, and this process may continue indefinitely. Multiplier doctrines had subsequently been expressed in more theoretical terms by the Dane Julius Wulff (1896), the Australian Alfred de Lissa (late 1890s), the German/American Nicholas Johannsen (same period), and the Dane Fr. Johannsen (1925/1927). Kahn himself said that the idea was given to him as a child by his father. As the 1929 election approached "Keynes was becoming a strong public advocate of capital development" as a public measure to alleviate unemployment. Winston Churchill, the Conservative Chancellor, took the opposite view: It is the orthodox Treasury dogma, steadfastly held ... [that] very little additional employment and no permanent additional employment can, in fact, be created by State borrowing and State expenditure. 
Keynes pounced on a chink in the Treasury view. Cross-examining Sir Richard Hopkins, a Second Secretary in the Treasury, before the Macmillan Committee on Finance and Industry in 1930 he referred to the "first proposition" that "schemes of capital development are of no use for reducing unemployment" and asked whether "it would be a misunderstanding of the Treasury view to say that they hold to the first proposition". Hopkins responded that "The first proposition goes much too far. The first proposition would ascribe to us an absolute and rigid dogma, would it not?" Later the same year, speaking in a newly created Committee of Economists, Keynes tried to use Kahn's emerging multiplier theory to argue for public works, "but Pigou's and Henderson's objections ensured that there was no sign of this in the final product". In 1933 he gave wider publicity to his support for Kahn's multiplier in a series of articles titled "The Means to Prosperity" in "The Times" newspaper. A. C. Pigou was at the time the sole economics professor at Cambridge. He had a continuing interest in the subject of unemployment, having expressed the view in his popular "Unemployment"  (1913) that it was caused by "maladjustment between wage-rates and demand" – a view Keynes may have shared prior to the years of the "General Theory". Nor were his practical recommendations very different: "on many occasions in the thirties" Pigou "gave public support ... to State action designed to stimulate employment." Where the two men differed is in the link between theory and practice. Keynes was seeking to build theoretical foundations to support his recommendations for public works while Pigou showed no disposition to move away from classical doctrine. Referring to him and Dennis Robertson, Keynes asked rhetorically: "Why do they insist on maintaining theories from which their own practical conclusions cannot possibly follow?" 
John Maynard Keynes (1883–1946) set forward the ideas that became the basis for Keynesian economics in his main work, "The General Theory of Employment, Interest and Money" (1936). It was written during the Great Depression, when unemployment rose to 25% in the United States and as high as 33% in some countries. It is almost wholly theoretical, enlivened by occasional passages of satire and social commentary. The book had a profound impact on economic thought, and ever since it was published there has been debate over its meaning. Keynes begins the "General Theory"  with a summary of the classical theory of employment, which he encapsulates in his formulation of Say's Law as the dictum "Supply creates its own demand". Under the classical theory, the wage rate is determined by the marginal productivity of labour, and as many people are employed as are willing to work at that rate. Unemployment may arise through friction or may be "voluntary," in the sense that it arises from a refusal to accept employment owing to "legislation or social practices ... or mere human obstinacy", but "...the classical postulates do not admit of the possibility of the third category," which Keynes defines as "involuntary unemployment". Keynes raises two objections to the classical theory's assumption that "wage bargains ... determine the real wage". The first lies in the fact that "labour stipulates (within limits) for a money-wage rather than a real wage". The second is that classical theory assumes that, "The real wages of labour depend on the wage bargains which labour makes with the entrepreneurs," whereas, "If money wages change, one would have expected the classical school to argue that prices would change in almost the same proportion, leaving the real wage and the level of unemployment practically the same as before." 
Keynes considers his second objection the more fundamental, but most commentators concentrate on his first one: it has been argued that the quantity theory of money protects the classical school from the conclusion Keynes expected from it. Saving is that part of income not devoted to consumption, and consumption is that part of expenditure not allocated to investment, i.e., to durable goods. Hence saving encompasses hoarding (the accumulation of income as cash) and the purchase of durable goods. The existence of net hoarding, or of a demand to hoard, is not admitted by the simplified liquidity preference model of the "General Theory". Once he rejects the classical theory that unemployment is due to excessive wages, Keynes proposes an alternative based on the relationship between saving and investment. In his view, unemployment arises whenever entrepreneurs' incentive to invest fails to keep pace with society's propensity to save ("propensity" is one of Keynes's synonyms for "demand"). The levels of saving and investment are necessarily equal, and income is therefore held down to a level where the desire to save is no greater than the incentive to invest. The incentive to invest arises from the interplay between the physical circumstances of production and psychological anticipations of future profitability; but once these things are given the incentive is independent of income and depends solely on the rate of interest "r". Keynes designates its value as a function of "r"  as the "schedule of the marginal efficiency of capital". The propensity to save behaves quite differently. Saving is simply that part of income not devoted to consumption, and: ... the prevailing psychological law seems to be that when aggregate income increases, consumption expenditure will also increase but to a somewhat lesser extent. Keynes adds that "this psychological law was of the utmost importance in the development of my own thought". 
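The psychological law quoted above amounts to saying that the marginal propensity to consume C′(Y) lies strictly between 0 and 1. Since saving is defined as the part of income not devoted to consumption, the two marginal propensities are complements, a relation worth recording because it ties the behaviour of saving directly to the consumption function:

```latex
S(Y) = Y - C(Y)
\quad\Longrightarrow\quad
S'(Y) = 1 - C'(Y),
\qquad
0 < C'(Y) < 1 \;\Longrightarrow\; 0 < S'(Y) < 1 .
```

So every increment of income is split between extra consumption and extra saving, with neither share vanishing.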
Keynes viewed the money supply as one of the main determinants of the state of the real economy. The significance he attributed to it is one of the innovative features of his work, and was influential on the politically hostile monetarist school. Money supply comes into play through the "liquidity preference" function, which is the demand function that corresponds to money supply. It specifies the amount of money people will seek to hold according to the state of the economy. In Keynes's first (and simplest) account – that of Chapter 13 – liquidity preference is determined solely by the interest rate "r"—which is seen as the earnings forgone by holding wealth in liquid form: hence liquidity preference can be written "L"("r" ) and in equilibrium must equal the externally fixed money supply "M̂". Money supply, saving and investment combine to determine the level of income as illustrated in the diagram, where the top graph shows money supply (on the vertical axis) against interest rate. "M̂"  determines the ruling interest rate "r̂"  through the liquidity preference function. The rate of interest determines the level of investment "Î"  through the schedule of the marginal efficiency of capital, shown as a blue curve in the lower graph. The red curves in the same diagram show what the propensities to save are for different incomes "Y" ; and the income "Ŷ"  corresponding to the equilibrium state of the economy must be the one for which the implied level of saving at the established interest rate is equal to "Î". In Keynes's more complicated liquidity preference theory (presented in Chapter 15) the demand for money depends on income as well as on the interest rate and the analysis becomes more complicated. Keynes never fully integrated his second liquidity preference doctrine with the rest of his theory, leaving that to John Hicks: see the IS-LM model below. 
Keynes rejects the classical explanation of unemployment based on wage rigidity, but it is not clear what effect the wage rate has on unemployment in his system. He treats wages of all workers as proportional to a single rate set by collective bargaining, and chooses his units so that this rate never appears separately in his discussion. It is present implicitly in those quantities he expresses in wage units, while being absent from those he expresses in money terms. It is therefore difficult to see whether, and in what way, his results differ for a different wage rate, nor is it entirely clear what he thought about the matter. An increase in the money supply, according to Keynes's theory, leads to a drop in the interest rate and an increase in the amount of investment that can be undertaken profitably, bringing with it an increase in total income. Keynes' name is associated with fiscal, rather than monetary, measures but they receive only passing (and often satirical) reference in the "General Theory". He mentions "increased public works" as an example of something that brings employment through the "multiplier", but this is before he develops the relevant theory, and he does not follow up when he gets to the theory. Later in the same chapter he tells us that: Ancient Egypt was doubly fortunate, and doubtless owed to this its fabled wealth, in that it possessed two activities, namely, pyramid-building as well as the search for the precious metals, the fruits of which, since they could not serve the needs of man by being consumed, did not stale with abundance. The Middle Ages built cathedrals and sang dirges. Two pyramids, two masses for the dead, are twice as good as one; but not so two railways from London to York. But again, he doesn't get back to his implied recommendation to engage in public works, even if not fully justified from their direct benefits, when he constructs the theory. On the contrary he later advises us that ... ... 
our final task might be to select those variables which can be deliberately controlled or managed by central authority in the kind of system in which we actually live ... and this appears to look forward to a future publication rather than to a subsequent chapter of the "General Theory". Keynes' view of saving and investment was his most important departure from the classical outlook. It can be illustrated using the "Keynesian cross" devised by Paul Samuelson. The horizontal axis denotes total income and the purple curve shows "C" ("Y" ), the propensity to consume, whose complement "S" ("Y" ) is the propensity to save: the sum of these two functions is equal to total income, which is shown by the broken line at 45°. The horizontal blue line "I" ("r" ) is the schedule of the marginal efficiency of capital whose value is independent of "Y". Keynes interprets this as the demand for investment and denotes the sum of demands for consumption and investment as "aggregate demand", plotted as a separate curve. Aggregate demand must equal total income, so equilibrium income must be determined by the point where the aggregate demand curve crosses the 45° line. This is the same horizontal position as the intersection of "I" ("r" ) with "S" ("Y" ). The equation "I" ("r" ) = "S" ("Y" ) had been accepted by the classics, who had viewed it as the condition of equilibrium between supply and demand for investment funds and as determining the interest rate (see the classical theory of interest). But insofar as they had had a concept of aggregate demand, they had seen the demand for investment as being given by "S" ("Y" ), since for them saving was simply the indirect purchase of capital goods, with the result that aggregate demand was equal to total income as an identity rather than as an equilibrium condition. 
Keynes takes note of this view in Chapter 2, where he finds it present in the early writings of Alfred Marshall but adds that "the doctrine is never stated to-day in this crude form". The equation "I" ("r" ) = "S" ("Y" ) is accepted by Keynes for some or all of the following reasons: Keynes introduces his discussion of the multiplier in Chapter 10 with a reference to Kahn's earlier paper (see below). He designates Kahn's multiplier the "employment multiplier" in distinction to his own "investment multiplier" and says that the two are only "a little different". Kahn's multiplier has consequently been understood by much of the Keynesian literature as playing a major role in Keynes's own theory, an interpretation encouraged by the difficulty of understanding Keynes's presentation. Kahn's multiplier gives the title ("The multiplier model") to the account of Keynesian theory in Samuelson's "Economics"  and is almost as prominent in Alvin Hansen’s "Guide to Keynes"  and in Joan Robinson's "Introduction to the Theory of Employment". Keynes states that there is ... ... a confusion between the logical theory of the multiplier, which holds good continuously, without time-lag ... and the consequence of an expansion in the capital goods industries which take gradual effect, subject to a time-lag, and only after an interval ... and implies that he is adopting the former theory. And when the multiplier eventually emerges as a component of Keynes's theory (in Chapter 18) it turns out to be simply a measure of the change of one variable in response to a change in another. The schedule of the marginal efficiency of capital is identified as one of the independent variables of the economic system: "What [it] tells us, is ... the point to which the output of new investment will be pushed ..." The multiplier then gives "the ratio ... between an increment of investment and the corresponding increment of aggregate income". G. L. S. 
Shackle regarded Keynes' move away from Kahn's multiplier as ... ... a retrograde step ... For when we look upon the Multiplier as an instantaneous functional relation ... we are merely using the word Multiplier to stand for an alternative way of looking at the marginal propensity to consume ..., which G. M. Ambrosi cites as an instance of "a Keynesian commentator who would have liked Keynes to have written something less 'retrograde'". The value Keynes assigns to his multiplier is the reciprocal of the marginal propensity to save: "k"  = 1 / "S" '("Y" ). This is the same as the formula for Kahn's multiplier in a closed economy, assuming that all saving (including the purchase of durable goods), and not just hoarding, constitutes leakage. Keynes gave his formula almost the status of a definition (it is put forward in advance of any explanation). His multiplier is indeed the value of "the ratio ... between an increment of investment and the corresponding increment of aggregate income" as Keynes derived it from his Chapter 13 model of liquidity preference, which implies that income must bear the entire effect of a change in investment. But under his Chapter 15 model a change in the schedule of the marginal efficiency of capital has an effect shared between the interest rate and income in proportions depending on the partial derivatives of the liquidity preference function. Keynes did not investigate the question of whether his formula for the multiplier needed revision. The liquidity trap is a phenomenon that may impede the effectiveness of monetary policies in reducing unemployment. Economists generally think the rate of interest will not fall below a certain limit, often seen as zero or a slightly negative number. Keynes suggested that the limit might be appreciably greater than zero but did not attach much practical significance to it. The term "liquidity trap" was coined by Dennis Robertson in his comments on the "General Theory", but it was John Hicks in "Mr.
Keynes and the Classics" who recognised the significance of a slightly different concept. If the economy is in a position such that the liquidity preference curve is almost vertical, as must happen as the lower limit on "r"  is approached, then a change in the money supply "M̂"  makes almost no difference to the equilibrium rate of interest "r̂"  or, unless there is compensating steepness in the other curves, to the resulting income "Ŷ". As Hicks put it, "Monetary means will not force down the rate of interest any further." Paul Krugman has worked extensively on the liquidity trap, claiming that it was the problem confronting the Japanese economy around the turn of the millennium. In his later words: Short-term interest rates were close to zero, long-term rates were at historical lows, yet private investment spending remained insufficient to bring the economy out of deflation. In that environment, monetary policy was just as ineffective as Keynes described. Attempts by the Bank of Japan to increase the money supply simply added to already ample bank reserves and public holdings of cash... Hicks showed how to analyze Keynes' system when liquidity preference is a function of income as well as of the rate of interest. Keynes's admission of income as an influence on the demand for money is a step back in the direction of classical theory, and Hicks takes a further step in the same direction by generalizing the propensity to save to take both "Y"  and "r"  as arguments. Less classically he extends this generalization to the schedule of the marginal efficiency of capital. The IS-LM model uses two equations to express Keynes' model. The first, now written "I" ("Y", "r" ) = "S" ("Y","r" ), expresses the principle of effective demand. We may construct a graph on ("Y", "r" ) coordinates and draw a line connecting those points satisfying the equation: this is the "IS"  curve. 
In the same way we can write the equation of equilibrium between liquidity preference and the money supply as "L"("Y" ,"r" ) = "M̂" and draw a second curve – the "LM"  curve – connecting points that satisfy it. The equilibrium values "Ŷ"  of total income and "r̂"  of interest rate are then given by the point of intersection of the two curves. If we follow Keynes's initial account under which liquidity preference depends only on the interest rate "r", then the "LM"  curve is horizontal. Joan Robinson commented that: ... modern teaching has been confused by J. R. Hicks' attempt to reduce the "General Theory" to a version of static equilibrium with the formula IS–LM. Hicks has now repented and changed his name from J. R. to John, but it will take a long time for the effects of his teaching to wear off. Hicks subsequently relapsed. Keynes argued that the solution to the Great Depression was to stimulate the country ("incentive to invest") through some combination of two approaches: If the interest rate at which businesses and consumers can borrow decreases, investments that were previously uneconomic become profitable, and large consumer sales normally financed through debt (such as houses, automobiles, and, historically, even appliances like refrigerators) become more affordable. A principal function of central banks in countries that have them is to influence this interest rate through a variety of mechanisms collectively called "monetary policy". This is how monetary policy that reduces interest rates is thought to stimulate economic activity, i.e., "grow the economy"—and why it is called "expansionary" monetary policy. Expansionary fiscal policy consists of increasing net public spending, which the government can effect by a) taxing less, b) spending more, or c) both. Investment and consumption by government raises demand for businesses' products and for employment, reversing the effects of the aforementioned imbalance. 
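A linearised IS-LM sketch, with hypothetical coefficients (none of these numbers come from the text), shows how the intersection of the two curves determines the equilibrium pair (Ŷ, r̂):

```python
# Linear IS-LM sketch with illustrative, assumed coefficients.
# IS:  s*Y + d*r = I0    (from I(r) = I0 - d*r set equal to S(Y) = s*Y)
# LM:  ly*Y - lr*r = M   (liquidity preference L(Y, r) equal to money supply)
s, d, I0 = 0.25, 10.0, 200.0
ly, lr, M = 0.5, 20.0, 100.0

# Solve the 2x2 linear system by Cramer's rule for (Y_hat, r_hat).
det = s * (-lr) - d * ly
Y_hat = (I0 * (-lr) - d * M) / det
r_hat = (s * M - I0 * ly) / det
print(Y_hat, r_hat)  # 500.0 7.5
```

Raising M shifts the LM curve and, in this linear setting, lowers r̂ and raises Ŷ; a nearly vertical liquidity preference curve (lr close to zero near the lower limit on r) is the liquidity-trap case in which changes in M make almost no difference.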
If desired spending exceeds revenue, the government finances the difference by borrowing from capital markets by issuing government bonds. This is called deficit spending. Two points are worth noting here. First, deficits are not required for expansionary fiscal policy, and second, it is only "change" in net spending that can stimulate or depress the economy. For example, if a government ran a deficit of 10% both last year and this year, this would represent neutral fiscal policy. In fact, if it ran a deficit of 10% last year and 5% this year, this would actually be contractionary. On the other hand, if the government ran a surplus of 10% of GDP last year and 5% this year, that would be expansionary fiscal policy, despite never running a deficit at all. But – contrary to some critical characterizations of it – Keynesianism does not consist solely of deficit spending, since it recommends adjusting fiscal policies according to cyclical circumstances. An example of a counter-cyclical policy is raising taxes to cool the economy and to prevent inflation when there is abundant demand-side growth, and engaging in deficit spending on labour-intensive infrastructure projects to stimulate employment and stabilize wages during economic downturns. Keynes's ideas influenced Franklin D. Roosevelt's view that insufficient buying-power caused the Depression. During his presidency, Roosevelt adopted some aspects of Keynesian economics, especially after 1937, when, in the depths of the Depression, the United States suffered from recession yet again following fiscal contraction. But to many the true success of Keynesian policy can be seen at the onset of World War II, which provided a kick to the world economy, removed uncertainty, and forced the rebuilding of destroyed capital. Keynesian ideas became almost official in social-democratic Europe after the war and in the U.S. in the 1960s.
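The point that only the change in net spending matters can be reduced to a toy calculation. The sign convention below (deficit positive, surplus negative) is an assumption of this sketch, not something the text prescribes:

```python
# Fiscal stance as the *change* in the balance between two years,
# with deficits positive and surpluses negative (assumed convention).
def stance(last_year_balance, this_year_balance):
    change = this_year_balance - last_year_balance
    if change > 0:
        return "expansionary"
    if change < 0:
        return "contractionary"
    return "neutral"

print(stance(+10, +10))  # deficit of 10% both years -> neutral
print(stance(+10, +5))   # deficit falls from 10% to 5% -> contractionary
print(stance(-10, -5))   # surplus falls from 10% to 5% -> expansionary
```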
The Keynesian advocacy of deficit spending contrasted with the classical and neoclassical economic analysis of fiscal policy. They admitted that fiscal stimulus could spur production. But, to these schools, there was no reason to believe that this stimulation would outrun the side-effects that "crowd out" private investment: first, it would increase the demand for labour and raise wages, hurting profitability; second, a government deficit increases the stock of government bonds, reducing their market price and encouraging high interest rates, making it more expensive for business to finance fixed investment. Thus, efforts to stimulate the economy would be self-defeating. The Keynesian response is that such fiscal policy is appropriate only when unemployment is persistently high, above the non-accelerating inflation rate of unemployment (NAIRU). In that case, crowding out is minimal. Further, private investment can be "crowded in": fiscal stimulus raises the market for business output, raising cash flow and profitability, spurring business optimism. To Keynes, this accelerator effect meant that government and business could be "complements" rather than substitutes in this situation. Second, as the stimulus occurs, gross domestic product rises, raising the amount of saving and helping to finance the increase in fixed investment. Finally, government outlays need not always be wasteful: government investment in public goods that are not provided by profit-seekers encourages the private sector's growth. That is, government spending on such things as basic research, public health, education, and infrastructure could help the long-term growth of "potential output". In Keynes's theory, there must be significant slack in the labour market before fiscal expansion is justified.
Keynesian economists believe that adding to profits and incomes during boom cycles through tax cuts, and removing income and profits from the economy through cuts in spending during downturns, tends to exacerbate the negative effects of the business cycle. This effect is especially pronounced when the government controls a large fraction of the economy, as increased tax revenue may aid investment in state enterprises in downturns, while decreased state revenue and investment harm those enterprises. In the last few years of his life, John Maynard Keynes was much preoccupied with the question of balance in international trade. He was the leader of the British delegation to the United Nations Monetary and Financial Conference in 1944 that established the Bretton Woods system of international currency management. He was the principal author of a proposal – the so-called Keynes Plan – for an International Clearing Union. The two governing principles of the plan were that the problem of settling outstanding balances should be solved by 'creating' additional 'international money', and that debtor and creditor should be treated almost alike as disturbers of equilibrium. In the event, though, the plans were rejected, in part because "American opinion was naturally reluctant to accept the principle of equality of treatment so novel in debtor-creditor relationships". The proposed system was founded not on free trade (the liberalisation of foreign trade) but on regulating international trade to eliminate trade imbalances. Nations with a surplus would have a powerful incentive to get rid of it, which would automatically clear other nations' deficits. Keynes proposed a global bank that would issue its own currency, the bancor, which would be exchangeable with national currencies at fixed rates of exchange and would become the unit of account between nations, meaning it would be used to measure a country's trade deficit or trade surplus.
Every country would have an overdraft facility in its bancor account at the International Clearing Union. He pointed out that surpluses lead to weak global aggregate demand – countries running surpluses exert a "negative externality" on trading partners, and pose, far more than countries in deficit, a threat to global prosperity. Keynes thought that surplus countries should be taxed to avoid trade imbalances. In "National Self-Sufficiency" (The Yale Review, Vol. 22, no. 4, June 1933), he had already highlighted the problems created by free trade. His view, supported by many economists and commentators at the time, was that creditor nations may be just as responsible as debtor nations for disequilibrium in exchanges and that both should be under an obligation to bring trade back into a state of balance. Failure to do so could have serious consequences. In the words of Geoffrey Crowther, then editor of The Economist, "If the economic relationships between nations are not, by one means or another, brought fairly close to balance, then there is no set of financial arrangements that can rescue the world from the impoverishing results of chaos." These ideas were informed by events prior to the Great Depression when – in the opinion of Keynes and others – international lending, primarily by the U.S., exceeded the capacity of sound investment and so got diverted into non-productive and speculative uses, which in turn invited default and a sudden stop to the process of lending. Influenced by Keynes, economic texts in the immediate post-war period put a significant emphasis on balance in trade. For example, the second edition of the popular introductory textbook, "An Outline of Money", devoted the last three of its ten chapters to questions of foreign exchange management and in particular the 'problem of balance'.
However, in more recent years, since the end of the Bretton Woods system in 1971, with the increasing influence of Monetarist schools of thought in the 1980s, and particularly in the face of large sustained trade imbalances, these concerns – and particularly concerns about the destabilising effects of large trade surpluses – have largely disappeared from mainstream economics discourse and Keynes' insights have slipped from view. They are receiving some attention again in the wake of the financial crisis of 2007–08. Keynes's ideas became widely accepted after World War II, and until the early 1970s, Keynesian economics provided the main inspiration for economic policy makers in Western industrialized countries. Governments prepared high quality economic statistics on an ongoing basis and tried to base their policies on the Keynesian theory that had become the norm. In the early era of social liberalism and social democracy, most western capitalist countries enjoyed low, stable unemployment and modest inflation, an era called the Golden Age of Capitalism. In terms of policy, the twin tools of post-war Keynesian economics were fiscal policy and monetary policy. While these are credited to Keynes, others, such as economic historian David Colander, argue that they are, rather, due to the interpretation of Keynes by Abba Lerner in his theory of functional finance, and should instead be called "Lernerian" rather than "Keynesian". Through the 1950s, moderate degrees of government demand leading industrial development, and use of fiscal and monetary counter-cyclical policies continued, and reached a peak in the "go go" 1960s, where it seemed to many Keynesians that prosperity was now permanent. In 1971, Republican US President Richard Nixon even proclaimed "I am now a Keynesian in economics." 
Beginning in the late 1960s, a new classical macroeconomics movement arose, critical of Keynesian assumptions (see sticky prices), and seemed, especially in the 1970s, to explain certain phenomena better. It was characterized by explicit and rigorous adherence to microfoundations, as well as use of increasingly sophisticated mathematical modelling. With the oil shock of 1973, and the economic problems of the 1970s, Keynesian economics began to fall out of favour. During this time, many economies experienced high and rising unemployment, coupled with high and rising inflation, contradicting the Phillips curve's prediction. This stagflation meant that the simultaneous application of expansionary (anti-recession) and contractionary (anti-inflation) policies appeared necessary. This dilemma led to the end of the Keynesian near-consensus of the 1960s, and the rise throughout the 1970s of ideas based upon more classical analysis, including monetarism, supply-side economics, and new classical economics. However, by the late 1980s, certain failures of the new classical models, both theoretical (see Real business cycle theory) and empirical (see the "Volcker recession"), hastened the emergence of New Keynesian economics, a school that sought to unite the most realistic aspects of Keynesian and neo-classical assumptions and place them on a more rigorous theoretical foundation than ever before. One line of thinking, also used to criticize the notably high unemployment and potentially disappointing GNP growth rates associated with the new classical models by the mid-1980s, was to emphasize low unemployment and maximal economic growth at the cost of somewhat higher inflation, with inflation's consequences kept in check by indexing and other methods, and its overall rate kept lower and steadier by such potential policies as Martin Weitzman's share economy.
Multiple schools of economic thought that trace their legacy to Keynes currently exist, the notable ones being Neo-Keynesian economics, New Keynesian economics, and Post-Keynesian economics. Keynes's biographer Robert Skidelsky writes that the post-Keynesian school has remained closest to the spirit of Keynes's work in following his monetary theory and rejecting the neutrality of money. Today these ideas, regardless of provenance, are referred to in academia under the rubric of "Keynesian economics", due to Keynes's role in consolidating, elaborating, and popularizing them. In the postwar era, Keynesian analysis was combined with neoclassical economics to produce what is generally termed the "neoclassical synthesis", yielding Neo-Keynesian economics, which dominated mainstream macroeconomic thought. Though it was widely held that there was no strong automatic tendency to full employment, many believed that if government policy were used to ensure it, the economy would behave as neoclassical theory predicted. This post-war domination by Neo-Keynesian economics was broken during the stagflation of the 1970s. There was a lack of consensus among macroeconomists in the 1980s. However, the advent of New Keynesian economics in the 1990s modified and provided microeconomic foundations for the neo-Keynesian theories. These modified models now dominate mainstream economics. Post-Keynesian economists, on the other hand, reject the neoclassical synthesis and, in general, neoclassical economics applied to the macroeconomy. Post-Keynesian economics is a heterodox school that holds that both Neo-Keynesian economics and New Keynesian economics are incorrect, and a misinterpretation of Keynes's ideas. The Post-Keynesian school encompasses a variety of perspectives, but has been far less influential than the other more mainstream Keynesian schools.
Interpretations of Keynes have emphasized his stress on the international coordination of Keynesian policies, the need for international economic institutions, and the ways in which economic forces could lead to war or could promote peace. In a 2014 paper, economist Alan Blinder argues that, "for not very good reasons," public opinion in the United States has associated Keynesianism with liberalism, and he states that this association is incorrect. For example, both Presidents Ronald Reagan (1981–89) and George W. Bush (2001–09) supported policies that were, in fact, Keynesian, even though both men were conservative leaders. And tax cuts can provide highly helpful fiscal stimulus during a recession, just as much as infrastructure spending can. Blinder concludes, "If you are not teaching your students that 'Keynesianism' is neither conservative nor liberal, you should be." The Keynesian schools of economics are situated alongside a number of other schools that share their perspective on what the economic issues are, but differ on what causes them and how best to resolve them. Today, most of these schools of thought have been subsumed into modern macroeconomic theory. The Stockholm school rose to prominence at about the same time that Keynes published his General Theory, and the two shared a common concern with business cycles and unemployment. The second generation of Swedish economists also advocated government intervention through spending during economic downturns, although opinions are divided over whether they conceived the essence of Keynes's theory before he did. There was debate between monetarists and Keynesians in the 1960s over the role of government in stabilizing the economy. Both monetarists and Keynesians agreed that issues such as business cycles, unemployment, and deflation are caused by inadequate demand.
However, they had fundamentally different perspectives on the capacity of the economy to find its own equilibrium and on the degree of government intervention that would be appropriate. Keynesians emphasized the use of discretionary fiscal policy and monetary policy, while monetarists argued for the primacy of monetary policy, and that it should be rules-based. The debate was largely resolved in the 1980s. Since then, economists have largely agreed that central banks should bear the primary responsibility for stabilizing the economy, and that monetary policy should largely follow the Taylor rule – which many economists credit with the Great Moderation. The financial crisis of 2007–08, however, has convinced many economists and governments of the need for fiscal interventions and highlighted the difficulty in stimulating economies through monetary policy alone during a liquidity trap. Some Marxist economists criticized Keynesian economics. For example, in his 1946 appraisal Paul Sweezy – while admitting that there was much in the "General Theory"'s analysis of effective demand that Marxists could draw on – described Keynes as a prisoner of his neoclassical upbringing. Sweezy argued that Keynes had never been able to view the capitalist system as a totality. He argued that Keynes regarded the class struggle carelessly and overlooked the class role of the capitalist state, which Keynes treated as a "deus ex machina", among other points. While Michał Kalecki was generally enthusiastic about the Keynesian revolution, he predicted that it would not endure, in his article "Political Aspects of Full Employment".
In the article Kalecki predicted that the full employment delivered by Keynesian policy would eventually lead to a more assertive working class and a weakening of the social position of business leaders, causing the elite to use their political power to force the displacement of Keynesian policy even though profits would be higher than under a laissez-faire system: the erosion of social prestige and political power would be unacceptable to the elites despite the higher profits. James M. Buchanan criticized Keynesian economics on the grounds that governments would in practice be unlikely to implement theoretically optimal policies. The implicit assumption underlying the Keynesian fiscal revolution, according to Buchanan, was that economic policy would be made by wise men, acting without regard to political pressures or opportunities, and guided by disinterested economic technocrats. He argued that this was an unrealistic assumption about political, bureaucratic and electoral behaviour. Buchanan blamed Keynesian economics for what he considered a decline in America's fiscal discipline. Buchanan argued that deficit spending would evolve into a permanent disconnect between spending and revenue, precisely because it brings short-term gains, thereby institutionalizing irresponsibility in the federal government, the largest and most central institution in our society. Martin Feldstein argues that the legacy of Keynesian economics – the misdiagnosis of unemployment, the fear of saving, and the unjustified government intervention – affected the fundamental ideas of policy makers. Milton Friedman thought that Keynes's political bequest was harmful for two reasons. First, he thought that whatever the economic analysis, benevolent dictatorship is likely sooner or later to lead to a totalitarian society. Second, he thought that Keynes's economic theories appealed to a group far broader than economists primarily because of their link to his political approach.
Alex Tabarrok argues that Keynesian politics – as distinct from Keynesian policies – has failed pretty much whenever it has been tried, at least in liberal democracies. In response to this argument, John Quiggin wrote about these theories' implications for a liberal democratic order. He thought that if it is generally accepted that democratic politics is nothing more than a battleground for competing interest groups, then reality will come to resemble the model. Paul Krugman wrote, "I don't think we need to take that as an immutable fact of life; but still, what are the alternatives?" Daniel Kuehn criticized James M. Buchanan, arguing, "if you have a problem with politicians - criticize politicians," not Keynes. He also argued that the empirical evidence makes it pretty clear that Buchanan was wrong. James Tobin argued that, when advising government officials, politicians, and voters, it is not for economists to play games with them. Keynes implicitly rejected this argument, writing that "soon or late, it is ideas, not vested interests, which are dangerous for good or evil." Brad DeLong has argued that politics is the main motivator behind objections to the view that government should try to serve a stabilizing macroeconomic role. Paul Krugman argued that a regime that by and large lets markets work, but in which the government is ready both to rein in excesses and fight slumps, is inherently unstable, due to intellectual instability, political instability, and financial instability. Another influential school of thought was based on the Lucas critique of Keynesian economics. This called for greater consistency with microeconomic theory and rationality, and in particular emphasized the idea of rational expectations. Lucas and others argued that Keynesian economics required remarkably foolish and short-sighted behaviour from people, which totally contradicted the economic understanding of their behaviour at a micro level.
New classical economics introduced a set of macroeconomic theories that were based on optimizing microeconomic behaviour. These models have been developed into the real business-cycle theory, which argues that business cycle fluctuations can to a large extent be accounted for by real (in contrast to nominal) shocks. Beginning in the late 1950s, new classical macroeconomists began to disagree with the methodology employed by Keynes and his successors. Keynesians emphasized the dependence of consumption on disposable income and, also, of investment on current profits and current cash flow. In addition, Keynesians posited a Phillips curve that tied nominal wage inflation to the unemployment rate. To support these theories, Keynesians typically traced the logical foundations of their model (using introspection) and supported their assumptions with statistical evidence. New classical theorists demanded that macroeconomics be grounded on the same foundations as microeconomic theory: profit-maximizing firms and rational, utility-maximizing consumers. The result of this shift in methodology produced several important divergences from Keynesian macroeconomics:
https://en.wikipedia.org/wiki?curid=17326
Kinetic energy In physics, the kinetic energy (KE) of an object is the energy that it possesses due to its motion. It is defined as the work needed to accelerate a body of a given mass from rest to its stated velocity. Having gained this energy during its acceleration, the body maintains this kinetic energy unless its speed changes. The same amount of work is done by the body when decelerating from its current speed to a state of rest. In classical mechanics, the kinetic energy of a non-rotating object of mass "m" traveling at a speed "v" is ½"mv"². In relativistic mechanics, this is a good approximation only when "v" is much less than the speed of light. The standard unit of kinetic energy is the joule, while the imperial unit of kinetic energy is the foot-pound. The adjective "kinetic" has its roots in the Greek word κίνησις "kinesis", meaning "motion". The dichotomy between kinetic energy and potential energy can be traced back to Aristotle's concepts of actuality and potentiality. The principle in classical mechanics that "E" ∝ "mv"² was first developed by Gottfried Leibniz and Johann Bernoulli, who described kinetic energy as the "living force", "vis viva". Willem 's Gravesande of the Netherlands provided experimental evidence of this relationship. By dropping weights from different heights into a block of clay, Willem 's Gravesande determined that their penetration depth was proportional to the square of their impact speed. Émilie du Châtelet recognized the implications of the experiment and published an explanation. The terms "kinetic energy" and "work" in their present scientific meanings date back to the mid-19th century. Early understandings of these ideas can be attributed to Gaspard-Gustave Coriolis, who in 1829 published the paper titled "Du Calcul de l'Effet des Machines" outlining the mathematics of kinetic energy. William Thomson, later Lord Kelvin, is given the credit for coining the term "kinetic energy" c. 1849–51.
Energy occurs in many forms, including chemical energy, thermal energy, electromagnetic radiation, gravitational energy, electric energy, elastic energy, nuclear energy, and rest energy. These can be categorized in two main classes: potential energy and kinetic energy. Kinetic energy is the movement energy of an object. Kinetic energy can be transferred between objects and transformed into other kinds of energy. Kinetic energy may be best understood by examples that demonstrate how it is transformed to and from other forms of energy. For example, a cyclist uses chemical energy provided by food to accelerate a bicycle to a chosen speed. On a level surface, this speed can be maintained without further work, except to overcome air resistance and friction. The chemical energy has been converted into kinetic energy, the energy of motion, but the process is not completely efficient and produces heat within the cyclist. The kinetic energy in the moving cyclist and the bicycle can be converted to other forms. For example, the cyclist could encounter a hill just high enough to coast up, so that the bicycle comes to a complete halt at the top. The kinetic energy has now largely been converted to gravitational potential energy that can be released by freewheeling down the other side of the hill. Since the bicycle lost some of its energy to friction, it never regains all of its speed without additional pedaling. The energy is not destroyed; it has only been converted to another form by friction. Alternatively, the cyclist could connect a dynamo to one of the wheels and generate some electrical energy on the descent. The bicycle would be traveling slower at the bottom of the hill than without the generator because some of the energy has been diverted into electrical energy. Another possibility would be for the cyclist to apply the brakes, in which case the kinetic energy would be dissipated through friction as heat. 
Like any physical quantity that is a function of velocity, the kinetic energy of an object depends on the relationship between the object and the observer's frame of reference. Thus, the kinetic energy of an object is not invariant. Spacecraft use chemical energy to launch and gain considerable kinetic energy to reach orbital velocity. In an entirely circular orbit, this kinetic energy remains constant because there is almost no friction in near-earth space. However, it becomes apparent at re-entry when some of the kinetic energy is converted to heat. If the orbit is elliptical or hyperbolic, then throughout the orbit kinetic and potential energy are exchanged; kinetic energy is greatest and potential energy lowest at closest approach to the earth or other massive body, while potential energy is greatest and kinetic energy the lowest at maximum distance. Without loss or gain, however, the sum of the kinetic and potential energy remains constant. Kinetic energy can be passed from one object to another. In the game of billiards, the player imparts kinetic energy to the cue ball by striking it with the cue stick. If the cue ball collides with another ball, it slows down dramatically, and the ball it hit accelerates as the kinetic energy is passed on to it. Collisions in billiards are effectively elastic collisions, in which kinetic energy is preserved. In inelastic collisions, kinetic energy is dissipated as other forms of energy, such as heat, sound, and binding energy (the breaking of bound structures). Flywheels have been developed as a method of energy storage. This illustrates that kinetic energy is also stored in rotational motion. Several mathematical descriptions of kinetic energy exist that describe it in the appropriate physical situation. For objects and processes in common human experience, the formula ½mv² given by Newtonian (classical) mechanics is suitable.
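The billiard-ball transfer described above can be sketched as an idealised one-dimensional elastic collision, using the standard formulas for the final velocities (the masses and speeds below are illustrative assumptions):

```python
# One-dimensional elastic collision: momentum and kinetic energy
# are both conserved; equal masses simply exchange velocities.
def elastic_1d(m1, u1, m2, u2):
    """Final velocities of two bodies after a 1-D elastic collision."""
    v1 = ((m1 - m2) * u1 + 2 * m2 * u2) / (m1 + m2)
    v2 = ((m2 - m1) * u2 + 2 * m1 * u1) / (m1 + m2)
    return v1, v2

# A 0.17 kg cue ball at 2 m/s strikes an identical resting ball.
v1, v2 = elastic_1d(0.17, 2.0, 0.17, 0.0)
print(v1, v2)  # 0.0 2.0 -- the cue ball stops, the object ball moves off
```

With equal masses, all of the cue ball's kinetic energy (½ × 0.17 × 2² = 0.34 J) is transferred to the object ball, as the text describes.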
However, if the speed of the object is comparable to the speed of light, relativistic effects become significant and the relativistic formula is used. If the object is on the atomic or sub-atomic scale, quantum mechanical effects are significant, and a quantum mechanical model must be employed. In classical mechanics, the kinetic energy of a "point object" (an object so small that its mass can be assumed to exist at one point), or a non-rotating rigid body, depends on the mass of the body as well as its speed. The kinetic energy is equal to 1/2 the product of the mass and the square of the speed. In formula form: E_k = ½mv², where m is the mass and v is the speed (the magnitude of the velocity) of the body. In SI units, mass is measured in kilograms, speed in metres per second, and the resulting kinetic energy is in joules. For example, one would calculate the kinetic energy of an 80 kg mass (about 180 lbs) traveling at 18 metres per second (about 40 mph, or 65 km/h) as E_k = ½ × 80 kg × (18 m/s)² = 12,960 J ≈ 13 kJ. When a person throws a ball, the person does work on it to give it speed as it leaves the hand. The moving ball can then hit something and push it, doing work on what it hits. The kinetic energy of a moving object is equal to the work required to bring it from rest to that speed, or the work the object can do while being brought to rest: net force × displacement = kinetic energy, i.e., Fs = ½mv². Since the kinetic energy increases with the square of the speed, an object doubling its speed has four times as much kinetic energy. For example, a car traveling twice as fast as another requires four times as much distance to stop, assuming a constant braking force. As a consequence of this quadrupling, it takes four times the work to double the speed. 
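The worked example above, the quadrupling rule, and the momentum form E_k = p²/2m used in the next paragraph can all be checked directly. A minimal sketch:

```python
def kinetic_energy(m, v):
    """Classical kinetic energy, E_k = (1/2) m v^2 (SI units: kg, m/s -> J)."""
    return 0.5 * m * v * v

# The worked example from the text: 80 kg at 18 m/s.
print(kinetic_energy(80.0, 18.0))  # 12960.0

# Doubling the speed quadruples the kinetic energy (and the stopping distance).
print(kinetic_energy(80.0, 36.0) / kinetic_energy(80.0, 18.0))  # 4.0

# Consistency with the momentum form E_k = p^2 / (2 m):
m, v = 80.0, 18.0
p = m * v
print(p * p / (2.0 * m) == kinetic_energy(m, v))  # True
```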
The kinetic energy of an object is related to its momentum by the equation: E_k = p²/2m, where p is the momentum of the body and m is its mass. For the "translational kinetic energy," that is, the kinetic energy associated with rectilinear motion, of a rigid body with constant mass m, whose center of mass is moving in a straight line with speed v, the same expression applies, as seen above: E_t = ½mv², where m is the mass of the body and v is the speed of its center of mass. The kinetic energy of any entity depends on the reference frame in which it is measured. However the total energy of an isolated system, i.e. one in which energy can neither enter nor leave, does not change over time in the reference frame in which it is measured. Thus, the chemical energy converted to kinetic energy by a rocket engine is divided differently between the rocket ship and its exhaust stream depending upon the chosen reference frame. This is called the Oberth effect. But the total energy of the system, including kinetic energy, fuel chemical energy, heat, etc., is conserved over time, regardless of the choice of reference frame. Different observers moving with different reference frames would however disagree on the value of this conserved energy. The kinetic energy of such systems depends on the choice of reference frame: the reference frame that gives the minimum value of that energy is the center of momentum frame, i.e. the reference frame in which the total momentum of the system is zero. This minimum kinetic energy contributes to the invariant mass of the system as a whole. The work done in accelerating a particle with mass m during the infinitesimal time interval dt is given by the dot product of the force F and the infinitesimal displacement dx: dE_k = F · dx = (dp/dt) · dx = v · dp = v · d(mv), where we have assumed the relationship p = mv and the validity of Newton's Second Law. (However, also see the special relativistic derivation below.) 
Applying the product rule we see that: d(v · v) = (dv) · v + v · (dv) = 2(v · dv). Therefore, (assuming constant mass so that dm = 0), we have dE_k = v · d(mv) = m v · dv = (m/2) d(v · v) = d(½mv²). Since this is a total differential (that is, it only depends on the final state, not how the particle got there), we can integrate it and call the result kinetic energy. Assuming the object was at rest at time 0, we integrate from time 0 to time t because the work done by the force to bring the object from rest to velocity v is equal to the work necessary to do the reverse: E_k = ∫ F · dx = ∫ v · dp = ½mv². This equation states that the kinetic energy (E_k) is equal to the integral of the dot product of the velocity (v) of a body and the infinitesimal change of the body's momentum (p). It is assumed that the body starts with no kinetic energy when it is at rest (motionless). If a rigid body Q is rotating about any line through the center of mass then it has "rotational kinetic energy" (E_r) which is simply the sum of the kinetic energies of its moving parts, and is thus given by: E_r = ½Iω², where I is the body's moment of inertia about the axis of rotation and ω is the body's angular speed. (In this equation the moment of inertia must be taken about an axis through the center of mass and the rotation measured by ω must be around that axis; more general equations exist for systems where the object is subject to wobble due to its eccentric shape.) A system of bodies may have internal kinetic energy due to the relative motion of the bodies in the system. For example, in the Solar System the planets and planetoids are orbiting the Sun. In a tank of gas, the molecules are moving in all directions. The kinetic energy of the system is the sum of the kinetic energies of the bodies it contains. A macroscopic body that is stationary (i.e. a reference frame has been chosen to correspond to the body's center of momentum) may have various kinds of internal energy at the molecular or atomic level, which may be regarded as kinetic energy, due to molecular translation, rotation, and vibration, electron translation and spin, and nuclear spin. 
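The rotational kinetic energy formula E_r = ½Iω² can be sketched for a flywheel, the energy-storage application mentioned earlier (the disk's mass, radius, and angular speed are illustrative values, not from the text):

```python
def rotational_ke(inertia, omega):
    """Rotational kinetic energy, E_r = (1/2) I omega^2."""
    return 0.5 * inertia * omega * omega

# Solid-disk flywheel: moment of inertia I = (1/2) m r^2.
m, r = 100.0, 0.5          # mass in kg, radius in m
inertia = 0.5 * m * r * r  # 12.5 kg*m^2
omega = 200.0              # angular speed in rad/s

print(rotational_ke(inertia, omega))  # 250000.0  (joules)
```

A body both translating and spinning, like the tennis ball discussed below, has total kinetic energy ½mv² + ½Iω².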
These all contribute to the body's mass, as provided by the special theory of relativity. When discussing movements of a macroscopic body, the kinetic energy referred to is usually that of the macroscopic movement only. However, all internal energies of all types contribute to a body's mass, inertia, and total energy. In fluid dynamics, the kinetic energy per unit volume at each point in an incompressible fluid flow field is called the dynamic pressure at that point. Dividing by V, the unit of volume: q = ½ρv², where q is the dynamic pressure, ρ is the density of the incompressible fluid, and v is the flow speed. The speed, and thus the kinetic energy, of a single object is frame-dependent (relative): it can take any non-negative value, by choosing a suitable inertial frame of reference. For example, a bullet passing an observer has kinetic energy in the reference frame of this observer. The same bullet is stationary to an observer moving with the same velocity as the bullet, and so has zero kinetic energy. By contrast, the total kinetic energy of a system of objects cannot be reduced to zero by a suitable choice of the inertial reference frame, unless all the objects have the same velocity. In any other case, the total kinetic energy has a non-zero minimum, as no inertial reference frame can be chosen in which all the objects are stationary. This minimum kinetic energy contributes to the system's invariant mass, which is independent of the reference frame. The total kinetic energy of a system depends on the inertial frame of reference: it is the sum of the total kinetic energy in a center of momentum frame and the kinetic energy the total mass would have if it were concentrated in the center of mass. This may be simply shown: let V be the relative velocity of the center of mass frame i in the frame k. 
Since the velocity in frame k of a particle with velocity v_i in the center of mass frame is v_k = v_i + V, then v_k² = v_k · v_k = (v_i + V) · (v_i + V) = v_i² + 2 v_i · V + V². However, let E_i = ∫ (v_i²/2) dm be the kinetic energy in the center of mass frame; ∫ v_i dm would be simply the total momentum, which is by definition zero in the center of mass frame; and let the total mass be M = ∫ dm. Substituting, we get: E_k = E_i + (MV²)/2. Thus the kinetic energy of a system is lowest in center of momentum reference frames, i.e., frames of reference in which the center of mass is stationary (either the center of mass frame or any other center of momentum frame). In any different frame of reference, there is additional kinetic energy corresponding to the total mass moving at the speed of the center of mass. The kinetic energy of the system in the center of momentum frame is a quantity that is invariant (all observers see it to be the same). It is sometimes convenient to split the total kinetic energy of a body into the sum of the body's center-of-mass translational kinetic energy and the energy of rotation around the center of mass (rotational energy): E_k = E_t + E_r, where E_t is the translational kinetic energy of the center of mass and E_r is the rotational energy about the center of mass. Thus the kinetic energy of a tennis ball in flight is the kinetic energy due to its rotation, plus the kinetic energy due to its translation. If a body's speed is a significant fraction of the speed of light, it is necessary to use relativistic mechanics to calculate its kinetic energy. In special relativity theory, the expression for linear momentum is modified. With m being an object's rest mass, v its velocity (of magnitude v), and c the speed of light in vacuum, the expression for linear momentum is p = mγv, where γ = 1/√(1 − v²/c²). Since the work done on the object is E_k = ∫ v · dp, integrating by parts yields E_k = pv − ∫ p dv = mγv² − m ∫ γv dv. Since γ = (1 − v²/c²)^(−1/2), the indefinite integral is ∫ γv dv = −c²√(1 − v²/c²) = −c²/γ, so E_k = mγv² + mc²√(1 − v²/c²) − E₀ = mγ(v² + c²(1 − v²/c²)) − E₀ = mγc² − E₀, where E₀ is a constant of integration. E₀ is found by observing that γ = 1 and E_k = 0 when v = 0, giving E₀ = mc² and resulting in the formula E_k = mγc² − mc² = (γ − 1)mc² = mc²(1/√(1 − v²/c²) − 1). This formula shows that the work expended accelerating an object from rest approaches infinity as the velocity approaches the speed of light. Thus it is impossible to accelerate an object across this boundary. 
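The relativistic result E_k = (γ − 1)mc² can be compared with ½mv² numerically. The sketch below rewrites γ − 1 as β²/(s(1 + s)) with β = v/c and s = √(1 − β²), which is algebraically identical but avoids catastrophic cancellation at everyday speeds; the chosen speeds are illustrative:

```python
import math

c = 299_792_458.0  # speed of light, m/s

def ke_classical(m, v):
    return 0.5 * m * v * v

def ke_relativistic(m, v):
    """(gamma - 1) m c^2, written to stay accurate when v << c."""
    b2 = (v / c) ** 2
    s = math.sqrt(1.0 - b2)
    return m * c * c * b2 / (s * (1.0 + s))

# At everyday speeds the two formulas agree closely...
print(ke_relativistic(1.0, 100.0) / ke_classical(1.0, 100.0))  # ~1.0
# ...but at half the speed of light the classical value is far too small.
print(ke_relativistic(1.0, 0.5 * c) / ke_classical(1.0, 0.5 * c))  # ~1.24
# The 0.0417 J/kg correction quoted in the text corresponds to v = 10 km/s:
print(round(ke_relativistic(1.0, 1.0e4) - ke_classical(1.0, 1.0e4), 4))  # 0.0417
```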
The mathematical by-product of this calculation is the mass–energy equivalence formula: the body at rest must have energy content E₀ = mc². At a low speed (v ≪ c), the relativistic kinetic energy is approximated well by the classical kinetic energy. This is done by binomial approximation or by taking the first two terms of the Taylor expansion for the reciprocal square root: 1/√(1 − v²/c²) ≈ 1 + v²/(2c²) + (3/8)(v²/c²)² + ..., so that E_k ≈ mc² + ½mv² + (3/8)mv⁴/c² + ... − mc² = ½mv² + (3/8)mv⁴/c² + ... So, the total energy E = E_k + mc² can be partitioned into the rest mass energy plus the Newtonian kinetic energy at low speeds. When objects move at a speed much slower than light (e.g. in everyday phenomena on Earth), the first two terms of the series predominate. The next term in the Taylor series approximation, (3/8)mv⁴/c², is small for low speeds. For example, for a speed of 10 km/s the correction to the Newtonian kinetic energy is 0.0417 J/kg (on a Newtonian kinetic energy of 50 MJ/kg) and for a speed of 100 km/s it is 417 J/kg (on a Newtonian kinetic energy of 5 GJ/kg). The relativistic relation between kinetic energy and momentum is given by E_k = √(p²c² + m²c⁴) − mc². This can also be expanded as a Taylor series, the first term of which is the simple expression from Newtonian mechanics: E_k ≈ p²/2m − p⁴/(8m³c²) + ... This suggests that the formulae for energy and momentum are not special and axiomatic, but concepts emerging from the equivalence of mass and energy and the principles of relativity. Using the convention that g_αβ u^α u^β = −c², where the four-velocity of a particle is u^α = dx^α/dτ and τ is the proper time of the particle, there is also an expression for the kinetic energy of the particle in general relativity. If the particle has momentum p_β = m g_βα u^α as it passes by an observer with four-velocity u_obs, then the expression for total energy of the particle as observed (measured in a local inertial frame) is E = −p_β u_obs^β and the kinetic energy can be expressed as the total energy minus the rest energy: E_k = −p_β u_obs^β − mc². Consider the case of a metric that is diagonal and spatially isotropic (g_tt, g_ss, g_ss, g_ss). Since u^α = (dx^α/dt)(dt/dτ) = v^α u^t, where v^α is the ordinary velocity measured w.r.t. 
the coordinate system, we get −c² = g_αβ u^α u^β = (g_tt + g_ss v²)(u^t)². Solving for u^t gives u^t = c√(−1/(g_tt + g_ss v²)). Thus for a stationary observer (v = 0), u_obs^t = c√(−1/g_tt), and thus the kinetic energy takes the form E_k = −m g_tt u^t u_obs^t − mc². Factoring out the rest energy gives: E_k = mc²(√(g_tt/(g_tt + g_ss v²)) − 1). This expression reduces to the special relativistic case for the flat-space metric, where g_tt = −c² and g_ss = 1. In the Newtonian approximation to general relativity, g_tt = −(c² + 2Φ) and g_ss = 1 − 2Φ/c², where Φ is the Newtonian gravitational potential. This means clocks run slower and measuring rods are shorter near massive bodies. In quantum mechanics, observables like kinetic energy are represented as operators. For one particle of mass m, the kinetic energy operator appears as a term in the Hamiltonian and is defined in terms of the more fundamental momentum operator p̂. The kinetic energy operator in the non-relativistic case can be written as T̂ = p̂²/2m. Notice that this can be obtained by replacing p by p̂ in the classical expression for kinetic energy in terms of momentum, E_k = p²/2m. In the Schrödinger picture, p̂ takes the form −iℏ∇, where the derivative is taken with respect to position coordinates, and hence T̂ = −(ℏ²/2m)∇². The expectation value of the electron kinetic energy, ⟨T̂⟩, for a system of N electrons described by the wavefunction |ψ⟩ is a sum of 1-electron operator expectation values: ⟨T̂⟩ = −(ℏ²/2m_e) Σᵢ ⟨ψ|∇ᵢ²|ψ⟩, where m_e is the mass of the electron, ∇ᵢ² is the Laplacian operator acting upon the coordinates of the ith electron, and the summation runs over all electrons. The density functional formalism of quantum mechanics requires knowledge of the electron density "only", i.e., it formally does not require knowledge of the wavefunction. Given an electron density ρ(r), the exact N-electron kinetic energy functional is unknown; however, for the specific case of a 1-electron system, the kinetic energy can be written as T[ρ] = (ℏ²/8m) ∫ (|∇ρ(r)|²/ρ(r)) d³r, which is known as the von Weizsäcker kinetic energy functional.
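The kinetic energy operator −(ℏ²/2m)∇² can be checked numerically for the simplest bound system, a particle in a box of width L, whose ground state ψ(x) = √(2/L) sin(πx/L) has ⟨T̂⟩ = ℏ²π²/2mL². A minimal sketch in natural units (ℏ = m = L = 1, so the exact value is π²/2), using a central-difference Laplacian:

```python
import math

N = 2000
dx = 1.0 / N
# Ground-state wavefunction of a particle in a unit box, psi(0) = psi(1) = 0.
psi = [math.sqrt(2.0) * math.sin(math.pi * i * dx) for i in range(N + 1)]

# <T> = integral of psi * (-1/2 d^2/dx^2) psi, via a finite-difference Laplacian.
T = 0.0
for i in range(1, N):
    lap = (psi[i - 1] - 2.0 * psi[i] + psi[i + 1]) / dx**2
    T += -0.5 * psi[i] * lap * dx

print(round(T, 3))  # 4.935, i.e. pi^2 / 2
```

Refining the grid (larger N) drives the estimate toward the analytic value, as expected for a second-order finite-difference scheme.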
https://en.wikipedia.org/wiki?curid=17327
Khoisan languages The Khoisan languages (also Khoesan or Khoesaan) are a group of African languages originally classified together by Joseph Greenberg. Khoisan languages share click consonants and do not belong to other African language families. For much of the 20th century, they were thought to be genealogically related to each other, but this is no longer accepted. They are now held to comprise three distinct language families and two language isolates. All Khoisan languages but two are indigenous to southern Africa and belong to three language families. The Khoe family appears to have migrated to southern Africa not long before the Bantu expansion. Ethnically, their speakers are the Khoikhoi and the San (Bushmen). Two languages of east Africa, those of the Sandawe and Hadza, originally were also classified as Khoisan, although their speakers are ethnically neither Khoikhoi nor San. Before the Bantu expansion, Khoisan languages, or languages like them, were likely spread throughout southern and eastern Africa. They are currently restricted to the Kalahari Desert, primarily in Namibia and Botswana, and to the Rift Valley in central Tanzania. Most of the languages are endangered, and several are moribund or extinct. Most have no written record. The only widespread Khoisan language is Khoekhoe (or Nàmá) of Namibia, with a quarter of a million speakers; Sandawe in Tanzania is second in number with some 40,000–80,000 speakers, some of them monolingual; and the ǃKung language of the northern Kalahari is spoken by some 16,000 people. Language use is quite strong among the 20,000 speakers of Naro, half of whom speak it as a second language. Khoisan languages are best known for their use of click consonants as phonemes. These are typically written with characters such as ǃ and ǂ. Clicks are quite versatile as consonants, as they involve two articulations of the tongue which can operate partially independently. 
Consequently, the languages with the greatest numbers of consonants in the world are Khoisan. The Juǀʼhoan language has 48 click consonants alongside nearly as many non-click consonants, strident and pharyngealized vowels, and four tones. The ǃXóõ and ǂHõã languages are even more complex. Khoisan was proposed as one of the four families of African languages in Joseph Greenberg's classification (1949–1954, revised in 1963). However, linguists who study Khoisan languages reject their unity, and the name "Khoisan" is used by them as a term of convenience without any implication of linguistic validity, much as "Papuan" and "Australian" are. It has been suggested that the similarities of the Tuu and Kxʼa families are due to a southern African Sprachbund rather than a genealogical relationship, whereas the Khoe (or perhaps Kwadi–Khoe) family is a more recent migrant to the area, and may be related to Sandawe in East Africa. Ernst Oswald Johannes Westphal is known for his early rejection of the Khoisan language family (Starostin 2003). Bonny Sands (1998) concluded that the family is not demonstrable with current evidence. Anthony Traill at first accepted Khoisan (Traill 1986), but by 1998 concluded that it could not be demonstrated with current data and methods, rejecting it as based on a single typological criterion: the presence of clicks. Dimmendaal (2008) summarized the general view with, "it has to be concluded that Greenberg's intuitions on the genetic unity of Khoisan could not be confirmed by subsequent research. Today, the few scholars working on these languages treat the three [southern groups] as independent language families that cannot or can no longer be shown to be genetically related" (p. 841). Starostin (2013) accepts that a relationship between Sandawe and Khoi is plausible, as is one between Tuu and Kxʼa, but sees no indication of a relationship between Sandawe and Khoi on the one hand and Tuu and Kxʼa on the other, or between any of them and Hadza. 
Janina Brutt-Griffler claims, "given that such colonial borders were generally arbitrarily drawn, they grouped large numbers of ethnic groups that spoke many languages." She hypothesizes that this took place within efforts to prevent the spread of English during European colonization and prevent the entrance of the majority into the middle class. Anthony Traill noted the Khoisan languages' extreme variation. Despite their shared clicks, the Khoisan languages diverge significantly from each other. Traill demonstrated this linguistic diversity in the data presented in the below table. The first two columns include words from the two Khoisan language isolates, Sandawe and Hadza. The following three are languages from the Khoe family, the Kxʼa family, and the Tuu family, respectively. The branches that were once considered part of so-called Khoisan are now considered independent families, since it has not been demonstrated that they are related according to the standard comparative method. See Khoe languages for speculations on the linguistic history of the region. With about 800 speakers in Tanzania, Hadza is no longer seen as a Khoisan language and appears to be unrelated to any other language. Genetically, the Hadza people are unrelated to the Khoisan peoples of Southern Africa, and their closest relatives may be among the Pygmies of Central Africa. There is some indication that Sandawe (about 40,000 speakers in Tanzania) may be related to the Khoe family, such as a congruent pronominal system and some good Swadesh-list matches, but not enough to establish regular sound correspondences. Sandawe is not related to Hadza, despite their proximity. The Khoe family is both the most numerous and diverse family of Khoisan languages, with seven living languages and over a quarter million speakers. Although little Kwadi data is available, proto-Khoe–Kwadi reconstructions have been made for pronouns and some basic vocabulary. 
A Haiǁom language is listed in most Khoisan references. A century ago the Haiǁom people spoke a Ju dialect, probably close to ǃKung, but they now speak a divergent dialect of Nama. Thus their language is variously said to be extinct or to have 18,000 speakers, to be Ju or to be Khoe. (Their numbers have been included under Nama above.) They are known as the "Saa" by the Nama, and this is the source of the word "San". The Tuu family consists of two language clusters, which are related to each other at about the distance of Khoekhoe and Tshukhwe within Khoe. They are typologically very similar to the Kxʼa languages (below), but have not been demonstrated to be related to them genealogically (the similarities may be an areal feature). The Kxʼa family is a grouping whose relatively distant internal relationship was formally demonstrated only in 2010. Not all languages using clicks as phonemes are considered Khoisan. Most others are neighboring Bantu languages in southern Africa: the Nguni languages (Xhosa, Zulu, Swazi, Phuthi, and Northern Ndebele); Sotho; Yeyi in Botswana; and Mbukushu, Kwangali, and Gciriku in the Caprivi Strip. Clicks are spreading to a few additional neighboring languages. Of these languages, Xhosa, Zulu, Ndebele and Yeyi have intricate systems of click consonants; the others, despite the click in the name "Gciriku", have more rudimentary ones. There is also the South Cushitic language Dahalo in Kenya, which has dental clicks in a few score words, and an extinct and presumably artificial Australian ritual language called Damin, which had only nasal clicks. The Bantu languages adopted the use of clicks from neighboring, displaced, or absorbed Khoisan populations (or from other Bantu languages), often through intermarriage, while the Dahalo are thought to have retained clicks from an earlier language when they shifted to speaking a Cushitic language; if so, the pre-Dahalo language may have been something like Hadza or Sandawe. 
Damin is an invented ritual language, and has nothing to do with Khoisan. These are the only languages known to have clicks in normal vocabulary. Occasionally other languages are said by laypeople to have "click" sounds. This is usually a misnomer for ejective consonants, which are found across much of the world, or is a reference to paralinguistic use of clicks such as English "tsk! tsk!"
https://en.wikipedia.org/wiki?curid=17333
Katina Paxinou Katina Paxinou (15 December 1900 – 22 February 1973) was a Greek film and stage actress. She started her stage career in Greece in 1928 and was one of the founding members of the National Theatre of Greece in 1932. The outbreak of World War II found her in the United Kingdom and she later moved to the United States, where she made her film debut in "For Whom the Bell Tolls" (1943) and won the Academy Award for Best Supporting Actress and the Golden Globe Award for Best Supporting Actress. She appeared in a few more Hollywood films before returning to Greece in the early 1950s. She became a naturalized citizen of the United States in 1951. She then focused on her stage career and appeared in a number of European films including "Rocco and His Brothers" (1960). Paxinou was born Ekaterini Konstantopoulou, the daughter of Vassilis Konstantopoulos and Eleni Malandrinou. She trained as an opera singer at the Conservatoire de Musique de Genève and later in Berlin and Vienna. According to her biography in a 1942 "Playbill", Paxinou's family disowned her after she decided to seek a permanent stage career. Paxinou made her debut at the Municipal Theatre of Piraeus in 1920 in the operatic version of Maurice Maeterlinck's "Sister Beatrice", with a score by Dimitri Mitropoulos. She first appeared in a play in 1928, as a member of Marika Kotopouli's troupe, in an Athens production of Henry Bataille's "The Naked Woman". In 1931, she joined Aimilios Veakis' troupe along with Alexis Minotis, where she translated and appeared in the first of Eugene O'Neill's plays to be staged in Greece, "Desire Under the Elms". She also appeared in Anton Chekhov's "Uncle Vanya" and August Strindberg's "The Father". In 1932, Paxinou was among the actors that inaugurated the recently re-founded National Theatre of Greece, where she worked until 1940. 
During her stay in the National Theatre, she distinguished herself on Greek stage starring in major plays, such as Sophocles' "Electra", Henrik Ibsen's "Ghosts" and William Shakespeare's "Hamlet", which were also performed in London, Frankfurt and Berlin. When World War II began, Paxinou was performing in London. Unable to return to Greece, she emigrated in May 1941 to the United States, where she had earlier appeared in 1931, performing Clytemnestra in a modern Greek version of "Electra". She was selected to play the role of Pilar in the film "For Whom the Bell Tolls" (1943), for which she won an Oscar and a Golden Globe Award for Best Supporting Actress - Motion Picture. She made one British film, "Uncle Silas" (1947), which features Jean Simmons in the main female role and worked in Italy for 20th Century Fox, playing the mother of Tyrone Power's character in "Prince of Foxes" (1949). After this film, Paxinou worked for a Hollywood studio only once more, again playing a gypsy woman in the religious epic "The Miracle" (1959). In 1950, Paxinou resumed her stage career. In her native Greece, she formed the Royal Theatre of Athens with Alexis Minotis, her principal director and husband since 1940. Paxinou made several appearances on the Broadway stage and television as well. She played the lead in Ibsen's "Hedda Gabler" for 12 performances at New York City's Longacre Theatre, opening on 28 June 1942. She also played the principal role in the first production in English of Federico Garcia Lorca's "The House of Bernarda Alba", at the ANTA Playhouse in New York in 1951, and a BBC television production of Lorca's "Blood Wedding" ("Bodas de sangre"), broadcast on 2 June 1959. Paxinou died after a long battle with cancer in Athens on 22 February 1973 at the age of 72. She was survived by her husband and her one daughter from her first marriage to Ioannis Paxinos, whose surname she had been using after their divorce. Her remains are buried at First Cemetery of Athens. 
The Paxinou-Minotis Museum is an Athens museum featuring memorabilia of the life of Paxinou, including furniture, paintings and sketches, photographs, books and personal effects donated by Paxinou's husband, director Alexis Minotis, and include his personal library and theatrical archive.
https://en.wikipedia.org/wiki?curid=17334
Klaus Barbie Nikolaus Barbie (25 October 1913 – 25 September 1991) was an SS and Gestapo functionary during the Nazi era. He was known as the "Butcher of Lyon" for having personally tortured prisoners of the Gestapo – primarily Jews and members of La Résistance – while stationed in Lyon under the collaborationist Vichy regime. After the war, United States intelligence services employed him for his anti-Marxist efforts and also aided his escape to Bolivia. The West German Intelligence Service later recruited him. Barbie is suspected of having had a hand in the Bolivian coup d'état orchestrated by Luis García Meza in 1980. After the fall of the dictatorship, Barbie no longer had the protection of the government in La Paz and in 1983 was extradited to France, where he was convicted of crimes against humanity. He died of cancer in prison on 25 September 1991. Nikolaus "Klaus" Barbie was born on 25 October 1913 in Godesberg, later renamed Bad Godesberg, which is today part of Bonn. The Barbie family came from Merzig, in the Saar near the French border. It is likely that his patrilineal ancestors were French Roman Catholics named Barbier who left France at the time of the French Revolution. In 1914, his father, also named Nikolaus, was conscripted to fight in the First World War. He returned an angry, bitter man. He was wounded in the neck at Verdun and captured by the French, whom he hated, and he never recovered his health. He became an alcoholic who abused his children. Until 1923, when he was 10, Klaus Barbie attended the local school where his father taught. Afterwards, he attended a boarding school in Trier, and was relieved to be away from his abusive father. In 1925, the entire Barbie family moved to Trier. In June 1933, Barbie's younger brother, Kurt, died at the age of 18 of chronic illness. Later that year, their father died. 
The death of his father derailed plans for the 20-year-old Barbie to study theology, or otherwise become an academic, as his peers had expected. While unemployed, Barbie was conscripted into the Nazi labour service, the "Reichsarbeitsdienst". On 26 September 1935, aged 22, he joined the SS (member 272,284), and began working in the "Sicherheitsdienst" (SD), the SS security service, which acted as the intelligence-gathering arm of the Nazi Party. On 1 May 1937, he became member 4,583,085 of the Nazi Party. After the German conquest and occupation of the Netherlands, Barbie was assigned to Amsterdam. He had been pre-assigned to Adolf Eichmann's Amt (Department) IV/B-4. This department was responsible for identification, roundup and deportation of Dutch Jews and Freemasons. On 11 October 1940, Barbie arrested Hermannus van Tongeren, Grand Master of the Grand Orient of the Netherlands. In March 1941, Tongeren was transported to Sachsenhausen concentration camp where, in freezing conditions, he died two weeks later. On 1 April, Barbie summoned Tongeren's daughter, Charlotte, to SD headquarters and informed her that her father had died of an infection in both ears and had been cremated. In 1942, he was sent to Dijon, France, in the Occupied Zone. In November of the same year, at the age of 29, he was assigned to Lyon as the head of the local Gestapo. He established his headquarters at the Hôtel Terminus in Lyon, where he personally tortured adult and child prisoners. He became known as the "Butcher of Lyon". The daughter of a French Resistance leader based in Lyon said her father was beaten and skinned alive, and that his head was immersed in a bucket of ammonia; he died shortly afterwards. Historians estimate that Barbie was directly responsible for the deaths of up to 14,000 people, personally participating in roundups such as the Rue Sainte-Catherine Roundup which saw 84 people arrested in a single day. 
He arrested Jean Moulin, a high-ranking member of the French Resistance and his most prominent captive. In 1943, he was awarded the Iron Cross (First Class) by Adolf Hitler for his campaign against the French Resistance and the capture of Moulin. In April 1944, Barbie ordered the deportation to Auschwitz of a group of 44 Jewish children from an orphanage at Izieu. He then rejoined the SiPo-SD of Lyon in its retreat to Bruyères, where he led an anti-partisan attack in Rehaupal in September 1944. In 1947, Barbie was recruited as an agent for the 66th Detachment of the U.S. Army Counterintelligence Corps (CIC). The U.S. used Barbie and other Nazi Party members to further anti-communist efforts in Europe. Specifically, they were interested in British interrogation techniques which Barbie had experienced firsthand, and the identities of SS officers the British were using for their own ends. Later, the CIC housed him in a hotel in Memmingen, and he reported on French intelligence activities in the French zone of occupied Germany because they suspected that the French had been infiltrated by the KGB and GRU. The French discovered that Barbie was in U.S. hands, and having sentenced him to death "in absentia" for war crimes, made a plea to John J. McCloy, U.S. High Commissioner for Germany, to hand him over for execution, but McCloy allegedly refused. Instead, the CIC helped him flee to Bolivia assisted by "ratlines" organized by U.S. intelligence services, and by Croatian Roman Catholic clergy, including Krunoslav Draganović. The CIC asserted that Barbie knew too much about the network of German spies the CIC had planted in various European communist organizations, and were suspicious of communist influence within the French government, but their protection of Barbie may have been as much to avoid the embarrassment of having recruited him in the first place. 
Other authors have suggested that the anticommunist element of Italian fascism and the protection of the Vatican allowed Klaus Barbie and other Nazis to flee to Bolivia. In 1965, Barbie was recruited by the West German foreign intelligence agency "Bundesnachrichtendienst" (BND), under the codename "Adler" ("Eagle") and the registration number V-43118. His initial monthly salary of 500 Deutsche Mark was transferred in May 1966 to an account of the Chartered Bank of London in San Francisco. During his time with the BND, Barbie made at least 35 reports to the BND headquarters in Pullach. Barbie emigrated to Bolivia, where he lived well for 30 years in Cochabamba, under the alias Klaus Altmann. It was easier and less embarrassing for him to find employment there than in Europe, and he enjoyed excellent relations with high-ranking Bolivian officials, including Bolivian dictators Hugo Banzer and Luis García Meza Tejada. "Altmann" was known for his German nationalist and anti-communist stances. While engaged in arms-trade operations in Bolivia, he was appointed to the rank of lieutenant colonel within the Bolivian Armed Forces. Barbie collaborated with General Barrientos's regime, including teaching the general's private paramilitaries, named "Furmont", how best to use torture. The regime's political repression of leftist groups was aided by Barbie's knowledge of intelligence work, torture and interrogation. In 1972, under General Banzer (with whom Barbie collaborated even more openly), Barbie assisted in illegal arrests, interrogations and murders of opposition and progressive groups. Journalists and activists who wrote or spoke about the regime's crimes against human rights were arrested and many fell victim to so-called "disappearances", the state's secret murders and abductions of leftists. Barbie actively participated in the regime's oppression of opponents. 
Barbie was strongly linked to the neo-Nazi paramilitary Alvaro De Castro, his personally hired bodyguard; the two participated in criminal actions and businesses together. De Castro had connections with powerful drug barons and the illegal drug trade, and together with Barbie (under the name Altmann) and an Austrian company he sold weapons to the drug cartels; when De Castro was arrested, he admitted in interviews that he had earlier worked for drug lords in the country. Other sources say Barbie most likely also had connections with these organizations. De Castro continued to correspond with Barbie when Barbie was later under arrest. Their connections also provided intelligence information to US authorities at the US Embassy. A group called "The Fiancées of Death", which included German Nazis and Fascists, had links to some of Barbie's actions in Bolivia. Barbie had also earlier carried out a large purchase of tanks from Austria for the Bolivian army. These were then used in a coup d'état. People who met Barbie during his time in Bolivia have said that he was a firm and fanatical believer in Nazi ideology and an anti-Semite. Barbie and De Castro reportedly talked about the cases of and searches for Josef Mengele and Eichmann, whom Barbie supported and wanted to assist in remaining on the run. Barbie was identified as being in Peru in 1971 by the Klarsfelds (Nazi hunters from France), who came across a secret document that revealed his alias. On 19 January 1972, this information was published in the French newspaper "L'Aurore", along with a photograph of Altmann which the Klarsfelds had obtained from a German expatriate living in Lima, Peru. Led by Beate Klarsfeld, French journalist Ladislas de Hoyos and cameraman Christian van Ryswyck flew to La Paz in January 1972 in order to find and interview Klaus Barbie, who was living under his alias Klaus Altmann. 
The interview took place on 3 February 1972 in the Department of the Interior building and, the following day, in the prison where Barbie had been placed under protection by the Bolivian authorities. Although the interview was conducted in Spanish, Ladislas de Hoyos steered away from the previously agreed questions by asking in French, a language Barbie was not supposed to understand under his false identity, whether he had ever been to Lyon; Barbie automatically responded in the negative in German. De Hoyos handed him photos of members of the Resistance he had tortured, asking whether he recognized their faces, and although he returned them in denial, the fingerprints he left on them unmistakably betrayed him. It was in this interview, later broadcast on the French TV channel Antenne 2, that he was recognized by French Resistance member Simone Lagrange, who had been tortured by Klaus Barbie in 1944. Despite global outcry, Barbie was able to return to Bolivia, where the government refused to extradite him, stating that France and Bolivia did not have an extradition treaty and that the statute of limitations on his crimes had expired. Barbie's close fascist friends knew who he was, but in public Barbie denied being anyone other than his innocent alter ego "Altmann", and in the videotaped interview conducted by Ladislas de Hoyos, which he allowed, he continued to deny ever having been in Lyon, having known Moulin or having been in the Gestapo. However, in the 1970s the community of refugee Jews who had survived or escaped the war openly discussed the fact that Barbie was the war criminal from Lyon, now living on the Calle Landaeta in La Paz and frequenting the Cafe de La Paz daily. It was no secret. 
Journalist and reporter Peter McFarren and a female journalist for "The New York Times" said that in 1981, while taking photos outside Barbie's house in Bolivia in the hope of speaking to him for an article, they saw Barbie in a window; shortly thereafter twelve armed paramilitary men arrived quickly in a van, asked what they were doing there, and took them away. The testimony of Italian insurgent Stefano Delle Chiaie before the Italian Parliamentary Commission on Terrorism suggests that Barbie took part in the "cocaine coup" of Luis García Meza Tejada, when the regime forced its way to power in Bolivia in 1980. In 1983, the newly elected democratic government of Hernán Siles Zuazo arrested Barbie in La Paz on the pretext that he owed the government 10,000 dollars for goods he was supposed to have shipped but never did, and a few days later the government delivered him to France to stand trial. In 1984, Barbie was indicted for crimes committed as Gestapo chief in Lyon between 1942 and 1944, chief among which was the Rue Sainte-Catherine Roundup. The jury trial started on 11 May 1987 in Lyon before the Rhône "Cour d'Assises". Unusually, the court allowed the trial to be filmed because of its historical value. A special courtroom was constructed with seating for an audience of about 700. The head prosecutor was Pierre Truche. At the trial, Barbie's defense was funded by Swiss financier François Genoud and undertaken by attorney Jacques Vergès. He was tried on 41 separate counts of crimes against humanity, based on the depositions of 730 Jews and French Resistance survivors who described how he tortured and murdered prisoners. The father of French Minister for Justice Robert Badinter had died in Sobibor after being deported from Lyon during Barbie's tenure. Barbie gave his name as Klaus Altmann, the name that he used while in Bolivia. 
He claimed that his extradition was technically illegal and asked to be excused from the trial and returned to his cell at Prison Saint-Paul. This was granted. He was brought back to court on 26 May 1987 to face some of his accusers, about whose testimony he had "nothing to say". Barbie's defense attorney, Vergès, had a reputation for attacking the French political system, particularly in the historic French colonial empire. His strategy was to use the trial to talk about war crimes committed by France since 1945. He got the prosecution to drop some of the charges against Barbie because of French legislation that had protected French citizens accused of the same crimes under the Vichy regime and in French Algeria. Vergès tried to argue that Barbie's actions were no worse than the supposedly ordinary actions of colonialists worldwide, and that his trial was tantamount to selective prosecution. During his trial, Barbie said "When I stand before the throne of God, I shall be judged innocent." The court rejected the defense's argument. On 4 July 1987, Barbie was convicted and sentenced to life imprisonment. He died in prison in Lyon four years later, at the age of 77, of leukemia and cancer of the spine and prostate. In April 1939, Barbie had become engaged to Regina Margaretta Willms, the 23-year-old daughter of a postal clerk; they had two children, a son named Klaus-Georg Altmann and a daughter named Ute Messner. In 1983, Françoise Croizier, Klaus Barbie's French daughter-in-law, said in an interview that the CIA had kidnapped Klaus-Georg in 1946 to make sure his father carried out intelligence missions for the agency. Croizier met Klaus-Georg while both were students in Paris; they married in 1968, had three children and lived in Europe and Bolivia using the surname Altmann. Croizier said that when she married she did not know who her father-in-law was, but that she could guess the reasons for a German to settle in South America after the war. Klaus-Georg died in a hang-gliding accident in 1981. 
Barbie remained until the end a politically fanatical and systematic Nazi, who defended Hitler's politics, racial theories, and Fascism whenever they were questioned or criticized. Historians have noted that Barbie never had to take any part in army or police work in Bolivia, but that he actively chose such positions as part of his constant and active support for Nazi ideology, and would fight for that cause in every way he deemed "effective". The French documentary film "My Enemy's Enemy" ("Mon meilleur ennemi" in French) tells the story of Klaus Barbie through World War II and his post-war years in hiding in Bolivia, including his involvement in the assassination of Che Guevara, before he was tried in France for war crimes committed in Lyon and the assassination of Jean Moulin.
https://en.wikipedia.org/wiki?curid=17335
Kashmir Kashmir is the northernmost geographical region of the Indian subcontinent. Until the mid-19th century, the term "Kashmir" denoted only the Kashmir Valley between the Great Himalayas and the Pir Panjal Range. Modern usage of the term encompasses a larger area that includes the Indian-administered territories of Jammu and Kashmir and Ladakh, the Pakistani-administered territories of Azad Kashmir and Gilgit-Baltistan, and the Chinese-administered territories of Aksai Chin and the Trans-Karakoram Tract. In the first half of the first millennium, the Kashmir region became an important centre of Hinduism and later of Buddhism; later still, in the ninth century, Kashmir Shaivism arose. In 1339, Shah Mir became the first Muslim ruler of Kashmir, inaugurating the "Salatin-i-Kashmir" or Shah Mir dynasty. Kashmir was part of the Mughal Empire from 1586 to 1751, and thereafter, until 1820, of the Afghan Durrani Empire. That year, the Sikhs, under Ranjit Singh, annexed Kashmir. In 1846, after the Sikh defeat in the First Anglo-Sikh War, and upon the purchase of the region from the British under the Treaty of Amritsar, the Raja of Jammu, Gulab Singh, became the new ruler of Kashmir. The rule of his descendants, under the "paramountcy" (or tutelage) of the British Crown, lasted until the partition of India in 1947, when the former princely state of the British Indian Empire became the subject of the Kashmir conflict. The modern region is administered by three countries: India, Pakistan, and China. The word "Kashmir" was derived from the ancient Sanskrit language. The Nilamata Purana describes the valley's origin from the waters, a lake called "Sati-saras". A popular, but uncertain, local etymology of "Kashmira" is that it is land desiccated from water. An alternative, but also uncertain, etymology derives the name from the name of the Vedic sage Kashyapa who is believed to have settled people in this land. 
Accordingly, "Kashmir" would be derived from either "kashyapa-mir" (Kashyapa's Lake) or "kashyapa-meru" (Kashyapa's Mountain). The word has been referenced in a Hindu scripture mantra worshipping the Hindu goddess Sharada, who is said to have resided in the land of "kashmira", which might be a reference to the Sharada Peeth. The Ancient Greeks called the region "Kasperia", which has been identified with "Kaspapyros" of Hecataeus of Miletus (apud Stephanus of Byzantium) and "Kaspatyros" of Herodotus (3.102, 4.44). Kashmir is also believed to be the country meant by Ptolemy's "Kaspeiria". The earliest text which directly mentions the name "Kashmir" is the "Ashtadhyayi", written by the Sanskrit grammarian Pāṇini during the 5th century BC. Pāṇini called the people of Kashmir "Kashmirikas". Some other early references to Kashmir can also be found in the Mahabharata, in the Sabha Parva, and in puranas such as the Matsya Purana, Vayu Purana, Padma Purana, Vishnu Purana and Vishnudharmottara Purana. Hiuen Tsang, the Buddhist scholar and Chinese traveller, called Kashmir "kia-shi-milo", while some other Chinese accounts referred to Kashmir as "ki-pin" (or Chipin or Jipin) and "ache-pin". "Cashmere" is an archaic spelling of modern Kashmir, and in some countries it is still spelled this way. In the Kashmiri language, Kashmir itself is known as "Kasheer". The Government of India and Indian sources refer to the territory under Pakistani control as "Pakistan-occupied Kashmir" ("POK") or "Pakistan-held Kashmir" ("PHK"). The Government of Pakistan and Pakistani sources refer to the portion of Kashmir administered by India as "Indian-occupied Kashmir" ("IOK") or "Indian-held Kashmir" ("IHK"). The terms "Indian-administered Kashmir" and "Pakistani-administered Kashmir" are often used by neutral sources for the parts of the Kashmir region controlled by each country. 
During the ancient and medieval periods, Kashmir was an important centre for the development of a Hindu-Buddhist syncretism, in which Madhyamaka and Yogachara were blended with Shaivism and Advaita Vedanta. The Buddhist Mauryan emperor Ashoka is often credited with having founded the old capital of Kashmir, Shrinagari, now ruins on the outskirts of modern Srinagar. Kashmir was long a stronghold of Buddhism. As a Buddhist seat of learning, the Sarvastivada school strongly influenced Kashmir. East and Central Asian Buddhist monks are recorded as having visited the kingdom. In the late 4th century CE, the famous Kuchanese monk Kumārajīva, born to an Indian noble family, studied Dīrghāgama and Madhyāgama in Kashmir under Bandhudatta. He later became a prolific translator who helped take Buddhism to China. His mother Jīva is thought to have retired to Kashmir. Vimalākṣa, a Sarvāstivādan Buddhist monk, travelled from Kashmir to Kucha and there instructed Kumārajīva in the "Vinayapiṭaka". The Karkoṭa Empire (625–885 CE) was a powerful Hindu empire, which originated in the region of Kashmir. It was founded by Durlabhvardhana during the lifetime of Harsha. The dynasty marked the rise of Kashmir as a power in South Asia. Avanti Varman ascended the throne of Kashmir in 855 CE, establishing the Utpala dynasty and ending the rule of the Karkoṭa dynasty. According to tradition, Adi Shankara visited the pre-existing Sharada Peeth in Kashmir in the late 8th century or early 9th century CE. The "Madhaviya Shankaravijayam" states this temple had four doors for scholars from the four cardinal directions. The southern door of Sarvajna Pitha was opened by Adi Shankara. According to tradition, Adi Shankara opened the southern door by defeating in debate all the scholars there in all the various scholastic disciplines such as Mīmāṃsā, Vedanta and other branches of Hindu philosophy; he ascended the throne of Transcendent wisdom of that temple. Abhinavagupta (c. 
950–1020 CE) was one of India's greatest philosophers, mystics and aestheticians. He was also considered an important musician, poet, dramatist, exegete, theologian, and logician – a polymathic personality who exercised strong influences on Indian culture. He was born in the Kashmir Valley in a family of scholars and mystics and studied all the schools of philosophy and art of his time under the guidance of as many as fifteen (or more) teachers and gurus. In his long life he completed over 35 works, the largest and most famous of which is Tantrāloka, an encyclopaedic treatise on all the philosophical and practical aspects of Trika and Kaula (known today as Kashmir Shaivism). Another one of his very important contributions was in the field of philosophy of aesthetics, with his famous Abhinavabhāratī commentary on the Nāṭyaśāstra of Bharata Muni. In the 10th century the "Mokshopaya" or "Moksopaya Shastra", a philosophical text on salvation for non-ascetics ("moksa-upaya": 'means to release'), was written on the Pradyumna hill in Srinagar. It has the form of a public sermon, claims human authorship, and contains about 30,000 shlokas (making it longer than the "Ramayana"). The main part of the text forms a dialogue between Vashistha and Rama, interspersed with numerous short stories and anecdotes to illustrate the content. This text was later (11th to the 14th century CE) expanded and vedanticised, which resulted in the "Yoga Vasistha". Queen Kota Rani was a medieval Hindu ruler of Kashmir, ruling until 1339. She was a notable ruler who is often credited with saving Srinagar city from frequent floods by getting a canal constructed, named "Kutte Kol" after her. This canal receives water from the Jhelum River at the entry point of the city and merges with the Jhelum River again beyond the city limits. Shams-ud-Din Shah Mir (reigned 1339–42) was the first Muslim ruler of Kashmir and founder of the Shah Mir dynasty. 
Kashmiri historian Jonaraja, in his "Dvitīyā Rājataraṅginī", mentioned that Shah Mir was from the country of "Panchagahvara" (identified as the Panjgabbar valley between Rajouri and Budhal), and that his ancestors were Kshatriya who converted to Islam. Scholar A. Q. Rafiqi states that Rinchan, from Ladakh, and Lankar Chak, from Dard territory near Gilgit, came to Kashmir and played a notable role in the subsequent political history of the Valley. All three men were granted jagirs (feudatory estates) by the King. Rinchan became the ruler of Kashmir for three years. Shah Mir was the first ruler of the Shah Mir dynasty, which was established in 1339. Muslim ulama, such as Mir Sayyid Ali Hamadani, arrived from Central Asia to proselytize in Kashmir, and their efforts converted thousands of Kashmiris to Islam; Hamadani's son also convinced Sikander Butshikan to enforce Islamic law. By the late 1400s most Kashmiris had accepted Islam. Persian was introduced in Kashmir by the Šāh-Miri dynasty (1349–1561) and started to flourish under Sultan Zayn-al-ʿĀbedin (1420–70). The Mughal padishah (emperor) Akbar conquered Kashmir in 1585–86, taking advantage of Kashmir's internal Sunni-Shia divisions, and thus ended indigenous Kashmiri Muslim rule. Akbar added it to the Kabul Subah (encompassing modern-day northeastern Afghanistan, northern Pakistan and the Kashmir Valley of India), but Shah Jahan carved it out as a separate subah (imperial top-level province) with its seat at Srinagar. Kashmir became the northernmost region of Mughal India as well as a pleasure ground in the summertime. The Mughals built Persian water-gardens in Srinagar, along the shores of Dal Lake, with cool and elegantly proportioned terraces, fountains, roses, jasmine and rows of chinar trees. 
The Afghan Durrani dynasty's Durrani Empire controlled Kashmir from 1751, when the 15th Mughal padishah (emperor) Ahmad Shah Bahadur's viceroy Muin-ul-Mulk was defeated and reinstated by the Durrani founder Ahmad Shah Durrani (who conquered, roughly, modern-day Afghanistan and Pakistan from the Mughals and local rulers), until the Sikh triumph of 1820. The Afghan rulers brutally repressed Kashmiris of all faiths (according to Kashmiri historians). In 1819, the Kashmir Valley passed from the control of the Durrani Empire of Afghanistan to the conquering armies of the Sikhs under Ranjit Singh of the Punjab, thus ending four centuries of Muslim rule under the Mughals and the Afghan regime. As the Kashmiris had suffered under the Afghans, they initially welcomed the new Sikh rulers. However, the Sikh governors turned out to be hard taskmasters, and Sikh rule was generally considered oppressive, protected perhaps by the remoteness of Kashmir from the capital of the Sikh Empire in Lahore. The Sikhs enacted a number of anti-Muslim laws, which included handing out death sentences for cow slaughter, closing down the Jamia Masjid in Srinagar, and banning the "adhan", the public Muslim call to prayer. Kashmir had also now begun to attract European visitors, several of whom wrote of the abject poverty of the vast Muslim peasantry and of the exorbitant taxes under the Sikhs. High taxes, according to some contemporary accounts, had depopulated large tracts of the countryside, allowing only one-sixteenth of the cultivable land to be cultivated. Many Kashmiri peasants migrated to the plains of the Punjab. However, after a famine in 1832, the Sikhs reduced the land tax to half the produce of the land and also began to offer interest-free loans to farmers; Kashmir became the second highest revenue earner for the Sikh Empire. During this time Kashmir shawls became known worldwide, attracting many buyers, especially in the West. 
The state of Jammu, which had been on the ascendant after the decline of the Mughal Empire, came under the sway of the Sikhs in 1770. Further, in 1808, it was fully conquered by Maharaja Ranjit Singh. Gulab Singh, then a youngster in the House of Jammu, enrolled in the Sikh troops and, by distinguishing himself in campaigns, gradually rose in power and influence. In 1822, he was anointed as the Raja of Jammu. Along with his able general Zorawar Singh Kahluria, he conquered and subdued Rajouri (1821), Kishtwar (1821), Suru valley and Kargil (1835), Ladakh (1834–1840), and Baltistan (1840), thereby surrounding the Kashmir Valley. He became a wealthy and influential noble in the Sikh court. In 1845, the First Anglo-Sikh War broke out. According to "The Imperial Gazetteer of India:" "Gulab Singh contrived to hold himself aloof till the battle of Sobraon (1846), when he appeared as a useful mediator and the trusted advisor of Sir Henry Lawrence. Two treaties were concluded. By the first the State of Lahore (i.e. West Punjab) handed over to the British, as equivalent for one crore indemnity, the hill countries between the rivers Beas and Indus; by the second the British made over to Gulab Singh for 75 lakhs all the hilly or mountainous country situated to the east of the Indus and the west of the Ravi (i.e. the Vale of Kashmir)." 
Drafted by a treaty and a bill of sale, and constituted between 1820 and 1858, the Princely State of Kashmir and Jammu (as it was first called) combined disparate regions, religions, and ethnicities: to the east, Ladakh was ethnically and culturally Tibetan and its inhabitants practised Buddhism; to the south, Jammu had a mixed population of Hindus, Muslims and Sikhs; in the heavily populated central Kashmir valley, the population was overwhelmingly "Sunni" Muslim, although there was also a small but influential Hindu minority, the Kashmiri brahmins or pandits; to the northeast, sparsely populated Baltistan had a population ethnically related to Ladakh, but which practised "Shia Islam"; to the north, also sparsely populated, Gilgit Agency was an area of diverse, mostly "Shiʻa" groups; and, to the west, Punch was Muslim, but of different ethnicity than the Kashmir valley. After the Indian Rebellion of 1857, in which Kashmir sided with the British, and the subsequent assumption of direct rule by Great Britain, the princely state of Kashmir came under the suzerainty of the British Crown. In the British census of India of 1941, Kashmir registered a Muslim majority population of 77%, a Hindu population of 20% and a sparse population of Buddhists and Sikhs comprising the remaining 3%. That same year, Prem Nath Bazaz, a Kashmiri Pandit journalist, wrote: "The poverty of the Muslim masses is appalling. ... Most are landless laborers, working as serfs for absentee [Hindu] landlords ... Almost the whole brunt of official corruption is borne by the Muslim masses." Under Hindu rule, Muslims faced heavy taxation and discrimination in the legal system, and were forced into unpaid labor. Conditions in the princely state caused a significant migration of people from the Kashmir Valley to the Punjab of British India. For almost a century until the census, a small Hindu elite had ruled over a vast and impoverished Muslim peasantry. 
Driven into docility by chronic indebtedness to landlords and moneylenders, and lacking both education and awareness of their rights, the Muslim peasants had no political representation until the 1930s. Ranbir Singh's grandson Hari Singh, who had ascended the throne of Kashmir in 1925, was the reigning monarch in 1947 at the conclusion of British rule of the subcontinent and the subsequent partition of the British Indian Empire into the newly independent Dominion of India and the Dominion of Pakistan. In the run-up to the 1947 partition, there were two major parties in the princely state: the National Conference and the Muslim Conference. The former was led by the charismatic Kashmiri leader Sheikh Abdullah, who tilted towards the accession of the state to India, whilst the latter tilted towards accession to Pakistan. The National Conference enjoyed popular support in the Kashmir Valley whilst the Muslim Conference was more popular in the Jammu region. The Hindus and Sikhs of the state were firmly in favour of joining India, as were the Buddhists. However, the sentiments of the state's Muslim population were divided. Scholar Christopher Snedden states that the Muslims of Western Jammu, and also the Muslims of the Frontier Districts Province, strongly wanted Jammu and Kashmir to join Pakistan. The ethnic Kashmiri Muslims of the Kashmir Valley, on the other hand, were ambivalent about Pakistan (possibly due to their secular nature). The fact that Kashmiris were not particularly enamoured with the idea of Pakistan reflected the failure of the idea of Pan-Islamic identity in satisfying the political urges of Kashmiris. At the same time there was also a lack of interest in merging with Indian nationalism. 
According to Burton Stein's "History of India", Kashmir was neither as large nor as old an independent state as Hyderabad; it had been created rather off-handedly by the British after the first defeat of the Sikhs in 1846, as a reward to a former official who had sided with the British. The Himalayan kingdom was connected to India through a district of the Punjab, but its population was 77 per cent Muslim and it shared a boundary with Pakistan. Hence, it was anticipated that the maharaja would accede to Pakistan when the British paramountcy ended on 14–15 August. When he hesitated to do this, Pakistan launched a guerrilla onslaught meant to frighten its ruler into submission. Instead the Maharaja appealed to Mountbatten for assistance, and the governor-general agreed on the condition that the ruler accede to India. Indian soldiers entered Kashmir and drove the Pakistani-sponsored irregulars from all but a small section of the state. The United Nations was then invited to mediate the quarrel. The UN mission insisted that the opinion of Kashmiris must be ascertained, while India insisted that no referendum could occur until all of the state had been cleared of irregulars. In the last days of 1948, a ceasefire was agreed under UN auspices. However, since the referendum demanded by the UN was never conducted, relations between India and Pakistan soured, and eventually led to two more wars over Kashmir, in 1965 and 1999. India has control of about half the area of the former princely state of Jammu and Kashmir, which is divided into the union territories of Jammu and Kashmir and Ladakh. Pakistan controls a third of the region, divided into two "de facto" provinces, Gilgit-Baltistan and Azad Kashmir. 
"Although there was a clear Muslim majority in Kashmir before the 1947 partition and its economic, cultural, and geographic contiguity with the Muslim-majority area of the Punjab (in Pakistan) could be convincingly demonstrated, the political developments during and after the partition resulted in a division of the region. Pakistan was left with territory that, although basically Muslim in character, was thinly populated, relatively inaccessible, and economically underdeveloped. The largest Muslim group, situated in the Valley of Kashmir and estimated to number more than half the population of the entire region, lay in Indian-administered territory, with its former outlets via the Jhelum valley route blocked."
https://en.wikipedia.org/wiki?curid=17337
Kendall Square Research Kendall Square Research (KSR) was a supercomputer company headquartered originally in Kendall Square in Cambridge, Massachusetts in 1986, near Massachusetts Institute of Technology (MIT). It was co-founded by Steven Frank and Henry Burkhardt III, who had formerly helped found Data General and Encore Computer and was one of the original team that designed the PDP-8. KSR produced two models of supercomputer, the KSR1 and KSR2. The KSR systems ran a specially customized version of the OSF/1 operating system, a Unix variant, with programs compiled by a KSR-specific port of the Green Hills Software C and FORTRAN compilers. The architecture was shared memory implemented as a cache-only memory architecture or "COMA". Being all cache, memory dynamically migrated and replicated in a coherent manner based on the access pattern of individual processors. The processors were arranged in a hierarchy of rings, and the operating system mediated process migration and device access. Instruction decode was hardwired, and pipelining was used. Each KSR1 processor was a custom 64-bit reduced instruction set computing (RISC) CPU clocked at 20 MHz and capable of a peak output of 20 million instructions per second (MIPS) and 40 million floating-point operations per second (MFLOPS). Up to 1088 of these processors could be arranged in a single system, with a minimum of eight. The KSR2 doubled the clock rate to 40 MHz and supported over 5000 processors. The KSR-1 chipset was fabricated by Sharp Corporation while the KSR-2 chipset was built by Hewlett-Packard. Besides the traditional scientific applications, KSR with Oracle Corporation, addressed the massively parallel database market for commercial applications. The KSR-1 and -2 supported Micro Focus COBOL and C/C++ programming languages, and the Oracle PRDBMS and the MATISSE OODBMS from ADB, Inc. Their own product, the KSR Query Decomposer, complemented the functions of the Oracle product for SQL uses. 
The TUXEDO transaction monitor for OLTP was also provided. The KAP program (Kuck & Associates Preprocessor) provided pre-processing for source code analysis and parallelization. The runtime environment was termed PRESTO, and was a POSIX-compliant multithreading manager. The KSR-1 processor was implemented as a four-chip set in 1.2 micrometre complementary metal–oxide–semiconductor (CMOS). These chips were: the cell execution unit (CEU), the floating point unit, the arithmetic logic unit, and the external I/O unit (XIO). The CEU handled instruction fetch (two per clock) and all operations involving memory, such as loads and stores. 40-bit addresses were used, going to full 64-bit addresses later. The integer unit had 32 registers, each 64 bits wide. The floating point unit is discussed below. The XIO had a capacity of 30 MB/s throughput to I/O devices. It included 64 control and data registers. The KSR processor was a 2-wide VLIW, with instructions of six types: memory reference (load and store), execute, control flow, memory control, I/O, and inserted. Execute instructions included arithmetic, logical, and type conversion; they were usually triadic (three-register) in format. Control-flow instructions comprised branches and jumps. Branch instructions were two cycles. The programmer (or compiler) could implicitly control the "quashing" behavior of the subsequent two instructions that would be initiated during the branch. The choices were: always retain the results, retain results if the branch test is true, or retain results if the branch test is false. Memory control provided synchronization primitives. I/O instructions were provided. Inserted instructions were forced into a flow by a coprocessor. Inserted loads and stores were used for direct memory access (DMA) transfers. Inserted memory instructions were used to maintain cache coherency. New coprocessors could be interfaced with the inserted instruction mechanism. IEEE standard floating point arithmetic was supported. 
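The three "quashing" choices for a branch's two delay-slot instructions can be modeled as a small predicate: commit the slot's result unconditionally, only when the branch test was true, or only when it was false. A toy illustration of those semantics (mode names are invented for clarity, not actual KSR mnemonics):

```python
# Toy model of KSR branch "quashing": decide whether a delay-slot
# instruction initiated during a branch has its results retained.
# The three modes mirror the choices described in the text; the
# string names themselves are illustrative, not KSR assembly syntax.

def retain_delay_slot(mode: str, branch_taken: bool) -> bool:
    if mode == "always":      # always retain the results
        return True
    if mode == "if_true":     # retain only if the branch test is true
        return branch_taken
    if mode == "if_false":    # retain only if the branch test is false
        return not branch_taken
    raise ValueError(f"unknown quash mode: {mode}")

# Example: a branch whose test came out true retains "always" and
# "if_true" slots but quashes an "if_false" slot.
assert retain_delay_slot("always", True)
assert retain_delay_slot("if_true", True)
assert not retain_delay_slot("if_false", True)
```

Exposing the choice to the compiler lets it hoist useful work into the two slots after a branch instead of padding them with no-ops.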
Sixty-four 64-bit wide registers were included. The following example of KSR assembly performs an indirect procedure call to an address held in the procedure's constant block, saving the return address in register %c14. It also saves the frame pointer, loads integer register zero with the value 3, and increments integer register 31 without changing the condition codes. Most instructions have a delay slot of 2 cycles, and the delay slots are not interlocked, so they must be scheduled explicitly; otherwise the resulting hazard means wrong values are sometimes loaded.

   finop                   ; movb8_8 %i2,%c10
   finop                   ; cxnop
   finop                   ; cxnop
   add8.ntr 75,%i31,%i31   ; ld8 8(%c10),%c4
   finop                   ; st8 %fp,504(%sp)
   finop                   ; cxnop
   movi8 3, %i0            ; jsr %c14,16(%c4)

In the KSR design, all of the memory was treated as cache. The design called for no "home" location, both to reduce storage overheads and to migrate and replicate memory dynamically, transparently to software, based on where it was being utilized. A Harvard architecture, with separate buses for instructions and data, was used. Each node board contained 256 kB of I-cache and D-cache, essentially primary cache. At each node was 32 MB of memory serving as main cache. The system-level architecture was shared virtual memory, physically distributed across the machine. The programmer or application saw only one contiguous address space, which was spanned by a 40-bit address. Traffic between nodes traveled at up to 4 gigabytes per second. The 32 megabytes per node, in aggregate, formed the physical memory of the machine. Specialized input/output processors could be used in the system, providing scalable I/O. A 1088-node KSR1 could have 510 I/O channels with an aggregate throughput in excess of 15 GB/s. Interfaces such as Ethernet, FDDI, and HIPPI were supported. As the company scaled up quickly to enter production, it moved in the late 1980s to 170 Tracer Lane, Waltham, Massachusetts. 
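The memory and I/O figures above are mutually consistent: 32 MB on each of up to 1088 nodes gives the aggregate physical memory, the 40-bit address spans a 1 TiB shared virtual space, and 510 channels at the XIO's 30 MB/s each account for the quoted 15 GB/s aggregate (attributing the XIO rate to each channel is an inference, not a statement from the text). A quick check:

```python
# Consistency check on the KSR1 memory and I/O figures from the text.
# MB_PER_CHANNEL reuses the XIO's 30 MB/s throughput; treating that as
# the per-channel rate is an inference.

NODES = 1088          # maximum KSR1 configuration
MB_PER_NODE = 32      # main-cache memory per node
ADDRESS_BITS = 40     # shared virtual address width
IO_CHANNELS = 510     # I/O channels in a 1088-node system
MB_PER_CHANNEL = 30   # XIO throughput per channel (inferred)

physical_mb = NODES * MB_PER_NODE                        # 34816 MB ≈ 34 GB
virtual_bytes = 2 ** ADDRESS_BITS                        # 1 TiB address space
aggregate_io_gb = IO_CHANNELS * MB_PER_CHANNEL / 1000    # ≈ 15.3 GB/s

print(physical_mb, virtual_bytes, aggregate_io_gb)
```

Note that the 40-bit virtual space (1 TiB) is some 30 times larger than the maximum physical memory, which is what allowed the ALLCACHE hardware room to migrate and replicate pages freely.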
KSR refocused its efforts from the scientific to the commercial marketplace, with emphasis on parallel relational databases and OLTP operations. It later got out of the hardware business, but continued to market some of its data warehousing and analysis software products. The first KSR1 system was installed in 1991. With new processor hardware, new memory hardware, a novel memory architecture, a new compiler port, a new port of a relatively new operating system, and exposed memory hazards, early systems were noted for frequent system crashes. KSR called its cache-only memory architecture (COMA) by the trade name "Allcache"; reliability problems with early systems earned it the nickname "Allcrash", although memory was not necessarily the root cause of crashes. A few KSR1 models were sold, and as the KSR2 was being rolled out, the company collapsed amid accounting irregularities involving the overstatement of revenue. KSR used a proprietary processor because 64-bit processors were not commercially available. However, this put the small company in the difficult position of doing both processor design and system design. The KSR processors were introduced in 1991 at 20 MHz and 40 MFlops. At that time, the 32-bit Intel 80486 ran at 50 MHz and 50 MFlops. When the 64-bit DEC Alpha was introduced in 1992, it ran at up to 192 MHz and 192 MFlops, while the 1992 KSR2 ran at 40 MHz and 80 MFlops. One customer of the KSR2, the Pacific Northwest National Laboratory, a United States Department of Energy facility, purchased an enormous number of spare parts and kept its machines running for years after the demise of KSR. KSR, along with many of its competitors (see below), went bankrupt during the collapse of the supercomputer market in the early 1990s. KSR went out of business in February 1994, when its stock was delisted from the stock exchange. 
KSR's competitors included MasPar Computer Corporation, Thinking Machines, Meiko Scientific, and various old-line (and still surviving) companies such as IBM and Intel. John Markoff, "BUSINESS TECHNOLOGY; Pools of Memory, Waves of Dispute", The New York Times, 29 January 1992
https://en.wikipedia.org/wiki?curid=17339
Kinglassie Kinglassie (Gaelic: "Cille MoGhlasaidh") is a small village and parish in central Fife, Scotland. It is located two miles southwest of Glenrothes. In 2011, the population of the village was 1,684, while the civil parish had a population of 22,543. The village of Kinglassie (pronounced Kin-glassie) lies to the north of the Lochty Burn, southwest of Glenrothes in Fife, and two miles southeast of the Perth and Kinross district. In 830 AD, the village was known as Kinglace. The village has never been known as Goatmilkshire, though the area northeast of the village has always had that name or Gaitmilkshire. In the year 1231, the village was known as Kinglassin and lay within the Lochoreshire area. However, this changed in 1235, when Constantine II of Lochore renounced his claim to the lands in favour of the Abbey of Dunfermline; from then on, Kinglassie ceased to be part of Lochoreshire. Little of antiquity remains, except for the Dogton Stone, with its Celtic cross, situated in a field about a mile (1.5 km) to the south. For many years, Kinglassie was a weaving village, but in the 19th and 20th centuries it developed as a mining town. From a very early period through to the Reformation, Scotland was dotted with certain divisions of land known as "Schyres." Thus, in the immediate neighbourhood of Kinross were "Kynros-Schyre", "Portmocke-Schyre", "Kinglassy-Schyre", "Muchard-Schyre", and "Doloure-Schyre". These Schyres must not be confused with the shire of the present day; they were simply divisions of land, similar in extent to an average modern parish. Kinglassie has a primary school, Mitchell Hall (1896) and the Miners' Welfare Institute (est. 1931). Fife Airport lies about a mile (1.5 km) to the north and, on a hill overlooking the farm of Redwells, stands Blythe's Folly, a tower built in 1812 by an eccentric Leith ship owner. 
Kinglassie's development during the late 19th and early 20th centuries was marked by rapid expansion to house mine workers. Many mine workers perished or were injured during the life of the mine, which was plagued by flooding. The Kinglassie Pit opened in 1908 and closed in 1967. The Westfield open-cast coal mine lies to the west of the village and is still regarded by local people as the biggest man-made hole in Europe. Saint Glastian of Kinglassie (also known as Glastian of MacGlastian) was born in Fife, Scotland, and died at Kinglassie (Kinglace) in 830. As bishop of Fife, Saint Glastian mediated in the bloody civil war between the Picts and the Scots. When the Picts were subjugated, Glastian did much to alleviate their lot. He is the patron saint of Kinglassie in Fife and is venerated in Kintyre (Benedictines, Husenbeth). Kinglassie Primary School has a roll of approximately 270 pupils. The school was built to designs by the architect George Charles Campbell in 1912. It has a butterfly-type plan consisting of two single-storey rendered wings either side of a hexagon-shaped hall. The central portion of the façade is two storeys high and of red sandstone, with generous steps leading to a central formal entrance. It is a category B listed building. The Pupil Council represents pupils in the school. The eco-committee consists of pupils, staff, parents, and members of the wider community, and is proactive in promoting conservation initiatives throughout the school. A parent council represents the parent body and raises funds for various initiatives. In addition, children are supported in class by a growing number of parent helpers and the school is well supported by parents generally. Blythe's Tower, built in 1812, is a four-storey square tower built of rubble with ashlar string courses and a crenellated parapet. It is a category B listed building. 
The tower's interior was formerly floored to afford access to an observation platform. The tower was built by a linen merchant to view ships as they entered the Forth, affording him the opportunity to procure the best goods at port. During World War II, the tower was used as a lookout tower by the Home Guard. The Dogton Stone lies in a field to the south of Kinglassie at grid reference NT 236 968. The stone is a fragment of a free-standing cross erected by the Picts, probably dating from the 9th century. The lower portion of the stone is all that remains of the cross, and badly eroded decoration, including a figure of an armed horseman above two beasts, can be discerned. No one is certain why the stone was erected at this spot. It is a scheduled monument. The Mitchell Hall, built in 1896, was donated to the community by Alexander Mitchell, who also donated the first Parish Church organ. The Mitchell Hall is used by local community groups and is an asset to the wider Fife community.
https://en.wikipedia.org/wiki?curid=17341
Kordofanian languages The Kordofanian languages are a geographic grouping of five language groups spoken in the Nuba Mountains of Kurdufan, Sudan: the Talodi–Heiban languages, Lafofa languages, Rashad languages, Katla languages and Kadu languages. The first four groups are branches of the Niger–Congo family, whereas Kadu is now widely seen as a branch of the Nilo-Saharan family. In 1963, Joseph Greenberg added them to the Niger–Congo family, creating his Niger–Kordofanian proposal. However, the Kordofanian languages have not been shown to be more distantly related to the rest of Niger–Congo than other branches are, nor have they been shown to constitute a valid group. Today, the Kadu languages are excluded, and the others are usually included in Niger–Congo proper. Roger Blench notes that the Talodi and Heiban families have the noun-class systems characteristic of the Atlantic–Congo core of Niger–Congo, but that the two Katla languages have no trace of ever having had such a system. However, the Kadu languages and some of the Rashad languages appear to have acquired noun classes as part of a Sprachbund rather than having inherited them. Blench concludes that Talodi and Heiban are core Niger–Congo, whereas Katla and Rashad form a peripheral branch along the lines of Mande. The Heiban languages, also called Koalib or Koalib–Moro, and the Talodi languages, also called Talodi–Masakin, are part of the Talodi–Heiban group. Lafofa (Tegem) was for a time classified with Talodi, but appears to be a separate branch of Niger–Congo. The number of Rashad languages, also called Tegali–Tagoi, varies among descriptions, from two (Williamson & Blench 2000) and three (Ethnologue) to eight (Blench "ms"). Tagoi has a noun-class system like the Atlantic–Congo languages, which is apparently borrowed, but Tegali does not. The two Katla languages have no trace of ever having had a Niger–Congo-type noun-class system. 
Since the work of Thilo Schadeberg in 1981, the "Tumtum" or Kadu branch has been widely seen as Nilo-Saharan. However, the evidence is slight, and a conservative classification would treat it as an independent family. Sample basic vocabulary of the Heiban, Talodi, Rashad, and Lafofa branches: "Note": In table cells with slashes, the singular form is given before the slash, while the plural form follows the slash. Comparison of numerals in individual languages:
https://en.wikipedia.org/wiki?curid=17348
Khwaja Ahmad Abbas Khwaja Ahmad Abbas (7 June 1914 – 1 June 1987), popularly known as K. A. Abbas, was an Indian film director, screenwriter, novelist, and a journalist in the Urdu, Hindi and English languages. He won four National Film Awards in India, and internationally his films won the Palme d'Or (Grand Prize) at the Cannes Film Festival (out of three Palme d'Or nominations) and the Crystal Globe at the Karlovy Vary International Film Festival. As a director and screenwriter, Khwaja Ahmad Abbas is considered one of the pioneers of Indian parallel or neo-realistic cinema, and as a screenwriter he is also known for writing Raj Kapoor's best films. As a director, he made a number of important Hindustani films. "Dharti Ke Lal" (1946), about the Bengal famine of 1943, was one of Indian cinema's first social-realist films, and opened up the overseas market for Indian films in the Soviet Union. "Pardesi" (1957) was nominated for the Palme d'Or at the Cannes Film Festival. "Shehar Aur Sapna" (1963) won the National Film Award for Best Feature Film, while "Saat Hindustani" (1969) and "Do Boond Pani" (1972) both won the National Film Awards for Best Feature Film on National Integration. As a screenwriter, he penned a number of neo-realistic films, such as "Dharti Ke Lal" (which he directed), "Neecha Nagar" (1946) which won the Palme d'Or at the first Cannes Film Festival, "Naya Sansar" (1941), "Jagte Raho" (1956), and "Saat Hindustani" (which he also directed). He is also known for writing the best of Raj Kapoor's films, including the Palme d'Or nominated "Awaara" (1951), as well as "Shree 420" (1955), "Mera Naam Joker" (1970), "Bobby" (1973) and "Henna" (1991). His column ‘Last Page’ holds the distinction of being one of the longest-running columns in the history of Indian journalism. The column began in 1935, in "The Bombay Chronicle", and moved to the "Blitz" after the "Chronicle"'s closure, where it continued until his death in 1987. 
He was awarded the Padma Shri by the Government of India in 1969. Khwaja Ahmad Abbas was born in Panipat, Undivided Punjab, in the home of the celebrated Urdu poet Khwaja Altaf Hussain Hali, a student of Mirza Ghalib. His grandfather Khwaja Gulam Abbas was one of the chief rebels of the 1857 Rebellion movement, and the first martyr of Panipat to be blown from the mouth of a cannon. Abbas's father, Ghulam-Us-Sibtain, graduated from Aligarh Muslim University, was a tutor to a prince, and was a prosperous businessman who modernised the preparation of Unani medicines. Abbas's mother, Masroor Khatoon, was the daughter of Sajjad Husain, an enlightened educationist. Abbas took his early education at Hali Muslim High School, which was established by his great-grandfather Hali, and studied in Panipat up to the seventh class. He was instructed to read the Arabic text of the Quran, and his childhood ambitions were shaped at the behest of his father. Abbas completed his matriculation at the age of fifteen. He did his B.A. in English literature in 1933 and his LL.B. in 1935 from Aligarh Muslim University. Abbas began his career as a journalist when he joined 'National Call', a New Delhi based newspaper, after finishing his B.A. Later, while studying law in 1934, he started 'Aligarh Opinion', India's first university students' weekly, during the pre-independence period. After completing his education at Aligarh Muslim University, Abbas joined "The Bombay Chronicle" in 1935. He occasionally served as a film critic, but after the paper's film critic died, he was made the editor of the film section. He entered films as a part-time publicist for Bombay Talkies in 1936, a production house owned by Himanshu Rai and Devika Rani, to whom he sold his first screenplay "Naya Sansar" (1941). While at "The Bombay Chronicle" (1935–1947), he started a weekly column called 'Last Page', which he continued when he joined the Blitz magazine. 
"The Last Page", (‘Azad Kalam’ in the Urdu edition), thus became the longest-running political column in India's history (1935–87). A collection of these columns was later published as two books. He continued to write for The Blitz and Mirror till his last days. Meanwhile, he had started writing scripts for other directors, "Neecha Nagar" for Chetan Anand and "Dr. Kotnis Ki Amar Kahani" for V. Shantaram. In 1945, he made his directorial debut with a film based on the Bengal famine of 1943, "Dharti Ke Lal" ("Children of the Earth") for the Indian People's Theatre Association (IPTA). In 1951, he founded his own production company called Naya Sansar, which consistently produced films that were socially relevant including, "Anhonee", "Munna", "Rahi" (1953), based on a Mulk Raj Anand story, was on the plight of workers on tea plantations, the National Film Award winner, "Shehar Aur Sapna" (1964) and "Saat Hindustani" (1969), which won the Nargis Dutt Award for Best Feature Film on National Integration and is also remembered as Bollywood icon Amitabh Bachchan's debut film. A prolific writer, and novelist, during his illustrious career spanning five decades, Abbas wrote over 73 books in English, Hindi and Urdu. Abbas was considered a leading light of the Urdu short story. His best known fictional work remains 'Inquilab', based Communal violence, which made him a household name in Indian literature. Like Inquilab, many of his works were translated into many Indian, and foreign languages, like Russian, German, Italian, French and Arabic. Abbas interviewed several renowned personalities in literary and non-literary fields, including the Russian Prime Minister Khrushchov, American President Roosevelt, Charlie Chaplin, Mao-Tse-Tung and Yuri Gagarin. He went on to write scripts for Jagte Raho, and most of the prominent Raj Kapoor films including "Awaara, Shri 420, Mera Naam Joker, Bobby" and "Henna". 
His autobiography, "I Am not an Island: An Experiment in Autobiography", was first published in 1977 and reissued in 2010. In 1968, Abbas made a documentary film called "Char Shaher Ek Kahani" (A Tale of Four Cities). The film depicted the contrast between the luxurious life of the rich in the four cities of Calcutta, Bombay, Madras and Delhi and the squalor and poverty of the poor. He approached the Central Board of Film Certification to obtain a 'U' (Unrestricted Public Exhibition) certificate. Abbas was, however, informed by the regional office of the Board that the film was not eligible for a 'U' certificate but was suitable for exhibition only to adults. His appeal to the revising committee of the Central Board of Film Certification led to the decision of the censors being upheld. Khwaja Ahmad Abbas further appealed to the Central Government, but the government decided to grant the film a 'U' certificate provided certain scenes were cut. Following this, Abbas approached the Supreme Court of India by filing a writ petition under Article 19(1) of the Indian Constitution. He claimed that his fundamental right of free speech and expression was denied by the Central Government's refusal to grant the film a 'U' certificate. Abbas also challenged the constitutional validity of pre-censorship on films. However, the Supreme Court of India upheld the constitutional validity of pre-censorship on films. His honours included the Haryana State Robe of Honour for literary achievements in 1969, the prestigious Ghalib Award for his contribution to Urdu prose literature in 1983, the Vorosky Literary Award of the Soviet Union in 1984, the Urdu Akademi Delhi Special Award in 1984, the Maharashtra State Urdu Akademi Award in 1985 and the Soviet Award for his contribution to the cause of Indo-Soviet friendship in 1985. He "published more than seventy books in English, Urdu and Hindi".
https://en.wikipedia.org/wiki?curid=17351
Katherine MacLean Katherine Anne MacLean (January 22, 1925 – September 1, 2019) was an American science fiction author best known for her short fiction of the 1950s which examined the impact of technological advances on individuals and society. Damon Knight wrote, "As a science fiction writer she has few peers; her work is not only technically brilliant but has a rare human warmth and richness." Brian Aldiss noted that she could "do the hard stuff magnificently," while Theodore Sturgeon observed that she "generally starts from a base of hard science, or rationalizes psi phenomena with beautifully finished logic." According to "The Encyclopedia of Science Fiction", she "was in the vanguard of those sf writers trying to apply to the soft sciences the machinery of the hard sciences". Her stories have been included in anthologies and a few have had radio and television adaptations. Three collections of her stories have been published. It was while she worked as a laboratory technician in 1947 that she began writing science fiction. Strongly influenced by Ludwig von Bertalanffy's General Systems Theory, her fiction has often demonstrated foresight about scientific advances. She died on September 1, 2019 at the age of 94. MacLean received a Nebula Award in 1971, for her novella "The Missing Man" ("Analog", March, 1971) and she was a Professional Guest of Honor at the first WisCon in 1977. She was honored in 2003 by the Science Fiction Writers of America as an SFWA Author Emeritus. In 2011, she received the Cordwainer Smith Rediscovery Award. "The Diploids and Other Flights of Fancy" (Avon, 1962), her first short story collection, includes "The Diploids" (a.k.a. "Six Fingers"), "Feedback", "Pictures Don't Lie", "Incommunicado", "The Snow Ball Effect", "Defense Mechanism" and "And Be Merry" (a.k.a. "The Pyramid in the Desert"). 
Her second collection, "The Trouble with You Earth People" (Donning/Starblaze, 1980) contains "The Trouble with You Earth People", "The Gambling Hell and the Sinful Girl", "Syndrome Johnny", "Trouble with Treaties" (with Tom Condit), "The Origin of the Species", "Collision Orbit", "The Fittest", "These Truths", "Contagion", "Brain Wipe" and her Nebula Award-winning "The Missing Man".
https://en.wikipedia.org/wiki?curid=17353
Kenneth Kaunda Kenneth David Buchizya Kaunda (born April 28, 1924), also known as KK, is a Zambian former politician who served as the first President of Zambia from 1964 to 1991. Kaunda is the youngest of eight children born to an ordained Church of Scotland missionary and teacher, an immigrant from Malawi. He was at the forefront of the struggle for independence from British rule. Dissatisfied with Harry Nkumbula's leadership of the Northern Rhodesian African National Congress, he broke away and founded the Zambian African National Congress, later becoming the head of the United National Independence Party. He was the first President of the independent Zambia. In 1973, following tribal and inter-party violence, all political parties except UNIP were banned through an amendment of the constitution after the signing of the Choma Declaration. At the same time, Kaunda oversaw the acquisition of majority stakes in key foreign-owned companies. The oil crisis of 1973 and a slump in export revenues put Zambia in a state of economic crisis. International pressure forced Kaunda to change the rules that had kept him in power. Multi-party elections took place in 1991, in which Frederick Chiluba, the leader of the Movement for Multiparty Democracy, ousted Kaunda. Kaunda was briefly stripped of Zambian citizenship in 1999, but the decision was overturned the following year. He is the oldest living former Zambian president. Kaunda was the youngest of eight children. He was born at Lubwa Mission in Chinsali, Northern Province of Northern Rhodesia, now Zambia. His father was the Reverend David Kaunda, an ordained Church of Scotland missionary and teacher, who was born in Nyasaland (now Malawi) and had moved to Chinsali to work at Lubwa Mission. He attended Munali Training Centre in Lusaka (August 1941 – 1943). Both Kaunda's father and mother were teachers. 
His father was from Nyasaland, also known as Malawi, and his mother was the first African woman to teach in colonial Zambia. They were both teachers among the Bemba ethnic group, which is located in northern Zambia. This is where Kaunda received his education until the early 1940s. Teaching was a common occupation at this time for Africans in colonial Zambia who had achieved a measure of middle-class status. He later followed in his parents' footsteps and became a teacher, first in colonial Zambia and then, in the middle of the 1940s, in what is now Tanzania. (The Editors of Encyclopaedia Britannica) Kaunda was a teacher at the Upper Primary School and Boarding Master at Lubwa and then Headmaster at Lubwa from 1943 to 1945. For a time, he worked at the Salisbury and Bindura Mine. In early 1948, he became a teacher in Mufulira for the United Missions to the Copperbelt (UMCB). He was then assistant at an African Welfare Centre and Boarding Master of a Mine School in Mufulira. In this period, he was leading a Pathfinder Scout Group and was Choirmaster at a Church of Central Africa congregation. He was also Vice-Secretary of the Nchanga Branch of Congress. In April 1949, Kaunda returned to Lubwa to become a part-time teacher, but resigned in 1951. In that year he became Organising Secretary of Northern Province's Northern Rhodesian African National Congress. On 11 November 1953 he moved to Lusaka to take up the post of Secretary General of the ANC, under the presidency of Harry Nkumbula. The combined efforts of Kaunda and Nkumbula failed to mobilise native African peoples against the European-dominated Federation of Rhodesia and Nyasaland. In 1955 Kaunda and Nkumbula were imprisoned for two months with hard labour for distributing subversive literature; such imprisonment and other forms of harassment were normal rites of passage for African nationalist leaders. The experience of imprisonment had a radicalising impact on Kaunda. 
The two leaders drifted apart as Nkumbula became increasingly influenced by white liberals and was seen as being willing to compromise on the issue of black majority rule, waiting until most of the indigenous population was responsibly educated before extending the franchise. The franchise was to be determined by existing property and literacy qualifications, dropping race altogether. Nkumbula's allegedly autocratic leadership of the ANC eventually resulted in a split. Kaunda broke from the ANC and formed the Zambian African National Congress (ZANC) in October 1958. ZANC was banned in March 1959. In June Kaunda was sentenced to nine months' imprisonment, which he spent first in Lusaka, then in Salisbury. While Kaunda was in prison, Mainza Chona and other nationalists broke away from the ANC and, in October 1959, Chona became the first president of the United National Independence Party (UNIP), the successor to ZANC. However, Chona did not see himself as the party's main founder. When Kaunda was released from prison in January 1960 he was elected President of UNIP. In 1960 he visited Martin Luther King Jr. in Atlanta and afterwards, in July 1961, Kaunda organised a civil disobedience campaign in Northern Province, the so-called Cha-cha-cha campaign, which consisted largely of arson and obstructing significant roads. Kaunda subsequently ran as a UNIP candidate during the 1962 elections. This resulted in a UNIP–ANC Coalition Government, with Kaunda as Minister of Local Government and Social Welfare. In January 1964, UNIP won the next major elections, defeating their ANC rivals and securing Kaunda's position as prime minister. On 24 October 1964 he became the first President of an independent Zambia, appointing Reuben Kamanga as his vice-president. At the time of its independence, Zambia's modernisation process was far from complete. 
The nation's educational system was one of the most poorly developed in all of Britain's former colonies, and it had just 109 university graduates and less than 0.5% of the population was estimated to have completed primary education. Because of this, Zambia had to invest heavily in education at all levels. Kaunda instituted a policy where all children, irrespective of their parents' ability to pay, were given free exercise books, pens and pencils. The parents' main responsibility was to buy uniforms, pay a token "school fee" and ensure that the children attended school. This approach meant that the best pupils were promoted to achieve their best results, all the way from primary school to university level. Not every child could go to secondary school, for example, but those who did were well educated. The University of Zambia was opened in Lusaka in 1966, after Zambians all over the country had been encouraged to donate whatever they could afford towards its construction. Kaunda was appointed Chancellor and officiated at the first graduation ceremony in 1969. The main campus was situated on the Great East Road, while the medical campus was located at Ridgeway near the University Teaching Hospital. In 1979 another campus was established at the Zambia Institute of Technology in Kitwe. In 1988 the Kitwe campus was upgraded and renamed the Copperbelt University, offering business studies, industrial studies and environmental studies. Other tertiary-level institutions established during Kaunda's era were vocationally focused and fell under the aegis of the Department of Technical Education and Vocational Training. They include the and the Natural Resources Development College (both in Lusaka), the Northern Technical College at Ndola, the Livingstone Trades Training Institute in Livingstone, and teacher-training colleges. At independence Kaunda's government inherited a country with an economy that was completely under the control of foreigners. 
For example, the British South Africa Company (founded by the British imperialist Cecil Rhodes) still retained commercial assets and mineral rights that it had acquired from a concession signed with the Litunga of Bulozi in 1890. Only by threatening to expropriate it on the eve of independence did Kaunda manage to get favourable concessions from the BSAC. Deciding on a planned economy, Zambia instituted a program of national development, under the direction of the National Commission for Development Planning, which instituted a "Transitional Development Plan" and the "First National Development Plan". These two operations, which attempted to secure major investment in infrastructure and manufacturing sectors, were generally regarded as successful. A major change in the structure of Zambia's economy came with the Mulungushi Reforms of April 1968: Kaunda declared his intention to acquire an equity holding (usually 51% or more) in a number of key foreign-owned firms, to be controlled by his Industrial Development Corporation (IDC). By January 1970, Zambia had acquired majority holdings in the Zambian operations of the two major foreign mining interests, the Anglo American Corporation and the Rhodesian Selection Trust (RST); the two became the Nchanga Consolidated Copper Mines (NCCM) and Roan Consolidated Mines (RCM), respectively. Kaunda also announced the creation of a new parastatal body, the Mining Development Corporation (MINDECO), while a Finance and Development Corporation (FINDECO) allowed the Zambian government to gain control of insurance companies and building societies. Major foreign-owned banks, such as Barclays, Standard Chartered and Grindlays Bank, successfully resisted takeover. In 1971, IDC, MINDECO, and FINDECO were brought together under an omnibus parastatal, the Zambia Industrial and Mining Corporation (ZIMCO), to create one of the largest companies in sub-Saharan Africa, with Francis Kaunda as chairman of the board. 
The management contracts under which day-to-day operations of the mines had been carried out by Anglo American and RST were terminated in 1973. In 1982, NCCM and RCM were merged into the giant Zambia Consolidated Copper Mines Ltd (ZCCM). Unfortunately, this nationalisation policy was ill-timed. In 1973, a massive increase in the price of oil was followed by a slump in copper prices and a diminution of export earnings. In early 1973, copper accounted for 95% of all export earnings; by early 1975 its price had halved on the world market. By 1976, Zambia had a balance-of-payments crisis and rapidly fell into debt with the International Monetary Fund (IMF). The Third National Development Plan had to be abandoned as crisis management replaced long-term planning. By 1986 Zambia had the second-highest debt of any nation on the globe relative to its gross domestic product (GDP). The IMF insisted that the Zambian government should focus on stabilising the economy and restructuring it to reduce dependence on copper. The proposed measures included the ending of price controls, devaluation of the kwacha, reining in of government spending, cancellation of subsidies on food and fertiliser, and increased prices for farm produce. Kaunda's removal of food subsidies caused the prices of basic foodstuffs to skyrocket, sparking riots and disorder. In desperation, Kaunda attempted to sever his ties with the IMF in May 1987 and introduced a New Economic Recovery Programme in 1988. However, this was not ultimately successful, and he eventually moved toward a new understanding with the IMF in 1989. In 1990 Kaunda was forced to make major policy shifts; he announced the intention to partially privatise the parastatals. However, these changes were too little and came too late to prevent his fall from power as a result of Zambia's economic woes. 
In the wake of the Lumpa Uprising of Alice Lenshina, Kaunda proclaimed a state of emergency, banning the Lumpa Church, which he considered a major source of opposition because it refused to allow its members to participate in compulsory voting. This created animosity between the Church and UNIP, resulting in some low-level conflict which claimed numerous lives. Kaunda tried to mediate the differences between the Church, local authorities and UNIP party members but was eventually unable to control party cadres in the North. From 1964 onwards, Kaunda's government developed clearly authoritarian characteristics. Becoming increasingly intolerant of opposition, Kaunda banned all parties except UNIP following violence during the 1968 elections. However, in early 1972 he faced a new threat in the form of Simon Kapwepwe's decision to leave UNIP and found a rival party, the United Progressive Party, which Kaunda immediately attempted to suppress. Next, he appointed the Chona Commission, which was set up under the chairmanship of Mainza Chona in February 1972. Chona's task was to make recommendations for a new Zambian constitution which would effectively reduce the nation to a one-party state. The commission's terms of reference did not permit it to discuss the possible faults of Kaunda's decision. ANC party members boycotted Chona's efforts and unsuccessfully challenged the constitutional change in the courts. The Chona report was based on four months of public hearings and was submitted in October 1972 as a 'liberal' document. Finally, Kaunda neutralised Nkumbula by getting him to join UNIP and accept the Choma Declaration on 27 June 1973. The new constitution was formally promulgated on 25 August of that year. At the first elections under the new system held that December, Kaunda was the sole candidate. With all opposition having been eliminated, Kaunda allowed the creation of a personality cult. He developed a left nationalist-socialist ideology, called Zambian Humanism. 
This was based on a combination of mid-20th-century ideas of central planning/state control and what he considered basic African values: mutual aid, trust and loyalty to the community. Similar forms of African socialism were introduced inter alia in Ghana by Kwame Nkrumah ("Consciencism") and Tanzania by Julius Nyerere ("Ujamaa"), while in Zaire, President Mobutu Sese Seko, a much less "benevolent" ruler than Kaunda or Nyerere, was at a loss until he hit on the ideal ideology – 'Mobutuism'. To elaborate his ideology, Kaunda published several books: "Humanism in Zambia and a Guide to its Implementation, Parts 1, 2 and 3". Other publications on Zambian Humanism are: "Fundamentals of Zambian Humanism", by Timothy Kandeke; "Zambian Humanism, religion and social morality", by Cleve Dillion-Malone S.J. and "Zambian Humanism: some major spiritual and economic challenges", by Justin B. Zulu. "Kaunda on Violence" (US title "The Riddle of Violence") was published in 1980. He is known as the "Gandhi of Africa" or the "African Gandhi". During his early presidency Kaunda was an outspoken supporter of the anti-apartheid movement and opposed white minority rule in Southern Rhodesia. Although his nationalisation of the copper mining industry in the late 1960s and the volatility of international copper prices contributed to increased economic problems, matters were aggravated by his logistical support for the black nationalist movements in Ian Smith's Rhodesia, South West Africa, Angola, and Mozambique. Kaunda's administration later attempted to serve as a mediator between the entrenched white minority and colonial governments and the various guerilla movements which were aimed at overthrowing these respective administrations. In the early 1970s, he began permitting the most prominent guerilla organisations, such as the Rhodesian ZANU and the African National Congress, to use Zambia as a base for their operations. 
Former ANC president Oliver Tambo even spent a significant proportion of his 30-year exile living and working in Zambia. Joshua Nkomo, leader of ZAPU, also erected military encampments there, as did SWAPO and its military wing, the People's Liberation Army of Namibia. In the first twenty years of his presidency, Kaunda and his advisors repeatedly sought to acquire modern weapons from the United States. In a letter written to Lyndon B. Johnson in 1967, Kaunda inquired whether the United States would provide him with nuclear missiles; all of his requests for modern weapons were refused. In 1980, Kaunda purchased sixteen MiG-21 jets from the Soviet Union, which provoked a reaction from the United States. Kaunda responded that, after numerous failed attempts to purchase American weapons, buying from the Soviets was justified by his duty to protect his citizens and Zambia's national security. His attempted purchase of American weapons may also have been a political tactic, using fear to consolidate his one-party rule over Zambia. From April 1975, when Kaunda visited Gerald Ford at the White House in Washington and delivered a powerful speech calling for the United States to play a more active and constructive role in southern Africa, until approximately 1984, the Zambian president was arguably the key African leader involved in the international diplomacy regarding the conflicts in Angola, Rhodesia (Zimbabwe), and Namibia. He hosted Henry Kissinger's 1976 trip to Zambia, got along very well with Jimmy Carter, and worked closely with Ronald Reagan's assistant secretary of state for African affairs, Chester Crocker. While there were disagreements between Kaunda and U.S. leaders (such as when Zambia purchased Soviet MiG fighters or when he accused two American diplomats of being spies), Kaunda generally enjoyed a positive relationship with the United States during these years. 
On 26 August 1975, Kaunda acted as mediator, along with the Prime Minister of South Africa, B. J. Vorster, at Victoria Falls to discuss possibilities for an internal settlement in Southern Rhodesia with Ian Smith and the black nationalists. After the Lancaster House Agreement, Kaunda attempted to seek similar majority rule in South West Africa. He met with P. W. Botha in Botswana to debate this proposal, but apparently failed to make a serious impression. Meanwhile, the anti-white minority insurgency conflicts of southern Africa continued to place a huge economic burden on Zambia, as white minority governments were the country's main trading partners. In response, Kaunda negotiated the TAZARA Railway (Tanzam), linking Kapiri Mposhi on the Zambian Copperbelt with Tanzania's port of Dar-es-Salaam on the Indian Ocean. Completed in 1975, this was the only route for bulk trade which did not have to transit white-dominated territories. This precarious situation lasted more than 20 years, until the abolition of apartheid in South Africa. For much of the Cold War Kaunda was a strong supporter of the Non-Aligned Movement. He hosted a NAM summit in Lusaka in 1970 and served as the movement's chairman from 1970 to 1973. He maintained a close friendship with Yugoslavia's long-time leader Josip Broz Tito and is remembered by many Yugoslav officials for weeping openly over Tito's casket in 1980. He even had a special house constructed in Lusaka for Tito's visits to the country. He also visited and welcomed Romania's President Nicolae Ceaușescu in the 1970s. In 1986, the University of Belgrade (Yugoslavia) awarded him an honorary doctorate. Kaunda had frequent but cordial differences with US President Ronald Reagan, whom he met in 1983, and British Prime Minister Margaret Thatcher, mainly over what he saw as a blind eye being turned towards South African apartheid. 
He always maintained warm relations with the People's Republic of China, which had provided assistance on many projects in Zambia, including the Tazara Railway. Prior to the first Gulf War, Kaunda cultivated a friendship with Iraqi President Saddam Hussein, with whom he secured oil resources for his nation. He even went so far as to name Zambian streets in Saddam's honour. In August 1989, Farzad Bazoft was detained in Iraq for alleged espionage. He was accompanied by a British nurse, Daphne Parish, who was arrested as well. Bazoft was an Iranian-born freelance journalist attempting to expose Saddam's mass murder of Iraqi Kurds. Bazoft was later tried and condemned to death, but Kaunda managed to negotiate Parish's release. Kaunda served as chairman of the Organization of African Unity (OAU) from 1970 to 1973. The creation of a one-party state effectively made Kaunda's presidency a legal dictatorship. From 1973 onward, his rule became increasingly autocratic. He personally appointed the Central Committee of UNIP, although the process was given a veneer of legitimacy by being "approved" by a National Congress of the party. In theory, Kaunda's nominations could be discarded by Congress. In practice, his control over the party was such that they were always accepted without modification. The argument used was that "the President knows the people who can work well with him, so if we modify the nominations we will end up with a less effective team". In turn, the Central Committee nominated a sole candidate for the party presidency. Since the members of the Central Committee had been nominated by Kaunda, he was always the sole candidate. Constitutionally, whoever was in good standing with the party was at liberty to challenge him. In practice, no one did so because of his charisma and intolerance for dissent. As president of UNIP, Kaunda was the only candidate for president of the republic. 
After UNIP went through the formalities of (re)electing him as its leader, the rest of the Zambian population was given the opportunity to express approval or disapproval of Kaunda by voting either "Yes" or "No" in a referendum. Since parliamentary elections took place at the same time, there was great pressure placed on parliamentary candidates to "campaign" for a "Yes" vote for Kaunda, in addition to their own campaigns. Parastatal companies (which were controlled through ZIMCO – Zambia Industrial and Mining Corporation) were also under pressure to "campaign" for Kaunda by buying advertising space in the two national newspapers (Times of Zambia and Zambia Daily Mail) exhorting the electorate to give the president a "massive 'Yes' vote". Under this system, Kaunda was confirmed as president in 1978, 1983 and 1988, each time with official results showing over 80 percent of voters approving his candidacy. The parliamentary elections were also controlled by Kaunda: the names of candidates had to be submitted to UNIP's Central Committee, which then selected three people to stand for any particular constituency. The Central Committee could veto any candidate for any reason. Using these methods, Kaunda kept any potential rivals at bay by ensuring that they never got into a position to accrue any political power. For all intents and purposes, Kaunda held all governing power in the nation. This was the tactic he used when he saw off Nkumbula and Kapwepwe's challenges to his sole candidacy for the 1978 UNIP elections. 
On that occasion, UNIP's constitution was "amended" overnight to bring in rules that invalidated the two challengers' nominations: Kapwepwe was told he could not stand because only people who had been members for five years could be nominated to the presidency (he had only rejoined UNIP three years before); Nkumbula was outmaneuvered by the introduction of a new rule requiring each candidate to have the signatures of 200 delegates from "each" province to back his candidacy. Less creative tactics were used on a third prospective challenger: UNIP's Youth Wing simply beat him within an inch of his life, leaving him in no state to submit his nomination. Eventually, however, economic troubles and increasing international pressure to bring more democracy to Africa caused Kaunda's position to totter. While he had been known for his vehement opposition to apartheid in South Africa, his critics were increasingly emboldened to speak out against his authoritarian rule, and also questioned his competence. His close friend Julius Nyerere had retired as president of Tanzania in 1985 and was quietly encouraging Kaunda to follow suit. Matters quickly came to a head in the summer of 1990. In July, amid three days of rioting in the capital, Kaunda announced that a referendum on whether to legalize other parties would be held that October. However, he himself argued for maintaining UNIP's monopoly, claiming that a multiparty system would lead to chaos. The announcement almost came too late; hours later, a disgruntled officer went on the radio to announce that Kaunda had been overthrown. The coup attempt was broken three to four hours later, but it was clear that Kaunda and UNIP were reeling. Kaunda tried to mollify the opposition by moving the referendum to August 1991; the opposition claimed the original date did not allow enough time for voter registration. 
While expressing willingness to have the Zambian people vote on a multiparty system, Kaunda maintained that only a one-party state could prevent tribalism and violence from engulfing the country. By September, however, opposition demands forced Kaunda to reverse course. He cancelled the referendum, and instead recommended constitutional amendments that would dismantle UNIP's monopoly on power. He also announced a snap general election for the following year, two years before they were due. He signed the necessary amendments into law in December. At these elections, the Movement for Multiparty Democracy (MMD), helmed by trade union leader Frederick Chiluba, swept UNIP from power in a landslide. In the presidential election, Kaunda was roundly defeated, taking only 24 percent of the vote to Chiluba's 75 percent. UNIP was cut down to only 25 seats in the legislature. One of the issues in the campaign was a plan by Kaunda to turn over one quarter of the nation's land to Maharishi Mahesh Yogi, an Indian guru who promised that he would use it for a network of utopian agricultural enclaves that proponents said would create "heaven on earth". Kaunda was forced in a television interview to deny practising Transcendental Meditation. When Kaunda handed power to Chiluba on 2 November 1991, he became the second mainland African head of state to allow free multiparty elections and to peacefully relinquish power when he lost. The first, Mathieu Kérékou of Benin, had done so in March of that year. After leaving office, Kaunda clashed frequently with Chiluba's government and the MMD. Chiluba later attempted to deport Kaunda on the grounds that he was a Malawian. The MMD dominated government under the leadership of Chiluba had the constitution amended, barring citizens with foreign parentage from standing for the presidency, to prevent Kaunda from contesting the next elections in 1996. Kaunda retired from politics after he was accused of involvement in the failed 1997 coup attempt. 
After the coup, he was placed under arrest by Chiluba on Boxing Day 1997. Many officials in the region appealed against this, and on New Year's Eve of the same year he was moved to house arrest until his court date. In 1999 Kaunda was declared stateless by the Ndola High Court in a judgment delivered by Justice Chalendo Sakala. A full transcript of the judgment was published in the "Times of Zambia" edition of 1 April 1999. Kaunda, however, successfully challenged this decision in the Supreme Court of Zambia, which declared him to be a Zambian citizen in the "Lewanika and Others vs. Chiluba" ruling. Since retiring, he has been involved in various charitable organisations. His most notable contribution has been his zeal in the fight against the spread of HIV/AIDS; one of Kaunda's children died of the disease in the 1980s. From 2002 to 2004, he was an "African President-in-Residence" at the African Presidential Archives and Research Center at Boston University. In 2006, he was seen in attendance at an episode of "Dancing with the Stars"; Kaunda is an avid ballroom dancer. President Michael Sata made use of Kaunda as a roving ambassador for Zambia. In February 2014 Kaunda was hospitalized for a fever at Lusaka Trust Hospital. Since Kenneth Kaunda was known to wear a safari suit (safari jacket paired with trousers) constantly, the safari suit is still commonly referred to as a "Kaunda suit" throughout sub-Saharan Africa. Kaunda also wrote music about the independence he hoped to achieve, although only one of his songs is widely known among Zambians ("Tiyende pamodzi ndi mtima umo", literally "Let's walk together with one heart"). He would ride his bicycle for hundreds of miles singing his songs.
https://en.wikipedia.org/wiki?curid=17355
K2 K2, at 8,611 metres above sea level, is the second highest mountain in the world, after Mount Everest at 8,849 metres. It is located on the China–Pakistan border between Baltistan in the Gilgit-Baltistan region of northern Pakistan, and Dafdar Township in Taxkorgan Tajik Autonomous County of Xinjiang, China. K2 is the highest point of the Karakoram range and the highest point in both Pakistan and Xinjiang. K2 is known as the "Savage Mountain" after George Bell, a climber on the 1953 American Expedition, told reporters "It's a savage mountain that tries to kill you." Of the five highest mountains in the world, K2 is the deadliest: approximately one person dies on the mountain for every four who reach the summit. K2 is also occasionally known as Chhogori or Mount Godwin-Austen; other nicknames include The King of Mountains, The Mountaineers' Mountain, and The Mountain of Mountains, the last after climber Reinhold Messner gave his book about K2 that title. K2 is the only eight-thousand metre peak that has never been climbed during winter or from its East Face. Ascents have almost always been made in July and August, the warmest times of year; K2's more northern location makes it more susceptible to inclement and colder weather. The peak has now been climbed by almost all of its ridges. Although the summit of Everest is at a higher altitude, K2 is a more difficult and dangerous climb, due in part to its more inclement weather. Only 367 people have completed the ascent, and 86 people have died attempting the climb, according to the list of deaths on eight-thousanders. The summit was reached for the first time by the Italian climbers Lino Lacedelli and Achille Compagnoni, on the 1954 Italian Karakoram expedition led by Ardito Desio. The name K2 is derived from the notation used by the Great Trigonometrical Survey of British India. 
Thomas Montgomerie made the first survey of the Karakoram from Mount Haramukh, some to the south, and sketched the two most prominent peaks, labeling them K1 and K2. The policy of the Great Trigonometrical Survey was to use local names for mountains wherever possible and K1 was found to be known locally as Masherbrum. K2, however, appeared not to have acquired a local name, possibly due to its remoteness. The mountain is not visible from Askole, the last village to the south, or from the nearest habitation to the north, and is only fleetingly glimpsed from the end of the Baltoro Glacier, beyond which few local people would have ventured. The name "Chogori", derived from two Balti words, "chhogo" ("big") and "ri" ("mountain") (چھوغوری) has been suggested as a local name, but evidence for its widespread use is scant. It may have been a compound name invented by Western explorers or simply a bemused reply to the question "What's that called?" It does, however, form the basis for the name "Qogir" () by which Chinese authorities officially refer to the peak. Other local names have been suggested including "Lamba Pahar" ("Tall Mountain" in Urdu) and "Dapsang", but are not widely used. With the mountain lacking a local name, the name "Mount Godwin-Austen" was suggested, in honor of Henry Godwin-Austen, an early explorer of the area. While the name was rejected by the Royal Geographical Society, it was used on several maps and continues to be used occasionally. The surveyor's mark, K2, therefore continues to be the name by which the mountain is commonly known. It is now also used in the Balti language, rendered as "Kechu" or "Ketu". The Italian climber Fosco Maraini argued in his account of the ascent of Gasherbrum IV that while the name of K2 owes its origin to chance, its clipped, impersonal nature is highly appropriate for so remote and challenging a mountain. 
André Weil named K3 surfaces in mathematics partly after the beauty of the mountain K2. K2 lies in the northwestern Karakoram Range. It is located in the Baltistan region of Gilgit–Baltistan, Pakistan, and the Taxkorgan Tajik Autonomous County of Xinjiang, China. The Tarim sedimentary basin borders the range on the north and the Lesser Himalayas on the south. Melt waters from vast glaciers, such as those south and east of K2, feed agriculture in the valleys and contribute significantly to the regional fresh-water supply. K2 is ranked 22nd by topographic prominence, a measure of a mountain's independent stature, because it is part of the same extended area of uplift (including the Karakoram, the Tibetan Plateau, and the Himalaya) as Mount Everest, in that it is possible to follow a path from K2 to Everest that goes no lower than , at the Kora La on the Nepal/China border in the Mustang Lo. Many other peaks that are far lower than K2 are more independent in this sense. It is, however, the most prominent peak within the Karakoram range. K2 is notable for its local relief as well as its total height. It stands over above much of the glacial valley bottoms at its base. It is a consistently steep pyramid, dropping quickly in almost all directions. The north side is the steepest: there it rises over above the K2 (Qogir) Glacier in only of horizontal distance. In most directions, it achieves over of vertical relief in less than . A 1986 expedition led by George Wallerstein made an erroneous measurement showing that K2 was taller than Mount Everest, and therefore the tallest mountain in the world. A corrected measurement was made in 1987, but by then the claim that K2 was the tallest mountain in the world had already made it into many news reports and reference works. 
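The notion of topographic prominence used above can be illustrated with a small sketch. This is an assumed simplification to a one-dimensional elevation profile, not how surveys actually compute it: a peak's prominence is its height above the highest saddle separating it from any higher terrain, and the highest point of a landmass has prominence equal to its own height.

```python
# Toy illustration of topographic prominence on a 1-D elevation profile
# (an assumed simplification; real prominence is computed over 2-D terrain).
def prominence(profile: list, peak: int) -> float:
    height = profile[peak]
    best_saddle = float("-inf")
    # Walk left and right until higher ground is found, tracking the
    # lowest point (saddle) crossed on the way; keep the highest such saddle.
    for step in (-1, 1):
        i, saddle = peak, height
        while 0 <= i < len(profile):
            saddle = min(saddle, profile[i])
            if profile[i] > height:          # reached higher terrain
                best_saddle = max(best_saddle, saddle)
                break
            i += step
    if best_saddle == float("-inf"):         # no higher terrain anywhere
        return height
    return height - best_saddle

print(prominence([0, 5, 2, 8], 1))  # saddle at 2, so prominence is 3
```

In these terms, the path from K2 to Everest through the Kora La is exactly the "walk to higher terrain" in the sketch, and the lowest point on that path is K2's key saddle.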
The mountains of K2 and Broad Peak, and the area westward to the lower reaches of the Sarpo Laggo glacier, consist of metamorphic rocks known as the "K2 Gneiss", part of the Karakoram Metamorphic Complex. The K2 Gneiss consists of a mixture of orthogneiss and biotite-rich paragneiss. On the south and southeast face of K2, the orthogneiss consists of a mixture of a strongly foliated plagioclase-hornblende gneiss and a biotite-hornblende-K-feldspar orthogneiss, which has been intruded by garnet-mica leucogranitic dikes. In places, the paragneisses include clinopyroxene-hornblende-bearing psammites, garnet (grossular)-diopside marbles, and biotite-graphite phyllites. Near the memorial to the climbers who have died on K2, above Base Camp on the south spur, thin impure marbles with quartzites and mica schists, called the "Gilkey-Puchoz sequence", are interbanded within the orthogneisses. On the west face of Broad Peak and the south spur of K2, lamprophyre dikes, which consist of clinopyroxene and biotite-porphyritic vogesites and minettes, have intruded the K2 Gneiss. The K2 Gneiss is separated from the surrounding sedimentary and metasedimentary rocks of the Karakoram Metamorphic Complex by normal faults. For example, a fault separates the K2 Gneiss of the east face of K2 from the limestones and slates comprising nearby Skyang Kangri. 40Ar/39Ar ages of 115 to 120 million years obtained from, and geochemical analyses of, the K2 Gneiss demonstrate that it is a metamorphosed, older, Cretaceous, pre-collisional granite. The granitic precursor (protolith) to the K2 Gneiss originated as the result of the production of large bodies of magma by a northward-dipping subduction zone along what was then the continental margin of Asia, and their intrusion as batholiths into its lower continental crust. 
During the initial collision of the Asian and Indian plates, this granitic batholith was buried to depths of about or more, highly metamorphosed, highly deformed, and partially remelted during the Eocene to form gneiss. The K2 Gneiss was later intruded by leucogranite dikes and finally exhumed and uplifted along major breakback thrust faults during post-Miocene time. The K2 Gneiss was exposed as the entire K2-Broad Peak-Gasherbrum range experienced rapid uplift with which erosion rates have been unable to keep pace. The mountain was first surveyed by a European survey team in 1856. Team member Thomas Montgomerie designated the mountain "K2" for being the second peak of the Karakoram range. The other peaks were originally named K1, K3, K4, and K5, but were eventually renamed Masherbrum, Gasherbrum IV, Gasherbrum II, and Gasherbrum I, respectively. In 1892, Martin Conway led a British expedition that reached "Concordia" on the Baltoro Glacier. The first serious attempt to climb K2 was undertaken in 1902 by Oscar Eckenstein, Aleister Crowley, Jules Jacot-Guillarmod, Heinrich Pfannl, Victor Wessely, and Guy Knowles via the Northeast Ridge. In the early 1900s, modern transportation did not exist in the region: it took "fourteen days just to reach the foot of the mountain". After five serious and costly attempts, the team reached —although considering the difficulty of the challenge, and the lack of modern climbing equipment or weatherproof fabrics, Crowley's statement that "neither man nor beast was injured" highlights the pioneering spirit and bravery of the attempt. The failures were also attributed to sickness (Crowley was suffering the residual effects of malaria), a combination of questionable physical training, personality conflicts, and poor weather conditions—of 68 days spent on K2 (at the time, the record for the longest time spent at such an altitude) only eight provided clear weather. 
The next expedition to K2, in 1909, led by Prince Luigi Amedeo, Duke of the Abruzzi, reached an elevation of around on the South East Spur, now known as the "Abruzzi Spur" (or Abruzzi Ridge). This would eventually become part of the standard route, but was abandoned at the time due to its steepness and difficulty. After trying and failing to find a feasible alternative route on the West Ridge or the North East Ridge, the Duke declared that K2 would never be climbed, and the team switched its attention to Chogolisa, where the Duke came within of the summit before being driven back by a storm. The next attempt on K2 was not made until 1938, when the First American Karakoram expedition led by Charles Houston made a reconnaissance of the mountain. They concluded that the Abruzzi Spur was the most practical route and reached a height of around before turning back due to diminishing supplies and the threat of bad weather. The following year, the 1939 American Karakoram expedition led by Fritz Wiessner came within of the summit but ended in disaster when Dudley Wolfe, Pasang Kikuli, Pasang Kitar, and Pintso disappeared high on the mountain. Charles Houston returned to K2 to lead the 1953 American expedition. The attempt ended in failure after a storm pinned down the team for 10 days at , during which time climber Art Gilkey became critically ill. A desperate retreat followed, in which Pete Schoening saved almost the entire team during a mass fall (known simply as The Belay), and Gilkey was killed, either in an avalanche or in a deliberate attempt to avoid burdening his companions. Despite the retreat and tragic end, the expedition has been given iconic status in mountaineering history. The 1954 Italian Karakoram expedition finally succeeded in ascending to the summit of K2 via the Abruzzi Spur on 31 July 1954. The expedition was led by Ardito Desio, and the two climbers who reached the summit were Lino Lacedelli and Achille Compagnoni. 
The team included a Pakistani member, Colonel Muhammad Ata-ullah, who had been a part of the 1953 American expedition. Also on the expedition were Walter Bonatti and Pakistani Hunza porter Amir Mehdi, who both proved vital to the expedition's success in that they carried oxygen tanks up the mountain for Lacedelli and Compagnoni. The ascent is controversial because Lacedelli and Compagnoni established their camp at a higher elevation than originally agreed with Mehdi and Bonatti. Because it was too dark to ascend or descend, Mehdi and Bonatti were forced to spend the night in the open above 8,000 metres; when they descended, they left the oxygen tanks behind as requested. Bonatti and Mehdi survived, but Mehdi was hospitalized for months and had to have his toes amputated because of frostbite. Efforts in the 1950s to suppress these facts to protect Lacedelli and Compagnoni's reputations as Italian national heroes were later brought to light. It was also revealed that the moving of the camp was deliberate, a move apparently made because Compagnoni feared being outshone by the younger Bonatti. Bonatti was given the blame for Mehdi's hospitalization. On 9 August 1977, 23 years after the Italian expedition, Ichiro Yoshizawa led the second successful ascent, with Ashraf Aman as the first native Pakistani climber. The Japanese expedition took the Abruzzi Spur, and used more than 1,500 porters. The third ascent of K2 was in 1978, via a new route, the long and corniced Northeast Ridge. The top of the route traversed left across the East Face to avoid a vertical headwall and joined the uppermost part of the Abruzzi route. This ascent was made by an American team, led by James Whittaker; the summit party was Louis Reichardt, Jim Wickwire, John Roskelley, and Rick Ridgeway. Wickwire endured an overnight bivouac about below the summit, one of the highest bivouacs in history. 
This ascent was emotional for the American team, as they saw themselves as completing a task that had been begun by the 1938 team forty years earlier. Another notable Japanese ascent was that of the difficult North Ridge on the Chinese side of the peak in 1982. A Japanese team led by Isao Shinkai put three members, Naoe Sakashita, Hiroshi Yoshino, and Yukihiro Yanagisawa, on the summit on 14 August. However, Yanagisawa fell and died on the descent. Four other members of the team achieved the summit the next day. The first climber to reach the summit of K2 twice was Czech climber Josef Rakoncaj. Rakoncaj was a member of the 1983 Italian expedition led by Francesco Santon, which made the second successful ascent of the North Ridge (31 July 1983). Three years later, on 5 July 1986, he reached the summit via the Abruzzi Spur (combined with a solo ascent of Broad Peak's West Face) as a member of Agostino da Polenza's international expedition. The first woman to summit K2 was Polish climber Wanda Rutkiewicz on 23 June 1986. Liliane and Maurice Barrard, who had summitted later that day, fell during the descent; Liliane Barrard's body was found on 19 July 1986 at the foot of the south face. In 1986, two Polish expeditions summitted via two new routes, the Magic Line and the Polish Line (Jerzy Kukuczka and Tadeusz Piotrowski). Piotrowski fell to his death as the two were descending. This latter route has never been repeated. Thirteen climbers from several expeditions died in the 1986 K2 Disaster. Another six mountaineers died on 13 August 1995, while eleven climbers died in the 2008 K2 disaster. There are a number of routes on K2, of somewhat different character, but they all share some key difficulties, the first being the extremely high altitude and resulting lack of oxygen: there is only one-third as much oxygen available to a climber on the summit of K2 as there is at sea level. 
The second is the propensity of the mountain to experience extreme storms of several days' duration, which have resulted in many of the deaths on the peak. The third is the steep, exposed, and committing nature of all routes on the mountain, which makes retreat more difficult, especially during a storm. Despite many attempts, there have been no successful winter ascents. All major climbing routes lie on the Pakistani side, which is also where base camp is located. The standard route of ascent, used far more than any other (75% of all climbers use it), is the Abruzzi Spur, located on the Pakistani side, first attempted by Prince Luigi Amedeo, Duke of the Abruzzi, in 1909. This is the southeast ridge of the peak, rising above the Godwin-Austen Glacier. The spur proper begins at an altitude of , where Advanced Base Camp is usually placed. The route follows an alternating series of rock ribs, snow/ice fields, and some technical rock climbing on two famous features, "House's Chimney" and the "Black Pyramid." Above the Black Pyramid, dangerously exposed and difficult-to-navigate slopes lead to the easily visible "Shoulder", and thence to the summit. The last major obstacle is a narrow couloir known as the "Bottleneck", which places climbers dangerously close to a wall of seracs that form an ice cliff to the east of the summit. It was partly due to the collapse of one of these seracs around 2001 that no climbers summitted the peak in 2002 and 2003. On 1 August 2008, 11 climbers from several expeditions died during a series of accidents, including several ice falls in the Bottleneck. Almost opposite the Abruzzi Spur is the North Ridge, which ascends the Chinese side of the peak. It is rarely climbed, partly due to very difficult access, involving crossing the Shaksgam River, which is a hazardous undertaking. In contrast to the crowds of climbers and trekkers at the Abruzzi basecamp, usually at most two teams are encamped below the North Ridge. 
This route, more technically difficult than the Abruzzi, ascends a long, steep, primarily rock ridge to high on the mountain—Camp IV, the "Eagle's Nest" at —and then crosses a dangerously slide-prone hanging glacier by a leftward climbing traverse to reach a snow couloir which accesses the summit. Besides the original Japanese ascent, a notable ascent of the North Ridge was the one in 1990 by Greg Child, Greg Mortimer, and Steve Swenson, which was done alpine style above Camp 2, though using some fixed ropes already put in place by a Japanese team. Because 75% of people who climb K2 use the Abruzzi Spur, the other routes are rarely climbed. No one has climbed the East Face of the mountain due to the instability of the snow and ice formations on that side. For most of its climbing history, K2 was not usually climbed with supplemental oxygen, and small, relatively lightweight teams were the norm. However, the 2004 season saw a great increase in the use of oxygen: 28 of 47 summiteers used oxygen that year. Acclimatisation is essential when climbing without oxygen in order to avoid altitude sickness. K2's summit is well above the altitude at which high altitude pulmonary edema (HAPE) or high altitude cerebral edema (HACE) can occur. In mountaineering, when ascending above an altitude of , the climber enters what is known as the "death zone".
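The one-third figure for oxygen on the summit can be sanity-checked with the isothermal barometric formula. The sketch below is only an approximation: the scale height used (about 7,600 m, appropriate for a cold atmosphere) and K2's 8,611 m elevation are assumed values, not taken from the text above.

```python
import math

# Rough check of the "one-third as much oxygen" claim using the isothermal
# barometric formula P = P0 * exp(-h / H). The scale height H and K2's
# elevation are assumed values for this illustration.
P0 = 101.325       # sea-level pressure in kPa
H = 7600.0         # assumed atmospheric scale height in metres
h_summit = 8611.0  # elevation of K2 in metres

pressure_ratio = math.exp(-h_summit / H)  # fraction of sea-level pressure
print(f"Summit pressure is about {pressure_ratio:.2f} of sea level")
```

Since oxygen makes up a roughly fixed fraction of air at these altitudes, the available oxygen falls in the same proportion, which comes out to roughly one third of the sea-level value.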
https://en.wikipedia.org/wiki?curid=17359
Komodo dragon The Komodo dragon ("Varanus komodoensis"), also known as the Komodo monitor, is a species of lizard found in the Indonesian islands of Komodo, Rinca, Flores, and Gili Motang. A member of the monitor lizard family Varanidae, it is the largest extant species of lizard, growing to a maximum length of in rare cases and weighing up to approximately . As a result of their size, these lizards dominate the ecosystems in which they live. Komodo dragons hunt and ambush prey including invertebrates, birds, and mammals. It has been claimed that they have a venomous bite; there are two glands in the lower jaw which secrete several toxic proteins. The biological significance of these proteins is disputed, but the glands have been shown to secrete an anticoagulant. Komodo dragons' group behaviour in hunting is exceptional in the reptile world. The diet of big Komodo dragons mainly consists of Timor deer, though they also eat considerable amounts of carrion. Komodo dragons also occasionally attack humans. Mating begins between May and August, and the eggs are laid in September; as many as 20 eggs are deposited at a time in an abandoned megapode nest or in a self-dug nesting hole. The eggs are incubated for seven to eight months, hatching in April, when insects are most plentiful. Young Komodo dragons are vulnerable and therefore dwell in trees, safe from predators and cannibalistic adults. They take 8 to 9 years to mature, and are estimated to live up to 30 years. Komodo dragons were first recorded by Western scientists in 1910. Their large size and fearsome reputation make them popular zoo exhibits. In the wild, their range has contracted due to human activities, and they are listed as vulnerable by the IUCN. They are protected under Indonesian Law, and Komodo National Park was founded in 1980 to aid protection efforts. 
Komodo dragons were first documented by Europeans in 1910, when rumors of a "land crocodile" reached Lieutenant van Steyn van Hensbroek of the Dutch colonial administration. Widespread notoriety came after 1912, when Peter Ouwens, the director of the Zoological Museum at Bogor, Java, published a paper on the topic after receiving a photo and a skin from the lieutenant, as well as two other specimens from a collector. The first two live Komodo dragons to arrive in Europe were exhibited in the Reptile House at London Zoo when it opened in 1927. Joan Beauchamp Procter made some of the earliest observations of these animals in captivity and she demonstrated their behaviour at a Scientific Meeting of the Zoological Society of London in 1928. The Komodo dragon was the driving factor for an expedition to Komodo Island by W. Douglas Burden in 1926. After returning with 12 preserved specimens and two live ones, this expedition provided the inspiration for the 1933 movie "King Kong". It was also Burden who coined the common name "Komodo dragon". Three of his specimens were stuffed and are still on display in the American Museum of Natural History. The Dutch island administration, realizing the limited number of individuals in the wild, soon outlawed sport hunting and heavily limited the number of individuals taken for scientific study. Collecting expeditions ground to a halt with the occurrence of World War II, not resuming until the 1950s and 1960s, when studies examined the Komodo dragon's feeding behavior, reproduction, and body temperature. At around this time, an expedition was planned in which a long-term study of the Komodo dragon would be undertaken. This task was given to the Auffenberg family, who stayed on Komodo Island for 11 months in 1969. During their stay, Walter Auffenberg and his assistant Putra Sastrawan captured and tagged more than 50 Komodo dragons. 
Research from the Auffenberg expedition proved to be enormously influential in raising Komodo dragons in captivity. Research after that of the Auffenberg family has shed more light on the nature of the Komodo dragon, with biologists such as Claudio Ciofi continuing to study the creatures. The Komodo dragon is also sometimes known as the Komodo monitor or the Komodo Island monitor in scientific literature, although this name is uncommon. To the natives of Komodo Island, it is referred to as "ora", "buaya darat" ("land crocodile"), or "biawak raksasa" ("giant monitor"). They are also sometimes called "giant monitor lizards", but that name is easily confused with fossil species found in Australia. The evolutionary development of the Komodo dragon started with the genus "Varanus", which originated in Asia about 40 million years ago and migrated to Australia, where it evolved into giant forms (the largest of all being the recently extinct "Megalania"), helped by the absence of competing placental carnivorans. Around 15 million years ago, a collision between the continental landmasses of Australia and Southeast Asia allowed these larger varanids to move back into what is now the Indonesian archipelago, extending their range as far east as the island of Timor. The Komodo dragon is believed to have differentiated from its Australian ancestors about 4 million years ago. However, recent fossil evidence from Queensland suggests the Komodo dragon actually evolved in Australia before spreading to Indonesia. Dramatic lowering of sea level during the last glacial period uncovered extensive stretches of continental shelf that the Komodo dragon colonised, becoming isolated in its present island range as sea levels rose afterwards. 
Fossils of extinct Pliocene species of similar size to the modern Komodo dragon, such as "Varanus sivalensis", have been found in Eurasia as well, indicating that they fared well even in environments containing competition, such as mammalian carnivores, until the climate change and extinction events that marked the beginning of the Pleistocene. Genetic analysis of mitochondrial DNA shows the Komodo dragon to be the closest relative (sister taxon) of the lace monitor ("V. varius"), with their common ancestor diverging from a lineage that gave rise to the crocodile monitor ("Varanus salvadorii") of New Guinea. In the wild, adult Komodo dragons usually weigh around , although captive specimens often weigh more. According to "Guinness World Records", an average adult male will weigh and measure , while an average female will weigh and measure . The largest verified wild specimen was long and weighed , including its undigested food. The Komodo dragon has a tail as long as its body, as well as about 60 frequently replaced, serrated teeth that can measure up to in length. Its saliva is frequently blood-tinged because its teeth are almost completely covered by gingival tissue that is naturally lacerated during feeding. It also has a long, yellow, deeply forked tongue. Komodo dragon skin is reinforced by armoured scales, which contain tiny bones called osteoderms that function as a sort of natural chain-mail. This rugged hide makes Komodo dragon skin a poor source of leather. Additionally, these osteoderms become more extensive and variable in shape as the Komodo dragon ages, ossifying more extensively as the lizard grows. These osteoderms are absent in hatchlings and juveniles, indicating that the natural armor develops as a product of age and competition between adults for protection in intraspecific combat over food and mates. 
As with other varanids, Komodo dragons have only a single ear bone, the stapes, for transferring vibrations from the tympanic membrane to the cochlea. This arrangement means they are likely restricted to sounds in the 400 to 2,000 hertz range, compared to humans, who hear between 20 and 20,000 hertz. They were formerly thought to be deaf after a study reported no agitation in wild Komodo dragons in response to whispers, raised voices, or shouts. This was disputed when London Zoological Garden employee Joan Procter trained a captive specimen to come out to feed at the sound of her voice, even when she could not be seen. The Komodo dragon can see objects as far away as , but because its retinas contain only cones, it is thought to have poor night vision. It can distinguish colours, but has poor visual discrimination of stationary objects. As with many other reptiles, the Komodo dragon primarily relies on its tongue to detect, taste, and smell stimuli, using the vomeronasal sense via the Jacobson's organ rather than the nostrils. With the help of a favorable wind and its habit of swinging its head from side to side as it walks, a Komodo dragon may be able to detect carrion from away. It has only a few taste buds in the back of its throat. Its scales, some of which are reinforced with bone, have sensory plaques connected to nerves to facilitate its sense of touch. The scales around the ears, lips, chin, and soles of the feet may have three or more sensory plaques. The Komodo dragon prefers hot and dry places, and typically lives in dry, open grassland, savanna, and tropical forest at low elevations. As an ectotherm, it is most active in the day, although it exhibits some nocturnal activity. Komodo dragons are solitary, coming together only to breed and eat. They are capable of running rapidly in brief sprints up to , diving up to , and climbing trees proficiently when young through use of their strong claws. 
To catch out-of-reach prey, the Komodo dragon may stand on its hind legs and use its tail as a support. As it matures, its claws are used primarily as weapons, as its great size makes climbing impractical. For shelter, the Komodo dragon digs holes that can measure from wide with its powerful forelimbs and claws. Because of its large size and habit of sleeping in these burrows, it is able to conserve body heat throughout the night and minimise its basking period the morning after. The Komodo dragon hunts in the afternoon, but stays in the shade during the hottest part of the day. These special resting places, usually located on ridges with cool sea breezes, are marked with droppings and are cleared of vegetation. They serve as strategic locations from which to ambush deer. As a result of their size, Komodo dragons dominate the ecosystems in which they live. They are carnivores; although they have been considered to eat mostly carrion, they frequently ambush live prey with a stealthy approach. When suitable prey arrives near a dragon's ambush site, it will suddenly charge at the animal at high speed and go for the underside or the throat. Komodo dragons do not deliberately allow prey to escape with fatal injuries, but try to kill prey outright using a combination of lacerating damage and blood loss. They have been recorded as killing wild pigs within seconds, and observations of Komodo dragons tracking prey for long distances are likely misinterpreted cases of prey escaping an attack before succumbing to infection. Komodo dragons have been observed knocking down large pigs and deer with their strong tails. They are able to locate carcasses using their keen sense of smell, which can detect a dead or dying animal from a range of up to . Komodo dragons eat by tearing large chunks of flesh and swallowing them whole while holding the carcass down with their forelegs. 
For smaller prey up to the size of a goat, their loosely articulated jaws, flexible skulls, and expandable stomachs allow them to swallow prey whole. The undigested vegetable contents of a prey animal's stomach and intestines are typically avoided. The copious amounts of red saliva that Komodo dragons produce help to lubricate the food, but swallowing is still a long process (15–20 minutes to swallow a goat). A Komodo dragon may attempt to speed up the process by ramming the carcass against a tree to force it down its throat, sometimes ramming so forcefully that the tree is knocked down. A small tube under the tongue that connects to the lungs allows it to breathe while swallowing. After eating up to 80% of its body weight in one meal, it drags itself to a sunny location to speed digestion, as the food could rot and poison the dragon if left undigested in its stomach for too long. Because of their slow metabolism, large dragons can survive on as few as 12 meals a year. After digestion, the Komodo dragon regurgitates a mass of horns, hair, and teeth known as the gastric pellet, which is covered in malodorous mucus. After regurgitating the gastric pellet, it rubs its face in the dirt or on bushes to get rid of the mucus, suggesting it does not relish the scent of its own excretions. The largest animals eat first, while the smaller ones follow a hierarchy. The largest male asserts his dominance and the smaller males show their submission by use of body language and rumbling hisses. Dragons of equal size may resort to "wrestling". Losers usually retreat, though they have been known to be killed and eaten by victors. The Komodo dragon's diet is wide-ranging, and includes invertebrates, other reptiles (including smaller Komodo dragons), birds, bird eggs, small mammals, monkeys, wild boar, goats, deer, horses, and water buffalo. Young Komodos will eat insects, eggs, geckos, and small mammals, while adults prefer to hunt large mammals. Occasionally, they attack and bite humans. 
Sometimes they consume human corpses, digging up bodies from shallow graves. This habit of raiding graves caused the villagers of Komodo to move their graves from sandy to clay ground, and pile rocks on top of them, to deter the lizards. The Komodo dragon may have evolved to feed on the extinct dwarf elephant "Stegodon" that once lived on Flores, according to evolutionary biologist Jared Diamond. The Komodo dragon drinks by sucking water into its mouth via buccal pumping (a process also used for respiration), lifting its head, and letting the water run down its throat. Although previous studies proposed that Komodo dragon saliva contains a variety of highly septic bacteria that would help to bring down prey, research in 2013 suggested that the bacteria in the mouths of Komodo dragons are ordinary and similar to those found in other carnivores. They actually have surprisingly good mouth hygiene. As Bryan Fry put it: "After they are done feeding, they will spend 10 to 15 minutes lip-licking and rubbing their head in the leaves to clean their mouth ... Unlike people have been led to believe, they do not have chunks of rotting flesh from their meals on their teeth, cultivating bacteria." Nor do Komodo dragons wait for prey to die and track it at a distance, as vipers do; observations of them hunting deer, boar and in some cases buffalo reveal that they kill prey in less than half an hour, using their dentition to cause shock and trauma. The observation of prey dying of sepsis would then be explained by the natural instinct of water buffalos, who are not native to the islands where the Komodo dragon lives, to run into water after escaping an attack. The warm, faeces-filled water would then cause the infections. The study used samples from 16 captive dragons (10 adults and six neonates) from three US zoos. Researchers have isolated a powerful antibacterial peptide from the blood plasma of Komodo dragons, VK25. 
Based on their analysis of this peptide, they have synthesized a short peptide dubbed DRGN-1 and tested it against multidrug-resistant (MDR) pathogens. Preliminary results of these tests show that DRGN-1 is effective in killing drug-resistant bacterial strains and even some fungi. It has the added observed benefit of significantly promoting wound healing in both uninfected and mixed biofilm infected wounds. In late 2005, researchers at the University of Melbourne speculated the perentie ("Varanus giganteus"), other species of monitors, and agamids may be somewhat venomous. The team believes the immediate effects of bites from these lizards were caused by mild envenomation. Bites on human digits by a lace monitor ("V. varius"), a Komodo dragon, and a spotted tree monitor ("V. scalaris") all produced similar effects: rapid swelling, localised disruption of blood clotting, and shooting pain up to the elbow, with some symptoms lasting for several hours. In 2009, the same researchers published further evidence demonstrating Komodo dragons possess a venomous bite. MRI scans of a preserved skull showed the presence of two glands in the lower jaw. The researchers extracted one of these glands from the head of a terminally ill dragon in the Singapore Zoological Gardens, and found it secreted several different toxic proteins. The known functions of these proteins include inhibition of blood clotting, lowering of blood pressure, muscle paralysis, and the induction of hypothermia, leading to shock and loss of consciousness in envenomated prey. As a result of the discovery, the previous theory that bacteria were responsible for the deaths of Komodo victims was disputed. Other scientists have stated that this allegation of venom glands "has had the effect of underestimating the variety of complex roles played by oral secretions in the biology of reptiles, produced a very narrow view of oral secretions and resulted in misinterpretation of reptilian evolution". 
According to these scientists "reptilian oral secretions contribute to many biological roles other than to quickly dispatch prey". These researchers concluded that, "Calling all in this clade venomous implies an overall potential danger that does not exist, misleads in the assessment of medical risks, and confuses the biological assessment of squamate biochemical systems". Evolutionary biologist Schwenk says that even if the lizards have venom-like proteins in their mouths they may be using them for a different function, and he doubts venom is necessary to explain the effect of a Komodo dragon bite, arguing that shock and blood loss are the primary factors. Mating occurs between May and August, with the eggs laid in September. During this period, males fight over females and territory by grappling with one another upon their hind legs, with the loser eventually being pinned to the ground. These males may vomit or defecate when preparing for the fight. The winner of the fight will then flick his long tongue at the female to gain information about her receptivity. Females are antagonistic and resist with their claws and teeth during the early phases of courtship. Therefore, the male must fully restrain the female during coitus to avoid being hurt. Other courtship displays include males rubbing their chins on the female, hard scratches to the back, and licking. Copulation occurs when the male inserts one of his hemipenes into the female's cloaca. Komodo dragons may be monogamous and form "pair bonds", a rare behavior for lizards. Female Komodos lay their eggs from August to September and may use several types of locality; in one study, 60% laid their eggs in the nests of orange-footed scrubfowl (a moundbuilder or megapode), 20% on ground level and 20% in hilly areas. The females make many camouflage nests/holes to prevent other dragons from eating the eggs. Clutches contain an average of 20 eggs, which have an incubation period of 7–8 months. 
Hatching is an exhausting effort for the neonates, which break out of their eggshells with an egg tooth that falls off before long. After cutting themselves out, the hatchlings may lie in their eggshells for hours before starting to dig out of the nest. They are born quite defenseless and are vulnerable to predation. Sixteen youngsters from a single nest were on average 46.5 cm long and weighed 105.1 grams. Young Komodo dragons spend much of their first few years in trees, where they are relatively safe from predators, including cannibalistic adults, as juvenile dragons make up 10% of their diets. The habit of cannibalism may be advantageous in sustaining the large size of adults, as medium-sized prey on the islands is rare. When the young approach a kill, they roll around in faecal matter and rest in the intestines of eviscerated animals to deter these hungry adults. Komodo dragons take approximately 8 to 9 years to mature, and may live for up to 30 years. A Komodo dragon at London Zoo named Sungai laid a clutch of eggs in late 2005 after being separated from male company for more than two years. Scientists initially assumed she had been able to store sperm from her earlier encounter with a male, an adaptation known as superfecundation. On 20 December 2006, it was reported that Flora, a captive Komodo dragon living in the Chester Zoo in England, was the second known Komodo dragon to have laid unfertilised eggs: she laid 11 eggs, and seven of them hatched, all of them male. Scientists at Liverpool University in England performed genetic tests on three eggs that collapsed after being moved to an incubator, and verified Flora had never been in physical contact with a male dragon. After Flora's eggs' condition had been discovered, testing showed Sungai's eggs were also produced without outside fertilization. On 31 January 2008, the Sedgwick County Zoo in Wichita, Kansas, became the first zoo in the Americas to document parthenogenesis in Komodo dragons. 
The zoo has two adult female Komodo dragons, one of which laid about 17 eggs on 19–20 May 2007. Only two eggs were incubated and hatched due to space issues; the first hatched on 31 January 2008, while the second hatched on 1 February. Both hatchlings were males. Komodo dragons have the ZW chromosomal sex-determination system, as opposed to the mammalian XY system. Male progeny prove that Flora's unfertilised eggs were haploid (n) and doubled their chromosomes later to become diploid (2n) (by being fertilised by a polar body, or by chromosome duplication without cell division), rather than her having laid diploid eggs through the failure of one of the meiotic reduction divisions in her ovaries. When a female Komodo dragon (with ZW sex chromosomes) reproduces in this manner, she provides her progeny with only one chromosome from each of her pairs of chromosomes, including only one of her two sex chromosomes. This single set of chromosomes is duplicated in the egg, which develops parthenogenetically. Eggs receiving a Z chromosome become ZZ (male); those receiving a W chromosome become WW and fail to develop, meaning that only males are produced by parthenogenesis in this species. It has been hypothesised that this reproductive adaptation allows a single female to enter an isolated ecological niche (such as an island) and by parthenogenesis produce male offspring, thereby establishing a sexually reproducing population (via reproduction with her offspring, which can result in both male and female young). Despite the advantages of such an adaptation, zoos are cautioned that parthenogenesis may be detrimental to genetic diversity. Attacks on humans are rare, but Komodo dragons have been responsible for several human fatalities, both in the wild and in captivity. According to data from Komodo National Park spanning a 38-year period between 1974 and 2012, there were 24 reported attacks on humans, five of them fatal. Most of the victims were local villagers living around the national park. 
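The chromosome bookkeeping described above (a Z egg doubling to ZZ and developing as male, a W egg doubling to the inviable WW) can be sketched as a toy rule check. The function name and the simplification to a single inherited sex chromosome are illustrative assumptions, not biological machinery from the text.

```python
# Toy model of parthenogenesis in a ZW species: each egg receives one of
# the mother's two sex chromosomes, which is then duplicated in the egg.
def parthenogenetic_offspring(egg_chromosome: str) -> str:
    """Duplicate the single inherited sex chromosome and classify the result."""
    genotype = egg_chromosome * 2          # Z -> ZZ, W -> WW
    if genotype == "ZZ":
        return "male"
    if genotype == "WW":
        return "fails to develop"
    raise ValueError("sex chromosome must be 'Z' or 'W'")

# A ZW mother can pass on either Z or W; only Z eggs yield viable offspring,
# which is why parthenogenesis produces only males in this species.
for chrom in "ZW":
    print(chrom, "->", parthenogenetic_offspring(chrom))
```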
Reports of attacks include: The Komodo dragon is classified by the IUCN as a vulnerable species and is listed on the IUCN Red List. The species' sensitivity to natural and man-made threats has long been recognized by conservationists, zoological societies, and the Indonesian government. Komodo National Park was founded in 1980 to protect Komodo dragon populations on islands including Komodo, Rinca, and Padar. Later, the Wae Wuul and Wolo Tado Reserves were opened on Flores to aid Komodo dragon conservation. Komodo dragons generally avoid encounters with humans. Juveniles are very shy and will flee quickly into a hideout if a human comes closer than about . Older animals will also retreat from humans from a shorter distance away. If cornered, they may react aggressively by gaping their mouth, hissing, and swinging their tail. If they are disturbed further, they may attack and bite. Although there are anecdotes of unprovoked Komodo dragons attacking or preying on humans, most of these reports are either not reputable or have subsequently been interpreted as defensive bites. Only a very few cases are truly the result of unprovoked attacks by abnormal individuals which lost their fear of humans. Volcanic activity, earthquakes, loss of habitat, fire, tourism, loss of prey due to poaching, and illegal poaching of the dragons themselves have all contributed to the vulnerable status of the Komodo dragon. Under Appendix I of CITES (the Convention on International Trade in Endangered Species), commercial trade of Komodo dragon skins or specimens is illegal. Despite this, there are occasional reports of illegal attempts to trade in live Komodo dragons. The most recent attempt was in March 2019, when Indonesian police in the East Java city of Surabaya reported that a criminal network had been caught trying to smuggle 41 young Komodo dragons out of Indonesia. The plan was said to include shipping the animals to several other countries in Southeast Asia through Singapore. 
It was hoped that the animals could be sold for up to 500 million rupiah (around US$35,000) each. It was believed that the Komodo dragons had been smuggled out of East Nusa Tenggara province through the port at Ende in central Flores. In 2013, the total population of Komodo dragons in the wild was assessed as 3,222 individuals, declining to 3,092 in 2014 and 3,014 in 2015. Populations remained relatively stable on the bigger islands (Komodo and Rinca), but decreased on smaller islands such as Nusa Kode and Gili Motang, likely due to diminishing prey availability. On Padar, a former population of Komodo dragons became extinct, with the last individuals seen in 1975. It is widely assumed that the Komodo dragon died out on Padar following a major decline in populations of large ungulate prey, for which poaching was most likely responsible. Komodo dragons have long been sought-after zoo attractions, where their size and reputation make them popular exhibits. They are, however, rare in zoos because they are susceptible to infection and parasitic disease if captured from the wild, and do not readily reproduce in captivity. The first Komodo dragons were displayed at London Zoo in 1927. A Komodo dragon was exhibited in 1934 in the United States at the National Zoo in Washington, D.C., but it lived for only two years. More attempts to exhibit Komodo dragons were made, but the lifespan of the animals proved very short, averaging five years in the National Zoological Park. Studies done by Walter Auffenberg, which were documented in his book "The Behavioral Ecology of the Komodo Monitor", eventually allowed for more successful management and breeding of the dragons in captivity. As of May 2009, there were 35 North American, 13 European, one Singaporean, two African, and two Australian institutions housing captive Komodo dragons. A variety of behaviors have been observed in captive specimens. 
Most individuals become relatively tame within a short time, and are capable of recognising individual humans and discriminating between familiar and unfamiliar keepers. Komodo dragons have also been observed to engage in play with a variety of objects, including shovels, cans, plastic rings, and shoes. This behavior does not seem to be "food-motivated predatory behavior". Even seemingly docile dragons may become unpredictably aggressive, especially when the animal's territory is invaded by someone unfamiliar. In June 2001, a Komodo dragon seriously injured Phil Bronstein, the then-husband of actress Sharon Stone, when he entered its enclosure at the Los Angeles Zoo after being invited in by its keeper. Bronstein was bitten on his bare foot, as the keeper had told him to take off his white shoes and socks, which the keeper stated could potentially excite the Komodo dragon as they were the same colour as the white rats the zoo fed the dragon. Although he survived, Bronstein needed to have several tendons in his foot reattached surgically.
https://en.wikipedia.org/wiki?curid=17360
Kiln A kiln is a thermally insulated chamber, a type of oven, that produces temperatures sufficient to complete some process, such as hardening, drying, or chemical changes. Kilns have been used for millennia to turn objects made from clay into pottery, tiles and bricks. Various industries use rotary kilns for pyroprocessing—to calcinate ores, to calcinate limestone to lime for cement, and to transform many other materials. The word "kiln" was originally pronounced "kil", with the "n" silent, as is referenced in Webster's Dictionary of 1828. Phonetically, the "ln" in "kiln" is categorized as a digraph: a combination of two letters that make only one sound, such as the "mn" in "hymn". From "English Words as Spoken and Written for Upper Grades" by James A. Bowen (1915): "The digraph ln, n silent, occurs in kiln. A fall down the kiln can kill you." Bowen was pointing out the humorous fact that "kill" and "kiln" are homophones. Despite its origins, the modern pronunciation, in which the "n" is sounded, has become more widely accepted than the original. This is most likely due to spelling pronunciation, a phenomenon in which the pronunciation of a word is derived from its spelling and differs from the traditional pronunciation; it is common in words with silent letters. "Kiln" descends from the Old English "cylene", which was borrowed from the Old Welsh "Cylyn", itself borrowed from the Latin "culīna" ('kitchen, cooking-stove, burning-place'). Pit fired pottery was produced for thousands of years before the earliest known kiln, which dates to around 6000 BC and was found at the Yarim Tepe site in modern Iraq. Neolithic kilns were able to produce temperatures greater than 900 °C (1652 °F). Kilns are an essential part of the manufacture of all ceramics. Ceramics require high temperatures so that chemical and physical reactions will occur to permanently alter the unfired body. 
In the case of pottery, clay materials are shaped, dried and then fired in a kiln. The final characteristics are determined by the composition and preparation of the clay body and the temperature at which it is fired. After a first firing, glazes may be used and the ware is fired a second time to fuse the glaze into the body. A third firing at a lower temperature may be required to fix overglaze decoration. Modern kilns often have sophisticated electronic control systems, although pyrometric devices are often also used. Clay consists of fine-grained particles that are relatively weak and porous. Clay is combined with other minerals to create a workable clay body. The firing process includes sintering, which heats the clay until the particles partially melt and flow together, creating a strong, single mass composed of a glassy phase interspersed with pores and crystalline material. Through firing, the pores are reduced in size, causing the material to shrink slightly. This crystalline material predominantly consists of silicon and aluminium oxides. In the broadest terms, there are two types of kilns: intermittent and continuous, both being an insulated box with a controlled inner temperature and atmosphere. A continuous kiln, sometimes called a tunnel kiln, is long with only the central portion directly heated. From the cool entrance, ware is slowly moved through the kiln, and its temperature is increased steadily as it approaches the central, hottest part of the kiln. As it continues through the kiln, the temperature is reduced until the ware exits the kiln nearly at room temperature. A continuous kiln is energy-efficient, because heat given off during cooling is recycled to pre-heat the incoming ware. In some designs, the ware is left in one place, while the heating zone moves across it. Kilns of this type include: In the intermittent kiln, the ware is placed inside the kiln, the kiln is closed, and the internal temperature is increased according to a schedule. 
After the firing is completed, both the kiln and the ware are cooled. The ware is removed, the kiln is cleaned and the next cycle begins. Kiln technology is very old, developing from a simple earthen trench filled with pots and fuel (pit firing) to modern methods. One improvement was to build a firing chamber around the pots, with baffles and a stoking hole, which conserved heat. A chimney stack improved the air flow or "draw" of the kiln, thus burning the fuel more completely. Chinese kiln technology has always been a key factor in the development of Chinese pottery, and until recent centuries was the most advanced in the world. The Chinese developed kilns capable of firing at around 1,000 °C before 2000 BC. These were updraft kilns, often built below ground. Two main types of kiln were developed by about 200 AD and remained in use until modern times: the dragon kiln of hilly southern China, usually fuelled by wood, long and thin and running up a slope, and the horseshoe-shaped mantou kiln of the north Chinese plains, smaller and more compact. Both could reliably produce the temperatures of up to 1,300 °C or more needed for porcelain. In the late Ming, the egg-shaped kiln or "zhenyao" was developed at Jingdezhen and mainly used there. It was something of a compromise between the other types, and offered locations in the firing chamber with a range of firing conditions. Both Ancient Roman pottery and medieval Chinese pottery could be fired in industrial quantities, with tens of thousands of pieces in a single firing. Early examples of simpler kilns found in Britain include those that made roof-tiles during the Roman occupation. These kilns were built up the side of a slope, so that a fire could be lit at the bottom and the heat would rise up into the kiln. With the industrial age, kilns were designed to use electricity and more refined fuels, including natural gas and propane. 
Many large industrial pottery kilns use natural gas, as it is generally clean, efficient and easy to control. Modern kilns can be fitted with computerized controls allowing for fine adjustments during the firing. A user may choose to control the rate of temperature climb or "ramp", "hold" or "soak" the temperature at any given point, or control the rate of cooling. Both electric and gas kilns are common for smaller-scale production in industry and craft, handmade and sculptural work. The temperature of some kilns is controlled by pyrometric cones—devices that begin to melt at specific temperatures. Green wood coming straight from the felled tree has far too high a moisture content to be commercially useful and will rot, warp and split. Both hardwoods and softwoods must be left to dry out until the moisture content is between 18% and 8%. This can be a long process unless it is sped up by the use of a kiln. A variety of kiln technologies exist today: conventional, dehumidification, solar, vacuum and radio frequency. The economics of different wood-drying technologies are based on the total energy, capital, insurance/risk, environmental-impact, labor, maintenance, and product-degradation costs. These costs, which can be a significant part of plant costs, involve the differential impact of the presence of drying equipment in a specific plant. Every piece of equipment, from the green trimmer to the infeed system at the planer mill, is part of the "drying system". The true costs of the drying system can only be determined by comparing the total plant costs and risks with and without drying. Kiln-dried firewood was pioneered during the 1980s, and was later adopted extensively in Europe due to the economic and practical benefits of selling wood with a lower moisture content. The total (harmful) air emissions produced by wood kilns, including their heat source, can be significant. 
Typically, the higher the temperature at which the kiln operates, the larger the quantity of emissions that are produced (per pound of water removed). This is especially true in the drying of thin veneers and high-temperature drying of softwoods.
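The ramp/hold firing schedules that computerized kiln controllers execute, as described above, reduce to simple arithmetic. The sketch below is purely illustrative: the function name and the example schedule values are invented for this illustration, not any real controller's interface.

```python
# Illustrative sketch of a ramp/hold firing schedule; names and values
# here are invented, not taken from a real kiln controller.
def firing_hours(segments, start_temp=20.0):
    """segments: list of (ramp_rate_deg_C_per_hr, target_deg_C, hold_hr)."""
    total, temp = 0.0, start_temp
    for rate, target, hold in segments:
        total += abs(target - temp) / rate  # time spent ramping to the target
        total += hold                       # time spent holding ("soaking")
        temp = target
    return total

# A hypothetical two-stage schedule: ramp at 100 °C/h to 600 °C, soak 30 min,
# then ramp at 150 °C/h to 1000 °C and soak 15 min.
print(firing_hours([(100, 600, 0.5), (150, 1000, 0.25)]))  # ≈ 9.22 hours
```

A controlled cooling phase could be modeled the same way, since the ramp term uses the absolute temperature difference.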
https://en.wikipedia.org/wiki?curid=17361
Hymn A hymn is a type of song, usually religious, specifically written for the purpose of adoration or prayer, and typically addressed to a deity or deities, or to a prominent figure or personification. The word "hymn" derives from the Greek "hymnos", which means "a song of praise". A writer of hymns is known as a hymnist. The singing or composition of hymns is called hymnody. Collections of hymns are known as hymnals or hymn books. Hymns may or may not include instrumental accompaniment. Although most familiar to speakers of English in the context of Christianity, hymns are also a fixture of other world religions, especially on the Indian subcontinent. Hymns also survive from antiquity, especially from Egyptian and Greek cultures. Some of the oldest surviving examples of notated music are hymns with Greek texts. Ancient hymns include the Egyptian "Great Hymn to the Aten", composed by Pharaoh Akhenaten; the Hurrian Hymn to Nikkal; the "Vedas", a collection of hymns in the tradition of Hinduism; and the Psalms, a collection of songs from Judaism. The Western tradition of hymnody begins with the Homeric Hymns, a collection of ancient Greek hymns, the oldest of which were written in the 7th century BC, praising deities of the ancient Greek religions. Surviving from the 3rd century BC is a collection of six literary hymns by the Alexandrian poet Callimachus. Patristic writers began applying the term "hymnos", or "hymnus" in Latin, to Christian songs of praise, and frequently used the word as a synonym for "psalm". Originally modeled on the Book of Psalms and other poetic passages (commonly referred to as "canticles") in the Scriptures, Christian hymns are generally directed as praise to the Christian God. Many refer to Jesus Christ either directly or indirectly. Since the earliest times, Christians have sung "psalms and hymns and spiritual songs", both in private devotions and in corporate worship. Non-scriptural hymns (i.e. 
not psalms or canticles) from the Early Church still sung today include 'Phos Hilaron', 'Sub tuum praesidium', and 'Te Deum'. One definition of a hymn is "...a lyric poem, reverently and devotionally conceived, which is designed to be sung and which expresses the worshipper's attitude toward God or God's purposes in human life. It should be simple and metrical in form, genuinely emotional, poetic and literary in style, spiritual in quality, and in its ideas so direct and so immediately apparent as to unify a congregation while singing it." Christian hymns are often written with special or seasonal themes and these are used on holy days such as Christmas, Easter and the Feast of All Saints, or during particular seasons such as Advent and Lent. Others are used to encourage reverence for the Bible or to celebrate Christian practices such as the eucharist or baptism. Some hymns praise or address individual saints, particularly the Blessed Virgin Mary; such hymns are particularly prevalent in Catholicism, Eastern Orthodoxy and to some extent High Church Anglicanism. A writer of hymns is known as a hymnodist, and the practice of singing hymns is called "hymnody"; the same word is used for the collectivity of hymns belonging to a particular denomination or period (e.g. "nineteenth century Methodist hymnody" would mean the body of hymns written and/or used by Methodists in the 19th century). A collection of hymns is called a "hymnal" or "hymnary". These may or may not include music; among the hymnals without printed music, some include names of hymn tunes suggested for use with each text, in case readers already know the tunes or would like to find them elsewhere. A student of hymnody is called a "hymnologist", and the scholarly study of hymns, hymnists and hymnody is hymnology. The music to which a hymn may be sung is a hymn tune. In many Evangelical churches, traditional songs are classified as hymns while more contemporary worship songs are not considered hymns. 
The reason for this distinction is unclear, but according to some it is due to the radical shift of style and devotional thinking that began with the Jesus movement and Jesus music. In recent years, traditional Christian hymns have seen a revival in some churches, usually more Reformed or Calvinistic in nature, as modern hymn writers such as Keith and Kristyn Getty and Sovereign Grace Music have reset old lyrics to new melodies, revised old hymns and republished them, or simply written songs in a hymn-like fashion, such as "In Christ Alone". In ancient and medieval times, string instruments such as the harp, lyre and lute were used with psalms and hymns. Since there is a lack of musical notation in early writings, the actual musical forms of the early church can only be surmised. During the Middle Ages a rich hymnody developed in the form of Gregorian chant or plainsong. This type was sung in unison, in one of eight church modes, and most often by monastic choirs. While they were written originally in Latin, many have been translated; a familiar example is the 4th-century "Of the Father's Heart Begotten", sung to the 11th-century plainsong "Divinum Mysterium". Later hymnody in the Western church introduced four-part vocal harmony as the norm, adopting major and minor keys, and came to be led by organ and choir. It shares many elements with classical music. Today, except for choirs, more musically inclined congregations and "a cappella" congregations, hymns are typically sung in unison. In some cases complementary full settings for organ are also published; in others, organists and other accompanists are expected to transcribe the four-part vocal score for their instrument of choice. 
To illustrate Protestant usage, in the traditional services and liturgies of the Methodist churches, which are based upon Anglican practice, hymns are sung (often accompanied by an organ) during the processional to the altar, during the receiving of communion, during the recessional, and sometimes at other points during the service. These hymns can be found in a common book such as the United Methodist Hymnal. The Doxology is also sung after the tithes and offerings are brought up to the altar. Contemporary Christian worship, as often found in Evangelicalism and Pentecostalism, may include the use of contemporary worship music played with electric guitars and the drum kit, sharing many elements with rock music. Other groups of Christians have historically excluded instrumental accompaniment, citing the absence of instruments in worship by the church in the first several centuries of its existence, and adhere to unaccompanied "a cappella" congregational singing of hymns. These groups include the 'Brethren' (often both 'Open' and 'Exclusive'), the Churches of Christ, Mennonites, several Anabaptist-based denominations—such as the Apostolic Christian Church of America—Primitive Baptists, and certain Reformed churches, although during the last century or so several of these, such as the Free Church of Scotland, have abandoned this stance. Eastern Christianity (the Eastern Orthodox, Oriental Orthodox and Eastern Catholic churches) has a variety of ancient hymnographical traditions. Byzantine chant is almost always a cappella, and instrumental accompaniment is rare. It is used to chant all forms of liturgical worship. Instruments are more common in the Oriental traditions: the Coptic tradition makes use of cymbals and the triangle; the Indian Orthodox (Malankara Orthodox Syrian Church) use the organ; and the Tewahedo Churches use drums, cymbals and other instruments on certain occasions. 
Thomas Aquinas, in the introduction to his commentary on the Psalms, defined the Christian hymn thus: "Hymnus est laus Dei cum cantico; canticum autem exultatio mentis de aeternis habita, prorumpens in vocem." ("A hymn is the praise of God with song; a song is the exultation of the mind dwelling on eternal things, bursting forth in the voice.") The Protestant Reformation resulted in two conflicting attitudes towards hymns. One approach, the regulative principle of worship, favoured by many Zwinglians, Calvinists and some radical reformers, considered anything that was not directly authorised by the Bible to be a novel and Catholic introduction to worship, which was to be rejected. All hymns that were not direct quotations from the Bible fell into this category. Such hymns were banned, along with any form of instrumental musical accompaniment, and organs were removed from churches. Instead of hymns, biblical psalms were chanted, most often without accompaniment, to very basic melodies. This was known as exclusive psalmody. Examples of this may still be found in various places, including some of the Presbyterian churches of western Scotland. The other Reformation approach, the normative principle of worship, produced a burst of hymn writing and congregational singing. Martin Luther is notable not only as a reformer, but as the author of many hymns, including "Ein feste Burg ist unser Gott" ("A Mighty Fortress Is Our God"), which is sung today even by Catholics, and "Gelobet seist du, Jesu Christ" ("Praise be to You, Jesus Christ") for Christmas. Luther and his followers often used their hymns, or chorales, to teach tenets of the faith to worshipers. The first Protestant hymnal was published in Bohemia in 1532 by the Unitas Fratrum. Count Zinzendorf, the Lutheran leader of the Moravian Church in the 18th century, wrote some 2,000 hymns. 
The earlier English writers tended to paraphrase biblical texts, particularly Psalms; Isaac Watts followed this tradition, but is also credited as having written the first English hymn that was not a direct paraphrase of Scripture. Watts (1674–1748), whose father was an elder of a dissenter congregation, complained at age 16 that, when allowed only psalms to sing, the faithful could not even sing about their Lord, Christ Jesus. His father invited him to see what he could do about it; the result was Watts' first hymn, "Behold the glories of the Lamb". Found in few hymnals today, the hymn has eight stanzas in common meter and is based on Revelation 5:6, 8, 9, 10, 12. Relying heavily on Scripture, Watts wrote metered texts based on New Testament passages that brought the Christian faith into the songs of the church. Isaac Watts has been called "the father of English hymnody", but Erik Routley sees him more as "the liberator of English hymnody", because his hymns, and hymns like them, moved worshipers beyond singing only Old Testament psalms, inspiring congregations and revitalizing worship. Later writers took even more freedom, some even including allegory and metaphor in their texts. Charles Wesley's hymns spread Methodist theology, not only within Methodism, but in most Protestant churches. He developed a new focus: expressing one's personal feelings in the relationship with God as well as the simple worship seen in older hymns. Wesley's contribution, along with the Second Great Awakening in America, led to a new style called gospel, and a new explosion of sacred music writing with Fanny Crosby, Lina Sandell, Philip Bliss, Ira D. Sankey, and others who produced testimonial music for revivals, camp meetings, and evangelistic crusades. The tune style or form is technically designated "gospel songs" as distinct from hymns. Gospel songs generally include a refrain (or chorus) and usually (though not always) a faster tempo than the hymns. 
As examples of the distinction, "Amazing Grace" is a hymn (no refrain), but "How Great Thou Art" is a gospel song. During the 19th century, the gospel-song genre spread rapidly in Protestantism and, to a lesser but still definite extent, in Roman Catholicism; the gospel-song genre is unknown in the worship "per se" of Eastern Orthodox churches, which rely exclusively on traditional chants (a type of hymn). The Methodist Revival of the 18th century created an explosion of hymn-writing in Welsh, which continued into the first half of the 19th century. The most prominent names among Welsh hymn-writers are William Williams Pantycelyn and Ann Griffiths. The second half of the 19th century witnessed an explosion of hymn tune composition and congregational four-part singing in Wales. Along with the more classical sacred music of composers ranging from Mozart to Monteverdi, the Catholic Church continued to produce many popular hymns such as "Lead, Kindly Light", "Silent Night", "O Sacrament Divine" and "Faith of Our Fathers". Many churches today use contemporary worship music, which includes a range of styles often influenced by popular music. This often leads to some conflict between older and younger congregants (see contemporary worship). This is not new; the Christian pop music style began in the late 1960s and became very popular during the 1970s, as young hymnists sought ways in which to make the music of their religion relevant for their generation. This long tradition has resulted in a wide variety of hymns. Some modern churches include within hymnody the traditional hymn (usually describing God), contemporary worship music (often directed to God) and gospel music (expressions of one's personal experience of God). This distinction is not perfectly clear, and purists remove the latter two types from the classification as hymns. It is a matter of debate, even sometimes within a single congregation, often between revivalist and traditionalist movements. 
In modern times, hymn use has not been limited to strictly religious settings; it extends to secular occasions such as Remembrance Day, and this "secularization" also includes use as a source of musical entertainment or even a vehicle for mass emotion. African Americans developed a rich hymnody, from spirituals during times of slavery to the modern, lively black gospel style. The first influences of African-American culture on hymns came from "Slave Songs of the United States", a collection of slave hymns compiled by William Francis Allen, who had difficulty pinning them down from the oral tradition; though he succeeded, he points out the awe-inspiring effect of the hymns when sung by their originators. Hymn writing, composition, performance and the publishing of Christian hymnals were prolific in the 19th century and were often linked to the abolitionist movement by many hymn writers. Surprisingly, Stephen Foster wrote a number of hymns that were used during church services during this era of publishing. Thomas Symmes spread throughout churches a new idea of how to sing hymns, in which anyone could sing a hymn any way they felt led to; this idea was opposed by Symmes' colleagues, who felt it was "like Five Hundred different Tunes roared out at the same time". William Billings, a singing school teacher, created the first tune book containing only American-born compositions. Within his books, Billings did not put as much emphasis on "common measure", which was the typical way hymns were sung, but attempted "to have a Sufficiency in each measure". Boston's Handel and Haydn Society aimed at raising the level of church music in America, publishing their "Collection of Church Music". In the late 19th century Ira D. Sankey and Dwight L. Moody developed the relatively new subcategory of gospel hymns. 
Earlier in the 19th century, the use of musical notation, especially shape notes, exploded in America, and professional singing masters went from town to town teaching the population how to sing from sight, instead of the more common lining out that had been used before that. During this period hundreds of tune books were published, including B.F. White's "Sacred Harp" and earlier works like the "Missouri Harmony", "Kentucky Harmony", "Hesperian Harp", D.H. Mansfield's "The American Vocalist", "The Social Harp", the "Southern Harmony", William Walker's "Christian Harmony", Jeremiah Ingalls' "Christian Harmony", and literally dozens of others. Shape notes were important in the spread of (then) more modern singing styles, with tenor-led four-part harmony (based on older English West Gallery music), fuging sections, anthems and other more complex features. During this period, hymns were incredibly popular in the United States, and one or more of the above-mentioned tunebooks could be found in almost every household. It was not uncommon for young people and teenagers to gather to spend an afternoon singing hymns and anthems from tune books, which was considered great fun, and there are surviving accounts of Abraham Lincoln and his sweetheart singing together from the "Missouri Harmony" during his youth. By the 1860s, musical reformers like Lowell Mason (the so-called "better music boys") were actively campaigning for the introduction of more "refined" and modern singing styles, and eventually these American tune books were replaced in many churches, starting in the Northeast and urban areas and spreading out into the countryside, as people adopted the gentler, more soothing tones of Victorian hymnody and even adopted dedicated, trained choirs to do their church's singing, rather than having the entire congregation participate. 
But in many rural areas the old traditions lived on, not in churches, but in weekly, monthly or annual conventions where people would meet to sing from their favorite tunebooks. The most popular one, and the only one that survived continuously in print, was the "Sacred Harp", which could be found in the typical rural Southern home right up until the living tradition was "re-discovered" by Alan Lomax in the 1960s (although it had been well documented by musicologist George Pullen Jackson prior to this). Since then there has been a renaissance in "Sacred Harp singing", with annual conventions popping up in all 50 states and, recently, in a number of European countries, including the UK, Germany, Ireland and Poland, as well as in Australia. Today "Sacred Harp singing" is a vibrant and living tradition with thousands of enthusiastic participants all around the globe, drawn to the democratic principles of the tradition and the exotic, beautiful sound of the music. Although the lyrics tend to be highly religious in nature, the tradition is largely secular, and participation is open to all who care to attend. The meter indicates the number of syllables for the lines in each stanza of a hymn. This provides a means of marrying the hymn's text with an appropriate hymn tune for singing. In practice many hymns conform to one of a relatively small number of meters (syllable count and stress patterns). Care must be taken, however, to ensure that not only the metre of words and tune match, but also the stresses on the words in each line. Technically speaking, an iambic tune, for instance, cannot be used with words of, say, trochaic metre. The meter is often denoted by a row of figures beside the name of the tune, such as "87.87.87", which would inform the reader that each verse has six lines, and that the first line has eight syllables, the second has seven, the third line eight, etc. The meter can also be described by initials; L.M. 
indicates long meter, which is 88.88 (four lines, each eight syllables long); S.M. is short meter (66.86); C.M. is common metre (86.86), while D.L.M., D.S.M. and D.C.M. (the "D" stands for double) are similar to their respective single meters except that they have eight lines in a verse instead of four. Also, if the number of syllables in one verse differs from that in another verse of the same hymn (e.g., the hymn "I Sing a Song of the Saints of God"), the meter is called Irregular. The Sikh holy book, the Guru Granth Sahib Ji, is a collection of hymns (Shabad) or "Gurbani" describing the qualities of God and why one should meditate on God's name. The "Guru Granth Sahib" is divided by musical setting into different ragas, across fourteen hundred and thirty pages known as "Angs" (limbs) in Sikh tradition. Guru Gobind Singh (1666–1708), the tenth guru, after adding Guru Tegh Bahadur's bani to the Adi Granth, affirmed the sacred text as his successor, elevating it to "Guru Granth Sahib". The text remains the holy scripture of the Sikhs, regarded as the teachings of the Ten Gurus. The role of the Guru Granth Sahib, as a source or guide of prayer, is pivotal in Sikh worship.
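The meter notation described above lends itself to mechanical expansion into per-line syllable counts. A minimal sketch follows; the dictionary and function names are my own, and it assumes single-digit counts per line, so meters such as "10 10.10 10" are not handled.

```python
# Hypothetical helper for expanding hymn-meter notation; names are
# invented for illustration, and only single-digit counts are supported.
NAMED_METERS = {
    "L.M.": "88.88",  # long meter
    "C.M.": "86.86",  # common meter
    "S.M.": "66.86",  # short meter
}

def syllables_per_line(meter):
    meter = NAMED_METERS.get(meter, meter)
    if meter.startswith("D."):  # "double": repeat the single meter's lines
        return 2 * syllables_per_line(meter[2:])
    return [int(ch) for group in meter.split(".") for ch in group]

print(syllables_per_line("87.87.87"))  # [8, 7, 8, 7, 8, 7] -> six lines
print(syllables_per_line("C.M."))      # [8, 6, 8, 6]
print(syllables_per_line("D.C.M."))    # [8, 6, 8, 6, 8, 6, 8, 6]
```

Comparing such a list against the syllable counts of a text's lines is how a hymnal editor pairs a text with a tune of matching meter, stress pattern permitting.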
https://en.wikipedia.org/wiki?curid=13756
History of physics Physics is a branch of science whose primary objects of study are matter and energy. Discoveries of physics find applications throughout the natural sciences and in technology, since matter and energy are the basic constituents of the natural world. Some other domains of study—more limited in their scope—may be considered branches that have split off from physics to become sciences in their own right. Physics today may be divided loosely into classical physics and modern physics. Elements of what became physics were drawn primarily from the fields of astronomy, optics, and mechanics, which were methodologically united through the study of geometry. These mathematical disciplines began in antiquity with the Babylonians and with Hellenistic writers such as Archimedes and Ptolemy. Ancient philosophy, meanwhile – including what was called "physics" – focused on explaining nature through ideas such as Aristotle's four types of "cause". The move towards a rational understanding of nature began at least as early as the Archaic period in Greece (650–480 BCE) with the Pre-Socratic philosophers. The philosopher Thales of Miletus (7th and 6th centuries BCE), dubbed "the Father of Science" for refusing to accept various supernatural, religious or mythological explanations for natural phenomena, proclaimed that every event had a natural cause. Around 580 BCE, Thales also suggested that water is the basic element, experimented with the attraction between magnets and rubbed amber, and formulated the first recorded cosmologies. Anaximander, famous for his proto-evolutionary theory, disputed Thales' ideas and proposed that rather than water, a substance called "apeiron" was the building block of all matter. Around 500 BCE, Heraclitus proposed that the only basic law governing the Universe was the principle of change, and that nothing remains in the same state indefinitely. 
This observation made him one of the first scholars in ancient physics to address the role of time in the universe, a key and sometimes contentious concept in modern and present-day physics. The early physicist Leucippus (fl. first half of the 5th century BCE) adamantly opposed the idea of direct divine intervention in the universe, proposing instead that natural phenomena had a natural cause. Leucippus and his student Democritus were the first to develop the theory of atomism, the idea that everything is composed entirely of various imperishable, indivisible elements called atoms. During the classical period in Greece (6th, 5th and 4th centuries BCE) and in Hellenistic times, natural philosophy slowly developed into an exciting and contentious field of study. Aristotle ("Aristotélēs", 384–322 BCE), a student of Plato, promoted the concept that observation of physical phenomena could ultimately lead to the discovery of the natural laws governing them. Aristotle's writings cover physics, metaphysics, poetry, theater, music, logic, rhetoric, linguistics, politics, government, ethics, biology and zoology. He wrote the first work that refers to that line of study as "Physics" – in the 4th century BCE, Aristotle founded the system known as Aristotelian physics. He attempted to explain ideas such as motion (and gravity) with the theory of four elements. Aristotle believed that all matter was made up of aether, or some combination of the four elements earth, water, air, and fire. According to Aristotle, these four terrestrial elements are capable of inter-transformation and move toward their natural place, so a stone falls downward toward the center of the cosmos, but flames rise upward toward the circumference. Eventually, Aristotelian physics became enormously popular for many centuries in Europe, informing the scientific and scholastic developments of the Middle Ages. 
It remained the mainstream scientific paradigm in Europe until the time of Galileo Galilei and Isaac Newton. Early in Classical Greece, knowledge that the Earth is spherical was common. Around 240 BCE, as the result of a seminal experiment, Eratosthenes (276–194 BCE) accurately estimated its circumference. In contrast to Aristotle's geocentric views, Aristarchus of Samos (c. 310 – c. 230 BCE) presented an explicit argument for a heliocentric model of the Solar System, i.e. for placing the Sun, not the Earth, at its centre. Seleucus of Seleucia, a follower of Aristarchus' heliocentric theory, stated that the Earth rotated around its own axis, which, in turn, revolved around the Sun. Though the arguments he used were lost, Plutarch stated that Seleucus was the first to prove the heliocentric system through reasoning. In the 3rd century BCE, the Greek mathematician Archimedes of Syracuse (287–212 BCE) – generally considered to be the greatest mathematician of antiquity and one of the greatest of all time – laid the foundations of hydrostatics and statics, and calculated the underlying mathematics of the lever. A leading scientist of classical antiquity, Archimedes also developed elaborate systems of pulleys to move large objects with a minimum of effort. The Archimedes' screw underpins modern hydroengineering, and his machines of war helped to hold back the armies of Rome in the Second Punic War. Archimedes even tore apart the arguments of Aristotle and his metaphysics, arguing that it was impossible to separate mathematics and nature, and demonstrated this by converting mathematical theories into practical inventions. Furthermore, in his work "On Floating Bodies", around 250 BCE, Archimedes developed the law of buoyancy, also known as Archimedes' principle. In mathematics, Archimedes used the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, and gave a remarkably accurate approximation of pi. 
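Archimedes' approximation of pi proceeded by repeatedly doubling the sides of polygons inscribed in (and circumscribed about) a circle. The sketch below illustrates the inscribed-polygon half of that idea in modern floating point; it is an illustration of the geometric principle, not his actual hand computation, which bounded pi between rational values.

```python
import math

def archimedes_pi(doublings):
    # Start from a regular hexagon inscribed in a circle of radius 1:
    # each side equals the radius, so s = 1 and the perimeter is 6.
    n, s = 6, 1.0
    for _ in range(doublings):
        # Side length after doubling the number of sides:
        # s_2n = sqrt(2 - sqrt(4 - s_n^2))
        s = math.sqrt(2 - math.sqrt(4 - s * s))
        n *= 2
    return n * s / 2  # perimeter / diameter, a lower bound for pi

print(archimedes_pi(0))  # 3.0 (the bare hexagon)
print(archimedes_pi(4))  # the 96-gon Archimedes worked out: ≈ 3.14103
```

With 96 sides the inscribed bound already agrees with Archimedes' famous result that pi lies between 223/71 and 22/7.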
He also defined the spiral bearing his name, derived formulae for the volumes of surfaces of revolution, and devised an ingenious system for expressing very large numbers. He also developed the principles of equilibrium states and centers of gravity, ideas that would influence later scholars such as Galileo and Newton. Hipparchus (190–120 BCE), focusing on astronomy and mathematics, used sophisticated geometrical techniques to map the motion of the stars and planets, even predicting the times at which solar eclipses would happen. He also calculated the distances of the Sun and Moon from the Earth, based upon his improvements to the observational instruments used at that time. Another of the most famous of the early physicists was Ptolemy (90–168 CE), one of the leading minds during the time of the Roman Empire. Ptolemy was the author of several scientific treatises, at least three of which were of continuing importance to later Islamic and European science. The first is the astronomical treatise now known as the "Almagest" (in Greek, Ἡ Μεγάλη Σύνταξις, "The Great Treatise", originally Μαθηματικὴ Σύνταξις, "Mathematical Treatise"). The second is the "Geography", a thorough discussion of the geographic knowledge of the Greco-Roman world. Much of the accumulated knowledge of the ancient world was lost. Even of the works of the better-known thinkers, few fragments survived. Although he wrote at least fourteen books, almost nothing of Hipparchus' direct work survived. Of the 150 reputed Aristotelian works, only 30 exist, and some of those are "little more than lecture notes". Important physical and mathematical traditions also existed in ancient Chinese and Indian sciences. In Indian philosophy, Maharishi Kanada was the first to systematically develop a theory of atomism around 200 BCE, though some authors have allotted him an earlier era in the 6th century BCE. 
It was further elaborated by the Buddhist atomists Dharmakirti and Dignāga during the 1st millennium CE. Pakudha Kaccayana, a 6th-century BCE Indian philosopher and contemporary of Gautama Buddha, had also propounded ideas about the atomic constitution of the material world. These philosophers believed that all elements other than ether were physically palpable and hence composed of minuscule particles of matter. The last minuscule particle of matter that could not be subdivided further was termed Parmanu. These philosophers considered the atom to be indestructible and hence eternal. The Buddhists thought atoms to be minute objects invisible to the naked eye that come into being and vanish in an instant. The Vaisheshika school of philosophers believed that an atom was a mere point in space. It was also the first to describe relations between motion and applied force. Indian theories about the atom are highly abstract and enmeshed in philosophy, as they were based on logic rather than on personal experience or experimentation. In Indian astronomy, Aryabhata's "Aryabhatiya" (499 CE) proposed the Earth's rotation, while Nilakantha Somayaji (1444–1544) of the Kerala school of astronomy and mathematics proposed a semi-heliocentric model resembling the Tychonic system. The study of magnetism in Ancient China dates back to the 4th century BCE (in the "Book of the Devil Valley Master"). A main contributor to this field was Shen Kuo (1031–1095), a polymath and statesman who was the first to describe the magnetic-needle compass used for navigation, as well as establishing the concept of true north. In optics, Shen Kuo independently developed a camera obscura. In the 7th to 15th centuries, scientific progress occurred in the Muslim world. Many classic works of Indian, Assyrian, Sassanian (Persian) and Greek science, including the works of Aristotle, were translated into Arabic. 
Important contributions were made by Ibn al-Haytham (965–1040), an Arab scientist, considered to be a founder of modern optics. Ptolemy and Aristotle theorised that light either shone from the eye to illuminate objects or that "forms" emanated from objects themselves, whereas al-Haytham (known by the Latin name "Alhazen") suggested that light travels to the eye in rays from different points on an object. The works of Ibn al-Haytham and Abū Rayhān Bīrūnī (973–1050), a Persian scientist, eventually passed on to Western Europe where they were studied by scholars such as Roger Bacon and Witelo. Ibn al-Haytham and Biruni were early proponents of the scientific method. Ibn al-Haytham is considered to be the "father of the modern scientific method" due to his emphasis on experimental data and reproducibility of results. The earliest methodical approach to experiments in the modern sense is visible in the works of Ibn al-Haytham, who introduced an inductive-experimental method for achieving results. Bīrūnī introduced early scientific methods for several different fields of inquiry during the 1020s and 1030s, including an early experimental method for mechanics. Biruni's methodology resembled the modern scientific method, particularly in his emphasis on repeated experimentation. Ibn Sīnā (980–1037), known as "Avicenna", was a polymath from Bukhara (in present-day Uzbekistan) responsible for important contributions to physics, optics, philosophy and medicine. He published his theory of motion in "Book of Healing" (1020), where he argued that an impetus is imparted to a projectile by the thrower. Rejecting the earlier view that this impetus was a temporary virtue that would decline even in a vacuum, he viewed it as persistent, requiring external forces such as air resistance to dissipate it. Ibn Sina made a distinction between 'force' and 'inclination' (called "mayl"), and argued that an object gained mayl when the object is in opposition to its natural motion. 
He concluded that continuation of motion is attributed to the inclination that is transferred to the object, and that the object will remain in motion until the mayl is spent. He also claimed that a projectile in a vacuum would not stop unless it is acted upon. This conception of motion is consistent with Newton's first law of motion, inertia, which states that an object in motion will stay in motion unless it is acted on by an external force. This idea, which dissented from the Aristotelian view, was later described as "impetus" by John Buridan, who was influenced by Ibn Sina's "Book of Healing". Omar Khayyám (1048–1131), a Persian scientist, calculated the length of a solar year and was out by only a fraction of a second compared with modern calculations. He used this to compose a calendar considered more accurate than the Gregorian calendar that came along 500 years later. He is classified as one of the world's first great science communicators, said, for example, to have convinced a Sufi theologian that the world turns on an axis. Hibat Allah Abu'l-Barakat al-Baghdaadi (c. 1080–1165) adopted and modified Ibn Sina's theory on projectile motion. In his "Kitab al-Mu'tabar", Abu'l-Barakat stated that the mover imparts a violent inclination ("mayl qasri") on the moved and that this diminishes as the moving object distances itself from the mover. He also proposed an explanation of the acceleration of falling bodies by the accumulation of successive increments of power with successive increments of velocity. According to Shlomo Pines, al-Baghdaadi's theory of motion was "the oldest negation of Aristotle's fundamental dynamic law [namely, that a constant force produces a uniform motion], [and is thus an] anticipation in a vague fashion of the fundamental law of classical mechanics [namely, that a force applied continuously produces acceleration]." 
John Buridan and Albert of Saxony later referred to Abu'l-Barakat in explaining that the acceleration of a falling body is a result of its increasing impetus. Ibn Bajjah (c. 1085–1138), known as "Avempace" in Europe, proposed that for every force there is always a reaction force. While he did not specify that these forces be equal, his proposal was a precursor to Newton's third law of motion, which states that for every action there is an equal and opposite reaction. Ibn Bajjah was a critic of Ptolemy, and he worked on creating a new theory of velocity to replace the one theorized by Aristotle. Two later philosophers supported the theories Avempace created, known as Avempacean dynamics: Thomas Aquinas, a Catholic priest, and John Duns Scotus. Galileo went on to adopt Avempace's formula "that the velocity of a given object is the difference of the motive power of that object and the resistance of the medium of motion". Nasir al-Din al-Tusi (1201–1274), a Persian astronomer and mathematician who died in Baghdad, authored the "Treasury of Astronomy", a remarkably accurate table of planetary movements that reformed the existing planetary model of the Roman astronomer Ptolemy by describing a uniform circular motion of all planets in their orbits. This work led to the later discovery, by one of his students, that planets actually have an elliptical orbit. Copernicus later drew heavily on the work of al-Tusi and his students, but without acknowledgment. The gradual chipping away of the Ptolemaic system paved the way for the revolutionary idea that the Earth actually orbited the Sun (heliocentrism). Awareness of ancient works re-entered the West through translations from Arabic to Latin. Their re-introduction, combined with Judeo-Islamic theological commentaries, had a great influence on medieval philosophers such as Thomas Aquinas. 
Scholastic European scholars, who sought to reconcile the philosophy of the ancient classical philosophers with Christian theology, proclaimed Aristotle the greatest thinker of the ancient world. In cases where they did not directly contradict the Bible, Aristotelian physics became the foundation for the physical explanations of the European churches. Quantification became a core element of medieval physics. Based on Aristotelian physics, Scholastic physics described things as moving according to their essential nature. Celestial objects were described as moving in circles, because perfect circular motion was considered an innate property of objects that existed in the uncorrupted realm of the celestial spheres. The theory of impetus, the ancestor to the concepts of inertia and momentum, was developed along similar lines by philosophers such as John Philoponus and John Buridan. Motions below the lunar sphere were seen as imperfect, and thus could not be expected to exhibit consistent motion. More idealized motion in the "sublunary" realm could only be achieved through artifice, and prior to the 17th century, many did not view artificial experiments as a valid means of learning about the natural world. Physical explanations in the sublunary realm revolved around tendencies. Stones contained the element earth, and earthly objects tended to move in a straight line toward the centre of the earth (and the universe in the Aristotelian geocentric view) unless otherwise prevented from doing so. During the 16th and 17th centuries, a great advance in scientific knowledge, known as the Scientific revolution, took place in Europe. 
Dissatisfaction with older philosophical approaches had begun earlier and had produced other changes in society, such as the Protestant Reformation, but the revolution in science began when natural philosophers began to mount a sustained attack on the Scholastic philosophical programme and supposed that mathematical descriptive schemes adopted from such fields as mechanics and astronomy could actually yield universally valid characterizations of motion and other concepts. A breakthrough in astronomy was made by Polish astronomer Nicolaus Copernicus (1473–1543) when, in 1543, he gave strong arguments for the heliocentric model of the Solar system, ostensibly as a means to render tables charting planetary motion more accurate and to simplify their production. In heliocentric models of the Solar system, the Earth orbits the Sun along with the other planets, contradicting the system of the Greek-Egyptian astronomer Ptolemy (2nd century CE; see above), which placed the Earth at the center of the Universe and had been accepted for over 1,400 years. The Greek astronomer Aristarchus of Samos (c.310 – c.230 BCE) had suggested that the Earth revolves around the Sun, but Copernicus' reasoning led to lasting general acceptance of this "revolutionary" idea. Copernicus' book presenting the theory ("De revolutionibus orbium coelestium", "On the Revolutions of the Celestial Spheres") was published just before his death in 1543 and, as it is now generally considered to mark the beginning of modern astronomy, is also considered to mark the beginning of the Scientific revolution. Copernicus' new perspective, along with the accurate observations made by Tycho Brahe, enabled German astronomer Johannes Kepler (1571–1630) to formulate his laws regarding planetary motion that remain in use today. 
The Italian mathematician, astronomer, and physicist Galileo Galilei (1564–1642) was the central figure in the Scientific revolution and famous for his support for Copernicanism, his astronomical discoveries, empirical experiments and his improvement of the telescope. As a mathematician, Galileo's role in the university culture of his era was subordinated to the three major topics of study: law, medicine, and theology (which was closely allied to philosophy). Galileo, however, felt that the descriptive content of the technical disciplines warranted philosophical interest, particularly because mathematical analysis of astronomical observations – notably, Copernicus' analysis of the relative motions of the Sun, Earth, Moon, and planets – indicated that philosophers' statements about the nature of the universe could be shown to be in error. Galileo also performed mechanical experiments, insisting that motion itself – regardless of whether it was produced "naturally" or "artificially" (i.e. deliberately) – had universally consistent characteristics that could be described mathematically. Galileo's early studies at the University of Pisa were in medicine, but he was soon drawn to mathematics and physics. At 19, he discovered (and, subsequently, verified) the isochronal nature of the pendulum when, using his pulse, he timed the oscillations of a swinging lamp in Pisa's cathedral and found that the period remained the same for each swing regardless of the swing's amplitude. He soon became known through his invention of a hydrostatic balance and for his treatise on the center of gravity of solid bodies. While teaching at the University of Pisa (1589–92), he initiated his experiments concerning the laws of bodies in motion that brought results so contradictory to the accepted teachings of Aristotle that strong antagonism was aroused. He found that bodies do not fall with velocities proportional to their weights. 
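Galileo's isochronism can be illustrated with the small-angle pendulum formula, T₀ = 2π√(L/g), in which neither the mass nor (to leading order) the amplitude appears. The sketch below uses the first amplitude correction and illustrative values (the 5 m suspension length and swing angles are assumptions, not historical data):

```python
import math

def pendulum_period(length_m, amplitude_rad, g=9.81):
    """Period of a simple pendulum, using the first amplitude
    correction to the small-angle formula T0 = 2*pi*sqrt(L/g).
    The bob's mass does not appear at all."""
    t0 = 2 * math.pi * math.sqrt(length_m / g)
    return t0 * (1 + amplitude_rad**2 / 16)

# Illustrative lamp: 5 m suspension, two different swing amplitudes.
t_small = pendulum_period(5.0, math.radians(5))
t_large = pendulum_period(5.0, math.radians(15))
print(f"{t_small:.4f} s vs {t_large:.4f} s "
      f"({100 * (t_large - t_small) / t_small:.2f}% difference)")
```

Tripling the amplitude from 5° to 15° changes the period by well under half a percent, which is why the swings of the cathedral lamp appeared isochronal against a pulse.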
The famous story in which Galileo is said to have dropped weights from the Leaning Tower of Pisa is apocryphal, but he did find that the path of a projectile is a parabola and is credited with conclusions that anticipated Newton's laws of motion (e.g. the notion of inertia). Among these is what is now called Galilean relativity, the first precisely formulated statement about properties of space and time outside three-dimensional geometry. Galileo has been called the "father of modern observational astronomy", the "father of modern physics", the "father of science", and "the father of modern science". According to Stephen Hawking, "Galileo, perhaps more than any other single person, was responsible for the birth of modern science." As religious orthodoxy decreed a geocentric or Tychonic understanding of the Solar system, Galileo's support for heliocentrism provoked controversy and he was tried by the Inquisition. Found "vehemently suspect of heresy", he was forced to recant and spent the rest of his life under house arrest. The contributions that Galileo made to observational astronomy include the telescopic confirmation of the phases of Venus; his discovery, in 1609, of Jupiter's four largest moons (subsequently given the collective name of the "Galilean moons"); and the observation and analysis of sunspots. Galileo also pursued applied science and technology, inventing, among other instruments, a military compass. His discovery of the Jovian moons was published in 1610 and enabled him to obtain the position of mathematician and philosopher to the Medici court. As such, he was expected to engage in debates with philosophers in the Aristotelian tradition and received a large audience for his own publications such as the "Discourses and Mathematical Demonstrations Concerning Two New Sciences" (published abroad following his arrest for the publication of "Dialogue Concerning the Two Chief World Systems") and "The Assayer". 
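Galileo's parabolic projectile path follows from decomposing the motion into a uniform horizontal velocity and a uniformly accelerated fall. A minimal numerical check (illustrative launch values assumed): for equally spaced horizontal positions, a parabola has constant second differences in height, and the mass never enters.

```python
def projectile_path(v0x, v0y, g=9.81, dt=0.1, steps=20):
    """Trajectory from Galileo's decomposition: constant horizontal
    velocity plus uniform downward acceleration. Mass never enters."""
    points = []
    for i in range(steps):
        t = i * dt
        x = v0x * t
        y = v0y * t - 0.5 * g * t * t
        points.append((x, y))
    return points

path = projectile_path(10.0, 15.0)
ys = [y for _, y in path]
# For equally spaced x, a parabola has constant second differences in y.
second_diffs = [ys[i + 2] - 2 * ys[i + 1] + ys[i] for i in range(len(ys) - 2)]
print(second_diffs[0])  # each equals -g * dt**2
```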
Galileo's interest in experimenting with and formulating mathematical descriptions of motion established experimentation as an integral part of natural philosophy. This tradition, combined with the non-mathematical emphasis on the collection of "experimental histories" by philosophical reformists such as William Gilbert and Francis Bacon, drew a significant following in the years leading up to and following Galileo's death, including Evangelista Torricelli and the participants in the Accademia del Cimento in Italy; Marin Mersenne and Blaise Pascal in France; Christiaan Huygens in the Netherlands; and Robert Hooke and Robert Boyle in England. The French philosopher René Descartes (1596–1650) was well-connected to, and influential within, the experimental philosophy networks of the day. Descartes had a more ambitious agenda, however, which was geared toward replacing the Scholastic philosophical tradition altogether. Questioning the reality interpreted through the senses, Descartes sought to re-establish philosophical explanatory schemes by reducing all perceived phenomena to being attributable to the motion of an invisible sea of "corpuscles". (Notably, he reserved human thought and God from his scheme, holding these to be separate from the physical universe). In proposing this philosophical framework, Descartes supposed that different kinds of motion, such as that of planets versus that of terrestrial objects, were not fundamentally different, but were merely different manifestations of an endless chain of corpuscular motions obeying universal principles. Particularly influential were his explanations for circular astronomical motions in terms of the vortex motion of corpuscles in space (Descartes argued, in accord with the beliefs, if not the methods, of the Scholastics, that a vacuum could not exist), and his explanation of gravity in terms of corpuscles pushing objects downward. 
Descartes, like Galileo, was convinced of the importance of mathematical explanation, and he and his followers were key figures in the development of mathematics and geometry in the 17th century. Cartesian mathematical descriptions of motion held that all mathematical formulations had to be justifiable in terms of direct physical action, a position held by Huygens and the German philosopher Gottfried Leibniz, who, while following in the Cartesian tradition, developed his own philosophical alternative to Scholasticism, which he outlined in his 1714 work, "The Monadology". Descartes has been dubbed the 'Father of Modern Philosophy', and much subsequent Western philosophy is a response to his writings, which are studied closely to this day. In particular, his "Meditations on First Philosophy" continues to be a standard text at most university philosophy departments. Descartes' influence in mathematics is equally apparent; the Cartesian coordinate system — allowing algebraic equations to be expressed as geometric shapes in a two-dimensional coordinate system — was named after him. He is credited as the father of analytical geometry, the bridge between algebra and geometry, important to the discovery of calculus and analysis. The late 17th and early 18th centuries saw the achievements of the greatest figure of the Scientific revolution: Cambridge University physicist and mathematician Sir Isaac Newton (1642–1727), considered by many to be the greatest and most influential scientist who ever lived. Newton, a fellow of the Royal Society of England, combined his own discoveries in mechanics and astronomy with earlier ones to create a single system for describing the workings of the universe. Newton formulated three laws of motion, which describe the relationship between forces and the motion of objects, and also the law of universal gravitation, the latter of which could be used to explain the behavior not only of falling bodies on the earth but also of planets and other celestial bodies. 
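The unifying power of the inverse-square law, F = G m₁m₂/r², can be illustrated with modern constants (which were not available to Newton, who worked in proportionalities; the numerical values below are modern measurements, not part of the original text): the same law yields both the acceleration of a falling body at the Earth's surface and the Moon's orbital period.

```python
import math

G = 6.674e-11           # gravitational constant, N m^2 / kg^2
M_EARTH = 5.972e24      # mass of the Earth, kg
R_EARTH = 6.371e6       # mean radius of the Earth, m
R_MOON_ORBIT = 3.844e8  # mean Earth-Moon distance, m

# Acceleration of a falling body at the surface: a = G M / r^2
g_surface = G * M_EARTH / R_EARTH**2

# Period of a circular orbit at the Moon's distance; Kepler's third
# law follows from the same force law: T = 2*pi*sqrt(r^3 / (G M))
t_moon = 2 * math.pi * math.sqrt(R_MOON_ORBIT**3 / (G * M_EARTH))

print(f"surface gravity ~ {g_surface:.2f} m/s^2")
print(f"lunar period    ~ {t_moon / 86400:.1f} days")
```

One law reproduces both the familiar 9.8 m/s² and a lunar month of about 27 days, the unification of terrestrial and celestial motion described above.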
To arrive at his results, Newton invented one form of an entirely new branch of mathematics: calculus (also invented independently by Gottfried Leibniz), which was to become an essential tool in much of the later development in most branches of physics. Newton's findings were set forth in his "Philosophiæ Naturalis Principia Mathematica" ("Mathematical Principles of Natural Philosophy"), the publication of which in 1687 marked the beginning of the modern period of mechanics and astronomy. Newton was able to refute the Cartesian mechanical tradition that all motions should be explained with respect to the immediate force exerted by corpuscles. Using his three laws of motion and law of universal gravitation, Newton removed the idea that objects followed paths determined by natural shapes and instead demonstrated that not only regularly observed paths, but all the future motions of any body, could be deduced mathematically based on knowledge of their existing motion, their mass, and the forces acting upon them. However, observed celestial motions did not precisely conform to a Newtonian treatment, and Newton, who was also deeply interested in theology, imagined that God intervened to ensure the continued stability of the solar system. Newton's principles (but not his mathematical treatments) proved controversial with Continental philosophers, who found his lack of metaphysical explanation for movement and gravitation philosophically unacceptable. Beginning around 1700, a bitter rift opened between the Continental and British philosophical traditions, which was stoked by heated, ongoing, and viciously personal disputes between the followers of Newton and Leibniz concerning priority over the analytical techniques of calculus, which each had developed independently. Initially, the Cartesian and Leibnizian traditions prevailed on the Continent (leading to the dominance of the Leibnizian calculus notation everywhere except Britain). 
Newton himself remained privately disturbed at the lack of a philosophical understanding of gravitation while insisting in his writings that none was necessary to infer its reality. As the 18th century progressed, Continental natural philosophers increasingly accepted the Newtonians' willingness to forgo ontological metaphysical explanations for mathematically described motions. Newton built the first functioning reflecting telescope and developed a theory of color, published in "Opticks", based on the observation that a prism decomposes white light into the many colours forming the visible spectrum. While Newton explained light as being composed of tiny particles, a rival theory of light which explained its behavior in terms of waves was presented in 1690 by Christiaan Huygens. However, the belief in the mechanistic philosophy coupled with Newton's reputation meant that the wave theory saw relatively little support until the 19th century. Newton also formulated an empirical law of cooling, studied the speed of sound, investigated power series, demonstrated the generalised binomial theorem and developed a method for approximating the roots of a function. His work on infinite series was inspired by Simon Stevin's decimals. Most importantly, Newton showed that the motions of objects on Earth and of celestial bodies are governed by the same set of natural laws, which were neither capricious nor malevolent. By demonstrating the consistency between Kepler's laws of planetary motion and his own theory of gravitation, Newton also removed the last doubts about heliocentrism. By bringing together all the ideas set forth during the Scientific revolution, Newton effectively established the foundation of modern mathematics and science. Other branches of physics also received attention during the period of the Scientific revolution. 
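The root-approximation method mentioned above survives, in the form later refined by Raphson, as the iteration x ← x − f(x)/f′(x). A minimal modern sketch (the example function is illustrative, not one of Newton's own):

```python
def newton_root(f, df, x0, tol=1e-12, max_iter=50):
    """Newton's iteration for a root of f, given its derivative df:
    repeatedly replace x with x - f(x)/f'(x) until the step is tiny."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Root of f(x) = x^2 - 2, i.e. the square root of 2.
root = newton_root(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
print(root)
```

Near a simple root the error roughly squares at each step, so a handful of iterations suffices for full floating-point precision.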
William Gilbert, court physician to Queen Elizabeth I, published an important work on magnetism in 1600, describing how the earth itself behaves like a giant magnet. Robert Boyle (1627–91) studied the behavior of gases enclosed in a chamber and formulated the gas law named for him; he also contributed to physiology and to the founding of modern chemistry. Another important factor in the scientific revolution was the rise of learned societies and academies in various countries. The earliest of these were in Italy and Germany and were short-lived. More influential were the Royal Society of England (1660) and the Academy of Sciences in France (1666). The former was a private institution in London and included such scientists as John Wallis, William Brouncker, Thomas Sydenham, John Mayow, and Christopher Wren (who contributed not only to architecture but also to astronomy and anatomy); the latter, in Paris, was a government institution and included as a foreign member the Dutchman Huygens. In the 18th century, important royal academies were established at Berlin (1700) and at St. Petersburg (1724). The societies and academies provided the principal opportunities for the publication and discussion of scientific results during and after the scientific revolution. In 1690, James Bernoulli showed that the cycloid is the solution to the tautochrone problem; and the following year, in 1691, Johann Bernoulli showed that a chain freely suspended from two points will form a catenary, the curve with the lowest possible center of gravity available to any chain hung between two fixed points. He then showed, in 1696, that the cycloid is the solution to the brachistochrone problem. A precursor of the steam engine was designed by the German scientist Otto von Guericke who, in 1650, designed and built the world's first vacuum pump and created the world's first man-made vacuum, famously demonstrated in the Magdeburg hemispheres experiment. 
He was driven to make a vacuum to disprove Aristotle's long-held supposition that 'Nature abhors a vacuum'. Shortly thereafter, Irish physicist and chemist Boyle had learned of Guericke's designs and in 1656, in coordination with English scientist Robert Hooke, built an air pump. Using this pump, Boyle and Hooke noticed the pressure-volume correlation for a gas: "PV" = "k", where "P" is pressure, "V" is volume and "k" is a constant: this relationship is known as Boyle's Law. At that time, air was assumed to be a system of motionless particles, and not interpreted as a system of moving molecules. The concept of thermal motion came two centuries later. Therefore, Boyle's publication in 1660 speaks about a mechanical concept: the air spring. Later, after the invention of the thermometer, the property temperature could be quantified. This tool gave Gay-Lussac the opportunity to derive his law, which shortly afterwards led to the ideal gas law. But, already before the establishment of the ideal gas law, an associate of Boyle's named Denis Papin built a bone digester in 1679, which is a closed vessel with a tightly fitting lid that confines steam until a high pressure is generated. Later designs implemented a steam release valve to keep the machine from exploding. By watching the valve rhythmically move up and down, Papin conceived of the idea of a piston and cylinder engine. He did not however follow through with his design. Nevertheless, in 1697, based on Papin's designs, engineer Thomas Savery built the first engine. Although these early engines were crude and inefficient, they attracted the attention of the leading scientists of the time. Prior to 1698 and the invention of the Savery engine, horses were used to power pulleys, attached to buckets, which lifted water out of flooded salt mines in England. In the years to follow, more variations of steam engines were built, such as the Newcomen engine, and later the Watt engine. 
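Boyle's law PV = k makes pressure at fixed temperature a one-line computation: if the product of pressure and volume is constant, then P₂ = P₁V₁/V₂. A minimal sketch (the sample pressure and volumes are assumed example values):

```python
def boyle_pressure(p1, v1, v2):
    """Boyle's law: at fixed temperature, P * V = k, so P2 = P1 * V1 / V2."""
    return p1 * v1 / v2

# Illustrative gas sample: halving the volume doubles the pressure.
p1, v1 = 101.3, 2.0   # kPa and litres (assumed example values)
p2 = boyle_pressure(p1, v1, 1.0)
print(p2)  # 202.6
```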
In time, these early engines would eventually be utilized in place of horses. Thus, each engine began to be associated with a certain amount of "horse power" depending upon how many horses it had replaced. The main problem with these first engines was that they were slow and clumsy, converting less than 2% of the input fuel into useful work. In other words, large quantities of coal (or wood) had to be burned to yield only a small fraction of work output. Hence the need for a new science of engine dynamics was born. During the 18th century, the mechanics founded by Newton was developed by several scientists as more mathematicians learned calculus and elaborated upon its initial formulation. The application of mathematical analysis to problems of motion was known as rational mechanics, or mixed mathematics (and was later termed classical mechanics). In 1714, Brook Taylor derived the fundamental frequency of a stretched vibrating string in terms of its tension and mass per unit length by solving a differential equation. The Swiss mathematician Daniel Bernoulli (1700–1782) made important mathematical studies of the behavior of gases, anticipating the kinetic theory of gases developed more than a century later, and has been referred to as the first mathematical physicist. In 1733, Daniel Bernoulli derived the fundamental frequency and harmonics of a hanging chain by solving a differential equation. In 1734, Bernoulli solved the differential equation for the vibrations of an elastic bar clamped at one end. Bernoulli's treatment of fluid dynamics and his examination of fluid flow was introduced in his 1738 work "Hydrodynamica". Rational mechanics dealt primarily with the development of elaborate mathematical treatments of observed motions, using Newtonian principles as a basis, and emphasized improving the tractability of complex calculations and developing of legitimate means of analytical approximation. 
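Taylor's result for the stretched string, mentioned below in its historical context, reduces to the formula f = (1/2L)·√(T/μ) for the fundamental frequency in terms of length L, tension T, and mass per unit length μ. A minimal sketch with assumed illustrative values:

```python
import math

def string_fundamental(length_m, tension_n, mass_per_length):
    """Brook Taylor's result for a stretched vibrating string:
    fundamental frequency f = (1 / 2L) * sqrt(T / mu)."""
    return math.sqrt(tension_n / mass_per_length) / (2 * length_m)

# Illustrative string (assumed values): 0.65 m long, 60 N tension, 1 g/m.
f = string_fundamental(0.65, 60.0, 0.001)
print(f"{f:.1f} Hz")
```

The formula captures familiar musical behaviour: halving the length doubles the pitch, and tightening the string raises it.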
A representative contemporary textbook was published by Johann Baptiste Horvath. By the end of the century analytical treatments were rigorous enough to verify the stability of the solar system solely on the basis of Newton's laws without reference to divine intervention—even as deterministic treatments of systems as simple as the three body problem in gravitation remained intractable. In 1705, Edmond Halley predicted the periodicity of Halley's Comet, William Herschel discovered Uranus in 1781, and Henry Cavendish measured the gravitational constant and determined the mass of the Earth in 1798. In 1783, John Michell suggested that some objects might be so massive that not even light could escape from them. In 1739, Leonhard Euler solved the ordinary differential equation for a forced harmonic oscillator and noticed the resonance phenomenon. In 1742, Colin Maclaurin discovered his uniformly rotating self-gravitating spheroids. In 1742, Benjamin Robins published his "New Principles in Gunnery", establishing the science of aerodynamics. British work, carried on by mathematicians such as Taylor and Maclaurin, fell behind Continental developments as the century progressed. Meanwhile, work flourished at scientific academies on the Continent, led by such mathematicians as Bernoulli, Euler, Lagrange, Laplace, and Legendre. In 1743, Jean le Rond d'Alembert published his "Traite de Dynamique", in which he introduced the concept of generalized forces for accelerating systems and systems with constraints, and applied the new idea of virtual work to solve dynamical problems; this approach, now known as D'Alembert's principle, rivals Newton's second law of motion. In 1747, Pierre Louis Maupertuis applied minimum principles to mechanics. In 1759, Euler solved the partial differential equation for the vibration of a rectangular drum. In 1764, Euler examined the partial differential equation for the vibration of a circular drum and found one of the Bessel function solutions. 
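The resonance phenomenon Euler noticed can be seen in the modern steady-state solution of a damped, sinusoidally forced oscillator (the damping term is a later addition to Euler's undamped problem, and the parameter values below are illustrative assumptions): the response amplitude peaks sharply when the driving frequency approaches the natural frequency.

```python
import math

def steady_amplitude(omega, omega0=1.0, gamma=0.05, f0=1.0, m=1.0):
    """Steady-state amplitude of a damped, sinusoidally forced harmonic
    oscillator: A(w) = (F0/m) / sqrt((w0^2 - w^2)^2 + (gamma * w)^2)."""
    return (f0 / m) / math.sqrt((omega0**2 - omega**2)**2 + (gamma * omega)**2)

# Sweep the driving frequency: the response peaks near omega0 -- resonance.
freqs = [0.2, 0.6, 1.0, 1.4, 1.8]
amps = [steady_amplitude(w) for w in freqs]
peak = freqs[amps.index(max(amps))]
print(peak)  # 1.0
```

With no damping at all (gamma = 0) the amplitude diverges at ω = ω₀, which is the undamped resonance of Euler's original equation.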
In 1776, John Smeaton published a paper on experiments relating power, work, momentum and kinetic energy, and supporting the conservation of energy. In 1788, Joseph Louis Lagrange presented Lagrange's equations of motion in "Mécanique Analytique", in which the whole of mechanics was organized around the principle of virtual work. In 1789, Antoine Lavoisier stated the law of conservation of mass. The rational mechanics developed in the 18th century received a brilliant exposition in both Lagrange's 1788 work and the "Celestial Mechanics" (1799–1825) of Pierre-Simon Laplace. During the 18th century, thermodynamics was developed through the theories of weightless "imponderable fluids", such as heat ("caloric"), electricity, and phlogiston (which was rapidly overthrown as a concept following Lavoisier's identification of oxygen gas late in the century). Assuming that these concepts were real fluids, their flow could be traced through a mechanical apparatus or chemical reactions. This tradition of experimentation led to the development of new kinds of experimental apparatus, such as the Leyden jar; and new kinds of measuring instruments, such as the calorimeter, and improved versions of old ones, such as the thermometer. Experiments also produced new concepts, such as the University of Glasgow experimenter Joseph Black's notion of latent heat and Philadelphia intellectual Benjamin Franklin's characterization of electrical fluid as flowing between places of excess and deficit (a concept later reinterpreted in terms of positive and negative charges). Franklin also showed that lightning is electricity in 1752. The accepted theory of heat in the 18th century viewed it as a kind of fluid, called caloric; although this theory was later shown to be erroneous, a number of scientists adhering to it nevertheless made important discoveries useful in developing the modern theory, including Joseph Black (1728–99) and Henry Cavendish (1731–1810). 
Opposed to this caloric theory, which had been developed mainly by the chemists, was the less accepted theory dating from Newton's time that heat is due to the motions of the particles of a substance. This mechanical theory gained support in 1798 from the cannon-boring experiments of Count Rumford (Benjamin Thompson), who found a direct relationship between heat and mechanical energy. While it was recognized early in the 18th century that finding absolute theories of electrostatic and magnetic force akin to Newton's principles of motion would be an important achievement, none were forthcoming. This impossibility only slowly disappeared as experimental practice became more widespread and more refined in the early years of the 19th century in places such as the newly established Royal Institution in London. Meanwhile, the analytical methods of rational mechanics began to be applied to experimental phenomena, most influentially with the French mathematician Joseph Fourier's analytical treatment of the flow of heat, as published in 1822. Joseph Priestley proposed an electrical inverse-square law in 1767, and Charles-Augustin de Coulomb introduced the inverse-square law of electrostatics in 1785. At the end of the century, the members of the French Academy of Sciences had attained clear dominance in the field. At the same time, the experimental tradition established by Galileo and his followers persisted. The Royal Society and the French Academy of Sciences were major centers for the performance and reporting of experimental work. Experiments in mechanics, optics, magnetism, static electricity, chemistry, and physiology were not clearly distinguished from each other during the 18th century, but significant differences in explanatory schemes and, thus, experiment design were emerging. 
Chemical experimenters, for instance, defied attempts to enforce a scheme of abstract Newtonian forces onto chemical affinities, and instead focused on the isolation and classification of chemical substances and reactions. In 1800, Alessandro Volta invented the electric battery (known as the voltaic pile) and thus improved the way electric currents could be studied. A year later, Thomas Young demonstrated the wave nature of light—which received strong experimental support from the work of Augustin-Jean Fresnel—and the principle of interference. In 1813, Peter Ewart supported the idea of the conservation of energy in his paper "On the measure of moving force". In 1820, Hans Christian Ørsted found that a current-carrying conductor gives rise to a magnetic force surrounding it, and within a week after Ørsted's discovery reached France, André-Marie Ampère discovered that two parallel electric currents will exert forces on each other. In 1821, William Hamilton began his analysis of Hamilton's characteristic function. In 1821, Michael Faraday built an electricity-powered motor, while Georg Ohm stated his law of electrical resistance in 1826, expressing the relationship between voltage, current, and resistance in an electric circuit. A year later, botanist Robert Brown discovered Brownian motion: pollen grains in water undergoing movement resulting from their bombardment by the fast-moving atoms or molecules in the liquid. In 1829, Gaspard Coriolis introduced the terms of work (force times distance) and kinetic energy with the meanings they have today. In 1831, Faraday (and independently Joseph Henry) discovered the reverse effect, the production of an electric potential or current through magnetism – known as electromagnetic induction; these two discoveries are the basis of the electric motor and the electric generator, respectively. In 1834, Carl Jacobi discovered his uniformly rotating self-gravitating ellipsoids (the Jacobi ellipsoid). 
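Ohm's relationship between voltage, current, and resistance can be stated in one line each way; a minimal sketch, with function names and example values chosen purely for illustration:

```python
def current(voltage_v, resistance_ohm):
    """Ohm's law: I = V / R."""
    return voltage_v / resistance_ohm

def voltage(current_a, resistance_ohm):
    """Rearranged form: V = I * R."""
    return current_a * resistance_ohm

# A 12 V source across a 4-ohm resistor drives 3 A of current:
print(current(12.0, 4.0))   # 3.0
print(voltage(3.0, 4.0))    # 12.0
```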
In 1834, John Scott Russell observed a nondecaying solitary water wave (soliton) in the Union Canal near Edinburgh and used a water tank to study the dependence of solitary water wave velocities on wave amplitude and water depth. In 1835, William Hamilton stated Hamilton's canonical equations of motion. In the same year, Gaspard Coriolis examined theoretically the mechanical efficiency of waterwheels, and deduced the Coriolis effect. In 1841, Julius Robert von Mayer, an amateur scientist, wrote a paper on the conservation of energy but his lack of academic training led to its rejection. In 1842, Christian Doppler proposed the Doppler effect. In 1847, Hermann von Helmholtz formally stated the law of conservation of energy. In 1851, Léon Foucault showed the Earth's rotation with a huge pendulum (Foucault pendulum). There were important advances in continuum mechanics in the first half of the century, namely formulation of laws of elasticity for solids and discovery of Navier–Stokes equations for fluids. In the 19th century, the connection between heat and mechanical energy was established quantitatively by Julius Robert von Mayer and James Prescott Joule, who measured the mechanical equivalent of heat in the 1840s. In 1849, Joule published results from his series of experiments (including the paddlewheel experiment) which show that heat is a form of energy, a fact that was accepted in the 1850s. The relation between heat and energy was important for the development of steam engines, and in 1824 the experimental and theoretical work of Sadi Carnot was published. Carnot captured some of the ideas of thermodynamics in his discussion of the efficiency of an idealized engine. Sadi Carnot's work provided a basis for the formulation of the first law of thermodynamics—a restatement of the law of conservation of energy—which was stated around 1850 by William Thomson, later known as Lord Kelvin, and Rudolf Clausius. 
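The Doppler effect mentioned above has a simple classical form for sound; a sketch, where the function name and example speeds are illustrative and 343 m/s is the approximate speed of sound in air at room temperature:

```python
def doppler_frequency(f_source, v_sound=343.0, v_source=0.0, v_observer=0.0):
    """Classical Doppler shift for sound: f' = f * (v + v_obs) / (v - v_src).
    Velocities are positive when source and observer approach each other."""
    return f_source * (v_sound + v_observer) / (v_sound - v_source)

# A 440 Hz source approaching at 34.3 m/s (10% of the sound speed)
# is heard shifted up to about 489 Hz:
print(round(doppler_frequency(440.0, v_source=34.3), 1))  # 488.9
```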
Lord Kelvin, who had extended the concept of absolute zero from gases to all substances in 1848, drew upon the engineering theory of Lazare Carnot, Sadi Carnot, and Émile Clapeyron, as well as the experimentation of James Prescott Joule on the interchangeability of mechanical, chemical, thermal, and electrical forms of work, to formulate the first law. Kelvin and Clausius also stated the second law of thermodynamics, which was originally formulated in terms of the fact that heat does not spontaneously flow from a colder body to a hotter. Other formulations followed quickly (for example, the second law was expounded in Thomson and Peter Guthrie Tait's influential work "Treatise on Natural Philosophy") and Kelvin in particular understood some of the law's general implications. The idea that gases consist of molecules in motion had been discussed in some detail by Daniel Bernoulli in 1738, but had fallen out of favor, and was revived by Clausius in 1857. In 1850, Hippolyte Fizeau and Léon Foucault measured the speed of light in water and found that it was slower than in air, in support of the wave model of light. In 1852, Joule and Thomson demonstrated that a rapidly expanding gas cools, later named the Joule–Thomson effect or Joule–Kelvin effect. Hermann von Helmholtz put forward the idea of the heat death of the universe in 1854, the same year that Clausius established the importance of "dQ/T" (Clausius's theorem), though he did not yet name the quantity. In 1859, James Clerk Maxwell discovered the distribution law of molecular velocities. Maxwell showed that electric and magnetic fields are propagated outward from their source at a speed equal to that of light and that light is one of several kinds of electromagnetic radiation, differing only in frequency and wavelength from the others. In 1859, Maxwell worked out the mathematics of the distribution of velocities of the molecules of a gas. 
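Maxwell's distribution of molecular speeds can be evaluated numerically. A minimal sketch in modern notation; the nitrogen-molecule mass and room temperature are illustrative inputs, not values from Maxwell's paper:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def maxwell_speed_pdf(v, mass, temp):
    """Maxwell's distribution of molecular speeds,
    f(v) = 4*pi*(m/(2*pi*k*T))**1.5 * v**2 * exp(-m*v**2/(2*k*T))."""
    a = mass / (2.0 * K_B * temp)
    return 4.0 * math.pi * (a / math.pi) ** 1.5 * v**2 * math.exp(-a * v**2)

# Most probable speed v_p = sqrt(2*k*T/m) for a nitrogen molecule at 300 K:
m_n2 = 4.65e-26  # kg, approximate mass of an N2 molecule
v_p = math.sqrt(2.0 * K_B * 300.0 / m_n2)
print(round(v_p))  # 422 (m/s)
```

The distribution peaks at `v_p` and falls off for faster molecules, which is what gives a gas a well-defined temperature despite the spread of individual speeds.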
The wave theory of light was widely accepted by the time of Maxwell's work on the electromagnetic field, and afterward the study of light and that of electricity and magnetism were closely related. In 1864 James Maxwell published his papers on a dynamical theory of the electromagnetic field, and stated that light is an electromagnetic phenomenon in the 1873 publication of Maxwell's "Treatise on Electricity and Magnetism". This work drew upon theoretical work by German theoreticians such as Carl Friedrich Gauss and Wilhelm Weber. The encapsulation of heat in particulate motion, and the addition of electromagnetic forces to Newtonian dynamics established an enormously robust theoretical underpinning to physical observations. The prediction that light represented a transmission of energy in wave form through a "luminiferous ether", and the seeming confirmation of that prediction with Helmholtz's student Heinrich Hertz's 1888 detection of electromagnetic radiation, was a major triumph for physical theory and raised the possibility that even more fundamental theories based on the field could soon be developed. Experimental confirmation of Maxwell's theory was provided by Hertz, who generated and detected electric waves in 1886 and verified their properties, at the same time foreshadowing their application in radio, television, and other devices. In 1887, Heinrich Hertz discovered the photoelectric effect. Research on the electromagnetic waves began soon after, with many scientists and inventors conducting experiments on their properties. In the mid to late 1890s Guglielmo Marconi developed a radio wave based wireless telegraphy system (see invention of radio). The atomic theory of matter had been proposed again in the early 19th century by the chemist John Dalton and became one of the hypotheses of the kinetic-molecular theory of gases developed by Clausius and James Clerk Maxwell to explain the laws of thermodynamics. 
The kinetic theory in turn led to a revolutionary approach to science, the statistical mechanics of Ludwig Boltzmann (1844–1906) and Josiah Willard Gibbs (1839–1903), which studies the statistics of microstates of a system and uses statistics to determine the state of a physical system. Interrelating the statistical likelihood of certain states of organization of these particles with the energy of those states, Clausius reinterpreted the dissipation of energy to be the statistical tendency of molecular configurations to pass toward increasingly likely, increasingly disorganized states (coining the term "entropy" to describe the disorganization of a state). The statistical versus absolute interpretations of the second law of thermodynamics set up a dispute that would last for several decades (producing arguments such as "Maxwell's demon"), and that would not be held to be definitively resolved until the behavior of atoms was firmly established in the early 20th century. In 1902, James Jeans found the length scale required for gravitational perturbations to grow in a static nearly homogeneous medium. At the end of the 19th century, physics had evolved to the point at which classical mechanics could cope with highly complex problems involving macroscopic situations; thermodynamics and kinetic theory were well established; geometrical and physical optics could be understood in terms of electromagnetic waves; and the conservation laws for energy and momentum (and mass) were widely accepted. So profound were these and other developments that it was generally accepted that all the important laws of physics had been discovered and that, henceforth, research would be concerned with clearing up minor problems and particularly with improvements of method and measurement. 
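The statistical reading of entropy described above is captured by Boltzmann's relation S = k·ln W, which ties the disorganization of a state to the number of microstates realizing it. A minimal sketch; the microstate counts are arbitrary illustrative numbers:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_entropy(microstates):
    """Boltzmann's statistical entropy S = k_B * ln(W): a macrostate
    realized by more microstates W has higher entropy."""
    return K_B * math.log(microstates)

# Doubling the number of accessible configurations raises S by exactly
# k_B * ln(2), regardless of the starting count:
delta = boltzmann_entropy(2.0e20) - boltzmann_entropy(1.0e20)
print(math.isclose(delta, K_B * math.log(2.0), rel_tol=1e-9))  # True
```

Systems drift toward macrostates with larger W simply because those states are overwhelmingly more probable, which is the statistical tendency the text describes.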
However, around 1900 serious doubts arose about the completeness of the classical theories—the triumph of Maxwell's theories, for example, was undermined by inadequacies that had already begun to appear—and their inability to explain certain physical phenomena, such as the energy distribution in blackbody radiation and the photoelectric effect, while some of the theoretical formulations led to paradoxes when pushed to the limit. Prominent physicists such as Hendrik Lorentz, Emil Cohn, Ernst Wiechert and Wilhelm Wien believed that some modification of Maxwell's equations might provide the basis for all physical laws. These shortcomings of classical physics were never to be resolved and new ideas were required. At the beginning of the 20th century a major revolution shook the world of physics, which led to a new era, generally referred to as modern physics. In the 19th century, experimenters began to detect unexpected forms of radiation: Wilhelm Röntgen caused a sensation with his discovery of X-rays in 1895; in 1896 Henri Becquerel discovered that certain kinds of matter emit radiation of their own accord. In 1897, J. J. Thomson discovered the electron, and new radioactive elements found by Marie and Pierre Curie raised questions about the supposedly indestructible atom and the nature of matter. Marie and Pierre coined the term "radioactivity" to describe this property of matter, and isolated the radioactive elements radium and polonium. Ernest Rutherford and Frederick Soddy identified two of Becquerel's forms of radiation with electrons and the element helium. Rutherford identified and named two types of radioactivity and in 1911 interpreted experimental evidence as showing that the atom consists of a dense, positively charged nucleus surrounded by negatively charged electrons. Classical theory, however, predicted that this structure should be unstable. 
Classical theory had also failed to explain successfully two other experimental results that appeared in the late 19th century. One of these was the demonstration by Albert A. Michelson and Edward W. Morley—known as the Michelson–Morley experiment—which showed there did not seem to be a preferred frame of reference, at rest with respect to the hypothetical luminiferous ether, for describing electromagnetic phenomena. Studies of radiation and radioactive decay continued to be a preeminent focus for physical and chemical research through the 1930s, when the discovery of nuclear fission by Lise Meitner and Otto Frisch opened the way to the practical exploitation of what came to be called "atomic" energy. In 1905, a 26-year-old German physicist named Albert Einstein (then a patent clerk in Bern, Switzerland) showed how measurements of time and space are affected by motion between an observer and what is being observed. Einstein's radical theory of relativity revolutionized science. Although Einstein made many other important contributions to science, the theory of relativity alone represents one of the greatest intellectual achievements of all time. Although the concept of relativity was not introduced by Einstein, his major contribution was the recognition that the speed of light in a vacuum is constant, i.e. the same for all observers, and an absolute physical boundary for motion. This does not impact a person's day-to-day life since most objects travel at speeds much slower than light speed. For objects travelling near light speed, however, the theory of relativity shows that clocks associated with those objects will run more slowly and that the objects shorten in length according to measurements of an observer on Earth. Einstein also derived the famous equation, E = mc², which expresses the equivalence of mass and energy. 
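The slowing of moving clocks and the mass–energy equivalence described above can be computed directly. A minimal sketch with illustrative inputs (function names are invented for this example):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def lorentz_gamma(v):
    """Lorentz factor gamma = 1 / sqrt(1 - v**2/c**2): a moving clock
    ticks slower, and a moving rod shortens, by this factor."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def rest_energy(mass_kg):
    """Einstein's E = m * c**2, in joules."""
    return mass_kg * C**2

# At 60% of light speed, time dilation and length contraction are a
# modest factor of 1.25; the effect is negligible at everyday speeds:
print(round(lorentz_gamma(0.6 * C), 2))  # 1.25
# One gram of mass is equivalent to roughly 9e13 joules:
print(f"{rest_energy(0.001):.2e}")       # 8.99e+13
```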
Einstein argued that the speed of light was a constant in all inertial reference frames and that electromagnetic laws should remain valid independent of reference frame—assertions which rendered the ether "superfluous" to physical theory, and that held that observations of time and length varied relative to how the observer was moving with respect to the object being measured (what came to be called the "special theory of relativity"). It also followed that mass and energy were interchangeable quantities according to the equation E = mc². In another paper published the same year, Einstein asserted that electromagnetic radiation was transmitted in discrete quantities ("quanta"), according to a constant that the theoretical physicist Max Planck had posited in 1900 to arrive at an accurate theory for the distribution of blackbody radiation—an assumption that explained the strange properties of the photoelectric effect. The special theory of relativity is a formulation of the relationship between physical observations and the concepts of space and time. The theory arose out of contradictions between electromagnetism and Newtonian mechanics and had great impact on both those areas. The original historical issue was whether it was meaningful to discuss the electromagnetic wave-carrying "ether" and motion relative to it and also whether one could detect such motion, as was unsuccessfully attempted in the Michelson–Morley experiment. Einstein demolished these questions and the ether concept in his special theory of relativity. However, his basic formulation does not involve detailed electromagnetic theory. It arises out of the question: "What is time?" Newton, in the "Principia" (1687), had given an unambiguous answer: "Absolute, true, and mathematical time, of itself, and from its own nature, flows equably without relation to anything external, and by another name is called duration." This definition is basic to all classical physics. 
Einstein had the genius to question it, and found that it was incomplete. Instead, each "observer" necessarily makes use of his or her own scale of time, and for two observers in relative motion, their time-scales will differ. This induces a related effect on position measurements. Space and time become intertwined concepts, fundamentally dependent on the observer. Each observer presides over his or her own space-time framework or coordinate system. There being no absolute frame of reference, all observers of given events make different but equally valid (and reconcilable) measurements. What remains absolute is stated in Einstein's relativity postulate: "The basic laws of physics are identical for two observers who have a constant relative velocity with respect to each other." Special relativity had a profound effect on physics: having started as a rethinking of the theory of electromagnetism, it found a new symmetry law of nature, now called "Poincaré symmetry", that replaced the old Galilean symmetry. Special relativity exerted another long-lasting effect on dynamics. Although initially it was credited with the "unification of mass and energy", it became evident that relativistic dynamics established a firm "distinction" between rest mass, which is an invariant (observer independent) property of a particle or system of particles, and the energy and momentum of a system. The latter two are separately conserved in all situations but not invariant with respect to different observers. The term "mass" in particle physics underwent a semantic change, and since the late 20th century it almost exclusively denotes the rest (or "invariant") mass. By 1916, Einstein was able to generalize this further, to deal with all states of motion including non-uniform acceleration, which became the general theory of relativity. In this theory Einstein also specified a new concept, the curvature of space-time, which described the gravitational effect at every point in space. 
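The distinction between invariant rest mass and observer-dependent energy and momentum can be checked numerically: boosting (E, p) to another observer's frame changes both, but the combination E² − p² (with c = 1) does not. A sketch with illustrative numbers in natural units:

```python
import math

def invariant_mass(energy, momentum):
    """Rest mass from m**2 = E**2 - p**2 (units with c = 1);
    the same for every inertial observer."""
    return math.sqrt(energy**2 - momentum**2)

def boost(energy, momentum, beta):
    """Lorentz boost of (E, p) along the momentum axis,
    for an observer moving at speed beta (fraction of c)."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return gamma * (energy - beta * momentum), gamma * (momentum - beta * energy)

e1, p1 = 5.0, 3.0            # one observer's measurements of a particle
e2, p2 = boost(e1, p1, 0.6)  # a second observer sees different E and p...
# ...but both extract the same rest mass:
print(round(invariant_mass(e1, p1), 6), round(invariant_mass(e2, p2), 6))  # 4.0 4.0
```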
In fact, the curvature of space-time completely replaced Newton's universal law of gravitation. According to Einstein, gravitational force in the normal sense is a kind of illusion caused by the geometry of space. The presence of a mass causes a curvature of space-time in the vicinity of the mass, and this curvature dictates the space-time path that all freely-moving objects must follow. It was also predicted from this theory that light should be subject to gravity, which was verified experimentally. This aspect of relativity explained the phenomenon of light bending around the Sun, predicted black holes, and predicted properties of the cosmic microwave background radiation, a discovery that exposed fundamental anomalies in the classic steady-state hypothesis. For his explanation of the photoelectric effect, Einstein received the 1921 Nobel Prize in Physics. The gradual acceptance of Einstein's theories of relativity and the quantized nature of light transmission, and of Niels Bohr's model of the atom created as many problems as they solved, leading to a full-scale effort to reestablish physics on new fundamental principles. Expanding relativity to cases of accelerating reference frames (the "general theory of relativity") in the 1910s, Einstein posited an equivalence between the inertial force of acceleration and the force of gravity, leading to the conclusion that space is curved and finite in size, and the prediction of such phenomena as gravitational lensing and the distortion of time in gravitational fields. Although relativity resolved the electromagnetic phenomena conflict demonstrated by Michelson and Morley, a second theoretical problem was the explanation of the distribution of electromagnetic radiation emitted by a black body; experiment showed that at shorter wavelengths, toward the ultraviolet end of the spectrum, the energy approached zero, but classical theory predicted it should become infinite. 
This glaring discrepancy, known as the ultraviolet catastrophe, was solved by the new theory of quantum mechanics. Quantum mechanics is the theory of atoms and subatomic systems. Approximately the first 30 years of the 20th century represent the time of the conception and evolution of the theory. The basic ideas of quantum theory were introduced in 1900 by Max Planck (1858–1947), who was awarded the Nobel Prize for Physics in 1918 for his discovery of the quantized nature of energy. The quantum theory (which until then relied on the "correspondence" at large scales between the quantized world of the atom and the continuities of the "classical" world) was accepted when the Compton effect established that light carries momentum and can scatter off particles, and when Louis de Broglie asserted that matter can be seen as behaving as a wave in much the same way as electromagnetic waves behave like particles (wave–particle duality). In 1905, Einstein used the quantum theory to explain the photoelectric effect, and in 1913 the Danish physicist Niels Bohr used the same constant to explain the stability of Rutherford's atom as well as the frequencies of light emitted by hydrogen gas. The quantized theory of the atom gave way to a full-scale quantum mechanics in the 1920s. New principles of a "quantum" rather than a "classical" mechanics, formulated in matrix form by Werner Heisenberg, Max Born, and Pascual Jordan in 1925, were based on the probabilistic relationship between discrete "states" and denied the possibility of causality. 
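The contrast between the classical blackbody prediction and Planck's law can be made concrete. A sketch using standard constants; the wavelength and temperature values are illustrative:

```python
import math

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
K_B = 1.380649e-23   # Boltzmann constant, J/K

def rayleigh_jeans(lam, temp):
    """Classical spectral radiance, 2*c*k*T / lam**4:
    diverges as the wavelength shrinks (the ultraviolet catastrophe)."""
    return 2.0 * C * K_B * temp / lam**4

def planck(lam, temp):
    """Planck's law: energy quantization suppresses short wavelengths."""
    return (2.0 * H * C**2 / lam**5) / math.expm1(H * C / (lam * K_B * temp))

# In the ultraviolet (100 nm at 5000 K) the classical law wildly
# overshoots the quantum result, while at long wavelengths the two agree:
lam, temp = 100e-9, 5000.0
print(rayleigh_jeans(lam, temp) > 1e6 * planck(lam, temp))  # True
```

In the long-wavelength limit the exponential can be expanded and Planck's formula reduces to the Rayleigh–Jeans one, which is why classical theory worked well away from the ultraviolet.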
Quantum mechanics was extensively developed by Heisenberg, Wolfgang Pauli, Paul Dirac, and Erwin Schrödinger, who established an equivalent theory based on waves in 1926; but Heisenberg's 1927 "uncertainty principle" (indicating the impossibility of precisely and simultaneously measuring position and momentum) and the "Copenhagen interpretation" of quantum mechanics (named after Bohr's home city) continued to deny the possibility of fundamental causality, though opponents such as Einstein would metaphorically assert that "God does not play dice with the universe". The new quantum mechanics became an indispensable tool in the investigation and explanation of phenomena at the atomic level. Also in the 1920s, the Indian scientist Satyendra Nath Bose's work on photons and quantum mechanics provided the foundation for Bose–Einstein statistics, the theory of the Bose–Einstein condensate. Fermions, particles such as electrons and nucleons, are the usual constituents of matter and obey Fermi–Dirac statistics, which later found numerous other uses, from astrophysics (see Degenerate matter) to semiconductor design. As the philosophically inclined continued to debate the fundamental nature of the universe, quantum theories continued to be produced, beginning with Paul Dirac's formulation of a relativistic quantum theory in 1928. However, attempts to quantize electromagnetic theory entirely were stymied throughout the 1930s by theoretical formulations yielding infinite energies. This situation was not considered adequately resolved until after World War II ended, when Julian Schwinger, Richard Feynman and Sin-Itiro Tomonaga independently posited the technique of renormalization, which allowed the establishment of a robust quantum electrodynamics (QED). 
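The two quantum statistics differ only by a sign in the denominator of the mean occupation number, yet that sign separates condensing bosons from exclusion-obeying fermions. A minimal sketch; energies are measured in units of kT relative to the chemical potential, purely for illustration:

```python
import math

def bose_einstein(energy, mu, kT):
    """Mean occupation of a boson mode, 1/(exp((E-mu)/kT) - 1):
    grows without bound as E approaches mu (condensation)."""
    return 1.0 / math.expm1((energy - mu) / kT)

def fermi_dirac(energy, mu, kT):
    """Mean occupation of a fermion state, 1/(exp((E-mu)/kT) + 1):
    never exceeds 1, reflecting the Pauli exclusion principle."""
    return 1.0 / (math.exp((energy - mu) / kT) + 1.0)

# Fermion occupancy stays below 1 even far below the chemical potential:
print(fermi_dirac(-5.0, 0.0, 1.0) < 1.0)  # True
# A state exactly at the Fermi level is half filled:
print(fermi_dirac(0.0, 0.0, 1.0))         # 0.5
```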
Meanwhile, new theories of fundamental particles proliferated with the rise of the idea of the quantization of fields through "exchange forces" regulated by an exchange of short-lived "virtual" particles, which were allowed to exist according to the laws governing the uncertainties inherent in the quantum world. Notably, Hideki Yukawa proposed that the positive charges of the nucleus were kept together courtesy of a powerful but short-range force mediated by a particle with a mass between that of the electron and proton. This particle, the "pion", was identified in 1947 as part of what became a slew of particles discovered after World War II. Initially, such particles were found as ionizing radiation left by cosmic rays, but increasingly came to be produced in newer and more powerful particle accelerators. Significant advances were also made outside particle physics during this period. Einstein believed that all fundamental interactions in nature could be explained in a single theory. Unified field theories were numerous attempts to "merge" several interactions. One formulation of such theories (as well as of field theories in general) is a "gauge theory", a generalization of the idea of symmetry. Eventually the Standard Model (see below) succeeded in unifying the strong, weak, and electromagnetic interactions, while all attempts to unify gravitation with the other interactions failed. The interaction of these particles by scattering and decay provided a key to new fundamental quantum theories. Murray Gell-Mann and Yuval Ne'eman brought some order to these new particles by classifying them according to certain qualities, beginning with what Gell-Mann referred to as the "Eightfold Way". 
While its further development, the quark model, at first seemed inadequate to describe strong nuclear forces, allowing the temporary rise of competing theories such as the S-matrix, the establishment of quantum chromodynamics in the 1970s finalized a set of fundamental and exchange particles. This allowed the establishment of a "standard model" based on the mathematics of gauge invariance, which successfully described all forces except for gravitation, and which remains generally accepted within its domain of application. The Standard Model groups the electroweak interaction theory and quantum chromodynamics into a structure denoted by the gauge group SU(3)×SU(2)×U(1). The formulation of the unification of the electromagnetic and weak interactions in the standard model is due to Abdus Salam, Steven Weinberg and, subsequently, Sheldon Glashow. Electroweak theory was later confirmed experimentally (by observation of neutral weak currents), and recognized with the 1979 Nobel Prize in Physics. Since the 1970s, fundamental particle physics has provided insights into early universe cosmology, particularly the Big Bang theory proposed as a consequence of Einstein's general theory of relativity. However, starting in the 1990s, astronomical observations have also provided new challenges, such as the need for new explanations of galactic stability ("dark matter") and the apparent acceleration in the expansion of the universe ("dark energy"). While accelerators have confirmed most aspects of the Standard Model by detecting expected particle interactions at various collision energies, no theory reconciling general relativity with the Standard Model has yet been found, although supersymmetry and string theory were believed by many theorists to be a promising avenue forward. The Large Hadron Collider, however, which began operating in 2008, has found no evidence supporting supersymmetry or string theory. 
Cosmology may be said to have become a serious research question with the publication of Einstein's general theory of relativity in 1915, although it did not enter the scientific mainstream until the period known as the "Golden age of general relativity". About a decade later, in the midst of what was dubbed the "Great Debate", Hubble and Slipher discovered the expansion of the universe in the 1920s by measuring the redshifts of Doppler spectra from galactic nebulae. Using Einstein's general relativity, Lemaître and Gamow formulated what would become known as the big bang theory. A rival, called the steady state theory, was devised by Hoyle, Gold, Narlikar and Bondi. Cosmic background radiation was verified in the 1960s by Penzias and Wilson, and this discovery favoured the big bang at the expense of the steady state scenario. Later work by Smoot et al. (1989), among other contributors, using data from the Cosmic Background Explorer (COBE) and the Wilkinson Microwave Anisotropy Probe (WMAP) satellites, refined these observations. The 1980s (the same decade as the COBE measurements) also saw the proposal of inflation theory by Guth. Recently the problems of dark matter and dark energy have risen to the top of the cosmology agenda. On July 4, 2012, physicists working at CERN's Large Hadron Collider announced that they had discovered a new subatomic particle greatly resembling the Higgs boson, a potential key to an understanding of why elementary particles have mass and indeed to the existence of diversity and life in the universe. For now, some physicists are calling it a "Higgslike" particle. Joe Incandela, of the University of California, Santa Barbara, said, "It's something that may, in the end, be one of the biggest observations of any new phenomena in our field in the last 30 or 40 years, going way back to the discovery of quarks, for example." 
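The expansion Hubble and Slipher measured is summarized by the distance–velocity relation v = H0·d, with redshift z ≈ v/c at low velocities. A sketch; the H0 value of 70 km/s/Mpc is a commonly quoted modern figure used here only for illustration:

```python
H0 = 70.0  # Hubble constant, km/s per megaparsec (illustrative value)
C_KM_S = 299_792.458  # speed of light, km/s

def recession_velocity(distance_mpc):
    """Hubble's law: recession velocity v = H0 * d, in km/s."""
    return H0 * distance_mpc

def redshift(distance_mpc):
    """Low-velocity approximation z ~ v/c for the observed redshift."""
    return recession_velocity(distance_mpc) / C_KM_S

# A galaxy 100 Mpc away recedes at about 7000 km/s, a redshift of ~0.023:
print(recession_velocity(100.0))  # 7000.0
print(round(redshift(100.0), 4))  # 0.0233
```

The farther the nebula, the faster it recedes, which is exactly the proportionality that pointed toward an expanding universe.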
Michael Turner, a cosmologist at the University of Chicago and the chairman of the physics center board, also commented on the discovery. Peter Higgs was one of six physicists, working in three independent groups, who, in 1964, invented the notion of the Higgs field ("cosmic molasses"). The others were Tom Kibble of Imperial College, London; Carl Hagen of the University of Rochester; Gerald Guralnik of Brown University; and François Englert and Robert Brout, both of Université libre de Bruxelles. Although they have never been seen, Higgslike fields play an important role in theories of the universe and in string theory. Under certain conditions, according to the strange accounting of Einsteinian physics, they can become suffused with energy that exerts an antigravitational force. Such fields have been proposed as the source of an enormous burst of expansion, known as inflation, early in the universe and, possibly, as the secret of the dark energy that now seems to be speeding up the expansion of the universe. With increased accessibility to and elaboration upon advanced analytical techniques in the 19th century, physics was defined as much, if not more, by those techniques than by the search for universal principles of motion and energy, and the fundamental nature of matter. Fields such as acoustics, geophysics, astrophysics, aerodynamics, plasma physics, low-temperature physics, and solid-state physics joined optics, fluid dynamics, electromagnetism, and mechanics as areas of physical research. In the 20th century, physics also became closely allied with such fields as electrical, aerospace and materials engineering, and physicists began to work in government and industrial laboratories as much as in academic settings. Following World War II, the population of physicists increased dramatically, and came to be centered on the United States, while, in more recent decades, physics has become a more international pursuit than at any time in its previous history.
Hydrofoil A hydrofoil is a lifting surface, or foil, that operates in water. They are similar in appearance and purpose to aerofoils used by aeroplanes. Boats that use hydrofoil technology are also simply termed hydrofoils. As a hydrofoil craft gains speed, the hydrofoils lift the boat's hull out of the water, decreasing drag and allowing greater speeds. The hydrofoil usually consists of a wing-like structure mounted on struts below the hull, or across the keels of a catamaran in a variety of boats (see illustration). As a hydrofoil-equipped watercraft increases in speed, the hydrofoil elements below the hull(s) develop enough lift to raise the hull out of the water, which greatly reduces hull drag. This provides a corresponding increase in speed and fuel efficiency. Wider adoption of hydrofoils is prevented by the increased complexity of building and maintaining them. Hydrofoils are generally prohibitively more expensive than conventional watercraft above a certain displacement, so most hydrofoil craft are relatively small, and are mainly used as high-speed passenger ferries, where the relatively high passenger fees can offset the high cost of the craft itself. However, the design is simple enough that there are many human-powered hydrofoil designs. Amateur experimentation and development of the concept is popular. Since air and water are governed by similar fluid equations—albeit with different levels of viscosity, density, and compressibility—the hydrofoil and airfoil (both types of foil) create lift in identical ways. The foil shape moves smoothly through the water, deflecting the flow downward, which, following the Euler equations, exerts an upward force on the foil. This turning of the water creates higher pressure on the bottom of the foil and reduced pressure on the top. 
This pressure difference is accompanied by a velocity difference, via Bernoulli's principle, so the resulting flow field about the foil has a higher average velocity on one side than the other. When used as a lifting element on a hydrofoil boat, this upward force lifts the body of the vessel, decreasing drag and increasing speed. The lifting force eventually balances the weight of the craft, reaching an equilibrium at which the craft rises no further out of the water. Since wave resistance and other impeding forces, such as various types of drag on the hull, are eliminated as the hull lifts clear, turbulence and drag act increasingly on the much smaller surface area of the hydrofoil, and decreasingly on the hull, creating a marked increase in speed. Early hydrofoils used V-shaped foils. Hydrofoils of this type are known as "surface-piercing" since portions of the V-shaped hydrofoils rise above the water surface when foilborne. Some modern hydrofoils use fully submerged inverted T-shaped foils. Fully submerged hydrofoils are less subject to the effects of wave action, and are therefore more stable at sea and more comfortable for crew and passengers. This type of configuration, however, is not self-stabilizing. The angle of attack on the hydrofoils must be adjusted continuously to changing conditions, a control process performed by sensors, a computer, and active surfaces. The first evidence of a hydrofoil on a vessel appears in a British patent granted in 1869 to Emmanuel Denis Farcot, a Parisian. He claimed that "adapting to the sides and bottom of the vessel a series of inclined planes or wedge formed pieces, which as the vessel is driven forward will have the effect of lifting it in the water and reducing the draught." Italian inventor Enrico Forlanini began work on hydrofoils in 1898 and used a "ladder" foil system. Forlanini obtained patents in Britain and the United States for his ideas and designs.
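The takeoff balance described above — lift growing with the square of speed until it equals the craft's weight — can be sketched numerically. This is a minimal illustration; the water density, craft mass, foil area, and lift coefficient below are assumed values, not figures from this article:

```python
# Illustrative sketch of hydrofoil takeoff: lift L = 1/2 * rho * v^2 * C_L * A
# grows with speed squared until it balances the craft's weight.
# All numeric inputs are assumptions chosen for illustration.
RHO_WATER = 1000.0  # kg/m^3, fresh water

def lift_force(speed, foil_area, lift_coeff, rho=RHO_WATER):
    """Hydrodynamic lift: L = 1/2 * rho * v^2 * C_L * A."""
    return 0.5 * rho * speed ** 2 * lift_coeff * foil_area

def takeoff_speed(mass, foil_area, lift_coeff, g=9.81, rho=RHO_WATER):
    """Speed at which lift equals the craft's weight (hull fully clear)."""
    return (2 * mass * g / (rho * lift_coeff * foil_area)) ** 0.5

# A hypothetical 2-tonne craft with 1.5 m^2 of foil area and C_L = 0.4:
v = takeoff_speed(mass=2000, foil_area=1.5, lift_coeff=0.4)
print(f"takeoff speed ≈ {v:.1f} m/s ({v * 1.944:.0f} knots)")
```

Because lift scales with v², a foil area far smaller than the hull's wetted area suffices once the craft is moving quickly, which is why drag drops so sharply when foilborne.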
Between 1899 and 1901, British boat designer John Thornycroft worked on a series of models with a stepped hull and single bow foil. In 1909 his company built the full-scale boat "Miranda III". Driven by an engine, it rode on a bowfoil and flat stern. The subsequent "Miranda IV" was credited with a speed of . A March 1906 Scientific American article by American hydrofoil pioneer William E. Meacham explained the basic principle of hydrofoils. Alexander Graham Bell considered the invention of the hydroplane (now regarded as a distinct type, but also employing lift) a very significant achievement, and after reading the article began to sketch concepts of what is now called a hydrofoil boat. With his chief engineer Casey Baldwin, Bell began hydrofoil experiments in the summer of 1908. Baldwin studied the work of the Italian inventor Enrico Forlanini and began testing models based on those designs, which led to the development of hydrofoil watercraft. During Bell's world tour of 1910–1911, Bell and Baldwin met with Forlanini in Italy, where they rode in his hydrofoil boat over Lake Maggiore. Baldwin described it as being as smooth as flying. On returning to Bell's large laboratory at his Beinn Bhreagh estate near Baddeck, Nova Scotia, they experimented with a number of designs, culminating in Bell's "HD-4". Using Renault engines, the craft achieved a top speed of , accelerated rapidly, took waves without difficulty, steered well, and showed good stability. Bell's report to the United States Navy permitted him to obtain two 260 kW (350 hp) engines. On 9 September 1919 the "HD-4" set a world marine speed record of , which stood for two decades. A full-scale replica of the "HD-4" is viewable at the Alexander Graham Bell National Historic Site museum in Baddeck, Nova Scotia. In the early 1950s an English couple built the "White Hawk", a jet-powered hydrofoil watercraft, in an attempt to beat the absolute water speed record.
However, in tests, "White Hawk" could barely top the record-breaking speed of the 1919 "HD-4". The designers had faced an engineering phenomenon that limits the top speed of even modern hydrofoils: cavitation disturbs the lift created by the foils as they move through the water at speeds above , bending the lifting foil. German engineer Hanns von Schertel worked on hydrofoils prior to and during World War II in Germany. After the war, the Russians captured Schertel's team. As Germany was not authorized to build fast boats, Schertel went to Switzerland, where he established the Supramar company. In 1952, Supramar launched the first commercial hydrofoil, the PT10 "Freccia d'Oro" (Golden Arrow), on Lake Maggiore, between Switzerland and Italy. The PT10 is of the surface-piercing type; it can carry 32 passengers and travel at . In 1968, the Bahraini-born banker Hussain Najadi acquired Supramar AG and expanded its operations into Japan, Hong Kong, Singapore, the UK, Norway and the US. General Dynamics of the United States became its licensee, and the Pentagon awarded it its first R&D naval research project in the field of supercavitation. Hitachi Shipbuilding of Osaka, Japan, was another licensee of Supramar, as were many leading ship owners and shipyards in the OECD countries. From 1952 to 1971, Supramar designed many models of hydrofoils: the PT20, PT50, PT75, PT100 and PT150. All are of the surface-piercing type, except the PT150, which combines a surface-piercing foil forward with a fully submerged foil in the aft location. Over 200 of Supramar's designs were built, most of them by Rodriquez in Sicily, Italy. During the same period the Soviet Union experimented extensively with hydrofoils, constructing hydrofoil river boats and ferries with streamlined designs during the Cold War period and into the 1980s. Such vessels include the Raketa (1957) type, followed by the larger Meteor type and the smaller Voskhod type.
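The cavitation limit mentioned above can be illustrated with the standard cavitation number, σ = (p − p_vapour)/(½ρv²): as speed rises, σ falls, and once it drops to the magnitude of the foil's minimum pressure coefficient, the suction side reaches vapour pressure and cavitates. The ambient conditions, foil depth, and pressure coefficient below are illustrative assumptions, not data from this article:

```python
# Rough sketch of the cavitation speed limit: the low-pressure side of a foil
# eventually drops below water's vapour pressure. All inputs are assumptions.
P_ATM = 101325.0   # Pa, atmospheric pressure at the surface
P_VAPOUR = 2300.0  # Pa, vapour pressure of water near 20 °C
RHO = 1025.0       # kg/m^3, seawater

def cavitation_number(speed, depth=0.5):
    """sigma = (p_local - p_vapour) / (1/2 * rho * v^2) at the foil."""
    p_local = P_ATM + RHO * 9.81 * depth
    return (p_local - P_VAPOUR) / (0.5 * RHO * speed ** 2)

def inception_speed(cp_min=-1.0, depth=0.5):
    """Speed at which sigma = -Cp_min, i.e. the suction peak hits vapour pressure."""
    p_local = P_ATM + RHO * 9.81 * depth
    return ((p_local - P_VAPOUR) / (0.5 * RHO * (-cp_min))) ** 0.5

v = inception_speed(cp_min=-1.0)
print(f"cavitation onset ≈ {v:.1f} m/s ≈ {v * 1.944:.0f} knots")
```

The key point is that the onset speed depends only weakly on depth but strongly on how aggressively the foil is loaded (more negative Cp_min means earlier cavitation), which is why heavily loaded foils hit the limit first.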
One of the most successful Soviet designers and inventors in this area was Rostislav Alexeyev, whom some consider the 'father' of the modern hydrofoil due to his 1950s-era high-speed hydrofoil designs. Later, in the 1970s, Alexeyev combined his hydrofoil experience with the surface-effect principle to create the Ekranoplan. In 1961, SRI International issued a study on "The Economic Feasibility of Passenger Hydrofoil Craft in US Domestic and Foreign Commerce". Commercial use of hydrofoils in the US first appeared in 1961, when two commuter vessels were commissioned by Harry Gale Nye, Jr.'s North American Hydrofoils to service the route from Atlantic Highlands, New Jersey to the financial district of Lower Manhattan. A 17-ton German craft, the "VS-6 Hydrofoil", was designed and constructed in 1940 and completed in 1941 for use as a minelayer; it was tested in the Baltic Sea, producing speeds of 47 knots. Tested against a standard E-boat over the next three years, it performed well but was not brought into production. Being faster, it could carry a higher payload and was capable of travelling over minefields, but it was prone to damage and noisier. In Canada during World War II, Baldwin worked on an experimental smoke-laying hydrofoil (later called the Comox Torpedo) that was later superseded by other smoke-laying technology, and an experimental target-towing hydrofoil. The forward two foil assemblies of what is believed to be the latter hydrofoil were salvaged in the mid-1960s from a derelict hulk in Baddeck, Nova Scotia by Colin MacGregor Stevens. These were donated to the Maritime Museum in Halifax, Nova Scotia. The Canadian Armed Forces built and tested a number of hydrofoils (e.g., Baddeck and two vessels named "Bras d'Or"), which culminated in the high-speed anti-submarine hydrofoil HMCS "Bras d'Or" in the late 1960s. However, the program was cancelled in the early 1970s due to a shift away from anti-submarine warfare by the Canadian military.
The "Bras d'Or" was a surface-piercing type that performed well during her trials, reaching a maximum speed of . The USSR introduced several hydrofoil-based fast attack craft into their navy, principally: The US Navy began experiments with hydrofoils in the mid-1950s by funding a sailing vessel that used hydrofoils to reach speeds in the 30 mph range. The "XCH-4" (officially, "Experimental Craft, Hydrofoil No. 4"), designed by William P. Carl, exceeded speeds of and was mistaken for a seaplane due to its shape. The US Navy implemented a small number of combat hydrofoils, such as the "Pegasus" class, from 1977 through 1993. These hydrofoils were fast and well armed. The Italian Navy has used six hydrofoils of the "Sparviero" class since the late 1970s. These were armed with a 76 mm gun and two missiles, and were capable of speeds up to . Three similar boats were built for the Japan Maritime Self-Defense Force. The French experimental sail-powered hydrofoil "Hydroptère" is the result of a research project that involves advanced engineering skills and technologies. In September 2009, the "Hydroptère" set new sailcraft world speed records in the 500 m category, with a speed of , and in the category, with a speed of . Another trimaran sailboat is the Windrider Rave. The Rave is a commercially available two-person hydrofoil trimaran, capable of reaching speeds of . The boat was designed by Jim Brown. The Moth dinghy has evolved into some radical foil configurations. Hobie Sailboats produced a production foiling trimaran, the Hobie Trifoiler, the fastest production sailboat. Trifoilers have clocked speeds upward of thirty knots. A new kayak design, called the Flyak, has hydrofoils that lift the kayak enough to significantly reduce drag, allowing speeds of up to . Some surfers have developed surfboards with hydrofoils, called foilboards, specifically aimed at surfing big waves further out to sea.
The Quadrofoil Q2 is a two-seat, four-foil electric leisure hydrofoil watercraft. Its initial design dates to 2012, and it has been available commercially since the end of 2016. Powered by a 5.2-kWh lithium-ion battery pack and propelled by a 5.5 kW motor, it reaches a top speed of 40 km/h and has a range of 80 km. The Manta5 Hydrofoiler XE-1 is a hydrofoil e-bike, designed and built in New Zealand, that has been available commercially for pre-order since late 2017. Propelled by a 400-watt motor, it can reach speeds exceeding 14 km/h and weighs 22 kg. A single charge of the battery lasts an hour for a rider weighing 85 kg. Soviet-built Voskhods are among the most successful passenger hydrofoil designs. Manufactured in Russia and Ukraine, they are in service in more than 20 countries. The most recent model, the Voskhod-2M FFF, also known as the Eurofoil, was built in Feodosiya for the Dutch public transport operator Connexxion. The Boeing 929 is widely used in Asia for passenger services between the many islands of Japan, between Hong Kong and Macau, and on the Korean peninsula. Current operators of hydrofoils include: See also the history of Condor Ferries, which operated six hydrofoil ferries over a 29-year period between the Channel Islands, the south coast of England and Saint-Malo. Hydrofoils had their peak in popularity in the 1960s and 70s. Since then there has been a steady decline in their use and popularity for leisure, military and commercial passenger transport. There are a number of reasons for this:
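As a back-of-envelope check on the Quadrofoil figures quoted above (5.2 kWh pack, 5.5 kW motor, 40 km/h top speed, 80 km range), a simple energy budget shows the quoted range cannot be achieved at full motor power; it implies an average cruise draw well below the motor's rating. The arithmetic is illustrative only:

```python
# Energy-budget sanity check on the Quadrofoil Q2 figures quoted in the text.
# The conclusion (range implies low cruise power) is arithmetic, not a spec.
BATTERY_KWH = 5.2
MOTOR_KW = 5.5
TOP_SPEED_KMH = 40.0
RANGE_KM = 80.0

# Endurance and distance at continuous full motor power:
full_power_hours = BATTERY_KWH / MOTOR_KW             # ~0.95 h
full_power_range = full_power_hours * TOP_SPEED_KMH   # ~38 km, half the quoted range

# Average power consistent with the quoted 80 km range if (hypothetically)
# covered at 40 km/h:
cruise_hours = RANGE_KM / TOP_SPEED_KMH               # 2 h
cruise_power_kw = BATTERY_KWH / cruise_hours          # 2.6 kW average draw

print(f"range at full power ≈ {full_power_range:.0f} km")
print(f"implied average cruise power ≈ {cruise_power_kw:.1f} kW")
```

The gap between the two figures is consistent with the article's main point: once foilborne, hull drag largely disappears, so cruising takes far less power than the motor's peak rating.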
https://en.wikipedia.org/wiki?curid=13761
Henri Chopin Henri Chopin (18 June 1922 – 3 January 2008) was an avant-garde poet and musician. Henri Chopin was born in Paris on 18 June 1922, one of three brothers, and the son of an accountant. Both his siblings died during the war: one was shot by a German soldier the day after an armistice was declared in Paris, the other while sabotaging a train. Chopin was a French practitioner of concrete and sound poetry, well known throughout the second half of the 20th century. His work, though iconoclastic, remained well within the historical spectrum of poetry as it moved from a spoken tradition to the printed word and now back to the spoken word again. He created a large body of pioneering recordings using early tape recorders, studio technologies and the sounds of the manipulated human voice. His emphasis on sound is a reminder that language stems as much from oral traditions as from classic literature, and of the relationship of balance between order and chaos. Chopin is significant above all for his diverse spread of creative achievement, as well as for his position as a focal point of contact for the international arts. As poet, painter, graphic artist and designer, typographer, independent publisher, filmmaker, broadcaster and arts promoter, Chopin's work is a barometer of the shifts in European media between the 1950s and the 1970s. In 1966 he was, with Gustav Metzger, Otto Muehl, Wolf Vostell, Peter Weibel and others, a participant in the Destruction in Art Symposium ("DIAS") in London. In 1964 he created "OU", one of the most notable reviews of the second half of the 20th century, and he ran it until 1974. "OU"'s contributors included William S. Burroughs, Brion Gysin, Gil J Wolman, François Dufrêne, Bernard Heidsieck, John Furnival, Tom Phillips, and the Austrian sculptor, writer and Dada pioneer Raoul Hausmann.
His books included "Le Dernier Roman du Monde" (1971), "Portrait des 9" (1975), "The Cosmographical Lobster" (1976), "Poésie Sonore Internationale" (1979), "Les Riches Heures de l'Alphabet" (1992) and "Graphpoemesmachine" (2006). Chopin also created many graphic works on his typewriter: the typewriter poems (also known as dactylopoèmes) feature in international art collections such as those of Francesco Conz in Verona, the Morra Foundation in Naples and Ruth and Marvin Sackner in Miami, and have been the subject of Australian, British and French retrospectives. His publication and design of the classic audio-visual magazines "Cinquième Saison" and "OU" between 1958 and 1974, each issue containing recordings as well as texts, images, screenprints and multiples, brought together international contemporary writers and artists such as members of Lettrisme and Fluxus, Jiri Kolar, Ian Hamilton Finlay, Tom Phillips, Brion Gysin, William S. Burroughs and many others, as well as bringing the work of survivors from earlier generations such as Raoul Hausmann and Marcel Janco to a fresh audience. From 1968 to 1986 Henri Chopin lived in Ingatestone, Essex, but after the death of his wife Jean in 1985, he moved back to France. In 2001, with his health failing, he returned to England, living with his daughter and family at Dereham, Norfolk, until his death on 3 January 2008. Chopin's "poésie sonore" aesthetics included a deliberate cultivation of a "barbarian" approach in production, using raw or crude sound manipulations to explore the area between distortion and intelligibility. He avoided high-quality, professional recording machines, preferring to use very basic equipment and "bricolage" methods, such as sticking matchsticks in the erase heads of a second-hand tape recorder, or manually interfering with the tape path.
https://en.wikipedia.org/wiki?curid=13763
Hassium Hassium is a chemical element with the symbol Hs and the atomic number 108. Hassium is highly radioactive; the most stable known isotope, 269Hs, has a half-life of approximately 16 seconds. One of its isotopes, 270Hs, has magic numbers of both protons and neutrons for deformed nuclei, which gives it greater stability against spontaneous fission. Hassium has been made only in laboratories in minuscule quantities; its possible occurrence in nature has been hypothesized but no natural hassium has been found so far. The first attempts to synthesize element 108 were made in two different experiments at the Joint Institute for Nuclear Research (JINR) in Dubna, Moscow Oblast, Russian SFSR, Soviet Union, in 1978. More attempts were made at the same venue in 1983 and then in 1984; the latter resulted in a claim that element 108 had been produced. Later in 1984, a synthesis claim followed from the Gesellschaft für Schwerionenforschung (GSI) in Darmstadt, Hesse, West Germany. The 1993 report by the Transfermium Working Group, formed by the International Union of Pure and Applied Chemistry and the International Union of Pure and Applied Physics, concluded that the report from Darmstadt was conclusive on its own, and the major credit was assigned to the German scientists. Following the recognition, GSI formally announced they wished to name the element "hassium" after the German state of Hesse, home to the facility. Two rulings from the International Union of Pure and Applied Chemistry (IUPAC) followed in 1994 and 1995 to establish permanent names for a series of elements that included element 108; both were met with rejection by competing scientists and scrapped. A third ruling, published in 1997, named the element "hassium" per the original suggestion by GSI; it was accepted by the scientific community and the name was established as final.
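The roughly 16-second half-life of 269Hs quoted above follows the usual exponential decay law, N(t)/N₀ = 2^(−t/T½). A minimal sketch, using only the half-life from the text:

```python
# Exponential decay of 269Hs, hassium's most stable known isotope,
# using the ~16 s half-life quoted in the text.
import math

HALF_LIFE_S = 16.0

def fraction_remaining(t_seconds, half_life=HALF_LIFE_S):
    """N(t)/N0 = 2^(-t / T_half) = exp(-ln(2) * t / T_half)."""
    return math.exp(-math.log(2) * t_seconds / half_life)

# After one half-life, half the nuclei remain; after a minute, under a tenth:
print(fraction_remaining(16))  # ≈ 0.5
print(fraction_remaining(60))  # ≈ 0.074
```

This is why single-atom detection techniques are essential: any synthesized hassium is effectively gone within minutes.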
In the periodic table of elements, hassium is a transactinide element, a member of the 7th period and group 8; it is thus the sixth member of the 6d series of transition metals. Chemistry experiments have confirmed that hassium behaves as the heavier homologue to osmium in group 8, reacting readily with oxygen to form a volatile tetroxide. The chemical properties of hassium have been only partly characterized, but they compare well with the chemistry of the other group 8 elements. The chemical element with the highest atomic number that exists in nature in significant quantities is uranium. The atomic number is the number of protons in an atomic nucleus. Such a number constitutes an exhaustive definition of an element, so an element can be referred to by its atomic number; for example, uranium is element 92. All elements with atomic numbers above 92 were discovered by synthesis rather than by observation in nature. The first such element, element 93, later named neptunium, was discovered in 1940 at the University of California in Berkeley, California, United States. Elements up to 101 were discovered at this university's Radiation Laboratory (RL; later named Lawrence Berkeley Laboratory, LBL, and now Lawrence Berkeley National Laboratory, LBNL). Starting with element 102, another major facility emerged that claimed discoveries of new elements: the Joint Institute for Nuclear Research (JINR) in Dubna, Moscow Oblast, Russian SFSR, Soviet Union, which first reported synthesis of a new element in 1964 (the scientific team reported their first attempt in 1956 from the Institute of Atomic Energy of the Academy of Sciences of the USSR in Moscow). Another major venue—Gesellschaft für Schwerionenforschung (GSI; "Institute for Heavy Ion Research") in Darmstadt, Hesse, West Germany—first reported synthesis of a new element (element 107) in 1981. These facilities often claimed discoveries of new elements.
Sometimes, these claims clashed; since a discoverer was considered entitled to name an element, conflicts over priority of discovery often resulted in conflicts over names of these new elements. These conflicts became known as the Transfermium Wars. Nuclear reactions used in the 1960s resulted in high excitation energies that required expulsion of four or five neutrons; these reactions used targets made of elements with high atomic numbers to maximize the size difference between the two nuclei in a reaction. While this increased the chance of fusion due to the lower electrostatic repulsion between the target and the projectile, the formed compound nuclei often broke apart and did not survive to form a new element. Moreover, fusion processes inevitably produce neutron-poor nuclei, as heavier elements require more neutrons per proton to maximize stability; therefore, the necessary ejection of neutrons results in final products that typically have shorter lifetimes. As such, light beams (6 to 10 protons) only allowed synthesis of elements up to 106. To advance to heavier elements, Soviet physicist Yuri Oganessian at JINR hypothesized a different mechanism, in which the bombarded nucleus would be lead-208, which has magic numbers of protons and neutrons, or one close to it. The magic numbers of protons and/or neutrons give the nuclide additional stability, which requires more energy for an external nucleus to penetrate. More equal atomic numbers of the reacting nuclei result in greater electrostatic repulsion between them, but the greater mass excess of the target nucleus balances it. This leaves less excitation energy for the newly created compound nucleus, which necessitates fewer neutron ejections to reach a stable state. Because of this energy difference, the former mechanism became known as "hot fusion" and the latter as "cold fusion".
Cold fusion was first declared successful in 1974 at JINR, when it was tested for synthesis of the yet undiscovered element 106. These new nuclei were projected to decay via spontaneous fission. The physicists at JINR concluded those were not seen before because no then-known fissioning nucleus showed similar parameters of fission and because changing either of the two nuclei in the reactions negated the seen effects. Physicists at LBL also expressed great interest in the new technique. When asked about how far this new method could go and whether lead targets were a physics "Klondike", Oganessian responded, "Klondike may be an exaggeration [...] But soon, we will try to get elements 107...108 in these reactions." The synthesis of element 108 was first attempted in 1978 by a research team led by Oganessian at the JINR. The team used a reaction that would generate element 108, specifically the isotope 270108, from the fusion of radium and calcium. The researchers were uncertain in interpreting their data, and their paper did not unambiguously claim to have discovered the element. The same year, another team at JINR investigated the possibility of synthesis of element 108 in reactions between lead and iron; they were uncertain in interpreting the data, suggesting the possibility that element 108 had not been created. In 1983, new experiments were performed at JINR. The experiments probably resulted in the synthesis of element 108; bismuth was bombarded with manganese to obtain 263108, lead was bombarded with iron to obtain 264108, and californium was bombarded with neon to obtain 270108. These experiments were not claimed as a discovery, and Oganessian announced them in a conference rather than in a written report. In 1984, JINR researchers in Dubna performed experiments set up identically to the previous ones; they bombarded bismuth and lead targets with ions of the lighter elements manganese and iron, respectively.
Twenty-one spontaneous fission events were recorded; these were assigned to 264108. Later in 1984, a research team led by Peter Armbruster and Gottfried Münzenberg at GSI attempted to create element 108. The team bombarded a lead target with accelerated iron nuclei. GSI's experiment to create element 108 was delayed until after their creation of element 109 in 1982, as prior calculations had suggested that even–even isotopes of element 108 would have spontaneous fission half-lives of less than one microsecond, making them difficult to detect and identify. The element 108 experiment finally went ahead after 266109 had been synthesized and was found to decay by alpha emission, suggesting that isotopes of element 108 would do likewise, and this was corroborated by an experiment aimed at synthesizing isotopes of element 106. GSI reported synthesis of three atoms of 265108. Two years later, they reported synthesis of one atom of the even–even 264108. In 1985, the International Union of Pure and Applied Chemistry (IUPAC) and the International Union of Pure and Applied Physics (IUPAP) formed the Transfermium Working Group (TWG) to assess discoveries and establish final names for elements with atomic numbers greater than 100. The working group held meetings with delegates from the three competing institutes; in 1990, they established criteria for recognition of an element, and in 1991, they finished the work of assessing discoveries and disbanded. These results were published in 1993. According to the report, the 1984 works from JINR and GSI simultaneously and independently established synthesis of element 108. Of the two 1984 works, the one from GSI was said to be sufficient as a discovery on its own.
The JINR work, which preceded the GSI one, "very probably" displayed synthesis of element 108, but that was determined in retrospect given the work from Darmstadt; it focused on chemically identifying remote granddaughters of element 108 isotopes (which could not exclude the possibility that these daughter isotopes had other progenitors), while the GSI work clearly identified the decay path of those element 108 isotopes. The report concluded that the major credit should be awarded to GSI. In written responses to this ruling, both JINR and GSI agreed with its conclusions. In the same response, GSI confirmed that they and JINR were able to resolve all conflicts between them; GSI also proposed a name for element 108 that had been officially presented at the facility three weeks earlier. Historically, a newly discovered element was named by its discoverer. The first regulation came in 1947, when IUPAC decided that naming required regulation in case there were conflicting names. These matters were to be resolved by the Commission of Inorganic Nomenclature and the Commission of Atomic Weights. They would review the names in case of a conflict and select one; the decision would be based on a number of factors, such as usage, and would not be an indicator of priority of a claim. The two commissions would recommend a name to the IUPAC Council, which would be the final authority. The discoverers held the right to name an element, but their name would be subject to approval by IUPAC. The Commission of Atomic Weights distanced itself from element naming in most cases. According to Mendeleev's nomenclature for unnamed and undiscovered elements, hassium should be known as "eka-osmium". In 1979, IUPAC published recommendations according to which the element was to be called "unniloctium" and assigned the corresponding symbol of "Uno", a systematic element name serving as a placeholder until the element was discovered, the discovery confirmed, and a permanent name decided.
Although these recommendations were widely followed in the chemical community, most scientists in the field ignored them. They either called it "element 108", with the symbols "E108", "(108)" or "108", or used the proposed name "hassium". In 1990, in an attempt to break a deadlock in establishing priority of discovery and naming of several elements, IUPAC reaffirmed in its nomenclature of inorganic chemistry that after existence of an element was established, the discoverers could propose a name. (In addition, the Commission of Atomic Weights was excluded from the naming process.) The first publication on criteria for an element discovery, released in 1991, specified the need for recognition by TWG. Armbruster and his colleagues, the officially recognized German discoverers, held a naming ceremony for the elements 107 through 109, which had all been recognized as discovered by GSI, on 7 September 1992. For element 108, the scientists proposed the name "hassium". It is derived from the Latin name "Hassia" for the German state of Hesse where the institute is located. This name was proposed to IUPAC in a written response to their ruling on priority of discovery claims of elements, signed 29 September 1992. In 1994, IUPAC Commission on Nomenclature of Inorganic Chemistry recommended that element 108 be named "hahnium" (Hn) after the German physicist Otto Hahn so elements named after Hahn and Lise Meitner (it was recommended element 109 should be named meitnerium, following GSI's suggestion) would be next to each other, honouring their joint discovery of nuclear fission; IUPAC commented that they felt the German suggestion was obscure. GSI protested, saying this proposal contradicted the long-standing convention of giving the discoverer the right to suggest a name; the American Chemical Society supported GSI. 
The name "hahnium", albeit with the different symbol Ha, had already been proposed and used by the American scientists for element 105, for which they had a discovery dispute with JINR; they thus protested the confusing scrambling of names. Following the uproar, IUPAC formed an ad hoc committee of representatives from the national adhering organizations of the three countries home to the competing institutions; they produced a new set of names in 1995. Element 108 was again named "hahnium"; this proposal was also retracted. The final compromise was reached in 1996 and published in 1997; element 108 was named "hassium" (Hs). Simultaneously, the name "dubnium" (Db; from Dubna, the JINR location) was assigned to element 105, and the name "hahnium" was not used for any element. The official justification for this naming, alongside that of darmstadtium for element 110, was that it completed a set of geographic names for the location of the GSI; this set had been initiated by 19th-century names europium and germanium. This set would serve as a response to earlier naming of americium, californium, and berkelium for elements discovered in Berkeley. Armbruster commented on this, "this bad tradition was established by Berkeley. We wanted to do it for Europe." Later, when commenting on the naming of element 112, Armbruster said, "I did everything to ensure that we do not continue with German scientists and German towns." Hassium has no stable or naturally occurring isotopes. Several radioactive isotopes have been synthesized in the laboratory, either by fusing two atoms or by observing the decay of heavier elements. Twelve isotopes with mass numbers ranging from 263 to 277 (with the exceptions of 272, 274, and 276) have been reported, four of which—hassium-265, -267, -269, and -277—have known metastable states, although that of hassium-277 is unconfirmed. 
Most of these isotopes decay predominantly through alpha decay; this is the most common decay mode among all isotopes for which comprehensive decay characteristics are available, the only exception being hassium-277, which undergoes spontaneous fission. The lightest isotopes, which usually have shorter half-lives, were synthesized by direct fusion between two lighter nuclei and as decay products. The heaviest isotope produced by direct fusion is 271Hs; heavier isotopes have only been observed as decay products of elements with larger atomic numbers. Superheavy nuclei are deformed, as opposed to regular spherical nuclei. Until the 1960s, it was thought that deformation made these nuclei incapable of forming a nuclear structure and thus very unstable. The earlier liquid drop model thus suggested that spontaneous fission would occur nearly instantly due to the disappearance of the fission barrier for nuclei with about 280 nucleons. The later nuclear shell model suggested that nuclei with about 300 nucleons would form an island of stability in which nuclei would be more resistant to spontaneous fission and would primarily undergo alpha decay with longer half-lives. Subsequent discoveries suggested that the predicted island might be further than originally anticipated; they also showed that nuclei intermediate between the long-lived actinides and the predicted island are deformed, and gain additional stability from shell effects. The added stability should be particularly great against spontaneous fission, although the increase in stability against alpha decay would also be pronounced. The center of the region on a chart of nuclides that would correspond to this stability for deformed nuclei was determined as 270Hs, with 108 being a magic number for protons in deformed nuclei and 162 being a magic neutron number.
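The nucleon bookkeeping behind the deformed doubly magic assignment of 270Hs (Z = 108, N = 162), and the alpha-decay chain arithmetic used to trace hassium isotopes to known daughters, can be sketched as follows:

```python
# Bookkeeping sketch: the deformed magic numbers quoted in the text
# (Z = 108 for protons, N = 162 for neutrons) sum to the mass number 270,
# hence the deformed doubly magic nuclide 270Hs.
MAGIC_Z_DEFORMED = 108
MAGIC_N_DEFORMED = 162
assert MAGIC_Z_DEFORMED + MAGIC_N_DEFORMED == 270

def alpha_chain(z, a, steps):
    """Each alpha decay emits a 4He nucleus: Z -> Z - 2, A -> A - 4."""
    chain = [(z, a)]
    for _ in range(steps):
        z, a = z - 2, a - 4
        chain.append((z, a))
    return chain

# 270Hs alpha-decays toward seaborgium (Z=106), rutherfordium (Z=104), ...
print(alpha_chain(108, 270, 3))
# [(108, 270), (106, 266), (104, 262), (102, 258)]
```

This (Z, A) bookkeeping is how decay-product identification works in practice: observing a known daughter lower in the chain pins down the parent's proton and mass numbers.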
Experiments on lighter superheavy nuclei, as well as those closer to the expected island, have shown greater than previously anticipated stability against spontaneous fission, showing the importance of shell effects on nuclei. Theoretical models predict a region of instability for some hassium isotopes to lie around "A" = 275 and "N" = 168–170, which is between the predicted neutron shell closures at "N" = 162 for deformed nuclei and "N" = 184 for spherical nuclei. Nuclides within this region are predicted to have low fission barrier heights, resulting in short partial half-lives toward spontaneous fission. This prediction is supported by the observed 11 millisecond half-life of 277Hs and that of the neighbouring isobar 277Mt, because the hindrance factors from the odd nucleon were shown to be much lower than expected. The measured half-lives are even lower than those predicted for the even–even 276Hs and 278Ds, which suggests a gap in stability away from the shell closures and perhaps a weakening of the shell closures in this region. In 1991, Polish physicists Zygmunt Patyk and Adam Sobiczewski predicted that 108 is a proton magic number for deformed nuclei—nuclei that are far from spherical—and 162 is a neutron magic number for deformed nuclei. This means such nuclei are permanently deformed in their ground state but have high, narrow fission barriers to further deformation and hence relatively long lifetimes toward spontaneous fission. Computational prospects for shell stabilization for 270Hs made it a promising candidate for a deformed doubly magic nucleus. Experimental data is scarce, but the existing data is interpreted by the researchers to support the assignment of "N" = 162 as a magic number. In particular, this conclusion was drawn from the decay data of 269Hs, 270Hs, and 271Hs.
Hassium is not known to occur naturally on Earth; the half-lives of all of its known isotopes are short enough that no primordial hassium would have survived to the present day. This does not rule out the possibility of the existence of unknown, longer-lived isotopes or nuclear isomers, some of which could still exist in trace quantities if they are long-lived enough. As early as 1914, German physicist Richard Swinne proposed element 108 as a source of X-rays in the Greenland ice sheet. Although Swinne was unable to verify this observation and thus did not claim discovery, he proposed in 1931 the existence of regions of long-lived transuranic elements, including one around "Z" = 108. In 1963, Soviet geologist and physicist Viktor Cherdyntsev, who had previously claimed the existence of primordial curium-247, claimed to have discovered element 108—specifically the 267108 isotope, which supposedly had a half-life of 400 to 500 million years—in natural molybdenite and suggested the provisional name "sergenium" (symbol Sg); this name takes its origin from a name for the Silk Road and was explained as "coming from Kazakhstan". His rationale for claiming that sergenium was the heavier homologue to osmium was that minerals supposedly containing sergenium formed volatile oxides when boiled in nitric acid, similarly to osmium. Cherdyntsev's findings were criticized by Soviet physicist Vladimir Kulakov on the grounds that some of the properties Cherdyntsev claimed sergenium had were inconsistent with the then-current nuclear physics. The chief questions raised by Kulakov were that the claimed alpha decay energy of sergenium was many orders of magnitude lower than expected and the half-life given was eight orders of magnitude shorter than what would be predicted for a nuclide alpha-decaying with the claimed decay energy. 
At the same time, a corrected half-life in the region of 1016 years would be impossible because it would imply the samples contained about 100 milligrams of sergenium. In 2003, it was suggested that the observed alpha decay with energy 4.5 MeV could be due to a low-energy and strongly enhanced transition between different hyperdeformed states of a hassium isotope around 271Hs, thus suggesting that the existence of superheavy elements in nature was at least possible, although unlikely. In 2006, Russian geologist Alexei Ivanov hypothesized that an isomer of 271Hs might have a half-life of around years, which would explain the observation of alpha particles with energies of around 4.4 MeV in some samples of molybdenite and osmiridium. This isomer of 271Hs could be produced from the beta decay of 271Bh and 271Sg, which, being homologous to rhenium and molybdenum respectively, should occur in molybdenite along with rhenium and molybdenum if they occurred in nature. Because hassium is homologous to osmium, it should occur along with osmium in osmiridium if it occurs in nature. The decay chains of 271Bh and 271Sg are hypothetical and the predicted half-life of this hypothetical hassium isomer is not long enough for any sufficient quantity to remain on Earth. It is possible that more 271Hs may be deposited on the Earth as the Solar System travels through the spiral arms of the Milky Way; this would explain excesses of plutonium-239 found on the ocean floors of the Pacific Ocean and the Gulf of Finland. However, minerals enriched with 271Hs are predicted to have excesses of its daughters uranium-235 and lead-207; they would also have different proportions of elements that are formed during spontaneous fission, such as krypton, zirconium, and xenon. The natural occurrence of hassium in minerals such as molybdenite and osmiridium is theoretically possible, but very unlikely. 
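Kulakov's consistency argument can be illustrated with standard alpha-decay systematics. The sketch below uses the Viola–Seaborg relation, log10(T1/2/s) = (aZ + b)/√Q + cZ + d, with one commonly cited parameter set; both the choice of relation and the parameters are my assumptions, since the article does not say which systematics Kulakov applied. It shows that a ~4.5 MeV alpha decay from element 108 would imply a half-life enormously longer than the 400–500 million years Cherdyntsev claimed:

```python
import math

# Viola-Seaborg parameters (Sobiczewski et al. fit for even-even nuclei);
# an assumed, illustrative parameter set, not taken from the article.
A, B, C, D = 1.66175, -8.5166, -0.20228, -33.9069

def log10_half_life_s(z: int, q_mev: float) -> float:
    """Estimated log10 of the alpha-decay half-life (in seconds)."""
    return (A * z + B) / math.sqrt(q_mev) + C * z + D

claimed = math.log10(450e6 * 3.156e7)   # claimed ~450 Myr half-life, in seconds
implied = log10_half_life_s(108, 4.5)   # implied by the ~4.5 MeV decay energy
print(f"claimed: 10^{claimed:.1f} s, implied by Q = 4.5 MeV: 10^{implied:.1f} s")
```

The gap of roughly eight to nine orders of magnitude matches the inconsistency Kulakov pointed out between the claimed decay energy and half-life.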
In 2007, the Joint Institute for Nuclear Research started a search for natural hassium; this was done underground to avoid interference and false positives from cosmic rays. In 2010, Oganessian reaffirmed that the search was still underway. No results have been released. Atomic nuclei show additional stability if they have specific numbers of protons or neutrons called magic numbers. The nuclear shell model describes this as a consequence of closed "shells" of protons and neutrons, whose closures create especially stable configurations. The highest known magic numbers are 82 for protons and 126 for neutrons. This notion is sometimes expanded to include additional numbers between those magic numbers, which also provide some additional stability and indicate closure of "sub-shells". There are various predictions for higher magic numbers; the next doubly magic nucleus (having magic numbers of both protons and neutrons) is expected to lie in the center of the "island of stability", which is theorized to contain longer-lived superheavy nuclides in the vicinity of "Z" = 110–114 and the predicted magic neutron number "N" = 184. In 1997, Polish physicist Robert Smolańczuk calculated that the isotope 292Hs may be the most stable superheavy nucleus against alpha decay and spontaneous fission as a consequence of the predicted "N" = 184 shell closure. As such, it was considered as a candidate to exist in nature. This nucleus, however, is predicted to be very unstable toward beta decay, and any beta-stable isotopes of hassium such as 286Hs would be too unstable in the other decay channels to be observed in nature. Indeed, a 2012 search for 292Hs in nature along with its homologue osmium was unsuccessful, setting an upper limit to its abundance at of hassium per gram of osmium. Various calculations suggest that hassium should be the heaviest group 8 element so far, consistent with the periodic law. 
Its properties should generally match those expected for a heavier homologue of osmium; as is the case for all 6d metals, a few deviations are expected to arise from relativistic effects. Very few properties of hassium or its compounds have been measured; this is due to its extremely limited and expensive production and the fact that hassium (and its parents) decays very quickly. A few singular chemistry-related properties have been measured, such as the enthalpy of adsorption of hassium tetroxide, but properties of hassium metal remain unknown and only predictions are available. Relativistic effects on hassium should arise due to the high charge of its nucleus, which causes the electrons around the nucleus to move faster—so fast that their velocity becomes comparable to the speed of light. There are three main effects: the direct relativistic effect, the indirect relativistic effect, and the spin–orbit splitting. (The existing calculations do not account for Breit interactions, but those are negligible, and their omission can only result in an uncertainty in the current calculations of no more than 2%.) As atomic number increases, so does the electrostatic attraction between an electron and the nucleus. This causes the velocity of the electron to increase, which leads to an increase in its mass. This increase in mass in turn leads to contraction of the atomic orbitals, most notably the s and p1/2 orbitals. Their electrons become more closely attached to the atom and harder to pull away from the nucleus. This is the direct relativistic effect. Since the s and p1/2 orbitals are closer to the nucleus, they take a bigger portion of the electric charge of the nucleus on themselves ("shield" it). This leaves less charge for attraction of the remaining electrons, whose orbitals therefore expand, making them easier to pull away from the nucleus. This is the indirect relativistic effect. 
The combination of the two effects means that the Hs+ ion, compared to the neutral atom, lacks a 6d electron rather than a 7s electron. In comparison, Os+ lacks a 6s electron compared to the neutral atom. The ionic radius (in oxidation state +8) of hassium is greater than that of osmium because of the relativistic expansion of the 6p3/2 orbitals, which are the outermost orbitals for an Hs8+ ion. There are several kinds of electronic orbitals, denoted by the letters s, p, d, and f (g orbitals are expected to start being chemically active among elements after element 120). Each of these corresponds to an azimuthal quantum number "l": s to 0, p to 1, d to 2, and f to 3. Every electron also corresponds to a spin quantum number "s", which may equal either +1/2 or −1/2. Thus, the total angular momentum quantum number "j" = "l" + "s" is equal to "j" = "l" ± 1/2 (except for "l" = 0, for which "j" = 0 + 1/2 = 1/2 for both electrons in each orbital). The spin of an electron interacts with its orbit, which splits a subshell into two with different energies (the one with "j" = "l" − 1/2 is lower in energy, and its electrons are thus more difficult to extract): for instance, of the six 6p electrons, two become 6p1/2 and four become 6p3/2. This is the spin–orbit splitting (sometimes also referred to as subshell splitting or "jj" coupling). It is most visible with p electrons, which do not play an important role in the chemistry of hassium, but the splittings for d and f electrons are within the same order of magnitude (quantitatively, spin–orbit splitting is expressed in energy units, such as electronvolts). These relativistic effects are responsible for the expected increase of the ionization energy, decrease of the electron affinity, and increase of stability of the +8 oxidation state compared to osmium; without them, the trends would be reversed. 
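The j = l ± 1/2 bookkeeping just described can be made concrete with a minimal sketch (standard quantum-number arithmetic, not taken from the source): each split subshell holds 2j + 1 electrons.

```python
from fractions import Fraction

def split(l: int) -> dict:
    """Spin-orbit split of an l-subshell into j = l - 1/2 and j = l + 1/2
    levels, each holding 2j + 1 electrons (only j = 1/2 exists for l = 0)."""
    js = [Fraction(1, 2)] if l == 0 else [l - Fraction(1, 2), l + Fraction(1, 2)]
    return {j: int(2 * j + 1) for j in js}

# The six p electrons split as two j = 1/2 and four j = 3/2 electrons,
# matching the 6p1/2 / 6p3/2 example in the text.
for l, name in ((1, "p"), (2, "d"), (3, "f")):
    print(name, {str(j): cnt for j, cnt in split(l).items()})
```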
Relativistic effects decrease the atomization energies of the compounds of hassium because the spin–orbit splitting of the d orbital lowers binding energy between electrons and the nucleus and because relativistic effects decrease ionic character in bonding. The previous members of group 8 have relatively high melting points: Fe, 1538 °C; Ru, 2334 °C; Os, 3033 °C. Much like them, hassium is predicted to be a solid at room temperature, although its melting point has not been precisely calculated. Hassium should crystallize in the hexagonal close-packed structure ("c"/"a" = 1.59), similarly to its lighter congener osmium. Pure metallic hassium is calculated to have a bulk modulus (resistance to uniform compression) of 450 GPa, comparable with that of diamond, 442 GPa. Hassium is expected to have a bulk density of 41 g/cm3 at standard pressure and temperature, the highest of any of the 118 known elements and nearly twice the highest density observed in any element to date, 22.6 g/cm3. The atomic radius of hassium is expected to be around 126 pm. Due to the relativistic stabilization of the 7s orbital and destabilization of the 6d orbital, the Hs+ ion is predicted to have an electron configuration of [Rn] 5f14 6d5 7s2, giving up a 6d electron instead of a 7s electron, which is the opposite of the behaviour of its lighter homologues. The Hs2+ ion is expected to have an electron configuration of [Rn] 5f14 6d5 7s1, analogous to that calculated for the Os2+ ion. In chemical compounds, hassium is calculated to display bonding characteristic of a d-block element, carried out primarily by the 6d3/2 and 6d5/2 orbitals; compared to the elements from the previous periods, the 7s, 6p1/2, 6p3/2, and 7p1/2 orbitals should be more important. Hassium is the sixth member of the 6d series of transition metals and is expected to be much like the platinum group metals. Some of these properties were confirmed by gas-phase chemistry experiments. 
The group 8 elements exhibit a wide variety of oxidation states, but ruthenium and osmium readily display their group oxidation state of +8; this state becomes more stable down the group. This oxidation state is extremely rare: among stable elements, only ruthenium, osmium, and xenon are able to attain it in reasonably stable compounds. Hassium is expected to follow its congeners and have a stable +8 state, but like them it should show lower stable oxidation states such as +6, +4, +3, and +2. Hassium(IV) is expected to be more stable than hassium(VIII) in aqueous solution. Hassium should be a rather noble metal; the standard reduction potential for the Hs4+/Hs couple is expected to be 0.4 V, which is more than that for the Cu2+/Cu couple of copper (0.3419 V), but less than that for the Ru2+/Ru couple of ruthenium (0.455 V). The group 8 elements show a distinctive oxide chemistry. All of the lighter members have known or hypothetical tetroxides, MO4. Their oxidizing power decreases as one descends the group. FeO4 is not known due to its extraordinarily large electron affinity—the amount of energy released when an electron is added to a neutral atom or molecule to form a negative ion—which results in the formation of the well-known oxyanion ferrate(VI), . Ruthenium tetroxide, RuO4, which is formed by oxidation of ruthenium(VI) in acid, readily undergoes reduction to ruthenate(VI), . Oxidation of ruthenium metal in air forms the dioxide, RuO2. In contrast, osmium burns to form the stable tetroxide, OsO4, which complexes with the hydroxide ion to form an osmium(VIII) -"ate" complex, [OsO4(OH)2]2−. Therefore, eka-osmium properties for hassium should be demonstrated by the formation of a stable, very volatile tetroxide HsO4, which undergoes complexation with hydroxide to form a hassate(VIII), [HsO4(OH)2]2−. 
Ruthenium tetroxide and osmium tetroxide are both volatile due to their symmetrical tetrahedral molecular geometry and because they are charge-neutral; hassium tetroxide should similarly be a very volatile solid. The trend of the volatilities of the group 8 tetroxides is known to be RuO4 < OsO4 > HsO4, which confirms the calculated results. In particular, the calculated enthalpies of adsorption—the energy required for the adhesion of atoms, molecules, or ions from a gas, liquid, or dissolved solid to a surface—of HsO4, −(45.4 ± 1) kJ/mol on quartz, agrees very well with the experimental value of −(46 ± 2) kJ/mol. The first goal for chemical investigation was the formation of the tetroxide; it was chosen because ruthenium and osmium form volatile tetroxides, being the only transition metals to display a stable compound in the +8 oxidation state. Although this choice for gas-phase chemical studies was clear from the beginning, chemical characterization of hassium was considered a difficult task for a long time. Although hassium isotopes were first synthesized in 1984, it was not until 1996 that a hassium isotope long-lived enough to allow chemical studies was synthesized. However, this hassium isotope, 269Hs, was synthesized indirectly from the decay of 277Cn; not only are indirect synthesis methods not favourable for chemical studies, but the reaction that produced the isotope 277Cn had a low yield—its cross section was only 1 pb—and thus did not provide enough hassium atoms for a chemical investigation. Direct synthesis of 269Hs and 270Hs in the reaction 248Cm(26Mg,"x"n)274−"x"Hs ("x" = 4 or 5) appeared more promising because the cross section for this reaction was somewhat larger at 7 pb. This yield was still around ten times lower than that for the reaction used for the chemical characterization of bohrium. New techniques for irradiation, separation, and detection had to be introduced before hassium could be successfully characterized chemically. 
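To give a feel for why even the 7 pb cross section left so few atoms to work with, the production rate can be estimated as cross section × target areal density × beam intensity. The sketch below uses typical order-of-magnitude assumptions for the beam intensity and 248Cm target thickness; these figures are illustrative, not values from the article:

```python
N_A = 6.022e23       # Avogadro's number, atoms/mol
PB_TO_CM2 = 1e-36    # 1 picobarn in cm^2

def atoms_per_day(sigma_pb: float, target_mg_cm2: float,
                  molar_mass: float, beam_per_s: float) -> float:
    """Production rate = sigma * (target areal number density) * (beam flux)."""
    areal = target_mg_cm2 * 1e-3 / molar_mass * N_A   # target atoms per cm^2
    rate = sigma_pb * PB_TO_CM2 * areal * beam_per_s  # atoms per second
    return rate * 86400

# Assumed: ~0.8 mg/cm^2 curium-248 target, ~3e12 beam particles per second.
print(f"~{atoms_per_day(7, 0.8, 248, 3e12):.1f} atoms/day at 7 pb")
```

Under these assumptions the yield is only a few atoms per day, which is why new irradiation, separation, and detection techniques were needed.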
Ruthenium and osmium have very similar chemistry due to the lanthanide contraction, but iron shows some differences from them; for example, although ruthenium and osmium form stable tetroxides in which the metal is in the +8 oxidation state, iron does not. In preparation for the chemical characterization of hassium, research focused on ruthenium and osmium rather than iron because hassium was expected to be similar to ruthenium and osmium, as the predicted data on hassium closely matched that of those two. The first chemistry experiments were performed using gas thermochromatography in 2001, using the synthetic osmium radioisotopes 172Os and 173Os as a reference. During the experiment, seven hassium atoms were synthesized using the reactions 248Cm(26Mg,5n)269Hs and 248Cm(26Mg,4n)270Hs. They were then thermalized and oxidized in a mixture of helium and oxygen gases to form hassium tetroxide molecules. The measured deposition temperature of hassium tetroxide was higher than that of osmium tetroxide, which indicated the former was the less volatile one, and this placed hassium firmly in group 8. The measured enthalpy of adsorption for HsO4, , was significantly lower than the predicted value, , indicating OsO4 is more volatile than HsO4, contradicting earlier calculations that implied they should have very similar volatilities. For comparison, the value for OsO4 is . (The calculations that yielded a closer match to the experimental data came after the experiment, in 2008.) It is possible hassium tetroxide interacts differently with silicon nitride than with silicon dioxide, the chemicals used for the detector; further research is required, including more accurate measurements of the nuclear properties of 269Hs and comparisons with RuO4 in addition to OsO4. In 2004, scientists reacted hassium tetroxide and sodium hydroxide to form sodium hassate(VIII), a reaction that is well known with osmium. 
This was the first acid-base reaction with a hassium compound. The team from the University of Mainz planned in 2008 to study the electrodeposition of hassium atoms using the new TASCA facility at GSI. Their aim was to use the reaction 226Ra(48Ca,4n)270Hs. Scientists at GSI were hoping to use TASCA to study the synthesis and properties of the hassium(II) compound hassocene, Hs(C5H5)2, using the reaction 226Ra(48Ca,"x"n). This compound is analogous to the lighter compounds ferrocene, ruthenocene, and osmocene, and is expected to have the two cyclopentadienyl rings in an eclipsed conformation like ruthenocene and osmocene and not in a staggered conformation like ferrocene. Hassocene, which is expected to be a stable and highly volatile compound, was chosen because it has hassium in the low formal oxidation state of +2—although the bonding between the metal and the rings is mostly covalent in metallocenes—rather than the high +8 state that had previously been investigated, and relativistic effects were expected to be stronger in the lower oxidation state. The highly symmetrical structure of hassocene and its low number of atoms make relativistic calculations easier. To date, there are no experimental reports of hassocene.
Henry Kissinger Henry Alfred Kissinger (born Heinz Alfred Kissinger; May 27, 1923) is an American politician, diplomat, and geopolitical consultant who served as United States Secretary of State and National Security Advisor under the presidential administrations of Richard Nixon and Gerald Ford. A Jewish refugee who fled Nazi Germany with his family in 1938, he became National Security Advisor in 1969 and U.S. Secretary of State in 1973. For his actions negotiating a ceasefire in Vietnam, Kissinger received the 1973 Nobel Peace Prize under controversial circumstances, with two members of the committee resigning in protest. A practitioner of "Realpolitik", Kissinger played a prominent role in United States foreign policy between 1969 and 1977. During this period, he pioneered the policy of "détente" with the Soviet Union, orchestrated the opening of relations with the People's Republic of China, engaged in what became known as shuttle diplomacy in the Middle East to end the Yom Kippur War, and negotiated the Paris Peace Accords, ending American involvement in the Vietnam War. Kissinger has also been associated with such controversial policies as U.S. involvement in the 1973 Chilean military coup, a "green light" to Argentina's military junta for their Dirty War, and U.S. support for Pakistan during the Bangladesh War despite the genocide being perpetrated by his allies. After leaving government, he formed Kissinger Associates, an international geopolitical consulting firm. Kissinger has written over a dozen books on diplomatic history and international relations. Kissinger remains a controversial and polarizing figure in American politics, condemned as an alleged war criminal by many journalists, political activists, and human rights lawyers, and venerated as a highly effective U.S. Secretary of State by many prominent international relations scholars. 
Kissinger was born Heinz Alfred Kissinger in Fürth, Bavaria, Germany in 1923 to a family of German Jews. His father, Louis Kissinger (1887–1982), was a schoolteacher. His mother, Paula (Stern) Kissinger (1901–1998), from Leutershausen, was a homemaker. Kissinger has a younger brother, Walter Kissinger (born 1924). The surname Kissinger was adopted in 1817 by his great-great-grandfather Meyer Löb, after the Bavarian spa town of Bad Kissingen. In his youth, Kissinger enjoyed playing soccer, and played for the youth wing of his favorite club, SpVgg Fürth, which was one of the nation's best clubs at the time. In 1938, when Kissinger was 15 years old, he fled Germany with his family as a result of Nazi persecution. His family briefly emigrated to London, England, before arriving in New York on September 5. Kissinger spent his high school years in the Washington Heights section of Upper Manhattan as part of the German Jewish immigrant community that resided there at the time. Although Kissinger assimilated quickly into American culture, he never lost his pronounced German accent, due to childhood shyness that made him hesitant to speak. Following his first year at George Washington High School, he began attending school at night and worked in a shaving brush factory during the day. Following high school, Kissinger enrolled in the City College of New York, studying accounting. He excelled academically as a part-time student, continuing to work while enrolled. His studies were interrupted in early 1943, when he was drafted into the U.S. Army. Kissinger underwent basic training at Camp Croft in Spartanburg, South Carolina. On June 19, 1943, while stationed in South Carolina, at the age of 20, he became a naturalized U.S. citizen. The army sent him to study engineering at Lafayette College, Pennsylvania, but the program was canceled, and Kissinger was reassigned to the 84th Infantry Division. 
There, he made the acquaintance of Fritz Kraemer, a fellow Jewish immigrant from Germany who noted Kissinger's fluency in German and his intellect, and arranged for him to be assigned to the military intelligence section of the division. Kissinger saw combat with the division, and volunteered for hazardous intelligence duties during the Battle of the Bulge. During the American advance into Germany, Kissinger, only a private, was put in charge of the administration of the city of Krefeld, owing to a lack of German speakers on the division's intelligence staff. Within eight days he had established a civilian administration. Kissinger was then reassigned to the Counter Intelligence Corps (CIC), where he became a CIC Special Agent holding the enlisted rank of sergeant. He was given charge of a team in Hanover assigned to tracking down Gestapo officers and other saboteurs, for which he was awarded the Bronze Star. In June 1945, Kissinger was made commandant of the Bensheim metro CIC detachment, Bergstrasse district of Hesse, with responsibility for de-Nazification of the district. Although he possessed absolute authority and powers of arrest, Kissinger took care to avoid abuses against the local population by his command. In 1946, Kissinger was reassigned to teach at the European Command Intelligence School at Camp King and, as a civilian employee following his separation from the army, continued to serve in this role. Henry Kissinger received his AB degree "summa cum laude", Phi Beta Kappa in political science from Harvard College in 1950, where he lived in Adams House and studied under William Yandell Elliott. His senior undergraduate thesis, titled "The Meaning of History: Reflections on Spengler, Toynbee and Kant", was over 400 pages long. He received his MA and PhD degrees at Harvard University in 1951 and 1954, respectively. In 1952, while still a graduate student at Harvard, he served as a consultant to the director of the Psychological Strategy Board. 
His doctoral dissertation was titled "Peace, Legitimacy, and the Equilibrium (A Study of the Statesmanship of Castlereagh and Metternich)". In his PhD dissertation, Kissinger first introduced the concept of "legitimacy", which he defined as: "Legitimacy as used here should not be confused with justice. It means no more than an international agreement about the nature of workable arrangements and about the permissible aims and methods of foreign policy". An international order accepted by all of the major powers is "legitimate" whereas an international order not accepted by one or more of the great powers is "revolutionary" and hence dangerous. Thus, when after the Congress of Vienna in 1815, the leaders of Britain, France, Austria, Prussia and Russia agreed to co-operate in the Concert of Europe to preserve the peace, in Kissinger's viewpoint this international system was "legitimate" because it was accepted by the leaders of all five of the Great Powers of Europe. Notably, Kissinger's "Primat der Außenpolitik" (primacy of foreign policy) approach to diplomacy took it for granted that as long as the decision-makers in the major states were willing to accept the international order, then it is "legitimate", with questions of public opinion and morality dismissed as irrelevant. Kissinger remained at Harvard as a member of the faculty in the Department of Government and, with Robert R. Bowie, co-founded the Center for International Affairs in 1958, where he served as associate director. In 1955, he was a consultant to the National Security Council's Operations Coordinating Board. During 1955 and 1956, he was also study director in nuclear weapons and foreign policy at the Council on Foreign Relations. He released his book "Nuclear Weapons and Foreign Policy" the following year. 
The book, which was a critique of the Eisenhower Administration's "massive retaliation" nuclear doctrine, caused much controversy at the time with its advocacy of using tactical nuclear weapons on a regular basis to win wars. From 1956 to 1958 he worked for the Rockefeller Brothers Fund as director of its Special Studies Project. He was director of the Harvard Defense Studies Program between 1958 and 1971. He was also director of the Harvard International Seminar between 1951 and 1971. Outside of academia, he served as a consultant to several government agencies and think tanks, including the Operations Research Office, the Arms Control and Disarmament Agency, Department of State, and the RAND Corporation. Keen to have a greater influence on U.S. foreign policy, Kissinger became foreign policy advisor to the presidential campaigns of Nelson Rockefeller, supporting his bids for the Republican nomination in 1960, 1964, and 1968. Kissinger first met Richard Nixon at a party hosted by Clare Boothe Luce in 1967, saying that he found him more "thoughtful" than he expected. During the Republican primaries in 1968, Kissinger again served as the foreign policy adviser to Rockefeller and in July 1968 called Nixon "the most dangerous of all the men running to have as president". Initially upset when Nixon won the Republican nomination, the ambitious Kissinger soon changed his mind about Nixon and contacted a Nixon campaign aide, Richard Allen, to state he was willing to do anything to help Nixon win. After Nixon became president in January 1969, Kissinger was appointed as National Security Advisor. Kissinger served as National Security Advisor and Secretary of State under President Richard Nixon, and continued as Secretary of State under Nixon's successor Gerald Ford. 
On Nixon's last full day in office, in the meeting where he informed Ford of his intention to resign the next day, he advised Ford that he felt it was very important that he keep Kissinger in his new administration, to which Ford agreed. The relationship between Nixon and Kissinger was unusually close, and has been compared to the relationships of Woodrow Wilson and Colonel House, or Franklin D. Roosevelt and Harry Hopkins. In all three cases, the State Department was relegated to a backseat role in developing foreign policy. Historian David Rothkopf has compared the personalities of Nixon and Kissinger. A proponent of "Realpolitik", Kissinger played a dominant role in United States foreign policy between 1969 and 1977. In that period, he extended the policy of "détente". This policy led to a significant relaxation in US–Soviet tensions and played a crucial role in 1971 talks with Chinese Premier Zhou Enlai. The talks concluded with a rapprochement between the United States and the People's Republic of China, and the formation of a new strategic anti-Soviet Sino-American alignment. He was jointly awarded the 1973 Nobel Peace Prize with Lê Đức Thọ for helping to establish a ceasefire and U.S. withdrawal from Vietnam. The ceasefire, however, was not durable. Thọ declined to accept the award and Kissinger appeared deeply ambivalent about it (donating his prize money to charity, not attending the award ceremony and later offering to return his prize medal). As National Security Advisor, in 1974 Kissinger directed the much-debated National Security Study Memorandum 200. Kissinger and Nixon shared a penchant for secrecy and conducted numerous "backchannel" negotiations that excluded State Department experts. One such years-long backchannel was conducted through the Soviet Ambassador to the United States, Anatoly Dobrynin. 
One historian argues that Kissinger formed such a strong "bond of affection, trust, and mutual interest" with the ambassador that he came to see U.S.-Soviet relations as holding exaggerated significance. He typically met with or talked to Dobrynin about four times a week, and they had a direct line to each other's offices. Nixon gave Kissinger the freedom to assemble his own team in 1969 in order to "revitalize" the National Security Council. Kissinger's team consisted of Colonel Alexander Haig, Morton Halperin, and Anthony Lake. Right from the start, Kissinger started to exclude both the Secretary of State William P. Rogers and the Defense Secretary Melvin Laird from the decision-making process. Kissinger had a low opinion of the Washington bureaucracy, writing in his PhD dissertation "A World Restored" that: "The essence of bureaucracy is its quest for safety; its success is calculability. Profound policy thrives on perpetual creation, on a constant redefinition of goals...Bureaucracies are designed to execute, not conceive". As National Security Adviser, Kissinger saw a chance to put his theories into action, favoring a strategy of being unpredictable in an attempt to change the diplomatic equilibrium in favor of the United States. Kissinger sought to place diplomatic pressure on the Soviet Union by playing the "China card". Kissinger initially had little interest in China when he began his work as National Security Adviser in 1969, and the driving force behind the rapprochement with China was Nixon. Out of fear of the China Lobby that he himself once cultivated in the 1950s, Nixon wanted to keep the negotiations with China secret. During his visit to Pakistan (which was uniquely an ally of both the United States and China) in August 1969, Nixon asked General Yahya Khan to pass on a message to Mao Zedong that he wanted an "opening" to China. 
Shortly afterwards, Kissinger asked that the back channel to China work only through messages personally sent from the Pakistani ambassador in Washington, Agha Hilaly, to Yahya Khan. The Pakistani back channel to China worked very slowly, not least because Yahya Khan expected to be paid bribes for his help, and only six months later, in February 1970, did Yahya Khan pass on a message to Nixon from Mao expressing interest. In his memoirs Kissinger portrayed Yahya Khan as an honorable soldier who never asked for any rewards for his work as an intermediary, but in fact he demanded extensive American military supplies and support for Pakistan's long running feud with India as the price of his help. When Chiang Ching-kuo, the son and heir apparent of Generalissimo Chiang Kai-shek, arrived in Washington in April 1970 for a visit, both Nixon and Kissinger promised him that they would never abandon Taiwan or make any compromises with Mao Zedong, although Nixon did speak vaguely of his wish to improve relations with the People's Republic. In November 1970, Yahya Khan visited Beijing to meet Mao, and was informed: "In order to discuss the subject of vexation of China's territory called Taiwan, a special envoy from President Nixon would be welcome in Beijing". Kissinger made two trips to the People's Republic of China in July and October 1971 (the first of which was made in secret) to confer with Premier Zhou Enlai, then in charge of Chinese foreign policy. Unlike Mao, who spoke no language other than Mandarin, Zhou spoke French (the traditional language of diplomacy) at a conversational level, and it was he who usually handled relations with foreigners. According to Kissinger's books "The White House Years" and "On China", the first secret China trip was arranged through Pakistani and Romanian diplomatic and Presidential involvement, as there were no direct communication channels between the states. 
During his visit to Beijing, the main issue turned out to be Taiwan, as Zhou demanded that the United States recognize Taiwan as a legitimate part of the People's Republic of China, pull U.S. forces out of Taiwan, and end military support for the Kuomintang regime, saying that once the Taiwan issue was resolved, there would be no outstanding problems in Sino-American relations. Kissinger gave way by promising to pull U.S. forces out of Taiwan, saying two-thirds would be pulled out when the Vietnam War ended and the rest as Sino-American relations improved. In October 1971, at the same time Kissinger was making his second trip to the People's Republic, the issue of which Chinese government should represent China in the United Nations came up again. Out of concern not to be seen abandoning an ally, the United States tried to promote a compromise under which both Chinese regimes would be UN members, although Kissinger called it "an essentially doomed rearguard action". At the same time that the American ambassador to the UN, George H. W. Bush, was lobbying for the "two Chinas" formula, Kissinger was removing favorable references to Taiwan from a speech that Rogers was preparing, as he expected the Republic of China to be expelled from the UN. During his second visit to Beijing, Kissinger told Zhou that according to a public opinion poll 62% of Americans wanted Taiwan to remain a UN member, and asked him to consider the "two Chinas" compromise to avoid offending American public opinion. Zhou responded with his claim that the People's Republic was the legitimate government of all China and that no compromise was possible on the Taiwan issue. When Kissinger said that the United States could not totally sever ties with Chiang, who had been an ally in World War II, Zhou cynically said: "That is still your old saying-you don't want to cast aside old friends. But you have already cast aside many old friends. 
Chiang Kai-shek was even an older friend of ours than yours". Kissinger told Nixon that Bush was "too soft and not sophisticated" enough to properly represent the United States at the UN, and expressed no anger when the UN General Assembly voted to expel Taiwan and give China's seat on the UN Security Council to the People's Republic. Bush later said about the expulsion of Taiwan: "What was hard...to understand was Henry's telling me he was 'disappointed' by the final outcome of the Taiwan vote...given the fact that we were saying one thing in New York and doing another in Washington, that outcome was inevitable". The stubbornness of Chiang, who just as much as Mao believed in "one China", ensured the defeat of the "two Chinas" compromise; by 1971, the general consensus around the world was that Chiang was delusional in believing that he would one day return in triumph to the mainland to take back control from the Communist "rebels" who had defeated him in 1949, and that it was absurd for the Republic of China, which controlled only Taiwan, to represent China at the UN. Kissinger's trips paved the way for the groundbreaking 1972 summit between Nixon, Zhou, and Communist Party of China Chairman Mao Zedong, as well as the formalization of relations between the two countries, ending 23 years of diplomatic isolation and mutual hostility. The result was the formation of a tacit strategic anti-Soviet alliance between China and the United States. While Kissinger's diplomacy led to economic and cultural exchanges between the two sides and the establishment of Liaison Offices in the Chinese and American capitals, with serious implications for Indochinese matters, full normalization of relations with the People's Republic of China would not occur until 1979, because the Watergate scandal overshadowed the latter years of the Nixon presidency and because the United States continued to recognize the Republic of China on Taiwan. 
Kissinger's involvement in Indochina started prior to his appointment as National Security Adviser to Nixon. While still at Harvard, he had worked as a consultant on foreign policy to both the White House and State Department. Kissinger says that "In August 1965 ... [Henry Cabot Lodge, Jr.], an old friend serving as Ambassador to Saigon, had asked me to visit Vietnam as his consultant. I toured Vietnam first for two weeks in October and November 1965, again for about ten days in July 1966, and a third time for a few days in October 1966 ... Lodge gave me a free hand to look into any subject of my choice". He became convinced of the meaninglessness of military victories in Vietnam, "... unless they brought about a political reality that could survive our ultimate withdrawal". In a 1967 peace initiative, he mediated between Washington and Hanoi. Nixon had been elected in 1968 on the promise of achieving "peace with honor" and ending the Vietnam War. By promising to continue the peace talks which Johnson had begun in Paris in May 1968, Nixon admitted that he had ruled out "a military victory" in Vietnam. Nixon wanted a diplomatic settlement similar to the armistice of Panmunjom that ended the Korean War, and frequently stated in private that he had no intention of being "the first president of the United States to lose a war". To force the North Vietnamese to sign an armistice, Nixon favored a two-pronged approach: the "madman theory" of acting rashly to intimidate the North Vietnamese, while at the same time using the strategy of "linkage" to improve relations with the Soviet Union and China in order to persuade both nations to stop sending arms to North Vietnam. In office, and assisted by Kissinger, Nixon implemented a policy of Vietnamization that aimed to gradually withdraw U.S. 
troops while expanding the combat role of the South Vietnamese Army so that it would be capable of independently defending its government against the National Front for the Liberation of South Vietnam, a Communist guerrilla organization, and the North Vietnamese army (Vietnam People's Army or PAVN). In an article published in "Foreign Affairs" in January 1969, Kissinger criticized General William Westmoreland's attrition strategy because the Vietnamese Communists were willing to accept far higher losses on the battlefield than the United States and could therefore "win" as long as they did not "lose", merely by keeping the war going. In the same article, he argued that the losses endured by the Vietnamese Communists in the Tet Offensive were meaningless, as the offensive had turned American public opinion against the war, ruling out the possibility of a military solution; the best that could be done now was to negotiate the most favorable peace settlement possible at the Paris peace talks. When he came into office in 1969, Kissinger favored a negotiating strategy under which the United States and North Vietnam would sign an armistice and agree to pull their troops out of South Vietnam, while the South Vietnamese government and the Viet Cong were to agree to a coalition government. Kissinger had doubts about Nixon's theory of "linkage", believing that it would give the Soviet Union leverage over the United States, and unlike Nixon he was less concerned about the ultimate fate of South Vietnam. One of Kissinger's first acts as National Security Adviser in early 1969 was to seek the opinions of the Vietnam experts within the CIA, the military, and the State Department. The lengthy volume that emerged contained a diverse collection of opinions, with some stating the South Vietnamese were making "rapid strides" while others doubted that the government in Saigon would "ever constitute an effective political or military counter to the Vietcong". 
The "bulls" estimated that American troops would need to fight on in Vietnam for 8.3 years before the South Vietnamese would be able to fight on their own while the "bears" estimated it take 13.4 years of American troops fighting in Vietnam before the South Vietnamese would be able to fight on their own. Kissinger passed the volume on to Nixon with the comment that there was no consensus within the expert community with the implied conclusion that he should be free on his own without consulting the experts. In early 1969, Kissinger was opposed to the plans for Operation Menu, the bombing of Cambodia, fearing that Nixon was acting rashly with no plans for the diplomatic fall-out, but on 16 March 1969 Nixon at a meeting at the White House attended by Kissinger announced the bombing would start the next day. As Congress was unlikely to grant approval to bombing Cambodia, Nixon decided to go ahead without Congressional approval and keep the bombings secret, a decision that several constitutional law experts argued was illegal. In May 1969, the bombing was leaked to William M. Beecher of the "New York Times" who published an article about it, which infuriated Kissinger. At the time, Kissinger told the FBI director J. Edgar Hoover "we will destroy whoever did this". As a result, the phones of 13 members of Kissinger's staff were taped by the FBI without a warrant. At the time, Kissinger portrayed himself to his friends at Harvard as a moderating force who was working to remove the United States from Vietnam, saying he did not want to end up like his predecessor W.W. Rostow, whose actions as National Security Adviser had caused him to be ostracized by the liberal American intelligentsia. 
As part of the "linkage" concept, Kissinger in March 1969 sent Cyrus Vance to Moscow with the message that if the Soviet Union pressured North Vietnam into a diplomatic settlement favorable to the United States, the reward would be concessions on the talks on limiting the nuclear arms race. At the same time, Kissinger met with the Soviet ambassador Anatoly Dobrynin to warn him that Nixon was a dangerous man who wanted to escalate the Vietnam war. Nixon and Kissinger played a "good cop-bad cop" routine with Dobrynin with Nixon acting the part of the petulant president at the end of his patience with North Vietnam while Kissinger acted as the reasonable diplomat anxious to improve relations with the Soviet Union, saying to Dobrynin in May 1969 that Nixon would "escalate the war" if the Soviet Union "didn't produce a settlement" in Vietnam. At another meeting in 1969, Kissinger warned Dobrynin that "the train has just left the station and is now headed down the track", saying the Soviet Union better start pressuring North Vietnam now before Nixon did something truly reckless and dangerous. The attempt at "linkage" failed as the Soviet Union did not pressure North Vietnam and instead Dobrynin told Kissinger that the Soviets wanted better relations with the United States regardless of the Vietnam war. After the failure of the "linkage" attempt, Nixon became more open to the alternative strategy suggested by the Defense Secretary Melvin Laird who argued that the burden of the war should be shifted to the South Vietnamese, which was initially called "de-Americanization" and which Laird renamed Vietnamization because it sounded better. On 4 August 1969, Kissinger met secretly with Xuân Thủy at the Paris apartment of Jean Sainteny to discuss peace. Kissinger repeated the American offer of "mutual withdrawal" of U.S and North Vietnamese forces from South Vietnam which Thủy rejected while Thủy demanded a new government in Saigon which Kissinger rejected.  
Kissinger had a low opinion of North Vietnam, saying "I can't believe that a fourth-rate power like North Vietnam doesn't have a breaking point". Kissinger was opposed to the strategy of Vietnamization, expressing some doubt about the ability of the ARVN (Army of the Republic of Vietnam, i.e. the South Vietnamese Army) to hold the field, causing much tension with Defense Secretary Laird, who was deeply committed to Vietnamization. In September 1969, Kissinger in a memo advised Nixon against "de-escalation", saying that keeping U.S. troops fighting in Vietnam "remains one of our few bargaining weapons". In the same memo, Kissinger stated he was "deeply disturbed" that Nixon had started pulling out U.S. troops, saying that withdrawing the troops was like "salted peanuts" to the American people: "the more U.S. troops come home, the more will be demanded", giving the advantage to the enemy, who merely had to "wait us out". Instead, he recommended that the United States resume bombing North Vietnam and mine the coast. Later in September 1969, Kissinger proposed to Nixon a plan for what he called a "savage, punishing" blow against North Vietnam, code-named Duck Hook, arguing that this was the best way to force North Vietnam to agree to peace on American terms. Laird was strongly opposed to Duck Hook, warning Nixon that the use of nuclear weapons to kill a massive number of North Vietnamese civilians would alienate American public opinion from the administration, and persuaded Nixon to reject it. Reflecting his background as a Harvard professor of political science who belonged to the "Primat der Aussenpolitik" school, which saw foreign policy as belonging only to a small elite, Kissinger was less sensitive to public opinion than Laird, a former Republican congressman who constantly advised Nixon to keep American public opinion in mind. 
Kissinger played a key role in the bombing of Cambodia to disrupt PAVN and Viet Cong units launching raids into South Vietnam from within Cambodia's borders and resupplying their forces via the Ho Chi Minh trail and other routes, as well as in the 1970 Cambodian Incursion and subsequent widespread bombing of Khmer Rouge targets in Cambodia. The Paris peace talks had become stalemated by late 1969 owing to the obstructionism of the South Vietnamese delegation, who wanted the talks to fail. The South Vietnamese President Nguyễn Văn Thiệu did not want the United States to withdraw from Vietnam, and out of frustration with him, Kissinger decided to begin secret peace talks in Paris, parallel to the official talks, of which the South Vietnamese were unaware. On 21 February 1970, Kissinger secretly met Lê Đức Thọ, the North Vietnamese diplomat who was to become his most tenacious adversary, in a modest house in a Paris suburb. In 1981, Kissinger told the journalist Stanley Karnow: "I don't look back on our meetings with any great joy, yet he was a person of substance and discipline who defended the position he represented with dedication". Not until February 1971 were Rogers and Laird first informed of the parallel peace talks in Paris. Kissinger met Tho three times between February and April 1970, and the North Vietnamese first sensed a softening of the American position during these talks, as Kissinger slightly altered the "mutual withdrawal formula" that the Americans had previously held to. Nixon was gravely disappointed that the secret talks in Paris did not have the prompt results he wanted. Kissinger wrote in his memoirs that "historians rarely do justice to the psychological stress on a policy-maker", noting that by early 1970 Nixon was feeling very much besieged and inclined to lash out against a world he believed was plotting his downfall. 
Nixon had become obsessed with the film "Patton", seeing in its portrayal of Patton as a solitary and misunderstood genius whom the world did not appreciate a parallel to himself, and he watched the film over and over again. On 18 March 1970, the Prime Minister of Cambodia, Lon Nol, carried out a coup against Prince Sihanouk and ultimately declared Cambodia a republic. Out of fury with Lon Nol, Sihanouk went to Beijing, where he allied himself with his former enemies, the Khmer Rouge, calling upon the Khmer people to "liberate our motherland". As most of the Khmer peasantry regarded Sihanouk as a god-like figure, the royal endorsement of the Khmer Rouge had immediate results. Cambodia had descended into chaos by late March 1970 as the Lon Nol regime, to prove its nationalist credibility, organized pogroms against the Vietnamese minority, leading the North Vietnamese and the Viet Cong to attack and defeat Cambodia's weak army. Nixon believed the situation in Cambodia offered him the chance to be like Patton by doing something bold and risky. Kissinger was initially ambivalent about Nixon's plans to invade Cambodia, but as he saw the president was committed, he became more and more supportive. On 23 April 1970, Nixon in a memo to Kissinger declared "we need a bold move in Cambodia to show that we stand with Lon Nol". On 26 April 1970, Nixon decided to "go for broke" by invading Cambodia with U.S. troops. When Nixon's speech-writer William Safire pointed out that the use of U.S. troops violated the Nixon Doctrine that America's Asian allies should do the fighting, Kissinger snapped at him: "We wrote the goddamn doctrine, we can change it!" Kissinger was under immense strain as several of his aides were planning to resign to protest the invasion of Cambodia, and his liberal friends from Harvard were pressuring him to resign as well, while Nixon was all for more belligerence. 
Kissinger received a phone call from Nixon and his best friend, Charles "Bebe" Rebozo, who both sounded very drunk; Nixon began the call and then handed the phone to Rebozo, who said: "The President wants you to know if this doesn't work, Henry, it's your ass". On 30 April 1970, the United States invaded Cambodia, which Nixon announced in a television address that Kissinger contemptuously called "vintage Nixon" because of its overblown rhetoric. At the time, Nixon was seen as recklessly escalating the war, and in early May 1970 the largest protests ever against the Vietnam War took place. Four of Kissinger's aides resigned in protest, while the Cambodian "incursion" ended several of Kissinger's friendships with colleagues from Harvard when he chose not to resign. Nixon in his memoirs claimed that Kissinger "took a particularly hard line" with regard to the "Cambodian incursion". Roger Morris, one of Kissinger's aides, later recalled that Kissinger was frightened by the huge anti-war demonstrations, comparing the antiwar movement to the Nazis. Kissinger was haunted by memories of his youth in Germany, and had a deep distrust of mass movements of either the left or the right, favoring the "Primat der Aussenpolitik" school of foreign-policy making by an elite with the masses excluded. In his interview with Karnow, Kissinger maintained he felt torn about where he stood and blamed Nixon for his failure to find "the language of respect and compassion that might have created a bridge at least to the more reasonable elements of the antiwar movement". The Cambodian "incursion" saw American and South Vietnamese forces take the areas of eastern Cambodia that American commanders called the Fish Hook and the Parrot's Beak and capture an impressive haul of arms originating from China and the Soviet Union. But the majority of Vietnamese Communist forces had withdrawn deeper into Cambodia before the invasion, with only a small number left behind to wage a fighting retreat to avoid charges of cowardice. 
In June 1970, the Americans pulled out of Cambodia and the Vietnamese Communists returned, though the loss of weapons greatly hindered their operations in the Saigon area for the rest of 1970. Having committed itself to supporting Lon Nol, the United States now had two allies instead of one to support in Southeast Asia. The bombing campaign in Cambodia contributed to the chaos of the Cambodian Civil War, which saw the forces of leader Lon Nol unable to retain foreign support to combat the growing Khmer Rouge insurgency that would overthrow him in 1975. Documents uncovered from the Soviet archives after 1991 reveal that the North Vietnamese invasion of Cambodia in 1970 was launched at the explicit request of the Khmer Rouge and negotiated by Pol Pot's then second-in-command, Nuon Chea. The American bombing of Cambodia resulted in 40,000–150,000 deaths from 1969 to 1973, including at least 5,000 civilians. Pol Pot biographer David P. Chandler argues that the bombing "had the effect the Americans wanted—it broke the Communist encirclement of Phnom Penh." However, Ben Kiernan and Taylor Owen suggest that "the bombs drove ordinary Cambodians into the arms of the Khmer Rouge, a group that seemed initially to have slim prospects of revolutionary success." Kissinger himself defers to others on the subject of casualty estimates: "...since I am in no position to make an accurate estimate of my own, I consulted the OSD Historian, who gave me an estimate of 50,000 based on the tonnage of bombs delivered over the period of four and a half years." The Cambodian invasion further polarized an already deeply divided nation, and the President's Commission on Campus Unrest, headed by William Scranton, wrote in its report of September 1970 that the divisions in American society were "as deep as any since the Civil War". 
A number of Republican politicians complained to Nixon that his stance on Vietnam was hurting their chances in the congressional elections of November 1970, leading the president to say to Kissinger it was natural that liberals like Senator George McGovern and Senator Mark Hatfield wanted to "bug out...But when the Right starts wanting to get out, for whatever reason, that's "our" problem". In an attempt to change Nixon's image, Kissinger and Nixon devised the notion of a "standstill ceasefire" under which both sides would occupy whatever areas of South Vietnam they were holding at the time of the ceasefire, an offer that Nixon publicly made in a television address on 7 October 1970. In his speech, Nixon apparently moved away from the "mutual withdrawal formula" the North Vietnamese kept rejecting by not mentioning it, winning much acclaim, even from opponents like McGovern and Hatfield (though he also said the withdrawal of U.S. forces would be "based on principles" he had "previously" discussed, i.e. the "mutual withdrawal formula"). Kissinger and Nixon both disliked the idea of a "standstill ceasefire" as weakening South Vietnam, but fearing that if Nixon continued on his present course he would not be reelected in 1972, they saw the offer as worth the risk, especially since the North Vietnamese rejected it. In private, Kissinger called the "standstill ceasefire" offer the means that "at a minimum...would give us temporary relief from public pressures". Subsequently, Kissinger has maintained Nixon's offer of 7 October was sincere and that the North Vietnamese made a major error in rejecting it. In late 1970, Nixon and Kissinger became concerned that the North Vietnamese would launch a major offensive in 1972 to coincide with the presidential election, making it imperative to cut the Ho Chi Minh Trail in 1971 to prevent the Communists from building up their forces. As the Cooper–Church Amendment had forbidden U.S. 
troops from fighting in Laos, the plan that was conceived called for South Vietnamese troops with American air support to invade Laos to sever the Ho Chi Minh Trail in an operation code-named Lam Son 719. Kissinger wrote of Lam Son: "the operation, conceived in doubt and assailed by skepticism, proceeded in confusion". In the first major test of Vietnamization, the ARVN failed miserably. The ARVN invaded Laos on 8 February 1971 and were stopped decisively by the North Vietnamese. In March, Kissinger sent his deputy Haig to inspect the situation personally, leading him to report that the ARVN officers lacked courage and did not want to fight, making retreat the only option. The retreat, when it began, turned into a rout. Kissinger wrote Lam Son had fallen "far short of our expectations", which he blamed on bad American planning, poor South Vietnamese tactics, and Nixon's leadership style, leading Karnow to write that he blamed "everyone, characteristically, except himself". In late May 1971, Kissinger returned to Paris to fruitlessly meet again with Tho. The North Vietnamese demand that Thiệu step down proved to be the main obstacle. Kissinger did not want a repeat of the prolonged bout of political instability that characterized South Vietnam from 1963 to 1967, and believed Thiệu was a force for order. Tho suggested to Kissinger that the Americans "stop supporting" Thiệu, who was running for reelection in a ballot scheduled for 3 October 1971. Tho claimed that Thiệu's opponents, Air Marshal Nguyễn Cao Kỳ and General Dương Văn Minh, aka "Big Minh", were both open to a coalition government with the Viet Cong, and that if either man were elected president, the war would be over by late 1971. Thiệu used a legal technicality to disqualify Kỳ, while Minh dropped out when it was clear the election was rigged. In the 1971 election, the CIA donated money to Thiệu's reelection campaign, and the Americans did not pressure Thiệu to stop rigging the election. 
Though Kissinger did not regard South Vietnam as important in its own right, he believed it was necessary to support South Vietnam to maintain the United States as a global power, believing that none of America's allies would trust the United States if South Vietnam were abandoned too quickly. Kissinger also believed that if South Vietnam were to collapse, it would "leave deep scars on our society, fueling impulses for recrimination". As a Jew who had grown up in Nazi Germany, Kissinger was haunted by how the "Dolchstoßlegende" had been used by the German right to delegitimize the Weimar Republic, and believed that something similar would happen in the United States should it lose the Vietnam War, fueling the rise of right-wing extremism. In June 1971, Kissinger supported Nixon's effort to ban the Pentagon Papers, saying the "hemorrhage of state secrets" to the media was making diplomacy impossible. Daniel Ellsberg, the man who leaked the Pentagon Papers to the "New York Times", had been consulted by Kissinger for ideas about Vietnam in late 1968 and early 1969, but when he leaked the papers, Kissinger told Nixon that he was a left-wing "fanatic" and a "drug abuser", depicting him as a drug-crazed degenerate of questionable mental stability out to ruin the administration. Reflecting his increasing frustration with the war, Nixon often talked to Kissinger in a bloodthirsty manner about a "fantasy holocaust" in which he would have U.S. forces kill every living thing in North Vietnam and then pull out, leaving the latter, by his own account, appalled. By early 1972, Nixon boasted that he had pulled out 400,000 U.S. soldiers from Vietnam since July 1969, and battle deaths had fallen from an average of 200 per week in 1969 to an average of 10 per week in 1972. The policy of Vietnamization had, as Laird predicted it would, tamed the antiwar movement, as most Americans objected not to the war in Vietnam per se, only to Americans dying in it. 
With the antiwar movement in decline by 1972, Nixon believed his chances of reelection were good, but Kissinger kept complaining that he was losing "negotiating assets" in his talks with Tho every time a withdrawal of American forces was announced. Likewise, Kissinger noted that the major reason Congress, despite the antiwar feelings of many of its members, kept voting to fund the war was the argument that it was patriotic to support "our boys in the field"; as more Americans were pulled out, Congress was less inclined to vote to fund keeping South Vietnamese "boys in the field". However, the imperative of being reelected was far more important to Nixon than giving Kissinger "negotiating assets". In early 1972, Nixon publicly revealed that Kissinger had been secretly negotiating with Tho since 1970, to prove that he really was committed to peace in Vietnam despite what the antiwar movement had been saying about him for the last three years. Reflecting Kissinger's weakening hand in his talks with Tho, by 1971–72 Nixon had increasingly come to believe that the "linkage" concept of improving relations with the Soviet Union and China in exchange for those nations cutting off the supply of weapons to North Vietnam offered his best chance of a favorable peace deal. The week of 21 February 1972 was, in Nixon's words, "the week that changed the world", as he landed in Beijing to meet Mao Zedong. Kissinger, who accompanied Nixon to China, spent much time talking to the suave Chinese Premier Zhou Enlai about Vietnam, pressing him to end the supply of arms to North Vietnam. The talks went nowhere, as Zhou told Kissinger that the North Vietnamese played China off against the Soviet Union, and that to cut off North Vietnam would allow it to fall into the Soviet sphere of influence. 
As the Chinese People's Liberation Army had been badly bloodied by the Red Army in a border war in 1969, Zhou stated that a two-front war, with Chinese forces facing North Vietnam in the south and the Soviet Union in the north, was not acceptable to his government. Zhou offered Kissinger only the vague message that China supported efforts to find peace in Vietnam while refusing to make any promises, though Kissinger also noted that Zhou declined to endorse North Vietnam's demands. Despite Nixon's coming visit, in late 1971 the Chinese drastically increased their military aid to North Vietnam and continued to send a massive amount of weapons south even as Nixon and Kissinger exchanged pleasantries with Mao and Zhou in Beijing. As usual, when the Chinese increased their supply of arms to North Vietnam, the Soviet Union did likewise, as both Communist states competed for influence in Hanoi by trying to be the biggest supplier of weapons. On 30 March 1972, the PAVN launched the Easter Offensive, which overran several provinces in South Vietnam and pushed the ARVN to the brink of collapse. At the time of the Easter Offensive, Kissinger was deeply involved in planning for Nixon's visit to Moscow in May 1972. The offensive brought to the fore the differences between Nixon and Kissinger. Nixon threatened to cancel his summit with Leonid Brezhnev in Moscow if the Soviet Union did not force North Vietnam to end the Easter Offensive at once, saying: "Whatever else happens, we cannot lose this war. The summit isn't worth a damn if the price for it is losing in Vietnam". Nixon in his instructions to Kissinger stated that he viewed relations with the Soviet Union through the prism of the Vietnam War, and if the Soviets were not prepared to help, Kissinger "should just pack up and come home". 
Kissinger for his part believed that Nixon was massively exaggerating Soviet influence in North Vietnam, and he no longer believed, if he ever had, in Nixon's "linkage" concept. Kissinger feared that Nixon was obsessed with Vietnam and that damaging relations with the Soviet Union over Vietnam would destabilize the international power balance by increasing American-Soviet tensions. On 20 April 1972, Kissinger arrived in Moscow without informing the U.S. ambassador Jacob D. Beam, and then went to have tea with Brezhnev in the Kremlin. Nixon, as usual when under stress, departed for a marathon drinking session with Rebozo at Camp David, and via Haig kept sending messages to Kissinger to be tough with Brezhnev. As no American president had ever visited Moscow before, Kissinger got the impression that Brezhnev wanted the planned summit to happen "at almost any cost". Kissinger implemented Nixon's emphasis on building an amicable personal relationship with Brezhnev. Upon his return to Washington, Kissinger reported that Brezhnev would not cancel the summit and was keen to sign the SALT I Treaty. Kissinger went to Paris on 3 May to meet Tho with orders from Nixon that North Vietnam must "Settle or else!" Nixon complained that Kissinger was "obsessed" with the need for a peace treaty, while he himself now wished he had followed his instincts to bomb North Vietnam in 1970, saying that if he had done so, the war would have been over by now. On 2 May 1972, the PAVN had captured Quang Tri, an important provincial city in South Vietnam, and as a result of this victory, Tho was in no mood for compromise. Though Kissinger in general shared Nixon's determination to be tough, he was afraid that the president would overreact and destroy the budding détente with the Soviet Union and China by striking too hard at North Vietnam. 
Moreover, after the rupture caused by the Cambodian incursion, Kissinger was trying hard to rebuild his relations with the liberal American intelligentsia, saying he did not want to become "this administration's Walt Rostow". Kissinger's predecessor Rostow had once been a professor at Harvard, Oxford, MIT, and Cambridge, but after serving as National Security Adviser he was shunned by the Ivy League universities and ended up at the lowly University of Texas, a fate that Kissinger was determined to avoid. On 5 May 1972, Nixon ordered the U.S. Air Force to start bombing Hanoi and Haiphong, and on 8 May ordered the U.S. Navy to mine the coast of North Vietnam. As the bombing and mining of North Vietnam began, Nixon, and even more so Kissinger, waited anxiously for the Soviet reaction, and much to their relief received only the standard statement decrying the American action and a diplomatic note complaining that American aircraft had bombed a Soviet freighter in Haiphong harbor. The Moscow summit was not canceled. On 24 July 1972, Congress passed an act calling for the total withdrawal of all American forces from Vietnam once all of the American POWs in North Vietnam were released, causing Kissinger to say the North Vietnamese only had to wait until "Congress voted us out of the war". However, the sight of Nixon and Kissinger posing for photographs with Brezhnev and Mao deeply worried the North Vietnamese, who were afraid of being "sold out" by either China or the Soviet Union, prompting some flexibility in their negotiating tactics. The Easter Offensive had not caused the collapse of the South Vietnamese government, but it increased the amount of territory under Communist control.
The North Vietnamese were moving towards taking up the "standstill ceasefire" offer and ordered the Viet Cong to seize as much territory as possible in preparation for a "leopard's spot" ceasefire (so called because the patchwork of territories controlled by the Viet Cong and the Saigon government resembled the spots on a leopard's fur). On 1 August 1972, Kissinger met Tho again in Paris, and for the first time Tho seemed willing to compromise, saying that the political and military terms of an armistice could be treated separately and hinting that his government was no longer willing to make the overthrow of Thiệu a precondition. On the evening of 8 October 1972, at a secret meeting between Kissinger and Tho in a house in the Paris suburb of Gif-sur-Yvette once owned by the painter Fernand Léger, came the decisive breakthrough in the talks. Tho believed that Kissinger was, as he later put it, "in a rush" for a peace deal before the presidential election, and began with what he called "a very realistic and very simple proposal" for a ceasefire that would see the Americans pull all their forces out of Vietnam in exchange for the release of all the POWs in North Vietnam. As for the ultimate fate of South Vietnam, Tho proposed the creation of a "council of national reconciliation" that would govern the nation; in the meantime Thiệu could stay in power until the council was formed, while a "leopard's spot" ceasefire would come into effect with the Viet Cong and the Saigon government controlling whatever territories they held at the time of the ceasefire. The "mutual withdrawal formula" was to be disregarded, with PAVN forces staying in South Vietnam and Tho giving Kissinger a vague promise that no more supplies would be sent down the Ho Chi Minh Trail. Kissinger accepted Tho's offer as the best deal possible, saying that the "mutual withdrawal formula" had to be abandoned as it had been "unobtainable through ten years of war...We could not make it a condition for a final settlement.
We had long passed that threshold". Several of Kissinger's own staff, most notably John Negroponte, were strongly opposed to his accepting this offer, saying Kissinger had given away more than he had obtained. In response to Negroponte's objections, Kissinger exploded in rage, accusing him of "nit-picking" and screaming at the top of his voice: "You don't understand. I want to meet their terms. I want to reach an agreement. I want to end this war before the election. It can be done and it will be done...What do you want us to do? Stay there forever?". Reflecting the "leopard's spot" ceasefire, Kissinger sent Thiệu a message saying he should "seize as much territory as possible" before the ceasefire came into effect, while the United States launched Operation Enhance Plus to give South Vietnam as many weapons as possible. Over the course of six weeks in the fall of 1972, South Vietnam ended up with the world's fourth largest air force as the Americans provided as many warplanes as they possibly could. However, neither Kissinger nor Nixon appreciated that for Thiệu any sort of peace deal calling for the withdrawal of American forces was unacceptable, and he saw the draft peace agreement that Kissinger signed in Paris on 18 October 1972 as a betrayal. On 21 October, Kissinger together with the American ambassador Ellsworth Bunker arrived at the Gia Long Palace in Saigon to show Thiệu the peace agreement. The meeting went extremely badly, with Thiệu enraged that Kissinger had not taken the time to translate the draft peace treaty into Vietnamese, bringing with him only an English-language copy. The meeting went from bad to worse as Thiệu broke down in tears and hysterically accused Kissinger of plotting with the Soviet Union and China to betray him, saying he could never accept this peace agreement. Thiệu refused to sign the peace agreement and demanded very extensive amendments that Kissinger reported to Nixon "verge on insanity".
Nixon ordered Kissinger to "push Thiệu as far as possible", but Thiệu refused to sign the peace agreement. As Kissinger returned to Washington, one of his aides recalled: "In twenty-four hours, the bottom fell out". Though Nixon had initially supported Kissinger against Thiệu, two of his most influential advisers, his Chief of Staff H.R. Haldeman and the Domestic Affairs Adviser John Ehrlichman, urged him to reconsider, arguing that Kissinger had given away too much and that Thiệu's objections had merit. Sensing Nixon's changing mood, on 24 October 1972 Thiệu called a press conference to denounce the draft agreement as a betrayal and stated that the Viet Cong "must be wiped out quickly and mercilessly". On October 26, North Vietnam published the draft agreement and accused the United States of trying to "sabotage" it by backing Thiệu. On the same day, Kissinger, who until then had never spoken to the media, called a press conference at the White House to say: "We believe peace is at hand. We believe an agreement is within sight". Kissinger later admitted that this statement was a major mistake, as it inflated hopes for peace while enraging Nixon, who saw it as weakness. Nixon came very close to disavowing Kissinger as he declared that the draft peace agreement had "differences that must be resolved". Taking up Thiệu's cause as his own, Nixon wanted 69 amendments to the draft peace agreement included in the final treaty and ordered Kissinger back to Paris to force Tho to accept them. Kissinger regarded Nixon's 69 amendments as "preposterous", as he knew Tho would never accept them. By this point, Kissinger's relations with Nixon were tense, while Nixon's "German shepherds" Haldeman and Ehrlichman intrigued against him. As expected, Tho refused to consider any of the 69 amendments and on 13 December 1972 left Paris for Hanoi after accusing the Americans of negotiating in bad faith.
Kissinger by this stage had worked himself into a state of fury after Tho walked out of the Paris talks, and he told Nixon: "They're just a bunch of shits. Tawdry, filthy shits". The National Security Adviser now advised Nixon to bomb North Vietnam to make them "talk seriously". On 14 December 1972, Nixon sent an ultimatum demanding that Tho return to Paris to "negotiate seriously" within 72 hours, or else he would bomb North Vietnam without limit. Kissinger told the media that the peace agreement was "99 percent completed" but "we will not be blackmailed into an agreement. We will not be stampeded into an agreement and, if I may say so, we will not be charmed into an agreement until its conditions are right". At the same time, Nixon ordered Admiral Thomas Hinman Moorer, the Chairman of the Joint Chiefs of Staff: "I don't want any more of this crap about the fact that we couldn't hit this target or that one. This is your chance to use military power to win this war, and if you don't, I'll hold you responsible". Following the rejection of Nixon's ultimatum, Operation Linebacker II, the so-called Christmas Bombings, was launched on 18 December and lasted until 29 December 1972. During these 11 days, which saw the heaviest bombing of the entire war, B-52 bombers flew 3,000 sorties and dropped 40,000 tons of bombs on Hanoi and Haiphong. On 26 December, in a press statement, Hanoi indicated a willingness to resume the Paris peace talks provided that the bombing stopped. On 8 January 1973, Kissinger and Tho met again in Paris and the next day reached an agreement, which in its main points was essentially the same as the one Nixon had rejected in October, with only cosmetic concessions to the Americans. Thiệu once again rejected the peace agreement, only to receive an ultimatum from Nixon: "You must decide now whether you desire to continue our alliance or whether you want me to seek a settlement with the enemy which serves U.S. interests alone".
Nixon's threat served its purpose, and Thiệu reluctantly accepted the peace agreement. On 27 January 1973, Kissinger and Tho signed a peace agreement in Paris that called for the complete withdrawal of all U.S. forces from Vietnam by March in exchange for North Vietnam freeing all the U.S. POWs. On 29 March 1973, the withdrawal of the Americans from Vietnam was complete, and on 1 April 1973 the last POWs were freed. The peace agreement put into effect the "leopard's spot" ceasefire, with the Viet Cong allowed to rule whatever parts of South Vietnam they held at the time of the ceasefire, while all of the North Vietnamese troops in South Vietnam were allowed to stay, putting the Communists in a strong position to eventually take over South Vietnam. On 15 March 1973, Nixon implied during a speech that the United States might go back into Vietnam should the Communists violate the ceasefire, and as a result Congress began debating a bill to limit American funding for military operations in Southeast Asia. In April, the CIA estimated the total number of PAVN troops in South Vietnam at 150,000 (about the same as in 1972), whereas Kissinger accused North Vietnam of moving more troops down the Ho Chi Minh Trail. That month, Kissinger met with Tho in Paris to reaffirm their commitment to the Paris peace agreement and to pressure him to stop the Khmer Rouge from overrunning Cambodia. Tho told Kissinger that the Khmer Rouge's leader, Pol Pot, was virulently anti-Vietnamese and that North Vietnam had very limited influence over him. At the same time, Kissinger reported to Nixon that "only a miracle" could save South Vietnam now, as Thiệu showed no signs of making the necessary reforms to allow the ARVN to fight. His assessment of Cambodia was even bleaker, as the Lon Nol regime had lost control of much of the countryside by the spring of 1973 and only American air strikes prevented the Khmer Rouge from taking Phnom Penh.
On 4 June 1973, the Senate passed a bill, which had already cleared the House of Representatives, to block funding for any American military operations in Indochina, and Kissinger spent much of the summer of 1973 lobbying Congress to extend the deadline to 15 August in order to keep bombing Cambodia. The Lon Nol regime was saved in 1973 by heavy American bombing, but the cutoff in funding ended the possibility of an American return to Southeast Asia. The PAVN had taken heavy losses in the Easter Offensive, but the North Vietnamese were rebuilding their strength for a new offensive. By the spring of 1973, Nixon was caught up in the Watergate scandal and was losing interest in foreign affairs. Thiệu's government was still receiving massive amounts of military aid, and his regime controlled 75% of South Vietnam's territory and 85% of the population at the time of the ceasefire. But Thiệu's unwillingness to crack down on corruption and to end the system under which ARVN officers were promoted for political loyalty instead of military merit were structural weaknesses that spelled long-term problems for his regime. South Vietnam's economy had been heavily dependent upon the hundreds of millions of dollars brought in by U.S. military spending, and the withdrawal of American forces threw the economy into recession. Even more damaging was the Arab oil shock of 1973–74, which destabilized South Vietnam's economy; by the summer of 1974, 90% of the ARVN's soldiers were not receiving enough pay to support themselves and their families. Along with North Vietnamese Politburo member Le Duc Tho, Kissinger was awarded the Nobel Peace Prize on December 10, 1973, for their work in negotiating the ceasefires contained in the Paris Peace Accords on "Ending the War and Restoring Peace in Vietnam", signed the previous January. According to Irwin Abrams, this prize was the most controversial to date. For the first time in the history of the Peace Prize, two members left the Nobel Committee in protest.
Tho rejected the award, telling Kissinger that peace had not been restored in South Vietnam. Kissinger wrote to the Nobel Committee that he accepted the award "with humility," and "donated the entire proceeds to the children of American servicemembers killed or missing in action in Indochina." After the Fall of Saigon in 1975, Kissinger attempted to return the award. As the South Vietnamese economy began to collapse under the weight of inflation caused by the Arab oil shock and rampant corruption, by the summer of 1974 the U.S. embassy reported that morale in the ARVN had fallen to dangerously low levels and that it was uncertain how much longer South Vietnam would last. Widespread protests that broke out in the summer of 1974, with the protesters accusing Thiệu and his family of corruption, indicated that the South Vietnamese regime had lost popular support. In August 1974, Congress passed a bill limiting American aid to South Vietnam to $700 million annually. By November 1974, fearing the worst for South Vietnam as the ARVN continued to lose ground, Kissinger lobbied Brezhnev during the Vladivostok summit to end Soviet military aid to North Vietnam. The same month, he also lobbied Mao and Zhou during a visit to Beijing to end Chinese military aid to North Vietnam. On 1 March 1975, the PAVN began an offensive that saw them overrun the Central Highlands, and by 25 March Hue had fallen. Thiệu was slow to withdraw his armies, and by 30 March, when Danang fell, the ARVN's best divisions were lost. With the road to Saigon wide open, it became imperative for the North Vietnamese to take the capital before the monsoons began in May, leading to a rapid march on Saigon. On 15 April 1975, Kissinger testified before the Senate Appropriations Committee, urging Congress to increase the military aid budget to South Vietnam by another $700 million to save the ARVN as the PAVN rapidly advanced on Saigon; the request was refused.
Kissinger maintained at the time, and has maintained since, that if only Congress had approved his request for another $700 million, South Vietnam would have been saved. In opposition, Karnow argued that by this point South Vietnam was so far gone, with morale in the ARVN having collapsed, that it is very doubtful anything short of sending in U.S. troops again could have saved it. On 17 April 1975, the Lon Nol regime collapsed and the Khmer Rouge took Phnom Penh. On 29 April 1975, Option IV, the largest helicopter evacuation in history, began as 70 Marine helicopters flew 8,000 people from the American embassy in Saigon to the fleet offshore. On 30 April 1975, Saigon fell to the PAVN and the war in Vietnam finally ended.
Nixon supported Pakistan's strongman, General Yahya Khan, in the Bangladesh Liberation War in 1971. Kissinger sneered at people who "bleed" for "the dying Bengalis" and ignored the first telegram from the United States consul general in East Pakistan, Archer K. Blood, and 20 members of his staff, which informed the US that their ally West Pakistan was undertaking, in Blood's words, "a selective genocide" targeting the Bengali intelligentsia, supporters of independence for East Pakistan, and the Hindu minority. In the second, more famous Blood Telegram, the word genocide was again used to describe the events, along with the charge that with its continuing support for West Pakistan the US government had "evidenced [...] moral bankruptcy". As a direct response to this dissent against US policy, Kissinger and Nixon ended Archer Blood's tenure as United States consul general in East Pakistan and put him to work in the State Department's Personnel Office. Christopher Clary argues that Nixon and Kissinger were unconsciously biased, leading them to overestimate the likelihood of Pakistani victory against Bengali rebels.
Kissinger was particularly concerned about the expansion of Soviet influence in the Indian subcontinent as a result of a treaty of friendship recently signed by India and the USSR, and sought to demonstrate to the People's Republic of China (Pakistan's ally and an enemy of both India and the USSR) the value of a tacit alliance with the United States. Kissinger also came under fire for private comments he made to Nixon during the Bangladesh–Pakistan War in which he described Indian Prime Minister Indira Gandhi as a "bitch" and a "witch". He also said "The Indians are bastards", shortly before the war. Kissinger has since expressed his regret over the comments.
As National Security Adviser under Nixon, Kissinger pioneered the policy of "détente" with the Soviet Union, seeking a relaxation in tensions between the two superpowers. As part of this strategy, he negotiated the Strategic Arms Limitation Talks (culminating in the SALT I treaty) and the Anti-Ballistic Missile Treaty with Leonid Brezhnev, General Secretary of the Soviet Communist Party. Negotiations about strategic disarmament were originally supposed to start under the Johnson administration but were postponed in protest after Warsaw Pact troops invaded Czechoslovakia in August 1968. Nixon felt his administration had neglected relations with the Western European states in his first term and decided in September 1972 that if he were reelected, 1973 would be the "Year of Europe", as the United States would focus on relations with the states of the European Economic Community (EEC), which had emerged as a serious economic rival by 1970. Applying his favorite "linkage" concept, Nixon intended that henceforward economic relations with Europe would not be separated from security relations: if the EEC states wanted changes in American tariff and monetary policies, the price would be greater defense spending on their part.
As part of the "Year of Europe", Kissinger in particular wanted to "revitalize" NATO, which he called a "decaying" alliance, as he believed that there was nothing at present to stop the Red Army from overrunning Western Europe in a conventional-forces conflict. The "linkage" concept applied all the more to the question of security, as Kissinger noted that the United States was not going to sacrifice NATO for the sake of "citrus fruits".
According to notes taken by H.R. Haldeman, Nixon "ordered his aides to exclude all Jewish-Americans from policy-making on Israel", including Kissinger. One note quotes Nixon as saying "get K. [Kissinger] out of the play—Haig handle it". In 1973, Kissinger did not feel that pressing the Soviet Union concerning the plight of Jews being persecuted there was in the interest of U.S. foreign policy. In conversation with Nixon shortly after a meeting with Israeli Prime Minister Golda Meir on March 1, 1973, Kissinger stated, "The emigration of Jews from the Soviet Union is not an objective of American foreign policy, and if they put Jews into gas chambers in the Soviet Union, it is not an American concern. Maybe a humanitarian concern."
In 1970, President Nasser of Egypt died and was succeeded by Anwar Sadat, a man who, even more than Kissinger, believed in the diplomacy of surprise, in sudden moves made to upset the diplomatic equilibrium. Sadat liked to say that his favorite game was backgammon, a game in which skill and persistence were rewarded but which was best won by sudden gambles, drawing an analogy between how he played backgammon and how he conducted his diplomacy. Under Nasser, Egypt and Saudi Arabia had engaged in what was known as the Arab Cold War, but Sadat got along very well with King Faisal of Saudi Arabia, forming an alliance between the most populous Arab state and the wealthiest Arab state.
Kissinger later admitted that he was so engrossed with the Paris peace talks to end the Vietnam War that he and others in Washington missed the significance of the Egyptian-Saudi alliance. At the same time that Sadat moved closer to Saudi Arabia, he also wanted a rapprochement with the United States and to move Egypt away from its alliance with the Soviet Union. In July 1972, Sadat expelled all 16,000 of the Soviet military personnel in Egypt as a signal that he wanted better relations with the United States. Kissinger was taken completely by surprise by Sadat's move, saying: "Why has he done me this favor? Why didn't he demand all sorts of concessions first?" Sadat expected as a reward that the United States would respond by pressuring Israel to return the Sinai to Egypt, but after his anti-Soviet move prompted no response from the United States, by November 1972 Sadat moved closer to the Soviet Union again, buying a massive amount of Soviet arms for a war he planned to launch against Israel in 1973. For Sadat, cost was no object, as the money to buy Soviet arms came from Saudi Arabia. At the same time, Faisal promised Sadat that if it should come to war, Saudi Arabia would embargo oil shipments to the West. In April 1973, the Saudi oil minister Ahmed Zaki Yamani visited Washington to meet Kissinger and told him that King Faisal was becoming more and more unhappy with the United States, saying the king wanted America to pressure Israel into returning all the lands captured in the Six-Day War of 1967. In a later interview, Yamani accused Kissinger of not taking his warning seriously, saying all Kissinger did was ask him not to speak of this threat anymore. Angry at Kissinger, Yamani warned in an interview with the "Washington Post" on 19 April 1973 that King Faisal was considering an oil embargo. At the time, the general feeling in Washington was that the Saudis were bluffing and nothing would come of their threat to impose an oil embargo.
The fact that Faisal's ineffectual half-brother King Saud had imposed a crippling oil embargo on Britain and France during the Suez War of 1956 was not considered an important precedent. The CEOs of four of America's oil companies, after speaking to Faisal, arrived in Washington in May 1973 with the warning that Faisal was considerably tougher, more intelligent, and more ruthless than the half-brother he had deposed in 1964, and that his threats were serious. Kissinger declined to meet the four CEOs. In 1970, American oil production peaked and the United States began to import more and more oil, with oil imports rising by 52% between 1969 and 1972. By 1972, 83% of American oil imports came from the Middle East. Throughout the 1960s, the price of a barrel of oil remained at $1.80, meaning that once inflation is taken into account, the real price of oil fell progressively throughout the decade, with Americans paying less for oil in 1969 than they had in 1959. Even after the price of a barrel of oil rose to $2.00 in 1971, adjusted for inflation, people in the Western nations were paying less for oil in 1971 than they had in 1958. The extremely low price of oil served as the basis for the "long summer" of prosperity and mass affluence that began in 1945. In an assessment of the Middle East done by Kissinger and his staff in the summer of 1973, the repeated statements by Sadat about waging "jihad" against Israel were dismissed as empty talk, while the warnings from King Faisal were likewise regarded as inconsequential. In September 1973, Nixon fired Rogers as Secretary of State and replaced him with Kissinger. Kissinger later stated that he had not been given enough time to get to know the Middle East, having barely settled into the State Department's offices at Foggy Bottom when Egypt and Syria attacked Israel on 6 October 1973.
Documents show that Kissinger delayed telling President Richard Nixon about the start of the Yom Kippur War in 1973 to keep him from interfering. On October 6, 1973, the Israelis informed Kissinger about the attack at 6 a.m.; Kissinger waited nearly three and a half hours before he informed Nixon. According to Kissinger, in an interview in November 2013, he was notified at 6:30 a.m. (12:30 p.m. Israel time) that war was imminent, and his urgent calls to the Soviets and Egyptians were ineffective. He says Golda Meir's decision not to preempt was wise and reasonable, balancing the risk of Israel looking like the aggressor against Israel's actual ability to strike within such a brief span of time. Moreover, Israel had built along the banks of the Suez Canal the Bar Lev Line, which was considered to be impregnable. The war began on October 6, 1973, when Egypt and Syria attacked Israel. The Egyptians broke through the gigantic sand banks of the Bar Lev Line by the simple device of water cannons, and within two hours Egyptian soldiers were in the Sinai for the first time since 1967. Kissinger published lengthy telephone transcripts from this period in the 2002 book "Crisis". On October 12, under Nixon's direction, and against Kissinger's initial advice, while Kissinger was on his way to Moscow to discuss conditions for a cease-fire, Nixon sent a message to Brezhnev giving Kissinger full negotiating authority. Kissinger wanted to stall a ceasefire to gain more time for Israel to push across the Suez Canal to the African side, and wanted to be perceived as a mere presidential emissary who had to consult the White House at every turn, as a stalling tactic. King Hussein of Jordan believed in making peace with Israel, but, knowing that the majority of his subjects were Palestinian refugees, felt compelled to send a Jordanian armored brigade to fight with the Syrians in the Golan Heights after receiving Israeli permission.
Knowing that King Hussein was a moderate and a voice for peace, and fearing that he might be overthrown by his subjects if Jordan did not fight, Meir gave her permission for the king to send his troops to fight against her nation. Kissinger commented: "Only in the Middle East is it conceivable that a belligerent would ask its adversary's approval for engaging in an act of war against it". Kissinger promised the Israeli Prime Minister Golda Meir that the United States would replace Israel's equipment losses after the war, but sought initially to delay arms shipments, as he believed that doing so would improve the odds of making peace along the lines of United Nations Security Council Resolution 242, which called for a "land for peace" deal, if an armistice were signed with Egypt and Syria gaining some territory in the Sinai and the Golan Heights respectively. The Arab concept of the "peace of the brave" (i.e. a victorious leader being magnanimous to his defeated opponents) meant there was a possibility that Sadat at least would make peace with Israel, provided that the war ended in such a way that Egypt was not perceived to be defeated. Likewise, Kissinger regarded Meir as rather arrogant and believed that an armistice ending the war in a manner that was not an unambiguous Israeli victory would make her more humble. As both Syria and Egypt lost much equipment during the fighting, the Soviet Union began to fly in new equipment starting on 12 October, and the Soviet flights to Syria and Egypt were tracked by the British radar stations in Cyprus. Though the Soviets were flying in an average of 60 flights per day, exaggerated accounts appeared in Western newspapers speaking of "one hundred flights per day". At this point, both Nixon and Kissinger began to see the October War more in terms of the Cold War than of Middle Eastern politics, both seeing the Soviet airlifts to Egypt and Syria as a Soviet power play that required an American answer.
Israel took heavy losses in men and matériel during the fighting against Egypt and Syria, and on 18 October 1973 Meir requested $850 million worth of American arms and equipment to replace its losses. Nixon, characteristically deciding to act on an epic scale, sent some $2 billion worth of arms to Israel instead of the $850 million requested, boasting that the U.S. Air Force flew more sorties to Israel in October 1973 than it had during the Berlin Airlift of 1948–49. In an interview with the British historian Robert Lacey in 1981, Kissinger admitted of the arms lift to Israel: "I made a mistake. In retrospect it was not the best considered decision we made". The arms lift enraged King Faisal of Saudi Arabia, and he retaliated on 20 October 1973 by placing a total embargo on oil shipments to the United States, in which he was joined by all of the other oil-producing Arab states except Iraq and Libya. The Arab oil embargo ended the long period of prosperity in the West that had begun in 1945, throwing the world's economy into the steepest economic contraction since the Great Depression. Lacey wrote of the impact of the Arab oil embargo of 1973–74 that for people in the West life suddenly became "slower, darker and chiller" as gasoline was rationed, the lights were turned off in Times Square, the "gas guzzler" automobiles stopped selling, speed limits became common, and restrictions were placed on weekend driving in a bid to conserve fuel. As the American automobile industry specialized in producing heavy "gas guzzler" vehicles, there was an immediate shift on the part of consumers to the lighter and more fuel-efficient vehicles produced by the Japanese and West German automobile industries, sending the American automobile industry into decline.
The years from 1945 to 1973 had been a period of unprecedented prosperity in the West, a "long summer" that many believed would never end, and its abrupt end in 1973, as the oil embargo increased the price of oil by 400% within a matter of days and threw the world's economy into a sharp recession with unemployment mounting and inflation raging, came as a profound shock. The end of what the French called the "Trente Glorieuses" ("Glorious Thirty") led to a mood of widespread pessimism in the West, with the "Financial Times" running a famous headline in late 1973 saying "The Future will be subject to Delay". In the midst of the war, in what journalist Elizabeth Drew called "Strangelove Day", Kissinger put U.S. military forces on DEFCON 3 late in the evening of October 24, in what one historian argues is "best understood as an emotional response to a misunderstanding" with the Soviet ambassador to the United States, Anatoly Dobrynin. Israel regained the territory it lost in the early fighting and gained new territories from Syria and Egypt, including land in Syria east of the previously captured Golan Heights and on the western bank of the Suez Canal, although it lost some territory on the eastern side of the Suez Canal that had been in Israeli hands since the end of the Six-Day War. During a summit in Cairo on 6 November, Kissinger asked Sadat what Faisal was like and was told: "Well, Dr. Henry, he will probably go on with you about Communism and the Jews". King Faisal's two great hatreds were Communism and Zionism, as he believed that the Soviet Union and Israel were plotting together against Islam. When King Faisal was shown an Arabic translation of "The Protocols of the Learned Elders of Zion", he instantly believed in its authenticity and talked to anyone who would listen about what he had learned, despite the fact that "The Protocols" had been exposed as a forgery in 1921.
On 7 November 1973, Kissinger flew to Riyadh to meet King Faisal and ask him to end the oil embargo in exchange for a promise to be "evenhanded" in the Arab–Israeli dispute. As the plane carrying him prepared to land in Riyadh, Kissinger was clearly nervous at the prospect of negotiating with the stern Wahhabi Faisal, who had a marked dislike of Jews. Kissinger discovered that King Faisal was a worthy companion to Lê Đức Thọ in terms of stubbornness: the king accused the United States of being biased in favor of Israel, engaged in a long rant about the balefulness of "Jewish Communists" in Russia and Israel, and, despite all of Kissinger's efforts to charm him, refused to end the oil embargo. In February 1974, King Faisal chaired the second summit of Islamic states (which, unlike the first summit Faisal had chaired in 1969, was not boycotted by Iraq and Syria), where he was acclaimed as a conquering hero who had humiliated and humbled the West by wrecking its economy; henceforward Kissinger had to face a bloc of Muslim states far more assertive and self-confident than before. Only on 19 March 1974 did the king end the oil embargo, after Sadat, whom he trusted, reported to him that the United States was being more "evenhanded" and after Kissinger had promised to sell Saudi Arabia weapons that it had previously been denied on the grounds that they might be used against Israel. More importantly, Saudi Arabia had billions of dollars invested in Western banks, and the massive bout of inflation set off by the oil embargo threatened this fortune by eroding the value of the money, giving Faisal a vested interest in helping to contain the damage he had himself inflicted on the economies of the West. Kissinger pressured the Israelis to cede some of the newly captured land back to their Arab neighbors, contributing to the first phases of Israeli–Egyptian non-aggression. 
In 1973–74, Kissinger engaged in "shuttle diplomacy", flying between Tel Aviv, Cairo and Damascus in a bid to make the armistice the basis of a permanent peace. Based on his talks with President Hafez al-Assad of Syria, Kissinger concluded that the "Syrians considered Palestine part of a 'greater Syria'". Kissinger's first meeting with Assad lasted 6 hours and 30 minutes, causing the press to believe for a moment that he had been kidnapped by the Syrians. Kissinger called negotiating with Assad "time-consuming, nerve-racking, and bizarre". In his memoirs, Kissinger described how, during the course of his 28 meetings in Damascus in 1973–74, Assad "negotiated tenaciously and daringly like a riverboat gambler to make sure he had exacted the last sliver of available concessions". Kissinger further wrote of Assad: "His tactic was to open with a statement of the most extreme position to test what the traffic would bear. He might then allow himself to be driven back to the attainable, fighting a dogged rear guard action that made clear that concessions could be exacted only at a heavy price and that discouraged excessive expectations of them. (His negotiating style was in this respect not so different from that of the Israelis, much as both of them would hate the comparison.)" By contrast, Kissinger's negotiations with Sadat, though not without difficulties, were more fruitful. The move saw a warming in U.S.–Egyptian relations, which had been bitter since the 1950s, as Egypt moved away from its former independent stance and into a close partnership with the United States. The peace was finalized in 1978 when U.S. President Jimmy Carter mediated the Camp David Accords, under which Israel returned the Sinai Peninsula in exchange for an Egyptian peace agreement that included recognition of the state of Israel. A major concern for Kissinger was the possibility of Soviet influence in the oil-rich region of the Persian Gulf. 
Iraq had been ruled by the Baath Party since 1968, and in April 1969 it came into conflict with Iran when Shah Mohammad Reza Pahlavi renounced the 1937 treaty governing the Shatt-al-Arab river. After two years of skirmishes along the border, President Ahmed Hassan al-Bakr broke off diplomatic relations with Iran on 1 December 1971. On 9 April 1972, Iraq signed a treaty of friendship with the Soviet Union. In May 1972, Nixon and Kissinger visited Tehran to tell the Shah that there would be no "second-guessing of his requests" to buy American weapons. At the same time, Nixon and Kissinger agreed to a plan of the Shah's under which the United States, together with Iran and Israel, would support the Kurdish "peshmerga" guerrillas fighting for independence from Iraq. Kissinger later wrote that after Vietnam there was no possibility of deploying American forces in the Middle East, and henceforward Iran was to act as America's surrogate in the Persian Gulf. Kissinger described the Baathist regime in Iraq as a potential threat to the United States and believed that building up Iran and supporting the "peshmerga" was the best counterweight. Kissinger wrote about the last Shah: "[he] had been restored to the throne in 1953 by American influence when a leftist government had come close to toppling him. He never forgot that; it may have been the root of his extraordinary trust in American purposes and American goodwill, and of his psychological disintegration when he sensed that friendship evaporating. On some levels excessively, even morbidly, suspicious of possible attempts to diminish his authority, he nevertheless retained an almost naive faith in the United States". An essentially weak man, the Shah found his ego greatly inflated when Nixon and Kissinger told him that he was to be America's "man in the Persian Gulf", and he began to display signs of the megalomania that characterized his reign until his overthrow in the Iranian Revolution. 
A childlike man whose upbringing was warped by his overbearing and violent father, Reza Khan, the last Shah had a pathological and desperate need for American approval that closely resembled his need for his father's approval, which he never received. Once Mohammad Reza felt he finally had the approval of the United States, his ego became inflated, leading to his belief that he was always right because America approved of him. To put further pressure on Iraq, in the winter of 1974–75 the "peshmerga" were encouraged by Iran and the United States to switch from guerrilla warfare to conventional war, marking the sharpest point in the fighting. In March 1975, Mohammad Reza signed the Algiers Accord with Vice President Saddam Hussein of Iraq, which settled the dispute over the Shatt-al-Arab river to Iran's satisfaction, at which point Iran, Israel and the United States abandoned the "peshmerga" to their fate and cut off the arms supplies. The sudden end of support caused the overexposed "peshmerga" to be rapidly defeated; as the British journalist Patrick Brogan noted, "...the Iraqis celebrated their victory in the usual manner, by executing as many of the rebels as they could lay their hands on". At the time, Kissinger stated: "Fuck the Kurds if they can't take a joke" and "covert action should not be confused with missionary work." Following a period of steady relations between the U.S. government and the Greek military regime after 1967, Secretary of State Kissinger was faced with the coup by the Greek junta in Cyprus and the Turkish invasion of Cyprus in July and August 1974. An August 1974 report in "The New York Times" revealed that Kissinger and the State Department had been informed in advance of the impending coup by the Greek junta in Cyprus. According to the journalist, the official version of events as told by the State Department was that it felt it had to warn the Greek military regime not to carry out the coup. 
The warning had been delivered by July 9, according to repeated assurances from its Athens services, that is, the U.S. embassy and the American ambassador, Henry J. Tasca, himself. Ioannis Zigdis, then a Greek MP for the Centre Union and a former minister, claimed that "the Cyprus crisis will become Kissinger's Watergate". Zigdis also stressed: "Not only did Kissinger know about the coup for the overthrow of Archbishop Makarios before July 15th, he also encouraged it, if he did not instigate it." It is unclear what evidence Zigdis had to support this allegation. Kissinger was a target of the anti-American sentiment that was a significant feature of Greek public opinion at the time, particularly among young people, who viewed the U.S. role in Cyprus as negative. In a demonstration by students in Heraklion, Crete, soon after the second phase of the Turkish invasion in August 1974, slogans such as "Kissinger, murderer", "Americans get out", "No to Partition" and "Cyprus is no Vietnam" were heard. Some years later, Kissinger expressed the opinion that the Cyprus issue had been resolved in 1974. The United States continued to recognize and maintain relationships with non-left-wing governments, democratic and authoritarian alike. John F. Kennedy's Alliance for Progress was ended in 1973. In 1974, negotiations over a new settlement for the Panama Canal began; they eventually led to the Torrijos–Carter Treaties and the handing over of the Canal to Panamanian control. Kissinger initially supported the normalization of United States–Cuba relations, broken since 1961 (all U.S.–Cuban trade was blocked in February 1962, a few weeks after the exclusion of Cuba from the Organization of American States under U.S. pressure). However, he quickly changed his mind and followed Kennedy's policy. After the involvement of the Cuban Revolutionary Armed Forces in the independence struggles in Angola and Mozambique, Kissinger said that unless Cuba withdrew its forces, relations would not be normalized. 
Cuba refused. Chilean Socialist Party presidential candidate Salvador Allende was elected by a plurality of 36.2 percent in 1970, causing serious concern in Washington, D.C. due to his openly socialist and pro-Cuban politics. The Nixon administration, with Kissinger's input, authorized the Central Intelligence Agency (CIA) to encourage a military coup that would prevent Allende's inauguration, but the plan was not successful. United States-Chile relations remained frosty during Salvador Allende's tenure, following the complete nationalization of the partially U.S.-owned copper mines and the Chilean subsidiary of the U.S.-based ITT Corporation, as well as other Chilean businesses. The U.S. claimed that the Chilean government had greatly undervalued fair compensation for the nationalization by subtracting what it deemed "excess profits". Therefore, the U.S. implemented economic sanctions against Chile. The CIA also provided funding for the mass anti-government strikes in 1972 and 1973, and extensive black propaganda in the newspaper "El Mercurio". The most expeditious way to prevent Allende from assuming office was somehow to convince the Chilean congress to confirm Jorge Alessandri as the winner of the election. Once elected by the congress, Alessandri—a party to the plot through intermediaries—was prepared to resign his presidency within a matter of days so that new elections could be held. This first, nonmilitary, approach to stopping Allende was called the Track I approach. The CIA's second approach, the Track II approach, was designed to encourage a military overthrow. On September 11, 1973, Allende died during a military coup launched by Army Commander-in-Chief Augusto Pinochet, who became President. 
A document released by the CIA in 2000 titled "CIA Activities in Chile" revealed that the United States, acting through the CIA, actively supported the military junta after the overthrow of Allende, and that it made many of Pinochet's officers into paid contacts of the CIA or U.S. military. In September 1976, Orlando Letelier, a Chilean opponent of the Pinochet regime, was assassinated in Washington, D.C. with a car bomb. Kissinger had earlier helped secure Letelier's release from prison, and had chosen to cancel a letter to Chile warning it against carrying out any political assassinations. The U.S. ambassador to Chile, David H. Popper, said that Pinochet might take as an insult any inference that he was connected with assassination plots. It has been confirmed that Pinochet directly ordered the assassination. The murder was part of Operation Condor, a covert program of political repression and assassination carried out by Southern Cone nations, in which Kissinger has been accused of involvement. On September 10, 2001, the family of Chilean general René Schneider filed a suit against Kissinger, accusing him of collaborating in arranging Schneider's kidnapping, which resulted in his death. According to phone records, Kissinger claimed to have "turned off" the operation. However, the CIA claimed that no such "stand-down" order was ever received, and Kissinger and Nixon later joked that an "incompetent" CIA had struggled to kill Schneider. A subsequent Congressional investigation found that the CIA was not directly involved in Schneider's death. The case was later dismissed by a U.S. District Court, citing separation of powers: "The decision to support a coup of the Chilean government to prevent Dr. Allende from coming to power, and the means by which the United States Government sought to effect that goal, implicate policy makers in the murky realm of foreign affairs and national security best left to the political branches." 
Decades later, the CIA admitted its involvement in the kidnapping of General Schneider, but not his murder, and subsequently paid the group responsible for his death $35,000 "to keep the prior contact secret, maintain the goodwill of the group, and for humanitarian reasons." Kissinger took a line similar to the one he had taken toward Chile when the Argentine military, led by Jorge Videla, toppled the elected government of Isabel Perón in 1976 in what the military called the National Reorganization Process, by which it consolidated power, launching brutal reprisals and "disappearances" against political opponents. An October 1987 investigative report in "The Nation" broke the story of how, in a June 1976 meeting in the Hotel Carrera in Santiago, Kissinger gave the military junta in neighboring Argentina the "green light" for its own clandestine repression against left-wing guerrillas and other dissidents, thousands of whom were kept in more than 400 secret concentration camps before they were executed. During a meeting with Argentine foreign minister César Augusto Guzzetti, Kissinger assured him that the United States was an ally, but urged him to "get back to normal procedures" quickly before the U.S. Congress reconvened and had a chance to consider sanctions. As the article published in "The Nation" noted, as the state-sponsored terror mounted, conservative Republican U.S. Ambassador to Buenos Aires Robert C. Hill "'was shaken, he became very disturbed, by the case of the son of a thirty-year embassy employee, a student who was arrested, never to be seen again,' recalled former "New York Times" reporter Juan de Onis. 'Hill took a personal interest.' He went to the Interior Minister, a general with whom he had worked on drug cases, saying, 'Hey, what about this? We're interested in this case.' He questioned (Foreign Minister Cesar) Guzzetti and, finally, President Jorge R. Videla himself. 'All he got was stonewalling; he got nowhere.' de Onis said. 
'His last year was marked by increasing disillusionment and dismay, and he backed his staff on human rights right to the hilt.'" In a letter to "The Nation" editor Victor Navasky, protesting publication of the article, Kissinger claimed: "At any rate, the notion of Hill as a passionate human rights advocate is news to all his former associates." Yet Kissinger aide Harry W. Shlaudeman later disagreed with Kissinger, telling the oral historian William E. Knight of the Association for Diplomatic Studies and Training Foreign Affairs Oral History Project: "It really came to a head when I was Assistant Secretary, or it began to come to a head, in the case of Argentina where the dirty war was in full flower. Bob Hill, who was Ambassador then in Buenos Aires, a very conservative Republican politician -- by no means liberal or anything of the kind, began to report quite effectively about what was going on, this slaughter of innocent civilians, supposedly innocent civilians -- this vicious war that they were conducting, underground war. He, at one time in fact, sent me a back-channel telegram saying that the Foreign Minister, who had just come for a visit to Washington and had returned to Buenos Aires, had gloated to him that Kissinger had said nothing to him about human rights. I don't know -- I wasn't present at the interview." Navasky later wrote in his book about being confronted by Kissinger: "'Tell me, Mr. Navasky,' [Kissinger] said in his famous guttural tones, 'how is it that a short article in an obscure journal such as yours about a conversation that was supposed to have taken place years ago about something that did or didn't happen in Argentina resulted in sixty people holding placards denouncing me a few months ago at the airport when I got off the plane in Copenhagen?'" According to declassified State Department files, Kissinger also attempted to thwart the Carter administration's efforts to halt the mass killings by Argentina's 1976–83 military dictatorship. 
In September 1976, Kissinger was actively involved in negotiations regarding the Rhodesian Bush War. Kissinger, along with South Africa's Prime Minister John Vorster, pressured Rhodesian Prime Minister Ian Smith to hasten the transition to black majority rule in Rhodesia. With FRELIMO in control of Mozambique and even South Africa withdrawing its support, Rhodesia's isolation was nearly complete. According to Smith's autobiography, Kissinger told Smith of Mrs. Kissinger's admiration for him, but Smith stated that he thought Kissinger was asking him to sign Rhodesia's "death certificate". By bringing the weight of the United States to bear and corralling the other relevant parties to put pressure on Rhodesia, Kissinger hastened the end of minority rule. The Portuguese decolonization process brought U.S. attention to the former Portuguese colony of East Timor, which lies within the Indonesian archipelago and declared its independence in 1975. Indonesian president Suharto was a strong U.S. ally in Southeast Asia who began to mobilize the Indonesian army, preparing to annex the nascent state, which had become increasingly dominated by the popular leftist Fretilin party. East Timor was a predominantly Roman Catholic nation that did not wish to join Muslim-majority Indonesia, but Suharto, in common with other Indonesian nationalists, regarded East Timor as rightfully part of Indonesia, in the same way that Indonesia had claimed Dutch New Guinea as a successor state to the Netherlands East Indies and had tried to annex Malaysia when Britain granted it independence in 1963. In December 1975, Suharto discussed the invasion plans during a meeting with Kissinger and President Ford in the Indonesian capital of Jakarta. Both Ford and Kissinger made clear that U.S. relations with Indonesia would remain strong and that the United States would not object to the proposed annexation. They only wanted it done "fast" and proposed that it be delayed until after they had returned to Washington. 
Accordingly, Suharto delayed the operation for one day, and on December 7 Indonesian forces invaded the former Portuguese colony. U.S. arms sales to Indonesia continued, and Suharto went ahead with the annexation plan. According to Ben Kiernan, the invasion and occupation resulted in the deaths of nearly a quarter of the Timorese population from 1975 to 1981. In February 1976, Kissinger considered launching air strikes against ports and military installations in Cuba, as well as deploying Marine battalions based at the U.S. Navy base at Guantanamo Bay, in retaliation for Cuban President Fidel Castro's decision in late 1975 to send troops to Angola to help the newly independent nation fend off attacks from South Africa and right-wing guerrillas. Kissinger left office when Democrat Jimmy Carter defeated Republican Gerald Ford in the 1976 presidential election. Kissinger continued to participate in policy groups, such as the Trilateral Commission, and to maintain political consulting, speaking, and writing engagements. After Kissinger left office in 1977, he was offered an endowed chair at Columbia University. There was student opposition to the appointment, which became a subject of media commentary, and Columbia canceled the appointment as a result. Kissinger was then appointed to Georgetown University's Center for Strategic and International Studies. He taught at Georgetown's Edmund Walsh School of Foreign Service for several years in the late 1970s. In 1982, with the help of a loan from the international banking firm of E.M. Warburg, Pincus and Company, Kissinger founded a consulting firm, Kissinger Associates, and became a partner in its affiliate Kissinger McLarty Associates with Mack McLarty, former chief of staff to President Bill Clinton. He also served on the board of directors of Hollinger International, a Chicago-based newspaper group, and, as of March 1999, was a director of Gulfstream Aerospace. 
In September 1989, the "Wall Street Journal"'s John Fialka disclosed that Kissinger had taken a direct economic interest in US–China relations in March 1989 with the establishment of China Ventures, Inc., a Delaware limited partnership of which he was chairman of the board and chief executive officer. Its purpose was a US$75 million investment in a joint venture with the Communist Party government's primary commercial vehicle at the time, China International Trust & Investment Corporation (CITIC). Board members were major clients of Kissinger Associates. Kissinger was criticised for not disclosing his role in the venture when called upon by ABC's Peter Jennings to comment on the morning after the June 4, 1989 Tiananmen Square massacre. Kissinger's position was generally supportive of Deng Xiaoping's decision to use the military against the demonstrating students, and he opposed economic sanctions. From 1995 to 2001, Kissinger served on the board of directors of Freeport-McMoRan, a multinational copper and gold producer with significant mining and milling operations in Papua, Indonesia. In February 2000, then-president of Indonesia Abdurrahman Wahid appointed Kissinger as a political advisor. He also served as an honorary advisor to the United States–Azerbaijan Chamber of Commerce. In 1998, in response to the 2002 Winter Olympic bid scandal, the International Olympic Committee formed a reform commission, called the "2000 Commission", on which Kissinger served. This service led in 2000 to his appointment as one of five IOC "honor members", a category the organization described as granted to "eminent personalities from outside the IOC who have rendered particularly outstanding services to it." From 2000 to 2006, Kissinger served as chairman of the board of trustees of Eisenhower Fellowships. In 2006, upon his departure from Eisenhower Fellowships, he received the Dwight D. Eisenhower Medal for Leadership and Service. 
In November 2002, he was appointed by President George W. Bush to chair the newly established National Commission on Terrorist Attacks Upon the United States to investigate the September 11 attacks. Kissinger stepped down as chairman on December 13, 2002, when queried about potential conflicts of interest, rather than reveal his business client list. In the Rio Tinto espionage case of 2009–2010, Kissinger was paid $5 million to advise the multinational mining company on how to distance itself from an employee who had been arrested in China for bribery. Kissinger—along with William Perry, Sam Nunn, and George Shultz—called upon governments to embrace the vision of a world free of nuclear weapons, and in three "Wall Street Journal" op-eds proposed an ambitious program of urgent steps to that end. The four created the Nuclear Threat Initiative to advance this agenda. In 2010, the four were featured in a documentary film entitled "Nuclear Tipping Point". The film is a visual and historical depiction of the ideas laid out in the "Wall Street Journal" op-eds and reinforces their commitment to a world without nuclear weapons and the steps that can be taken to reach that goal. In December 2008, Kissinger was given the American Patriot Award by the National Defense University Foundation "in recognition for his distinguished career in public service." Earlier that year, an NDU professor had blown the whistle on the fact that a Chilean colleague at the William J. Perry Center for Hemispheric Defense Studies of U.S. Southern Command, headquartered at NDU, had not only been a member of Pinochet's DINA death squad operation (the same organization responsible for the 1976 car bomb murder of former Chilean Foreign Minister Orlando Letelier and American aide Ronni Karpen Moffitt less than a mile from the White House), but was also accused of participating in the torture and murder of seven detainees in Chile. 
The whistleblower, Martin Edwin Andersen, was not only a senior staff member who earlier, as a senior advisor for policy planning at the Criminal Division of the U.S. Department of Justice, had been the first national security whistleblower to receive the U.S. Office of Special Counsel's "Public Servant Award", but was also the same person who had broken the story in "The Nation" on Kissinger's "green light" for Argentina's "dirty war." On November 17, 2016, Kissinger met with then President-elect Donald Trump, and the two discussed global affairs. Kissinger also met with President Trump at the White House in May 2017. In an interview with Charlie Rose on August 17, 2017, Kissinger said of President Trump: "I'm hoping for an Augustinian moment, for St. Augustine ... who in his early life followed a pattern that was quite incompatible with later on when he had a vision, and rose to sainthood. One does not expect the president to become that, but it's conceivable ..." Kissinger also argued that Russian President Vladimir Putin wanted to weaken Hillary Clinton, not elect Donald Trump. Kissinger said that Putin "thought—wrongly incidentally—that she would be extremely confrontational ... I think he tried to weaken the incoming president [Clinton]". In several articles and interviews during the Yugoslav wars, he criticized United States policies in Southeast Europe, among other things for the recognition of Bosnia and Herzegovina as a sovereign state, which he described as a foolish act. Most importantly, he dismissed the notion of Serbs and Croats being aggressors or separatists, saying that "they can't be separating from something that has never existed". In addition, he repeatedly warned the West against inserting itself into a conflict whose roots go back hundreds of years, and said that the West would do better if it allowed the Serbs and Croats to join their respective countries. 
Kissinger shared similarly critical views on Western involvement in Kosovo. In particular, he held a disparaging view of the Rambouillet Agreement. However, once the Serbs rejected the Rambouillet text and the NATO bombing began, he favored continuing the bombing, as NATO's credibility was now at stake, but dismissed the use of ground forces, claiming that it was not worth it. In 2006, it was reported in a book by Bob Woodward that Kissinger met regularly with President George W. Bush and Vice President Dick Cheney to offer advice on the Iraq War. Kissinger confirmed in recorded interviews with Woodward that the advice was the same as he had given in a column in "The Washington Post" on August 12, 2005: "Victory over the insurgency is the only meaningful exit strategy." In an interview on the BBC's "Sunday AM" on November 19, 2006, Kissinger was asked whether there was any hope left for a clear military victory in Iraq and responded, "If you mean by 'military victory' an Iraqi government that can be established and whose writ runs across the whole country, that gets the civil war under control and sectarian violence under control in a time period that the political processes of the democracies will support, I don't believe that is possible. ... I think we have to redefine the course. But I don't believe that the alternative is between military victory as it had been defined previously, or total withdrawal." In an interview with Peter Robinson of the Hoover Institution on April 3, 2008, Kissinger reiterated that even though he had supported the 2003 invasion of Iraq, he thought that the George W. Bush administration rested too much of its case for war on Saddam's supposed weapons of mass destruction. Robinson noted that Kissinger had criticized the administration for invading with too few troops, for disbanding the Iraqi Army, and for mishandling relations with certain allies. 
Kissinger said in April 2008 that "India has parallel objectives to the United States," and he called it an ally of the U.S. Kissinger was present at the opening ceremony of the 2008 Beijing Summer Olympics. A few months before the Games opened, as controversy over China's human rights record was intensifying due to criticism by Amnesty International and other groups of the widespread use of the death penalty and other issues, Kissinger told the PRC's official press agency Xinhua: “I think one should separate Olympics as a sporting event from whatever political disagreements people may have had with China. I expect that the games will proceed in the spirit for which they were designed, which is friendship among nations, and that other issues are discussed in other forums.” He said China had made huge efforts to stage the Games. “Friends of China should not use the Olympics to pressure China now.” He added that he would bring two of his grandchildren to watch the Games and planned to attend the opening ceremony. During the Games, he participated with Australian swimmer Ian Thorpe, film star Jackie Chan, and former British PM Tony Blair at a Peking University forum on the qualities that make a champion. He sat with his wife Nancy Kissinger, President George W. Bush, former President George H. W. Bush, and Foreign Minister Yang Jiechi at the men's basketball game between China and the U.S. In 2011, Kissinger published "On China", chronicling the evolution of Sino-American relations and laying out the challenges to a partnership of 'genuine strategic trust' between the U.S. and China. In his 2011 book "On China", his 2014 book "World Order" and in a 2018 interview with "Financial Times", Kissinger stated that he believes China wants to restore its historic role as the Middle Kingdom and be "the principal adviser to all humanity". Kissinger's position on this issue of U.S.–Iran talks was reported by the "Tehran Times" to be that "Any direct talks between the U.S. 
and Iran on issues such as the nuclear dispute would be most likely to succeed if they first involved only diplomatic staff and progressed to the level of secretary of state before the heads of state meet." In 2016, Kissinger said that the biggest challenge facing the Middle East was the "potential domination of the region by an Iran that is both imperial and jihadist." He further wrote in August 2017 that if the Islamic Revolutionary Guard Corps of Iran and its Shiite allies were allowed to fill the territorial vacuum left by a militarily defeated Islamic State of Iraq and the Levant, the region would be left with a land corridor extending from Iran to the Levant, "which could mark the emergence of an Iranian radical empire." Commenting on the Joint Comprehensive Plan of Action, Kissinger said that he would not have agreed to it, but that Trump's plan to end the agreement after it had been signed would "enable the Iranians to do more than us." On March 5, 2014, "The Washington Post" published an op-ed piece by Kissinger, 11 days before the Crimean referendum on whether the Autonomous Republic of Crimea should officially rejoin Ukraine or join neighboring Russia. In it, he attempted to balance the Ukrainian, Russian and Western desires for a functional state, making four main points. Kissinger also wrote: "The west speaks Ukrainian; the east speaks mostly Russian. Any attempt by one wing of Ukraine to dominate the other—as has been the pattern—would lead eventually to civil war or break up." Following the publication of his book "World Order", Kissinger participated in an interview with Charlie Rose and updated his position on Ukraine, which he saw as a possible geographical mediator between Russia and the West. 
In a question he posed to himself for illustration regarding re-conceiving policy on Ukraine, Kissinger stated: "If Ukraine is considered an outpost, then the situation is that its eastern border is the NATO strategic line, and NATO will be within of Volgograd. That will never be accepted by Russia. On the other hand, if the Russian western line is at the border of Poland, Europe will be permanently disquieted. The strategic objective should have been to see whether one can build Ukraine as a bridge between East and West, and whether one can do it as a kind of a joint effort." In December 2016, Kissinger advised then President-elect Donald Trump to accept "Crimea as a part of Russia" in an attempt to secure a rapprochement between the United States and Russia, whose relations soured as a result of the Crimean crisis. When asked if he explicitly considered Russia's sovereignty over Crimea legitimate, Kissinger answered in the affirmative, reversing the position he took in his "Washington Post" op-ed. In 2019, Kissinger wrote of the increasing tendency to give control of nuclear weapons to computers operating with artificial intelligence (AI): "Adversaries' ignorance of AI-developed configurations will become a strategic advantage". Kissinger argued that giving the power to launch nuclear weapons to computers using algorithms to make decisions would eliminate the human factor and give the advantage to the state with the most effective AI system, since a computer can make decisions about war and peace far faster than any human ever could. Just as an AI-enhanced computer can win chess games by anticipating human decision-making, an AI-enhanced computer could be useful in a crisis, because in a nuclear war the side that strikes first would have the advantage, having destroyed the opponent's nuclear capacity. 
Kissinger also noted there was always the danger that a computer would decide to start a nuclear war before diplomacy had been exhausted, or that the algorithm controlling the AI might make a decision to start a nuclear war that would not be understandable to the operators. Kissinger also warned that the use of AI to control nuclear weapons would impose "opacity" on the decision-making process, as the algorithms that control the AI system are not readily understandable, destabilizing the decision-making process: "...grand strategy requires an understanding of the capabilities and military deployments of potential adversaries. But if more and more intelligence becomes opaque, how will policy makers understand the views and abilities of their adversaries and perhaps even allies? Will many different internets emerge or, in the end, only one? What will be the implications for cooperation? For confrontation? As AI becomes ubiquitous, new concepts for its security need to emerge." At the height of Kissinger's prominence, many commented on his wit. In February 1972, at the Washington Press Club annual congressional dinner, "Kissinger mocked his reputation as a secret swinger." The insight "Power is the ultimate aphrodisiac" is widely attributed to him, although Kissinger was paraphrasing Napoleon Bonaparte. Four scholars at the College of William & Mary ranked Kissinger as the most effective U.S. Secretary of State in the 50 years to 2015. A number of activists and human rights lawyers, however, have sought his prosecution for alleged war crimes. According to historian and Kissinger biographer Niall Ferguson, however, accusing Kissinger alone of war crimes "requires a double standard" because "nearly all the secretaries of state ... and nearly all the presidents" have taken similar actions. 
But Ferguson continues, "this is not to say that it's all OK." Kissinger was interviewed in "", a documentary examining the underpinnings of the 1979 peace treaty between Israel and Egypt. Some have blamed Kissinger for injustices in American foreign policy during his tenure in government. In September 2001, relatives and survivors of General René Schneider (former head of the Chilean general staff) filed civil proceedings in Federal Court in Washington, DC, and, in April 2002, a petition for Kissinger's arrest was filed in the High Court in London by human rights campaigner Peter Tatchell, citing the destruction of civilian populations and the environment in Indochina during the years 1969–75. British-American journalist and author Christopher Hitchens authored "The Trial of Henry Kissinger", in which Hitchens calls for the prosecution of Kissinger "for war crimes, for crimes against humanity, and for offenses against common or customary or international law, including conspiracy to commit murder, kidnap, and torture". Critics on the right, such as Ray Takeyh, have faulted Kissinger for his role in the Nixon administration's opening to China and secret negotiations with North Vietnam. Takeyh writes that while rapprochement with China was a worthy goal, the Nixon administration failed to achieve any meaningful concessions from Chinese officials in return, as China continued to support North Vietnam and various "revolutionary forces throughout the Third World," "nor does there appear to be even a remote, indirect connection between Nixon and Kissinger's diplomacy and the communist leadership's decision, after Mao's bloody rule, to move away from a communist economy towards state capitalism." On Vietnam, Takeyh claims that Kissinger's negotiations with Le Duc Tho were intended only "to secure a 'decent interval' between America's withdrawal and South Vietnam's collapse." 
Johannes Kadura offers a more positive assessment of Nixon and Kissinger's strategy, arguing that the two men "simultaneously maintained a Plan A of further supporting Saigon and a Plan B of shielding Washington should their maneuvers prove futile." According to Kadura, the "decent interval" concept has been "largely misrepresented," in that Nixon and Kissinger "sought to gain time, make the North turn inward, and create a perpetual equilibrium" rather than acquiescing in the collapse of South Vietnam, but the strength of the anti-war movement and the sheer unpredictability of events in Indochina compelled them to prepare for the possibility that South Vietnam might collapse despite their best efforts. Kadura concludes: "Without Nixon, Kissinger, and Ford's clever use of triangular diplomacy ... the Soviets and the Chinese could have been tempted into a far more aggressive stance" following the "U.S. defeat in Indochina" than actually occurred. In 2011, Chimerica Media released an interview-based documentary, titled "Kissinger", in which Kissinger "reflects on some of his most important and controversial decisions" during his tenure as Secretary of State. Kissinger's record was brought up during the 2016 Democratic Party presidential primaries. Hillary Clinton had cultivated a close relationship with Kissinger, describing him as a "friend" and a source of "counsel." During the Democratic primary debates, Clinton touted Kissinger's praise for her record as Secretary of State. In response, candidate Bernie Sanders issued a critique of Kissinger's foreign policy, declaring: "I am proud to say that Henry Kissinger is not my friend. I will not take advice from Henry Kissinger." On April 3, 2020, Kissinger shared his diagnostic view of the COVID-19 pandemic, saying that it threatens the "liberal world order". Kissinger added that the virus does not know borders, although global leaders are trying to address the crisis on a mainly national basis. 
He stressed that the key is not a purely national effort but greater international cooperation. Kissinger married Ann Fleischer on February 6, 1949. They had two children, Elizabeth and David, and divorced in 1964. On March 30, 1974, he married Nancy Maginnes. They now live in Kent, Connecticut, and in New York City. Kissinger's son David Kissinger served as an executive with NBCUniversal before becoming head of Conaco, Conan O'Brien's production company. In February 1982, Kissinger underwent coronary bypass surgery at the age of 58. Kissinger described "Diplomacy" as his favorite game in a 1973 interview. Daryl Grove characterised Kissinger as one of the most influential people in the growth of soccer in the United States. Kissinger was named chairman of the North American Soccer League board of directors in 1978. Since his childhood, Kissinger has been a fan of his hometown's soccer club, SpVgg Greuther Fürth. Even during his time in office, the German Embassy informed him of the team's results every Monday morning. He is an honorary member with lifetime season tickets. In September 2012, Kissinger attended a home game in which SpVgg Greuther Fürth lost 0–2 against Schalke, after having promised years earlier that he would attend a Greuther Fürth home game if they were promoted to the Bundesliga, the top football league in Germany, from the 2. Bundesliga.
https://en.wikipedia.org/wiki?curid=13765
Hydra (genus) Hydra is a genus of small, fresh-water organisms of the phylum Cnidaria and class Hydrozoa. They are native to the temperate and tropical regions. Biologists are especially interested in "Hydra" because of their regenerative ability – they do not appear to die of old age, or indeed to age at all. "Hydra" has a tubular, radially symmetric body up to long when extended, secured by a simple adhesive foot called the basal disc. Gland cells in the basal disc secrete a sticky fluid that accounts for its adhesive properties. At the free end of the body is a mouth opening surrounded by one to twelve thin, mobile tentacles. Each tentacle is clothed with highly specialised stinging cells called cnidocytes. Cnidocytes contain specialized stinging structures called nematocysts (a type of cnida; plural: cnidae), which look like miniature light bulbs with a coiled thread inside. At the narrow outer edge of the cnidocyte is a short trigger hair called a cnidocil. Upon contact with prey, the contents of the nematocyst are explosively discharged, firing a dart-like thread containing neurotoxins into whatever triggered the release. This can paralyze the prey, especially if many hundreds of nematocysts are fired. "Hydra" has two main body layers, which makes it "diploblastic". The layers are separated by mesoglea, a gel-like substance. The outer layer is the epidermis, and the inner layer is called the gastrodermis, because it lines the stomach. The cells making up these two body layers are relatively simple. Hydramacin is a bactericide recently discovered in "Hydra"; it protects the outer layer against infection. A single "Hydra" is composed of 50,000 to 100,000 cells, which comprise three specific stem cell populations that give rise to many different cell types. These stem cells continually renew themselves in the body column. "Hydra" have two significant structures on their body: the "head" and the "foot". 
When a "Hydra" is cut in half, each half will regenerate and form into a small "Hydra"; the "head" will regenerate a "foot" and the "foot" will regenerate a "head". If the "Hydra" is sliced into many segments then the middle slices will form both a "head" and a "foot". Respiration and excretion occur by diffusion through the surface of the epidermis, while larger excreta are discharged through the mouth. The nervous system of "Hydra" is a nerve net, which is structurally simple compared to more derived animal nervous systems. "Hydra" does not have a recognizable brain or true muscles. Nerve nets connect sensory photoreceptors and touch-sensitive nerve cells located in the body wall and tentacles. The structure of the nerve net has two levels, and some individuals have only two sheets of neurons. If "Hydra" are alarmed or attacked, the tentacles can be retracted to small buds, and the body column itself can be retracted to a small gelatinous sphere. "Hydra" generally react in the same way regardless of the direction of the stimulus, and this may be due to the simplicity of the nerve nets. "Hydra" are generally sedentary or sessile, but do occasionally move quite readily, especially when hunting. They have two distinct methods for moving – 'looping' and 'somersaulting'. In looping, they bend over and attach themselves to the substrate with the mouth and tentacles and then relocate the foot, which provides the usual attachment. In somersaulting, the body bends over and makes a new place of attachment with the foot. By this process of "looping" or "somersaulting", a "Hydra" can move several inches (c. 100 mm) in a day. "Hydra" may also move by amoeboid motion of their bases or by detaching from the substrate and floating away in the current. When food is plentiful, many "Hydra" reproduce asexually by producing buds in the body wall, which grow to be miniature adults and break away when they are mature. 
When a hydra is well fed, a new bud can form every two days. When conditions are harsh, often before winter or in poor feeding conditions, sexual reproduction occurs in some "Hydra". Swellings in the body wall develop into either ovaries or testes. The testes release free-swimming gametes into the water, and these can fertilize the egg in the ovary of another individual. The fertilized eggs secrete a tough outer coating, and, as the adult dies (due to starvation or cold), these resting eggs fall to the bottom of the lake or pond to await better conditions, whereupon they hatch into nymph "Hydra". Some "Hydra" species, like "Hydra circumcincta" and "Hydra viridissima", are hermaphrodites and may produce both testes and ovaries at the same time. Many members of the Hydrozoa go through a body change from a polyp to an adult form called a medusa, which is usually the life stage where sexual reproduction occurs, but "Hydra" do not progress beyond the polyp phase. "Hydra" mainly feed on aquatic invertebrates such as "Daphnia" and "Cyclops". While feeding, "Hydra" extend their body to maximum length and then slowly extend their tentacles. Despite their simple construction, the tentacles of "Hydra" are extraordinarily extensible and can be four to five times the length of the body. Once fully extended, the tentacles are slowly manoeuvred around waiting for contact with a suitable prey animal. Upon contact, nematocysts on the tentacle fire into the prey, and the tentacle itself coils around the prey. Within 30 seconds, most of the remaining tentacles will have already joined in the attack to subdue the struggling prey. Within two minutes, the tentacles will have surrounded the prey and moved it into the opened mouth aperture. Within ten minutes, the prey will have been engulfed within the body cavity, and digestion will have started. "Hydra" are able to stretch their body wall considerably in order to digest prey more than twice their size. 
After two or three days, the indigestible remains of the prey will be discharged through the mouth aperture via contractions. The feeding behaviour of "Hydra" demonstrates the sophistication of what appears to be a simple nervous system. Some species of "Hydra" exist in a mutual relationship with various types of unicellular algae. The algae are protected from predators by "Hydra" and, in return, photosynthetic products from the algae are beneficial as a food source to "Hydra". The feeding response in "Hydra" is induced by glutathione (specifically in the reduced state as GSH) released from damaged tissue of injured prey. There are several methods conventionally used for quantification of the feeding response. In some, the duration for which the mouth remains open is measured. Other methods rely on counting the number of "Hydra" among a small population showing the feeding response after addition of glutathione. Recently, an assay for measuring the feeding response in hydra has been developed. In this method, the linear two-dimensional distance between the tip of the tentacle and the mouth of hydra was shown to be a direct measure of the extent of the feeding response. This method has been validated using a starvation model, as starvation is known to cause enhancement of the "Hydra" feeding response. "Hydra" undergoes morphallaxis (tissue regeneration) when injured or severed. Typically, "Hydra" will reproduce by simply budding off a whole new individual; the bud forms around two-thirds of the way down the body axis. When a "Hydra" is cut in half, each half will regenerate and form into a small "Hydra"; the "head" will regenerate a "foot" and the "foot" will regenerate a "head". This regeneration occurs without cell division. If the "Hydra" is sliced into many segments then the middle slices will form both a "head" and a "foot". The polarity of the regeneration is explained by two pairs of positional value gradients. 
There are both head and foot activation and inhibition gradients. The head activation and inhibition gradients work in the opposite direction to the pair of foot gradients. The evidence for these gradients was shown in the early 1900s with grafting experiments. The inhibitors for both gradients have been shown to be important in blocking bud formation. The bud forms at the location where both the head and foot gradients are low. "Hydra" are capable of regenerating from pieces of tissue from the body, and additionally from reaggregates of dissociated cells. Daniel Martinez claimed in a 1998 article in "Experimental Gerontology" that "Hydra" are biologically immortal. This publication has been widely cited as evidence that "Hydra" do not senesce (do not age), and that they are proof of the existence of non-senescing organisms generally. In 2010, Preston Estep published (also in "Experimental Gerontology") a letter to the editor arguing that the Martinez data refute the hypothesis that "Hydra" do not senesce. The controversial unlimited life span of "Hydra" has attracted much attention from scientists. Research today appears to confirm Martinez's study. "Hydra" stem cells have a capacity for indefinite self-renewal. The transcription factor "forkhead box O" (FoxO) has been identified as a critical driver of the continuous self-renewal of "Hydra". In experiments, a drastically reduced population growth resulted from FoxO down-regulation. In bilaterally symmetrical organisms (Bilateria), the transcription factor FoxO impacts stress response, lifespan, and increase in stem cells. If this transcription factor is knocked down in bilaterian model organisms, such as fruit flies and nematodes, their lifespan is significantly decreased. In experiments on "H. vulgaris" (a radially symmetrical member of phylum Cnidaria), when FoxO levels were decreased, there was a negative impact on many key features of the "Hydra", but no death was observed; it is thus believed that other factors may contribute to the apparent lack of aging in these creatures. While "Hydra" immortality is well-supported today, the implications for human aging are still controversial. There is much optimism; however, it appears that researchers still have a long way to go before they are able to understand how the results of their work might apply to the reduction or elimination of human senescence. An ortholog comparison analysis done within the last decade demonstrated that "Hydra" share a minimum of 6,071 genes with humans. "Hydra" is becoming an increasingly better model system as more genetic approaches become available. A draft of the genome of "Hydra magnipapillata" was reported in 2010.
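The two-gradient picture of regeneration polarity described above can be sketched numerically. In this toy model, every number is invented for illustration (the real gradients are shaped by diffusing activators and inhibitors, not fixed exponentials): head and foot activation fall off exponentially from their respective ends of the body axis, and the bud is assumed to form where the combined activation is lowest.

```python
import math

# Toy illustration of the two-gradient account of "Hydra" budding.
# Decay lengths and the additive combination rule are assumptions,
# chosen so the minimum lands roughly two-thirds down the body axis.

def head_gradient(x, decay=0.35):
    """Head activation, highest at the head (x = 0)."""
    return math.exp(-x / decay)

def foot_gradient(x, decay=0.15):
    """Foot activation, highest at the foot (x = 1)."""
    return math.exp(-(1 - x) / decay)

def bud_position(samples=1000):
    """Position along the body axis where both gradients are lowest."""
    xs = [i / samples for i in range(samples + 1)]
    return min(xs, key=lambda x: head_gradient(x) + foot_gradient(x))

print(round(bud_position(), 2))  # ≈ 0.61 with these illustrative parameters
```

With these made-up decay lengths the minimum falls a little under two-thirds of the way down the axis, mirroring where buds are observed to form.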
https://en.wikipedia.org/wiki?curid=13767
Hydrus Hydrus is a small constellation in the deep southern sky. It was one of twelve constellations created by Petrus Plancius from the observations of Pieter Dirkszoon Keyser and Frederick de Houtman and it first appeared on a 35-cm (14 in) diameter celestial globe published in late 1597 (or early 1598) in Amsterdam by Plancius and Jodocus Hondius. The first depiction of this constellation in a celestial atlas was in Johann Bayer's Uranometria of 1603. The French explorer and astronomer Nicolas Louis de Lacaille charted the brighter stars and gave their Bayer designations in 1756. Its name means "male water snake", as opposed to Hydra, a much larger constellation that represents a female water snake. It remains below the horizon for most Northern Hemisphere observers. The brightest star is the 2.8-magnitude Beta Hydri, also the closest reasonably bright star to the south celestial pole. Pulsating between magnitude 3.26 and 3.33, Gamma Hydri is a variable red giant 60 times the diameter of our Sun. Lying near it is VW Hydri, one of the brightest dwarf novae in the heavens. Four star systems in Hydrus have been found to have exoplanets to date, including HD 10180, which could bear up to nine planetary companions. Hydrus was one of the twelve constellations established by the Dutch astronomer Petrus Plancius from the observations of the southern sky by the Dutch explorers Pieter Dirkszoon Keyser and Frederick de Houtman, who had sailed on the first Dutch trading expedition, known as the "Eerste Schipvaart", to the East Indies. It first appeared on a 35-cm (14 in) diameter celestial globe published in 1598 in Amsterdam by Plancius with Jodocus Hondius. The first depiction of this constellation in a celestial atlas was in the German cartographer Johann Bayer's "Uranometria" of 1603. 
De Houtman included it in his southern star catalogue the same year under the Dutch name "De Waterslang", "The Water Snake", representing a type of snake encountered on the expedition rather than a mythical creature. The French explorer and astronomer Nicolas Louis de Lacaille called it "l'Hydre Mâle" on the 1756 version of his planisphere of the southern skies, distinguishing it from the feminine Hydra. The French name was retained by Jean Fortin in 1776 for his "Atlas Céleste", while Lacaille Latinised the name to Hydrus for his revised "Coelum Australe Stelliferum" of 1763. Irregular in shape, Hydrus is bordered by Mensa to the southeast, Eridanus to the east, Horologium and Reticulum to the northeast, Phoenix to the north, Tucana to the northwest and west, and Octans to the south; Lacaille had shortened Hydrus' tail to make space for this last constellation, which he had drawn up. Covering 243 square degrees and 0.589% of the night sky, it ranks 61st of the 88 constellations in size. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Hyi". The official constellation boundaries, as set by Eugène Delporte in 1930, are defined by a polygon of 12 segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −57.85° and −82.06°. As one of the deep southern constellations, it remains below the horizon at latitudes north of the 30th parallel in the Northern Hemisphere, and is circumpolar at latitudes south of the 50th parallel in the Southern Hemisphere. Indeed, Herman Melville mentions it and Argo Navis in "Moby-Dick" "beneath effulgent Antarctic Skies", highlighting his knowledge of the southern constellations from whaling voyages. A line drawn from the long axis of the Southern Cross to Beta Hydri, extended 4.5 times, will mark a point due south. Hydrus culminates at midnight around 26 October. 
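The visibility limits quoted above follow from simple spherical geometry: for an observer at latitude φ, a star of declination δ never rises when |φ − δ| > 90°, and never sets (is circumpolar) when |φ + δ| > 90°. A small sketch of this rule applied to the constellation's declination limits (the function names are ours, and the horizon is treated as ideal, ignoring refraction and extinction; the quoted 30th-parallel figure is approximate, since the constellation's northern edge at δ ≈ −57.85° in fact drops below an ideal horizon only north of about 32°N):

```python
def is_circumpolar(dec_deg, lat_deg):
    """Star never sets for this observer (ideal horizon, no refraction)."""
    return abs(lat_deg + dec_deg) > 90

def never_rises(dec_deg, lat_deg):
    """Star never gets above the horizon for this observer."""
    return abs(lat_deg - dec_deg) > 90

# Hydrus' declination limits, from the boundary figures quoted above:
HYDRUS_DEC = (-57.85, -82.06)

# At 50°S the whole of Hydrus is circumpolar...
print(all(is_circumpolar(d, -50) for d in HYDRUS_DEC))  # True
# ...while at 35°N none of it ever rises.
print(all(never_rises(d, 35) for d in HYDRUS_DEC))      # True
```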
Keyser and de Houtman assigned fifteen stars to the constellation in their Malay and Madagascan vocabulary, with a star that would later be designated Alpha Hydri marking the head, Gamma the chest, and a number of stars that were later allocated to Tucana, Reticulum, Mensa and Horologium marking the body and tail. Lacaille charted and designated 20 stars with the Bayer designations Alpha through to Tau in 1756. Of these, he used the designations Eta, Pi and Tau twice each, for three sets of two stars close together, and omitted Omicron and Xi. He assigned Rho to a star that subsequent astronomers were unable to find. Beta Hydri, the brightest star in Hydrus, is a yellow star of apparent magnitude 2.8, lying 24 light-years from Earth. It has about 104% of the mass of the Sun and 181% of the Sun's radius, with more than three times the Sun's luminosity. The spectrum of this star matches a stellar classification of G2 IV, with the luminosity class of 'IV' indicating this is a subgiant star. As such, it is a slightly more evolved star than the Sun, with the supply of hydrogen fuel at its core becoming exhausted. It is the nearest subgiant star to the Sun and one of the oldest stars in the solar neighbourhood. Thought to be between 6.4 and 7.1 billion years old, this star bears some resemblance to what the Sun may look like in the far distant future, making it an object of interest to astronomers. It is also the closest bright star to the south celestial pole. Located at the northern edge of the constellation and just southwest of Achernar is Alpha Hydri, a white sub-giant star of magnitude 2.9, situated 72 light-years from Earth. Of spectral type F0IV, it is beginning to cool and enlarge as it uses up its supply of hydrogen. It is twice as massive and 3.3 times as wide as our Sun, and 26 times more luminous. A line drawn between Alpha Hydri and Beta Centauri is bisected by the south celestial pole. 
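The apparent magnitudes quoted here are logarithmic: a difference of Δm magnitudes corresponds to a brightness (flux) ratio of 10^(Δm/2.5), so 5 magnitudes is a factor of exactly 100. A quick, purely illustrative check with the two figures above:

```python
def flux_ratio(m_fainter, m_brighter):
    """Brightness ratio implied by two apparent magnitudes."""
    return 10 ** ((m_fainter - m_brighter) / 2.5)

# Beta Hydri (m = 2.8) versus Alpha Hydri (m = 2.9):
print(round(flux_ratio(2.9, 2.8), 3))  # ≈ 1.096, i.e. Beta is ~10% brighter
# A 5-magnitude gap is a factor of 100 by construction:
print(flux_ratio(7.8, 2.8))            # 100.0
```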
In the southeastern corner of the constellation is Gamma Hydri, a red giant of spectral type M2III located 214 light-years from Earth. It is a semi-regular variable star, pulsating between magnitudes 3.26 and 3.33. Observations over five years were not able to establish its periodicity. It is around 1.5 to 2 times as massive as our Sun, and has expanded to about 60 times the Sun's diameter. It shines with about 655 times the luminosity of the Sun. Located 3° northeast of Gamma is VW Hydri, a dwarf nova of the SU Ursae Majoris type. It is a close binary system that consists of a white dwarf and another star, the former drawing off matter from the latter into a bright accretion disk. These systems are characterised by frequent eruptions and less frequent supereruptions. The former are smooth, while the latter exhibit short "superhumps" of heightened activity. One of the brightest dwarf novae in the sky, it has a baseline magnitude of 14.4 and can brighten to magnitude 8.4 during peak activity. BL Hydri is another close binary system, composed of a low-mass star and a strongly magnetic white dwarf. Known as a polar or AM Herculis variable, such a system produces polarized optical and infrared emissions, and intense soft and hard X-ray emissions pulsed at the frequency of the white dwarf's rotation period—in this case 113.6 minutes. There are two notable optical double stars in Hydrus. Pi Hydri, composed of Pi1 Hydri and Pi2 Hydri, is divisible in binoculars. Around 476 light-years distant, Pi1 is a red giant of spectral type M1III that varies between magnitudes 5.52 and 5.58. Pi2 is an orange giant of spectral type K2III shining with a magnitude of 5.7, around 488 light-years from Earth. Eta Hydri is the other optical double, composed of Eta1 and Eta2. Eta1 is a blue-white main sequence star of spectral type B9V that was suspected of being variable, and is located just over 700 light-years away. 
Eta2 has a magnitude of 4.7 and is a yellow giant star of spectral type G8.5III around 218 light-years distant, which has evolved off the main sequence and is expanding and cooling on its way to becoming a red giant. Calculations of its mass indicate it was most likely a white A-type main sequence star for most of its existence, around twice the mass of our Sun. A planet, Eta2 Hydri b, greater than 6.5 times the mass of Jupiter was discovered in 2005, orbiting around Eta2 every 711 days at a distance of 1.93 astronomical units (AU). Three other systems have been found to have planets, most notably the Sun-like star HD 10180, which has seven planets, plus possibly an additional two for a total of nine—as of 2012 more than any other system to date, including the Solar System. Lying around from the Earth, it has an apparent magnitude of 7.33. GJ 3021 is a solar twin—a star very like our own Sun—around 57 light-years distant with a spectral type G8V and magnitude of 6.7. It has a Jovian planet companion (GJ 3021 b). Orbiting about 0.5 AU from its sun, it has a minimum mass 3.37 times that of Jupiter and a period of around 133 days. The system is a complex one as the faint star GJ 3021B orbits at a distance of 68 AU; it is a red dwarf of spectral type M4V. HD 20003 is a star of magnitude 8.37. It is a yellow main sequence star of spectral type G8V a little cooler and smaller than our Sun around 143 light-years away. It has two planets that are around 12 and 13.5 times as massive as the Earth with periods of just under 12 and 34 days respectively. Hydrus contains only faint deep-sky objects. IC 1717 was a deep-sky object discovered by the Danish astronomer John Louis Emil Dreyer in the late 19th century. The object at the coordinate Dreyer observed is no longer there, and is now a mystery. It was very likely to have been a faint comet. 
PGC 6240, known as the White Rose Galaxy, is a giant spiral galaxy surrounded by shells resembling rose petals, located around 345 million light years from the Solar System. Unusually, it has cohorts of globular clusters of three distinct ages suggesting bouts of post-starburst formation following a merger with another galaxy. The constellation also contains a spiral galaxy, NGC 1511, which lies edge on to observers on Earth and is readily viewed in amateur telescopes. Located mostly in Dorado, the Large Magellanic Cloud extends into Hydrus. The globular cluster NGC 1466 is an outlying component of the galaxy, and contains many RR Lyrae-type variable stars. It has a magnitude of 11.59 and is thought to be over 12 billion years old. Two stars, HD 24188 of magnitude 6.3 and HD 24115 of magnitude 9.0, lie nearby in its foreground. NGC 602 is composed of an emission nebula and a young, bright open cluster of stars that is an outlying component on the eastern edge of the Small Magellanic Cloud, a satellite galaxy to the Milky Way. Most of the cloud is located in the neighbouring constellation Tucana.
https://en.wikipedia.org/wiki?curid=13768
Hercules Hercules is a Roman hero and god. He was the Roman equivalent of the Greek divine hero Heracles, who was the son of Zeus (Roman equivalent Jupiter) and the mortal Alcmene. In classical mythology, Hercules is famous for his strength and for his numerous far-ranging adventures. The Romans adapted the Greek hero's iconography and myths for their literature and art under the name "Hercules". In later Western art and literature and in popular culture, "Hercules" is more commonly used than "Heracles" as the name of the hero. Hercules was a multifaceted figure with contradictory characteristics, which enabled later artists and writers to pick and choose how to represent him. This article provides an introduction to representations of Hercules in the later tradition. Although he was seen as the champion of the weak and a great protector, Hercules' personal problems started at birth. Hera sent two witches to prevent the birth, but they were tricked by one of Alcmene's servants and sent to another room. Hera then sent serpents to kill him in his cradle, but Hercules strangled them both. In one version of the myth, Alcmene abandoned her baby in the woods in order to protect him from Hera's wrath, but he was found by the goddess Athena, who brought him to Hera, claiming he was an orphan child left in the woods who needed nourishment. Hera suckled Hercules at her own breast until the infant bit her nipple, at which point she pushed him away, spilling her milk across the night sky and so forming the Milky Way. She then gave the infant back to Athena and told her to take care of the baby herself. In feeding the child from her own breast, the goddess inadvertently imbued him with further strength and power. Hercules is known for his many adventures, which took him to the far reaches of the Greco-Roman world. One cycle of these adventures became canonical as the "Twelve Labours", but the list has variations. 
One traditional order of the labours is found in the "Bibliotheca" as follows: Hercules had a greater number of "deeds on the side" "(parerga)" that have been popular subjects for art, including: The Latin name "Hercules" was borrowed through Etruscan, where it is represented variously as Heracle, Hercle, and other forms. Hercules was a favorite subject for Etruscan art, and appears often on bronze mirrors. The Etruscan form "Herceler" derives from the Greek "Heracles" via syncope. A mild oath invoking Hercules ("Hercule!" or "Mehercle!") was a common interjection in Classical Latin. Hercules had a number of myths that were distinctly Roman. One of these is Hercules' defeat of Cacus, who was terrorizing the countryside of Rome. The hero was associated with the Aventine Hill through his son Aventinus. Mark Antony considered him a personal patron god, as did the emperor Commodus. Hercules received various forms of religious veneration, including as a deity concerned with children and childbirth, in part because of myths about his precocious infancy, and in part because he fathered countless children. Roman brides wore a special belt tied with the "knot of Hercules", which was supposed to be hard to untie. The comic playwright Plautus presents the myth of Hercules' conception as a sex comedy in his play "Amphitryon"; Seneca wrote the tragedy "Hercules Furens" about his bout with madness. During the Roman Imperial era, Hercules was worshipped locally from Hispania through Gaul. Tacitus records a special affinity of the Germanic peoples for Hercules. In chapter 3 of his "Germania", Tacitus states: Some have taken this as Tacitus equating the Germanic "Þunraz" with Hercules by way of "interpretatio romana". In the Roman era Hercules' Club amulets appear from the 2nd to 3rd century, distributed over the empire (including Roman Britain, cf. Cool 1986), mostly made of gold, shaped like wooden clubs. 
A specimen found in Köln-Nippes bears the inscription "DEO HER[culi]", confirming the association with Hercules. In the 5th to 7th centuries, during the Migration Period, the amulet is theorized to have rapidly spread from the Elbe Germanic area across Europe. These Germanic "Donar's Clubs" were made from deer antler, bone or wood, more rarely also from bronze or precious metals. The amulet type is replaced by the Viking Age Thor's hammer pendants in the course of the Christianization of Scandinavia from the 8th to 9th century. After the Roman Empire became Christianized, mythological narratives were often reinterpreted as allegory, influenced by the philosophy of late antiquity. In the 4th century, Servius had described Hercules' return from the underworld as representing his ability to overcome earthly desires and vices, or the earth itself as a consumer of bodies. In medieval mythography, Hercules was one of the heroes seen as a strong role model who demonstrated both valor and wisdom, while the monsters he battles were regarded as moral obstacles. One glossator noted that when Hercules became a constellation, he showed that strength was necessary to gain entrance to Heaven. Medieval mythography was written almost entirely in Latin, and original Greek texts were little used as sources for Hercules' myths. The Renaissance and the invention of the printing press brought a renewed interest in and publication of Greek literature. Renaissance mythography drew more extensively on the Greek tradition of Heracles, typically under the Romanized name Hercules, or the alternate name Alcides. In a chapter of his book "Mythologiae" (1567), the influential mythographer Natale Conti collected and summarized an extensive range of myths concerning the birth, adventures, and death of the hero under his Roman name Hercules. 
Conti begins his lengthy chapter on Hercules with an overview description that continues the moralizing impulse of the Middle Ages: Hercules, who subdued and destroyed monsters, bandits, and criminals, was justly famous and renowned for his great courage. His great and glorious reputation was worldwide, and so firmly entrenched that he'll always be remembered. In fact the ancients honored him with his own temples, altars, ceremonies, and priests. But it was his wisdom and great soul that earned those honors; noble blood, physical strength, and political power just aren't good enough. In 1600, the citizens of Avignon bestowed on Henry of Navarre (the future King Henry IV of France) the title of the "Hercule Gaulois" ("Gallic Hercules"), justifying the extravagant flattery with a genealogy that traced the origin of the House of Navarre to a nephew of Hercules' son Hispalus. Road of Hercules The Road of Hercules is a route across Southern Gaul that is associated with the path Hercules took during his 10th labor of retrieving the Cattle of Geryon from the Red Isles. Hannibal took the same path on his march towards Italy and encouraged the belief that he was the second Hercules. Primary sources often make comparisons between Hercules and Hannibal. Hannibal further tried to invoke parallels between himself and Hercules by beginning his march on Italy with a visit to the shrine of Hercules at Gades. While crossing the Alps, he performed labors in a heroic manner. A famous example was noted by Livy, when Hannibal fractured the side of a cliff that was blocking his march. Worship from women In ancient Roman society women were usually limited to two types of cults: those that addressed feminine matters such as childbirth, and those that required virginal chastity. However, there is evidence suggesting there were female worshippers of Apollo, Mars, Jupiter, and Hercules. Some scholars believe that women were completely prohibited from any of Hercules's cults. 
Others believe it was only the "Ara Maxima" that they were not allowed to worship at. Macrobius in his first book of "Saturnalia" paraphrases from Varro's aetiology: "For when Hercules was bringing the cattle of Geryon through Italy, a woman replied to the thirsty hero that she could not give him water because it was the day of the Women's Goddess and it was unlawful for a man to taste what had been prepared for her. Hercules, therefore, when he was about to offer a sacrifice, forbade the presence of women and ordered Potitius and Pinarius, who were in charge of his rites, not to allow any women to take part". Macrobius states that women were restricted in their participation in Hercules' cults, but to what extent remains ambiguous. He mentions that women were not allowed to participate in a "sacrum", a general term used to describe anything that was believed to have belonged to the gods. This could include anything from a precious item to a temple. Due to the general nature of a "sacrum", we cannot judge the extent of the prohibition from Macrobius alone. There are also ancient writings on this topic from Aulus Gellius, who discusses how Romans swore oaths. He mentioned that Roman women do not swear by Hercules, nor do Roman men swear by Castor. He went on to say that women refrain from sacrificing to Hercules. Propertius, in his poem 4.9, mentions information similar to that in Macrobius, which is evidence that he too was using Varro as a source. Worship in myth There is evidence of Hercules worship in myth in the Latin epic poem "The Aeneid". In the 8th book of the poem Aeneas finally reaches the future site of Rome, where he meets Evander and the Arcadians making sacrifices to Hercules on the banks of the Tiber river. They share a feast, and Evander tells the story of how Hercules defeated the monster Cacus, and describes him as a triumphant hero. 
Translated from the Latin text of Vergil, Evander stated: "Time brought to us in our time of need the aid and arrival of a god. For there came that mightiest avenger, the victor Hercules, proud with the slaughter and the spoils of threefold Geryon, and he drove the mighty bulls here, and the cattle filled both valley and riverside." Hercules was also mentioned in the Fables of Gaius Julius Hyginus. For example, in his fable about Philoctetes he tells the story of how Philoctetes built a funeral pyre for Hercules so his body could be consumed and raised to immortality. Hercules and the Roman triumph According to Livy (9.44.16), Romans were commemorating military victories by building statues to Hercules as early as 305 BCE. Pliny the Elder also dates Hercules worship back to the time of Evander, crediting him with erecting a statue of Hercules in the Forum Boarium. Scholars agree that there would have been 5–7 temples in Augustan Rome. Other temples are believed to be connected to Republican "triumphatores", though not necessarily as triumphal dedications. Two temples were located in the Campus Martius: the Temple of Hercules Musarum, dedicated between 187 and 179 BCE by M. Fulvius Nobilior, and the Temple of Hercules Custos, likely renovated by Sulla in the 80s BCE. In Roman works of art and in Renaissance and post-Renaissance art, Hercules can be identified by his attributes, the lion skin and the gnarled club (his favorite weapon); in mosaic he is shown tanned bronze, a virile aspect. Hercules was among the earliest figures on ancient Roman coinage, and has been the main motif of many collector coins and medals since. One example is the 20 euro Baroque Silver coin issued on September 11, 2002. The obverse side of the coin shows the Grand Staircase in the town palace of Prince Eugene of Savoy in Vienna, currently the Austrian Ministry of Finance. Gods and demi-gods hold its flights, while Hercules stands at the turn of the stairs. 
Six successive ships of the British Royal Navy, from the 18th to the 20th century, bore the name HMS "Hercules". In the French Navy, there were no fewer than nineteen ships called "Hercule", plus three more named "Alcide", another name for the same hero. Hercules' name was also used for five ships of the US Navy, four ships of the Spanish Navy, four of the Argentine Navy and two of the Swedish Navy, as well as for numerous civilian sailing and steam ships – see links at Hercules (ship). In modern aviation a military transport aircraft produced by Lockheed Martin carries the title Lockheed C-130 Hercules. A series of nineteen Italian Hercules movies were made in the late 1950s and early 1960s. The actors who played Hercules in these films were Steve Reeves, Gordon Scott, Kirk Morris, Mickey Hargitay, Mark Forest, Alan Steel, Dan Vadis, Brad Harris, Reg Park, Peter Lupus (billed as Rock Stevens) and Michael Lane. A number of English-dubbed Italian films that featured the name of Hercules in their title were not intended to be movies about Hercules.
https://en.wikipedia.org/wiki?curid=13770
History of Poland The history of Poland () spans over a thousand years, from medieval tribes, Christianization and monarchy; through Poland's Golden Age, expansionism and becoming one of the largest European powers; to its collapse and partitions, two world wars, communism, and the restoration of democracy. The roots of Polish history can be traced to the Iron Age, when the territory of present-day Poland was settled by various tribes including Celts, Scythians, Germanic clans, Sarmatians, Slavs and Balts. However, it was the West Slavic Lechites, the closest ancestors of ethnic Poles, who established permanent settlements in the Polish lands during the Early Middle Ages. The Lechitic Western Polans, a tribe whose name means "people living in open fields", dominated the region, and gave Poland, which lies in the North-Central European Plain, its name. The first ruling dynasty, the Piasts, emerged in the 10th century AD. Duke Mieszko I is considered the "de facto" creator of the Polish state and is widely recognized for his adoption of Western Christianity in 966 CE. Mieszko's dominion was formally reconstituted as a medieval kingdom in 1025 by his son Bolesław I the Brave, known for military expansion under his rule. The most successful and the last Piast monarch, Casimir III the Great, presided over a period of economic prosperity and territorial aggrandizement before his death in 1370 without male heirs. The period of the Jagiellonian dynasty in the 14th–16th centuries brought close ties with Lithuania, a cultural Renaissance in Poland and continued territorial expansion as well as Polonization that culminated in the establishment of the Polish–Lithuanian Commonwealth in 1569, one of Europe's largest countries. The Commonwealth was able to sustain the levels of prosperity achieved during the Jagiellonian period, while its political system matured as a unique noble democracy with an elective monarchy. 
From the mid-17th century, however, the huge state entered a period of decline caused by devastating wars and the deterioration of its political system. Significant internal reforms were introduced in the late 18th century, such as Europe's first Constitution of 3 May 1791, but neighboring powers did not allow the reforms to advance. The existence of the Commonwealth ended in 1795 after a series of invasions and partitions of Polish territory carried out by the Russian Empire in the east, the Kingdom of Prussia in the west and the Habsburg Monarchy in the south. From 1795 until 1918, no truly independent Polish state existed, although strong Polish resistance movements operated. The opportunity to regain sovereignty only materialized after World War I, when the three partitioning imperial powers were fatally weakened in the wake of war and revolution. The Second Polish Republic was established in 1918 and existed as an independent state until 1939, when Nazi Germany and the Soviet Union invaded Poland, marking the beginning of World War II. Millions of Polish citizens of different faiths or identities perished in the course of the Nazi occupation of Poland between 1939 and 1945 through planned genocide and extermination. A Polish government-in-exile nonetheless functioned throughout the war and the Poles contributed to the Allied victory through participation in military campaigns on both the eastern and western fronts. The westward advances of the Soviet Red Army in 1944 and 1945 compelled Nazi Germany's forces to retreat from Poland, which led to the establishment of a satellite communist country, known from 1952 as the Polish People's Republic. 
As a result of territorial adjustments mandated by the Allies at the end of World War II in 1945, Poland's geographic centre of gravity shifted towards the west and the re-defined Polish lands largely lost their historic multi-ethnic character through the extermination, expulsion and migration of various ethnic groups during and after the war. By the late 1980s, the Polish reform movement Solidarity became crucial in bringing about a peaceful transition from a planned economy and a communist state to a capitalist economic system and a liberal parliamentary democracy. This process resulted in the creation of the modern Polish state, the Third Polish Republic, founded in 1989. In prehistoric and protohistoric times, over a period of at least 600,000 years, the area of present-day Poland was intermittently inhabited by members of the genus "Homo". It went through the Stone Age, Bronze Age and Iron Age stages of development, along with the nearby regions. The Neolithic period ushered in the Linear Pottery culture, whose founders migrated from the Danube River area beginning about 5500 BC. This culture was distinguished by the establishment of the first settled agricultural communities in modern Polish territory. Later, between about 4400 and 2000 BC, the native post-Mesolithic populations would also adopt and further develop the agricultural way of life. Poland's Early Bronze Age began around 2400–2300 BC, whereas its Iron Age commenced c. 750–700 BC. One of the many cultures that have been uncovered, the Lusatian culture, spanned the Bronze and Iron Ages and left notable settlement sites. Around 400 BC, Poland was settled by Celts of the La Tène culture. They were soon followed by emerging cultures with a strong Germanic component, influenced first by the Celts and then by the Roman Empire. The Germanic peoples migrated out of the area by about 500 AD during the great Migration Period of the European Dark Ages. 
Wooded regions to the north and east were settled by Balts. According to mainstream archaeological research, Slavs have resided in modern Polish territories for only 1,500 years. However, recent genetic studies determined that people who live in the current territory of Poland include the descendants of the people who inhabited the area for thousands of years, beginning in the early Neolithic period. The West Slavic and Lechitic peoples as well as any remaining minority clans on ancient Polish lands were organized into tribal units, of which the larger ones were later known as the Polish tribes; the names of many tribes are found on the list compiled by the anonymous Bavarian Geographer in the 9th century. In the 9th and 10th centuries, these tribes gave rise to developed regions along the upper Vistula, the coast of the Baltic Sea and in Greater Poland. The latest tribal undertaking, in Greater Poland, resulted in the formation of a lasting political structure in the 10th century that became the state of Poland. Poland was established as a state under the Piast dynasty, which ruled the country between the 10th and 14th centuries. Historical records referring to the Polish state begin with the rule of Duke Mieszko I, whose reign commenced sometime before 963 and continued until his death in 992. Mieszko converted to Christianity in 966, following his marriage to Princess Doubravka of Bohemia, a fervent Christian. The event is known as the "baptism of Poland", and its date is often used to mark a symbolic beginning of Polish statehood. Mieszko completed a unification of the Lechitic tribal lands that was fundamental to the new country's existence. Following its emergence, Poland was led by a series of rulers who converted the population to Christianity, created a strong kingdom and fostered a distinctive Polish culture that was integrated into the broader European culture. Mieszko's son, Duke Bolesław I the Brave (r. 
992–1025), established a Polish Church structure, pursued territorial conquests and was officially crowned the first king of Poland in 1025, near the end of his life. Bolesław also sought to spread Christianity to parts of eastern Europe that remained pagan, but suffered a setback when his greatest missionary, Adalbert of Prague, was killed in Prussia in 997. During the Congress of Gniezno in the year 1000, Holy Roman Emperor Otto III recognized the Archbishopric of Gniezno, an institution crucial for the continuing existence of the sovereign Polish state. During the reign of Otto's successor, Holy Roman Emperor Henry II, Bolesław fought prolonged wars with the Kingdom of Germany between 1002 and 1018. Bolesław I's expansive rule overstretched the resources of the early Polish state, and it was followed by a collapse of the monarchy. Recovery took place under Casimir I the Restorer (r. 1039–58). Casimir's son Bolesław II the Generous (r. 1058–79) became involved in a conflict with Bishop Stanislaus of Szczepanów that ultimately caused his downfall. Bolesław had the bishop murdered in 1079 after being excommunicated by the Polish church on charges of adultery. This act sparked a revolt of Polish nobles that led to Bolesław's deposition and expulsion from the country. Around 1116, Gallus Anonymus wrote a seminal chronicle, the "Gesta principum Polonorum", intended as a glorification of his patron Bolesław III Wrymouth (r. 1107–38), a ruler who revived the tradition of military prowess of Bolesław I's time. Gallus' work remains a paramount written source for the early history of Poland. After Bolesław III divided Poland among his sons in his Testament of 1138, internal fragmentation eroded the Piast monarchical structures in the 12th and 13th centuries. In 1180, Casimir II the Just, who sought papal confirmation of his status as a senior duke, granted immunities and additional privileges to the Polish Church at the Congress of Łęczyca. 
Around 1220, Wincenty Kadłubek wrote his "Chronica seu originale regum et principum Poloniae", another major source for early Polish history. In 1226, one of the regional Piast dukes, Konrad I of Masovia, invited the Teutonic Knights to help him fight the Baltic Prussian pagans. The Teutonic Order destroyed the Prussians but kept their lands, which resulted in centuries of warfare between Poland and the Teutonic Knights, and later between Poland and the German Prussian state. The first Mongol invasion of Poland began in 1240; it culminated in the defeat of Polish and allied Christian forces and the death of the Silesian Piast Duke Henry II the Pious at the Battle of Legnica in 1241. In 1242, Wrocław became the first Polish municipality to be incorporated, as the period of fragmentation brought economic development and growth of towns. New cities were founded and existing settlements were granted town status per Magdeburg Law. In 1264, Bolesław the Pious granted Jewish liberties in the Statute of Kalisz. Attempts to reunite the Polish lands gained momentum in the 13th century, and in 1295, Duke Przemysł II of Greater Poland managed to become the first ruler since Bolesław II to be crowned king of Poland. He ruled over a limited territory and was soon killed. In 1300–05 King Wenceslaus II of Bohemia also reigned as king of Poland. The Piast Kingdom was effectively restored under Władysław I the Elbow-high (r. 1306–33), who became king in 1320. In 1308, the Teutonic Knights seized Gdańsk and the surrounding region of Pomerelia. King Casimir III the Great (r. 1333–70), Władysław's son and the last of the Piast rulers, strengthened and expanded the restored Kingdom of Poland, but the western provinces of Silesia (formally ceded by Casimir in 1339) and most of Polish Pomerania were lost to the Polish state for centuries to come. 
Progress was made in the recovery of the separately governed central province of Mazovia, however, and in 1340, the conquest of Red Ruthenia began, marking Poland's expansion to the east. The Congress of Kraków, a vast convocation of central, eastern, and northern European rulers probably assembled to plan an anti-Turkish crusade, took place in 1364, the same year that the future Jagiellonian University, one of the oldest European universities, was founded. On 9 October 1334, Casimir III confirmed the privileges granted to Jews in 1264 by Bolesław the Pious and allowed them to settle in Poland in great numbers. After the Polish royal line and Piast junior branch died out in 1370, Poland came under the rule of Louis I of Hungary of the Capetian House of Anjou, who presided over a union of Hungary and Poland that lasted until 1382. In 1374, Louis granted the Polish nobility the Privilege of Koszyce to assure the succession of one of his daughters in Poland. His youngest daughter Jadwiga (d. 1399) assumed the Polish throne in 1384. In 1386, Grand Duke Jogaila of Lithuania converted to Catholicism and married Queen Jadwiga of Poland. This act enabled him to become a king of Poland himself, and he ruled as Władysław II Jagiełło until his death in 1434. The marriage established a personal Polish–Lithuanian union ruled by the Jagiellonian dynasty. The first in a series of formal "unions" was the Union of Krewo of 1385, whereby arrangements were made for the marriage of Jogaila and Jadwiga. The Polish–Lithuanian partnership brought vast areas of Ruthenia controlled by the Grand Duchy of Lithuania into Poland's sphere of influence and proved beneficial for the nationals of both countries, who coexisted and cooperated in one of the largest political entities in Europe for the next four centuries. When Queen Jadwiga died in 1399, the Kingdom of Poland fell to her husband's sole possession. 
In the Baltic Sea region, Poland's struggle with the Teutonic Knights continued and culminated in the Battle of Grunwald (1410), a great victory that the Poles and Lithuanians were unable to follow up with a decisive strike against the main seat of the Teutonic Order at Malbork Castle. The Union of Horodło of 1413 further defined the evolving relationship between the Kingdom of Poland and the Grand Duchy of Lithuania. The privileges of the "szlachta" (nobility) kept expanding and in 1425 the rule of "Neminem captivabimus", which protected the noblemen from arbitrary royal arrests, was formulated. The reign of the young Władysław III (1434–44), who succeeded his father Władysław II Jagiełło and ruled as king of Poland and Hungary, was cut short by his death at the Battle of Varna against the forces of the Ottoman Empire. This disaster led to an interregnum of three years that ended with the accession of Władysław's brother Casimir IV Jagiellon in 1447. Critical developments of the Jagiellonian period were concentrated during Casimir IV's long reign, which lasted until 1492. In 1454, Royal Prussia was incorporated by Poland and the Thirteen Years' War of 1454–66 with the Teutonic state ensued. In 1466, the milestone Peace of Thorn was concluded. This treaty divided Prussia to create East Prussia, the future Duchy of Prussia, a separate entity that functioned as a fief of Poland under the administration of the Teutonic Knights. Poland also confronted the Ottoman Empire and the Crimean Tatars in the south, and in the east helped Lithuania fight the Grand Duchy of Moscow. The country was developing as a feudal state, with a predominantly agricultural economy and an increasingly dominant landed nobility. Kraków, the royal capital, was turning into a major academic and cultural center, and in 1473 the first printing press began operating there. 
With the growing importance of "szlachta" (middle and lower nobility), the king's council evolved to become by 1493 a bicameral General Sejm (parliament) that no longer represented exclusively top dignitaries of the realm. The "Nihil novi" act, adopted in 1505 by the Sejm, transferred most of the legislative power from the monarch to the Sejm. This event marked the beginning of the period known as "Golden Liberty", when the state was ruled in principle by the "free and equal" Polish nobility. In the 16th century, the massive development of folwark agribusinesses operated by the nobility led to increasingly abusive conditions for the peasant serfs who worked them. The political monopoly of the nobles also stifled the development of cities, some of which were thriving during the late Jagiellonian era, and limited the rights of townspeople, effectively holding back the emergence of the middle class. In the 16th century, Protestant Reformation movements made deep inroads into Polish Christianity and the resulting Reformation in Poland involved a number of different denominations. The policies of religious tolerance that developed in Poland were nearly unique in Europe at that time and many who fled regions torn by religious strife found refuge in Poland. The reigns of King Sigismund I the Old (1506–1548) and King Sigismund II Augustus (1548–1572) witnessed an intense cultivation of culture and science (a Golden Age of the Renaissance in Poland), of which the astronomer Nicolaus Copernicus (1473–1543) is the best known representative. Jan Kochanowski (1530–1584) was a poet and the premier artistic personality of the period. In 1525, during the reign of Sigismund I, the Teutonic Order was secularized and Duke Albert performed an act of homage before the Polish king (the Prussian Homage) for his fief, the Duchy of Prussia. Mazovia was finally fully incorporated into the Polish Crown in 1529. 
The reign of Sigismund II ended the Jagiellonian period, but gave rise to the Union of Lublin (1569), an ultimate fulfillment of the union with Lithuania. This agreement transferred Ukraine from the Grand Duchy of Lithuania to Poland and transformed the Polish–Lithuanian polity into a real union, preserving it beyond the death of the childless Sigismund II, whose active involvement made the completion of this process possible. Livonia in the far northeast was incorporated by Poland in 1561 and Poland entered the Livonian War against Russia. The executionist movement, which attempted to check the progressing domination of the state by the magnate families of Poland and Lithuania, peaked at the Sejm in Piotrków in 1562–63. On the religious front, the Polish Brethren split from the Calvinists, and the Protestant Brest Bible was published in 1563. The Jesuits, who arrived in 1564, were destined to make a major impact on Poland's history. The Union of Lublin of 1569 established the Polish–Lithuanian Commonwealth, a federal state more closely unified than the earlier political arrangement between Poland and Lithuania. The union was run largely by the nobility through the system of central parliament and local assemblies, but was headed by elected kings. The formal rule of the nobility, who were proportionally more numerous than in other European countries, constituted an early democratic system ("a sophisticated noble democracy"), in contrast to the absolute monarchies prevalent at that time in the rest of Europe. The beginning of the Commonwealth coincided with a period in Polish history when great political power was attained and advancements in civilization and prosperity took place. The Polish–Lithuanian Union became an influential participant in European affairs and a vital cultural entity that spread Western culture (with Polish characteristics) eastward. 
In the second half of the 16th century and the first half of the 17th century, the Commonwealth was one of the largest and most populous states in contemporary Europe, with an extensive territory and a population of about ten million. Its economy was dominated by export-focused agriculture. Nationwide religious toleration was guaranteed at the Warsaw Confederation in 1573. After the rule of the Jagiellonian dynasty ended in 1572, Henry of Valois (later King Henry III of France) was the winner of the first "free election" by the Polish nobility, held in 1573. He had to agree to the restrictive "pacta conventa" obligations and fled Poland in 1574 when news arrived of the vacancy of the French throne, to which he was the heir presumptive. From the start, the royal elections increased foreign influence in the Commonwealth as foreign powers sought to manipulate the Polish nobility to place candidates amicable to their interests. The reign of Stephen Báthory of Hungary followed (r. 1576–1586). He was militarily and domestically assertive and is revered in Polish historical tradition as a rare case of a successful elective king. The establishment of the legal Crown Tribunal in 1578 meant a transfer of many appellate cases from the royal to noble jurisdiction. A period of rule under the Swedish House of Vasa began in the Commonwealth in the year 1587. The first two kings from this dynasty, Sigismund III (r. 1587–1632) and Władysław IV (r. 1632–1648), repeatedly attempted to intrigue for accession to the throne of Sweden, which was a constant source of distraction for the affairs of the Commonwealth. At that time, the Catholic Church embarked on an ideological counter-offensive and the Counter-Reformation claimed many converts from Polish and Lithuanian Protestant circles. In 1596, the Union of Brest split the Eastern Christians of the Commonwealth to create the Uniate Church of the Eastern Rite, but subject to the authority of the pope. 
The Zebrzydowski rebellion against Sigismund III unfolded in 1606–1608. Seeking supremacy in Eastern Europe, the Commonwealth fought wars with Russia between 1605 and 1618 in the wake of Russia's Time of Troubles; the series of conflicts is referred to as the Polish–Muscovite War or the "Dymitriads". The efforts resulted in expansion of the eastern territories of the Polish–Lithuanian Commonwealth, but the goal of taking over the Russian throne for the Polish ruling dynasty was not achieved. Sweden sought supremacy in the Baltic during the Polish–Swedish wars of 1617–1629, and the Ottoman Empire pressed from the south in the Battles at Cecora in 1620 and Khotyn in 1621. The agricultural expansion and serfdom policies in Polish Ukraine resulted in a series of Cossack uprisings. Allied with the Habsburg Monarchy, the Commonwealth did not directly participate in the Thirty Years' War. Władysław IV's reign was mostly peaceful, with a Russian invasion in the form of the Smolensk War of 1632–1634 successfully repelled. The Orthodox Church hierarchy, banned in Poland after the Union of Brest, was re-established in 1635. During the reign of John II Casimir Vasa (r. 1648–1668), the third and last king of his dynasty, the nobles' democracy fell into decline as a result of foreign invasions and domestic disorder. These calamities multiplied rather suddenly and marked the end of the Polish Golden Age. Their effect was to render the once powerful Commonwealth increasingly vulnerable to foreign intervention. The Cossack Khmelnytsky Uprising of 1648–1657 engulfed the south-eastern regions of the Polish crown; its long-term effects were disastrous for the Commonwealth. The first "liberum veto" (a parliamentary device that allowed any member of the Sejm to dissolve a current session immediately) was exercised by a deputy in 1652. This practice would eventually weaken Poland's central government critically. 
In the Treaty of Pereyaslav (1654), the Ukrainian rebels declared themselves subjects of the Tsar of Russia. The Second Northern War raged through the core Polish lands in 1655–1660; it included a brutal and devastating invasion of Poland referred to as the Swedish Deluge. The war ended in 1660 with the Treaty of Oliva, which resulted in the loss of some of Poland's northern possessions. In 1657 the Treaty of Bromberg established the independence of the Duchy of Prussia. The Commonwealth forces did well in the Russo-Polish War (1654–1667), but the end result was the permanent division of Ukraine between Poland and Russia, as agreed to in the Truce of Andrusovo (1667). Towards the end of the war, Lubomirski's rebellion, a major magnate revolt against the king, destabilized and weakened the country. The large-scale slave raids of the Crimean Tatars also had highly deleterious effects on the Polish economy. "Merkuriusz Polski", the first Polish newspaper, was published in 1661. In 1668, grief-stricken at the recent death of his wife and frustrated by the disastrous political setbacks of his reign, John II Casimir abdicated the throne and fled to France. King Michał Korybut Wiśniowiecki, a native Pole, was elected to replace John II Casimir in 1669. The Polish–Ottoman War (1672–1676) broke out during his reign, which ended with his death in 1673, and continued under his successor, John III Sobieski (r. 1674–1696). Sobieski intended to pursue Baltic area expansion (and to this end he signed the secret Treaty of Jaworów with France in 1675), but was forced instead to fight protracted wars with the Ottoman Empire. By doing so, Sobieski briefly revived the Commonwealth's military might. He defeated the Ottoman forces at the Battle of Khotyn in 1673 and decisively helped deliver Vienna from a Turkish onslaught at the Battle of Vienna in 1683.
Sobieski's reign marked the last high point in the history of the Commonwealth: in the first half of the 18th century, Poland ceased to be an active player in international politics. The Treaty of Perpetual Peace (1686) with Russia was the final border settlement between the two countries before the First Partition of Poland in 1772. The Commonwealth, subjected to almost constant warfare until 1720, suffered enormous population losses and massive damage to its economy and social structure. The government became ineffective in the wake of large-scale internal conflicts, corrupted legislative processes and manipulation by foreign interests. The nobility fell under the control of a handful of feuding magnate families with established territorial domains. The urban population and infrastructure fell into ruin, together with most peasant farms, whose inhabitants were subjected to increasingly extreme forms of serfdom. The development of science, culture and education came to a halt or regressed. The royal election of 1697 brought a ruler of the Saxon House of Wettin to the Polish throne: Augustus II the Strong (r. 1697–1733), who was able to assume the throne only by agreeing to convert to Roman Catholicism. He was succeeded by his son Augustus III (r. 1734–1763). The reigns of the Saxon kings (who were both simultaneously prince-electors of Saxony) were disrupted by competing candidates for the throne and witnessed further disintegration of the Commonwealth. The Great Northern War of 1700–1721, a period seen by contemporaries as a temporary eclipse, may have been the fatal blow that brought down the Polish political system. Stanisław Leszczyński was installed as king in 1704 under Swedish protection, but lasted only a few years.
The Silent Sejm of 1717 marked the beginning of the Commonwealth's existence as a Russian protectorate: the Tsardom would thereafter guarantee the reform-impeding Golden Liberty of the nobility in order to preserve the Commonwealth's weak central authority and a state of perpetual political impotence. In a resounding break with traditions of religious tolerance, Protestants were executed during the Tumult of Thorn in 1724. In 1732, Russia, Austria and Prussia, Poland's three increasingly powerful and scheming neighbors, entered into the secret Treaty of the Three Black Eagles with the intention of controlling the future royal succession in the Commonwealth. The War of the Polish Succession was fought in 1733–1735 to assist Leszczyński in assuming the throne of Poland for a second time. Amidst considerable foreign involvement, his efforts were unsuccessful. The Kingdom of Prussia became a strong regional power and succeeded in wresting the historically Polish province of Silesia from the Habsburg Monarchy in the Silesian Wars; it thus constituted an ever-greater threat to Poland's security. The personal union between the Commonwealth and the Electorate of Saxony did give rise to the emergence of a reform movement in the Commonwealth and the beginnings of the Polish Enlightenment culture, the major positive developments of this era. The first Polish public library was the Załuski Library in Warsaw, opened to the public in 1747. During the later part of the 18th century, fundamental internal reforms were attempted in the Polish–Lithuanian Commonwealth as it slid into extinction. The reform activity, initially promoted by the magnate Czartoryski family faction known as the "Familia", provoked a hostile reaction and military response from neighboring powers, but it did create conditions that fostered economic improvement.
The most populous urban center, the capital city of Warsaw, replaced Danzig (Gdańsk) as the leading trade center, and the importance of the more prosperous urban social classes increased. The last decades of the independent Commonwealth's existence were characterized by aggressive reform movements and far-reaching progress in the areas of education, intellectual life, art and the evolution of the social and political system. The royal election of 1764 resulted in the elevation of Stanisław August Poniatowski, a refined and worldly aristocrat connected to the Czartoryski family, but hand-picked and imposed by Empress Catherine the Great of Russia, who expected him to be her obedient follower. Stanisław August ruled the Polish–Lithuanian state until its dissolution in 1795. The king spent his reign torn between his desire to implement reforms necessary to save the failing state and the perceived necessity of remaining in a subordinate relationship to his Russian sponsors. The Bar Confederation (1768–1772) was a rebellion of nobles directed against Russia's influence in general and Stanisław August, who was seen as its representative, in particular. It was fought to preserve Poland's independence and the nobility's traditional interests. After several years, it was brought under control by forces loyal to the king and those of the Russian Empire. Following the suppression of the Bar Confederation, parts of the Commonwealth were divided up among Prussia, Austria and Russia in 1772 at the instigation of Frederick the Great of Prussia, an action that became known as the First Partition of Poland: the outer provinces of the Commonwealth were seized by agreement among the country's three powerful neighbors and only a rump state remained. In 1773, the "Partition Sejm" ratified the partition under duress as a "fait accompli". 
However, it also established the Commission of National Education, a pioneering educational authority in Europe, often called the world's first ministry of education. The long-lasting session of parliament convened by King Stanisław August is known as the Great Sejm or Four-Year Sejm; it first met in 1788. Its landmark achievement was the passing of the Constitution of 3 May 1791, the first codified supreme law of the state in modern Europe. A moderately reformist document condemned by detractors as sympathetic to the ideals of the French Revolution, it soon generated strong opposition from the conservative circles of the Commonwealth's upper nobility and from Empress Catherine of Russia, who was determined to prevent the rebirth of a strong Commonwealth. The nobility's Targowica Confederation, formed in the Russian imperial capital of Saint Petersburg, appealed to Catherine for help, and in May 1792, the Russian army entered the territory of the Commonwealth. The Polish–Russian War of 1792, a defensive war fought by the forces of the Commonwealth against Russian invaders, ended when the Polish king, convinced of the futility of resistance, capitulated by joining the Targowica Confederation. The Russian-allied confederation took over the government, but Russia and Prussia in 1793 arranged for the Second Partition of Poland anyway. The partition left the country with a critically reduced territory that rendered it essentially incapable of an independent existence. The Commonwealth's Grodno Sejm of 1793, the last Sejm of the state's existence, was compelled to confirm the new partition. Radicalized by recent events, Polish reformers (whether in exile or still resident in the reduced area remaining to the Commonwealth) were soon working on preparations for a national insurrection. Tadeusz Kościuszko, a popular general and a veteran of the American Revolution, was chosen as its leader.
He returned from abroad and issued Kościuszko's proclamation in Kraków on March 24, 1794. It called for a national uprising under his supreme command. Kościuszko emancipated many peasants in order to enroll them as "kosynierzy" in his army, but the hard-fought insurrection, despite widespread national support, proved incapable of generating the foreign assistance necessary for its success. In the end, it was suppressed by the combined forces of Russia and Prussia, with Warsaw captured in November 1794 in the aftermath of the Battle of Praga. In 1795, a Third Partition of Poland was undertaken by Russia, Prussia and Austria as a final division of territory that resulted in the effective dissolution of the Polish–Lithuanian Commonwealth. King Stanisław August Poniatowski was escorted to Grodno, forced to abdicate, and retired to Saint Petersburg. Tadeusz Kościuszko, initially imprisoned, was allowed to emigrate to the United States in 1796. The response of the Polish leadership to the last partition is a matter of historical debate. Literary scholars found that the dominant emotion of the first decade was despair that produced a moral desert ruled by violence and treason. On the other hand, historians have looked for signs of resistance to foreign rule. Apart from those who went into exile, the nobility took oaths of loyalty to their new rulers and served as officers in their armies. Although no sovereign Polish state existed between 1795 and 1918, the idea of Polish independence was kept alive throughout the 19th century. There were a number of uprisings and other armed undertakings waged against the partitioning powers. Military efforts after the partitions were first based on the alliances of Polish émigrés with post-revolutionary France. Jan Henryk Dąbrowski's Polish Legions fought in French campaigns outside of Poland between 1797 and 1802 in hopes that their involvement and contribution would be rewarded with the liberation of their Polish homeland. 
The Polish national anthem, "Poland Is Not Yet Lost", or "Dąbrowski's Mazurka", was written in praise of his actions by Józef Wybicki in 1797. The Duchy of Warsaw, a small, semi-independent Polish state, was created in 1807 by Napoleon in the wake of his defeat of Prussia and the signing of the Treaties of Tilsit with Emperor Alexander I of Russia. The Army of the Duchy of Warsaw, led by Józef Poniatowski, participated in numerous campaigns in alliance with France, including the successful Austro-Polish War of 1809, which, combined with the outcomes of other theaters of the War of the Fifth Coalition, resulted in an enlargement of the duchy's territory. The French invasion of Russia in 1812 and the German Campaign of 1813 saw the duchy's last military engagements. The Constitution of the Duchy of Warsaw abolished serfdom as a reflection of the ideals of the French Revolution, but it did not promote land reform. After Napoleon's defeat, a new European order was established at the Congress of Vienna, which met in the years 1814 and 1815. Adam Jerzy Czartoryski, a former close associate of Emperor Alexander I, became the leading advocate for the Polish national cause. The Congress implemented a new partition scheme, which took into account some of the gains realized by the Poles during the Napoleonic period. The Duchy of Warsaw was replaced in 1815 with a new Kingdom of Poland, unofficially known as Congress Poland. The residual Polish kingdom was joined to the Russian Empire in a personal union under the Russian tsar and it was allowed its own constitution and military. East of the kingdom, large areas of the former Polish–Lithuanian Commonwealth remained directly incorporated into the Russian Empire as the Western Krai. These territories, along with Congress Poland, are generally considered to form the Russian Partition. 
The Russian, Prussian, and Austrian "partitions" are informal names for the lands of the former Commonwealth, not actual units of administrative division of Polish–Lithuanian territories after partitions. The Prussian Partition included a portion separated as the Grand Duchy of Posen. Peasants under the Prussian administration were gradually enfranchised under the reforms of 1811 and 1823. The limited legal reforms in the Austrian Partition were overshadowed by its rural poverty. The Free City of Cracow was a tiny republic created by the Congress of Vienna under the joint supervision of the three partitioning powers. Despite a political situation that was bleak from the standpoint of Polish patriots, economic progress was made in the lands taken over by foreign powers, because the period after the Congress of Vienna witnessed significant development in the building of early industry. Economic historians' new estimates of GDP per capita for 1790–1910 confirm the hypothesis of the semi-peripheral development of Polish territories in the 19th century and their slow process of catching up with the core economies. The increasingly repressive policies of the partitioning powers led to resistance movements in partitioned Poland, and in 1830 Polish patriots staged the November Uprising. This revolt developed into a full-scale war with Russia, but the leadership was taken over by Polish conservatives who were reluctant to challenge the empire and hostile to broadening the independence movement's social base through measures such as land reform. Despite the significant resources mobilized, a series of errors by several successive chief commanders appointed by the insurgent Polish National Government led to the defeat of its forces by the Russian army in 1831. Congress Poland lost its constitution and military, but formally remained a separate administrative unit within the Russian Empire.
After the defeat of the November Uprising, thousands of former Polish combatants and other activists emigrated to Western Europe. This phenomenon, known as the Great Emigration, soon dominated Polish political and intellectual life. Together with the leaders of the independence movement, the Polish community abroad included the greatest Polish literary and artistic minds, including the Romantic poets Adam Mickiewicz, Juliusz Słowacki, Cyprian Norwid, and the composer Frédéric Chopin. In occupied and repressed Poland, some sought progress through nonviolent activism focused on education and economy, known as organic work; others, in cooperation with the emigrant circles, organized conspiracies and prepared for the next armed insurrection. The planned national uprising failed to materialize because the authorities in the partitions found out about secret preparations. The Greater Poland uprising ended in a fiasco in early 1846. In the Kraków uprising of February 1846, patriotic action was combined with revolutionary demands, but the result was the incorporation of the Free City of Cracow into the Austrian Partition. The Austrian officials took advantage of peasant discontent and incited villagers against the noble-dominated insurgent units. This resulted in the Galician slaughter of 1846, a large-scale rebellion of serfs seeking relief from their post-feudal condition of mandatory labor as practiced in "folwarks". The uprising freed many from bondage and hastened decisions that led to the abolition of Polish serfdom in the Austrian Empire in 1848. A new wave of Polish involvement in revolutionary movements soon took place in the partitions and in other parts of Europe in the context of the Spring of Nations revolutions of 1848 (e.g. Józef Bem's participation in the revolutions in Austria and Hungary). 
The 1848 German revolutions precipitated the Greater Poland uprising of 1848, in which peasants in the Prussian Partition, who were by then largely enfranchised, played a prominent role. As a matter of continuous policy, the Russian autocracy kept assailing Polish national core values of language, religion and culture. In consequence, despite the limited liberalization measures allowed in Congress Poland under the rule of Tsar Alexander II of Russia, a renewal of popular liberation activities took place in 1860–1861. During large-scale demonstrations in Warsaw, Russian forces inflicted numerous casualties on the civilian participants. The "Red", or left-wing faction of Polish activists, which promoted peasant enfranchisement and cooperated with Russian revolutionaries, became involved in immediate preparations for a national uprising. The "White", or right-wing faction, was inclined to cooperate with the Russian authorities and countered with partial reform proposals. In order to cripple the manpower potential of the Reds, Aleksander Wielopolski, the conservative leader of the government of Congress Poland, arranged for a partial selective conscription of young Poles for the Russian army in the years 1862 and 1863. This action hastened the outbreak of hostilities. The January Uprising, joined and led after the initial period by the Whites, was fought by partisan units against a vastly superior enemy. The uprising lasted from January 1863 to the spring of 1864, when Romuald Traugutt, the last supreme commander of the insurgency, was captured by the tsarist police. On 2 March 1864, the Russian authorities, compelled by the uprising to compete for the loyalty of Polish peasants, officially published an enfranchisement decree in Congress Poland along the lines of an earlier land reform proclamation of the insurgents. The act created the conditions necessary for the development of the capitalist system on central Polish lands.
At the time when most Poles realized the futility of armed resistance without external support, the various sections of Polish society were undergoing deep and far-reaching evolution in the areas of social, economic and cultural development. The failure of the January Uprising in Poland caused a major psychological trauma and became a historic watershed; indeed, it sparked the development of modern Polish nationalism. The Poles, subjected within the territories under the Russian and Prussian administrations to still stricter controls and increased persecution, sought to preserve their identity in non-violent ways. After the uprising, Congress Poland was downgraded in official usage from the "Kingdom of Poland" to the "Vistula Land" and was more fully integrated into Russia proper, but not entirely obliterated. The Russian and German languages were imposed in all public communication, and the Catholic Church was not spared from severe repression. Public education was increasingly subjected to Russification and Germanisation measures. Illiteracy was reduced, most effectively in the Prussian partition, but education in the Polish language was preserved mostly through unofficial efforts. The Prussian government pursued German colonization, including the purchase of Polish-owned land. On the other hand, the region of Galicia (western Ukraine and southern Poland) experienced a gradual relaxation of authoritarian policies and even a Polish cultural revival. Economically and socially backward, it was under the milder rule of the Austro-Hungarian Monarchy and from 1867 was increasingly allowed limited autonomy. "Stańczycy", a conservative Polish pro-Austrian faction led by great land owners, dominated the Galician government. The Polish Academy of Learning (an academy of sciences) was founded in Kraków in 1872. 
Social activities termed "organic work" consisted of self-help organizations that promoted economic advancement and work on improving the competitiveness of Polish-owned businesses, industrial, agricultural or other. New commercial methods of generating higher productivity were discussed and implemented through trade associations and special interest groups, while Polish banking and cooperative financial institutions made the necessary business loans available. The other major area of effort in organic work was educational and intellectual development of the common people. Many libraries and reading rooms were established in small towns and villages, and numerous printed periodicals manifested the growing interest in popular education. Scientific and educational societies were active in a number of cities. Such activities were most pronounced in the Prussian Partition. Positivism in Poland replaced Romanticism as the leading intellectual, social and literary trend. It reflected the ideals and values of the emerging urban bourgeoisie. Around 1890, the urban classes gradually abandoned the positivist ideas and came under the influence of modern pan-European nationalism. Under the partitioning powers, economic diversification and progress, including large-scale industrialisation, were introduced in the traditionally agrarian Polish lands, but this development turned out to be very uneven. Advanced agriculture was practiced in the Prussian Partition, except for Upper Silesia, where the coal-mining industry created a large labor force. The densest network of railroads was built in German-ruled western Poland. In Russian Congress Poland, a striking growth of industry, railways and towns took place, all against the background of an extensive, but less productive agriculture. The industrial initiative, capital and know-how were provided largely by entrepreneurs who were not ethnic Poles. 
Warsaw (a metallurgical center) and Łódź (a textiles center) grew rapidly, as did the total proportion of urban population, making the region the most economically advanced in the Russian Empire (industrial production exceeded agricultural production there by 1909). The coming of the railways spurred some industrial growth even in the vast Russian Partition territories outside of Congress Poland. The Austrian Partition was rural and poor, except for the industrialized Cieszyn Silesia area. Galician economic expansion after 1890 included oil extraction and resulted in the growth of Lemberg (Lwów, Lviv) and Kraków. Economic and social changes involving land reform and industrialization, combined with the effects of foreign domination, altered the centuries-old social structure of Polish society. Among the newly emergent strata were wealthy industrialists and financiers, distinct from the traditional, but still critically important landed aristocracy. The intelligentsia, an educated professional and business middle class, often originated from the lower gentry, landless or estranged from their rural possessions, and from townspeople. Many smaller agricultural enterprises based on serfdom did not survive the land reforms. The industrial proletariat, a new underprivileged class, was composed mainly of poor peasants or townspeople forced by deteriorating conditions to migrate and search for work in urban centers in their countries of origin or abroad. Millions of residents of the former Commonwealth of various ethnic groups worked or settled in Europe and in North and South America. Social and economic changes were partial and gradual. The degree of industrialisation, relatively fast-paced in some areas, lagged behind the advanced regions of Western Europe. The three partitions developed different economies and were more economically integrated with their mother states than with each other.
In the Prussian Partition, for example, agricultural production depended heavily on the German market, whereas the industrial sector of Congress Poland relied more on the Russian market. In the 1870s–1890s, large-scale socialist, nationalist, agrarian and other political movements of great ideological fervor became established in partitioned Poland and Lithuania, along with corresponding political parties to promote them. Of the major parties, the socialist First Proletariat was founded in 1882, the Polish League (precursor of National Democracy) in 1887, the Polish Social Democratic Party of Galicia and Silesia in 1890, the Polish Socialist Party in 1892, the Marxist Social Democracy of the Kingdom of Poland and Lithuania in 1893, the agrarian People's Party of Galicia in 1895 and the Jewish socialist Bund in 1897. Regional Christian democratic associations allied with the Catholic Church were also active; they united into the Polish Christian Democratic Party in 1919. The main minority ethnic groups of the former Commonwealth, including Ukrainians, Lithuanians, Belarusians and Jews, were becoming involved in their own national movements and plans, which met with disapproval on the part of those Polish independence activists who counted on an eventual rebirth of the Commonwealth or the rise of a Commonwealth-inspired federal structure (a political movement referred to as Prometheism). Around the start of the 20th century, the Young Poland cultural movement, centered in Austrian Galicia, took advantage of a milieu conducive to liberal expression in that region and was the source of Poland's finest artistic and literary productions. In this same era, Marie Skłodowska Curie, a pioneer radiation scientist, performed her groundbreaking research in Paris. The Revolution of 1905–1907 in Russian Poland, the result of many years of pent-up political frustrations and stifled national ambitions, was marked by political maneuvering, strikes and rebellion.
The revolt was part of much broader disturbances throughout the Russian Empire associated with the general Revolution of 1905. In Poland, the principal revolutionary figures were Roman Dmowski and Józef Piłsudski. Dmowski was associated with the right-wing nationalist movement National Democracy, whereas Piłsudski was associated with the Polish Socialist Party. As the authorities re-established control within the Russian Empire, the revolt in Congress Poland, placed under martial law, withered as well, partially as a result of tsarist concessions in the areas of national and workers' rights, including Polish representation in the newly created Russian Duma. The collapse of the revolt in the Russian Partition, coupled with intensified Germanization in the Prussian Partition, left Austrian Galicia as the territory where Polish patriotic action was most likely to flourish. In the Austrian Partition, Polish culture was openly cultivated, and in the Prussian Partition, there were high levels of education and living standards, but the Russian Partition remained of primary importance for the Polish nation and its aspirations. About 15.5 million Polish-speakers lived in the territories most densely populated by Poles: the western part of the Russian Partition, the Prussian Partition and the western Austrian Partition. Ethnically Polish settlement spread over a large area further to the east, with its greatest concentration in the Vilnius Region, but amounted to just over 20% of that number. Polish paramilitary organizations oriented toward independence, such as the Union of Active Struggle, were formed in 1908–1914, mainly in Galicia. The Poles were divided and their political parties fragmented on the eve of World War I, with Dmowski's National Democracy (pro-Entente) and Piłsudski's faction assuming opposing positions.
The outbreak of World War I in the Polish lands offered Poles unexpected hopes for achieving independence as a result of the turbulence that engulfed the empires of the partitioning powers. All three of the monarchies that had benefited from the partition of Polish territories (Germany, Austria and Russia) were dissolved by the end of the war, and many of their territories were dispersed into new political units. At the start of the war, the Poles found themselves conscripted into the armies of the partitioning powers in a war that was not theirs. Furthermore, they were frequently forced to fight each other, since the armies of Germany and Austria were allied against Russia. Piłsudski's paramilitary units stationed in Galicia were turned into the Polish Legions in 1914 and as a part of the Austro-Hungarian Army fought on the Russian front until 1917, when the formation was disbanded. Piłsudski, who refused demands that his men fight under German command, was arrested and imprisoned by the Germans and became a heroic symbol of Polish nationalism. Due to a series of German victories on the Eastern Front, the area of Congress Poland became occupied by the Central Powers of Germany and Austria; Warsaw was captured by the Germans on 5 August 1915. In the Act of 5th November 1916, a fresh incarnation of the Kingdom of Poland ("Królestwo Regencyjne") was proclaimed by Germany and Austria on formerly Russian-controlled territories, within the German "Mitteleuropa" scheme. The sponsor states were never able to agree on a candidate to assume the throne, however; rather, it was governed in turn by German and Austrian governor-generals, a Provisional Council of State, and a Regency Council. This increasingly autonomous puppet state existed until November 1918, when it was replaced by the newly established Republic of Poland. 
The existence of this "kingdom" and its planned Polish army had a positive effect on the Polish national efforts on the Allied side, but in the Treaty of Brest-Litovsk of March 1918 Germany, victorious in the east, imposed harsh conditions on defeated Russia and ignored Polish interests. Toward the end of the war, the German authorities engaged in massive, purposeful devastation of industrial and other economic potential of Polish lands in order to impoverish the country, a likely future competitor of Germany. The independence of Poland had been campaigned for in Russia and in the West by Dmowski, and in the United States by Ignacy Jan Paderewski. Tsar Nicholas II of Russia, and then the leaders of the February Revolution and the October Revolution of 1917, installed governments that declared in turn their support for Polish independence. In 1917, France formed the Blue Army (placed under Józef Haller) that comprised about 70,000 Poles by the end of the war, including men captured from German and Austrian units and 20,000 volunteers from the United States. There was also a 30,000-strong Polish anti-German army in Russia. Dmowski, operating from Paris as head of the Polish National Committee (KNP), became the spokesman for Polish nationalism in the Allied camp. Following Woodrow Wilson's Fourteen Points, the Allies officially endorsed Polish independence in June 1918. In all, about two million Poles served in the war, counting both sides, and about 400,000–450,000 died. Much of the fighting on the Eastern Front took place in Poland, and civilian casualties and devastation were high. The final push for independence of Poland took place on the ground in October–November 1918. Near the end of the war, Austro-Hungarian and German units were being disarmed, and the Austrian army's collapse freed Cieszyn and Kraków at the end of October. Lviv was then contested in the Polish–Ukrainian War of 1918–1919.
Ignacy Daszyński headed the first short-lived independent Polish government in Lublin from 7 November, the leftist Provisional People's Government of the Republic of Poland, proclaimed as a democracy. Germany, now defeated, was forced by the Allies to stand down its large military forces in Poland. Overtaken by the German Revolution of 1918–1919 at home, the Germans released Piłsudski from prison. He arrived in Warsaw on 10 November and was granted extensive authority by the Regency Council; Piłsudski's authority was also recognized by the Lublin government. On 22 November, he became the temporary head of state. Piłsudski was held by many in high regard, but was resented by the right-wing National Democrats. The emerging Polish state was internally divided, heavily war-damaged and economically dysfunctional. After more than a century of foreign rule, Poland regained its independence at the end of World War I as one of the outcomes of the negotiations that took place at the Paris Peace Conference of 1919. The Treaty of Versailles that emerged from the conference set up an independent Polish nation with an outlet to the sea, but left some of its boundaries to be decided by plebiscites. The largely German-inhabited Free City of Danzig was granted a separate status that guaranteed its use as a port by Poland. In the end, the settlement of the German-Polish border turned out to be a prolonged and convoluted process. The dispute helped engender the Greater Poland Uprising of 1918–1919, the three Silesian uprisings of 1919–1921, the East Prussian plebiscite of 1920, the Upper Silesia plebiscite of 1921 and the 1922 Silesian Convention in Geneva. Other boundaries were settled by war and subsequent treaties. A total of six border wars were fought in 1918–1921, including the Polish–Czechoslovak border conflicts over Cieszyn Silesia in January 1919. 
As distressing as these border conflicts were, the Polish–Soviet War of 1919–1921 was the most important series of military actions of the era. Piłsudski had entertained far-reaching anti-Russian cooperative designs in Eastern Europe, and in 1919 the Polish forces pushed eastward into Lithuania, Belarus and Ukraine, taking advantage of the Russian preoccupation with a civil war, but they were soon confronted with the Soviet westward offensive of 1918–1919. Western Ukraine was already a theater of the Polish–Ukrainian War, which eliminated the proclaimed West Ukrainian People's Republic in July 1919. In the autumn of 1919, Piłsudski rejected urgent pleas from the former Entente powers to support Anton Denikin's White movement in its advance on Moscow. The Polish–Soviet War proper began with the Polish Kiev Offensive in April 1920. Allied with the Directorate of Ukraine of the Ukrainian People's Republic, the Polish armies had advanced past Vilnius, Minsk and Kiev by June. At that time, a massive Soviet counter-offensive pushed the Poles out of most of Ukraine. On the northern front, the Soviet army reached the outskirts of Warsaw in early August. A Soviet triumph and the quick end of Poland seemed inevitable. However, the Poles scored a stunning victory at the Battle of Warsaw (1920). Afterwards, more Polish military successes followed, and the Soviets had to pull back. They left swathes of territory populated largely by Belarusians or Ukrainians to Polish rule. The new eastern boundary was finalized by the Peace of Riga in March 1921. The defeat of the Russian armies forced Vladimir Lenin and the Soviet leadership to postpone their strategic objective of linking up with the German and other European revolutionary leftist collaborators to spread communist revolution. Lenin had also hoped to generate support for the Red Army in Poland, but it failed to materialize. 
Piłsudski's seizure of Vilnius in October 1920 (known as Żeligowski's Mutiny) was a nail in the coffin of the already poor Lithuania–Poland relations that had been strained by the Polish–Lithuanian War of 1919–1920; both states would remain hostile to one another for the remainder of the interwar period. Piłsudski's concept of Intermarium (an East European federation of states inspired by the tradition of the multiethnic Polish–Lithuanian Commonwealth that would include a hypothetical multinational successor state to the Grand Duchy of Lithuania) had the fatal flaw of being incompatible with his assumption of Polish domination, which would amount to an encroachment on the neighboring peoples' lands and aspirations. In a time of rising national movements, the plan thus ceased to be a feature of Poland's politics. A larger federated structure was also opposed by Dmowski's National Democrats. Their representative at the Peace of Riga talks, Stanisław Grabski, opted for leaving Minsk, Berdychiv, Kamianets-Podilskyi and the surrounding areas on the Soviet side of the border. The National Democrats did not want to assume the lands they considered politically undesirable, as such territorial enlargement would result in a reduced proportion of citizens who were ethnically Polish. The Peace of Riga settled the eastern border by preserving for Poland a substantial portion of the old Commonwealth's eastern territories at the cost of partitioning the lands of the former Grand Duchy of Lithuania (Lithuania and Belarus) and Ukraine. The Ukrainians ended up with no state of their own and felt betrayed by the Riga arrangements; their resentment gave rise to extreme nationalism and anti-Polish hostility. 
The Kresy (or borderland) territories in the east won by 1921 would form the basis for a swap arranged and carried out by the Soviets in 1943–1945, who at that time compensated the re-emerging Polish state for the eastern lands lost to the Soviet Union with conquered areas of eastern Germany. The successful outcome of the Polish–Soviet War gave Poland a false sense of its prowess as a self-sufficient military power and encouraged the government to try to resolve international problems through imposed unilateral solutions. The territorial and ethnic policies of the interwar period contributed to bad relations with most of Poland's neighbors and uneasy cooperation with more distant centers of power, especially France and Great Britain. Among the chief difficulties faced by the government of the new Polish republic was the lack of an integrated infrastructure among the formerly separate partitions, a deficiency that disrupted industry, transportation, trade, and other areas. The first Polish legislative election for the re-established Sejm (national parliament) took place in January 1919. A temporary Small Constitution was passed by the body the following month. The rapidly growing population of Poland within its new boundaries was three-quarters agricultural and one-quarter urban; Polish was the primary language of only two-thirds of the inhabitants of the new country. The minorities had very little voice in the government. The permanent March Constitution of Poland was adopted in March 1921. At the insistence of the National Democrats, who were concerned about how aggressively Józef Piłsudski might exercise presidential powers if he were elected to office, the constitution mandated limited prerogatives for the presidency. The proclamation of the March Constitution was followed by a short and turbulent period of constitutional order and parliamentary democracy that lasted until 1926. The legislature remained fragmented, without stable majorities, and governments changed frequently. 
The open-minded Gabriel Narutowicz was elected president constitutionally (without a popular vote) by the National Assembly in 1922. However, members of the nationalist right-wing faction did not regard his elevation as legitimate. They viewed Narutowicz rather as a traitor whose election was pushed through by the votes of alien minorities. Narutowicz and his supporters were subjected to an intense harassment campaign, and the president was assassinated on 16 December 1922, after serving only five days in office. Land reform measures were passed in 1919 and 1925 under pressure from an impoverished peasantry. They were partially implemented, but resulted in the parcellation of only 20% of the great agricultural estates. Poland endured numerous economic calamities and disruptions in the early 1920s, including waves of workers' strikes such as the 1923 Kraków riot. The German–Polish customs war, initiated by Germany in 1925, was one of the most damaging external factors that put a strain on Poland's economy. On the other hand, there were also signs of progress and stabilization, for example a critical reform of finances carried out by the competent government of Władysław Grabski, which lasted almost two years. Certain other achievements of the democratic period having to do with the management of governmental and civic institutions necessary to the functioning of the reunited state and nation were too easily overlooked. Lurking on the sidelines was a disgusted army officer corps unwilling to subject itself to civilian control, but ready to follow the retired Piłsudski, who was highly popular with Poles and just as dissatisfied with the Polish system of government as his former colleagues in the military. On 12 May 1926, Piłsudski staged the May Coup, a military overthrow of the civilian government mounted against President Stanisław Wojciechowski and the troops loyal to the legitimate government. Hundreds died in fratricidal fighting. 
Piłsudski was supported by several leftist factions who ensured the success of his coup by blocking the railway transportation of government forces. He also had the support of the conservative great landowners, which left the right-wing National Democrats as the only major social force opposed to the takeover. Following the coup, the new regime initially respected many parliamentary formalities, but gradually tightened its control and abandoned pretenses. The Centrolew, a coalition of center-left parties, was formed in 1929, and in 1930 called for the "abolition of dictatorship". In 1930, the Sejm was dissolved and a number of opposition deputies were imprisoned at the Brest Fortress. Five thousand political opponents were arrested ahead of the Polish legislative election of 1930, which was rigged to award a majority of seats to the pro-regime Nonpartisan Bloc for Cooperation with the Government (BBWR). The authoritarian Sanation regime ("sanation" denotes "healing") that Piłsudski led until his death in 1935 (and which would remain in place until 1939) reflected the dictator's evolution from his center-left past to conservative alliances. Political institutions and parties were allowed to function, but the electoral process was manipulated and those not willing to cooperate submissively were subjected to repression. From 1930, persistent opponents of the regime, many of the leftist persuasion, were imprisoned and subjected to staged legal processes with harsh sentences, such as the Brest trials, or else detained in the Bereza Kartuska prison and similar camps for political prisoners. About three thousand were detained without trial at different times at the Bereza internment camp between 1934 and 1939. In 1936, for example, 369 activists were taken there, including 342 Polish communists. Rebellious peasants staged riots in 1932 and 1933 and mounted the 1937 peasant strike in Poland. Other civil disturbances were caused by striking industrial workers (e.g. 
events of the "Bloody Spring" of 1936), nationalist Ukrainians and the activists of the incipient Belarusian movement. All became targets of ruthless police-military pacification. Besides sponsoring political repression, the regime fostered Józef Piłsudski's cult of personality, which had already existed long before he assumed dictatorial powers. Piłsudski signed the Soviet–Polish Non-Aggression Pact in 1932 and the German–Polish Non-Aggression Pact in 1934, but in 1933 he insisted that there was no threat from the East or West and said that Poland's politics were focused on becoming fully independent without serving foreign interests. He initiated the policy of maintaining an equal distance and an adjustable middle course regarding the two great neighbors, later continued by Józef Beck. Piłsudski kept personal control of the army, but it was poorly equipped, poorly trained and poorly prepared for possible future conflicts. His only war plan was a defensive war against a Soviet invasion. The slow modernization after Piłsudski's death fell far behind the progress made by Poland's neighbors, and measures to protect the western border, discontinued by Piłsudski from 1926, were not undertaken until March 1939. Sanation deputies in the Sejm used a parliamentary maneuver to abolish the democratic March Constitution and push through a more authoritarian April Constitution in 1935; it reduced the powers of the Sejm, which Piłsudski despised. The process and the resulting document were seen as illegitimate by the anti-Sanation opposition, but during World War II, the Polish government-in-exile recognized the April Constitution in order to uphold the legal continuity of the Polish state. When Marshal Piłsudski died in 1935, he retained the support of dominant sections of Polish society even though he never risked testing his popularity in an honest election. 
His regime was dictatorial, but at that time only Czechoslovakia, among all of Poland's neighbors, remained democratic. Historians have taken widely divergent views of the meaning and consequences of the coup Piłsudski perpetrated and of his personal rule that followed. Independence stimulated the development of Polish culture in the Interbellum, and intellectual achievement was high. Warsaw, whose population almost doubled between World War I and World War II, was a restless, burgeoning metropolis. It outpaced Kraków, Lwów and Wilno, the other major population centers of the country. Mainstream Polish society was not affected by the repressions of the Sanation authorities overall; many Poles enjoyed relative stability, and the economy improved markedly between 1926 and 1929, only to become caught up in the global Great Depression. After 1929, the country's industrial production and gross national income slumped by about 50%. The Great Depression brought low prices for farmers and unemployment for workers. Social tensions increased, including rising antisemitism. A major economic transformation and multi-year state plan to achieve national industrial development, as embodied in the Central Industrial Region initiative launched in 1936, was led by Minister Eugeniusz Kwiatkowski. Motivated primarily by the need for a native arms industry, the initiative was in progress at the time of the outbreak of World War II. Kwiatkowski was also the main architect of the earlier Gdynia seaport project. The nationalism prevalent in political circles was fueled by the large size of Poland's minority populations and their separate agendas. According to the language criterion of the Polish census of 1931, the Poles constituted 69% of the population, Ukrainians 15%, Jews (defined as speakers of the Yiddish language) 8.5%, Belarusians 4.7%, Germans 2.2%, Lithuanians 0.25%, Russians 0.25% and Czechs 0.09%, with some geographical areas dominated by a particular minority. 
In time, the ethnic conflicts intensified, and the Polish state grew less tolerant of the interests of its national minorities. In interwar Poland, compulsory free general education substantially reduced illiteracy rates, but discrimination was practiced in a way that resulted in a dramatic decrease in the number of Ukrainian language schools and official restrictions on Jewish attendance at selected schools in the late 1930s. The population grew steadily, reaching 35 million in 1939. However, the overall economic situation in the interwar period was one of stagnation. There was little money for investment inside Poland, and few foreigners were interested in investing there. Total industrial production barely increased between 1913 and 1939 (within the area delimited by the 1939 borders), but because of population growth (from 26.3 million in 1919 to 34.8 million in 1939), the "per capita" output actually decreased by 18%. Conditions in the predominant agricultural sector kept deteriorating between 1929 and 1939, which resulted in rural unrest and a progressive radicalization of the Polish peasant movement that became increasingly inclined toward militant anti-state activities. It was firmly repressed by the authorities. According to Norman Davies, the failures of the Sanation regime (combined with the objective economic realities) caused a radicalization of the Polish masses by the end of the 1930s, but he warns against drawing parallels with the incomparably more repressive regimes of Nazi Germany or the Stalinist Soviet Union. After Piłsudski's death in 1935, Poland was governed until (and initially during) the German invasion of 1939 by old allies and subordinates known as "Piłsudski's colonels". They had neither the vision nor the resources to cope with the perilous situation facing Poland in the late 1930s. The colonels had gradually assumed greater powers during Piłsudski's life by manipulating the ailing marshal behind the scenes. 
Eventually they achieved an overt politicization of the army that did nothing to help prepare the country for war. Foreign policy was the responsibility of Józef Beck, under whom Polish diplomacy attempted balanced approaches toward Germany and the Soviet Union, without success, on the basis of a flawed understanding of the European geopolitics of his day. Beck had numerous foreign policy schemes and harbored illusions of Poland's status as a great power. He alienated most of Poland's neighbors, but is not blamed by historians for the ultimate failure of relations with Germany. The principal events of his tenure were concentrated in its last two years. In the case of the 1938 Polish ultimatum to Lithuania, the Polish action nearly resulted in a German takeover of Lithuania's Klaipėda Region (Memel Territory), which had a largely German population. Also in 1938, the Polish government opportunistically undertook a hostile action against the Czechoslovak state, weakened as it was by the Munich Agreement, and annexed a small piece of territory on its borders. In this case, Beck's understanding of the consequences of the Polish military move turned out to be completely mistaken, because in the end the German occupation of Czechoslovakia markedly weakened Poland's own position. Furthermore, Beck erroneously believed that Nazi-Soviet ideological contradictions would preclude their cooperation. At home, increasingly alienated and suppressed minorities threatened unrest and violence. Extreme nationalist circles such as the National Radical Camp grew more outspoken. One of the groups, the Camp of National Unity, combined many nationalists with Sanation supporters and was connected to the new strongman, Marshal Edward Rydz-Śmigły, whose faction of the Sanation ruling movement was increasingly nationalistic. 
In the late 1930s, the exile bloc Front Morges united several major Polish anti-Sanation figures, including Ignacy Paderewski, Władysław Sikorski, Wincenty Witos, Wojciech Korfanty and Józef Haller. It gained little influence inside Poland, but its spirit soon reappeared during World War II, within the Polish government-in-exile. In October 1938, Joachim von Ribbentrop first proposed German-Polish territorial adjustments and Poland's participation in the Anti-Comintern Pact against the Soviet Union. The status of the Free City of Danzig was one of the key bones of contention. Approached by Ribbentrop again in March 1939, the Polish government expressed willingness to address issues causing German concern, but effectively rejected Germany's stated demands and thus refused to allow Poland to be turned by Adolf Hitler into a German puppet state. Hitler, incensed by the British and French declarations of support for Poland, abrogated the German–Polish Non-Aggression Pact in late April 1939. To protect itself from an increasingly aggressive Nazi Germany, already responsible for the annexations of Austria (in the Anschluss of 1938), Czechoslovakia (in 1939) and a part of Lithuania after the 1939 German ultimatum to Lithuania, Poland entered into a military alliance with Britain and France (the 1939 Anglo-Polish military alliance and the Franco-Polish alliance of 1921, as updated in 1939). However, the two Western powers were defense-oriented and not in a strong position, either geographically or in terms of resources, to assist Poland. They therefore attempted to induce Soviet-Polish cooperation, which they viewed as the only militarily viable arrangement. Diplomatic maneuvers continued in the spring and summer of 1939, but in their final attempts, the Franco-British talks with the Soviets in Moscow on forming an anti-Nazi defensive military alliance failed. Warsaw's refusal to allow the Red Army to operate on Polish territory doomed the Western efforts. 
The final contentious Allied-Soviet exchanges took place on 21 and 23 August 1939. The regime of Joseph Stalin was the target of an intense German counter-initiative and was concurrently involved in increasingly effective negotiations with Hitler's agents. On 23 August, an outcome contrary to the exertions of the Allies became a reality: in Moscow, Germany and the Soviet Union hurriedly signed the Molotov–Ribbentrop Pact, which secretly provided for the dismemberment of Poland into Nazi- and Soviet-controlled zones. On 1 September 1939, Hitler ordered an invasion of Poland, the opening event of World War II. Poland had signed the Anglo-Polish military alliance as recently as 25 August, and had long been in alliance with France. The two Western powers soon declared war on Germany, but they remained largely inactive (the period early in the conflict became known as the Phoney War) and extended no aid to the attacked country. The technically and numerically superior "Wehrmacht" formations rapidly advanced eastwards, engaging in the mass murder of Polish civilians over the entire occupied territory. On 17 September, a Soviet invasion of Poland began. The Soviet Union quickly occupied most of the areas of eastern Poland that were inhabited by significant Ukrainian and Belarusian minorities. The two invading powers divided up the country as they had agreed in the secret provisions of the Molotov–Ribbentrop Pact. Poland's top government officials and military high command fled the war zone and arrived at the Romanian Bridgehead in mid-September. After the Soviet entry they sought refuge in Romania. Among the military operations in which Poles held out the longest (until late September or early October) were the Siege of Warsaw, the Battle of Hel and the resistance of the Independent Operational Group Polesie. Warsaw fell on 27 September after a heavy German bombardment that killed tens of thousands of civilians and soldiers. 
Poland was ultimately partitioned between Germany and the Soviet Union according to the terms of the German–Soviet Frontier Treaty signed by the two powers in Moscow on 29 September. Gerhard Weinberg has argued that the most significant Polish contribution to World War II was sharing its code-breaking results. This allowed the British to perform the cryptanalysis of the Enigma and decipher the main German military code, which gave the Allies a major advantage in the conflict. As regards actual military campaigns, some Polish historians have argued that simply resisting the initial invasion of Poland was the country's greatest contribution to the victory over Nazi Germany, despite its defeat. The Polish Army of nearly one million men significantly delayed the start of the Battle of France, planned by the Germans for 1939. When the Nazi offensive in the West did happen, the delay caused it to be less effective, a possibly crucial factor in the victory of the Battle of Britain. After Germany invaded the Soviet Union in Operation Barbarossa in June 1941, the whole of pre-war Poland was overrun and occupied by German troops. German-occupied Poland was divided from 1939 into two regions: Polish areas annexed by Nazi Germany directly into the German "Reich" and areas ruled under a so-called General Government of occupation. The Poles formed an underground resistance movement and a Polish government-in-exile that operated first in Paris, then, from July 1940, in London. Polish-Soviet diplomatic relations, broken since September 1939, were resumed in July 1941 under the Sikorski–Mayski agreement, which facilitated the formation of a Polish army (the Anders' Army) in the Soviet Union. In November 1941, Prime Minister Sikorski flew to the Soviet Union to negotiate with Stalin on the army's role on the Soviet-German front, but the British wanted the Polish soldiers in the Middle East. Stalin agreed, and the army was evacuated there. 
The organizations forming the Polish Underground State that functioned in Poland throughout the war were loyal to and formally under the Polish government-in-exile, acting through its Government Delegation for Poland. During World War II, hundreds of thousands of Poles joined the underground Polish Home Army ("Armia Krajowa"), a part of the Polish Armed Forces of the government-in-exile. About 200,000 Poles fought on the Western Front in the Polish Armed Forces in the West loyal to the government-in-exile, and about 300,000 fought under Soviet command on the Eastern Front. The pro-Soviet resistance movement in Poland, led by the Polish Workers' Party, was active from 1941. It was opposed by the gradually forming extreme nationalistic National Armed Forces. Beginning in late 1939, hundreds of thousands of Poles from the Soviet-occupied areas were deported and taken east. Of the upper-ranking military personnel and others deemed uncooperative or potentially harmful by the Soviets, about 22,000 were secretly executed in the Katyn massacre. In April 1943, the Soviet Union broke off deteriorating relations with the Polish government-in-exile after the German military announced the discovery of mass graves containing murdered Polish army officers. The Soviets claimed that the Poles had committed a hostile act by requesting that the Red Cross investigate these reports. From 1941, the implementation of the Nazi Final Solution began, and the Holocaust in Poland proceeded with full force. Warsaw was the scene of the Warsaw Ghetto Uprising in April–May 1943, triggered by the liquidation of the Warsaw Ghetto by German SS units. The elimination of Jewish ghettos in German-occupied Poland took place in many cities. As the Jewish people were being removed to be exterminated, uprisings were waged against impossible odds by the Jewish Combat Organization and other desperate Jewish insurgents. 
At a time of increasing cooperation between the Western Allies and the Soviet Union in the wake of the Nazi invasion of 1941, the influence of the Polish government-in-exile was seriously diminished by the death of Prime Minister Władysław Sikorski, its most capable leader, in a plane crash on 4 July 1943. Around that time, Polish-communist civilian and military organizations opposed to the government, led by Wanda Wasilewska and supported by Stalin, were formed in the Soviet Union. In July 1944, the Soviet Red Army and Soviet-controlled Polish People's Army entered the territory of future postwar Poland. In protracted fighting in 1944 and 1945, the Soviets and their Polish allies defeated and expelled the German army from Poland at a cost of over 600,000 Soviet soldiers lost. The greatest single undertaking of the Polish resistance movement in World War II and a major political event was the Warsaw Uprising that began on 1 August 1944. The uprising, in which most of the city's population participated, was instigated by the underground Home Army and approved by the Polish government-in-exile in an attempt to establish a non-communist Polish administration ahead of the arrival of the Red Army. The uprising was originally planned as a short-lived armed demonstration in expectation that the Soviet forces approaching Warsaw would assist in any battle to take the city. The Soviets had never agreed to an intervention, however, and they halted their advance at the Vistula River. The Germans used the opportunity to carry out a brutal suppression of the forces of the pro-Western Polish underground. The bitterly fought uprising lasted for two months and resulted in the death or expulsion from the city of hundreds of thousands of civilians. After the defeated Poles surrendered on 2 October, the Germans carried out a planned destruction of Warsaw on Hitler's orders that obliterated the remaining infrastructure of the city. 
The Polish First Army, fighting alongside the Soviet Red Army, entered a devastated Warsaw on 17 January 1945. From the time of the Tehran Conference in late 1943, there was broad agreement among the three Great Powers (the United States, the United Kingdom, and the Soviet Union) that the locations of the borders between Germany and Poland and between Poland and the Soviet Union would be fundamentally changed after the conclusion of World War II. Stalin's view that Poland should be moved far to the west was accepted by Polish communists, whose organizations included the Polish Workers' Party and the Union of Polish Patriots. The communist-led State National Council, a quasi-parliamentary body, was in existence in Warsaw from the beginning of 1944. In July 1944, a communist-controlled Polish Committee of National Liberation was established in Lublin, to nominally govern the areas liberated from German control. The move prompted protests from Prime Minister Stanisław Mikołajczyk and his Polish government-in-exile. By the time of the Yalta Conference in February 1945, the communists had already established a Provisional Government of the Republic of Poland. The Soviet position at the conference was strong because of their decisive contribution to the war effort and as a result of their occupation of immense amounts of land in central and eastern Europe. The Great Powers gave assurances that the communist provisional government would be converted into an entity that would include democratic forces from within the country and active abroad, but the London-based government-in-exile was not mentioned. A Provisional Government of National Unity and subsequent democratic elections were the agreed stated goals. The disappointing results of these plans and the failure of the Western powers to ensure a strong participation of non-communists in the immediate post-war Polish government were seen by many Poles as a manifestation of Western betrayal. 
A lack of accurate data makes it difficult to document numerically the extent of the human losses suffered by Polish citizens during World War II. Additionally, many assertions made in the past must be considered suspect due to flawed methodology and a desire to promote certain political agendas. The last available enumeration of ethnic Poles and the large ethnic minorities is the Polish census of 1931. Exact population figures for 1939 are therefore not known. According to the United States Holocaust Memorial Museum, at least 3 million Polish Jews and at least 1.9 million non-Jewish Polish civilians were killed. According to the historians Brzoza and Sowa, about 2 million ethnic Poles were killed, but it is not known, even approximately, how many Polish citizens of other ethnicities perished, including Ukrainians, Belarusians, and Germans. Millions of Polish citizens were deported to Germany for forced labor or to German extermination camps such as Treblinka, Auschwitz and Sobibór. Nazi Germany intended to exterminate the Jews completely, in actions that have come to be described collectively as the Holocaust. The Poles were to be expelled from areas controlled by Nazi Germany through a process of resettlement that started in 1939. Such Nazi operations matured into a plan known as the "Generalplan Ost" that amounted to the displacement, enslavement and partial extermination of the Slavic people and was expected to be completed within 15 years. The majority of Poles remained indifferent to the Jewish plight, and neither assisted nor persecuted Jews. Of those who helped rescue, shelter and protect Jews from Nazi atrocities, Yad Vashem and the State of Israel have recognized 6,992 individuals as "Righteous Among the Nations". 
In an attempt to incapacitate Polish society, the Nazis and the Soviets executed tens of thousands of members of the intelligentsia and community leadership during events such as the German AB-Aktion in Poland, Operation Tannenberg and the Katyn massacre. Over 95% of the Jewish losses and 90% of the ethnic Polish losses were caused directly by Nazi Germany, whereas 5% of the ethnic Polish losses were caused by the Soviets and 5% by Ukrainian nationalists. The large-scale Jewish presence in Poland that had endured for centuries was quickly brought to an end by the policies of extermination implemented by the Nazis during the war. Waves of displacement and emigration that took place both during and after the war removed from Poland a majority of the Jews who survived. Further significant Jewish emigration followed events such as the Polish October political thaw of 1956 and the 1968 Polish political crisis. In 1940–1941, some 325,000 Polish citizens were deported by the Soviet regime. The number of Polish citizens who died at the hands of the Soviets is estimated at fewer than 100,000. In 1943–1944, Ukrainian nationalists associated with the Organization of Ukrainian Nationalists (OUN) and the Ukrainian Insurgent Army perpetrated the Massacres of Poles in Volhynia and Eastern Galicia. Estimates of the number of Polish civilian victims vary greatly, from tens to hundreds of thousands. Approximately 90% of Poland's war casualties were the victims of prisons, death camps, raids, executions, the annihilation of ghettos, epidemics, starvation, excessive work and ill treatment. The war left one million children orphaned and 590,000 persons disabled. The country lost 38% of its national assets (whereas Britain lost only 0.8%, and France only 1.5%). Nearly half of pre-war Poland was expropriated by the Soviet Union, including the two great cultural centers of Lwów and Wilno. 
The policies of Nazi Germany have been judged after the war by the International Military Tribunal at the Nuremberg trials and Polish genocide trials to be aimed at extermination of Jews, Poles and Roma, and to have "all the characteristics of genocide in the biological meaning of this term". By the terms of the 1945 Potsdam Agreement signed by the three victorious Great Powers, the Soviet Union retained most of the territories captured as a result of the Molotov–Ribbentrop Pact of 1939, including western Ukraine and western Belarus, and gained others. Lithuania and the Königsberg area of East Prussia were officially incorporated into the Soviet Union, in the case of the former without the recognition of the Western powers. Poland was compensated with the bulk of Silesia, including Breslau (Wrocław) and Grünberg (Zielona Góra), the bulk of Pomerania, including Stettin (Szczecin), and the greater southern portion of the former East Prussia, along with Danzig (Gdańsk), pending a final peace conference with Germany, which never took place. Collectively referred to by the Polish authorities as the "Recovered Territories", they were included in the reconstituted Polish state. With Germany's defeat, Poland was thus shifted west in relation to its prewar location, to the area between the Oder–Neisse and Curzon lines, which resulted in a country more compact and with much broader access to the sea. The Poles lost 70% of their pre-war oil capacity to the Soviets, but gained from the Germans a highly developed industrial base and infrastructure that made a diversified industrial economy possible for the first time in Polish history. The flight and expulsion of Germans from what was eastern Germany prior to the war began before and during the Soviet conquest of those regions from the Nazis, and the process continued in the years immediately after the war. By 1950, some 8,030,000 Germans had been evacuated or expelled, or had migrated. 
Early expulsions in Poland were undertaken by the Polish communist authorities even before the Potsdam Conference (the "wild expulsions" from June to mid-July 1945, when the Polish military and militia expelled nearly all people from the districts immediately east of the Oder–Neisse line), to ensure the establishment of an ethnically homogeneous Poland. About 1% (100,000) of the German civilian population east of the Oder–Neisse line perished in the fighting prior to the surrender in May 1945, and afterwards some 200,000 Germans in Poland were employed as forced labor prior to being expelled. Many Germans died in labor camps such as the Zgoda labour camp and the Potulice camp. Of those Germans who remained within the new borders of Poland, many later chose to emigrate to post-war Germany. On the other hand, 1.5–2 million ethnic Poles moved or were expelled from the previously Polish areas annexed by the Soviet Union. The vast majority were resettled in the former German territories. At least one million Poles remained in what had become the Soviet Union, and at least half a million ended up in the West or elsewhere outside of Poland. However, contrary to the official declaration that the former German inhabitants of the Recovered Territories had to be removed quickly to house Poles displaced by the Soviet annexation, the Recovered Territories initially faced a severe population shortage. Many exiled Poles could not return to the country for which they had fought because they belonged to political groups incompatible with the new communist regimes, or because they originated from areas of pre-war eastern Poland that were incorporated into the Soviet Union (see Polish population transfers (1944–1946)). Some were deterred from returning simply on the strength of warnings that anyone who had served in military units in the West would be endangered. 
Many Poles were pursued, arrested, tortured and imprisoned by the Soviet authorities for belonging to the Home Army or other formations (see Anti-communist resistance in Poland (1944–1946)), or were persecuted because they had fought on the Western front. Territories on both sides of the new Polish-Ukrainian border were also "ethnically cleansed". Of the Ukrainians and Lemkos living in Poland within the new borders (about 700,000), close to 95% were forcibly moved to the Soviet Ukraine, or (in 1947) to the new territories in northern and western Poland under Operation Vistula. In Volhynia, 98% of the Polish pre-war population was either killed or expelled; in Eastern Galicia, the Polish population was reduced by 92%. According to Timothy D. Snyder, about 70,000 Poles and about 20,000 Ukrainians were killed in the ethnic violence that occurred in the 1940s, both during and after the war. According to an estimate by historian Jan Grabowski, about 50,000 of the 250,000 Polish Jews who escaped the Nazis during the liquidation of ghettos survived without leaving Poland (the remainder perished). More were repatriated from the Soviet Union and elsewhere, and the February 1946 population census showed about 300,000 Jews within Poland's new borders. Of the surviving Jews, many chose to emigrate or felt compelled to because of the anti-Jewish violence in Poland. Because of changing borders and the mass movements of people of various nationalities, the emerging communist Poland ended up with a mainly homogeneous, ethnically Polish population (97.6% according to the December 1950 census). The remaining members of ethnic minorities were not encouraged, by the authorities or by their neighbors, to emphasize their ethnic identities. In response to the February 1945 Yalta Conference directives, a Polish Provisional Government of National Unity was formed in June 1945 under Soviet auspices; it was soon recognized by the United States and many other countries. 
The Soviet domination was apparent from the beginning, as prominent leaders of the Polish Underground State were brought to trial in Moscow (the "Trial of the Sixteen" of June 1945). In the immediate post-war years, the emerging communist rule was challenged by opposition groups, including militarily by the so-called "cursed soldiers", of whom thousands perished in armed confrontations or were pursued by the Ministry of Public Security and executed. Such guerrillas often pinned their hopes on expectations of an imminent outbreak of World War III and defeat of the Soviet Union. The Polish right-wing insurgency faded after the amnesty of February 1947. The Polish people's referendum of June 1946 was arranged by the communist Polish Workers' Party to legitimize its dominance in Polish politics and claim widespread support for the party's policies. Although the Yalta agreement called for free elections, the Polish legislative election of January 1947 was controlled by the communists. Some democratic and pro-Western elements, led by Stanisław Mikołajczyk, former prime minister-in-exile, participated in the Provisional Government and the 1947 elections, but were ultimately eliminated through electoral fraud, intimidation and violence. In times of severe political confrontation and radical economic change, members of Mikołajczyk's agrarian movement (the Polish People's Party) attempted to preserve the existing aspects of mixed economy and protect property and other rights. However, after the 1947 elections, the Government of National Unity ceased to exist and the communists moved towards abolishing the post-war partially pluralistic "people's democracy" and replacing it with a state socialist system. The communist-dominated Democratic Bloc, the front that contested the 1947 elections and was transformed into the Front of National Unity in 1952, officially became the source of governmental authority. 
The Polish government-in-exile, lacking international recognition, remained in continuous existence until 1990. The Polish People's Republic ("Polska Rzeczpospolita Ludowa") was established under the rule of the communist Polish United Workers' Party (PZPR). The name change from the Polish Republic was not officially adopted, however, until the proclamation of the Constitution of the Polish People's Republic in 1952. The ruling PZPR was formed by the forced amalgamation in December 1948 of the communist Polish Workers' Party (PPR) and the historically non-communist Polish Socialist Party (PPS). The PPR chief had been its wartime leader Władysław Gomułka, who in 1947 declared a "Polish road to socialism" intended to curb, rather than eradicate, capitalist elements. In 1948 he was overruled, removed and imprisoned by Stalinist authorities. The PPS, re-established in 1944 by its left wing, had since been allied with the communists. The ruling communists, who in post-war Poland preferred to use the term "socialism" instead of "communism" to identify their ideological basis, needed to include the socialist junior partner to broaden their appeal, claim greater legitimacy and eliminate competition on the political Left. The socialists, who were losing their organization, were subjected to political pressure, ideological cleansing and purges in order to become suitable for unification on the terms of the PPR. The leading pro-communist leaders of the socialists were the prime ministers Edward Osóbka-Morawski and Józef Cyrankiewicz. During the most oppressive phase of the Stalinist period (1948–1953), terror was justified in Poland as necessary to eliminate reactionary subversion. Many thousands of perceived opponents of the regime were arbitrarily tried and large numbers were executed. The People's Republic was led by discredited Soviet operatives such as Bolesław Bierut, Jakub Berman and Konstantin Rokossovsky. 
The independent Catholic Church in Poland was subjected to property confiscations and other curtailments from 1949, and in 1950 was pressured into signing an accord with the government. In 1953 and later, despite a partial thaw after the death of Stalin that year, the persecution of the Church intensified and its head, Cardinal Stefan Wyszyński, was detained. A key event in the persecution of the Polish Church was the Stalinist show trial of the Kraków Curia in January 1953. In the Warsaw Pact, formed in 1955, the Polish Army was the second largest, after the Soviet Army. In 1944, large agricultural holdings and former German property in Poland started to be redistributed through land reform, and industry started to be nationalized. Communist restructuring and the imposition of work-space rules encountered active worker opposition as early as 1945–1947. The moderate Three-Year Plan of 1947–1949 continued with the rebuilding, socialization and socialist restructuring of the economy. It was followed by the Six-Year Plan of 1950–1955 for heavy industry. The rejection of the Marshall Plan in 1947 made aspirations for catching up with West European standards of living unrealistic. The government's highest economic priority was the development of heavy industry useful to the military. State-run or controlled institutions common in all the socialist countries of eastern Europe were imposed on Poland, including collective farms and worker cooperatives. The latter were dismantled in the late 1940s as not socialist enough, although they were later re-established; even small-scale private enterprises were eradicated. Stalinism introduced heavy political and ideological propaganda and indoctrination in social life, culture and education. Great strides were made, however, in the areas of employment (which became nearly full), universal public education (which nearly eradicated adult illiteracy), health care and recreational amenities. 
Many historic sites, including the central districts of Warsaw and Gdańsk, both devastated during the war, were rebuilt at great cost. The communist industrialization program led to increased urbanization and educational and career opportunities for the intended beneficiaries of the social transformation, along the lines of the peasants-workers-working intelligentsia paradigm. The most significant improvement was accomplished in the lives of Polish peasants, many of whom were able to leave their impoverished and overcrowded village communities for better conditions in urban centers. Those who stayed behind took advantage of the implementation of the 1944 land reform decree of the Polish Committee of National Liberation, which terminated the antiquated but widespread parafeudal socioeconomic relations in Poland. The Stalinist attempts at establishing collective farms generally failed. Due to urbanization, the national percentage of the rural population decreased in communist Poland by about 50%. A majority of Poland's residents of cities and towns still live in apartment blocks built during the communist era, in part to accommodate migrants from rural areas. In March 1956, after the 20th Congress of the Communist Party of the Soviet Union in Moscow ushered in de-Stalinization, Edward Ochab was chosen to replace the deceased Bolesław Bierut as first secretary of the Polish United Workers' Party. As a result, Poland was rapidly overtaken by social restlessness and reformist undertakings; thousands of political prisoners were released and many people previously persecuted were officially rehabilitated. Worker riots in Poznań in June 1956 were violently suppressed, but they gave rise to the formation of a reformist current within the communist party. Amidst the continuing social and national upheaval, a further shakeup took place in the party leadership as part of what is known as the Polish October of 1956. 
While retaining most traditional communist economic and social aims, the regime led by Władysław Gomułka, the new first secretary of the PZPR, liberalized internal life in Poland. The dependence on the Soviet Union was somewhat loosened, and the state's relationships with the Church and Catholic lay activists were put on a new footing. A repatriation agreement with the Soviet Union allowed the repatriation of hundreds of thousands of Poles who were still in Soviet hands, including many former political prisoners. Collectivization efforts were abandoned—agricultural land, unlike in other Comecon countries, remained for the most part in the private ownership of farming families. State-mandated deliveries of agricultural products at fixed, artificially low prices were reduced, and from 1972 eliminated. The legislative election of 1957 was followed by several years of political stability that was accompanied by economic stagnation and curtailment of reforms and reformists. One of the last initiatives of the brief reform era was a nuclear weapons–free zone in Central Europe proposed in 1957 by Adam Rapacki, Poland's foreign minister. Culture in the Polish People's Republic, to varying degrees linked to the intelligentsia's opposition to the authoritarian system, developed to a sophisticated level under Gomułka and his successors. The creative process was often compromised by state censorship, but significant works were created in fields such as literature, theater, cinema and music, among others. Journalism of veiled understanding and varieties of native and Western popular culture were well represented. Uncensored information and works generated by émigré circles were conveyed through a variety of channels. The Paris-based "Kultura" magazine developed a conceptual framework for dealing with the issues of borders and the neighbors of a future free Poland, but for ordinary Poles Radio Free Europe was of foremost importance. 
One of the confirmations of the end of an era of greater tolerance was the expulsion from the communist party of several prominent "Marxist revisionists" in the 1960s. In 1965, the Conference of Polish Bishops issued the Letter of Reconciliation of the Polish Bishops to the German Bishops, a gesture intended to heal bad mutual feelings left over from World War II. In 1966, the celebrations of the 1,000th anniversary of the Christianization of Poland led by Cardinal Stefan Wyszyński and other bishops turned into a huge demonstration of the power and popularity of the Catholic Church in Poland. The post-1956 liberalizing trend, in decline for a number of years, was reversed in March 1968, when student demonstrations were suppressed during the 1968 Polish political crisis. Motivated in part by the Prague Spring movement, the Polish opposition leaders, intellectuals, academics and students used a historical-patriotic "Dziady" theater spectacle series in Warsaw (and its termination forced by the authorities) as a springboard for protests, which soon spread to other centers of higher education and turned nationwide. The authorities responded with a major crackdown on opposition activity, including the firing of faculty and the dismissal of students at universities and other institutions of learning. Also at the center of the controversy was the small group of Catholic deputies in the Sejm (members of the Znak Association) who attempted to defend the students. In an official speech, Gomułka drew attention to the role of Jewish activists in the events taking place. This provided ammunition to a nationalistic and antisemitic communist party faction headed by Mieczysław Moczar that was opposed to Gomułka's leadership. Using the context of the military victory of Israel in the Six-Day War of 1967, some in the Polish communist leadership waged an antisemitic campaign against the remnants of the Jewish community in Poland. 
The targets of this campaign were accused of disloyalty and active sympathy with Israeli aggression. Branded "Zionists", they were scapegoated and blamed for the unrest in March 1968, which eventually led to the emigration of much of Poland's remaining Jewish population (about 15,000 Polish citizens left the country). With the active support of the Gomułka regime, the Polish People's Army took part in the infamous Warsaw Pact invasion of Czechoslovakia in August 1968, after the Brezhnev Doctrine was informally announced. In the final major achievement of Gomułka's diplomacy, the governments of Poland and West Germany signed in December 1970 the Treaty of Warsaw, which normalized their relations and made possible meaningful cooperation in a number of areas of bilateral interest. In particular, West Germany recognized the post-World War II "de facto" border between Poland and East Germany. Price increases for essential consumer goods triggered the Polish protests of 1970. In December, there were disturbances and strikes in the Baltic Sea port cities of Gdańsk, Gdynia, and Szczecin that reflected deep dissatisfaction with living and working conditions in the country. The activity was centered in the industrial shipyard areas of the three coastal cities. Dozens of protesting workers and bystanders were killed in police and military actions, generally under the authority of Gomułka and Minister of Defense Wojciech Jaruzelski. In the aftermath, Edward Gierek replaced Gomułka as first secretary of the communist party. The new regime was seen as more modern, friendly and pragmatic, and at first it enjoyed a degree of popular and foreign support. To revitalize the economy, from 1971 the Gierek regime introduced wide-ranging reforms that involved large-scale foreign borrowing. These actions initially caused improved conditions for consumers, but in a few years the strategy backfired and the economy deteriorated. 
Another attempt to raise food prices resulted in the June 1976 protests. The Workers' Defence Committee (KOR), established in response to the crackdown that followed, consisted of dissident intellectuals determined to support industrial workers, farmers and students persecuted by the authorities. The opposition circles active in the late 1970s were emboldened by the Helsinki Conference processes. In October 1978, the Archbishop of Kraków, Cardinal Karol Józef Wojtyła, became Pope John Paul II, head of the Catholic Church. Catholics and others rejoiced at the elevation of a Pole to the papacy and greeted his June 1979 visit to Poland with an outpouring of emotion. Fueled by large infusions of Western credit, Poland's economic growth rate was one of the world's highest during the first half of the 1970s, but much of the borrowed capital was misspent, and the centrally planned economy was unable to use the new resources effectively. The 1973 oil crisis caused recession and high interest rates in the West, to which the Polish government had to respond with sharp domestic consumer price increases. The growing debt burden became unsustainable in the late 1970s, and negative economic growth set in by 1979. Around 1 July 1980, with the Polish foreign debt standing at more than $20 billion, the government made yet another attempt to increase meat prices. Workers responded with escalating work stoppages that culminated in the 1980 general strikes in Lublin. In mid-August, labor protests at the Gdańsk Shipyard gave rise to a chain reaction of strikes that virtually paralyzed the Baltic coast by the end of the month and, for the first time, closed most coal mines in Silesia. The Inter-Enterprise Strike Committee coordinated the strike action across hundreds of workplaces and formulated the 21 demands as the basis for negotiations with the authorities. 
The Strike Committee was sovereign in its decision-making, but was aided by a team of "expert" advisers that included the well-known dissidents Jacek Kuroń, Karol Modzelewski, Bronisław Geremek and Tadeusz Mazowiecki. On 31 August 1980, representatives of workers at the Gdańsk Shipyard, led by the electrician and activist Lech Wałęsa, signed the Gdańsk Agreement with the government that ended their strike. Similar agreements were concluded in Szczecin (the Szczecin Agreement) and in Silesia. The key provision of these agreements was the guarantee of the workers' right to form independent trade unions and the right to strike. Following the successful resolution of the largest labor confrontation in communist Poland's history, nationwide union organizing movements swept the country. Edward Gierek was blamed by the Soviets for not following their "fraternal" advice, not shoring up the communist party and the official trade unions and allowing "anti-socialist" forces to emerge. On 5 September 1980, Gierek was replaced by Stanisław Kania as first secretary of the PZPR. Delegates of the emergent worker committees from all over Poland gathered in Gdańsk on 17 September and decided to form a single national union organization named "Solidarity". While party-controlled courts took up the contentious issues of Solidarity's legal registration as a trade union (finalized by 10 November), planning had already begun for the imposition of martial law. A parallel farmers' union was organized and strongly opposed by the regime, but Rural Solidarity was eventually registered (12 May 1981). In the meantime, a rapid deterioration of the authority of the communist party, disintegration of state power and escalation of demands and threats by the various Solidarity-affiliated groups were occurring. According to Kuroń, a "tremendous social democratization movement in all spheres" was taking place and could not be contained. 
Wałęsa had meetings with Kania, which brought no resolution to the impasse. Following the Warsaw Pact summit in Moscow, the Soviet Union proceeded with a massive military build-up along Poland's border in December 1980, but during the summit Kania forcefully argued with Leonid Brezhnev and other allied communist leaders against the feasibility of an external military intervention, and no action was taken. The United States, under presidents Jimmy Carter and Ronald Reagan, repeatedly warned the Soviets about the consequences of a direct intervention, while discouraging an open insurrection in Poland and signaling to the Polish opposition that there would be no rescue by NATO forces. In February 1981, Defense Minister General Wojciech Jaruzelski assumed the position of prime minister. The Solidarity social revolt had thus far been free of any major use of force, but in March 1981 in Bydgoszcz three activists were beaten up by the secret police. In a nationwide "warning strike" the 9.5-million-strong Solidarity union was supported by the population at large, but a general strike was called off by Wałęsa after the 30 March settlement with the government. Both Solidarity and the communist party were badly split and the Soviets were losing patience. Kania was re-elected at the Party Congress in July, but the collapse of the economy continued and so did the general disorder. At the first Solidarity National Congress in September–October 1981 in Gdańsk, Lech Wałęsa was elected national chairman of the union with 55% of the vote. An appeal was issued to the workers of the other East European countries, urging them to follow in the footsteps of Solidarity. To the Soviets, the gathering was an "anti-socialist and anti-Soviet orgy" and the Polish communist leaders, increasingly led by Jaruzelski and General Czesław Kiszczak, were ready to apply force. In October 1981, Jaruzelski was named first secretary of the PZPR. 
The Plenum's vote was 180 to 4, and he kept his government posts. Jaruzelski asked parliament to ban strikes and allow him to exercise extraordinary powers, but when neither request was granted, he decided to proceed with his plans anyway. On 12–13 December 1981, the regime declared martial law in Poland, under which the army and the ZOMO special police forces were used to crush Solidarity. The Soviet leaders insisted that Jaruzelski pacify the opposition with the forces at his disposal, without Soviet involvement. Almost all Solidarity leaders and many affiliated intellectuals were arrested or detained. Nine workers were killed in the Pacification of Wujek. The United States and other Western countries responded by imposing economic sanctions against Poland and the Soviet Union. Unrest in the country was subdued, but continued. During martial law, Poland was ruled by the so-called Military Council of National Salvation. The open or semi-open opposition communications, as recently practiced, were replaced by underground publishing (known in the Eastern Bloc as samizdat), and Solidarity was reduced to a few thousand underground activists. Having achieved some semblance of stability, the Polish regime relaxed and then rescinded martial law in several stages. By December 1982 martial law was suspended and a small number of political prisoners, including Wałęsa, were released. Although martial law formally ended in July 1983 and a partial amnesty was enacted, several hundred political prisoners remained in jail. Jerzy Popiełuszko, a popular pro-Solidarity priest, was abducted and murdered by security functionaries in October 1984. Further developments in Poland occurred concurrently with and were influenced by the reformist leadership of Mikhail Gorbachev in the Soviet Union (processes known as Glasnost and Perestroika). In September 1986, a general amnesty was declared and the government released nearly all political prisoners. 
However, the country lacked basic stability, as the regime's efforts to organize society from the top down had failed, while the opposition's attempts at creating an "alternate society" were also unsuccessful. With the economic crisis unresolved and societal institutions dysfunctional, both the ruling establishment and the opposition began looking for ways out of the stalemate. Facilitated by the indispensable mediation of the Catholic Church, exploratory contacts were established. Student protests resumed in February 1988. Continuing economic decline led to strikes across the country in April, May and August. The Soviet Union, increasingly destabilized, was unwilling to apply military or other pressure to prop up allied regimes in trouble. The Polish government felt compelled to negotiate with the opposition and in September 1988 preliminary talks with Solidarity leaders ensued in Magdalenka. Numerous meetings that took place involved Wałęsa and General Kiszczak, among others. In November, the regime made a major public relations mistake by allowing a televised debate between Wałęsa and Alfred Miodowicz, chief of the All-Poland Alliance of Trade Unions, the official trade union organization. The fitful bargaining and intra-party squabbling led to the official Round Table Negotiations in 1989, followed by the Polish legislative election in June of that year, a watershed event marking the fall of communism in Poland. The Polish Round Table Agreement of April 1989 called for local self-government, policies of job guarantees, legalization of independent trade unions and many wide-ranging reforms. The current Sejm promptly implemented the deal and agreed to National Assembly elections that were set for 4 June and 18 June. Only 35% of the seats in the Sejm (national legislature's lower house) and all of the Senate seats were freely contested; the remaining Sejm seats (65%) were guaranteed for the communists and their allies. 
The failure of the communists at the polls (almost all of the contested seats were won by the opposition) resulted in a political crisis. The new April Novelization to the constitution called for re-establishment of the Polish presidency and on 19 July the National Assembly elected the communist leader, General Wojciech Jaruzelski, to that office. His election, seen at the time as politically necessary, was barely accomplished with tacit support from some Solidarity deputies, and the new president's position was not strong. Moreover, the unexpectedly decisive parliamentary election results created new political dynamics and attempts by the communists to form a government failed. On 19 August, President Jaruzelski asked journalist and Solidarity activist Tadeusz Mazowiecki to form a government; on 12 September, the Sejm voted to approve Prime Minister Mazowiecki and his cabinet. Mazowiecki decided to leave the economic reform entirely in the hands of economic liberals led by the new Deputy Prime Minister Leszek Balcerowicz, who proceeded with the design and implementation of his "shock therapy" policy. For the first time in post-war history, Poland had a government led by non-communists, setting a precedent soon to be followed by other Eastern Bloc nations in a phenomenon known as the Revolutions of 1989. Mazowiecki's acceptance of the "thick line" formula meant that there would be no "witch-hunt", i.e., an absence of revenge seeking or exclusion from politics in regard to former communist officials. In part because of the attempted indexation of wages, inflation reached 900% by the end of 1989, but was soon dealt with by means of radical methods. In December 1989, the Sejm approved the Balcerowicz Plan to transform the Polish economy rapidly from a centrally planned one to a free market economy. 
The Constitution of the Polish People's Republic was amended to eliminate references to the "leading role" of the communist party and the country was renamed the "Republic of Poland". The communist Polish United Workers' Party dissolved itself in January 1990. In its place, a new party, Social Democracy of the Republic of Poland, was created. "Territorial self-government", abolished in 1950, was legislated back in March 1990, to be led by locally elected officials; its fundamental unit was the administratively independent gmina. In October 1990, the constitution was amended to curtail the term of President Jaruzelski. In November 1990, the German–Polish Border Treaty was signed with unified Germany. In the presidential election held in November and December 1990, Lech Wałęsa was elected to a five-year term, becoming the first popularly elected president of Poland. Poland's first free parliamentary election was held in October 1991. Eighteen parties entered the new Sejm, but the largest representation received only 12% of the total vote. There were several post-Solidarity governments between the 1989 election and the 1993 election, after which the "post-communist" left-wing parties took over. In 1993, the formerly Soviet Northern Group of Forces, a vestige of past domination, left Poland. In 1995, Aleksander Kwaśniewski of the Social Democratic Party was elected president and remained in that capacity for the next ten years (two terms). In 1997, the new Constitution of Poland was finalized and approved in a referendum; it replaced the Small Constitution of 1992, an amended version of the communist constitution. Poland joined NATO in 1999. Elements of the Polish Armed Forces have since participated in the Iraq War and the Afghanistan War. Poland joined the European Union as part of its enlargement in 2004. However, Poland has not adopted the euro as its currency and legal tender, but instead uses the Polish złoty. 
After the election of the conservative Law and Justice party in 2015, the Polish government repeatedly clashed with EU institutions over the issue of judicial reform and was accused by the European Commission and the European Parliament of undermining "European values" and eroding democratic standards. However, the Polish government headed by the Law and Justice party maintained that the reforms were necessary due to the prevalence of corruption within the Polish judiciary and the continued presence of holdover Communist-era judges. "a."Piłsudski's family roots lay in the Polonized gentry of the Grand Duchy of Lithuania, and he saw himself and people like him as legitimate Lithuanians. This perspective put him in conflict with modern Lithuanian nationalists (who in Piłsudski's lifetime redefined the scope and meaning of the "Lithuanian" identity) and, by extension, with other nationalists, including the modern Polish nationalist movement. "b."In 1938, Poland and Romania refused to agree to a Franco-British proposal that in the event of war with Nazi Germany, Soviet forces would be allowed to cross their territories to aid Czechoslovakia. The Polish ruling elites considered the Soviets in some ways more threatening than the Nazis. The Soviet Union repeatedly declared its intention to fulfill its obligations under the 1935 treaty with Czechoslovakia and to defend Czechoslovakia militarily. A transfer of land and air forces through Poland and/or Romania would have been required, and the Soviets approached the French, who also had a treaty with Czechoslovakia (and with Poland and with the Soviet Union), about the matter. Edward Rydz-Śmigły rebuffed the French suggestion in 1936, and in 1938 Józef Beck pressured Romania not to allow even Soviet warplanes to fly over its territory. Like Hungary, Poland sought to use the German–Czechoslovak conflict to settle its own territorial grievances, namely disputes over parts of Zaolzie, Spiš and Orava. "c." 
In October 1939, the British Foreign Office notified the Soviets that the United Kingdom would be satisfied with the postwar creation of a small ethnic Poland, patterned after the Duchy of Warsaw. The establishment of a Poland restricted to a "minimal size", according to ethnographic boundaries (such as the lands common to both prewar and postwar Poland), was planned by the Soviet People's Commissariat for Foreign Affairs in 1943–1944. Such a territorial reduction was recommended by Ivan Maisky to Vyacheslav Molotov in early 1944, because of what Maisky saw as Poland's historically unfriendly disposition toward Russia and the Soviet Union, likely in some way to continue. Joseph Stalin opted for a larger version, allowing a "swap" (territorial compensation for Poland), which involved the eastern lands gained by Poland at the Peace of Riga of 1921 and now lost, and eastern Germany conquered from the Nazis in 1944–1945. In regard to the several major disputed areas: Lower Silesia west of the Oder and the Eastern Neisse rivers (the British wanted it to remain a part of the future German state), Stettin (where in 1945 the German communists had already established their administration), "Zakerzonia" (western Red Ruthenia demanded by the Ukrainians), and the Białystok region (Białystok was claimed by the communists of the Byelorussian SSR), the Soviet leader made decisions that favored Poland. Other territorial and ethnic scenarios were also possible, generally with outcomes less advantageous to Poland than the form the country assumed. "d."Timothy D. Snyder spoke of about 100,000 Jews killed by Poles during the Nazi occupation, the majority probably by members of the collaborationist Blue Police. This number would likely have been many times higher had Poland entered into an alliance with Germany in 1939, as advocated by some Polish historians and others. "e."Some may have falsely claimed Jewish identity in the hope of obtaining permission to emigrate. 
The communist authorities, pursuing the concept of an ethnically homogeneous Poland (in accordance with the recent border changes and expulsions), allowed the Jews to leave the country. For a discussion of early communist Poland's ethnic politics, see Timothy D. Snyder, "The Reconstruction of Nations", chapters on the modern "Ukrainian Borderland". "f."A Communist Party of Poland had existed in the past, but was eliminated in Stalin's purges in 1938. "g."The Soviet leadership, which had previously ordered the crushing of the Uprising of 1953 in East Germany, the Hungarian Revolution of 1956 and the Prague Spring in 1968, in late 1970 became worried about the potential demoralizing effect that a deployment against Polish workers would have on the Polish army, a crucial Warsaw Pact component. The Soviets withdrew their support for Gomułka, who insisted on the use of force; he and his close associates were subsequently ousted from the Polish Politburo by the Polish Central Committee. "h."East of the Molotov–Ribbentrop line, the population was 43% Polish, 33% Ukrainian, 8% Belarusian and 8% Jewish. The Soviet Union did not want to appear as an aggressor, and moved its troops into eastern Poland under the pretext of offering protection to "the kindred Ukrainian and Belorussian people". "i."Joseph Stalin at the 1943 Tehran Conference discussed with Winston Churchill and Franklin D. Roosevelt new post-war borders in central-eastern Europe, including the shape of a future Poland. He endorsed the Piast Concept, which justified a massive shift of Poland's frontiers to the west. Stalin resolved to secure and stabilize the western reaches of the Soviet Union and disable the future military potential of Germany by constructing a compact and ethnically defined Poland (along with the Soviet ethnic Ukraine, Belarus and Lithuania) and by radically altering the region's system of national borders. 
After 1945, the Polish communist regime wholeheartedly adopted and promoted the Piast Concept, making it the centerpiece of its claim to be the true inheritor of Polish nationalism. After all the killings and population transfers during and after the war, the country was 99% "Polish". "j.""All the currently available documents of Nazi administration show that, together with the Jews, the stratum of the Polish intelligentsia was marked for total extermination. In fact, Nazi Germany achieved this goal almost by half, since Poland lost 50 percent of her citizens with university diplomas and 35 percent of those with a gimnazium diploma." According to Brzoza and Sowa, 450,000 Polish citizens had completed higher, secondary, or trade school education by the outbreak of the war. 37.5% of people with higher education perished, as did 30% of those with general secondary education and 53.3% of trade school graduates. "k."Decisive political events took place in Poland shortly before the Hungarian Revolution of 1956. Władysław Gomułka, a reformist party leader, was reinstated to the Politburo of the PZPR and the Eighth Plenum of its Central Committee was announced to convene on 19 October 1956, all without seeking Soviet approval. The Soviet Union responded with military moves and intimidation, and its "military-political delegation", led by Nikita Khrushchev, quickly arrived in Warsaw. Gomułka tried to convince them of his loyalty but insisted on the reforms that he considered essential, including a replacement of Poland's Soviet-trusted minister of defense, Konstantin Rokossovsky. The disconcerted Soviets returned to Moscow, and the PZPR Plenum elected Gomułka first secretary and removed Rokossovsky from the Politburo. On 21 October, the Soviet Presidium followed Khrushchev's lead and decided unanimously to "refrain from military intervention" in Poland, a decision likely influenced also by the ongoing preparations for the invasion of Hungary. 
The Soviet gamble paid off, as Gomułka in the coming years turned out to be a very dependable Soviet ally and an orthodox communist. However, unlike the other Warsaw Pact countries, Poland did not endorse the Soviet armed intervention in Hungary. The Hungarian Revolution was intensely supported by the Polish public. "l."Delayed reinforcements were on their way, and the government's military commanders, Generals Tadeusz Rozwadowski and Władysław Anders, wanted to keep fighting the coup's perpetrators, but President Stanisław Wojciechowski and the government decided to capitulate to prevent the imminent spread of civil war. The coup brought to power the "Sanation" regime under Józef Piłsudski (and, after Piłsudski's death, Edward Rydz-Śmigły). The Sanation regime persecuted the opposition within the military and in general. Rozwadowski died after abusive imprisonment; according to some accounts he was murdered. Another major opponent of Piłsudski, General Włodzimierz Zagórski, disappeared in 1927. According to Aleksandra Piłsudska, the marshal's wife, following the coup and for the rest of his life Piłsudski lost his composure and appeared overburdened. At the time of Rydz-Śmigły's command, the Sanation camp embraced the ideology of Roman Dmowski, Piłsudski's nemesis. Rydz-Śmigły did not allow General Władysław Sikorski, an enemy of the Sanation movement, to participate as a soldier in the country's defense against the invasion of Poland in September 1939. During World War II, in France and then in Britain, the Polish government-in-exile became dominated by anti-Sanation politicians. Perceived Sanation followers were in turn persecuted (in exile) under prime ministers Sikorski and Stanisław Mikołajczyk. "m."General Zygmunt Berling of the Soviet-allied First Polish Army attempted in mid-September a crossing of the Vistula and a landing at Czerniaków to aid the insurgents, but the operation was defeated by the Germans and the Poles suffered heavy losses. 
"n."The decision to launch the Warsaw Uprising resulted in the destruction of the city, its population and its elites, and has been a source of lasting controversy. According to the historians Czesław Brzoza and Andrzej Leon Sowa, orders for further military offensives, issued at the end of August 1944 as a continuation of Operation Tempest, show a loss of the sense of responsibility for the country's fate on the part of the underground Polish leadership. "o."One of the party leaders, Mieczysław Rakowski, who abandoned his mentor Gomułka following the 1970 crisis, saw the demands of the demonstrating workers as "exclusively socialist" in character, because of the way they were phrased. Most people in communist Poland, including opposition activists, did not question the supremacy of socialism or the socialist idea; misconduct by party officials, such as not following the provisions of the constitution, was blamed instead. From the time of Gierek, this assumed standard of political correctness was increasingly challenged: pluralism, and then the free market, became frequently used concepts. "p."The Polish Sanation authorities were provoked by the independence-seeking Organization of Ukrainian Nationalists (OUN). The OUN engaged in political assassinations, terror and sabotage, to which the Polish state responded with a repressive campaign in the 1930s, as Józef Piłsudski and his successors imposed collective responsibility on the villagers in the affected areas. After the disturbances of 1933 and 1934, the Bereza Kartuska prison camp was established; it became notorious for its brutal regime. The government brought Polish settlers and administrators to parts of Volhynia with a centuries-old tradition of Ukrainian peasant risings against Polish landowners (and to Eastern Galicia). In the late 1930s, after Piłsudski's death, military persecution intensified and a policy of "national assimilation" was aggressively pursued. 
Military raids, public beatings, property confiscations and the closing and destruction of Orthodox churches aroused lasting enmity in Galicia and antagonized Ukrainian society in Volhynia at the worst possible moment, according to Timothy D. Snyder. However, he also notes that "Ukrainian terrorism and Polish reprisals touched only part of the population, leaving vast regions unaffected" and "the OUN's nationalist prescription, a Ukrainian state for ethnic Ukrainians alone was far from popular". Halik Kochanski wrote of the legacy of bitterness between the Ukrainians and Poles that soon exploded in the context of World War II. See also: History of the Ukrainian minority in Poland. "q."In Poland, officials of central government (the provincial office of "wojewoda") can overrule elected territorial and municipal local governments. However, in such cases "wojewoda" decisions have sometimes been invalidated by courts. "r."Foreign policy was one of the few governmental areas in which Piłsudski took an active interest. He saw Poland's role and opportunity as lying in Eastern Europe and advocated passive relations with the West. He felt that a German attack should not be feared, because even if this unlikely event were to take place, the Western powers would be bound to restrain Germany and come to Poland's rescue. "s."According to the researcher Jan Sowa, the Commonwealth failed as a state because it was not able to conform to the emerging new European order established at the Peace of Westphalia of 1648. Poland's elective kings, restricted by the self-serving and short-sighted nobility, could not impose a strong and efficient central government with its characteristic post-Westphalian internal and external sovereignty. 
The inability of Polish kings to levy and collect taxes (and therefore sustain a standing army) and to conduct an independent foreign policy was among the chief obstacles to Poland's competing effectively on the changed European scene, where absolutist power was a prerequisite for survival and became the foundation for the abolition of serfdom and the gradual formation of parliamentarism. "t."Besides the Home Army there were other major underground fighting formations: Bataliony Chłopskie, the National Armed Forces (NSZ) and Gwardia Ludowa (later Armia Ludowa). From 1943, the leaders of the nationalistic NSZ collaborated with Nazi Germany, a case unique in occupied Poland. The NSZ conducted an anti-communist civil war. Before the arrival of the Soviets, the NSZ's Holy Cross Mountains Brigade left Poland under the protection of the German army. According to the historians Czesław Brzoza and Andrzej Leon Sowa, participation figures given for the underground resistance are often inflated. In the spring of 1944, the time of the most extensive involvement of the underground organizations, there were most likely considerably fewer than 500,000 military and civilian personnel participating, over the entire spectrum from the right wing to the communists. "u."According to Jerzy Eisler, about 1.1 million people may have been imprisoned or detained in 1944–1956 and about 50,000 may have died because of the struggle and persecution, including about 7,000 soldiers of the right-wing underground killed in the 1940s. According to Adam Leszczyński, up to 30,000 people were killed by the communist regime during the first several years after the war. "v."According to Andrzej Stelmachowski, one of the key participants in the Polish systemic transformation, Minister Leszek Balcerowicz pursued extremely liberal economic policies, often extraordinarily painful for society. 
The Sejm's December 1989 statute reforming credit relations introduced an "incredible" system of privileges for banks, which were allowed to unilaterally alter interest rates on already existing contracts. The exceedingly high rates they instantly introduced ruined many previously profitable enterprises and caused a complete breakdown of the apartment block construction industry, which had long-term deleterious effects on the state budget as well. Balcerowicz's policies also caused permanent damage to Polish agriculture, an area in which he lacked expertise, and to the often successful and useful Polish cooperative movement. According to Karol Modzelewski, a dissident and critic of the economic transformation, in 1989 Solidarity no longer existed, having been in reality eliminated during the martial law period. What the "post-Solidarity elites" did in 1989 amounted to a betrayal of the old Solidarity base, and the retribution was only a matter of time. "w."Led by Władysław Anders, the Polish II Corps fought in 1944–1945 in the Allied Italian Campaign, where the corps' main engagement was the Battle of Monte Cassino. "x."The Piast Concept, of which the chief proponent was Jan Ludwik Popławski (late 19th century), was based on the claim that the Piast homeland had been inhabited by so-called "native" aboriginal Slavs and Slavonic Poles since time immemorial and only later was "infiltrated" by "alien" Celts, Germanic peoples, and others. After 1945, the so-called "autochthonous" or "aboriginal" school of Polish prehistory received official backing and a considerable degree of popular support in Poland. According to this view, the Lusatian Culture, which flourished between the Oder and the Vistula in the early Iron Age, was said to be Slavonic; all non-Slavonic tribes and peoples recorded in the area at various points in ancient times were dismissed as "migrants" and "visitors". 
In contrast, critics of this theory, such as Marija Gimbutas, regarded it as an unproved hypothesis; for them, the date and origin of the westward migration of the Slavs were largely uncharted, the Slavonic connections of the Lusatian Culture were entirely imaginary, and the presence of an ethnically mixed and constantly changing collection of peoples on the North European Plain was taken for granted. "y."According to the count presented by Prime Minister and Internal Affairs Minister Felicjan Sławoj Składkowski before the Sejm committee in January 1938, 818 people were killed in police suppression of labor protests (industrial and agricultural) during the 1932–1937 period. "z."John II Casimir Vasa is known for his remarkable and accurate prediction of the Partitions of Poland, made over a century before the event's occurrence. "a1."According to war historian Ben Macintyre, "The Polish contribution to allied victory in the Second World War was extraordinary, perhaps even decisive, but for many years it was disgracefully played down, obscured by the politics of the Cold War." "b1."Piłsudski left the Polish Socialist Party in 1914 and severed his connections with the socialist movement, but many activists from the Left and of other political orientations presumed his continuing involvement there. "c1."Woodrow Wilson's Fourteen Points program was subsequently weakened by internal developments in the US, Britain, France, and Germany. In the German case, Poland was denied the city of Danzig on the Baltic coast. "d1."In August 1918, the government of Soviet Russia issued a decree strongly supportive of the independence of Poland, but at that time no Polish lands were under Russian control.
Houston Houston is the most populous city in the U.S. state of Texas, the fourth most populous city in the United States, the most populous city in the Southern United States, and the sixth most populous in North America, with an estimated 2019 population of 2,320,268. Located in Southeast Texas near Galveston Bay and the Gulf of Mexico, it is the seat of Harris County and the principal city of the Greater Houston metropolitan area, which is the fifth most populous metropolitan statistical area in the United States and the second most populous in Texas after the Dallas–Fort Worth metroplex, with a population of 6,997,384 in 2018. Comprising a total area of , Houston is the eighth most expansive city in the United States (including consolidated city-counties). It is the largest city in the United States by total area whose government is not consolidated with that of a county, parish, or borough. Though primarily in Harris County, small portions of the city extend into Fort Bend and Montgomery counties, bordering other principal communities of Greater Houston such as Sugar Land and The Woodlands. The city of Houston was founded by land investors on August 30, 1836, at the confluence of Buffalo Bayou and White Oak Bayou (a point now known as Allen's Landing) and incorporated as a city on June 5, 1837. The city is named after former General Sam Houston, who was president of the Republic of Texas and had won Texas' independence from Mexico at the Battle of San Jacinto east of Allen's Landing. After briefly serving as the capital of the Texas Republic in the late 1830s, Houston grew steadily into a regional trading center for the remainder of the 19th century. 
The arrival of the 20th century saw a convergence of economic factors that fueled rapid growth in Houston, including a burgeoning port and railroad industry, the decline of Galveston as Texas' primary port following a devastating 1900 hurricane, the subsequent construction of the Houston Ship Channel, and the Texas oil boom. In the mid-20th century, Houston's economy diversified as it became home to the Texas Medical Center—the world's largest concentration of healthcare and research institutions—and NASA's Johnson Space Center, where the Mission Control Center is located. Since the late 19th century, Houston's economy has had a broad industrial base in energy, manufacturing, aeronautics, and transportation. A leader in the healthcare sector and in the manufacture of oilfield equipment, Houston has the second most Fortune 500 headquarters of any U.S. municipality within its city limits (after New York City). The Port of Houston ranks first in the United States in international waterborne tonnage handled and second in total cargo tonnage handled. Nicknamed the "Bayou City", "Space City", "H-Town", and "the 713", Houston has become a global city, with strengths in culture, medicine, and research. The city has a population from various ethnic and religious backgrounds and a large and growing international community. Houston is the most diverse metropolitan area in Texas and has been described as the most racially and ethnically diverse major metropolis in the U.S. It is home to many cultural institutions and exhibits, which attract more than 7 million visitors a year to the Museum District. Houston has an active visual and performing arts scene in the Theater District and offers year-round resident companies in all major performing arts. The Houston area is located on land that was once home of the Karankawa (kə rang′kə wä′,-wô′,-wə) and the Atakapa (əˈtɑːkəpə) indigenous peoples for at least 2,000 years before the first known settlers arrived. 
These tribes are almost nonexistent today; their decline was most likely caused by foreign diseases, as well as competition with various exploration groups in the 18th and 19th centuries. However, the land remained largely uninhabited until settlement in the 1830s. The first documented settlers to arrive in the Houston area were the Allen brothers. The Allen brothers—Augustus Chapman and John Kirby—explored town sites on Buffalo Bayou and Galveston Bay. According to historian David McComb, "[T]he brothers, on August 26, 1836, bought from Elizabeth E. Parrott, wife of T.F.L. Parrott and widow of John Austin, the south half of the lower league [ tract] granted to her by her late husband. They paid $5,000 total, but only $1,000 of this in cash; notes made up the remainder." The Allen brothers ran their first advertisement for Houston just four days later in the "Telegraph and Texas Register", naming the notional town in honor of President Sam Houston. They successfully lobbied the Republic of Texas Congress to designate Houston as the temporary capital, agreeing to provide the new government with a state capitol building. About a dozen persons resided in the town at the beginning of 1837, but that number grew to about 1,500 by the time the Texas Congress convened in Houston for the first time that May. Houston was granted incorporation on June 5, 1837, with James S. Holman becoming its first mayor. In the same year, Houston became the county seat of Harrisburg County (now Harris County). In 1839, the Republic of Texas relocated its capital to Austin. The town suffered another setback that year when a yellow fever epidemic claimed the lives of about one in eight residents. Yet it persisted as a commercial center, forming a symbiosis with its Gulf Coast port, Galveston. Landlocked farmers brought their produce to Houston, using Buffalo Bayou to gain access to Galveston and the Gulf of Mexico. 
Houston merchants profited from selling staples to farmers and shipping the farmers' produce to Galveston. The great majority of slaves in Texas came with their owners from the older slave states. Sizable numbers, however, came through the domestic slave trade. New Orleans was the center of this trade in the Deep South, but there were slave dealers in Houston as well. Thousands of enslaved blacks lived near the city before the American Civil War. Many of those near the city worked on sugar and cotton plantations, while most of those within the city limits held domestic and artisan jobs. In 1840, the community established a chamber of commerce, in part to promote shipping and navigation at the newly created port on Buffalo Bayou. By 1860, Houston had emerged as a commercial and railroad hub for the export of cotton. Railroad spurs from the Texas interior converged in Houston, where they met rail lines to the ports of Galveston and Beaumont. During the American Civil War, Houston served as a headquarters for General John Magruder, who used the city as an organization point for the Battle of Galveston. After the Civil War, Houston businessmen initiated efforts to widen the city's extensive system of bayous so the city could accept more commerce between Downtown and the nearby port of Galveston. By 1890, Houston was the railroad center of Texas. In 1900, after Galveston was struck by a devastating hurricane, efforts to make Houston into a viable deep-water port were accelerated. The following year, the discovery of oil at the Spindletop oil field near Beaumont prompted the development of the Texas petroleum industry. In 1902, President Theodore Roosevelt approved a $1 million improvement project for the Houston Ship Channel. By 1910, the city's population had reached 78,800, almost doubling from a decade before. African Americans formed a large part of the city's population, numbering 23,929 people, nearly one-third of Houston's residents. 
President Woodrow Wilson opened the deep-water Port of Houston in 1914, seven years after digging began. By 1930, Houston had become Texas' most populous city and Harris County the most populous county. In 1940, the U.S. Census Bureau reported Houston's population as 77.5% white and 22.4% black. When World War II started, tonnage levels at the port decreased and shipping activities were suspended; however, the war did provide economic benefits for the city. Petrochemical refineries and manufacturing plants were constructed along the ship channel because of the demand for petroleum and synthetic rubber products by the defense industry during the war. Ellington Field, initially built during World War I, was revitalized as an advanced training center for bombardiers and navigators. The Brown Shipbuilding Company was founded in 1942 to build ships for the U.S. Navy during World War II. Due to the boom in defense jobs, thousands of new workers migrated to the city, with blacks and whites competing for the higher-paying jobs. President Roosevelt had established a policy of nondiscrimination for defense contractors, and blacks gained some opportunities, especially in shipbuilding, although not without resistance from whites and increasing social tensions that erupted into occasional violence. Economic gains of blacks who entered defense industries continued in the postwar years. In 1945, the M.D. Anderson Foundation formed the Texas Medical Center. After the war, Houston's economy reverted to being primarily port-driven. In 1948, the city annexed several unincorporated areas, more than doubling its size. Houston proper began to spread across the region. In 1950, the availability of air conditioning provided impetus for many companies to relocate to Houston, where wages were lower than those in the North; this resulted in an economic boom and produced a key shift in the city's economy toward the energy sector. 
The increased production of the expanded shipbuilding industry during World War II spurred Houston's growth, as did the establishment in 1961 of NASA's "Manned Spacecraft Center" (renamed the Lyndon B. Johnson Space Center in 1973). This was the stimulus for the development of the city's aerospace industry. The Astrodome, nicknamed the "Eighth Wonder of the World", opened in 1965 as the world's first indoor domed sports stadium. During the late 1970s, Houston had a population boom as people from the Rust Belt states moved to Texas in large numbers. The new residents came for the numerous employment opportunities in the petroleum industry, created as a result of the Arab oil embargo. With the increase in professional jobs, Houston has become a destination for many college-educated persons, most recently including African Americans in a reverse Great Migration from northern areas. In 1997, Houstonians elected Lee P. Brown as the city's first African American mayor. In June 2001, Tropical Storm Allison dumped up to of rain on parts of Houston, causing what was then the worst flooding in the city's history. The storm cost billions of dollars in damage and killed 20 people in Texas. By December of the same year, Houston-based energy company Enron had collapsed into the largest U.S. bankruptcy (at that time), a result of being investigated for off-the-books partnerships which were allegedly used to hide debt and inflate profits. The company lost no less than $70 billion. In August 2005, Houston became a shelter to more than 150,000 people from New Orleans who had evacuated in the wake of Hurricane Katrina. One month later, about 2.5 million Houston-area residents evacuated when Hurricane Rita approached the Gulf Coast; the storm ultimately caused little damage to the Houston area. This was the largest urban evacuation in the history of the United States. In September 2008, Houston was hit by Hurricane Ike. 
As many as 40% of Galveston Island residents refused to leave because they feared the type of traffic problems that had occurred during the Hurricane Rita evacuation. During its recent history, Houston has flooded several times from heavy rainfall, which has become increasingly common. The problem has been exacerbated by the city's lack of zoning laws, which allowed unregulated building of residential homes and other structures in flood-prone areas. During the floods in 2015 and 2016, each of which dropped at least a foot of rain, parts of the city were covered in several inches of water. Even worse flooding occurred in late August 2017, when Hurricane Harvey stalled over southeastern Texas, much as Tropical Storm Allison had sixteen years earlier, causing severe flooding in the Houston area. Rainfall exceeded 50 inches in several areas locally, breaking the national record for rainfall. The damage to the Houston area is estimated at up to $125 billion, and the storm is considered one of the worst natural disasters in the history of the United States, with a death toll exceeding 70 people. On January 31, 2018, the Houston City Council agreed to forgive the large water bills thousands of households faced in the aftermath of Hurricane Harvey, after Houston Public Works found that 6,362 homeowners' water utility bills had at least doubled. Houston has also been the site of numerous industrial disasters and construction accidents. In 2019, OSHA found that Texas led the nation in crane accidents. In Houston, a 2008 crane collapse at a refinery killed 4 people and injured 6; the crane was one of the largest in the nation, with a 400-foot boom that could lift more than a million pounds. Because of the industrial infrastructure in and around Houston, natural disasters like Hurricane Harvey have also led to numerous toxic spills and disasters, including the 2017 Arkema plant explosion.
Houston is located east of Austin, west of the Louisiana border, and south of Dallas. The city's total area comprises mostly land, with a small portion covered by water. Most of Houston is located on the gulf coastal plain, and its vegetation is classified as Western Gulf coastal grasslands; further north, it transitions into a subtropical jungle, the Big Thicket. Much of the city was built on forested land, marshes, or swamps, all of which are still visible in surrounding areas. Flat terrain and extensive greenfield development have combined to worsen flooding. Downtown stands about 50 feet (15 m) above sea level, and the highest point, in far northwest Houston, is about 125 feet (38 m) in elevation. The city once relied on groundwater for its needs, but land subsidence forced it to turn to surface-water sources such as Lake Houston, Lake Conroe, and Lake Livingston. The city owns surface water rights for 1.20 billion gallons of water a day, in addition to 150 million gallons a day of groundwater. Houston has four major bayous passing through the city that accept water from the extensive drainage system. Buffalo Bayou runs through Downtown and the Houston Ship Channel and has three tributaries: White Oak Bayou, which runs through the Houston Heights community northwest of Downtown and then toward Downtown; Brays Bayou, which runs along the Texas Medical Center; and Sims Bayou, which runs through the south of Houston and downtown Houston. The ship channel continues past Galveston and then into the Gulf of Mexico. Houston is a flat, marshy area over which an extensive drainage system has been built. The adjoining prairie land drains into the city, which is prone to flooding. Underpinning Houston's land surface are unconsolidated clays, clay shales, and poorly cemented sands up to several miles deep. The region's geology developed from river deposits formed by the erosion of the Rocky Mountains.
These sediments consist of a series of sands and clays deposited on decaying organic marine matter that, over time, transformed into oil and natural gas. Beneath the layers of sediment is a water-deposited layer of halite, a rock salt. The porous layers were compressed over time and forced upward. As it pushed upward, the salt dragged surrounding sediments into salt dome formations, often trapping oil and gas that seeped from the surrounding porous sands. The thick, rich, sometimes black surface soil is suitable for rice farming in the suburban outskirts, where the city continues to grow. The Houston area has over 150 active faults (an estimated 300 active faults) with an aggregate length of hundreds of miles, including the Long Point–Eureka Heights fault system, which runs through the center of the city. No significant historically recorded earthquakes have occurred in Houston, but researchers do not discount the possibility of such quakes having occurred in the deeper past, nor of their occurring in the future. Land in some areas southeast of Houston is sinking because water has been pumped out of the ground for many years. The sinking may be associated with slip along the faults; however, the slippage is slow and not considered an earthquake, since stationary faults must slip suddenly enough to create seismic waves. These faults also tend to move at a smooth rate in what is termed "fault creep", which further reduces the risk of an earthquake. Houston's climate is classified as humid subtropical ("Cfa" in the Köppen climate classification system), typical of the Southern United States. While Houston is not located in Tornado Alley, like much of North Texas, spring supercell thunderstorms sometimes bring tornadoes to the area. Prevailing winds are from the south and southeast during most of the year, bringing heat and moisture from the nearby Gulf of Mexico and Galveston Bay. Summers in Houston are hot and humid, with temperatures reaching 90 °F (32 °C) almost daily.
The city reaches or surpasses this temperature on an average of 107 days per year, including a majority of days from June to September; additionally, an average of 5 days per year reach or exceed 100 °F (38 °C). Houston's characteristic subtropical humidity often results in a higher apparent temperature, and summer mornings average over 90% relative humidity. Air conditioning is ubiquitous in Houston; in 1981, annual spending on electricity for interior cooling exceeded $600 million, and by the late 1990s approximately 90% of Houston homes featured air conditioning systems. The highest temperature ever recorded in Houston is 109 °F (43 °C), reached at Bush Intercontinental Airport on September 4, 2000, and again on August 27, 2011. Houston has mild winters, with occasional cold spells. In January, the normal mean temperature at George Bush Intercontinental Airport is around 53 °F (12 °C), with an average of 13 days per year with a low at or below freezing, occurring on average between December 3 and February 20, allowing for a growing season of 286 days. Twenty-first-century snow events in Houston include a storm on December 24, 2004, which saw snow accumulate in parts of the metro area, and an event on December 7, 2017, which brought measurable snowfall across the city. Measurable snowfall on both December 10, 2008, and December 4, 2009, marked the first time in the city's recorded history that it had occurred in two consecutive years. Overall, Houston saw measurable snowfall 38 times between 1895 and 2018. On February 14 and 15, 1895, Houston received 20 inches (0.5 m) of snow, its largest snowfall from one storm on record. The coldest temperature officially recorded in Houston was 5 °F (−15 °C), on January 18, 1930. Houston generally receives ample rainfall, averaging about 50 inches (1,270 mm) annually based on records between 1981 and 2010. Many parts of the city have a high risk of localized flooding due to flat topography, ubiquitous low-permeability clay-silt prairie soils, and inadequate infrastructure.
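The "Cfa" label in the Köppen system is mechanical: a handful of thresholds applied to monthly climate normals decide the class. The sketch below is a simplified version of the test (assuming the common 0 °C cold-month convention and a crude "no dry season" check; the real Köppen rules are more involved, and the monthly figures used here are rough, illustrative approximations for Houston, not official normals):

```python
# Simplified Köppen "Cfa" (humid subtropical) test. Thresholds follow
# the common Köppen-Geiger convention; the "f" (no dry season) check
# is deliberately crude for illustration.

def is_cfa(monthly_temp_c, monthly_precip_mm):
    coldest = min(monthly_temp_c)
    warmest = max(monthly_temp_c)
    temperate = 0 <= coldest < 18                  # "C": coolest month mild, above freezing
    hot_summer = warmest >= 22                     # "a": hottest month at least 22 degrees C
    no_dry_season = min(monthly_precip_mm) >= 30   # simplified "f" test
    return temperate and hot_summer and no_dry_season

# Illustrative (approximate, unofficial) monthly normals for Houston:
houston_temp_c = [12, 14, 17, 21, 25, 28, 29, 29, 27, 22, 17, 13]
houston_precip_mm = [86, 80, 87, 87, 129, 152, 96, 96, 104, 145, 110, 96]

print(is_cfa(houston_temp_c, houston_precip_mm))  # True
```

A hot-desert profile with a very dry spring and summer would fail the "no dry season" check and fall outside "Cfa", which is the distinction the classification is drawing.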
During the mid-2010s, Greater Houston experienced consecutive major flood events in 2015 ("Memorial Day"), 2016 ("Tax Day"), and 2017 (Hurricane Harvey). Overall, there have been more casualties and property loss from floods in Houston than in any other locality in the United States. The majority of rainfall occurs between April and October (the wet season of Southeast Texas), when moisture from the Gulf of Mexico evaporates extensively over the city. Houston has excessive ozone levels and is routinely ranked among the most ozone-polluted cities in the United States. Ground-level ozone, or smog, is Houston's predominant air pollution problem, with the American Lung Association rating the metropolitan area's ozone level twelfth on its "Most Polluted Cities by Ozone" list in 2017, after major cities such as Los Angeles, Phoenix, New York City, and Denver. The industries located along the ship channel are a major cause of the city's air pollution. The rankings are based on peak measurements, focusing strictly on the worst days of the year; average ozone levels in Houston are lower than those in most other areas of the country, as dominant winds ensure clean, marine air from the Gulf. Excessive man-made emissions in the Houston area have led to a persistent increase of atmospheric carbon dioxide over the city. Such an increase, often referred to as a "CO2 urban dome", is driven by a combination of strong emissions and stagnant atmospheric conditions. Moreover, Houston is the only metropolitan area with fewer than ten million inhabitants where such a CO2 dome can be detected by satellites. Because of Houston's ample year-round rainfall and proximity to the Gulf Coast, the city is prone to flooding from heavy rains; the most notable flooding events include Hurricane Harvey in 2017 and Tropical Storm Imelda in 2019.
In response to Hurricane Harvey, Mayor Sylvester Turner of Houston initiated plans to require developers to build homes that would be less susceptible to flooding by raising them two feet above the 500-year floodplain. Hurricane Harvey had damaged hundreds of thousands of homes and dumped trillions of gallons of water into the city, in places leaving feet of standing water that blocked streets and flooded homes. The Houston City Council passed the regulation in 2018 with a vote of 9–7. It is estimated that if these rules had been in place earlier, 84% of the homes in the 100-year and 500-year floodplains would not have been damaged. The regulations were soon tested near Brickhouse Gulley, an old golf course that had served as a floodplain and reservoir for flood waters. A large developer, Meritage Homes, bought the land and planned to develop the 500-year floodplain into 900 residential homes, a plan that would bring in $360 million in revenue and boost tax revenue for the city. To meet the new regulations, all the developer had to do was fill the site to raise its ground level two feet above the 500-year floodplain and build a trench to deal with runoff. Before Hurricane Harvey, the city had bought $10.7 million worth of houses in this area specifically to take them out of a dangerous zone; the sudden change of heart, especially after Hurricane Harvey, seems likely to be motivated by additional tax revenue. In addition to being a floodplain, the area is also a floodway, making it even more hazardous to live there. Harris County, like other counties, faces the common problem that it cannot dictate where developers may and may not build; it can only impose regulations. Houston was incorporated in 1837 and adopted a ward system of representation shortly afterward, in 1840.
The six original wards of Houston are the progenitors of the 11 modern-day geographically oriented Houston City Council districts, though the city abandoned the ward system in 1905 in favor of a commission government and, later, the existing mayor–council government. Locations in Houston are generally classified as either inside or outside the Interstate 610 loop. The "Inner Loop" encompasses an area that includes Downtown, pre–World War II residential neighborhoods and streetcar suburbs, and newer high-density apartment and townhouse developments. Outside the loop, the city's typology is more suburban, though many major business districts—such as Uptown, Westchase, and the Energy Corridor—lie well outside the urban core. In addition to Interstate 610, two additional loop highways encircle the city: Beltway 8, with a radius of approximately 10 miles (16 km) from Downtown, and State Highway 99 (the Grand Parkway), with a radius of approximately 25 miles (40 km). As of 2015, approximately 470,000 people lived within the Interstate 610 loop, while 1.65 million lived between Interstate 610 and Beltway 8 and 2.25 million lived in Harris County outside Beltway 8. Though Houston is the largest city in the United States without formal zoning regulations, it has developed similarly to other Sun Belt cities because its land use regulations and legal covenants have played a similar role. Regulations include mandatory lot sizes for single-family houses and requirements that parking be available to tenants and customers. Such restrictions have had mixed results: though some have blamed the city's low density, urban sprawl, and lack of pedestrian-friendliness on these policies, others have credited the city's land use patterns with providing significant affordable housing, sparing Houston the worst effects of the 2008 real estate crisis. The city issued 42,697 building permits in 2008 and ranked first on the list of healthiest housing markets for 2009.
In referendums in 1948, 1962, and 1993, voters rejected efforts to establish separate residential and commercial land-use districts. Consequently, rather than a single central business district serving as the center of the city's employment, multiple districts have grown up throughout the city in addition to Downtown, including Uptown, the Texas Medical Center, Midtown, Greenway Plaza, Memorial City, the Energy Corridor, Westchase, and Greenspoint. Houston had the fifth-tallest skyline in North America (after New York City, Chicago, Toronto, and Miami) and the 36th-tallest in the world in 2015. A seven-mile (11 km) system of tunnels and skywalks links Downtown buildings containing shops and restaurants, enabling pedestrians to avoid summer heat and rain while walking between buildings. In the 1960s, Downtown Houston consisted of a collection of midrise office structures, and by 1970 it was on the threshold of an energy-industry-led boom. A succession of skyscrapers was built throughout the 1970s—many by real estate developer Gerald D. Hines—culminating with Houston's tallest skyscraper, the 75-floor, 1,002-foot (305 m) JPMorgan Chase Tower (formerly the Texas Commerce Tower), completed in 1982. It is the tallest structure in Texas and the 19th-tallest building in the United States, and was previously the 85th-tallest skyscraper in the world, based on highest architectural feature. In 1983, the 71-floor, 992-foot (302 m) Wells Fargo Plaza (formerly Allied Bank Plaza) was completed, becoming the second-tallest building in Houston and Texas; based on highest architectural feature, it is the 21st-tallest in the United States. In 2007, Downtown had over 43 million square feet (4,000,000 m²) of office space. Centered on Post Oak Boulevard and Westheimer Road, the Uptown District boomed during the 1970s and early 1980s, when a collection of midrise office buildings, hotels, and retail developments appeared along Interstate 610 West. Uptown became one of the most prominent instances of an edge city.
The tallest building in Uptown is the 64-floor, 901-foot (275 m) Williams Tower (known as the Transco Tower until 1999), a landmark designed by Philip Johnson and John Burgee. At the time of its construction, it was believed to be the world's tallest skyscraper outside a central business district. The new 20-story Skanska building and BBVA Compass Plaza are the first new office buildings built in Uptown in 30 years. The Uptown District is also home to buildings designed by noted architects I. M. Pei, César Pelli, and Philip Johnson. In the late 1990s and early 2000s, a mini-boom of midrise and highrise residential tower construction occurred, with several towers over 30 stories tall. Since 2000, over 30 skyscrapers have been developed in Houston; all told, 72 high-rises tower over the city, adding up to about 8,300 units. In 2002, Uptown had more than 23 million square feet (2,100,000 m²) of office space, including 16 million square feet (1,500,000 m²) of class A office space. The 2010 United States Census reported that Houston had a population of 2,100,263 residents. In 2017, the census-estimated population rose to 2,312,717, and in 2018 to 2,325,502. An estimated 600,000 undocumented immigrants resided in the Houston area in 2017, comprising nearly 9% of the city's metropolitan population. Per the American Community Survey's 2014–2018 estimates, Houston's age distribution was 486,083 under 15; 147,710 aged 15 to 19; 603,586 aged 20 to 34; 726,877 aged 35 to 59; and 357,834 aged 60 and older. The median age was 33.1, up from 32.9 in 2017 and down from 33.5 in 2014; the city's youthfulness was attributed to an influx of Hispanic or Latin American and Asian immigrants and of African Americans in a New Great Migration into Texas. For every 100 females, there were 98.5 males. There were 976,745 housing units in 2018 and 848,340 households; 42.9% of Houstonians owned their housing units, with an average of 2.67 persons per household.
The median monthly owner costs with a mortgage were $1,598, and $524 without a mortgage. Houston's median gross rent from 2014-2018 was $990. The median household income in 2018 was $51,140 and 20.6% of Houstonians lived below the poverty line. Houston is a majority-minority city. The Rice University Kinder Institute for Urban Research, a think tank, has described Greater Houston as "one of the most ethnically and culturally diverse metropolitan areas in the country". Houston's diversity, fueled by large waves of immigrants, has been attributed to its relatively low cost of living, strong job market, and role as a hub for refugee resettlement. Houston has long been known as a popular destination for African-Americans due to the city's well-established and influential African American community and hip-hop culture. A 2012 Kinder Institute report found that, based on the evenness of population distribution between the four major racial groups in the United States (non-Hispanic white, non-Hispanic black, Hispanic or Latino, and Asian), Greater Houston was the most ethnically diverse metropolitan area in the United States, ahead of New York City. In 2017, according to the U.S. Census Bureau, non-Hispanic whites made up 24.9% of the population of Houston proper, Hispanics or Latinos 44.5%, Blacks or African Americans 22.9%, and Asians 6.7%. In 2018, non-Hispanic whites made up 24.6% of the population, Hispanics or Latinos 44.8%, Blacks or African Americans 22.5%, and Asians 6.9%. Compared with its metropolitan area, the city of Houston's population has a higher proportion of minorities than whites. In 2010, whites (including Hispanic whites) made up 51% of the city of Houston's population; 26% of the total population was non-Hispanic whites. 
Blacks or African Americans made up 25% of Houston's population, American Indians 0.7%, Asians 6% (1.7% Vietnamese, 1.3% Chinese, 1.3% Indian, 0.9% Pakistani, 0.4% Filipino, 0.3% Korean, 0.1% Japanese), and Pacific Islanders 0.1%. Individuals from some other race made up 15.2% of the city's population, of whom 0.2% were non-Hispanic. Individuals from two or more races made up 3.3% of the city. At the 2000 Census, the racial makeup of the city was 49.3% White, 25.3% Black or African American, 5.3% Asian, 0.7% American Indian, 0.1% Pacific Islander, 16.5% from some other race, and 3.1% from two or more races. In addition, Hispanics made up 37.4% of Houston's population in 2000, while non-Hispanic whites made up 30.8%. The proportion of non-Hispanic whites in Houston has decreased significantly since 1970, when it was 62.4%. Historically, Houston has been a center of Protestant Christianity, as part of the Bible Belt. Other Christian groups, including Eastern and Oriental Orthodox Christianity, and non-Christian religions did not grow for much of the city's history because immigration was predominantly from Western Europe (which at the time was dominated by Western Christianity and favored by the quotas in federal immigration law). The Immigration and Nationality Act of 1965 removed the quotas, allowing for the growth of other religions. According to a 2014 study by the Pew Research Center, 73% of the population of the Houston area identified as Christian, about 50% of whom claimed Protestant affiliations and about 19% Roman Catholic affiliations; nationwide, about 71% of respondents identified as Christian. About 20% of Houston-area residents claimed no religious affiliation, compared to about 23% nationwide. The same study found that area residents identifying with other religions (including Judaism, Buddhism, Islam, and Hinduism) collectively made up about 7% of the area population.
Lakewood Church in Houston, led by Pastor Joel Osteen, is the largest church in the United States. A megachurch, it had 44,800 weekly attendees in 2010, up from 11,000 weekly in 2000. Since 2005 it has occupied the former Compaq Center sports arena. In September 2010, "Outreach Magazine" published a list of the 100 largest Christian churches in the United States; the Houston-area churches on the list were Lakewood, Second Baptist Church Houston, Woodlands Church, Church Without Walls, and First Baptist Church. According to the list, Houston and Dallas were tied as the second most popular city for megachurches. The Roman Catholic Archdiocese of Galveston-Houston, the largest Catholic jurisdiction in Texas and fifth-largest in the United States, was established in 1847; it claims approximately 1.7 million Catholics within its boundaries. A variety of Eastern and Oriental Orthodox churches can be found in Houston, and immigrants from Eastern Europe, the Middle East, Ethiopia, India, and other areas have added to the city's Eastern and Oriental Orthodox population. As of 2011, there were 32,000 people in the entire state of Texas who actively attended Orthodox churches. In 2013, Father John Whiteford, the pastor of St. Jonah Orthodox Church near Spring, stated that there were about 6,000 to 9,000 Eastern Orthodox Christians in Houston. Houston's Jewish community, estimated at 47,000 in 2001, has been present in the city since the 1800s. Houstonian Jews have origins from throughout the United States, Israel, Mexico, Russia, and other places. As of 2016, there were over 40 synagogues in Greater Houston. The largest synagogues in Houston are Congregation Beth Yeshurun, a Conservative Jewish temple, and the Reform Jewish congregations Beth Israel and Emanu-El. Houston has a large and diverse Muslim community, the largest in Texas and the Southern United States as of 2012.
It is estimated that Muslims make up 1.2% of Houston's population. As of 2016, Muslims in the Houston area included South Asians, Middle Easterners, Africans, Turks, and Indonesians. In 2000 there were over 41 mosques and storefront religious centers, with the largest being the "Al-Noor" Mosque (Mosque of Light) of the Islamic Society of Greater Houston. Houston is recognized worldwide for its energy industry—particularly for oil and natural gas—as well as for biomedical research and aeronautics. Renewable energy sources—wind and solar—are also growing economic bases in the city. The Houston Ship Channel is also a large part of Houston's economic base. Because of these strengths, Houston is designated as a global city by the Globalization and World Cities Study Group and Network and global management consulting firm A.T. Kearney. The Houston area is the top U.S. market for exports, surpassing New York City in 2013, according to data released by the U.S. Department of Commerce's International Trade Administration. In 2012, the Houston–The Woodlands–Sugar Land area recorded $110.3 billion in merchandise exports. Petroleum products, chemicals, and oil and gas extraction equipment accounted for roughly two-thirds of the metropolitan area's exports last year. The top three destinations for exports were Mexico, Canada, and Brazil. The Houston area is a leading center for building oilfield equipment. Much of its success as a petrochemical complex is due to its busy ship channel, the Port of Houston. In the United States, the port ranks first in international commerce and 16th among the largest ports in the world. Unlike most places, high oil and gasoline prices are beneficial for Houston's economy, as many of its residents are employed in the energy industry. Houston is the beginning or end point of numerous oil, gas, and products pipelines. 
The Houston–The Woodlands–Sugar Land metro area's gross domestic product (GDP) in 2016 was $478 billion, the sixth-largest of any metropolitan area in the United States and larger than the GDP of Iran, Colombia, or the United Arab Emirates. Only 27 countries other than the United States have a gross domestic product exceeding Houston's regional gross area product (GAP). In 2010, mining (which in Houston consists almost entirely of exploration and production of oil and gas) accounted for 26.3% of Houston's GAP, up sharply in response to high energy prices and a decreased worldwide surplus of oil production capacity; it was followed by engineering services, health services, and manufacturing. The University of Houston System's annual impact on the Houston area's economy equates to that of a major corporation: $1.1 billion in new funds attracted annually to the Houston area, $3.13 billion in total economic benefit, and 24,000 local jobs generated. This is in addition to the 12,500 new graduates the U.H. System produces every year who enter the workforce in Houston and throughout Texas. These degree-holders tend to stay in Houston; after five years, 80.5% of graduates are still living and working in the region. In 2006, the Houston metropolitan area ranked first in Texas and third in the U.S. in the category of "Best Places for Business and Careers" by "Forbes" magazine. Ninety-one foreign governments have established consular offices in Houston's metropolitan area, the third-highest number in the nation. Forty foreign governments maintain trade and commercial offices in the city, and there are 23 active foreign chambers of commerce and trade associations. Twenty-five foreign banks representing 13 nations operate in Houston, providing financial assistance to the international community.
In 2008, Houston received top ranking on "Kiplinger's Personal Finance" "Best Cities of 2008" list, which ranks cities on their local economy, employment opportunities, reasonable living costs, and quality of life. The city ranked fourth for the highest increase in local technological innovation over the preceding 15 years, according to "Forbes" magazine. In the same year, the city ranked second on the annual "Fortune" 500 list of company headquarters, first on "Forbes" magazine's list of "Best Cities for College Graduates", and first on its list of "Best Cities to Buy a Home". In 2010, the city was rated the best city for shopping, according to "Forbes". In 2012, the city was ranked number one for paycheck worth by "Forbes", and in late May 2013, Houston was identified as America's top city for employment creation. In 2013, Houston was identified as the number one U.S. city for job creation by the U.S. Bureau of Labor Statistics: it was not only the first major city to regain all the jobs lost in the preceding economic downturn, but after the crash more than two jobs were added for every one lost. Economist Patrick Jankowski, vice president of research at the Greater Houston Partnership, attributed Houston's success to the ability of the region's real estate and energy industries to learn from historical mistakes. Furthermore, Jankowski stated that "more than 100 foreign-owned companies relocated, expanded or started new businesses in Houston" between 2008 and 2010, and this openness to external business boosted job creation during a period when domestic demand was problematically low. Also in 2013, Houston again appeared on "Forbes"' list of "Best Places for Business and Careers". Located in the American South, Houston is a diverse city with a large and growing international community.
The Greater Houston metropolitan area is home to an estimated 1.1 million residents (21.4 percent) who were born outside the United States, with nearly two-thirds of the area's foreign-born population from south of the United States–Mexico border. Additionally, more than one in five foreign-born residents are from Asia. The city is home to the nation's third-largest concentration of consular offices, representing 92 countries. Many annual events celebrate the diverse cultures of Houston. The largest and longest-running, held over 20 days from early to late March, is the Houston Livestock Show and Rodeo, the largest annual livestock show and rodeo in the world. Another large celebration is the annual night-time Houston Gay Pride Parade, held at the end of June. Other notable annual events include the Houston Greek Festival, the Art Car Parade, the Houston Auto Show, the Houston International Festival, and the Bayou City Art Festival, which is considered one of the top five art festivals in the United States. Houston is highly regarded for its diverse food and restaurant culture, and several major publications have consistently named it one of "America's Best Food Cities". Houston received the official nickname of "Space City" in 1967 because it is the location of NASA's Lyndon B. Johnson Space Center. Other nicknames often used by locals include "Bayou City", "Clutch City", "Crush City", "Magnolia City", "H-Town", and "Culinary Capital of the South". The Houston Theater District, located in Downtown, is home to nine major performing arts organizations and six performance halls, the second-largest concentration of theater seats in a downtown area in the United States. Houston is one of the few United States cities with permanent, professional, resident companies in all major performing arts disciplines: opera (Houston Grand Opera), ballet (Houston Ballet), music (Houston Symphony Orchestra), and theater (The Alley Theatre, Theatre Under the Stars).
Houston is also home to folk artists, art groups, and various small progressive arts organizations. Houston attracts many touring Broadway acts, concerts, shows, and exhibitions for a variety of interests. Facilities in the Theater District include the Jones Hall—home of the Houston Symphony Orchestra and Society for the Performing Arts—and the Hobby Center for the Performing Arts. The Museum District's cultural institutions and exhibits attract more than 7 million visitors a year. Notable facilities include The Museum of Fine Arts, the Houston Museum of Natural Science, the Contemporary Arts Museum Houston, the Station Museum of Contemporary Art, the Holocaust Museum Houston, the Children's Museum of Houston, and the Houston Zoo. Located near the Museum District are The Menil Collection, the Rothko Chapel, the Moody Center for the Arts, and the Byzantine Fresco Chapel Museum. Bayou Bend is a facility of the Museum of Fine Arts that houses one of America's most prominent collections of decorative art, paintings, and furniture; it is the former home of Houston philanthropist Ima Hogg. The National Museum of Funeral History is located in Houston near George Bush Intercontinental Airport. The museum houses the original Popemobile used by Pope John Paul II in the 1980s, along with numerous hearses, embalming displays, and information on famous funerals. Venues across Houston regularly host local and touring rock, blues, country, dubstep, and Tejano musical acts. While Houston has never been widely known for its music scene, Houston hip-hop has become a significant, independent music scene that is influential nationwide. Houston is the birthplace of the chopped and screwed remixing technique in hip-hop, pioneered by the city's DJ Screw. Other notable hip-hop artists from the area include Slim Thug, Paul Wall, Mike Jones, Bun B, Geto Boys, Chamillionaire, South Park Mexican, Travis Scott, and Megan Thee Stallion.
The Theater District is a 17-block area in the center of Downtown Houston that is home to the Bayou Place entertainment complex, restaurants, movie theaters, plazas, and parks. Bayou Place is a large multilevel building containing full-service restaurants, bars, live music, billiards, and a Sundance Cinema. The Bayou Music Center stages live concerts, stage plays, and stand-up comedy. Space Center Houston is the official visitors' center of NASA's Lyndon B. Johnson Space Center. The Space Center has many interactive exhibits, including moon rocks, a shuttle simulator, and presentations about the history of NASA's manned space flight program. Other tourist attractions include the Galleria (Texas' largest shopping mall, located in the Uptown District), Old Market Square, the Downtown Aquarium, and Sam Houston Race Park. Houston's current Chinatown and the Mahatma Gandhi District are two major ethnic enclaves, reflecting Houston's multicultural makeup. Restaurants, bakeries, traditional-clothing boutiques, and specialty shops can be found in both areas. Houston is home to 337 parks, including Hermann Park, Terry Hershey Park, Lake Houston Park, Memorial Park, Tranquility Park, Sesquicentennial Park, Discovery Green, Buffalo Bayou Park, and Sam Houston Park. Within Hermann Park are the Houston Zoo and the Houston Museum of Natural Science. Sam Houston Park contains restored and reconstructed homes which were originally built between 1823 and 1905. A proposal has been made to open the city's first botanic garden at Herman Brown Park. Of the 10 most populous U.S. cities, Houston has the most total area of parks and green space. The city also manages over 200 additional green spaces, including the Houston Arboretum and Nature Center. The Lee and Joe Jamail Skatepark is a public skatepark owned and operated by the city of Houston, and is one of the largest skateparks in Texas, consisting of a 30,000-square-foot (2,800 m2) in-ground facility. The Gerald D. 
Hines Waterwall Park—located in the Uptown District of the city—is a popular site for tourism, weddings, and various celebrations. A 2011 study by Walk Score ranked Houston the 23rd most walkable of the 50 largest cities in the United States. Houston has sports teams for every major professional league except the National Hockey League. The Houston Astros are a Major League Baseball expansion team formed in 1962 (known as the "Colt .45s" until 1965) that won the World Series in 2017 and previously appeared in 2005. It is the only MLB team to have won pennants in both modern leagues. The Houston Rockets are a National Basketball Association franchise based in the city since 1971. They have won two NBA Championships, one in 1994 and another in 1995, under star players Hakeem Olajuwon, Otis Thorpe, Clyde Drexler, Vernon Maxwell, and Kenny Smith. The Houston Texans are a National Football League expansion team formed in 2002. The Houston Dynamo is a Major League Soccer franchise that has been based in Houston since 2006, winning two MLS Cup titles, in 2006 and 2007. The Houston Dash team plays in the National Women's Soccer League. The Houston SaberCats are a rugby team that plays in Major League Rugby. Minute Maid Park (home of the Astros) and Toyota Center (home of the Rockets) are located in Downtown Houston. Houston has the NFL's first retractable-roof stadium with natural grass, NRG Stadium (home of the Texans). Minute Maid Park is also a retractable-roof stadium. Toyota Center also has the largest screen for an indoor arena in the United States, built to coincide with the arena's hosting of the 2013 NBA All-Star Game. BBVA Compass Stadium is a soccer-specific stadium for the Houston Dynamo, the Texas Southern Tigers football team, and the Houston Dash, located in East Downtown. Aveva Stadium (home of the SaberCats) is located in south Houston. In addition, the NRG Astrodome, built in 1965, was the first indoor stadium in the world. 
Other sports facilities include Hofheinz Pavilion (Houston Cougars basketball), Rice Stadium (Rice Owls football), and NRG Arena. TDECU Stadium is where the University of Houston's Cougars football team plays. Houston has hosted several major sports events: the 1968, 1986 and 2004 Major League Baseball All-Star Games; the 1989, 2006 and 2013 NBA All-Star Games; and Super Bowl VIII, Super Bowl XXXVIII, and Super Bowl LI. The city also hosted the 1981, 1986, 1994 and 1995 NBA Finals, winning the latter two, as well as the 2005, 2017 and 2019 World Series, winning its first baseball championship during the 2017 event. NRG Stadium hosted Super Bowl LI on February 5, 2017. The city has hosted several major professional and college sporting events, including the annual Houston Open golf tournament. Houston hosts the annual Houston College Classic baseball tournament every February, and the Texas Kickoff and Bowl in September and December, respectively. The Grand Prix of Houston, an annual auto race on the IndyCar Series circuit, was held on a 1.7-mile temporary street circuit in NRG Park. The October 2013 event was held using a tweaked version of the 2006–2007 course. The event had a five-year race contract with IndyCar through 2017. In motorcycling, the Astrodome hosted an AMA Supercross Championship round from 1974 to 2003, and NRG Stadium has hosted one since 2003. Houston is also one of the first cities in the world to have a major eSports team represent it, in the form of the Houston Outlaws. The Outlaws play in the Overwatch League and are one of two Texan teams, the other being the Dallas Fuel. Houston is also one of eight cities to have an XFL team, the Houston Roughnecks. The city of Houston has a strong mayoral form of municipal government. Houston is a home rule city, and all municipal elections in Texas are nonpartisan. The city's elected officials are the mayor, the city controller, and 16 members of the Houston City Council. 
The current mayor of Houston is Sylvester Turner, a Democrat elected on a nonpartisan ballot. Houston's mayor serves as the city's chief administrator, executive officer, and official representative, and is responsible for the general management of the city and for seeing that all laws and ordinances are enforced. The original city council line-up of 14 members (nine district-based and five at-large positions) was based on a U.S. Justice Department mandate which took effect in 1979. At-large council members represent the entire city. Under the city charter, once the population within the city limits exceeded 2.1 million residents, two additional districts were to be added. The city of Houston's official 2010 census count was 600 shy of the required number; however, as the city was expected to grow beyond 2.1 million shortly thereafter, the two additional districts were added for, and the positions filled during, the August 2011 elections. The city controller is elected independently of the mayor and council. The controller's duties are to certify available funds prior to committing such funds and to process disbursements. The city's fiscal year begins on July 1 and ends on June 30. Chris Brown is the city controller, serving his first term. As the result of a 2015 referendum in Houston, a mayor is elected for a four-year term and can be elected to as many as two consecutive terms. The term limits were spearheaded in 1991 by conservative political activist Clymer Wright. During 1991–2015, the city controller and city council members were subject to a two-year, three-term limitation; the 2015 referendum amended term limits to two four-year terms. Some councilmembers who served two terms and won a final term will have served eight years in office, whereas a freshman councilmember who won a position in 2013 can serve up to two additional terms under the previous term limit law; a select few will have at least 10 years of incumbency once their term expires. 
Houston is considered to be a politically divided city whose balance of power often sways between Republicans and Democrats. Much of the city's wealthier areas vote Republican while the city's working class and minority areas vote Democratic. According to the 2005 Houston Area Survey, 68 percent of non-Hispanic whites in Harris County are declared or favor Republicans while 89 percent of non-Hispanic blacks in the area are declared or favor Democrats. About 62 percent of Hispanics (of any race) in the area are declared or favor Democrats. The city has often been known to be the most politically diverse city in Texas, a state known for being generally conservative. As a result, the city is often a contested area in statewide elections. In 2009, Houston became the first U.S. city with a population over 1 million citizens to elect a gay mayor, by electing Annise Parker. Texas has banned sanctuary cities, but Houston Mayor Sylvester Turner said that Houston will not assist ICE agents with immigration raids. Houston had 303 homicides in 2015 and 302 homicides in 2016. Officials predicted there would be 323 homicides in 2016. Instead, there was no increase in Houston's homicide rate between 2015 and 2016. Houston's murder rate ranked 46th of U.S. cities with a population over 250,000 in 2005 (per capita rate of 16.3 murders per 100,000 population). In 2010, the city's murder rate (per capita rate of 11.8 murders per 100,000 population) was ranked sixth among U.S. cities with a population of over 750,000 (behind New York City, Chicago, Detroit, Dallas, and Philadelphia) according to the Federal Bureau of Investigation (FBI). Murders fell by 37 percent from January to June 2011, compared with the same period in 2010. Houston's total crime rate including violent and nonviolent crimes decreased by 11 percent. 
The FBI's Uniform Crime Report (UCR) indicates a downward trend of violent crime in Houston over the ten- and twenty-year periods ending in 2016, which is consistent with national trends. This trend toward lower rates of violent crime in Houston includes the murder rate, though it had seen a four-year uptick that lasted through 2015. Houston's violent crime rate was 8.6 percent higher in 2016 than in the previous year. However, from 2006 to 2016, violent crime was still down 12 percent in Houston. Houston is a significant hub for trafficking of cocaine, cannabis, heroin, MDMA, and methamphetamine due to its size and proximity to major illegal drug exporting nations. Houston is one of the country's largest hubs for human trafficking. In the early 1970s, Houston, Pasadena and several coastal towns were the site of the Houston mass murders, which at the time were the deadliest case of serial killing in American history. In 1853, the first execution in Houston took place in public at Founder's Cemetery in the Fourth Ward; initially the cemetery was the execution site, but post-1868 executions took place in the jail facilities. Nineteen school districts exist within the city of Houston. The Houston Independent School District (HISD) is the seventh-largest school district in the United States and the largest in Texas. HISD has 112 campuses that serve as magnet or vanguard schools—specializing in such disciplines as health professions, visual and performing arts, and the sciences. There are also many charter schools that are run separately from school districts. In addition, some public school districts also have their own charter schools. The Houston area encompasses more than 300 private schools, many of which are accredited by agencies recognized by the Texas Private School Accreditation Commission. The Houston area independent schools offer education from a variety of different religious as well as secular viewpoints. 
The Houston area Catholic schools are operated by the Archdiocese of Galveston-Houston. Four distinct state universities are located in Houston. The University of Houston (UH) is a nationally recognized research university and the flagship institution of the University of Houston System. One of the largest universities in Texas, the University of Houston has nearly 44,000 students on its campus in the Third Ward. The University of Houston–Clear Lake and the University of Houston–Downtown are stand-alone universities within the University of Houston System; they are not branch campuses of the University of Houston. Slightly west of the University of Houston is Texas Southern University (TSU), one of the largest and most comprehensive historically black universities in the United States, with approximately 10,000 students. Founded in 1927, Texas Southern University was the first state university in Houston. Several private institutions of higher learning are located within the city. Rice University, the most selective university in Texas and one of the most selective in the United States, is a private, secular institution with a high level of research activity. Founded in 1912, Rice's historic, heavily wooded campus, located adjacent to Hermann Park and the Texas Medical Center, hosts approximately 4,000 undergraduate and 3,000 post-graduate students. To the north in Neartown, the University of St. Thomas, founded in 1947, is Houston's only Catholic university. St. Thomas provides a liberal arts curriculum for roughly 3,000 students at its historic 19-block campus along Montrose Boulevard. In southwest Houston, Houston Baptist University (HBU), founded in 1960, offers bachelor's and graduate degrees at its Sharpstown campus. The school is affiliated with the Baptist General Convention of Texas and has a student population of approximately 3,000. Three community college districts have campuses in and around Houston. 
The Houston Community College System (HCC) serves most of Houston proper; its main campus and headquarters are located in Midtown. Suburban northern and western parts of the metropolitan area are served by various campuses of the Lone Star College System, while the southeastern portion of Houston is served by San Jacinto College, and a northeastern portion is served by Lee College. The Houston Community College and Lone Star College systems are among the 10 largest institutions of higher learning in the United States. Houston also hosts a number of graduate schools in law and healthcare. The University of Houston Law Center and Thurgood Marshall School of Law at Texas Southern University are public, ABA-accredited law schools, while the South Texas College of Law, located in Downtown, serves as a private, independent alternative. The Texas Medical Center is home to a high density of health professions schools, including two medical schools: McGovern Medical School, part of The University of Texas Health Science Center at Houston, and Baylor College of Medicine, a highly selective private institution. Prairie View A&M University's nursing school is located in the Texas Medical Center. Additionally, both Texas Southern University and the University of Houston have pharmacy schools, and the University of Houston hosts a college of optometry. The primary network-affiliated television stations are KPRC-TV (NBC), KHOU (CBS), KTRK-TV (ABC), KRIV (Fox), KIAH (The CW), KTXH (MyNetworkTV), KXLN-DT (Univision) and KTMD-TV (Telemundo). KTRK-TV, KRIV, KTXH, KXLN-DT and KTMD-TV operate as owned-and-operated stations of their networks. The Houston–The Woodlands–Sugar Land metropolitan area is served by one public television station and one public radio station. KUHT ("Houston Public Media") is a PBS member station and is the first public television station in the United States. Houston Public Radio is listener-funded and comprises one NPR member station, KUHF ("News 88.7"). 
The University of Houston System owns and holds broadcasting licenses to KUHT and KUHF. The stations broadcast from the Melcher Center for Public Broadcasting, located on the campus of the University of Houston. Houston is served by the "Houston Chronicle", its only major daily newspaper with wide distribution. The Hearst Corporation, which owns and operates the "Houston Chronicle", bought the assets of the "Houston Post"—its long-time rival and main competition—when the "Houston Post" ceased operations in 1995. The "Houston Post" was owned by the family of former Lieutenant Governor Bill Hobby of Houston. The only other major publication to serve the city is the "Houston Press"—which was a free alternative weekly newspaper before the destruction caused by Hurricane Harvey resulted in the publication switching to an online-only format on November 2, 2017. Houston is the seat of the Texas Medical Center, which describes itself as containing the world's largest concentration of research and healthcare institutions. All 49 member institutions of the Texas Medical Center are non-profit organizations. They provide patient and preventive care, research, education, and local, national, and international community well-being. Employing more than 73,600 people, institutions at the medical center include 13 hospitals and two specialty institutions, two medical schools, four nursing schools, and schools of dentistry, public health, pharmacy, and virtually all other health-related careers. It is where one of the first—and still the largest—air emergency services, Life Flight, was created, and where an inter-institutional transplant program was developed. Around 2007, more heart surgeries were performed at the Texas Medical Center than anywhere else in the world. 
Some of the academic and research health institutions at the center include MD Anderson Cancer Center, Baylor College of Medicine, UT Health Science Center, Memorial Hermann Hospital, Houston Methodist Hospital, Texas Children's Hospital, and the University of Houston College of Pharmacy. In the 2000s, the Baylor College of Medicine was annually considered within the top ten medical schools in the nation; likewise, the MD Anderson Cancer Center had been consistently ranked as one of the top two U.S. hospitals specializing in cancer care by "U.S. News & World Report" since 1990. The Menninger Clinic, a psychiatric treatment center, is affiliated with Baylor College of Medicine and the Houston Methodist Hospital System. With hospital locations nationwide and headquarters in Houston, the Triumph Healthcare hospital system was the third-largest long-term acute care provider nationally in 2005. Houston is considered an automobile-dependent city, with an estimated 77.2% of commuters driving alone to work in 2016, up from 71.7% in 1990 and 75.6% in 2009. In 2016, another 11.4% of Houstonians carpooled to work, while 3.6% used public transit, 2.1% walked, and 0.5% bicycled. According to the 2013 American Community Survey, the average work commute in Houston (city) takes 26.3 minutes. A 1999 Murdoch University study found that Houston had both the lengthiest commute and lowest urban density of 13 large American cities surveyed, and a 2017 Arcadis study ranked Houston 22nd out of 23 American cities in transportation sustainability. Harris County is one of the largest consumers of gasoline in the United States, ranking second (behind Los Angeles County) in 2013. Despite the region's high rate of automobile usage, attitudes towards transportation among Houstonians indicate a growing preference for walkability. 
A 2017 study by the Rice University Kinder Institute for Urban Research found that 56% of Harris County residents have a preference for dense housing in a mixed-use, walkable setting as opposed to single-family housing in a low-density area. A plurality of survey respondents also indicated that traffic congestion was the most significant problem facing the metropolitan area. In addition, many households in the city of Houston have no car. In 2015, 8.3 percent of Houston households lacked a car, which was virtually unchanged in 2016 (8.1 percent). The national average was 8.7 percent in 2016. Houston averaged 1.59 cars per household in 2016, compared to a national average of 1.8. The eight-county Greater Houston metropolitan area contains an extensive roadway network, of which roughly 10% is limited-access highway. The Houston region's extensive freeway system handles over 40% of the regional daily vehicle miles traveled (VMT). Arterial roads handle an additional 40% of daily VMT, while toll roads handle nearly 10%. Greater Houston possesses a hub-and-spoke limited-access highway system, in which a number of freeways radiate outward from Downtown, with ring roads providing connections between these radial highways at intermediate distances from the city center. The city is crossed by three Interstate highways, Interstate 10, Interstate 45, and Interstate 69 (commonly known as U.S. Route 59), as well as a number of other United States routes and state highways. Major freeways in Greater Houston are often referred to by either the cardinal direction or the geographic location they travel towards. Highways that follow the cardinal convention include U.S. Route 290 ("Northwest Freeway"), Interstate 45 north of Downtown ("North Freeway"), Interstate 10 east of Downtown ("East Freeway"), Texas State Highway 288 ("South Freeway"), and Interstate 69 south of Downtown ("Southwest Freeway"). 
Highways that follow the location convention include Interstate 10 west of Downtown ("Katy Freeway"), Interstate 69 north of Downtown ("Eastex Freeway"), Interstate 45 south of Downtown ("Gulf Freeway"), and Texas State Highway 225 ("La Porte" or "Pasadena Freeway"). Three loop freeways provide north–south and east–west connectivity between Greater Houston's radial highways. The innermost loop is Interstate 610, commonly known as the "Inner Loop", which encircles Downtown, the Texas Medical Center, Greenway Plaza, the cities of West University Place and Southside Place, and many core neighborhoods. Beltway 8, often referred to as "the Beltway", forms the middle loop. A third loop, State Highway 99 (the "Grand Parkway"), is currently under construction, with six of eleven segments completed. Completed segments D through G provide a continuous limited-access tollway connection between Sugar Land, Katy, Cypress, Spring, and Porter. A system of toll roads, operated by the Harris County Toll Road Authority (HCTRA) and the Fort Bend County Toll Road Authority (FBCTRA), provides additional options for regional commuters. The Sam Houston Tollway, which encompasses the mainlanes of Beltway 8 (as opposed to the frontage roads, which are untolled), is the longest tollway in the system, covering the entirety of the Beltway with the exception of a free section between Interstate 45 and Interstate 69 near George Bush Intercontinental Airport. The region is serviced by four spoke tollways: a set of managed lanes on the Katy Freeway; the Hardy Toll Road, which parallels Interstate 45 north of Downtown up to Spring; the Westpark Tollway, which services Houston's western suburbs out to Fulshear; and the Fort Bend Parkway, which connects to Sienna Plantation. The Westpark Tollway and Fort Bend Parkway are operated jointly with the Fort Bend County Toll Road Authority. 
Greater Houston's freeway system is monitored by Houston TranStar, a partnership of four government agencies which is responsible for providing transportation and emergency management services to the region. Greater Houston's arterial road network is established at the municipal level, with the City of Houston exercising planning control over both its incorporated area and its extraterritorial jurisdiction (ETJ). Houston therefore exercises transportation planning authority over an area spanning five counties, many times larger than its corporate area. The "Major Thoroughfare and Freeway Plan", updated annually, establishes the city's street hierarchy, identifies roadways in need of widening, and proposes new roadways in unserved areas. Arterial roads are organized into four categories, in decreasing order of intensity: "major thoroughfares", "transit corridor streets", "collector streets", and "local streets". Roadway classification affects anticipated traffic volumes, roadway design, and right-of-way breadth. Ultimately, the system is designed to ferry traffic from neighborhood streets to major thoroughfares, which connect into the limited-access highway system. Notable arterial roads in the region include Westheimer Road, Memorial Drive, Texas State Highway 6, Farm to Market Road 1960, Bellaire Boulevard, and Telephone Road. The Metropolitan Transit Authority of Harris County (METRO) provides public transportation in the form of buses, light rail, high-occupancy vehicle (HOV) lanes, and paratransit to fifteen municipalities throughout the Greater Houston area and parts of unincorporated Harris County. METRO's service area contains a population of 3.6 million. METRO's local bus network serves approximately 275,000 riders daily with a fleet of over 1,200 buses. The agency's 75 local routes contain nearly 8,900 stops and saw nearly 67 million boardings during the 2016 fiscal year. 
A park and ride system provides commuter bus service from 34 transit centers scattered throughout the region's suburban areas; these express buses operate independently of the local bus network and utilize the region's extensive system of HOV lanes. Downtown and the Texas Medical Center have the highest rates of transit use in the region, largely due to the park and ride system, with nearly 60% of commuters in each district utilizing public transit to get to work. METRO began light rail service in 2004 with the opening of the north–south Red Line connecting Downtown, Midtown, the Museum District, the Texas Medical Center, and NRG Park. In the early 2010s, two additional lines—the Green Line, servicing the East End, and the Purple Line, servicing the Third Ward—opened, and the Red Line was extended northward to Northline. Two light rail lines outlined in a five-line system approved by voters in a 2003 referendum have yet to be constructed. The Uptown Line, which would run along Post Oak Boulevard in Uptown, is currently under construction as a bus rapid transit line—the city's first—while the University Line has been postponed indefinitely. The light rail system saw approximately 16.8 million boardings in fiscal year 2016. Amtrak's thrice-weekly Los Angeles–New Orleans train serves Houston at a station northwest of downtown. There were 14,891 boardings and alightings in FY2008, 20,327 in FY2012, and 20,205 in FY2018. A daily Amtrak Thruway Motorcoach connects Houston with Amtrak's Chicago–San Antonio train at Longview. Houston City Council approved the Houston Bike Plan in March 2017, at that time entering the plan into the Houston Code of Ordinances. Houston has the largest number of bike commuters in Texas, with over 160 miles of dedicated bikeways. The city is currently in the process of expanding its on- and off-street bikeway network. 
In 2015, Downtown Houston added a cycle track on Lamar Street, running from Sam Houston Park to Discovery Green. In August 2017, Houston City Council approved spending for construction of 13 additional miles of bike trails. Houston's bicycle sharing system started service with nineteen stations in May 2012. Houston Bcycle (also known as B-Cycle), a local non-profit, runs the subscription program, supplying bicycles and docking stations, while partnering with other companies to maintain the system. The network expanded to 29 stations and 225 bicycles in 2014, registering over 43,000 checkouts of equipment during the first half of the same year. In 2017, Bcycle logged over 142,000 checkouts while expanding to 56 docking stations. The Houston Airport System, a branch of the municipal government, oversees the operation of three major public airports in the city. Two of these airports, George Bush Intercontinental Airport and William P. Hobby Airport, offer commercial aviation service to a variety of domestic and international destinations and served 55 million passengers in 2016. The third, Ellington Airport, is home to the Ellington Field Joint Reserve Base. The Federal Aviation Administration and the state of Texas selected the Houston Airport System as "Airport of the Year" in 2005, largely due to the implementation of a $3.1 billion airport improvement program for both major airports in Houston. George Bush Intercontinental Airport (IAH), located north of Downtown Houston between Interstates 45 and 69, is the eighth-busiest commercial airport in the United States (by total passengers and aircraft movements) and forty-third busiest globally. The five-terminal, five-runway airport served 40 million passengers in 2016, including 10 million international travelers. In 2006, the United States Department of Transportation named IAH the fastest-growing of the top ten airports in the United States. 
The Houston Air Route Traffic Control Center is located at Bush Intercontinental. Houston was the headquarters of Continental Airlines until its 2010 merger with United Airlines, which is headquartered in Chicago; regulatory approval for the merger was granted in October of that year. Bush Intercontinental is currently United Airlines' second-largest hub, behind O'Hare International Airport. United Airlines' share of the Houston Airport System's commercial aviation market was nearly 60% in 2017, with 16 million enplaned passengers. In early 2007, Bush Intercontinental Airport was named a model "port of entry" for international travelers by U.S. Customs and Border Protection. William P. Hobby Airport (HOU), known as Houston International Airport until 1967, operates primarily short- to medium-haul domestic and international flights to 60 destinations. The four-runway facility is located southeast of Downtown Houston. In 2015, Southwest Airlines launched service from a new international terminal at Hobby to several destinations in Mexico, Central America, and the Caribbean. These were the first international flights flown from Hobby since the opening of Bush Intercontinental in 1969. Houston's aviation history is showcased in the 1940 Air Terminal Museum, located in the old terminal building on the west side of the airport. In 2009, Hobby Airport was recognized with two awards for being one of the top five performing airports globally and for customer service by Airports Council International. Houston's third municipal airport is Ellington Airport, used by the military, government (including NASA), and general aviation sectors. The Mayor's Office of Trade and International Affairs (MOTIA) is the city's liaison to Houston's sister cities and to the national governing organization, Sister Cities International. 
Through their official city-to-city relationships, these volunteer associations promote people-to-people diplomacy and encourage citizens to develop mutual trust and understanding through commercial, cultural, educational, and humanitarian exchanges.
https://en.wikipedia.org/wiki?curid=13774
Hard disk drive A hard disk drive (HDD), hard disk, hard drive, or fixed disk is an electro-mechanical data storage device that uses magnetic storage to store and retrieve digital data using one or more rigid rapidly rotating platters coated with magnetic material. The platters are paired with magnetic heads, usually arranged on a moving actuator arm, which read and write data to the platter surfaces. Data is accessed in a random-access manner, meaning that individual blocks of data can be stored and retrieved in any order. HDDs are a type of non-volatile storage, retaining stored data even when powered off. Introduced by IBM in 1956, HDDs were the dominant secondary storage device for general-purpose computers beginning in the early 1960s. HDDs maintained this position into the modern era of servers and personal computers, though personal computing devices produced in large volume, like cell phones and tablets, rely on flash products. More than 224 companies have produced HDDs historically, though after extensive industry consolidation most units are manufactured by Seagate, Toshiba, and Western Digital. HDDs dominate the volume of storage produced (exabytes per year) for servers. Though production is growing slowly, sales revenues and unit shipments are declining because solid-state drives (SSDs) have higher data-transfer rates, higher areal storage density, better reliability, and much lower latency and access times. The revenues for SSDs, most of which use NAND, slightly exceed those for HDDs. Flash storage products had more than twice the revenue of hard disk drives as of 2017. Though SSDs have four to nine times higher cost per bit, they are replacing HDDs in applications where speed, power consumption, small size, high capacity and durability are important. Cost per bit for SSDs is falling, and the price premium over HDDs has narrowed. The primary characteristics of an HDD are its capacity and performance. 
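The random-access behavior described above can be sketched with ordinary file I/O: a program can jump directly to any block-aligned offset without reading the data in between. The 512-byte block size below mirrors a traditional sector size; the labels and read order are arbitrary choices for this illustration, not HDD specifics.

```python
import tempfile

# Sketch of block-addressed random access, the access pattern HDDs expose.
BLOCK_SIZE = 512  # bytes; a traditional HDD sector size

with tempfile.TemporaryFile() as f:
    # Write four labeled "blocks" sequentially.
    for label in (b"A", b"B", b"C", b"D"):
        f.write(label * BLOCK_SIZE)

    # Read them back in an arbitrary order: seek() positions directly at any
    # block without touching the data in between.
    first_bytes = []
    for block_no in (2, 0, 3, 1):
        f.seek(block_no * BLOCK_SIZE)      # position by block number
        first_bytes.append(f.read(1))

print(first_bytes)  # blocks retrieved out of write order: C, A, D, B
```

Sequential media such as magnetic tape, by contrast, would have to spool past every intervening block to satisfy the same requests.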
Capacity is specified in unit prefixes corresponding to powers of 1,000: a 1-terabyte (TB) drive has a capacity of 1,000 gigabytes (GB; where 1 gigabyte = 1 billion bytes). Typically, some of an HDD's capacity is unavailable to the user because it is used by the file system and the computer operating system, and possibly inbuilt redundancy for error correction and recovery. There is also confusion regarding storage capacity, since capacities are stated in decimal gigabytes (powers of 10) by HDD manufacturers, whereas some operating systems report capacities in binary gibibytes (powers of 2), which results in a smaller number than advertised. Performance is specified as the time required to move the heads to a track or cylinder (average access time), plus the time it takes for the desired sector to move under the head (average latency, which is a function of the physical rotational speed in revolutions per minute), and finally the speed at which the data is transmitted (data rate). The two most common form factors for modern HDDs are 3.5-inch, for desktop computers, and 2.5-inch, primarily for laptops. HDDs are connected to systems by standard interface cables such as PATA (Parallel ATA), SATA (Serial ATA), USB or SAS (Serial Attached SCSI) cables. The first production IBM hard disk drive, the 350 disk storage, shipped in 1957 as a component of the IBM 305 RAMAC system. It was approximately the size of two medium-sized refrigerators and stored five million six-bit characters (3.75 megabytes) on a stack of 50 disks. The 350 had a single arm with two read/write heads, one up and one down, that moved both horizontally across a pair of platters and vertically from one set of platters to a second set. In 1962, the IBM 350 was superseded by the IBM 1301 disk storage unit, which consisted of 50 platters, each about ⅛-inch thick and 24 inches in diameter. 
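The access-time arithmetic described above (seek time plus average rotational latency, where the average latency is half a revolution at the spindle speed) can be sketched in Python. The function names and the 9 ms seek figure below are illustrative assumptions, not figures from the article:

```python
def average_latency_ms(rpm: float) -> float:
    """Average rotational latency: the desired sector is, on average,
    half a revolution away, so latency is half the rotation period."""
    return 0.5 * 60_000.0 / rpm  # 60,000 ms per minute / revolutions per minute

def average_access_time_ms(avg_seek_ms: float, rpm: float) -> float:
    """Head-positioning (seek) time plus rotational latency; the data
    transfer itself is ignored in this simplified model."""
    return avg_seek_ms + average_latency_ms(rpm)

print(round(average_latency_ms(7200), 2))            # 4.17 ms at 7,200 rpm
print(round(average_latency_ms(15000), 2))           # 2.0 ms at 15,000 rpm
print(round(average_access_time_ms(9.0, 7200), 2))   # 13.17 ms with an assumed 9 ms seek
```

This is why faster-spinning server drives reduce average latency: latency depends only on rotational speed, not on capacity or interface.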
While the IBM 350, 355, 7300 and 1405 used only two read/write heads per arm, the 1301 used a single array of heads ("comb"), one per platter, moving horizontally as a single unit. Cylinder-mode read/write operations were supported, and the heads flew about 250 micro-inches (about 6 µm) above the platter surface. Motion of the head array depended upon a binary adder system of hydraulic actuators which assured repeatable positioning. The 1301 cabinet was about the size of three home refrigerators placed side by side, storing the equivalent of about 21 million eight-bit bytes. Access time was about a quarter of a second. Also in 1962, IBM introduced the model 1311 disk drive, which was about the size of a washing machine and stored two million characters on a removable disk pack. Users could buy additional packs and interchange them as needed, much like reels of magnetic tape. Later models of removable pack drives, from IBM and others, became the norm in most computer installations and reached capacities of 300 megabytes by the early 1980s. Non-removable HDDs were called "fixed disk" drives. Some high-performance HDDs were manufactured with one head per track, e.g., the Burroughs B-475 in 1964 and the IBM 2305 in 1970, so that no time was lost physically moving the heads to a track and the only latency was the time for the desired block of data to rotate into position under the head. Known as fixed-head or head-per-track disk drives, they were very expensive and are no longer in production. In 1973, IBM introduced a new type of HDD code-named "Winchester". Its primary distinguishing feature was that the disk heads were not withdrawn completely from the stack of disk platters when the drive was powered down. Instead, the heads were allowed to "land" on a special area of the disk surface upon spin-down, "taking off" again when the disk was later powered on. 
This greatly reduced the cost of the head actuator mechanism, but precluded removing just the disks from the drive as was done with the disk packs of the day. Instead, the first models of "Winchester technology" drives featured a removable disk module, which included both the disk pack and the head assembly, leaving the actuator motor in the drive upon removal. Later "Winchester" drives abandoned the removable media concept and returned to non-removable platters. Like the first removable pack drive, the first "Winchester" drives used platters 14 inches in diameter. A few years later, designers were exploring the possibility that physically smaller platters might offer advantages. Drives with non-removable eight-inch platters appeared, and then drives that used a 5¼-inch form factor (a mounting width equivalent to that used by contemporary floppy disk drives). The latter were primarily intended for the then-fledgling personal computer (PC) market. As the 1980s began, HDDs were a rare and very expensive additional feature in PCs, but by the late 1980s their cost had been reduced to the point where they were standard on all but the cheapest computers. Most HDDs in the early 1980s were sold to PC end users as an external, add-on subsystem. The subsystem was not sold under the drive manufacturer's name but under the subsystem manufacturer's name, such as Corvus Systems and Tallgrass Technologies, or under the PC system manufacturer's name, such as the Apple ProFile. The IBM PC/XT in 1983 included an internal 10 MB HDD, and soon thereafter internal HDDs proliferated on personal computers. External HDDs remained popular for much longer on the Apple Macintosh. Many Macintosh computers made between 1986 and 1998 featured a SCSI port on the back, making external expansion simple. 
Older compact Macintosh computers did not have user-accessible hard drive bays (indeed, the Macintosh 128K, Macintosh 512K, and Macintosh Plus did not feature a hard drive bay at all), so on those models external SCSI disks were the only reasonable option for expanding upon any internal storage. HDD improvements have been driven by increasing areal density, listed in the table above. Applications expanded through the 2000s, from the mainframe computers of the late 1950s to most mass storage applications including computers and consumer applications such as storage of entertainment content. In the 2000s and 2010s, NAND began supplanting HDDs in applications requiring portability or high performance. NAND performance is improving faster than HDD performance, and applications for HDDs are eroding. In 2018, the largest hard drive had a capacity of 15 TB, while the largest capacity SSD had a capacity of 100 TB. As of 2018, HDDs were forecast to reach 100 TB capacities around 2025, but as of 2019 the expected pace of improvement was pared back to 50 TB by 2026. Smaller form factors, 1.8-inch and below, were discontinued around 2010. The cost of solid-state storage (NAND), represented by Moore's law, is improving faster than that of HDDs. NAND has a higher price elasticity of demand than HDDs, and this drives market growth. During the late 2000s and 2010s, the product life cycle of HDDs entered a mature phase, and slowing sales may indicate the onset of the declining phase. The 2011 Thailand floods damaged the manufacturing plants and adversely affected hard disk drive costs between 2011 and 2013. A modern HDD records data by magnetizing a thin film of ferromagnetic material on both sides of a disk. Sequential changes in the direction of magnetization represent binary data bits. The data is read from the disk by detecting the transitions in magnetization. 
User data is encoded using an encoding scheme, such as run-length limited encoding, which determines how the data is represented by the magnetic transitions. A typical HDD design consists of a spindle that holds flat circular disks, called platters, which hold the recorded data. The platters are made from a non-magnetic material, usually aluminum alloy, glass, or ceramic. They are coated with a shallow layer of magnetic material, typically 10–20 nm in depth, with an outer layer of carbon for protection. For reference, a standard piece of copy paper is about 0.07–0.18 mm (70,000–180,000 nm) thick. The platters in contemporary HDDs are spun at speeds varying from 4,200 rpm in energy-efficient portable devices to 15,000 rpm for high-performance servers. The first HDDs spun at 1,200 rpm and, for many years, 3,600 rpm was the norm. The platters in most consumer-grade HDDs now spin at 5,400 or 7,200 rpm. Information is written to and read from a platter as it rotates past devices called read-and-write heads that are positioned to operate very close to the magnetic surface, with their flying height often in the range of tens of nanometers. The read-and-write head is used to detect and modify the magnetization of the material passing immediately under it. In modern drives, there is one head for each magnetic platter surface on the spindle, mounted on a common arm. An actuator arm (or access arm) moves the heads on an arc (roughly radially) across the platters as they spin, allowing each head to access almost the entire surface of the platter. The arm is moved using a voice coil actuator or, in some older designs, a stepper motor. Early hard disk drives wrote data at a constant number of bits per second, resulting in all tracks having the same amount of data, but modern drives (since the 1990s) use zone bit recording, increasing the write speed from the inner to the outer zone and thereby storing more data per track in the outer zones. 
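Zone bit recording, as described above, can be illustrated with a toy model. The zone layout below (sectors per track and track counts) is invented for the example and does not correspond to any real drive:

```python
# A toy sketch of zone bit recording: outer zones pack more sectors per track,
# so both capacity and sequential data rate rise toward the outer edge
# at a fixed rotational speed.

BYTES_PER_SECTOR = 512

# (sectors_per_track, tracks_in_zone) from inner to outer -- made-up numbers.
zones = [(600, 10_000), (900, 10_000), (1_200, 10_000)]

def zone_capacity_bytes(sectors_per_track: int, tracks: int) -> int:
    """Bytes stored in one zone."""
    return sectors_per_track * tracks * BYTES_PER_SECTOR

def track_rate_mb_s(sectors_per_track: int, rpm: float) -> float:
    """Sequential rate for one track: sectors passing under the head per second."""
    revs_per_s = rpm / 60.0
    return sectors_per_track * BYTES_PER_SECTOR * revs_per_s / 1e6

total = sum(zone_capacity_bytes(s, t) for s, t in zones)
print(total)                        # total bytes across the three zones
print(track_rate_mb_s(600, 7200))   # inner-zone sequential rate, MB/s
print(track_rate_mb_s(1200, 7200))  # outer zone: twice the inner-zone rate
```

The model shows why sequential transfers from the outer tracks are faster: at constant rpm, more sectors pass under the head per revolution.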
In modern drives, the small size of the magnetic regions creates the danger that their magnetic state might be lost because of thermal effects: thermally induced magnetic instability, commonly known as the "superparamagnetic limit". To counter this, the platters are coated with two parallel magnetic layers, separated by a three-atom layer of the non-magnetic element ruthenium, and the two layers are magnetized in opposite orientation, thus reinforcing each other. Another technology used to overcome thermal effects to allow greater recording densities is perpendicular recording, first shipped in 2005, and as of 2007 used in certain HDDs. In 2004, a higher-density recording medium was introduced, consisting of coupled soft and hard magnetic layers. So-called "exchange spring media" magnetic storage technology, also known as "exchange coupled composite media", allows good writability due to the write-assist nature of the soft layer. However, the thermal stability is determined only by the hardest layer and not influenced by the soft layer. A typical HDD has two electric motors: a spindle motor that spins the disks and an actuator (motor) that positions the read/write head assembly across the spinning disks. The disk motor has an external rotor attached to the disks; the stator windings are fixed in place. Opposite the actuator at the end of the head support arm is the read-write head; thin printed-circuit cables connect the read-write heads to amplifier electronics mounted at the pivot of the actuator. The head support arm is very light, but also stiff; in modern drives, acceleration at the head reaches 550 "g". The actuator is a permanent magnet and moving coil motor that swings the heads to the desired position. A metal plate supports a squat neodymium-iron-boron (NIB) high-flux magnet. 
Beneath this plate is the moving coil, often referred to as the "voice coil" by analogy to the coil in loudspeakers, which is attached to the actuator hub, and beneath that is a second NIB magnet, mounted on the bottom plate of the motor (some drives have only one magnet). The voice coil itself is shaped rather like an arrowhead and is made of doubly coated copper magnet wire. The inner layer is insulation, and the outer is thermoplastic, which bonds the coil together after it is wound on a form, making it self-supporting. The portions of the coil along the two sides of the arrowhead (which point to the center of the actuator bearing) then interact with the magnetic field of the fixed magnet. Current flowing radially outward along one side of the arrowhead and radially inward on the other produces the tangential force. If the magnetic field were uniform, each side would generate opposing forces that would cancel each other out. Therefore, the surface of the magnet is half north pole and half south pole, with the radial dividing line in the middle, causing the two sides of the coil to see opposite magnetic fields and produce forces that add instead of canceling. Currents along the top and bottom of the coil produce radial forces that do not rotate the head. The HDD's electronics control the movement of the actuator and the rotation of the disk and perform reads and writes on demand from the disk controller. Feedback of the drive electronics is accomplished by means of special segments of the disk dedicated to servo feedback. These are either complete concentric circles (in the case of dedicated servo technology) or segments interspersed with real data (in the case of embedded servo technology). The servo feedback optimizes the signal-to-noise ratio of the GMR sensors by adjusting the voice coil of the actuated arm. The spinning of the disk also uses a servo motor. 
Modern disk firmware is capable of scheduling reads and writes efficiently on the platter surfaces and remapping sectors of the media which have failed. Modern drives make extensive use of error correction codes (ECCs), particularly Reed–Solomon error correction. These techniques store extra bits, determined by mathematical formulas, for each block of data; the extra bits allow many errors to be corrected invisibly. The extra bits themselves take up space on the HDD, but allow higher recording densities to be employed without causing uncorrectable errors, resulting in much larger storage capacity. For example, a typical 1 TB hard disk with 512-byte sectors provides additional capacity of about 93 GB for the ECC data. In the newest drives, as of 2009, low-density parity-check codes (LDPC) were supplanting Reed–Solomon; LDPC codes enable performance close to the Shannon limit and thus provide the highest storage density available. Typical hard disk drives attempt to "remap" the data in a physical sector that is failing to a spare physical sector provided by the drive's "spare sector pool" (also called the "reserve pool"), while relying on the ECC to recover stored data as long as the number of errors in a bad sector is still low enough. The S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology) feature counts the total number of errors in the entire HDD fixed by ECC (although not on all hard drives, as the related S.M.A.R.T. attributes "Hardware ECC Recovered" and "Soft ECC Correction" are not consistently supported) and the total number of performed sector remappings, as the occurrence of many such errors may predict an HDD failure. The "No-ID Format", developed by IBM in the mid-1990s, contains information about which sectors are bad and where remapped sectors have been located. Only a tiny fraction of the detected errors end up as not correctable. 
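The ECC overhead figure quoted above (about 93 GB of ECC data on a typical 1 TB drive with 512-byte sectors) can be reproduced with simple arithmetic. The 48-byte per-sector ECC size below is an assumed round number chosen to match the quoted total, not a published specification:

```python
BYTES_PER_SECTOR = 512
ECC_BYTES_PER_SECTOR = 48  # assumed value; chosen so the total matches the text

def ecc_overhead_bytes(user_capacity_bytes: int) -> int:
    """Extra bytes consumed by per-sector ECC for a given user capacity."""
    sectors = user_capacity_bytes // BYTES_PER_SECTOR
    return sectors * ECC_BYTES_PER_SECTOR

print(ecc_overhead_bytes(10**12))  # 93,750,000,000 bytes, i.e. roughly 93 GB
```

In other words, a few tens of ECC bytes per 512-byte sector compounds to an overhead of roughly 9% of the user capacity, which the higher recording density more than pays for.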
Manufacturers specify uncorrected bit read error rates for their drives; within a given manufacturer's model, the uncorrected bit error rate is typically the same regardless of the capacity of the drive. The worst type of errors are silent data corruptions, which are errors undetected by the disk firmware or the host operating system; some of these errors may be caused by hard disk drive malfunctions, while others originate elsewhere in the connection between the drive and the host. The rate of areal density advancement was similar to Moore's law (doubling every two years) through 2010: 60% per year during 1988–1996, 100% during 1996–2003 and 30% during 2003–2010. Speaking in 1997, Gordon Moore called the increase "flabbergasting", while observing later that growth cannot continue forever. Price improvement decelerated to −12% per year during 2010–2017, as the growth of areal density slowed. The rate of advancement for areal density slowed to 10% per year during 2010–2016, and there was difficulty in migrating from perpendicular recording to newer technologies. As bit cell size decreases, more data can be put onto a single drive platter. In 2013, a production desktop 3 TB HDD (with four platters) would have had an areal density of about 500 Gbit/in2, which would have amounted to a bit cell comprising about 18 magnetic grains (11 by 1.6 grains). Since the mid-2000s, areal density progress has been challenged by a superparamagnetic trilemma involving grain size, grain magnetic strength and the ability of the head to write. In order to maintain an acceptable signal-to-noise ratio, smaller grains are required; smaller grains may self-reverse (electrothermal instability) unless their magnetic strength is increased, but known write-head materials are unable to generate a magnetic field strong enough to write the medium in the increasingly smaller space taken by grains. 
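The 2013 example above (about 500 Gbit/in2 with roughly 18 grains per bit cell) implies a bit-cell area that is easy to check. This is back-of-envelope unit arithmetic, not data from a datasheet:

```python
NM_PER_INCH = 25.4e6  # 1 inch = 25.4 mm = 25,400,000 nm

def bit_cell_area_nm2(areal_density_bits_per_in2: float) -> float:
    """Average area available to a single bit at a given areal density."""
    return (1.0 / areal_density_bits_per_in2) * NM_PER_INCH ** 2

cell = bit_cell_area_nm2(500e9)  # 500 Gbit/in^2, as in the 2013 example
print(round(cell, 1))            # ~1290.3 nm^2 per bit cell
print(round(cell / 18, 1))       # ~71.7 nm^2 per grain at ~18 grains per bit
```

At roughly 72 nm² per grain (a grain about 8–9 nm across), it becomes clear why further shrinking runs into the thermal-stability and write-field limits described above.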
Magnetic storage technologies are being developed to address this trilemma and compete with flash memory–based solid-state drives (SSDs). In 2013, Seagate introduced shingled magnetic recording (SMR), intended as something of a "stopgap" technology between PMR and Seagate's intended successor, heat-assisted magnetic recording (HAMR). SMR utilises overlapping tracks for increased data density, at the cost of design complexity and lower data access speeds (particularly write speeds and random-access 4K speeds). By contrast, Western Digital focused on developing ways to seal helium-filled drives instead of the usual filtered air. This reduces turbulence and friction, and fits more platters into the same enclosure space, though helium is notoriously difficult to keep from escaping. Other recording technologies are under development, including Seagate's heat-assisted magnetic recording (HAMR). HAMR requires a different architecture with redesigned media and read/write heads, new lasers, and new near-field optical transducers. HAMR is expected to ship commercially in late 2020 or 2021. Technical issues delayed the introduction of HAMR by a decade, from earlier projections of 2009, 2015, 2016, and the first half of 2019. Some drives have adopted dual independent actuator arms to increase read/write speeds and compete with SSDs. HAMR's planned successor, bit-patterned recording (BPR), has been removed from the roadmaps of Western Digital and Seagate. Western Digital's microwave-assisted magnetic recording (MAMR) is expected to be shipped commercially in 2021, with sampling in 2020. Two-dimensional magnetic recording (TDMR) and "current perpendicular to plane" giant magnetoresistance (CPP/GMR) heads have appeared in research papers. A 3D-actuated vacuum drive (3DHD) concept has been proposed. The rate of areal density growth has dropped below the historical Moore's law rate of 40% per year. 
Depending upon assumptions on feasibility and timing of these technologies, Seagate forecasts that areal density will grow 20% per year during 2020–2034. The highest-capacity desktop HDDs had 16 TB in late 2019. The capacity of a hard disk drive, as reported by an operating system to the end user, is smaller than the amount stated by the manufacturer for several reasons: the operating system using some space, use of some space for data redundancy, and space used for file system structures. Also the difference in capacity reported in SI decimal prefixed units vs. binary prefixes can lead to a false impression of missing capacity. Modern hard disk drives appear to their host controller as a contiguous set of logical blocks, and the gross drive capacity is calculated by multiplying the number of blocks by the block size. This information is available from the manufacturer's product specification, and from the drive itself through use of operating system functions that invoke low-level drive commands. The gross capacity of older HDDs is calculated as the product of the number of cylinders per recording zone, the number of bytes per sector (most commonly 512), and the count of zones of the drive. Some modern SATA drives also report cylinder-head-sector (CHS) capacities, but these are not physical parameters because the reported values are constrained by historic operating system interfaces. The C/H/S scheme has been replaced by logical block addressing (LBA), a simple linear addressing scheme that locates blocks by an integer index, which starts at LBA 0 for the first block and increments thereafter. When using the C/H/S method to describe modern large drives, the number of heads is often set to 64, although a typical hard disk drive has between one and four platters. 
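The C/H/S and LBA schemes described above follow simple formulas. Here is a sketch; the 16,383 × 16 × 63 geometry used in the example is the well-known ATA reporting limit, chosen only for illustration:

```python
def chs_capacity_bytes(cylinders: int, heads: int, sectors_per_track: int,
                       bytes_per_sector: int = 512) -> int:
    """Gross capacity under the C/H/S scheme."""
    return cylinders * heads * sectors_per_track * bytes_per_sector

def chs_to_lba(c: int, h: int, s: int, heads: int, sectors_per_track: int) -> int:
    """Map a (cylinder, head, sector) triple to a linear block address.
    Sectors are 1-based in C/H/S; LBA starts at 0."""
    return (c * heads + h) * sectors_per_track + (s - 1)

print(chs_capacity_bytes(16383, 16, 63))  # 8,455,200,768 bytes: the ~8.4 GB ATA limit
print(chs_to_lba(0, 0, 1, 16, 63))        # first sector of the drive -> LBA 0
print(chs_to_lba(0, 1, 1, 16, 63))        # first sector of the next head -> LBA 63
```

The linear mapping makes clear why LBA replaced C/H/S: the geometry numbers became fictions, while a single integer index needs no knowledge of the drive's physical layout.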
In modern HDDs, spare capacity for defect management is not included in the published capacity; however, in many early HDDs a certain number of sectors were reserved as spares, thereby reducing the capacity available to the operating system. For RAID subsystems, data integrity and fault-tolerance requirements also reduce the realized capacity. For example, a RAID 1 array has about half the total capacity as a result of data mirroring, while a RAID 5 array with n drives loses 1/n of its capacity (equal to the capacity of a single drive) due to storing parity information. RAID subsystems are multiple drives that appear to the user as one or more drives, but provide fault tolerance. Most RAID vendors use checksums to improve data integrity at the block level. Some vendors design systems using HDDs with sectors of 520 bytes to contain 512 bytes of user data and eight checksum bytes, or by using separate 512-byte sectors for the checksum data. Some systems may use hidden partitions for system recovery, reducing the capacity available to the end user. Data is stored on a hard drive in a series of logical blocks. Each block is delimited by markers identifying its start and end, error detecting and correcting information, and space between blocks to allow for minor timing variations. These blocks often contained 512 bytes of usable data, but other sizes have been used. As drive density increased, an initiative known as Advanced Format extended the block size to 4096 bytes of usable data, with a resulting significant reduction in the amount of disk space used for block headers, error checking data, and spacing. The process of initializing these logical blocks on the physical disk platters is called "low-level formatting", which is usually performed at the factory and is not normally changed in the field. "High-level formatting" writes data structures used by the operating system to organize data files on the disk. 
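The RAID capacity rules mentioned above (mirroring halves capacity; RAID 5 gives up one drive's worth to parity) can be sketched as follows. This is a simplified model that assumes equal-size drives and ignores hot spares and metadata overhead:

```python
def raid_usable_tb(n_drives: int, drive_tb: float, level: str) -> float:
    """Usable capacity in TB for a few common RAID levels (simplified)."""
    if level == "raid0":
        return n_drives * drive_tb        # striping only, no redundancy
    if level == "raid1":
        return n_drives * drive_tb / 2    # mirrored pairs: half the raw total
    if level == "raid5":
        return (n_drives - 1) * drive_tb  # one drive's worth lost to parity
    raise ValueError(f"unknown RAID level: {level}")

print(raid_usable_tb(2, 4.0, "raid1"))  # 4.0 TB usable from 8 TB raw
print(raid_usable_tb(5, 4.0, "raid5"))  # 16.0 TB usable from 20 TB raw
```

Note how the RAID 5 overhead fraction (1/n) shrinks as more drives are added, whereas mirroring always costs half the raw capacity.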
This includes writing partition and file system structures into selected logical blocks. For example, some of the disk space will be used to hold a directory of disk file names and a list of logical blocks associated with a particular file. Examples of partition mapping schemes include the Master Boot Record (MBR) and the GUID Partition Table (GPT). Examples of data structures stored on disk to retrieve files include the File Allocation Table (FAT) in the DOS file system and inodes in many UNIX file systems, as well as other operating system data structures (also known as metadata). As a consequence, not all the space on an HDD is available for user files, but this system overhead is usually small compared with user data. The total capacity of HDDs is given by manufacturers using SI decimal prefixes such as gigabytes (1 GB = 1,000,000,000 bytes) and terabytes (1 TB = 1,000,000,000,000 bytes). This practice dates back to the early days of computing; by the 1970s, "million", "mega" and "M" were consistently used in the decimal sense for drive capacity. However, capacities of memory are quoted using a binary interpretation of the prefixes, i.e. using powers of 1024 instead of 1000. Software reports hard disk drive or memory capacity in different forms using either decimal or binary prefixes. The Microsoft Windows family of operating systems uses the binary convention when reporting storage capacity, so an HDD offered by its manufacturer as a 1 TB drive is reported by these operating systems as a 931 GB HDD. Mac OS X 10.6 ("Snow Leopard") uses the decimal convention when reporting HDD capacity. The default behavior of the df command-line utility on Linux is to report the HDD capacity as a number of 1024-byte units. The difference between the decimal and binary prefix interpretation caused some consumer confusion and led to class action suits against HDD manufacturers. 
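The 1 TB → 931 GB discrepancy described above is pure unit arithmetic; a minimal sketch:

```python
def advertised_tb_to_binary_gb(tb: float) -> float:
    """Convert a manufacturer's decimal-TB figure (powers of 1,000) to the
    binary 'GB' (really GiB, powers of 1,024) that Windows historically reports."""
    return tb * 10**12 / 2**30

print(round(advertised_tb_to_binary_gb(1)))  # 931
print(round(advertised_tb_to_binary_gb(4)))  # 3725
```

No capacity is missing: both numbers describe the same count of bytes, divided by a different unit.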
The plaintiffs argued that the use of decimal prefixes effectively misled consumers, while the defendants denied any wrongdoing or liability, asserting that their marketing and advertising complied in all respects with the law and that no class member sustained any damages or injuries. HDD price per byte improved at the rate of −40% per year during 1988–1996, −51% per year during 1996–2003 and −34% per year during 2003–2010. The price improvement decelerated to −13% per year during 2011–2014, as areal density increase slowed and the 2011 Thailand floods damaged manufacturing facilities, and has held at −11% per year during 2010–2017. The Federal Reserve Board has published a quality-adjusted price index for large-scale enterprise storage systems including three or more enterprise HDDs and associated controllers, racks and cables. Prices for these large-scale storage systems improved at the rate of −30% per year during 2004–2009 and −22% per year during 2009–2014. IBM's first hard disk drive, the IBM 350, used a stack of fifty 24-inch platters, stored 3.75 MB of data (approximately the size of one modern digital picture), and was of a size comparable to two large refrigerators. In 1962, IBM introduced its model 1311 disk, which used six 14-inch (nominal size) platters in a removable pack and was roughly the size of a washing machine. This became a standard platter size for many years, used also by other manufacturers. The IBM 2314 used platters of the same size in an eleven-high pack and introduced the "drive in a drawer" layout, sometimes called the "pizza oven", although the "drawer" was not the complete drive. Into the 1970s, HDDs were offered in standalone cabinets of varying dimensions containing from one to four HDDs. Beginning in the late 1960s, drives were offered that fit entirely into a chassis that would mount in a 19-inch rack. 
Digital's RK05 and RL01 were early examples using single 14-inch platters in removable packs, the entire drive fitting in a 10.5-inch-high rack space (six rack units). In the mid-to-late 1980s the similarly sized Fujitsu Eagle, which used (coincidentally) 10.5-inch platters, was a popular product. With increasing sales of microcomputers having built-in floppy-disk drives (FDDs), HDDs that would fit the FDD mountings became desirable. Starting with the Shugart Associates SA1000, HDD form factors initially followed those of 8-inch, 5¼-inch, and 3½-inch floppy disk drives. Although referred to by these nominal sizes, the actual widths of those three drive types are respectively 9.5″, 5.75″ and 4″. Because there were no smaller floppy disk drives, smaller HDD form factors developed from product offerings or industry standards. 2½-inch drives are actually 2.75″ wide. Today, 2½-inch and 3½-inch hard disks are the most popular sizes. By 2009, all manufacturers had discontinued the development of new products for the 1.3-inch, 1-inch and 0.85-inch form factors due to falling prices of flash memory, which has no moving parts. While nominal sizes are in inches, actual dimensions are specified in millimeters. The factors that limit the time to access the data on an HDD are mostly related to the mechanical nature of the rotating disks and moving heads, including the seek time and the rotational latency. Delay may also occur if the drive disks are stopped to save energy. Defragmentation is a procedure used to minimize delay in retrieving data by moving related items to physically proximate areas on the disk. Some computer operating systems perform defragmentation automatically. Although automatic defragmentation is intended to reduce access delays, performance will be temporarily reduced while the procedure is in progress. Time to access data can be improved by increasing rotational speed (thus reducing latency) or by reducing the time spent seeking. 
Increasing areal density increases throughput by increasing the data rate and by increasing the amount of data under a set of heads, thereby potentially reducing seek activity for a given amount of data. The time to access data has not kept up with throughput increases, which themselves have not kept up with growth in bit density and storage capacity. A typical 7,200-rpm desktop HDD has a sustained "disk-to-buffer" data transfer rate of up to 1,030 Mbit/s. This rate depends on the track location; the rate is higher for data on the outer tracks (where there are more data sectors per rotation) and lower toward the inner tracks (where there are fewer data sectors per rotation), and is generally somewhat higher for 10,000-rpm drives. A current widely used standard for the "buffer-to-computer" interface is 3.0 Gbit/s SATA, which can send about 300 megabytes/s (10-bit encoding) from the buffer to the computer, and thus is still comfortably ahead of today's disk-to-buffer transfer rates. Data transfer rate (read/write) can be measured by writing a large file to disk using special file generator tools, then reading back the file. Transfer rate can be influenced by file system fragmentation and the layout of the files. HDD data transfer rate depends upon the rotational speed of the platters and the data recording density. Because heat and vibration limit rotational speed, advancing density becomes the main method to improve sequential transfer rates. Higher speeds require a more powerful spindle motor, which creates more heat. While areal density advances by increasing both the number of tracks across the disk and the number of sectors per track, only the latter increases the data transfer rate for a given rpm. Since data transfer rate performance tracks only one of the two components of areal density, its performance improves at a lower rate. 
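The comparison above between the sustained disk-to-buffer rate (up to 1,030 Mbit/s) and the 3.0 Gbit/s SATA buffer-to-computer rate (about 300 MB/s after 10-bit encoding) is a unit conversion; a small sketch using only the figures quoted in the text:

```python
def mbit_s_to_mb_s(mbit_per_s: float) -> float:
    """Convert megabits per second to megabytes per second (8 bits per byte)."""
    return mbit_per_s / 8.0

disk_to_buffer = mbit_s_to_mb_s(1030)  # sustained rate off the platters
sata_3g_mb_s = 300.0                   # 3.0 Gbit/s SATA after 10-bit encoding

print(disk_to_buffer)                  # 128.75 MB/s
print(sata_3g_mb_s > disk_to_buffer)   # True: the interface is not the bottleneck
```

The comfortable headroom is why advancing areal density, not interface speed, governs sustained HDD throughput.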
Other performance considerations include quality-adjusted price, power consumption, audible noise, and both operating and non-operating shock resistance. Current hard drives connect to a computer over one of several bus types, including parallel ATA, Serial ATA, SCSI, Serial Attached SCSI (SAS), and Fibre Channel. Some drives, especially external portable drives, use IEEE 1394 or USB. All of these interfaces are digital; electronics on the drive process the analog signals from the read/write heads. Current drives present a consistent interface to the rest of the computer, independent of the data encoding scheme used internally, and independent of the physical number of disks and heads within the drive. Typically a DSP in the electronics inside the drive takes the raw analog voltages from the read head and uses PRML and Reed–Solomon error correction to decode the data, then sends that data out the standard interface. That DSP also watches the error rate detected by error detection and correction, and performs bad sector remapping, data collection for Self-Monitoring, Analysis, and Reporting Technology, and other internal tasks. Modern interfaces connect the drive to the host interface with a single data/control cable. Each drive also has an additional power cable, usually direct to the power supply unit. Older interfaces had separate cables for data signals and for drive control signals. Due to the extremely close spacing between the heads and the disk surface, HDDs are vulnerable to being damaged by a head crash – a failure of the disk in which the head scrapes across the platter surface, often grinding away the thin magnetic film and causing data loss. Head crashes can be caused by electronic failure, a sudden power failure, physical shock, contamination of the drive's internal enclosure, wear and tear, corrosion, or poorly manufactured platters and heads. 
The HDD's spindle system relies on air density inside the disk enclosure to support the heads at their proper flying height while the disk rotates. HDDs require a certain range of air densities to operate properly. The connection to the external environment and density occurs through a small hole in the enclosure (about 0.5 mm in breadth), usually with a filter on the inside (the "breather filter"). If the air density is too low, then there is not enough lift for the flying head, so the head gets too close to the disk, and there is a risk of head crashes and data loss. Specially manufactured sealed and pressurized disks are needed for reliable high-altitude operation, above about 3,000 m (9,800 ft). Modern disks include temperature sensors and adjust their operation to the operating environment. Breather holes can be seen on all disk drives – they usually have a sticker next to them, warning the user not to cover the holes. The air inside the operating drive is constantly moving too, being swept in motion by friction with the spinning platters. This air passes through an internal recirculation (or "recirc") filter to remove any leftover contaminants from manufacture, any particles or chemicals that may have somehow entered the enclosure, and any particles or outgassing generated internally in normal operation. Very high humidity present for extended periods of time can corrode the heads and platters. For giant magnetoresistive (GMR) heads in particular, a minor head crash from contamination (that does not remove the magnetic surface of the disk) still results in the head temporarily overheating, due to friction with the disk surface, and can render the data unreadable for a short period until the head temperature stabilizes (so-called "thermal asperity", a problem which can partially be dealt with by proper electronic filtering of the read signal). 
When the logic board of a hard disk fails, the drive can often be restored to functioning order and the data recovered by replacing the circuit board with one from an identical hard disk. In the case of read-write head faults, they can be replaced using specialized tools in a dust-free environment. If the disk platters are undamaged, they can be transferred into an identical enclosure and the data can be copied or cloned onto a new drive. In the event of disk-platter failures, disassembly and imaging of the disk platters may be required. For logical damage to file systems, a variety of tools, including fsck on UNIX-like systems and CHKDSK on Windows, can be used for data recovery. Recovery from logical damage can require file carving. A common expectation is that hard disk drives designed and marketed for server use will fail less frequently than consumer-grade drives usually used in desktop computers. However, two independent studies by Carnegie Mellon University and Google found that the "grade" of a drive does not relate to the drive's failure rate. A 2011 summary by Tom's Hardware reviewed research into SSD and magnetic disk failure patterns. To minimize cost and overcome failures of individual HDDs, storage systems providers rely on redundant HDD arrays. HDDs that fail are replaced on an ongoing basis. More than 200 companies have manufactured HDDs over time, but consolidations have concentrated production among just three manufacturers today: Western Digital, Seagate, and Toshiba. Production is mainly in the Pacific Rim. Worldwide revenue for disk storage declined eight percent per year, from a peak of $38 billion in 2012 to $22 billion (estimated) in 2019. Production of HDD storage grew 15% per year during 2011–2017, from 335 to 780 exabytes per year. HDD shipments declined seven percent per year during this time period, from 620 to 406 million units. 
HDD shipments were projected to drop by 18% during 2018–2019, from 375 million to 309 million units. In 2018, Seagate had 40% of unit shipments, Western Digital 37%, and Toshiba 23%. The average sales price for the two largest manufacturers was $60 per unit in 2015. HDDs are being superseded by solid-state drives (SSDs) in markets where SSDs' higher speed (up to 4950 megabytes per second for M.2 (NGFF) NVMe SSDs or 2500 megabytes per second for PCIe expansion card drives), ruggedness, and lower power consumption are more important than price, since the bit cost of SSDs is four to nine times higher than HDDs. As of 2016, HDDs are reported to have a failure rate of 2–9% per year, while SSDs have fewer failures: 1–3% per year. However, SSDs have more un-correctable data errors than HDDs. SSDs offer larger capacities (up to 100 TB) than the largest HDD, as well as higher storage densities (100 TB and 30 TB SSDs are housed in 2.5-inch HDD cases, but with the same height as a 3.5-inch HDD), although their cost remains prohibitive. A laboratory demonstration of a 1.33-Tb 3D NAND chip with 96 layers (NAND commonly used in solid state drives (SSDs)) had 5.5 Tbit/in2 as of 2019, while the maximum areal density for HDDs is 1.5 Tbit/in2. The areal density of flash memory is doubling every two years, similar to Moore's law (40% per year) and faster than the 10–20% per year for HDDs. As of 2018, the maximum capacity was 16 terabytes for an HDD, and 100 terabytes for an SSD. HDDs were used in 70% of the desktop and notebook computers produced in 2016, and SSDs were used in 30%. The usage share of HDDs is declining and could drop below 50% in 2018–2019 according to one forecast, because SSDs are replacing smaller-capacity (less than one-terabyte) HDDs in desktop and notebook computers and MP3 players. The market for silicon-based flash memory (NAND) chips, used in SSDs and other applications, is growing faster than for HDDs. 
Worldwide NAND revenue grew 16% per year from $22 billion to $57 billion during 2011–2017, while production grew 45% per year from 19 exabytes to 175 exabytes. External hard disk drives typically connect via USB; variants using the USB 2.0 interface generally have slower data transfer rates when compared to internally mounted hard drives connected through SATA. Plug-and-play functionality offers broad system compatibility, large storage options, and portable design. Available capacities for external hard disk drives range from 500 GB to 10 TB. External hard disk drives are usually available as assembled integrated products but may also be assembled by combining an external enclosure (with USB or other interface) with a separately purchased drive. They are available in 2.5-inch and 3.5-inch sizes; 2.5-inch variants are typically called "portable external drives", while 3.5-inch variants are referred to as "desktop external drives". "Portable" drives are packaged in smaller and lighter enclosures than the "desktop" drives; additionally, "portable" drives use power provided by the USB connection, while "desktop" drives require external power bricks. Features such as encryption, biometric security or multiple interfaces (for example, FireWire) are available at a higher cost. There are pre-assembled external hard disk drives that, when taken out from their enclosures, cannot be used internally in a laptop or desktop computer due to the USB interface embedded on their printed circuit boards and the lack of SATA (or Parallel ATA) interfaces.
https://en.wikipedia.org/wiki?curid=13777
Hebrew calendar The Hebrew or Jewish calendar (Hebrew: , ) is a lunisolar calendar used today predominantly for Jewish religious observances. It determines the dates for Jewish holidays and the appropriate public reading of Torah portions, "yahrzeits" (dates to commemorate the death of a relative), and daily Psalm readings, among many ceremonial uses. In Israel, it is used for religious purposes, provides a time frame for agriculture and is an official calendar for civil purposes, although the latter usage has been steadily declining in favor of the Gregorian calendar. The present Hebrew calendar is the product of evolution, including a Babylonian influence. Until the Tannaitic period (approximately 10–220 CE), the calendar was set by observation of the new crescent moon, with an additional month normally added every two or three years to correct for the difference between twelve lunar months and the solar year. The year in which it was added was based on observation of natural agriculture-related events in ancient Israel. Through the Amoraic period (200–500 CE) and into the Geonic period, this system was gradually displaced by the mathematical rules used today. The principles and rules were fully codified by Maimonides in the Mishneh Torah in the 12th century. Maimonides' work also replaced counting "years since the destruction of the Temple" with the modern creation-era "anno mundi" count. The Hebrew lunar year is about eleven days shorter than the solar year and uses the 19-year Metonic cycle to bring it into line with the solar year, with the addition of an intercalary month every two or three years, for a total of seven times per 19 years. Even with this intercalation, the average Hebrew calendar year is longer by about 6 minutes and 40 seconds than the current mean tropical year, so that every 217 years the Hebrew calendar will fall a day behind the current mean tropical year; and about every 238 years it will fall a day behind the mean Gregorian calendar year. 
The era used since the Middle Ages is the "anno mundi" epoch (Latin for "in the year of the world"; the Hebrew equivalent means "from the creation of the world"). As with Anno Domini (A.D. or AD), the words or abbreviation for Anno Mundi (A.M. or AM) should properly "precede" the date rather than follow it. AM began at sunset on and will end at sunset on . The Jewish day is of no fixed length. Based on the classic rabbinic interpretation of Genesis ("There was evening and there was morning, one day"), a day in the rabbinic Hebrew calendar runs from sunset (the start of "the evening") to the next sunset. The same definition appears in the Bible in Leviticus 23:32, where the holiday of Yom Kippur is defined as lasting "from evening to evening". Halachically, a day ends and a new one starts when three stars are visible in the sky. The time between true sunset and the time when the three stars are visible (known as 'tzait ha'kochavim') is known as 'bein hashmashot', and there are differences of opinion as to which day it falls into for some uses. This may be relevant, for example, in determining the date of birth of a child born during that gap. There is no clock in the Jewish scheme, so that the local civil clock is used. Though the civil clock, including the one in use in Israel, incorporates local adoptions of various conventions such as time zones, standard times and daylight saving, these have no place in the Jewish scheme. The civil clock is used only as a reference point – in expressions such as: "Shabbat starts at ...". The steady progression of sunset around the world and seasonal changes result in gradual civil time changes from one day to the next based on observable astronomical phenomena (the sunset) and not on man-made laws and conventions. In Judaism, an hour is defined as 1/12 of the time from sunrise to sunset, so an hour can be less than 60 minutes in winter, and more than 60 minutes in summer. This proportional hour is known as a "sha'ah z'manit" (lit. a time-related hour). 
A Jewish hour is divided into 1080 "halakim" (singular: "helek") or parts. A part is 3⅓ seconds or 1/18 minute. The ultimate ancestor of the helek was a small Babylonian time period called a "barleycorn", itself equal to 1/72 of a Babylonian "time degree" (1° of celestial rotation). These measures are not generally used for everyday purposes. Instead of the international date line convention, there are varying opinions as to where the day changes. One opinion uses the antimeridian of Jerusalem (located at 144°47' W, passing through eastern Alaska). Other opinions exist as well. (See International date line in Judaism.) The weekdays start with Sunday (day 1, or "Yom Rishon") and proceed to Saturday (day 7), Shabbat. Since some calculations use division, a remainder of 0 signifies Saturday. While calculations of days, months and years are based on fixed hours equal to 1/24 of a day, the beginning of each "halachic" day is based on the local time of sunset. The end of the Shabbat and other Jewish holidays is based on nightfall ("Tzeth haKochabim") which occurs some amount of time, typically 42 to 72 minutes, after sunset. According to Maimonides, nightfall occurs when three medium-sized stars become visible after sunset. By the 17th century, this had become three second-magnitude stars. The modern definition is when the center of the sun is 7° below the geometric (airless) horizon, somewhat later than civil twilight at 6°. The beginning of the daytime portion of each day is determined both by dawn and sunrise. Most "halachic" times are based on some combination of these four times and vary from day to day throughout the year and also vary significantly depending on location. The daytime hours are often divided into "Sha'oth Zemaniyoth" or "Halachic hours" by taking the time between sunrise and sunset or between dawn and nightfall and dividing it into 12 equal hours. 
The nighttime hours are similarly divided into 12 equal portions, albeit a different amount of time than the "hours" of the daytime. The earliest and latest times for Jewish services, the latest time to eat chametz on the day before Passover and many other rules are based on "Sha'oth Zemaniyoth". For convenience, the modern day using "Sha'oth Zemaniyoth" is often discussed as if sunset were at 6:00 pm, sunrise at 6:00 am and each hour were equal to a fixed hour. For example, "halachic" noon may be after 1:00 pm in some areas during daylight saving time. Within the Mishnah, however, the numbering of the hours starts with the "first" hour after the start of the day. The Hebrew week (, ) is a cycle of seven days, mirroring the seven-day period of the Book of Genesis in which the world is created. Each day of the week runs from sunset to the following sunset and is figured locally. The weekly cycle runs concurrently with, but independently of, the monthly and annual cycles. The names for the days of the week are simply the day number within the week, with Shabbat being the seventh day. In Hebrew, these names may be abbreviated using the numerical value of the Hebrew letters, for example ("Day 1", or Yom Rishon ()): The names of the days of the week are modeled on the seven days mentioned in the creation story (). For example, "... And there was evening and there was morning, a second day" corresponds to "Yom Sheni" meaning "second day". (However, for days 1, 6, and 7 the modern name differs slightly from the version in Genesis.) The seventh day, Shabbat, as its Hebrew name indicates, is a day of rest in Judaism. In Talmudic Hebrew, the word "Shabbat" () can also mean "week", so that in ritual liturgy a phrase like "Yom Reviʻi beShabbat" means "the fourth day in the week". 
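The proportional-hour arithmetic described above is easy to make concrete. A minimal sketch, using hypothetical sunrise and sunset times (a real calculation would use the actual times for a given location and date):

```python
from datetime import datetime, timedelta

def shaah_zmanit(sunrise: datetime, sunset: datetime) -> timedelta:
    """One proportional hour: 1/12 of the daytime period."""
    return (sunset - sunrise) / 12

def halachic_noon(sunrise: datetime, sunset: datetime) -> datetime:
    """Midday: the end of the sixth proportional hour."""
    return sunrise + 6 * shaah_zmanit(sunrise, sunset)

# Hypothetical summer day under daylight saving time:
sunrise = datetime(2021, 6, 21, 5, 30)
sunset = datetime(2021, 6, 21, 20, 30)
print(shaah_zmanit(sunrise, sunset))   # 1:15:00 – a 75-minute "hour"
print(halachic_noon(sunrise, sunset))  # 2021-06-21 13:00:00 – after 1:00 pm
```

With these sample times the "hour" is 75 minutes long and halachic noon falls after 1:00 pm civil time, as the text notes can happen during daylight saving time.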
The period from 1 Adar (or Adar II, in leap years) to 29 Marcheshvan contains all of the festivals specified in the Bible – Pesach (15 Nisan), Shavuot (6 Sivan), Rosh Hashanah (1 Tishrei), Yom Kippur (10 Tishrei), Sukkot (15 Tishrei), and Shemini Atzeret (22 Tishrei). This period is fixed, during which no adjustments are made. There are additional rules in the Hebrew calendar to prevent certain holidays from falling on certain days of the week. (See Rosh Hashanah postponement rules, below.) These rules are implemented by adding an extra day to Marcheshvan (making it 30 days long) or by removing one day from Kislev (making it 29 days long). Accordingly, a common Hebrew calendar year can have a length of 353, 354 or 355 days, while a leap Hebrew calendar year can have a length of 383, 384 or 385 days. The Hebrew calendar is a lunisolar calendar, meaning that months are based on lunar months, but years are based on solar years. The calendar year features twelve lunar months of twenty-nine or thirty days, with an intercalary lunar month added periodically to synchronize the twelve lunar cycles with the longer solar year. (These extra months are added seven times every nineteen years. See Leap months, below.) The beginning of each Jewish lunar month is based on the appearance of the new moon. Although originally the new lunar crescent had to be observed and certified by witnesses, the moment of the true new moon is now approximated arithmetically as the molad, which is the mean new moon to a precision of one part. The mean period of the lunar month (precisely, the synodic month) is very close to 29.5 days. Accordingly, the basic Hebrew calendar year is one of twelve lunar months alternating between 29 and 30 days: In leap years (such as 5779) an additional month, Adar I (30 days) is added after Shevat, while the regular Adar is referred to as "Adar II." 
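The possible year lengths follow directly from the month scheme just described; a small sketch of the arithmetic:

```python
# Regular (kesidran) common year: twelve months alternating 30 and 29 days,
# starting from Tishrei (30), giving 354 days in total.
regular_common = [30, 29, 30, 29, 30, 29, 30, 29, 30, 29, 30, 29]
assert sum(regular_common) == 354

def possible_lengths(leap: bool) -> list:
    """Year lengths: the postponement rules may shorten Kislev by one day
    (deficient year) or lengthen Marcheshvan by one day (complete year)."""
    base = 354 + (30 if leap else 0)   # Adar I adds 30 days in a leap year
    return [base - 1, base, base + 1]  # deficient, regular, complete

print(possible_lengths(False))  # [353, 354, 355]
print(possible_lengths(True))   # [383, 384, 385]
```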
The insertion of the leap month mentioned above is based on the requirement that Passover—the festival celebrating the Exodus from Egypt, which took place in the spring—always occurs in the [northern hemisphere's] spring season. Since the adoption of a fixed calendar, intercalations in the Hebrew calendar have been assigned to fixed points in a 19-year cycle. Prior to this, the intercalation was determined empirically. Maimonides, discussing the calendrical rules in his Mishneh Torah (1178), notes: "By how much does the solar year exceed the lunar year? By approximately 11 days. Therefore, whenever this excess accumulates to about 30 days, or a little more or less, one month is added and the particular year is made to consist of 13 months, and this is the so-called embolismic (intercalated) year. For the year could not consist of twelve months plus so-and-so many days, since it is said: throughout the months of the year (), which implies that we should count the year by months and not by days." The Bible does not directly mention the addition of "embolismic" or intercalary months. However, without the insertion of embolismic months, Jewish festivals would gradually shift outside of the seasons required by the Torah. This has been ruled as implying a requirement for the insertion of embolismic months to reconcile the lunar cycles to the seasons, which are integral to solar yearly cycles. In a regular ("kesidran") year, Marcheshvan has 29 days and Kislev has 30 days. However, because of the Rosh Hashanah postponement rules (see below) Kislev may lose a day to have 29 days, and the year is called a short ("chaser") year, or Marcheshvan may acquire an additional day to have 30 days, and the year is called a full ("maleh") year. The calendar rules have been designed to ensure that Rosh Hashanah does not fall on a Sunday, Wednesday or Friday. 
This is to ensure that Yom Kippur does not directly precede or follow Shabbat, which would create practical difficulties, and that Hoshana Rabbah is not on a Shabbat, in which case certain ceremonies would be lost for a year. The 12 lunar months of the Hebrew calendar are the normal months from new moon to new moon: the year normally contains twelve months averaging 29.52 days each. The discrepancy compared to the mean synodic month of 29.53 days is due to Adar I in a leap year always having thirty days. This means that the calendar year normally contains 354 days, roughly 11 days shorter than the solar year. Traditionally, for the Babylonian and Hebrew lunisolar calendars, the years 3, 6, 8, 11, 14, 17, and 19 are the long (13-month) years of the Metonic cycle. This cycle forms the basis of the Christian ecclesiastical calendar and the Hebrew calendar and is used for the computation of the date of Easter each year. During leap years Adar I (or Adar Aleph – "first Adar") is added before the regular Adar. Adar I is actually considered to be the extra month, and has 30 days. Adar II (or Adar Bet – "second Adar") is the "real" Adar, and has the usual 29 days. For this reason, holidays such as Purim are observed in Adar II, not Adar I. The Hebrew calendar year conventionally begins on Rosh Hashanah. However, other dates serve as the beginning of the year for different religious purposes. There are three qualities that distinguish one year from another: whether it is a leap year or a common year; on which of four permissible days of the week the year begins; and whether it is a deficient, regular, or complete year. Mathematically, there are 24 (2×4×3) possible combinations, but only 14 of them are valid. Each of these patterns is called a "keviyah" (Hebrew קביעה for "a setting" or "an established thing"), and is encoded as a series of two or three Hebrew letters. See Four gates. 
In Hebrew there are two common ways of writing the year number: with the thousands, called ("major era"), and without the thousands, called ("minor era"). Thus, the current year is written with the thousands included when using the "major era" and with the thousands omitted when using the "minor era". In 1178 CE, Maimonides wrote in the Mishneh Torah that he had chosen the epoch from which calculations of all dates should be as "the third day of Nisan in this present year ... which is the year 4938 of the creation of the world" (22 March 1178). He included all the rules for the calculated calendar and their scriptural basis, including the modern epochal year in his work, beginning the formal usage of the "anno mundi" era. From the eleventh century, "anno mundi" dating became dominant throughout most of the world's Jewish communities. Today, the rules detailed in Maimonides' calendrical code are those generally used by Jewish communities throughout the world. Since the codification by Maimonides in 1178, the Jewish calendar has used the Anno Mundi epoch (Latin for "in the year of the world," abbreviated "AM" or "A.M.", Hebrew ), sometimes referred to as the "Hebrew era", to distinguish it from other systems based on some computation of creation, such as the Byzantine calendar. There is also reference in the Talmud to years since the creation based on the calculation in the "Seder Olam Rabbah" of Rabbi Jose ben Halafta in about 160 CE. By his calculation, based on the Masoretic Text, Adam was created in 3760 BCE, later confirmed by the Muslim chronologist al-Biruni as 3448 years before the Seleucid era. An example is the c. 8th century Baraita of Samuel. According to rabbinic reckoning, the beginning of "year 1" is "not" Creation, but about one year before Creation, with the new moon of its first month (Tishrei) to be called "molad tohu" (the mean new moon of chaos or nothing). 
The Jewish calendar's epoch (reference date), 1 Tishrei AM 1, is equivalent to Monday, 7 October 3761 BCE in the proleptic Julian calendar, the equivalent tabular date (same daylight period) and is about one year "before" the traditional Jewish date of Creation on 25 Elul AM 1, based upon the "Seder Olam Rabbah". Thus, adding 3760 before Rosh Hashanah or 3761 after to a Julian year number starting from 1 CE will yield the Hebrew year. For earlier years there may be a discrepancy [see: Missing years (Jewish calendar)]. The "Seder Olam Rabbah" also recognized the importance of the Jubilee and Sabbatical cycles as a long-term calendrical system, and attempted at various places to fit the Sabbatical and Jubilee years into its chronological scheme. Occasionally, "Anno Mundi" is styled as "Anno Hebraico (AH)", though this is subject to confusion with notation for the Islamic Hijri year. The Jewish calendar has several distinct new years, used for different purposes. The use of multiple starting dates for a year is comparable to different starting dates for civil "calendar years", "tax or fiscal years", "academic years", and so on. The "Mishnah" (c. 200 CE) identifies four new-year dates: The 1st of Nisan is the new year for kings and festivals; the 1st of Elul is the new year for the cattle tithe... the 1st of Tishri is the new year for years, of the years of release and jubilee years, for the planting and for vegetables; and the 1st of Shevat is the new year for trees—so the school of Shammai; and the school of Hillel say: On the 15th thereof. Two of these dates are especially prominent: For the dates of the Jewish New Year see Jewish and Israeli holidays 2000–2050 or calculate using the section "Conversion between Jewish and civil calendars". The Jewish calendar is based on the Metonic cycle of 19 years, of which 12 are common (non-leap) years of 12 months and 7 are leap years of 13 months. 
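The year-number conversion described above is simple arithmetic. A minimal sketch, valid only for Julian/Gregorian years from 1 CE onward and ignoring the missing-years discrepancy noted above:

```python
def hebrew_year(ce_year: int, after_rosh_hashanah: bool) -> int:
    """Hebrew (anno mundi) year for a given CE year: add 3760 for dates
    before Rosh Hashanah and 3761 for dates on or after it."""
    return ce_year + (3761 if after_rosh_hashanah else 3760)

print(hebrew_year(2018, False))  # 5778 (before Rosh Hashanah)
print(hebrew_year(2018, True))   # 5779 (on or after Rosh Hashanah)
```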
To determine whether a Jewish year is a leap year, one must find its position in the 19-year Metonic cycle. This position is calculated by dividing the Jewish year number by 19 and finding the remainder. (Since there is no year 0, a remainder of 0 indicates that the year is year 19 of the cycle.) For example, the Jewish year 5779 divided by 19 leaves a remainder of 3, indicating that it is year 3 of the Metonic cycle. Years 3, 6, 8, 11, 14, 17, and 19 of the Metonic cycle are leap years. To assist in remembering this sequence, some people use the mnemonic Hebrew word GUCHADZaT, where the Hebrew letters "gimel-vav-het aleph-dalet-zayin-tet" are used as Hebrew numerals equivalent to 3, 6, 8, 1, 4, 7, 9. The "keviyah" records whether the year is leap or common: פ for "peshuta" (פשוטה), meaning simple and indicating a common year, and מ indicating a leap year (me'uberet, מעוברת). Another memory aid notes that intervals of the major scale follow the same pattern as do Jewish leap years, with "do" corresponding to year 19 (or 0): a whole step in the scale corresponds to two common years between consecutive leap years, and a half step to one common year between two leap years. This connection with the major scale is more plain in the context of 19 equal temperament: counting the tonic as 0, the notes of the major scale in 19 equal temperament are numbers 0 (or 19), 3, 6, 8, 11, 14, 17, the same numbers as the leap years in the Hebrew calendar. A simple rule for determining whether a year is a leap year has been given above. However, there is another rule which not only tells whether the year is leap but also gives the fraction of a month by which the calendar is behind the seasons, useful for agricultural purposes. To determine whether year "n" of the calendar is a leap year, find the remainder on dividing [(7 × "n") + 1] by 19. If the remainder is 6 or less it is a leap year; if it is 7 or more it is not. 
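Both leap-year tests can be written out and checked against each other; a minimal sketch:

```python
def metonic_position(year: int) -> int:
    """Position (1..19) of a Hebrew year in the 19-year Metonic cycle;
    a remainder of 0 indicates year 19 of the cycle."""
    return year % 19 or 19

LEAP_POSITIONS = {3, 6, 8, 11, 14, 17, 19}

def is_leap(year: int) -> bool:
    return metonic_position(year) in LEAP_POSITIONS

def is_leap_alt(year: int) -> bool:
    """The alternative rule: ((7 * n) + 1) mod 19 is 6 or less."""
    return (7 * year + 1) % 19 <= 6

# The two rules agree for every year:
assert all(is_leap(y) == is_leap_alt(y) for y in range(1, 10000))
print(metonic_position(5779), is_leap(5779))  # 3 True – a leap year
```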
For example, for the year 5779, (7 × 5779) + 1 = 40,454, which leaves a remainder of 3 on division by 19; since the remainder is 6 or less, 5779 is a leap year. This works because, as there are seven leap years in nineteen years, the difference between the solar and lunar years increases by 7/19 month per year. When the difference goes above 18/19 month, this signifies a leap year, and the difference is reduced by one month. To calculate the day on which Rosh Hashanah of a given year will fall, it is necessary first to calculate the expected molad (moment of lunar conjunction or new moon) of Tishrei in that year, and then to apply a set of rules to determine whether the first day of the year must be postponed. The molad can be calculated by multiplying the number of months that will have elapsed since some (preceding) molad whose weekday is known by the mean length of a (synodic) lunar month, which is 29 days, 12 hours, and 793 parts (there are 1080 "parts" in an hour, so that one part is equal to 3⅓ seconds). The very first molad, the molad tohu, fell on Sunday evening at 11:11 pm in the local time of Jerusalem, or -3761/10/6 (Proleptic Julian calendar) 20:50:23.1 UTC, or in Jewish terms Day 2, 5 hours, and 204 parts. In calculating the number of months that will have passed since the known molad that one uses as the starting point, one must remember to include any leap months that fall within the elapsed interval, according to the cycle of leap years. A 19-year cycle of 235 synodic months has 991 weeks 2 days 16 hours 595 parts, a common year of 12 synodic months has 50 weeks 4 days 8 hours 876 parts, while a leap year of 13 synodic months has 54 weeks 5 days 21 hours 589 parts. The two months whose numbers of days may be adjusted, Marcheshvan and Kislev, are the eighth and ninth months of the Hebrew year, whereas Tishrei is the seventh month (in the traditional counting of the months, even though it is the first month of a new calendar year). 
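The molad arithmetic works cleanly in whole halakim; a sketch that reproduces the year and cycle lengths quoted above:

```python
# 1080 parts (halakim) per hour; a mean synodic month is 29d 12h 793p.
PARTS_PER_HOUR = 1080
PARTS_PER_DAY = 24 * PARTS_PER_HOUR
MONTH = 29 * PARTS_PER_DAY + 12 * PARTS_PER_HOUR + 793

def days_hours_parts(parts: int) -> tuple:
    """Split a count of parts into (days, hours, parts)."""
    days, rem = divmod(parts, PARTS_PER_DAY)
    hours, part = divmod(rem, PARTS_PER_HOUR)
    return days, hours, part

print(days_hours_parts(12 * MONTH))   # (354, 8, 876)  = 50 weeks 4d 8h 876p
print(days_hours_parts(13 * MONTH))   # (383, 21, 589) = 54 weeks 5d 21h 589p
print(days_hours_parts(235 * MONTH))  # (6939, 16, 595) = 991 weeks 2d 16h 595p
```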
Any adjustments needed to postpone Rosh Hashanah must be made to the adjustable months in the year that precedes the year of which the Rosh Hashanah will be the first day. Just four potential conditions are considered to determine whether the date of Rosh Hashanah must be postponed. These are called the Rosh Hashanah postponement rules, or "deḥiyyot": The first of these rules ("deḥiyyat molad zaken") is referred to in the Talmud. Nowadays, molad zaken is used as a device to prevent the molad falling on the second day of the month. The second rule, ("deḥiyyat lo ADU"), is applied for religious reasons. Another two rules are applied much less frequently and serve to prevent impermissible year lengths. Their names are Hebrew acronyms that refer to the ways they are calculated: At the innovation of the sages, the calendar was arranged to ensure that Yom Kippur would not fall on a Friday or Sunday, and Hoshana Rabbah would not fall on Shabbat. These rules have been instituted because Shabbat restrictions also apply to Yom Kippur, so that if Yom Kippur were to fall on Friday, it would not be possible to make necessary preparations for Shabbat (such as candle lighting). Similarly, if Yom Kippur fell on a Sunday, it would not be possible to make preparations for Yom Kippur because the preceding day is Shabbat. Additionally, the laws of Shabbat override those of Hoshana Rabbah, so that if Hoshana Rabbah were to fall on Shabbat certain rituals that are a part of the Hoshana Rabbah service (such as carrying willows, which is a form of work) could not be performed. To prevent Yom Kippur (10 Tishrei) from falling on a Friday or Sunday, Rosh Hashanah (1 Tishrei) cannot fall on Wednesday or Friday. Likewise, to prevent Hoshana Rabbah (21 Tishrei) from falling on a Saturday, Rosh Hashanah cannot fall on a Sunday. This leaves only four days on which Rosh Hashanah can fall: Monday, Tuesday, Thursday, and Saturday, which are referred to as the "four gates". 
Each day is associated with a number (its order in the week, beginning with Sunday as day 1). Numbers in Hebrew have been traditionally denominated by Hebrew letters. Thus the "keviyah" uses the letters ה ,ג ,ב and ז (representing 2, 3, 5, and 7, for Monday, Tuesday, Thursday, and Saturday) to denote the starting day of the year. The postponement of the year is compensated for by adding a day to the second month or removing one from the third month. A Jewish common year can only have 353, 354, or 355 days. A leap year is always 30 days longer, and so can have 383, 384, or 385 days. Whether a year is deficient, regular, or complete is determined by the time between two adjacent Rosh Hashanah observances and by whether it is a leap year. While the "keviyah" is sufficient to describe a year, a variant specifies the day of the week for the first day of Pesach (Passover) in lieu of the year length. A Metonic cycle equates to 235 lunar months in each 19-year cycle. This gives an average of 6,939 days, 16 hours, and 595 parts for each cycle. But due to the Rosh Hashanah postponement rules (preceding section) a cycle of 19 Jewish years can be either 6,939, 6,940, 6,941, or 6,942 days in duration. Since none of these values is evenly divisible by seven, the Jewish calendar repeats exactly only following 36,288 Metonic cycles, or 689,472 Jewish years. There is a near-repetition every 247 years, except for an excess of 50 minutes 16 seconds (905 parts). The annual calendar of a numbered Hebrew year, displayed as 12 or 13 months partitioned into weeks, can be determined by consulting the table of Four gates, whose inputs are the year's position in the 19-year cycle and its molad Tishrei. The resulting type ("keviyah") of the desired year in the body of the table is a triple consisting of two numbers and a letter (written left-to-right in English). 
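The 247-year near-repetition can be checked with the same parts arithmetic; a minimal sketch:

```python
# Lengths in halakim (1080 parts per hour):
PARTS_PER_DAY = 24 * 1080
MONTH = 29 * PARTS_PER_DAY + 12 * 1080 + 793   # mean synodic month
CYCLE = 235 * MONTH                            # one 19-year Metonic cycle
WEEK = 7 * PARTS_PER_DAY

# After 13 cycles (247 years) the molad almost lands on the same weekday
# and time, missing a whole number of weeks by 905 parts:
shift = WEEK - (13 * CYCLE) % WEEK
print(shift)                  # 905 parts
print(shift // (1080 // 60))  # about 50 minutes
```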
The left number of each triple is the day of the week of , Rosh Hashanah ; the letter indicates whether that year is deficient (D), regular (R), or complete (C), the number of days in Chesvan and Kislev; while the right number of each triple is the day of the week of , the first day of Passover or Pesach , within the same Hebrew year (next Julian/Gregorian year). The "keviyah" in Hebrew letters are written right-to-left, so their days of the week are reversed, the right number for and the left for . The year within the 19-year cycle alone determines whether that year has one or two Adars. This table numbers the days of the week and hours for the limits of molad Tishrei in the Hebrew manner for calendrical calculations, that is, both begin at , thus is noon Saturday. The years of a 19-year cycle are organized into four groups: common years after a leap year but before a common year ; common years between two leap years ; common years after a common year but before a leap year ; and leap years , all between common years. The oldest surviving table of Four gates was written by Saadia Gaon (892–942). It is so named because it identifies the four allowable days of the week on which can occur. Comparing the days of the week of molad Tishrei with those in the "keviyah" shows that during 39% of years is not postponed beyond the day of the week of its molad Tishrei, 47% are postponed one day, and 14% are postponed two days. This table also identifies the seven types of common years and seven types of leap years. Most are represented in any 19-year cycle, except one or two may be in neighboring cycles. The most likely type of year is 5R7 in 18.1% of years, whereas the least likely is 5C1 in 3.3% of years. The day of the week of is later than that of by one, two or three days for common years and three, four or five days for leap years in deficient, regular or complete years, respectively. 
See Jewish and Israeli holidays 2000–2050. From very early times, the Mesopotamian lunisolar calendar was in wide use by the countries of the western Asia region. The structure, which was also used by the Israelites, was based on lunar months with the intercalation of an additional month to bring the cycle closer to the solar cycle, although there is no mention of this additional month anywhere in the Hebrew Bible. From several verses in Genesis, it is implied that the months are thirty days long. There is also an indication that there were twelve months in the annual cycle. Biblical references to the pre-exilic calendar include ten of the twelve months identified by number rather than by name. Prior to the Babylonian exile, the names of only four months are referred to in the Tanakh; all of these are believed to be Canaanite names. The last three of these names are only mentioned in connection with the building of the First Temple; Håkan Ulfgard suggests that the use of what are rarely used Canaanite (or in the case of Ethanim perhaps Northwest Semitic) names indicates that "the author is consciously utilizing an archaizing terminology, thus giving the impression of an ancient story...". During the Babylonian captivity, the Jewish people adopted the Babylonian names for the months. The Babylonian calendar descended directly from the Sumerian calendar. These Babylonian month-names (such as Nisan, Iyyar, Tammuz, Ab, Elul, Tishri and Adar) are shared with the modern Syrian calendar (currently used in the Arabic-speaking countries of the Fertile Crescent) and the modern Assyrian calendar, indicating a common origin, thought to be the Babylonian calendar. The modern Turkish calendar includes the names Şubat (February), Nisan (April), Temmuz (July) and Eylül (September). The former name for October was Tesrin.
According to some Christian and Karaite sources, the tradition in ancient Israel was that 1 Nisan would not start until the barley was ripe, this being the test for the onset of spring. If the barley was not ripe, an intercalary month would be added before Nisan. In the 1st century, Josephus stated that while "Moses...appointed Nisan...as the first month for the festivals...the commencement of the year for everything relating to divine worship, but for selling and buying and other ordinary affairs he preserved the ancient order [i.e., the year beginning with Tishrei]." Edwin Thiele has concluded that the ancient northern Kingdom of Israel counted years using the ecclesiastical new year starting on 1 Aviv (Nisan), while the southern Kingdom of Judah counted years using the civil new year starting on 1 Tishrei. The practice of the Kingdom of Israel was also that of Babylon, as well as other countries of the region. The practice of Judah is continued in modern Judaism. Before the adoption of the current "Anno Mundi" year numbering system, other systems were used. In early times, the years were counted from some significant historic event such as the Exodus. During the period of the monarchy, it was the widespread practice in western Asia to use era year numbers according to the accession year of the monarch of the country involved. This practice was followed by the united kingdom of Israel, the kingdom of Judah, the kingdom of Israel, Persia, and others. In addition, the author of Kings coordinated dates in the two kingdoms by giving the accession year of a monarch in terms of the year of the monarch of the other kingdom, though some commentators note that these dates do not always synchronise. Other era dating systems have been used at other times. For example, Jewish communities in the Babylonian diaspora counted the years from the first deportation from Israel, that of Jehoiachin in 597 BCE. The era year was then called "year of the captivity of Jehoiachin".
During the Hellenistic Maccabean period, Seleucid era counting was used, at least in the Land of Israel (under Greek influence at the time). The Books of the Maccabees used Seleucid era dating exclusively, as did Josephus writing in the Roman period. From the 1st to the 10th centuries, the center of world Judaism was in the Middle East (primarily Iraq and Palestine), and Jews in these regions also used Seleucid era dating, which they called the "Era of Contracts [or Documents]". The Talmud states: "Rav Aha bar Jacob then put this question: How do we know that our Era [of Documents] is connected with the Kingdom of Greece at all? Why not say that it is reckoned from the Exodus from Egypt, omitting the first thousand years and giving the years of the next thousand? In that case, the document is really post-dated! Said Rav Nahman: In the Diaspora the Greek Era alone is used. He [Rav Aha] thought that Rav Nahman wanted to dispose of him anyhow, but when he went and studied it thoroughly he found that it is indeed taught [in a Baraita]: In the Diaspora the Greek Era alone is used." The use of the era of documents (i.e., the Seleucid era) continued until the 16th century in the East, and was employed even in the 19th century among the Jews of Yemen. Occasionally in Talmudic writings, reference was made to other starting points for eras, such as destruction era dating, being the number of years since the 70 CE destruction of the Second Temple. In the 8th and 9th centuries, as the center of Jewish life moved from Babylonia to Europe, counting using the Seleucid era "became meaningless", and thus was replaced by the "anno mundi" system. There is indication that Jews of the Rhineland in the early Middle Ages used the "years after the destruction of the Temple". When the observational form of the calendar was in use, whether or not an embolismic month was announced after the "last month" (Adar) depended on 'aviv [i.e., the ripeness of barley], fruits of trees, and the equinox.
On two of these grounds it should be intercalated, but not on one of them alone. It may be noted that in the Bible the name of the first month, "Aviv", literally means "spring". Thus, if Adar was over and spring had not yet arrived, an additional month was observed. The Tanakh contains several commandments related to the keeping of the calendar and the lunar cycle, and records changes that have taken place to the Hebrew calendar. Numbers 10:10 stresses the importance in Israelite religious observance of the new month (Hebrew: Rosh Chodesh, "beginning of the month"): "... in your new moons, ye shall blow with the trumpets over your burnt-offerings..."; similar commandments appear elsewhere in the Torah. "The beginning of the month" meant the appearance of a new moon, as in Exodus 12:2, "This month is to you". According to the "Mishnah" and Tosefta, in the Maccabean, Herodian, and Mishnaic periods, new months were determined by the sighting of a new crescent, with two eyewitnesses required to testify to the Sanhedrin to having seen the new lunar crescent at sunset. The practice in the time of Gamaliel II (c. 100 CE) was for witnesses to select the appearance of the moon from a collection of drawings that depicted the crescent in a variety of orientations, only a few of which could be valid in any given month. These observations were compared against calculations. At first the beginning of each Jewish month was signaled to the communities of Israel and beyond by fires lit on mountaintops, but after the Samaritans began to light false fires, messengers were sent. The inability of the messengers to reach communities outside Israel before the mid-month festivals (Sukkot and Passover) led outlying communities to celebrate scriptural festivals for two days rather than one, observing the second feast-day of the Jewish diaspora because of uncertainty of whether the previous month ended after 29 or 30 days.
It has been noted that the procedures described in the Mishnah and Tosefta are all plausible procedures for regulating an empirical lunar calendar. Fire-signals, for example, or smoke-signals, are known from the pre-exilic Lachish ostraca. Furthermore, the Mishnah contains laws that reflect the uncertainties of an empirical calendar. Mishnah Sanhedrin, for example, holds that when one witness holds that an event took place on a certain day of the month, and another that the same event took place on the following day, their testimony can be held to agree, since the length of the preceding month was uncertain. Another Mishnah takes it for granted that it cannot be known in advance whether a year's lease is for twelve or thirteen months. Hence it is a reasonable conclusion that the Mishnaic calendar was actually used in the Mishnaic period. The accuracy of the Mishnah's claim that the Mishnaic calendar was also used in the late Second Temple period is less certain. One scholar has noted that there are no laws from Second Temple period sources that indicate any doubts about the length of a month or of a year. This led him to propose that the priests must have had some form of computed calendar or calendrical rules that allowed them to know in advance whether a month would have 30 or 29 days, and whether a year would have 12 or 13 months. Between 70 and 1178 CE, the observation-based calendar was gradually replaced by a mathematically calculated one. The Talmuds indicate at least the beginnings of a transition from a purely empirical to a computed calendar. Samuel of Nehardea (c. 165–254) stated that he could determine the dates of the holidays by calculation rather than observation. According to a statement attributed to Yose (late 3rd century), Purim could not fall on a Sabbath nor a Monday, lest Yom Kippur fall on a Friday or a Sunday. This indicates that, by the time of the redaction of the Jerusalem Talmud (c.
400 CE), there was a fixed number of days in all months from Adar to Elul, also implying that the extra month was already a second Adar added before the regular Adar. Elsewhere, Rabbi Simon is reported to have counseled "those who make the computations" not to set Rosh Hashana or Hoshana Rabbah on Shabbat. This indicates that there was a group who "made computations" and controlled, to some extent, the day of the week on which Rosh Hashana would fall. There is a tradition, first mentioned by Hai Gaon (died 1038 CE), that Hillel b. R. Yehuda "in the year 670 of the Seleucid era" (i.e., 358–359 CE) was responsible for the new calculated calendar with a fixed intercalation cycle. Later writers, such as Nachmanides, explained Hai Gaon's words to mean that the entire computed calendar was due to Hillel b. Yehuda in response to persecution of Jews. Maimonides (12th century) stated that the Mishnaic calendar was used "until the days of Abaye and Rava" (c. 320–350 CE), and that the change came when "the land of Israel was destroyed, and no permanent court was left." Taken together, these two traditions suggest that Hillel b. Yehuda (whom they identify with the mid-4th-century Jewish patriarch Ioulos, attested in a letter of the Emperor Julian, and the Jewish patriarch Ellel, mentioned by Epiphanius) instituted the computed Hebrew calendar because of persecution. H. Graetz linked the introduction of the computed calendar to a sharp repression following a failed Jewish insurrection that occurred during the rule of the Christian emperor Constantius Gallus. A later writer, S. Lieberman, argued instead that the introduction of the fixed calendar was due to measures taken by Christian Roman authorities to prevent the Jewish patriarch from sending calendrical messengers. Both the tradition that Hillel b. Yehuda instituted the complete computed calendar, and the theory that the computed calendar was introduced due to repression or persecution, have been questioned.
Furthermore, two Jewish dates during post-Talmudic times (specifically in 506 and 776) are impossible under the rules of the modern calendar, indicating that its arithmetic rules were developed in Babylonia during the times of the Geonim (7th to 8th centuries). The Babylonian rules required the delay of the first day of Tishrei when the new moon occurred after noon. Except for the epoch year number (the fixed reference point at the beginning of year 1, which at that time was one year later than the epoch of the modern calendar), the calendar rules reached their current form by the beginning of the 9th century, as described by the Persian Muslim astronomer al-Khwarizmi in 823. Al-Khwarizmi's study of the Jewish calendar describes the 19-year intercalation cycle, the rules for determining on what day of the week the first day of the month Tishrī shall fall, the interval between the Jewish era (creation of Adam) and the Seleucid era, and the rules for determining the mean longitude of the sun and the moon using the Jewish calendar. Not all the rules were in place by 835. In 921, Aaron ben Meïr proposed changes to the calendar. Though the proposals were rejected, they indicate that all of the rules of the modern calendar (except for the epoch) were in place before that date. In 1000, the Muslim chronologist al-Biruni described all of the modern rules of the Hebrew calendar, except that he specified three different epochs used by various Jewish communities, being one, two, or three years later than the modern epoch. While imprisoned in Auschwitz, Jews made every effort to observe Jewish tradition in the camps, despite the monumental dangers in doing so. Keeping the Hebrew calendar, a tradition of great importance to Jewish practice and ritual, was particularly dangerous, since no means of telling time, such as watches and calendars, were permitted in the camps.
The keeping of a Hebrew calendar was a rarity amongst prisoners, and there are only two known surviving calendars that were made in Auschwitz, both of which were made by women. Until then, the making of a Hebrew calendar had generally been assumed in Jewish society to be a man's task. Early Zionist pioneers were impressed by the fact that the calendar preserved by Jews over many centuries in far-flung diasporas, as a matter of religious ritual, was geared to the climate of their original country: the Jewish New Year marks the transition from the dry season to the rainy one, and major Jewish holidays such as Sukkot, Passover, and Shavuot correspond to major points of the country's agricultural year such as planting and harvest. Accordingly, in the early 20th century the Hebrew calendar was re-interpreted as an agricultural rather than religious calendar. After the creation of the State of Israel, the Hebrew calendar became one of the official calendars of Israel, along with the Gregorian calendar. Holidays and commemorations not derived from previous Jewish tradition were to be fixed according to the Hebrew calendar date. For example, Israeli Independence Day falls on 5 Iyar, Jerusalem Reunification Day on 28 Iyar, Yom HaAliyah on 10 Nisan, and Holocaust Commemoration Day on 27 Nisan. Nevertheless, since the 1950s, usage of the Hebrew calendar has steadily declined in favor of the Gregorian calendar. At present, Israelis—except for the religiously observant—conduct their private and public life according to the Gregorian calendar, although the Hebrew calendar is still widely acknowledged, appearing in public venues such as banks (where it is legal for use on cheques and other documents, though only rarely do people make use of this option) and on the mastheads of newspapers. The Jewish New Year (Rosh Hashanah) is a two-day public holiday in Israel.
However, since the 1980s an increasing number of secular Israelis celebrate the Gregorian New Year (usually known as "Silvester Night"—"ליל סילבסטר") on the night between 31 December and 1 January. Prominent rabbis have on several occasions sharply denounced this practice, but with no noticeable effect on the secularist celebrants. Wall calendars commonly used in Israel are hybrids. Most are organised according to Gregorian rather than Jewish months, but begin in September, when the Jewish New Year usually falls, and provide the Jewish date in small characters. Outside of Rabbinic Judaism, evidence shows a diversity of practice. Karaites use the lunar month and the solar year, but the Karaite calendar differs from the current Rabbinic calendar in a number of ways. The Karaite calendar is identical to the Rabbinic calendar used before the Sanhedrin changed the Rabbinic calendar from the lunar, observation-based calendar to the current, mathematically based calendar used in Rabbinic Judaism today. In the lunar Karaite calendar, the beginning of each month, the Rosh Chodesh, can be calculated, but is confirmed by the observation in Israel of the first sightings of the new moon. This may result in an occasional variation of a maximum of one day, depending on whether the new moon can be observed. The day is usually "picked up" in the next month. The addition of the leap month (Adar II) is determined by observing in Israel the ripening of barley at a specific stage (defined by Karaite tradition), called aviv, rather than using the calculated and fixed calendar of rabbinic Judaism. Occasionally this results in Karaites being one month ahead of other Jews using the calculated rabbinic calendar. The "lost" month would be "picked up" in the next cycle when Karaites would observe a leap month while other Jews would not.
Furthermore, the seasonal drift of the rabbinic calendar is avoided, resulting in the years affected by the drift starting one month earlier in the Karaite calendar. Also, the four rules of postponement of the rabbinic calendar are not applied, since they are not mentioned in the Tanakh. This can affect the dates observed for all the Jewish holidays in a particular year by one or two days. In the Middle Ages many Karaite Jews outside Israel followed the calculated rabbinic calendar, because it was not possible to retrieve accurate aviv barley data from the land of Israel. However, since the establishment of the State of Israel, and especially since the Six-Day War, the Karaite Jews that have made "aliyah" can now again use the observational calendar. The Samaritan community's calendar also relies on lunar months and solar years. Calculation of the Samaritan calendar has historically been a secret reserved to the priestly family alone, and was based on observations of the new crescent moon. More recently, a 20th-century Samaritan High Priest transferred the calculation to a computer algorithm. The current High Priest confirms the results twice a year, and then distributes calendars to the community. The epoch of the Samaritan calendar is the year of the entry of the Children of Israel into the Land of Israel with Joshua. The month of Passover is the first month in the Samaritan calendar, but the year number increments in the sixth month. As in the Rabbinic calendar, there are seven leap years within each 19-year cycle. However, the Rabbinic and Samaritan calendars' cycles are not synchronized, so Samaritan festivals—notionally the same as the Rabbinic festivals of Torah origin—are frequently one month off from the date according to the Rabbinic calendar. Additionally, as in the Karaite calendar, the Samaritan calendar does not apply the four rules of postponement, since they are not mentioned in the Tanakh.
This can affect the dates observed for all the Jewish holidays in a particular year by one or two days. Many of the Dead Sea (Qumran) Scrolls have references to a unique calendar, used by the people there, who are often assumed to be Essenes. The year of this calendar used the ideal Mesopotamian calendar of twelve 30-day months, to which were added 4 days at the equinoxes and solstices (cardinal points), making a total of 364 days. There was some ambiguity as to whether the cardinal days were at the beginning of the months or at the end, but the clearest calendar attestations give a year of four seasons, each having three months of 30, 30, and 31 days with the cardinal day the extra day at the end, for a total of 91 days, or exactly 13 weeks. Each season started on the 4th day of the week (Wednesday), every year. (Ben-Dov, "Head of All Years", pp. 16–17) With only 364 days, it is clear that the calendar would after a few years be very noticeably different from the actual seasons, but there is nothing to indicate what was done about this problem. Various suggestions have been made by scholars. One is that nothing was done and the calendar was allowed to change with respect to the seasons. Another suggestion is that changes were made irregularly, only when the seasonal anomaly was too great to be ignored any longer. (Ben-Dov, "Head of All Years", pp. 19–20) The writings often discuss the moon, but the calendar was not based on the movement of the moon any more than indications of the phases of the moon on a modern western calendar indicate that that is a lunar calendar. Recent analysis of one of the last scrolls remaining to be deciphered has revealed it relates to this calendar and that the sect used the word "tekufah" to identify each of the four special days marking the transitions between the seasons. Calendrical evidence for the postexilic Persian period is found in papyri from the Jewish colony at Elephantine, in Egypt. 
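The arithmetic behind the 364-day schematic year described above is easy to verify (season lengths of 30, 30, and 31 days, as stated):

```python
season = [30, 30, 31]          # three months per season, cardinal day at the end
year = 4 * sum(season)
print(year)                    # 364
print(sum(season) % 7)         # 0: each season is exactly 13 weeks
print(year % 7)                # 0: 52 whole weeks, so every date falls on a fixed
                               # weekday, and each season always opens on a Wednesday
```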
These documents show that the Jewish community of Elephantine used the Egyptian and Babylonian calendars. The Sardica paschal table shows that the Jewish community of some eastern city, possibly Antioch, used a calendrical scheme that kept Nisan 14 within the limits of the Julian month of March. Some of the dates in the document are clearly corrupt, but they can be emended to make the sixteen years in the table consistent with a regular intercalation scheme. Peter, the bishop of Alexandria (early 4th century CE), mentions that the Jews of his city "hold their Passover according to the course of the moon in the month of Phamenoth, or according to the intercalary month every third year in the month of Pharmuthi", suggesting a fairly consistent intercalation scheme that kept Nisan 14 approximately between Phamenoth 10 (March 6 in the 4th century CE) and Pharmuthi 10 (April 5). Jewish funerary inscriptions from Zoar, south of the Dead Sea, dated from the 3rd to the 5th century, indicate that when years were intercalated, the intercalary month was at least sometimes a repeated month of Adar. The inscriptions, however, reveal no clear pattern of regular intercalations, nor do they indicate any consistent rule for determining the start of the lunar month. In 1178, Maimonides included all the rules for the calculated calendar and their scriptural basis, including the modern epochal year in his work, "Mishneh Torah". Today, the rules detailed in Maimonides' code are those generally used by Jewish communities throughout the world. A "new moon" (astronomically called a lunar conjunction and, in Hebrew, a molad) is the moment at which the sun and moon are aligned horizontally with respect to a north-south line (technically, they have the same ecliptical longitude). The period between two new moons is a synodic month. 
The actual length of a synodic month varies from about 29 days 6 hours and 30 minutes (29.27 days) to about 29 days and 20 hours (29.83 days), a variation range of about 13 hours and 30 minutes. Accordingly, for convenience, a long-term average length, identical to the mean synodic month of ancient times (also called the molad interval), is used. The molad interval is 29 13753/25920 days, or 29 days, 12 hours, and 793 "parts" (1 "part" = 1/18 minute; 3 "parts" = 10 seconds) (i.e., 29.530594 days), and is the same value determined by the Babylonians in their System B about 300 BCE. It was adopted by the Greek astronomer Hipparchus in the 2nd century BCE and by the Alexandrian astronomer Ptolemy in the "Almagest" four centuries later (who cited Hipparchus as his source). Its remarkable accuracy (less than one second from the true value) is thought to have been achieved using records of lunar eclipses from the 8th to 5th centuries BCE. This value is as close to the correct value of 29.530589 days as it is possible for a value to come that is rounded off to whole "parts". The discrepancy makes the molad interval about 0.6 seconds too long. Put another way, if the molad is taken as the time of mean conjunction at some reference meridian, then this reference meridian is drifting slowly eastward. If this drift of the reference meridian is traced back to the mid-4th century, the traditional date of the introduction of the fixed calendar, then it is found to correspond to a longitude midway between the Nile and the end of the Euphrates. The modern molad moments match the mean solar times of the lunar conjunction moments near the meridian of Kandahar, Afghanistan, more than 30° east of Jerusalem. Furthermore, the discrepancy between the molad interval and the mean synodic month is accumulating at an accelerating rate, since the mean synodic month is progressively shortening due to gravitational tidal effects.
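Both numerical claims above (the 29.530594-day value of the molad interval, and that it is the nearest whole-parts approximation to 29.530589 days) can be checked in a few lines:

```python
PARTS_PER_DAY = 25920                          # 24 hours x 1080 "parts"
molad = 29 * PARTS_PER_DAY + 12 * 1080 + 793   # 29d 12h 793p = 765433 parts

print(round(molad / PARTS_PER_DAY, 6))         # 29.530594 days

# The true mean synodic month (29.530589 d) rounds to the very same parts
# count, so no whole-parts value can come closer:
print(round(29.530589 * PARTS_PER_DAY))        # 765433
```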
Measured on a strictly uniform time scale, such as that provided by an atomic clock, the mean synodic month is becoming gradually longer, but since the tides slow Earth's rotation rate even more, the mean synodic month is becoming gradually shorter in terms of mean solar time. The mean year of the current mathematically based Hebrew calendar is 365 days 5 hours 55 minutes and 25+25/57 seconds (365.2468 days) – computed as the molad/monthly interval of 29.530594 days × 235 months in a 19-year metonic cycle ÷ 19 years per cycle. In relation to the Gregorian calendar, the mean Gregorian calendar year is 365 days 5 hours 49 minutes and 12 seconds (365.2425 days), and the drift of the Hebrew calendar in relation to it is about a day every 231 years. Although the molad of Tishrei is the only molad moment that is not ritually announced, it is actually the only one that is relevant to the Hebrew calendar, for it determines the provisional date of Rosh Hashanah, subject to the Rosh Hashanah postponement rules. The other monthly molad moments are announced for mystical reasons. With the moladot on average almost 100 minutes late, this means that the molad of Tishrei lands one day later than it ought to in (100 minutes) ÷ (1440 minutes per day) = 5 of 72 years or nearly 7% of years. Therefore, the seemingly small drift of the moladot is already significant enough to affect the date of Rosh Hashanah, which then cascades to many other dates in the calendar year and sometimes, due to the Rosh Hashanah postponement rules, also interacts with the dates of the prior or next year. The molad drift could be corrected by using a progressively shorter molad interval that corresponds to the actual mean lunar conjunction interval at the original molad reference meridian. 
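The mean-year figures above can be reproduced with exact rational arithmetic; a minimal sketch:

```python
from fractions import Fraction

PARTS_PER_DAY = 25920
MOLAD = Fraction(765433, PARTS_PER_DAY)        # 29d 12h 793p, expressed in days

hebrew_mean_year = MOLAD * 235 / 19            # 235 months per 19-year cycle
gregorian_mean_year = Fraction(3652425, 10000) # 365.2425 days

print(float(hebrew_mean_year))                 # 365.24682... days
drift = 1 / float(hebrew_mean_year - gregorian_mean_year)
print(round(drift))                            # 231: about one day of drift per 231 years
```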
Furthermore, the molad interval determines the calendar mean year, so using a progressively shorter molad interval would help correct the excessive length of the Hebrew calendar mean year, as well as helping it to "hold onto" the northward equinox for the maximum duration. When the 19-year intercalary cycle was finalised in the 4th century, the earliest Passover (in year 16 of the cycle) coincided with the northward equinox, which means that Passover fell near the "first" full moon after the northward equinox, or that the northward equinox landed within one lunation before 16 days after the "molad" of "Nisan". This is still the case in about 80% of years; but, in about 20% of years, Passover is a month late by these criteria (as it was in AM 5765, 5768 and 5776, the 8th, 11th and 19th years of the 19-year cycle = Gregorian 2005, 2008 and 2016 CE). Presently, this occurs after the "premature" insertion of a leap month in years 8, 11, and 19 of each 19-year cycle, which causes the northward equinox to land on exceptionally early Hebrew dates in such years. This problem will get worse over time, and so beginning in AM 5817 (2057 CE), year 3 of each 19-year cycle will also be a month late. If the calendar is not amended, then Passover will start to land on or after the summer solstice around AM 16652 (12892 CE). In theory, the exact year when this will begin to occur depends on uncertainties in the future tidal slowing of the Earth rotation rate, and on the accuracy of predictions of precession and Earth axial tilt. The seriousness of the spring equinox drift is widely discounted on the grounds that Passover will remain in the spring season for many millennia, and the text of the Torah is generally not interpreted as having specified tight calendrical limits. 
The Hebrew calendar also drifts with respect to the autumn equinox, and at least part of the harvest festival of Sukkot is already more than a month after the equinox in years 1, 9, and 12 of each 19-year cycle; beginning in AM 5818 (2057 CE), this will also be the case in year 4. (These are the same year numbers as were mentioned for the spring season in the previous paragraph, except that they get incremented at Rosh Hashanah.) This progressively increases the probability that Sukkot will be cold and wet, making it uncomfortable or impractical to dwell in the traditional "succah" during Sukkot. The first winter seasonal prayer for rain is not recited until "Shemini Atzeret", after the end of Sukkot, yet it is becoming increasingly likely that the rainy season in Israel will start before the end of Sukkot. No equinox or solstice will ever be more than a day or so away from its mean date according to the solar calendar, while nineteen Jewish years average 6939d 16h 33m 03s compared to the 6939d 14h 26m 15s of nineteen mean tropical years. This discrepancy has mounted up to six days, which is why the earliest Passover currently falls on 26 March (as in AM 5773 / 2013 CE). Given the length of the year, the length of each month is fixed as described above, so the real problem in determining the calendar for a year is determining the number of days in the year. In the modern calendar, this is determined in the following manner. The day of Rosh Hashanah and the length of the year are determined by the time and the day of the week of the Tishrei "molad", that is, the moment of the average conjunction. Given the Tishrei "molad" of a certain year, the length of the year is determined as follows: First, one must determine whether each year is an ordinary or leap year by its position in the 19-year Metonic cycle. Years 3, 6, 8, 11, 14, 17, and 19 are leap years. 
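The Metonic leap-year rule just used (leap years 3, 6, 8, 11, 14, 17, and 19 of the cycle) is often written as a one-line test; the closed form below is a standard equivalent of that list, not something unique to this article:

```python
def is_leap(year: int) -> bool:
    """True in years 3, 6, 8, 11, 14, 17 and 19 of each 19-year cycle."""
    return (7 * year + 1) % 19 < 7

print([y for y in range(1, 20) if is_leap(y)])  # [3, 6, 8, 11, 14, 17, 19]
```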
Secondly, one must determine the number of days between the starting Tishrei "molad" (TM1) and the Tishrei "molad" of the next year (TM2). For calendar descriptions in general the day begins at 6 p.m., but for the purpose of determining Rosh Hashanah, a "molad" occurring on or after noon is treated as belonging to the next day (the first "deḥiyyah"). All months are calculated as 29d, 12h, 44m, 3s long (MonLen). Therefore, in an ordinary year TM2 occurs 12 × MonLen days after TM1. This is usually 354 calendar days after TM1, but if TM1 is on or after 3:11:20 a.m. and before noon, it will be 355 days. Similarly, in a leap year, TM2 occurs 13 × MonLen days after TM1. This is usually 384 days after TM1, but if TM1 is on or after noon and before 2:27:16 p.m., TM2 will be only 383 days after TM1. In the same way, from TM2 one calculates TM3. Thus the four natural year lengths are 354, 355, 383, and 384 days. However, because of the holiday rules, Rosh Hashanah cannot fall on a Sunday, Wednesday, or Friday, so if TM2 is one of those days, Rosh Hashanah in year 2 is postponed by adding one day to year 1 (the second "deḥiyyah"). To compensate, one day is subtracted from year 2. It is to allow for these adjustments that the system allows 385-day years (long leap) and 353-day years (short ordinary) besides the four natural year lengths. But how can year 1 be lengthened if it is already a long ordinary year of 355 days or year 2 be shortened if it is a short leap year of 383 days? That is why the third and fourth "deḥiyyah"s are needed. If year 1 is already a long ordinary year of 355 days, there will be a problem if TM1 is on a Tuesday, as that means TM2 falls on a Sunday and will have to be postponed, creating a 356-day year. In this case, Rosh Hashanah in year 1 is postponed from Tuesday (the third "deḥiyyah"). As it cannot be postponed to Wednesday, it is postponed to Thursday, and year 1 ends up with 354 days. 
On the other hand, if year 2 is already a short year of 383 days, there will be a problem if TM2 is on a Wednesday, because Rosh Hashanah in year 2 will have to be postponed from Wednesday to Thursday and this will cause year 2 to be only 382 days long. In this case, year 2 is extended by one day by postponing Rosh Hashanah in year 3 from Monday to Tuesday (the fourth "deḥiyyah"), and year 2 will have 383 days. Given the importance in Jewish ritual of establishing the accurate timing of monthly and annual events, some futurist writers and researchers have considered whether a "corrected" system of establishing the Hebrew date is required. The mean year of the current mathematically based Hebrew calendar has "drifted" an average of 7–8 days late relative to the equinox relationship that it originally had. It is not possible, however, for any individual Hebrew date to be a week or more "late", because Hebrew months always begin within a day or two of the "molad" moment. What happens instead is that the traditional Hebrew calendar "prematurely" inserts a leap month one year before it "should have been" inserted, where "prematurely" means that the insertion causes the spring equinox to land more than 30 days before the latest acceptable moment, thus causing the calendar to run "one month late" until the time when the leap month "should have been" inserted prior to the following spring. This presently happens in 4 years out of every 19-year cycle (years 3, 8, 11, and 19), implying that the Hebrew calendar currently runs "one month late" more than 21% of the time. Dr. Irv Bromberg has proposed a 353-year cycle of 4,366 months, which would include 130 leap months, along with use of a progressively shorter "molad" interval, which would keep an amended fixed arithmetic Hebrew calendar from drifting for more than seven millennia. It takes about 3 centuries for the spring equinox to drift an average of 1/19th of a "molad" interval earlier in the Hebrew calendar. 
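The weekday postponements discussed above all rest on the single rule that Rosh Hashanah cannot fall on a Sunday, Wednesday, or Friday (the traditional "lo ADU Rosh" mnemonic). That rule can be sketched as follows; the weekday numbering (0 = Sunday) is my own choice:

```python
FORBIDDEN_WEEKDAYS = {0, 3, 5}  # Sunday, Wednesday, Friday

def postpone_rosh_hashanah(weekday):
    """Shift Rosh Hashanah forward until it lands on a permitted
    weekday (the "lo ADU Rosh" rule)."""
    while weekday in FORBIDDEN_WEEKDAYS:
        weekday = (weekday + 1) % 7
    return weekday
```

Note that Wednesday jumps directly to Thursday and Sunday to Monday; the case analysis in the text arises from the knock-on effect these shifts have on the year lengths.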
That is a very important time unit, because it can be cancelled by simply truncating a 19-year cycle to 11 years, omitting 8 years including three leap years from the sequence. That is the essential feature of the 353-year leap cycle. Religious questions abound about how such a system might be implemented and administered throughout the diverse aspects of the world Jewish community. The times below (moladot Nisan) can be used to determine the day the Jewish ecclesiastical (spring) year starts over a period of nineteen years. Every nineteen years this time is 2 days, 16 hours, 33 1/18 minutes later in the week. That is either the same or the previous day in the civil calendar, depending on whether the difference in the day of the week is three or two days. If 29 February is included fewer than five times in the nineteen-year period, the date will be later by the number of days which corresponds to the difference between the actual number of insertions and five. If the year is due to start on Sunday, it actually begins on the following Tuesday if the following year is due to start on Friday morning. If due to start on Monday, Wednesday or Friday, it actually begins on the following day. If due to start on Saturday, it actually begins on the following day if the previous year was due to begin on Monday morning. The table below lists, for a Jewish year commencing on 23 March, the civil date of the first day of each month. If the year does not begin on 23 March, each month's first day will differ from the date shown by the number of days that the start of the year differs from 23 March. The correct column is the one which shows the correct starting date for the following year in the last row. If 29 February falls within a Jewish month, the first day of later months will be a day earlier than shown. In the Julian calendar, every 76 years the Jewish year is due to start 5h 47 14/18m earlier, and 3d 18h 12 4/18m later in the week.
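The figure of 2 days, 16 hours, 33 1/18 minutes per nineteen years quoted above can be verified directly from the molad interval, working in halakhic "parts" (1080 per hour); a small check, with variable names of my own choosing:

```python
PARTS_PER_HOUR = 1080
PARTS_PER_DAY = 24 * PARTS_PER_HOUR                      # 25,920
MOLAD = 29 * PARTS_PER_DAY + 12 * PARTS_PER_HOUR + 793   # one lunation

# Nineteen Hebrew years contain 235 lunar months (12*12 + 7*13).
shift = (235 * MOLAD) % (7 * PARTS_PER_DAY)   # advance within the week

days, rem = divmod(shift, PARTS_PER_DAY)
hours, parts = divmod(rem, PARTS_PER_HOUR)
minutes, part_frac = divmod(parts, 18)        # 18 parts = 1 minute
```

Working through the arithmetic gives 2 days, 16 hours, 33 minutes, and 1 remaining part, i.e. 33 1/18 minutes, matching the text.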
https://en.wikipedia.org/wiki?curid=13782
The Holocaust Industry The Holocaust Industry: Reflections on the Exploitation of Jewish Suffering is a 2000 book by Norman Finkelstein, in which the author argues that the American Jewish establishment exploits the memory of the Nazi Holocaust for political and financial gain, as well as to further the interests of Israel. According to Finkelstein, this "Holocaust industry" has corrupted Jewish culture and the authentic memory of the Holocaust. Finkelstein states that his consciousness of "the Nazi holocaust" is rooted in his parents' experiences in the Warsaw Ghetto; with the exception of his parents themselves, "every family member on both sides was exterminated by the Nazis". Nonetheless, during his childhood, no one ever asked any questions about what his mother and father had suffered. He suggests, "This was not a respectful silence. It was indifference." It was only after the establishment of "the Holocaust industry", he suggests, that outpourings of anguish over the plight of the Jews in World War II began. This ideology in turn served to endow Israel with a status as "'victim' state" despite its "horrendous" human rights record. According to Finkelstein, his book is "an anatomy and an indictment of the Holocaust industry". He argues that "'The Holocaust' is an ideological representation of the Nazi holocaust". In the foreword to the first paperback edition, Finkelstein notes that the first hardback edition had been a considerable hit in several European countries and many languages, but had been largely ignored in the United States. He sees "The New York Times" as the main promotional vehicle of the "Holocaust industry", and says that the 1999 Index listed 273 entries for the Holocaust and just 32 entries for the entire continent of Africa. The second (2003) edition contained 100 pages of new material, primarily in chapter 3 on the World Jewish Congress lawsuit against Swiss banks. 
Finkelstein set out to provide a guide to the relevant sections of the case. He feels that the presiding judge elected not to docket crucial documents, and that the Claims Resolution Tribunal could no longer be trusted. Finkelstein claims the CRT was on course to vindicate the Swiss banks before it changed tack in order to "protect the blackmailers' reputation". The book received support from individuals such as Noam Chomsky and Alexander Cockburn, and praise from the Holocaust historian Raul Hilberg. It also received negative reviews. According to Israeli journalist Yair Sheleg, in August 2000, German historian Hans Mommsen called it "a most trivial book, which appeals to easily aroused anti-Semitic prejudices." Wolfgang Benz stated to "Le Monde": "It is impossible to learn anything from Finkelstein's book. At best, it is interesting for a psychotherapist." The newspaper's reviewer added that Norman Finkelstein "hardly cares about nuance", and Rony Brauman wrote in the preface to the French edition ("L'Industrie de l'Holocauste", Paris, La Fabrique, 2001) that some of Finkelstein's assertions (especially on the impact of the Six-Day War) are wrong, and others are pieces of "propaganda". Historian Peter Novick, whose work Finkelstein described as providing the "initial stimulus" for "The Holocaust Industry", asserted in the July 28, 2000 issue of "The Jewish Chronicle" (London) that the book is replete with "false accusations", "egregious misrepresentations", "absurd claims" and "repeated mis-statements" ("A charge into darkness that sheds no light"). Finkelstein replied to Novick's allegations on his homepage. Hasia Diner has accused Peter Novick and Finkelstein of being "harsh critics of American Jewry from the left," and challenges the notion reflected in their books that American Jews did not begin to commemorate the Holocaust until after 1967. 
Andrew Ross also reviewed the book for "Salon". Jonathan Freedland, in a column for "The Guardian", wrote that unlike Novick's book, "The Holocaust Industry" does not share its "sensitivity or human empathy - surely prerequisites of any meaningful debate about the Holocaust". Freedland accused Finkelstein of having constructed "an elaborate conspiracy theory, in which the Jews were pushed from apathy to obsession about the Holocaust by a corrupt Jewish leadership bent on building international support for Israel". Finkelstein responded to his critics in the foreword to the second edition, writing "Mainstream critics allege that I conjured a 'conspiracy theory' while those on the Left ridicule the book as a defense of 'the banks'. None, so far as I can tell, question my actual findings." Finkelstein claims that there are two known frauds connected to the Holocaust: "The Painted Bird" by Polish writer Jerzy Kosinski – which was published as fiction – and "Fragments" by Binjamin Wilkomirski. He claims that Kosinski and Wilkomirski were defended even after their supposed frauds had been exposed. He identifies some of the defenders as members of the "Holocaust Industry", and writes that they also support each other: Elie Wiesel supported Kosinski; Israel Gutman and Daniel Goldhagen (see below) supported Wilkomirski; Wiesel and Gutman supported Goldhagen. Finkelstein compares the media treatment of the Holocaust with that of other genocides such as the Holodomor and the Armenian Genocide, particularly by members of what he calls "The Holocaust Industry". Between one and 1.5 million Armenians died between 1915 and 1917/1923; denial includes the claim that the deaths were the result of a civil war within World War I, or a refusal to accept that there were deaths at all. In 2001, Israeli Foreign Minister Shimon Peres went so far as to dismiss it as "allegations". 
However, by this time historical consensus was changing, and, according to Finkelstein, he was "angrily compared ... to a holocaust denier" by Israel Charny, executive director of the Institute on the Holocaust and Genocide in Jerusalem. According to Finkelstein, Elie Wiesel characterized any suggestion that he has profited from the "Holocaust Industry", or even any criticism at all, as Holocaust denial. Questioning a survivor's testimony, denouncing the role of Jewish collaborators, suggesting that Germans suffered during the bombing of Dresden or that any state except Germany committed crimes in World War II are all evidence of Holocaust denial – according to Deborah Lipstadt – and Finkelstein says the most "insidious" forms of Holocaust denial are "immoral equivalencies", denying the uniqueness of The Holocaust. Finkelstein examines the implications of applying this standard to another member of the "Holocaust Industry", Daniel Goldhagen, who argued that Serbian actions in Kosovo "are, in their essence, different from those of Nazi Germany only in scale". According to Finkelstein, Deborah Lipstadt claims there is widespread Holocaust denial, yet in "Denying the Holocaust" (1993) her prime example is Arthur Butz, author of "The Hoax of the Twentieth Century". The chapter on him is entitled "Entering the Mainstream", but Finkelstein considers that, were it not for the likes of Lipstadt, no one would ever have heard of Arthur Butz. Finkelstein claims that Holocaust deniers have as much influence in the US as the Flat Earth Society (p. 69). Publishing history of "The Holocaust Industry":
https://en.wikipedia.org/wiki?curid=13786
Hermetic Order of the Golden Dawn The Hermetic Order of the Golden Dawn ("Aurora Aurea"), more commonly known simply as the Golden Dawn, was a secret society devoted to the study and practice of the occult, metaphysics, and paranormal activities during the late 19th and early 20th centuries. Known as a magical order, the Hermetic Order of the Golden Dawn was active in Great Britain and focused its practices on theurgy and spiritual development. Many present-day concepts of ritual and magic that are at the centre of contemporary traditions, such as Wicca and Thelema, were inspired by the Golden Dawn, which became one of the largest single influences on 20th-century Western occultism. The three founders, William Robert Woodman, William Wynn Westcott and Samuel Liddell Mathers, were Freemasons. Westcott appears to have been the initial driving force behind the establishment of the Golden Dawn. The Golden Dawn system was based on hierarchy and initiation like the Masonic lodges; however women were admitted on an equal basis with men. The "Golden Dawn" was the first of three Orders, although all three are often collectively referred to as the "Golden Dawn". The First Order taught esoteric philosophy based on the Hermetic Qabalah and personal development through study and awareness of the four classical elements as well as the basics of astrology, tarot divination, and geomancy. The Second or "Inner" Order, the "Rosae Rubeae et Aureae Crucis" (the Ruby Rose and Cross of Gold), taught magic, including scrying, astral travel, and alchemy. The Third Order was that of the "Secret Chiefs", who were said to be highly skilled; they supposedly directed the activities of the lower two orders by spirit communication with the Chiefs of the Second Order. The foundational documents of the original Order of the Golden Dawn, known as the Cipher Manuscripts, are written in English using the Trithemius cipher. 
The manuscripts give the specific outlines of the Grade Rituals of the Order and prescribe a curriculum of graduated teachings that encompass the Hermetic Qabalah, astrology, occult tarot, geomancy, and alchemy. According to the records of the Order, the manuscripts passed from Kenneth R. H. Mackenzie, a Masonic scholar, to the Rev. A. F. A. Woodford, whom British occult writer Francis King describes as the fourth founder (although Woodford died shortly after the Order was founded). The documents did not excite Woodford, and in February 1886 he passed them on to Freemason William Wynn Westcott, who managed to decode them in 1887. Westcott, pleased with his discovery, called on fellow Freemason Samuel Liddell MacGregor Mathers for a second opinion. Westcott asked for Mathers' help to turn the manuscripts into a coherent system for lodge work. Mathers in turn asked fellow Freemason William Robert Woodman to assist the two, and he accepted. Mathers and Westcott have been credited with developing the ritual outlines in the Cipher Manuscripts into a workable format. Mathers, however, is generally credited with the design of the curriculum and rituals of the Second Order, which he called the "Rosae Rubae et Aureae Crucis" ("Ruby Rose and Golden Cross" or the "RR et AC"). In October 1887, Westcott claimed to have written to a German countess and prominent Rosicrucian named Anna Sprengel, whose address was said to have been found in the decoded Cipher Manuscripts. According to Westcott, Sprengel claimed the ability to contact certain supernatural entities, known as the Secret Chiefs, that were considered the authorities over any magical order or esoteric organization. Westcott purportedly received a reply from Sprengel granting permission to establish a Golden Dawn temple and conferring honorary grades of Adeptus Exemptus on Westcott, Mathers, and Woodman. The temple was to consist of the five grades outlined in the manuscripts. 
In 1888, the Isis-Urania Temple was founded in London. In contrast to the S.R.I.A. and Masonry, women were allowed and welcome to participate in the Order in "perfect equality" with men. The Order was more of a philosophical and metaphysical teaching order in its early years. Other than certain rituals and meditations found in the Cipher manuscripts and developed further, "magical practices" were generally not taught at the first temple. For the first four years, the Golden Dawn was one cohesive group later known as "the Outer Order" or "First Order." An "Inner Order" was established and became active in 1892. The Inner Order consisted of members known as "adepts," who had completed the entire course of study for the Outer Order. This group of adepts eventually became known as the Second Order. Eventually, the Osiris temple in Weston-super-Mare, the Horus temple in Bradford (both in 1888), and the Amen-Ra temple in Edinburgh (1893) were founded. In 1893 Mathers founded the Ahathoor temple in Paris. In 1891, Westcott's alleged correspondence with Anna Sprengel suddenly ceased. He claimed to have received word from Germany that she was either dead or that her companions did not approve of the founding of the Order and no further contact was to be made. If the founders were to contact the Secret Chiefs, apparently, it had to be done on their own. In 1892, Mathers professed that a link to the Secret Chiefs had been established. Subsequently, he supplied rituals for the Second Order, calling them the Red Rose and Cross of Gold. The rituals were based on the tradition of the tomb of Christian Rosenkreuz, and a "Vault of Adepts" became the controlling force behind the Outer Order. Later in 1916, Westcott claimed that Mathers also constructed these rituals from materials he received from Frater Lux ex Tenebris, a purported "Continental Adept". 
Some followers of the Golden Dawn tradition believe that the Secret Chiefs were not human or supernatural beings, but rather symbolic representations of actual or legendary sources of spiritual esotericism. The term came to stand for a great leader or teacher of a spiritual path or practice that found its way into the teachings of the Order. By the mid-1890s, the Golden Dawn was well established in Great Britain, with over one hundred members from every class of Victorian society. Many celebrities belonged to the Golden Dawn, such as the actress Florence Farr, the Irish revolutionary Maud Gonne, the Irish poet William Butler Yeats, the Welsh author Arthur Machen, and the English authors Evelyn Underhill and Aleister Crowley. In 1896 or 1897, Westcott broke all ties to the Golden Dawn, leaving Mathers in control. It has been speculated that his departure was due to his having lost a number of occult-related papers in a hansom cab. Apparently, when the papers were found, Westcott's connection to the Golden Dawn was discovered and brought to the attention of his employers. He may have been told to either resign from the Order or to give up his occupation as coroner. After Westcott's departure, Mathers appointed Florence Farr to be Chief Adept in Anglia. Dr. Henry B. Pullen Burry succeeded Westcott as Cancellarius—one of the three Chiefs of the Order. Mathers was the only active founding member after Westcott's departure. Due to personality clashes with other members and frequent absences from the center of Lodge activity in Great Britain, however, challenges to Mathers's authority as leader developed among the members of the Second Order. Toward the end of 1899, the Adepts of the Isis-Urania and Amen-Ra temples had become dissatisfied with Mathers' leadership, as well as his growing friendship with Aleister Crowley. They had also become anxious to make contact with the Secret Chiefs themselves, instead of relying on Mathers as an intermediary. 
Within the Isis-Urania temple, disputes were arising between Farr's "The Sphere", a secret society within the Isis-Urania, and the rest of the Adepti Minores. Crowley was refused initiation into the Adeptus Minor grade by the London officials. Mathers overrode their decision and quickly initiated him at the Ahathoor temple in Paris on January 16, 1900. Upon his return to the London temple, Crowley requested from Miss Cracknell, the acting secretary, the papers acknowledging his grade, to which he was now entitled. To the London Adepts, this was the final straw. Farr, already of the opinion that the London temple should be closed, wrote to Mathers expressing her wish to resign as his representative, although she was willing to carry on until a successor was found. Mathers believed Westcott was behind this turn of events and replied on February 16. On March 3, a committee of seven Adepts was elected in London, and requested a full investigation of the matter. Mathers sent an immediate reply, declining to provide proof, refusing to acknowledge the London temple, and dismissing Farr as his representative on March 23. In response, a general meeting was called on March 29 in London to remove Mathers as chief and expel him from the Order. In 1901, W. B. Yeats privately published a pamphlet titled "Is the Order of R. R. & A. C. to Remain a Magical Order?" After the Isis-Urania temple claimed its independence, there were even more disputes, leading to Yeats resigning. A committee of three was to temporarily govern, which included P.W. Bullock, M.W. Blackden and J. W. Brodie-Innes. After a short time, Bullock resigned, and Dr. Robert Felkin took his place. In 1903, A. E. Waite and Blackden joined forces to retain the name Isis-Urania, while Felkin and other London members formed the Stella Matutina. Yeats remained in the Stella Matutina until 1921, while Brodie-Innes continued his Amen-Ra membership in Edinburgh. 
Once Mathers realised that reconciliation was impossible, he made efforts to reestablish himself in London. The Bradford and Weston-super-Mare temples remained loyal to him, but their numbers were few. He then appointed Edward Berridge as his representative. According to Francis King, historical evidence shows that there were "twenty three members of a flourishing Second Order under Berridge-Mathers in 1913." J.W. Brodie-Innes continued leading the Amen-Ra temple, deciding that the revolt was unjustified. By 1908, Mathers and Brodie-Innes were in complete accord. According to sources that differ regarding the actual date, sometime between 1901 and 1913 Mathers renamed the branch of the Golden Dawn remaining loyal to his leadership to Alpha et Omega. Brodie-Innes assumed command of the English and Scottish temples, while Mathers concentrated on building up his Ahathoor temple and extending his American connections. According to occultist Israel Regardie, the Golden Dawn had spread to the United States of America before 1900 and a Thoth-Hermes temple had been founded in Chicago. By the beginning of the First World War in 1914, Mathers had established two to three American temples. Most temples of the Alpha et Omega and Stella Matutina closed or went into abeyance by the end of the 1930s, with the exceptions of two Stella Matutina temples: Hermes Temple in Bristol, which operated sporadically until 1970, and the Smaragdum Thallasses Temple (commonly referred to as Whare Ra) in Havelock North, New Zealand, which operated regularly until its closure in 1978. Much of the hierarchical structure for the Golden Dawn came from the Societas Rosicruciana in Anglia, which was itself derived from the Order of the Golden and Rosy Cross. The paired numbers attached to the Grades relate to positions on the Tree of Life. The Neophyte Grade of "0=0" indicates no position on the Tree. 
In the other pairs, the first numeral is the number of steps up from the bottom (Malkuth), and the second numeral is the number of steps down from the top (Kether). The First Order Grades were related to the four elements of Earth, Air, Water, and Fire, respectively. The Aspirant to a Grade received instruction on the metaphysical meaning of each of these Elements and had to pass a written examination and demonstrate certain skills to receive admission to that Grade. The Portal Grade was an "Invisible" or in-between grade separating the First Order from the Second Order. While no temples in the original chartered lineage of the Golden Dawn survived past the 1970s, several organizations have since revived its teachings and rituals. Among these, the following are notable:
https://en.wikipedia.org/wiki?curid=13787
Hash function A hash function is any function that can be used to map data of arbitrary size to fixed-size values. The values returned by a hash function are called "hash values", "hash codes", "digests", or simply "hashes". The values are used to index a fixed-size table called a "hash table". Use of a hash function to index a hash table is called "hashing" or "scatter storage addressing". Hash functions and their associated hash tables are used in data storage and retrieval applications to access data in a small and nearly constant time per retrieval, and storage space only fractionally greater than the total space required for the data or records themselves. Hashing is a computationally and storage space efficient form of data access which avoids the non-linear access time of ordered and unordered lists and structured trees, and the often exponential storage requirements of direct access of state spaces of large or variable-length keys. Use of hash functions relies on statistical properties of key and function interaction: worst case behavior is intolerably bad with a vanishingly small probability, and average case behavior can be nearly optimal (minimal collisions). Hash functions are related to (and often confused with) checksums, check digits, fingerprints, lossy compression, randomization functions, error-correcting codes, and ciphers. Although the concepts overlap to some extent, each one has its own uses and requirements and is designed and optimized differently. A hash function takes an input as a key, which is associated with a datum or record and used to identify it to the data storage and retrieval application. The keys may be fixed length, like an integer, or variable length, like a name. In some cases, the key is the datum itself. The output is a hash code used to index a hash table holding the data or records, or pointers to them. 
A hash function may be considered to perform three functions: converting variable-length keys into fixed-length values, scrambling the bits of the key so that the resulting values are uniformly distributed over the key space, and mapping the key values into ones less than or equal to the size of the table. A good hash function satisfies two basic properties: 1) it should be very fast to compute; 2) it should minimize duplication of output values (collisions). Hash functions rely on generating favorable probability distributions for their effectiveness, reducing access time to nearly constant. High table loading factors, pathological key sets and poorly designed hash functions can result in access times approaching linear in the number of items in the table. Hash functions can be designed to give best worst-case performance, good performance under high table loading factors, and in special cases, perfect (collisionless) mapping of keys into hash codes. Implementation is based on parity-preserving bit operations (XOR and ADD), multiply, or divide. A necessary adjunct to the hash function is a collision-resolution method that employs an auxiliary data structure like linked lists, or systematic probing of the table to find an empty slot. Hash functions are used in conjunction with hash tables to store and retrieve data items or data records. The hash function translates the key associated with each datum or record into a hash code which is used to index the hash table. When an item is to be added to the table, the hash code may index an empty slot (also called a bucket), in which case the item is added to the table there. If the hash code indexes a full slot, some kind of collision resolution is required: the new item may be omitted (not added to the table), or replace the old item, or it can be added to the table in some other location by a specified procedure. That procedure depends on the structure of the hash table: In "chained hashing", each slot is the head of a linked list or chain, and items that collide at the slot are added to the chain. Chains may be kept in random order and searched linearly, or in serial order, or as a self-ordering list by frequency to speed up access. 
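Chained hashing as described above can be sketched as follows; a minimal illustration, not a production hash table (the class name, bucket count, and use of Python's built-in `hash` are my own choices):

```python
class ChainedHashTable:
    """Each slot heads a chain; items that collide at a slot are
    appended to that slot's chain."""

    def __init__(self, size=16):
        self.slots = [[] for _ in range(size)]

    def _index(self, key):
        return hash(key) % len(self.slots)

    def put(self, key, value):
        chain = self.slots[self._index(key)]
        for i, (k, _) in enumerate(chain):
            if k == key:               # key already present: replace
                chain[i] = (key, value)
                return
        chain.append((key, value))     # collision or empty: extend chain

    def get(self, key):
        for k, v in self.slots[self._index(key)]:
            if k == key:
                return v
        raise KeyError(key)
```

Here chains are searched linearly in insertion order, the simplest of the orderings mentioned above.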
In "open address hashing", the table is probed starting from the occupied slot in a specified manner, usually by linear probing, quadratic probing, or double hashing until an open slot is located or the entire table is probed (overflow). Searching for the item follows the same procedure until the item is located, an open slot is found or the entire table has been searched (item not in table). Hash functions are also used to build caches for large data sets stored in slow media. A cache is generally simpler than a hashed search table, since any collision can be resolved by discarding or writing back the older of the two colliding items. Hash functions are an essential ingredient of the Bloom filter, a space-efficient probabilistic data structure that is used to test whether an element is a member of a set. A special case of hashing is known as geometric hashing or "the grid method". In these applications, the set of all inputs is some sort of metric space, and the hashing function can be interpreted as a partition of that space into a grid of "cells". The table is often an array with two or more indices (called a "grid file", "grid index", "bucket grid", and similar names), and the hash function returns an index tuple. This principle is widely used in computer graphics, computational geometry and many other disciplines, to solve many proximity problems in the plane or in three-dimensional space, such as finding closest pairs in a set of points, similar shapes in a list of shapes, similar images in an image database, and so on. Hash tables are also used to implement associative arrays and dynamic sets. A good hash function should map the expected inputs as evenly as possible over its output range. That is, every hash value in the output range should be generated with roughly the same probability. 
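Open addressing with linear probing, the first probe sequence mentioned above, can be sketched as follows (again a minimal illustration; deletion handling is omitted):

```python
class LinearProbingTable:
    """Open addressing: on collision, probe the next slot until an
    empty one is found or the whole table has been scanned."""

    def __init__(self, size=16):
        self.keys = [None] * size
        self.values = [None] * size

    def put(self, key, value):
        n = len(self.keys)
        i = hash(key) % n
        for _ in range(n):
            if self.keys[i] is None or self.keys[i] == key:
                self.keys[i], self.values[i] = key, value
                return
            i = (i + 1) % n            # linear probe: next slot
        raise OverflowError("table full")

    def get(self, key):
        n = len(self.keys)
        i = hash(key) % n
        for _ in range(n):
            if self.keys[i] is None:
                raise KeyError(key)    # open slot reached: not present
            if self.keys[i] == key:
                return self.values[i]
            i = (i + 1) % n
        raise KeyError(key)            # entire table searched
```

Search follows exactly the insertion probe sequence, terminating at the item, at an open slot, or after the whole table has been examined, as the text describes.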
The reason for this last requirement is that the cost of hashing-based methods goes up sharply as the number of "collisions"—pairs of inputs that are mapped to the same hash value—increases. If some hash values are more likely to occur than others, a larger fraction of the lookup operations will have to search through a larger set of colliding table entries. Note that this criterion only requires the value to be "uniformly distributed", not "random" in any sense. A good randomizing function is (barring computational efficiency concerns) generally a good choice as a hash function, but the converse need not be true. Hash tables often contain only a small subset of the valid inputs. For instance, a club membership list may contain only a hundred or so member names, out of the very large set of all possible names. In these cases, the uniformity criterion should hold for almost all typical subsets of entries that may be found in the table, not just for the global set of all possible entries. In other words, if a typical set of "m" records is hashed to "n" table slots, the probability of a bucket receiving many more than "m"/"n" records should be vanishingly small. In particular, if "m" is less than "n", very few buckets should have more than one or two records. A small number of collisions is virtually inevitable, even if "n" is much larger than "m" – see the birthday problem. In special cases when the keys are known in advance and the key set is static, a hash function can be found that achieves absolute (or collisionless) uniformity. Such a hash function is said to be "perfect". There is no algorithmic way of constructing such a function; searching for one is a factorial problem in the number of keys to be mapped versus the number of table slots that they are mapped into. 
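The birthday-problem claim above can be checked by simulation: hashing "m" random keys uniformly into "n" slots yields on average about "m"("m" − 1)/2"n" colliding pairs. A sketch, with the trial count and seed chosen arbitrarily:

```python
import random

def simulate_collisions(m, n, trials=200, seed=1):
    """Average number of colliding key pairs when m random keys are
    hashed uniformly into n slots."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        counts = {}
        for _ in range(m):
            s = rng.randrange(n)           # ideal uniform hash
            counts[s] = counts.get(s, 0) + 1
        # each slot holding c items contributes c*(c-1)/2 colliding pairs
        total += sum(c * (c - 1) // 2 for c in counts.values())
    return total / trials
```

With m = 100 keys and n = 10,000 slots the expectation is 100 × 99 / 20,000 ≈ 0.5 colliding pairs: even a table a hundred times larger than the key set sees occasional collisions.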
Finding a perfect hash function over more than a very small set of keys is usually computationally infeasible; the resulting function is likely to be more computationally complex than a standard hash function, and provides only a marginal advantage over a function with good statistical properties that yields a minimum number of collisions. See universal hash function. When testing a hash function, the uniformity of the distribution of hash values can be evaluated by the chi-squared test. This test is a goodness-of-fit measure: it compares the actual distribution of items in buckets with the expected (uniform) distribution. The formula is the sum over all buckets of "b""j"("b""j" + 1)/2, divided by ("n" / (2"m")) × ("n" + 2"m" − 1), where "n" is the number of keys, "m" is the number of buckets, and "b""j" is the number of items in bucket "j". A ratio within one confidence interval (0.95–1.05) indicates that the hash function evaluated has an approximately uniform distribution. Hash functions can have some technical properties that make it more likely that they'll have a uniform distribution when applied. One is the strict avalanche criterion: whenever a single input bit is complemented, each of the output bits changes with a 50% probability. The reason for this property is that selected subsets of the key space may have low variability. In order for the output to be uniformly distributed, a low amount of variability, even one bit, should translate into a high amount of variability (i.e. distribution over the table space) in the output. Each bit should change with probability 50% because if some bits are reluctant to change, the keys become clustered around those values. If the bits change too readily, the mapping approaches a fixed XOR function of a single bit. Standard tests for this property have been described in the literature. The relevance of the criterion to multiplicative hash functions has also been assessed in the literature. 
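The uniformity ratio described above can be computed directly from bucket counts; a sketch (the function name is my own):

```python
def chi_squared_ratio(bucket_counts):
    """Ratio of the observed collision statistic to its expected value
    for a uniform hash; values near 1 (roughly 0.95-1.05) suggest the
    distribution is close to uniform."""
    n = sum(bucket_counts)             # number of keys
    m = len(bucket_counts)             # number of buckets
    observed = sum(b * (b + 1) / 2 for b in bucket_counts)
    expected = (n / (2 * m)) * (n + 2 * m - 1)
    return observed / expected
```

For 1,000 keys spread perfectly evenly over 10 buckets the ratio works out to ("n" + "m")/("n" + 2"m" − 1) ≈ 0.99, well inside the interval, while piling all 1,000 keys into one bucket pushes it far above 1.05.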
In data storage and retrieval applications, use of a hash function is a trade off between search time and data storage space. If search time were unbounded, a very compact unordered linear list would be the best medium; if storage space were unbounded, a randomly accessible structure indexable by the key value would be very large, very sparse, but very fast. A hash function takes a finite amount of time to map a potentially large key space to a feasible amount of storage space searchable in a bounded amount of time regardless of the number of keys. In most applications, it is highly desirable that the hash function be computable with minimum latency and secondarily in a minimum number of instructions. Computational complexity varies with the number of instructions required and latency of individual instructions, with the simplest being the bitwise methods (folding), followed by the multiplicative methods, and the most complex (slowest) are the division-based methods. Because collisions should be infrequent, and cause a marginal delay but are otherwise harmless, it's usually preferable to choose a faster hash function over one that needs more computation but saves a few collisions. Division-based implementations can be of particular concern, because division is microprogrammed on nearly all chip architectures. Divide (modulo) by a constant can be inverted to become a multiply by the word-size multiplicative-inverse of the constant. This can be done by the programmer, or by the compiler. Divide can also be reduced directly into a series of shift-subtracts and shift-adds, though minimizing the number of such operations required is a daunting problem; the number of assembly instructions resulting may be more than a dozen, and swamp the pipeline. If the architecture has a hardware multiply functional unit, the multiply-by-inverse is likely a better approach. 
We can allow the table size "n" to not be a power of 2 and still not have to perform any remainder or division operation, as these computations are sometimes costly. For example, let "n" be significantly less than 2^"b". Consider a pseudorandom number generator function "P"(key) that is uniform on the interval [0, 2^"b" − 1]. A hash function uniform on the interval [0, "n" − 1] is "n" "P"(key)/2^"b". We can replace the division by a (possibly faster) right bit shift: ("n" "P"(key)) >> "b". If keys are being hashed repeatedly, and the hash function is costly, computing time can be saved by precomputing the hash codes and storing them with the keys. Matching hash codes almost certainly mean the keys are identical. This technique is used for the transposition table in game-playing programs, which stores a 64-bit hashed representation of the board position. A "universal hashing" scheme is a randomized algorithm that selects a hashing function "h" among a family of such functions, in such a way that the probability of a collision of any two distinct keys is 1/"m", where "m" is the number of distinct hash values desired – independently of the two keys. Universal hashing ensures (in a probabilistic sense) that the hash function application will behave as well as if it were using a random function, for any distribution of the input data. It will, however, have more collisions than perfect hashing and may require more operations than a special-purpose hash function. A hash function should be applicable to all situations in which a hash function might be used. A hash function that allows only certain table sizes, strings only up to a certain length, or can't accept a seed (i.e. allow double hashing) isn't as useful as one that does. A hash procedure must be deterministic – meaning that for a given input value it must always generate the same hash value. In other words, it must be a function of the data to be hashed, in the mathematical sense of the term.
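The multiply-and-shift range mapping described above ("n" "P"(key) shifted right by "b") can be sketched as follows (the function name and the 32-bit default are illustrative assumptions):

```python
def scale_to_range(x, n, b=32):
    """Map a b-bit uniform hash value x into [0, n-1] without division:
    (n * x) >> b computes floor(n * x / 2**b)."""
    return (n * x) >> b

# The largest 32-bit input maps to the top slot, the smallest to slot 0.
lo = scale_to_range(0, 100)
hi = scale_to_range(2**32 - 1, 100)
```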
This requirement excludes hash functions that depend on external variable parameters, such as pseudo-random number generators or the time of day. It also excludes functions that depend on the memory address of the object being hashed, in cases where the address may change during execution (as may happen on systems that use certain methods of garbage collection), although sometimes rehashing of the item is possible. The determinism is in the context of the reuse of the function. For example, Python adds the feature that hash functions make use of a randomized seed that is generated once when the Python process starts, in addition to the input to be hashed. The Python hash is still a valid hash function when used within a single run. But if the values are persisted (for example, written to disk) they can no longer be treated as valid hash values, since in the next run the random value might differ. It is often desirable that the output of a hash function have fixed size (but see below). If, for example, the output is constrained to 32-bit integer values, the hash values can be used to index into an array. Such hashing is commonly used to accelerate data searches. Producing fixed-length output from variable-length input can be accomplished by breaking the input data into chunks of specific size. Hash functions used for data searches use some arithmetic expression which iteratively processes chunks of the input (such as the characters in a string) to produce the hash value. In many applications, the range of hash values may be different for each run of the program, or may change along the same run (for instance, when a hash table needs to be expanded). In those situations, one needs a hash function which takes two parameters – the input data "z", and the number "n" of allowed hash values. A common solution is to compute a fixed hash function with a very large range (say, 0 to 2^32 − 1), divide the result by "n", and use the division's remainder.
If "n" is itself a power of 2, this can be done by bit masking and bit shifting. When this approach is used, the hash function must be chosen so that the result has fairly uniform distribution between 0 and "n" − 1, for any value of "n" that may occur in the application. Depending on the function, the remainder may be uniform only for certain values of "n", e.g. odd or prime numbers. When the hash function is used to store values in a hash table that outlives the run of the program, and the hash table needs to be expanded or shrunk, the hash table is referred to as a dynamic hash table. A hash function that will relocate the minimum number of records when the table is resized is desirable. What is needed is a hash function "H"("z","n") – where "z" is the key being hashed and "n" is the number of allowed hash values – such that "H"("z","n" + 1) = "H"("z","n") with probability close to "n"/("n" + 1). Linear hashing and spiral storage are examples of dynamic hash functions that execute in constant time but relax the property of uniformity to achieve the minimal movement property. Extendible hashing uses a dynamic hash function that requires space proportional to "n" to compute the hash function, and it becomes a function of the previous keys that have been inserted. Several algorithms that preserve the uniformity property but require time proportional to "n" to compute the value of "H"("z","n") have been invented. A hash function with minimal movement is especially useful in distributed hash tables. In some applications, the input data may contain features that are irrelevant for comparison purposes. For example, when looking up a personal name, it may be desirable to ignore the distinction between upper and lower case letters. For such data, one must use a hash function that is compatible with the data equivalence criterion being used: that is, any two inputs that are considered equivalent must yield the same hash value. 
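A minimal sketch of hashing by equivalence class, here normalizing case before hashing (it relies on Python's built-in hash, which, as noted earlier, is seeded per process and therefore only stable within one run; the function name is mine):

```python
def case_insensitive_bucket(name, n):
    """Normalize first, then hash: inputs the lookup should treat as
    equivalent ("Smith", "SMITH") must land in the same bucket."""
    return hash(name.upper()) % n
```

Any normalization that collapses the equivalence classes works; upper-casing is the example the text gives.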
This can be accomplished by normalizing the input before hashing it, as by upper-casing all letters. There are several common algorithms for hashing integers. The method giving the best distribution is data-dependent. One of the simplest and most common methods in practice is the modulo division method. If the data to be hashed is small enough, one can use the data itself (reinterpreted as an integer) as the hashed value. The cost of computing this "identity" hash function is effectively zero. This hash function is perfect, as it maps each input to a distinct hash value. The meaning of "small enough" depends on the size of the type that is used as the hashed value. For example, in Java, the hash code is a 32-bit integer. Thus the 32-bit integer codice_1 and 32-bit floating-point codice_2 objects can simply use the value directly; whereas the 64-bit integer codice_3 and 64-bit floating-point codice_4 cannot use this method. Other types of data can also use this hashing scheme. For example, when mapping character strings between upper and lower case, one can use the binary encoding of each character, interpreted as an integer, to index a table that gives the alternative form of that character ("A" for "a", "8" for "8", etc.). If each character is stored in 8 bits (as in extended ASCII or ISO Latin 1), the table has only 2^8 = 256 entries; in the case of Unicode characters, the table would have 17×2^16 = 1,114,112 entries. The same technique can be used to map two-letter country codes like "us" or "za" to country names (26^2 = 676 table entries), 5-digit zip codes like 13083 to city names (10^5 = 100,000 entries), etc. Invalid data values (such as the country code "xx" or the zip code 00000) may be left undefined in the table or mapped to some appropriate "null" value. If the keys are uniformly or sufficiently uniformly distributed over the key space, so that the key values are essentially random, they may be considered to be already 'hashed'.
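The range-reduction strategies discussed earlier (taking the remainder in general, and masking the low bits when the table size is a power of 2) can be sketched as follows (function names are mine):

```python
def reduce_mod(h, n):
    """General case: reduce a wide hash value with the division remainder."""
    return h % n

def reduce_mask(h, n):
    """When n is a power of 2, the remainder is just the low log2(n) bits,
    which avoids the division entirely."""
    assert n > 0 and n & (n - 1) == 0, "n must be a power of 2"
    return h & (n - 1)
```

The mask form only matches the remainder form when n is a power of 2, which is why many table implementations keep their size at a power of 2.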
In this case, any number of any bits in the key may be dialed out and collated as an index into the hash table. A simple such hash function would be to mask off the bottom "m" bits to use as an index into a table of size 2^"m". A folding hash code is produced by dividing the input into n sections of m bits, where 2^m is the table size, and using a parity-preserving bitwise operation like ADD or XOR to combine the sections. The final operation is a mask or shift to trim off any excess bits at the high or low end. For example, for a table size of 15 bits and key value of 0x0123456789ABCDEF, there are 5 sections 0x4DEF, 0x1357, 0x159E, 0x091A and 0x8. Adding, we obtain 0x7AA4, a 15-bit value. A mid-squares hash code is produced by squaring the input and extracting an appropriate number of middle digits or bits. For example, if the input is 123,456,789 and the hash table size 10,000, squaring the key produces 15,241,578,750,190,521, so the hash code is taken as the middle 4 digits of the 17-digit number (ignoring the high digit): 8750. The mid-squares method produces a reasonable hash code if there are not a lot of leading or trailing zeros in the key. This is a variant of multiplicative hashing, but not as good, because an arbitrary key is not a good multiplier. A standard technique is to use a modulo function on the key, by selecting a divisor formula_6 which is a prime number close to the table size, so formula_7. The table size is usually a power of 2. This gives a distribution from formula_8. This gives good results over a large number of key sets. A significant drawback of division hashing is that division is microprogrammed on most modern architectures including x86, and can be 10 times slower than multiply. A second drawback is that it won't break up clustered keys. For example, the keys 123000, 456000, 789000, etc. modulo 1000 all map to the same address.
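Both the folding and the mid-squares methods can be sketched as follows (the section-summing loop and the convention of dropping the high digit when the square has an odd number of digits follow the examples above; parameter names are mine):

```python
def fold_hash(key, m, key_bits=64):
    """Folding: split the key into m-bit sections, ADD them together,
    and mask the sum back down to m bits."""
    mask = (1 << m) - 1
    total = 0
    while key_bits > 0:
        total += key & mask   # take the next m-bit section
        key >>= m
        key_bits -= m
    return total & mask       # trim excess carry bits

def mid_squares_hash(key):
    """Mid-squares: square the key and take the middle 4 digits
    (suitable for a table of size 10,000)."""
    s = str(key * key)
    if len(s) % 2:            # odd number of digits: ignore the high digit
        s = s[1:]
    mid = len(s) // 2
    return int(s[mid - 2:mid + 2])
```

For the text's mid-squares example, 123,456,789 squared yields the middle digits 8750; XOR could replace ADD in the folding loop.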
This technique works well in practice because many key sets are sufficiently random already, and the probability that a key set will be cyclical by a large prime number is small. Algebraic coding is a variant of the division method of hashing which uses division by a polynomial modulo 2 instead of an integer to map n bits to m bits. In this approach, formula_9 and we postulate a formula_3th-degree polynomial formula_11. A key formula_12 can be regarded as the polynomial formula_13. The remainder using polynomial arithmetic modulo 2 is formula_14. Then formula_15. If formula_16 is constructed to have t or fewer non-zero coefficients, then keys differing by t or fewer bits are guaranteed not to collide. Z, a function of k, t, and n and a divisor of 2^k − 1, is constructed from the GF(2^k) field. Knuth gives an example: for n=15, m=10 and t=7, formula_17. The derivation is as follows: The usual outcome is that either n will get large, or t will get large, or both, for the scheme to be computationally feasible. Therefore, it is more suited to hardware or microcode implementation. See also unique permutation hashing, which has a guaranteed best worst-case insertion time. Standard multiplicative hashing uses the formula formula_34 which produces a hash value in formula_35. The value formula_36 is an appropriately chosen value that should be relatively prime to formula_37; it should be large, and its binary representation a random mix of 1's and 0's. An important practical special case occurs when formula_38 and formula_39 are powers of 2 and formula_40 is the machine word size. In this case this formula becomes formula_41.
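A hedged sketch of this word-size special case, using the well-known 64-bit Fibonacci multiplier floor(2^64/φ) as the constant (the constant choice and names are illustrative, not prescribed by the text):

```python
W = 64                     # machine word size assumed here
A = 0x9E3779B97F4A7C15     # floor(2**64 / golden ratio), an odd constant

def mult_hash(key, m):
    """Multiplicative hashing: multiply modulo 2**W (free overflow on
    real hardware), then keep the top m bits with a right shift."""
    return (key * A % 2**W) >> (W - m)

buckets = [mult_hash(k, 10) for k in range(100)]
```

With this multiplier, runs of consecutive keys are spread widely across the table rather than landing in adjacent slots.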
This is special because arithmetic modulo formula_42 is done by default in low-level programming languages and integer division by a power of 2 is simply a right-shift, so, in C, for example, this function becomes and for fixed formula_3 and formula_40 this translates into a single integer multiplication and right-shift making it one of the fastest hash functions to compute. Multiplicative hashing is susceptible to a "common mistake" that leads to poor diffusion—higher-value input bits do not affect lower-value output bits. A transmutation on the input which shifts the span of retained top bits down and XORs or ADDs them to the key before the multiplication step corrects for this. So the resulting function looks like: Fibonacci hashing is a form of multiplicative hashing in which the multiplier is formula_45, where formula_46 is the machine word length and formula_47 (phi) is the golden ratio. formula_47 is an irrational number with approximate value 5/3, and decimal expansion of 1.618033... A property of this multiplier is that it uniformly distributes over the table space, blocks of consecutive keys with respect to any block of bits in the key. Consecutive keys within the high bits or low bits of the key (or some other field) are relatively common. The multipliers for various word lengths formula_46 are: Tabulation hashing, more generally known as "Zobrist hashing" after Albert Zobrist, an American computer scientist, is a method for constructing universal families of hash functions by combining table lookup with XOR operations. This algorithm has proven to be very fast and of high quality for hashing purposes (especially hashing of integer-number keys). Zobrist hashing was originally introduced as a means of compactly representing chess positions in computer game playing programs. A unique random number was assigned to represent each type of piece (six each for black and white) on each space of the board. 
Thus a table of 64×12 such numbers is initialized at the start of the program. The random numbers could be any length, but 64 bits was natural due to the 64 squares on the board. A position was transcribed by cycling through the pieces in a position, indexing the corresponding random numbers (vacant spaces were not included in the calculation), and XORing them together (the starting value could be 0, the identity value for XOR, or a random seed). The resulting value was reduced by modulo, folding or some other operation to produce a hash table index. The original Zobrist hash was stored in the table as the representation of the position. Later, the method was extended to hashing integers by representing each byte in each of 4 possible positions in the word by a unique 32-bit random number. Thus, a table of 2^8×4 such random numbers is constructed. A 32-bit hashed integer is transcribed by successively indexing the table with the value of each byte of the plain text integer and XORing the loaded values together (again, the starting value can be the identity value or a random seed). The natural extension to 64-bit integers is by use of a table of 2^8×8 64-bit random numbers. This kind of function has some nice theoretical properties, one of which is called "3-tuple independence", meaning every 3-tuple of keys is equally likely to be mapped to any 3-tuple of hash values. A hash function can be designed to exploit existing entropy in the keys. If the keys have leading or trailing zeros, or particular fields that are unused, always zero or some other constant, or generally vary little, then masking out only the volatile bits and hashing on those will provide a better and possibly faster hash function. Selected divisors or multipliers in the division and multiplicative schemes may make more uniform hash functions if the keys are cyclic or have other redundancies.
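The byte-wise tabulation scheme described above (a 256×8 table of 64-bit random words for 64-bit integer keys) might look like this (the fixed seed and names are my assumptions; the table must stay constant for the lifetime of the hash function):

```python
import random

rng = random.Random(42)   # fixed seed keeps the table deterministic
TABLE = [[rng.getrandbits(64) for _ in range(256)] for _ in range(8)]

def tabulation_hash(key):
    """Tabulation (Zobrist-style) hashing of a 64-bit integer: each byte,
    in its own position, indexes a private sub-table of random words, and
    the eight looked-up words are XORed together."""
    h = 0
    for pos in range(8):
        h ^= TABLE[pos][(key >> (8 * pos)) & 0xFF]
    return h
```

Because XOR cancels identical terms, two keys differing only in one byte differ exactly by the XOR of the two corresponding table entries.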
When the data values are long (or variable-length) character strings—such as personal names, web page addresses, or mail messages—their distribution is usually very uneven, with complicated dependencies. For example, text in any natural language has highly non-uniform distributions of characters, and character pairs, characteristic of the language. For such data, it is prudent to use a hash function that depends on all characters of the string—and depends on each character in a different way. Simplistic hash functions may add the first and last "n" characters of a string along with the length, or form a word-size hash from the middle 4 characters of a string. This saves iterating over the (potentially long) string, but hash functions which do not hash on all characters of a string can readily become linear due to redundancies, clustering or other pathologies in the key set. Such strategies may be effective as a custom hash function if the structure of the keys is such that either the middle, ends or other field(s) are zero or some other invariant constant that doesn't differentiate the keys; then the invariant parts of the keys can be ignored. The paradigmatic example of folding by characters is to add up the integer values of all the characters in the string. A better idea is to multiply the hash total by a constant, typically a sizeable prime number, before adding in the next character, ignoring overflow. Using exclusive 'or' instead of add is also a plausible alternative. The final operation would be a modulo, mask, or other function to reduce the word value to an index the size of the table. The weakness of this procedure is that information may cluster in the upper or lower bits of the bytes, which clustering will remain in the hashed result and cause more collisions than a proper randomizing hash. 
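The multiply-then-add character folding described above can be sketched as follows (31 is one commonly used prime multiplier, as in Java's String.hashCode; the 32-bit mask stands in for ignoring overflow):

```python
def string_hash(s, table_size):
    """Fold in every character: scale the running total by a prime
    before adding the next character, staying within 32 bits."""
    h = 0
    for ch in s:
        h = (h * 31 + ord(ch)) & 0xFFFFFFFF
    return h % table_size
```

Because the multiplier shifts earlier characters' contributions, anagrams no longer collide the way they do under plain addition.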
ASCII byte codes, for example, have an upper bit of 0 and printable strings don't use the first 32 byte codes, so the information (95 byte codes) is clustered in the remaining bits in an unobvious manner. The classic approach, dubbed the PJW hash and based on the work of Peter J. Weinberger at AT&T Bell Labs in the 1970s, was originally designed for hashing identifiers into compiler symbol tables, as given in the "Dragon Book". This hash function offsets the bytes 4 bits before ADDing them together. When the quantity wraps, the high 4 bits are shifted out and, if non-zero, XORed back into the low byte of the cumulative quantity. The result is a word-size hash code to which a modulo or other reducing operation can be applied to produce the final hash index. Today, especially with the advent of 64-bit word sizes, much more efficient variable-length string hashing by word-chunks is available. Modern microprocessors will allow for much faster processing if 8-bit character strings are not hashed by processing one character at a time, but by interpreting the string as an array of 32-bit or 64-bit integers and hashing/accumulating these "wide word" integer values by means of arithmetic operations (e.g. multiplication by constant and bit-shifting). The final word, which may have unoccupied byte positions, is filled with zeros or a specified "randomizing" value before being folded into the hash. The accumulated hash code is reduced by a final modulo or other operation to yield an index into the table. Analogous to the way an ASCII or EBCDIC character string representing a decimal number is converted to a numeric quantity for computing, a variable-length string can be converted as (x0·a^(k−1) + x1·a^(k−2) + ... + x(k−2)·a + x(k−1)). This is simply a polynomial in a non-zero "radix" "a" ≠ 1 that takes the components (x0, x1, ..., x(k−1)) as the characters of the input string of length k.
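A sketch of the PJW scheme described above, transcribed into Python (the 32-bit constants follow the classic C formulation; this is an illustration, not the canonical source):

```python
def pjw_hash(s):
    """PJW hash: shift the accumulator left 4 bits before adding each
    character; when the top nibble fills, XOR it back into a lower byte
    and clear it, keeping the result within 32 bits."""
    h = 0
    for ch in s:
        h = (h << 4) + ord(ch)
        high = h & 0xF0000000      # the 4 bits about to be "shifted out"
        if high:
            h ^= high >> 24        # fold them back into a low byte
        h &= ~high & 0xFFFFFFFF    # clear the top nibble
    return h
```

Clearing the top nibble on every step keeps the accumulator inside 28 bits, so a final modulo can safely reduce it to a table index.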
It can be used directly as the hash code, or a hash function applied to it to map the potentially large value to the hash table size. The value of "a" is usually a prime number at least large enough to hold the number of different characters in the character set of potential keys. Radix conversion hashing of strings minimizes the number of collisions. Available data sizes may restrict the maximum length of string that can be hashed with this method. For example, a 128-bit double long word will hash only a 26-character alphabetic string (ignoring case) with a radix of 29; a printable ASCII string is limited to 9 characters using radix 97 and a 64-bit long word. However, alphabetic keys are usually of modest length, because keys must be stored in the hash table. Numeric character strings are usually not a problem; 64 bits can count up to 10^19, or 19 decimal digits with radix 10. In some applications, such as substring search, one can compute a hash function "h" for every "k"-character substring of a given "n"-character string by advancing a window of width "k" characters along the string; where "k" is a fixed integer, and "n" is greater than "k". The straightforward solution, which is to extract such a substring at every character position in the text and compute "h" separately, requires a number of operations proportional to "k"·"n". However, with the proper choice of "h", one can use the technique of rolling hash to compute all those hashes with an effort proportional to "mk" + "n" where "m" is the number of occurrences of the substring. The most familiar algorithm of this type is Rabin-Karp with best and average case performance "O(n+mk)" and worst case "O(n·k)" (in all fairness, the worst case here is gravely pathological: both the text string and substring are composed of a repeated single character, such as t="AAAAAAAAAAA", and s="AAA").
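A sketch of substring search with a rolling hash in the Rabin-Karp style (the base and modulus are illustrative choices, not prescribed by the text):

```python
def rabin_karp(text, pattern, base=256, mod=1_000_003):
    """Rabin-Karp substring search: slide a k-wide window along the text,
    updating its hash in O(1) per shift instead of rehashing k characters."""
    n, k = len(text), len(pattern)
    if k > n:
        return []
    target = window = 0
    for i in range(k):
        target = (target * base + ord(pattern[i])) % mod
        window = (window * base + ord(text[i])) % mod
    high = pow(base, k - 1, mod)      # weight of the outgoing character
    matches = []
    for i in range(n - k + 1):
        # verify on hash match to rule out collisions
        if window == target and text[i:i + k] == pattern:
            matches.append(i)
        if i < n - k:
            window = ((window - ord(text[i]) * high) * base
                      + ord(text[i + k])) % mod
    return matches
```

The explicit string comparison on a hash hit is what makes collisions harmless rather than incorrect.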
The hash function used for the algorithm is usually the Rabin fingerprint, designed to avoid collisions in 8-bit character strings, but other suitable hash functions are also used. Worst case result for a hash function can be assessed two ways: theoretical and practical. The theoretical worst case is the probability that all keys map to a single slot. The practical worst case is the expected longest probe sequence (hash function + collision resolution method). This analysis considers uniform hashing, that is, any key will map to any particular slot with probability 1/m, characteristic of universal hash functions. While Knuth worries about adversarial attack on real time systems, Gonnet has shown that the probability of such a case is "ridiculously small". He showed that the probability of k of n keys mapping to a single slot is formula_50 where formula_51 is the load factor, n/m. The term "hash" offers a natural analogy with its non-technical meaning (to "chop" or "make a mess" out of something), given how hash functions scramble their input data to derive their output. In his research for the precise origin of the term, Donald Knuth notes that, while Hans Peter Luhn of IBM appears to have been the first to use the concept of a hash function in a memo dated January 1953, the term itself would only appear in published literature in the late 1960s, in Herbert Hellerman's "Digital Computer System Principles", even though it was already widespread jargon by then.
https://en.wikipedia.org/wiki?curid=13790
High jump The high jump is a track and field event in which competitors must jump unaided over a horizontal bar placed at measured heights without dislodging it. In its most practiced modern format, a bar is placed between two standards with a crash mat for landing. In the modern era, athletes run towards the bar and use the Fosbury Flop method of jumping, leaping head first with their back to the bar. Since ancient times, competitors have introduced increasingly effective techniques to arrive at the current form. The discipline is, alongside the pole vault, one of two vertical clearance events to feature on the Olympic athletics programme. It is contested at the World Championships in Athletics and IAAF World Indoor Championships, and is a common occurrence at track and field meetings. The high jump was among the first events deemed acceptable for women, having been held at the 1928 Olympic Games. Javier Sotomayor (Cuba) is the current men's record holder with a jump of set in 1993 – the longest standing record in the history of the men's high jump. Stefka Kostadinova (Bulgaria) has held the women's world record at since 1987, also the longest-held record in the event. The rules for the high jump are set internationally by the International Association of Athletics Federations (IAAF). Jumpers must take off on one foot. A jump is considered a failure if the bar is dislodged by the action of the jumper whilst jumping or the jumper touches the ground or breaks the plane of the near edge of the bar before clearance. The technique one uses for the jump must be almost flawless in order to have a chance of clearing a high bar. Competitors may begin jumping at any height announced by the chief judge, or may pass, at their own discretion. Most competitions state that three consecutive missed jumps, at any height or combination of heights, will eliminate the jumper from competition. The victory goes to the jumper who clears the greatest height during the final.
Tie-breakers are used for any place in which scoring occurs. If two or more jumpers tie for one of these places, the tie-breakers are: 1) the fewest misses at the height at which the tie occurred; and 2) the fewest misses throughout the competition. If the event remains tied for first place (or a limited advancement position to a subsequent meet), the jumpers have a jump-off, beginning at the next greater height. Each jumper has one attempt. The bar is then alternately lowered and raised until only one jumper succeeds at a given height. The first recorded high jump event took place in Scotland in the 19th century. Early jumpers used either an elaborate straight-on approach or a scissors technique. Soon after, the bar was approached diagonally, and the jumper threw first the inside leg and then the other over the bar in a scissoring motion. Around the turn of the 20th century, techniques began to change, beginning with the Irish-American Michael Sweeney's "Eastern cut-off". By taking off like the scissors and extending his spine and flattening out over the bar, Sweeney raised the world record to in 1895. Another American, George Horine, developed an even more efficient technique, the "Western roll". In this style, the bar again is approached on a diagonal, but the inner leg is used for the take-off, while the outer leg is thrust up to lead the body sideways over the bar. Horine increased the world standard to in 1912. His technique was predominant through the Berlin Olympics of 1936, in which the event was won by Cornelius Johnson at . American and Soviet jumpers were the most successful for the next four decades, and they pioneered the evolution of the straddle technique. Straddle jumpers took off as in the Western roll, but rotated their (belly-down) torso around the bar, obtaining the most efficient and highest clearance (of the bar) up to that time.
Straddle jumper Charles Dumas was the first to clear 7 feet (2.13 m), in 1956. American John Thomas pushed the world mark to in 1960. Valeriy Brumel took over the event for the next four years. The elegant Soviet jumper radically sped up his approach run, took the record up to , and won the Olympic gold medal in 1964, before a motorcycle accident ended his career. American coaches, including two-time NCAA champion Frank Costello of the University of Maryland, flocked to Russia to learn from Brumel and his coaches. However, it would be a solitary innovator at Oregon State University, Dick Fosbury, who would bring the high jump into the next century. Taking advantage of the raised, softer landing areas by then in use, Fosbury added a new twist to the outmoded Eastern Cut-off. He directed himself over the bar head and shoulders first, sliding over on his back and landing in a fashion which would likely have broken his neck in the old, sawdust landing pits. After he used this Fosbury flop to win the 1968 Olympic gold medal, the technique began to spread around the world, and soon "floppers" were dominating international high jump competitions. The last straddler to set a world record was Vladimir Yashchenko, who cleared in 1977 and then indoors in 1978. Among renowned high jumpers following Fosbury's lead were Americans Dwight Stones and his rival, tall Franklin Jacobs of Paterson, NJ, who cleared , over his head (a feat equalled 27 years later by Sweden's Stefan Holm); Chinese record-setters Ni-chi Chin and Zhu Jianhua; Germans Gerd Wessig and Dietmar Mögenburg; Swedish Olympic medalist and former world record holder Patrik Sjöberg; and female jumpers Iolanda Balaş of Romania, Ulrike Meyfarth of Germany and Italy's Sara Simeoni. The most important aspect of putting all the pieces of the jump together is the body mechanics the jumper uses. Technique and form have evolved greatly over the history of the high jump.
The popularity of a style depends upon the time period: Beginnings (1790–1875), a two-legged lift over the bar; Basic Scissors (1875–1892), a standing jump and straight run-up; Eastern Cut-off (1892–1912), scissors with rotation; Western Roll (1912–1930), an early straddle technique; Straddle (1930–1960), the basic straddle technique; Dive Straddle (1960–1978), an advanced straddle technique; and the Fosbury Flop (1968–current), the most common technique in use today. The Fosbury Flop is currently deemed the most efficient way for competitors to propel themselves over the bar. Still, depending on the individual athlete's specific strengths and weaknesses, there are variations on the separate pieces that make up the jump. For a Fosbury Flop, the athlete starts on the right or left of the mat depending on their jump foot, placing the jump foot furthest away from the high jump mat. The athlete will have an eight-to-ten-step approach in total, the last five steps being a curve, preceded by three or five steps on a straight. The athlete will want to mark their approach to find as much consistency as possible. The approach run of the high jump may actually be more important than the take-off: if a high jumper runs with bad timing or without enough aggression, clearing a high bar becomes more of a challenge. The approach requires a certain shape or curve, the right amount of speed, and the correct number of strides; the approach angle is also critical for optimal height. The straight run builds the momentum and sets the tone for the athlete's jump. The athlete starts by pushing off with the take-off foot with slow, powerful steps, then begins to quicken and accelerate them. The athlete should be tall and running upright by the end of their three or five steps.
On the first step of the curve the athlete's take-off foot lands; they will want to continue accelerating and curving, focusing the body towards the opposite back corner of the high jump mat. While staying tall, erect, and leaning away from the mat, the athlete should make sure that their final two steps are flat-footed, rolling from heel to toe, as well as being the quickest steps. Most great straddle jumpers run at angles of about 30 to 40 degrees. The length of the run is determined by the speed of the person's approach. A slower run requires about 8 strides, whereas a faster high jumper might need about 13 strides. A greater run speed allows a greater part of the body's forward momentum to be converted upward. The J-type approach, favored by Fosbury floppers, allows for horizontal speed, the ability to turn in the air (centripetal force), and good take-off position. This allows horizontal momentum to turn into vertical momentum, propelling the jumper off the ground and over the bar. The approach should be a hard, controlled stride so that the jumper does not fall from creating an angle with speed. Athletes should run tall and lean on the curve from the ankles, not the hips. This allows the correct angle to force their hips to rotate during take-off, which allows their center of gravity to pass under the bar. The take-off can have slight variations depending on what feels most natural to the athlete: the double-arm take-off or the single-arm take-off. In both, the athlete should make sure not to take off at the center of the bar. The plant foot should be the foot furthest away from the bar, angled towards the opposite back corner of the mat, with the non-take-off knee driving up. Keep in mind this is a vertical jump, pushing all force straight up, accompanied by a one- or two-arm swing while driving the knee.
Unlike the classic straddle technique, where the take-off foot is "planted" in the same spot at every height, flop-style jumpers must adjust their take-off as the bar is raised. Their approach run must be adjusted slightly so that their take-off spot is slightly further out from the bar in order to allow their hips to clear the bar while still maintaining enough momentum to carry their legs across the bar. Jumpers attempting to reach record heights commonly fail when most of their energy is directed into the vertical effort, and they brush the bar off the standards with the backs of their legs as they stall out in mid-air. An effective approach shape can be derived from physics. For example, the rate of backward spin required as the jumper crosses the bar to facilitate shoulder clearance on the way up and foot clearance on the way down can be determined by computer simulation. This rotation rate can be back-calculated to determine the required angle of lean away from the bar at plant, based on how long the jumper is on the take-off foot. This information, together with the jumper's speed in the curve, can be used to calculate the radius of the curved part of the approach. This is a lot of work and requires measurements of running speed and time of take-off foot on the ground. However, one can work in the opposite direction by assuming an approach radius and watching the resulting backward rotation. This only works if some basic rules are followed in how one executes the approach and take-off. Drills can be practiced to solidify the approach. One drill is to run in a straight line (the linear part of the approach) and then run two to three circles spiraling into one another. Another is to run or skip a circle of any size, two to three times in a row. It is important to train to leap upwards without first leaning into the bar, allowing the momentum of the J approach to carry the body across the bar. 
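The radius back-calculation described in this paragraph can be approximated with the standard centripetal relation for a runner leaning into a curve, tan(θ) = v² / (g·r), where θ is the inward lean measured from vertical. The sketch below works under that simplified model only (it is not the computer simulation the text refers to), and the speed and lean values are illustrative assumptions.

```python
import math

# Simplified back-calculation: relate run speed, inward lean angle, and the
# radius of the curved part of the approach via tan(lean) = v^2 / (g * r).
# Example numbers are assumptions for illustration, not measured data.

G = 9.81  # gravitational acceleration, m/s^2

def curve_radius(speed_ms: float, lean_deg: float) -> float:
    """Approach-curve radius implied by a run speed and inward lean angle."""
    return speed_ms ** 2 / (G * math.tan(math.radians(lean_deg)))

def lean_angle(speed_ms: float, radius_m: float) -> float:
    """Inverse calculation: lean angle implied by a run speed and curve radius."""
    return math.degrees(math.atan(speed_ms ** 2 / (G * radius_m)))

# Example: a jumper running the curve at 7 m/s with a 20-degree lean
r = curve_radius(7.0, 20.0)
print(f"implied curve radius: {r:.1f} m")          # about 13.7 m
print(f"lean at that radius: {lean_angle(7.0, r):.1f} deg")
```

This is the "opposite direction" the text mentions: assume a radius (or a lean) and see what the model implies, rather than deriving everything from a full simulation of backward rotation over the bar.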
The drive of the athlete's non-take-off knee will naturally turn their body, placing them in the air with their back to the bar. The athlete then drives their shoulders back towards their feet, arching their body over the bar. The athlete can look over their shoulder to judge when to kick both feet over their head, causing their body to clear the bar and land on the mat. In high jump, it helps if the athlete is tall, has long legs, and carries little excess weight. They must have a strong lower body, and flexibility helps greatly as well. High jumpers tend to go through very vigorous training methods to achieve this ideal body frame. High jumpers must have a fast approach, so it is crucial to work on speed and also speed endurance. High jump competitions may take hours, and athletes must make sure they have the endurance to last the entire competition. Common sprint endurance workouts for high jumpers include 200-, 400-, and 800-meter training. Other speed endurance training methods, such as hill training or a ladder workout, may also be used. It is crucial for high jumpers to have strong lower bodies and cores: as the bar gets progressively higher, the strength of an athlete's legs, along with speed and technique, helps propel them over the bar. Squats, deadlifts, and core exercises help a high jumper achieve these goals. It is important, however, for a high jumper to keep a slim figure, as any unnecessary weight makes it difficult to jump higher. Arguably the most important training for a high jumper is plyometric training. Because high jump is such a technical event, any mistake in technique could lead to failure, injury, or both. To prevent these, high jumpers tend to focus heavily on plyometrics, including hurdle jumps, flexibility training, skips, and scissor-kick training. Plyometric workouts tend to be performed at the beginning of the workout. 
Kostadinova and Sotomayor are the only high jumpers to have been Olympic Champion, World Champion and broken the world record.
Heraclitus of Ephesus (fl. 504/3–501/0 BC), son of Bloson, was a pre-Socratic Ionian Greek philosopher, and a native of the city of Ephesus, in modern-day Turkey and then part of the Persian Empire. Due to the oracular and paradoxical nature of his philosophy, and his fondness for word play, he was called "The Obscure" even in antiquity. He wrote a single work, "On Nature", but the obscurity is made worse by its surviving only in fragments. His cryptic utterances have been the subject of numerous interpretations. He has been seen variously as a "material monist or a process philosopher; a scientific cosmologist, a metaphysician, or mainly a religious thinker; an empiricist, a rationalist, or a mystic; a conventional thinker or a revolutionary; a developer of logic or one who denied the law of non-contradiction; the first genuine philosopher or an anti-intellectual obscurantist." He was of distinguished parentage but eschewed his privileged life for a lonely one as a philosopher. Little else is known about his early life and education. He regarded himself as self-taught and a pioneer of wisdom. He was considered a misanthrope given to depression; he was also called "the weeping philosopher", in contrast to Democritus, "the laughing philosopher". Heraclitus believed the world was in accordance with "Logos" (literally, "word", "reason", or "account") and that it was ultimately made of fire. He was committed to a unity of opposites and harmony in the world. He was most famous for his insistence on ever-present change, or flux or becoming, as the characteristic feature of the world, as stated in the famous sayings "No man ever steps in the same river twice" and "Panta rhei" ("everything flows"). This aspect of his philosophy is contrasted with that of Parmenides, who believed in being, and that nothing changes. Both had an influence on Plato and thus, some speculate, on all of Western philosophy. 
The dates for Heraclitus are uncertain. Scholars have generally believed that either Parmenides was responding to Heraclitus, or Heraclitus to Parmenides, though opinion on who was responding to whom has varied over the course of the 20th and 21st centuries. Most hold that Parmenides was responding to Heraclitus, and therefore that Heraclitus was the older of the two. Heraclitus is silent on Parmenides, yet Parmenides seems possibly to refer to him, and Heraclitus refers to the likes of Pythagoras. The main source for the life of Heraclitus is the doxographer Diogenes Laërtius, although some have questioned the validity of his account as "a tissue of Hellenistic anecdotes, most of them obviously fabricated on the basis of statements in the preserved fragments". It also seems the stories about Heraclitus could have been invented to illustrate his character as inferred from his writings. Diogenes Laërtius said that Heraclitus flourished in the 69th Olympiad, 504–501 BC. Considerations such as that he was probably older than Parmenides, and a contemporary of Pythagoras, make this time frame a reasonable "floruit". His dates of birth and death are based on a life span of 60 years, the age at which Diogenes Laërtius says he died, with this floruit in the middle. Heraclitus was born to an aristocratic family in Ephesus, in the Persian Empire, in what is today Efes, Turkey. His father was named either Blosôn or Herakôn. Diogenes Laërtius says that he abdicated the kingship ("basileia") in favor of his brother, and Strabo confirms that there was a ruling family in Ephesus, descended from the Ionian founder Androclus, which still kept the title and could sit in the chief seat at the games, among a few other privileges. How much power the king had is another question, for Ephesus had been part of the Persian Empire since 547 BC and was ruled by a satrap, a relatively distant figure, as Cyrus the Great allowed the Ionians considerable autonomy. 
Diogenes Laërtius says that Heraclitus used to play knucklebones with the youths in the great temple of Artemis, the Artemisium, one of the largest temples of the 6th century BC and one of the Seven Wonders of the Ancient World. When asked to start making laws, he refused, saying that the constitution ("politeia") was "ponêra", which can mean either that it was fundamentally wrong or that he considered it toilsome. Two extant letters between Heraclitus and Darius I, quoted by Diogenes Laërtius, are undoubtedly later forgeries. With regard to education, Diogenes Laërtius says that Heraclitus was "wondrous" from childhood. Diogenes relates that Sotion said he was a "hearer" of Xenophanes, which contradicts Heraclitus' statement (so says Diogenes Laërtius) that he had taught himself by questioning himself. Burnet states in any case that "... Xenophanes left Ionia before Herakleitos was born." Diogenes Laërtius relates that as a boy Heraclitus had said he "knew nothing" but later claimed to "know everything". He "heard no one" but "questioned himself". Diogenes Laërtius relates that Heraclitus had a poor opinion of human affairs. He said "The mysteries practiced among men are unholy mysteries." Timon of Phlius is said to have called him a "mob-reviler". He was not afraid of being a contrarian, saying "Corpses are more fit to be cast out than dung." Heraclitus was no advocate of equality: "One is ten thousand to me, if he be the best." He is generally considered an opponent of democracy. Yet he thought "All men have a claim to self-ascertainment and sound thinking" and "Thinking is common to all." Heraclitus stressed the heedless unconsciousness of humankind: "The waking have one common world, but the sleeping turn aside each into a world of his own ("idios kosmos")." "Hearing they do not understand, like the deaf. Of them does the saying bear witness: 'present, they are absent.'" 
He also compared the ignorance of the average man to dogs: "Dogs, also, bark at what they do not know." He advises us, "Let us not conjecture randomly about the most important things" and said "a fool is excited by every word." He criticizes Hesiod, Pythagoras, Xenophanes, and Hecataeus as lacking understanding though learned, and has the most scorn for Pythagoras. Though he grants that "Men that love wisdom must be inquirers into very many things indeed", he said that "The knowledge of the most famous persons, which they guard, is but opinion." He also thought that Homer and Archilochus deserved to be beaten. The only man of note he praises is Bias of Priene, one of the Seven Sages of Greece, whose famous maxim is "most men are bad": "For what thought or wisdom have they? They follow the poets and take the crowd as their teacher, knowing not that 'the many are bad and few good.'" He hated the Athenians and his fellow Ephesians, wishing the latter wealth as punishment for their wicked ways. The Ephesians would "do well to end their lives, every grown man of them, and leave the city to beardless boys, for that they have driven out Hermodorus, the worthiest man among them, saying, 'We will have none who is worthiest among us; or if there be any such, let him go elsewhere and consort with others.'" According to Diogenes Laërtius: "Finally, he became a hater of his kind ("misanthrope") and wandered the mountains [...] making his diet of grass and herbs." Heraclitus' life as a philosopher was interrupted by dropsy. The physicians he consulted were unable to prescribe a cure. Diogenes Laërtius lists various stories about Heraclitus' death: in two versions, Heraclitus was cured of the dropsy and died of another disease. 
In one account, however, the philosopher "buried himself in a cowshed, expecting that the noxious damp humour would be drawn out of him by the warmth of the manure", while another says he treated himself with a liniment of cow manure and, after a day prone in the sun, died and was interred in the marketplace. According to Neathes of Cyzicus, after smearing himself with dung, Heraclitus was devoured by dogs. He died after 478 BC of dropsy. Burnet has a different theory: "Herakleitos said (fr. 68) that it was death to souls to become water; and we are told accordingly that he died of dropsy. He said (fr. 114) that the Ephesians should leave their city to their children, and (fr. 79) that Time was a child playing draughts. We are therefore told that he refused to take any part in public life, and went to play with the children in the temple of Artemis. He said (fr. 85) that corpses were more fit to be cast out than dung; and we are told that he covered himself with dung when attacked with dropsy. Lastly, he is said to have argued at great length with his doctors because of fr. 58. For these tales see Diog. ix. 3–5." Heraclitus was known to have produced a single work on papyrus, "On Nature". Diogenes Laërtius tells us that Heraclitus deposited his book as a dedication in the Artemisium. As with the other pre-Socratics, his writings survive now only in fragments quoted by other authors; in the case of Heraclitus, there are over one hundred. These are catalogued using the Diels–Kranz numbering system. Diogenes Laërtius also states that Heraclitus' work was "a continuous treatise...but was divided into three discourses, one on the universe, another on politics, and a third on theology." He does not say whether Heraclitus divided them this way or someone else did. Theophrastus says (in Diogenes Laërtius) "...some parts of his work [are] half-finished, while other parts [made] a strange medley." 
Burnet does not think the work had a title: "We do not know the title of the work of Herakleitos—if, indeed, it had one—and it is not easy to form a clear idea of its contents. We are told that it was divided into three discourses: one dealing with the universe, one political, and one theological. It is not to be supposed that this division is due to Herakleitos himself; all we can infer is that the work fell naturally into these three parts when the Stoic commentators took their editions of it in hand." We do know the work's opening lines, proving it was indeed a continuous work. Aristotle quotes part of the opening line in the "Rhetoric" to outline the difficulty in punctuating Heraclitus without ambiguity: whether "forever" applied to "being" or to "prove". Sextus Empiricus in "Against the Mathematicians" quotes the passage in full: "Of this "Logos" being forever do men prove to be uncomprehending, both before they hear and once they have heard it. For, though all things come to pass in accordance with this "Logos", they are like the unexperienced experiencing words and deeds such as I explain when I distinguish each thing according to its nature and show how it is. Other men are unaware of what they do when they are awake just as they are forgetful of what they do when they are asleep." Many subsequent philosophers in this period refer to the work. Says Kahn: "Down to the time of Plutarch and Clement, if not later, the little book of Heraclitus was available in its original form to any reader who chose to seek it out." Diogenes Laërtius says: "the book acquired such fame that it produced partisans of his philosophy who were called Heracliteans." Cratylus was one such follower; Antisthenes was another. At some time in antiquity Heraclitus acquired an epithet denoting that his major sayings were difficult to understand, with frequent paradox, metaphor, and pregnant utterances. 
In the "Metaphysics" Aristotle mentions how some say Heraclitus denied the law of noncontradiction, and accuses him of not reasoning. According to Diogenes Laërtius, Timon of Phlius called him "the Riddler", and explained that Heraclitus wrote his book "rather unclearly" ("asaphesteron") so that only the "capable" should attempt it. Heraclitus himself wrote "The lord whose is the oracle at Delphi neither speaks nor hides his meaning, but gives a sign." By the time of Cicero he had become "the dark" because he had spoken "nimis obscurē", "too obscurely", concerning nature and had done so deliberately in order to be misunderstood. The customary English translation follows the Latin: "the Obscure". A later tradition referred to Heraclitus as the "weeping philosopher", as opposed to Democritus, who is known as the "laughing philosopher"; weeping and laughter were their respective reactions to the folly of mankind. Diogenes Laërtius ascribes to Theophrastus the theory that Heraclitus did not complete some of his works because of melancholia, though apparently in Theophrastus's time this meant impulsiveness. If Stobaeus writes correctly, Sotion in the early 1st century AD was already combining the two in the imaginative duo of weeping and laughing philosophers: "Among the wise, instead of anger, Heraclitus was overtaken by tears, Democritus by laughter." The view is also expressed by the satirist Juvenal. The motif was also adopted by Lucian of Samosata in his "Sale of Creeds", in which the duo is sold together as a complementary product in the satirical auction of philosophers. Heraclitus's philosophy of change is commonly called becoming, and can be seen in a dialectical relationship with, and contrasted to, Parmenides' concept of "being". For this reason, Heraclitus and Parmenides are commonly considered two of the founders of ontology and of the issue of the One and the Many, and thus pivotal in the history of Western philosophy and metaphysics. 
Diogenes Laërtius has a quote summing up Heraclitus's philosophy: "All things come into being by conflict of opposites, and the sum of things ("ta hola", "the whole") flows like a stream." The meaning of "Logos" (λόγος) is subject to interpretation: "word", "account", "principle", "plan", "formula", "measure", "proportion", "reckoning." Though Heraclitus "quite deliberately plays on the various meanings of "logos"", there is no compelling reason to suppose that he used it in a special technical sense, significantly different from the way it was used in ordinary Greek of his time. Zeller's opinion of the Heraclitean logos: "λόγος in my [Zeller's] opinion, refers indeed primarily to the discourse, but also to the contents of the discourse, the truth expressed in it; a confusion and identification of different ideas, united and apparently included in one word, which should least of all surprise us in Heracleitus. He [Heraclitus] says: 'This discourse (the theory of the world laid down in his work) is not recognised by men, although it ever exists (i.e. that which always exists, contains the eternal order of things, the eternal truth), for although all happens according to it (and thus its truth is confirmed by all facts universally) men behave as if they had never had any experience of it, when words or things present themselves to them, as I here represent them' (when the views here brought forward are shown them by instruction or by their own perceptions)." The later Stoics understood the "Logos" as "the account which governs everything," and Hippolytus, a Church Father in the 3rd century AD, identified it as meaning the Christian "Word of God", as in "In the beginning was the Word ("logos") and the Word was God." 
Burnet's view of the relationship between the Heraclitean logos and the Johannine logos: "In any case, the Johannine doctrine of the logos has nothing to do with Herakleitos or with anything at all in Greek philosophy, but comes from the Hebrew Wisdom literature. See Rendel Harris, "The Origin of the Prologue to St. John's Gospel" in "The Expositor", 1916, pp. 147 sqq." Heraclitus's ideas about the "Logos" are expressed in three famous but obscure fragments, the first cited above, and two others. People must "follow the common" and not live having "their own judgement ("phronēsis")". He seems to say the "Logos" is a public fact, perhaps like a proposition or formula, though he would not have considered such things as abstract objects or even immaterial. The last quote can even be taken as a statement against making arguments "ad hominem": "For this reason it is necessary to follow what is common. But although the "Logos" is common, most people live as if they had their own private understanding." "Listening not to me but to the Logos..." Like the Milesians before him (Thales with water, Anaximander with apeiron, and Anaximenes with air), Heraclitus considered fire the "arche", the most fundamental element, which gave rise to the other elements, perhaps because living people are warm. Norman Melchert interpreted Heraclitus as using "fire" metaphorically, in lieu of "Logos", as the origin of all things. Others see it as a metaphor for change, like a dancing and flickering flame, or perhaps all of these. It is also speculated this shows the influence of Persian Zoroastrianism, with its concept of Atar. "This world, which is the same for all, no one of gods or men has made. But it always was and will be: an ever-living fire, with measures of it kindling, and measures going out." "All things are an interchange for fire, and fire for all things, just like goods for gold and gold for goods." "The thunderbolt that steers the course of all things." 
The first quote is the earliest use of "kosmos" in any extant Greek text. On Heraclitus using fire as a new primary substance, Burnet writes: "All this made it necessary for him to seek out a new primary substance. He wanted not merely something from which opposites could be 'separated out', but something which of its own nature would pass into everything else, while everything else would pass in turn into it. This he found in Fire, and it is easy to see why, if we consider the phenomenon of combustion. The quantity of fire in a flame burning steadily appears to remain the same, the flame seems to be what we call a 'thing.' And yet the substance of it is continually changing. It is always passing away in smoke, and its place is always being taken by fresh matter from the fuel that feeds it. This is just what we want. If we regard the world as an 'ever-living fire' (fr. 20), we can understand how it is always becoming all things, while all things are always returning to it." In a seeming response to Anaximander, Heraclitus also believed in a unity of opposites. He characterized all existing entities by pairs of contrary properties. This is most famously expressed in his claim "Mortals are immortals and immortals are mortals, the one living the others' death and dying the others' life", taken to mean men are mortal gods, and gods immortal men. He would also point out that sleep is like death; he was fond of speaking this way. He also said "Man kindles a light for himself in the night-time, when he has died but is alive. The sleeper, whose vision has been put out, lights up from the dead; he that is awake lights up from the sleeping," and "All the things we see when awake are death, even as all we see in slumber are sleep." In this union of opposites, of both generation and destruction, Heraclitus called the oppositional processes "strife", and hypothesized that the apparently stable state, or "justice", is a harmony of it. 
Anaximander described the same as injustice. Aristotle mentions that Heraclitus disliked Homer because Homer wished that strife would leave the world, which for Heraclitus would destroy the world: "there would be no harmony without high and low notes, and no animals without male and female, which are opposites." Heraclitus was the first philosopher to claim that war is a good thing. He also wrote "Every beast is driven to pasture by blows." "We must know that war is common to all and strife is justice, and that all things come into being through strife necessarily." "Gods and men honor those who are slain in battle." "The people must fight for its law as for its walls." In a metaphor, and one of the earliest uses of a force in the history of philosophy, Heraclitus compares the union of opposites to a strung bow or lyre held in shape by an equilibrium of the string tension: "There is a harmony in the bending back ("palintropos") as in the case of the bow and the lyre." He claims this shows something true yet invisible about reality: "a hidden harmony is better than an apparent one." He also noted "the bow's name is life, though its work is death," a play on bow and life being the same word as written, biós; further evidence of a continuous, written work. On the unity of opposites, Burnet says: "The 'strife of opposites' is really an 'attunement' (armonia). From this it follows that wisdom is not a knowledge of many things, but the perception of the underlying unity of the warring opposites. That this really was the fundamental thought of Herakleitos is stated by Philo. He says: 'For that which is made up of both the opposites is one; and, when the one is divided, the opposites are disclosed. Is not this just what the Greeks say their great and much belauded Herakleitos put in the forefront of his philosophy as summing it all up, and boasted of as a new discovery? 
'" On Heraclitus' teachings of the one and many, Burnet writes: "The truth Herakleitos proclaimed was that the world is at once one and many, and that it is just the 'opposite tension' of the opposites that constitutes the unity of the One. It is the same conclusion as that of Pythagoras, though it is put in another way." Burnet also writes about Plato's understanding of Heraclitus: "According to Plato, then, Herakleitos taught that reality was at once many and one. This was not meant as a logical principle. The identity which Herakleitos explains as consisting in difference is just that of the primary substance in all its manifestations. This identity had been realised already by the Milesians, but they had found a difficulty in the difference. Anaximander had treated the strife of opposites as an 'injustice', and what Herakleitos set himself to show was that, on the contrary, it was the highest justice (fr. 62)." Heraclitus also said "The way up and the way down is one and the same." Similarly he said "In writing, the course taken, straight and crooked, is one and the same." This can be interpreted in various ways. One interpretation is that it shows his monism, though perhaps a dialectical one. Heraclitus does believe all is one; the full quote is "Listening not to me but to the Logos it is wise to agree that all things are one." The one is made up of all things, and all things issue from the one. "Hesiod is most men's teacher. Men think he knew very many things, a man who did not know day or night! They are one." "Concerning a circle the beginning and end are common." Another interpretation is that it illustrates the cyclical nature of reality and transformation, a replacement of one element by another: "turnings of fire". This might be another "hidden harmony" and is more consistent with pluralism, not monism. The death of fire is the birth of air, and the death of air is the birth of water. For it is death to souls to become water, and death to water to become earth. 
But water comes from earth; and from water, soul. Cold things become warm, and what is warm cools; what is wet dries, and the parched is moistened. And it is the same thing in us that is quick and dead, awake and asleep, young and old; the former are shifted and become the latter, and the latter in turn are shifted and become the former. This has also been interpreted as advocating relativism. Good and ill are one. Asses prefer straw to gold. The sea is the purest and impurest water: fish can drink it and it is good for them, while to men it is undrinkable and destructive. Heraclitus recognized the fundamental changing of objects with the flow of time (i.e., impermanence) and the philosophical issue of "becoming". He is credited with the phrase "panta rhei", "everything flows." This famous aphorism used to characterize Heraclitus' thought comes from Simplicius, a neoplatonist, and from Plato's "Cratylus". The word "rhei" (as in rheology) is the Greek word for "to stream", and is etymologically related to Rhea according to Plato's "Cratylus". Compare the Latin adages "Omnia mutantur" and "Tempora mutantur" (8 AD) and the Buddhist and Hindu concepts of "anicca". On Heraclitus' teachings on flux, Burnet writes: "Fire burns continuously and without interruption. It is always consuming fuel and always liberating smoke. Everything is either mounting upwards to serve as fuel, or sinking downwards after having nourished the flame. It follows that the whole of reality is like an ever-flowing stream, and that nothing is ever at rest for a moment. The substance of the things we see is in constant change. Even as we look at them, some of the stuff of which they are composed has already passed into something else, while fresh stuff has come into them from another source. This is usually summed up, appropriately enough, in the phrase 'All things are flowing' (panta rei), though this does not seem to be a quotation from Herakleitos. Plato, however, expresses the idea quite clearly. 
'Nothing ever is, everything is becoming'; 'All things are in motion like streams'; 'All things are passing, and nothing abides'; 'Herakleitos says somewhere that all things pass and naught abides; and, comparing things to the current of a river, he says you cannot step twice into the same stream' (cf. fr. 41). These are the terms in which he describes the system." His philosophy has been summed up with another famous adage, "No man ever steps in the same river twice." It can be contrasted with Parmenides's statement that "whatever is, is, and what is not cannot be." Heraclitus uses the river image more than once: "Ever-newer waters flow on those who step into the same rivers." "We both step and do not step in the same rivers. We are and are not." The idea is referenced twice in Plato's "Cratylus". Instead of "flow" Plato uses "chōrei", "to change place": "All entities move and nothing remains still"; "Everything changes and nothing remains still ... and ... you cannot step twice into the same stream". Simplicius references it thus: "...the natural philosophers who follow Heraclitus, keeping in view the perpetual flux of generation and the fact that all corporeal things are coming to be and departing and never really are (as Timaeus said too) claim that all things are always in flux and that you could not step twice in the same river." According to Aristotle, Cratylus went a step beyond his master's doctrine and proclaimed that one cannot step into the same river once. Compare the Japanese tale "Hōjōki" (1200 AD), which contains the same image of the changing river. However, one German classicist and philosopher interprets this fragment as an indication by Heraclitus that the world is a steady constant: "You will not find anything in which the river remains constant. [...] Just the fact that there is a particular river bed, that there is a source and an estuary etc. is something that stays identical. And this is [...] the concept of a river." 
Heraclitus does seem to say change is what unites things, as with his unity of opposites, or the quotes "Even the "kykeon" falls apart if it is not stirred" and "Changing it rests." Flux is also expressed by the fact that, rather than thinking the same Sun will rise tomorrow as rose today, Heraclitus said the Sun is new every day. By "God" Heraclitus does not mean a single God as "primum movens" of all things, God as Creator, for the universe is eternal, "it always was and will be"; but the divine as opposed to human, the immortal as opposed to the mortal, the cyclical as opposed to the transient. It is arguably more accurate to speak of "the Divine" and not of "God". Heraclitus distinguishes between human laws and divine law. He said both God and fire are "want and surfeit". In addition to seeing fire as the most fundamental substance, he presents fire as the divine cosmos. Fire is both a substance and a motivator of change; it is active in altering other things. Heraclitus describes it as "the judging and convicting of all things." Judgment here is literally "to separate" (κρίνειν, "krinein"). In antiquity this was interpreted to mean that eventually all things will be consumed by fire, a doctrine called ecpyrosis. Hippolytus, from whom we get the quotation, sees it as a reference to divine judgment and Hell. However, Heraclitus removes the human sense of justice from his concept of God: "To God all things are fair and good and just, but people hold some things wrong and some right." God's custom has wisdom but human custom does not. Wisdom is "to know the thought by which all things are steered through all things", which must not imply that people are or can be wise; only Zeus is wise. To some degree, then, Heraclitus seems to be in the mystic's position of urging people to follow God's plan without much of an idea what that may be. In fact there is a note of despair: "The fairest universe is but a heap of rubbish piled up (i.e. 
"poured out") at random ( "aimlessly")." Bertrand Russell presents Heraclitus as a mystic in his "Mysticism and Logic". There is the frivolity of a child in both man and God. "Eternity is a child moving counters in a game; the kingly power is a child's." Nietzsche explains this enigmatic quote as "And as the child and the artist plays, so too plays the ever living fire, it builds up and tears down, in innocence – such is the game eternity plays with itself." This quote may also be why there is the story of Heraclitus giving up his kingship to his brother. Heraclitus also stated "human opinions are children's toys." However, "Man is called a baby by God, even as a child [is called a baby] by a man." Heraclitus also states "We should not act and speak like 'children of our parents", interpreted by Marcus Aurelius to mean not simply accept what others believe. He regarded the soul as being a mixture of fire and water, with fire being the noble part of the soul, and water the ignoble part. A soul should therefore aim toward becoming more full of fire and less full of water: a "dry" soul was best. According to Heraclitus, worldly pleasures (drinking most apparently) made the soul "moist", and he considered mastering one's worldly desires to be a noble pursuit which purified the soul's fire. The soul also has a self-increasing "Logos". He believed we breathe in the "logos", as Anaximenes would say of air and the soul. He also stated "It is hard to fight with one's heart's desire. Whatever it wishes to get, it purchases at the cost of soul." This influential quote by Heraclitus "Ethos anthropoi daimon" has led to numerous interpretations. It seems to state one's luck is related to one's character. Whether in this context "daimon" can indeed be translated to mean "fate" is disputed; however, it lends much sense to Heraclitus' observations and conclusions about human nature in general. 
While the translation with "fate" is generally accepted, as in Kahn's "a man's character is his divinity", in some cases it may also stand for the soul of the departed. Some have interpreted Heraclitus as a kind of proto-empiricist, and some fragments support this reading, such as "the things that can be seen, heard and learned are what I prize the most" and "The sun is the size that it appears", "the width of a human foot". W. K. C. Guthrie disputes this interpretation, however, citing "Eyes and ears are bad witnesses to men who have barbarian souls." Heraclitus also said "sight tells falsehoods" and that "nature loves to hide". He warned against hearsay: "Eyes are better witnesses than the ears." The sense of smell also seemed to play a role in his philosophy: "If all things were turned to smoke, the nostrils would distinguish them" and "Souls smell in Hades." Heraclitus's most famous follower was Cratylus, who was presented by Plato as a linguistic naturalist, one who believes names must apply naturally to their objects. According to Aristotle, Cratylus took the view that nothing can be said about the ever-changing world, and "ended by thinking that one need not say anything, and only moved his finger." He seemed to hold the view that continuous change warrants skepticism, because we cannot define a thing that does not have a permanent nature. Twentieth-century linguistic philosophy returned to the considerations brought up by Cratylus in Plato's dialogue, in the doctrine called Cratylism. Parmenides's proem argues that change is impossible, and may very well have been referring to Heraclitus with such passages as "Undiscerning crowds, who hold that it is and is not the same, and all things travel in opposite directions!". The pluralists were the first to try to reconcile Heraclitus and Parmenides. Anaxagoras may have been influenced by Heraclitus in his refusal to separate the opposites. Empedocles' forces of Love and Hate were probably influenced by Heraclitus' Harmony and Strife. 
Empedocles is also credited with introducing the concept of the four classical elements. Plato made the most famous attempt to reconcile Heraclitus and Parmenides, and through him both influenced virtually all subsequent Western philosophy. Plato knew of Heraclitus through Cratylus, and thus wrote his dialogue of the same name. Plato thought the views of Heraclitus entailed that no entity may ever occupy a single state at a single time, and argued against him as follows: "How can that be a real thing which is never in the same state? ... for at the moment that the observer approaches, then they become other ... so that you cannot get any further in knowing their nature or state ... but if that which knows and that which is known exist ever ... then I do not think they can resemble a process or flux ..." However, Plato does seem influenced by Heraclitus in his concept of the world as always changing, and thus our inability to have knowledge of particulars, and by Parmenides in needing another world, the Platonic realm, where things remain unchanging and universals exist as the objects of knowledge, the Forms. He gives this in the "Symposium", sounding very much like Heraclitus: "Even during the period for which any living being is said to live and retain his identity – as a man, for example, is called the same man from boyhood to old age – he does not in fact retain the same attributes, although he is called the same person: he is always becoming a new being and undergoing a process of loss and reparation, which affects his hair, his flesh, his bones, his blood and his whole body. And not only his body, but his soul as well. No man's character, habits, opinions, desires, pleasures, pains and fears remain always the same: new ones come into existence and old ones disappear." Pyrrhonism is a school of philosophical skepticism which flourished between the 3rd century BCE and about the 3rd century CE. 
One major figure in the school, Aenesidemus, claimed in a now-lost work that Pyrrhonism was a way to Heraclitean philosophy: the Pyrrhonists say that opposites appear to be the case about the same thing, and the Heracliteans move from this appearance to opposites actually being the case about the same thing. A later Pyrrhonist philosopher, Sextus Empiricus, disagreed, arguing that opposites' appearing to be the case about the same thing is not a dogma of the Pyrrhonists but a matter occurring not only to the Pyrrhonists but also to the other philosophers, and, indeed, to all mankind. Stoicism was a philosophical school which flourished between the 3rd century BC and about the 3rd century AD. It began among the Greeks and became a major philosophy of the Roman Empire before declining with the rise of Christianity in the 3rd century. While most scholars believe Heraclitus had little effect on the Stoics, scholar A. A. Long argues otherwise. According to him, throughout their long tenure the Stoics believed that the major tenets of their philosophy derived from the thought of Heraclitus, and "the importance of Heraclitus to later Stoics is evident most plainly in Marcus Aurelius." Explicit connections of the earliest Stoics to Heraclitus showing how they arrived at their interpretation are missing, but they can be inferred from the Stoic fragments, which Long concludes are "modifications of Heraclitus." The Stoic modification of Heraclitus' idea of the Logos was also influential on Jewish philosophers such as Philo of Alexandria, who connected it to "Wisdom personified" as God's creative principle. Philo uses the term Logos throughout his treatises on Hebrew Scripture in a manner clearly influenced by the Stoics. With regard to Stoic modification of Heraclitus, Burnet writes: "Another difficulty we have to face is that most of the commentators on Herakleitos mentioned in Diogenes were Stoics. 
Now, the Stoics held the Ephesian in peculiar veneration, and sought to interpret him as far as possible in accordance with their own system. Further, they were fond of "accommodating" the views of earlier thinkers to their own, and this has had serious consequences. In particular, the Stoic theories of the logos and the ekpyrosis are constantly ascribed to Herakleitos, and the very fragments are adulterated with scraps of Stoic terminology." The Stoics were interested in Heraclitus' treatment of fire. The earliest surviving Stoic work, the "Hymn to Zeus" of Cleanthes, a work transitional from pagan polytheism to the modern religions and philosophies, though not explicitly referencing Heraclitus, adopts what appears to be a modified Heraclitean logos. Zeus rules the universe with law ("nomos"), wielding on its behalf the "forked servant", the "fire" of the "ever-living lightning." So far nothing has been said that differs from the Zeus of Homer. But then, says Cleanthes, Zeus uses the fire to "straighten out the common logos" that travels about ("phoitan", "to frequent"), mixing with the greater and lesser lights (heavenly bodies). This is Heraclitus' logos, but now it is confused with the "common "nomos"", which Zeus uses to "make the wrong ("perissa", left or odd) right ("artia", right or even)" and "order ("kosmein") the disordered ("akosma")." The Church Fathers were the leaders of the early Christian Church during its first five centuries of existence, roughly contemporaneous with Stoicism under the Roman Empire. The works of dozens of writers in hundreds of pages have survived. All of them had something to say about the Christian form of the "Logos". The Catholic Church found it necessary to distinguish between the Christian "logos" and that of Heraclitus, in order to distance itself from pagans and convert them to Christianity. 
Church use of the methods and conclusions of ancient philosophy as such was as yet far in the future, even though many were converted philosophers. Hippolytus of Rome therefore identifies Heraclitus along with the other Pre-Socratics (and Academics) as sources of heresy. In "Refutation of All Heresies", one of the best sources on quotes from Heraclitus, Hippolytus says: "What the blasphemous folly is of Noetus, and that he devoted himself to the tenets of Heraclitus the Obscure, not to those of Christ." Hippolytus then goes on to present an inscrutable quote: "God ("theos") is day and night, winter and summer, ... but he takes various shapes, just as fire, when it is mingled with spices, is named according to the savor of each." The fragment seems to support pantheism if taken literally. German physicist and philosopher Max Bernard Weinstein classed his view as a predecessor of pandeism. Hippolytus condemns the obscurity of it. He cannot accuse Heraclitus of being a heretic so he says instead: "Did not (Heraclitus) the Obscure anticipate Noetus in framing a system ...?" The apparent pantheist deity of Heraclitus (if that is what the fragment means) must be equal to the union of opposites and therefore must be corporeal and incorporeal, divine and not-divine, dead and alive, etc., and the Trinity can only be reached by some sort of illusory shape-shifting. The Christian apologist Justin Martyr, however, took a much more positive view of him. In his First Apology, he said both Socrates and Heraclitus were Christians before Christ: "those who lived reasonably are Christians, even though they have been thought atheists; as, among the Greeks, Socrates and Heraclitus, and men like them." The weeping philosopher was still considered an indispensable motif for philosophy through the modern period. Michel de Montaigne proposed two archetypical views of human affairs based on them, selecting Democritus' for himself. G. W. F. Hegel gave Heraclitus high praise. 
According to him, "the origin of philosophy is to be dated from Heraclitus." He attributes dialectics to Heraclitus rather than, as Aristotle did, to Zeno of Elea: "There is no proposition of Heraclitus which I have not adopted in my Logic." Friedrich Engels, who associated with the Young Hegelians, also gave Heraclitus the credit for inventing dialectics, relevant to his own dialectical materialism. Ferdinand Lassalle was another socialist influenced by Heraclitus. Friedrich Nietzsche was profoundly influenced by Heraclitus, as can be seen in his "Philosophy in the Tragic Age of the Greeks". Nietzsche saw him as a confident opponent of Anaximander's pessimism. Oswald Spengler was influenced by Nietzsche and also wrote his dissertation on Heraclitus. Martin Heidegger was also influenced by Heraclitus, as seen in his "Introduction to Metaphysics", and took a very different interpretation from Nietzsche and several others. According to Heidegger, "Heraclitus, to whom is ascribed the doctrine of becoming as diametrically opposed to Parmenides' doctrine of being, says the same as Parmenides." Karl Popper wrote much on Heraclitus, and both Popper and Heraclitus believed in invisible processes at work. The weeping philosopher may also have been mentioned in William Shakespeare's "The Merchant of Venice". J. M. E. McTaggart's illustration of the A-series and B-series of time has been seen as an analogous application to time of Heraclitus' and Parmenides' views of all of reality, respectively. A. N. Whitehead's process philosophy bears a resemblance to Heraclitus. Carl Jung wrote that Heraclitus "discovered the most marvellous of all psychological laws: the regulative function of opposites ... by which he meant that sooner or later everything runs into its opposite." Jung adopted this law, enantiodromia, into his analytical psychology. 
He related it with the Chinese classics, stating: "If the Western world had followed his lead, we would all be Chinese in our viewpoint instead of Christian. We can think of Heraclitus as making the switch between the East and the West." Furthermore, Jung suggested that Heraclitus was named "the dark" not because his style was too difficult, but precisely "because he spoke too plainly" about the paradoxical nature of existence "and called life itself an 'ever-living fire'." Heraclitus has been depicted several times in Western art, especially as part of the weeping and laughing philosopher motif, and with globes. Donato Bramante painted a fresco, "Democritus and Heraclitus", in Casa Panigarola in Milan in 1477. Heraclitus's most famous depiction in art is in Raphael's "School of Athens", painted around 1510. Raphael modelled Heraclitus on Michelangelo. He and Diogenes of Sinope are the only figures to sit alone in the painting. Salvator Rosa also painted Democritus and Heraclitus, as did Luca Giordano, together and separately, around the 1650s. Giuseppe Torretti sculpted busts of the same in 1705. Giuseppe Antonio Petrini painted "Weeping Heraclitus" circa 1750. Franz Tymmermann in 1538 painted a weeping Heraclitus. Johann Christoph Ludwig Lücke sculpted busts of the same in the 1750s. Franz Xaver Messerschmidt also sculpted them. In 1619, the Dutch painter Cornelis van Haarlem painted a laughing Democritus and weeping Heraclitus. Hendrick ter Brugghen's paintings of Heraclitus and Democritus separately in 1628 hang in the Rijksmuseum, and he also painted them together. Around 1630, the Dutch painter Johannes Moreelse painted Heraclitus wringing his hands over a globe, sad at the state of the world, and another work with Democritus laughing at one. Dirck van Baburen also painted the pair. Egbert van Heemskerck did as well. Peter Paul Rubens painted the pair twice, in 1603. Nicolaes Pickenoy also painted the pair. Etienne Parrocel painted him, as did Charles-Antoine Coypel. 
Jusepe de Ribera painted the pair in 1630.
https://en.wikipedia.org/wiki?curid=13792
Harrison Schmitt Harrison Hagan "Jack" Schmitt (born July 3, 1935) is an American geologist, retired NASA astronaut, university professor, former U.S. senator from New Mexico, and the most recent person still living to have walked on the Moon. In December 1972, as one of the crew on board Apollo 17, Schmitt became the first member of NASA's first scientist-astronaut group to fly in space. As Apollo 17 was the last of the Apollo missions, he also became the twelfth and second-youngest person to set foot on the Moon and the second-to-last person to step off of the Moon (he boarded the Lunar Module shortly before commander Eugene Cernan). Schmitt also remains the only professional scientist to have flown beyond low Earth orbit and to have visited the Moon. He was influential within the community of geologists supporting the Apollo program and, before starting his own preparations for an Apollo mission, had been one of the scientists training those Apollo astronauts chosen to visit the lunar surface. Schmitt resigned from NASA in August 1975 to run for election to the United States Senate as a member from New Mexico. As the Republican candidate in the 1976 election, he defeated Democratic incumbent Joseph Montoya. In the 1982 election, Schmitt was defeated by Democrat Jeff Bingaman. Born July 3, 1935, in Santa Rita, New Mexico, Schmitt grew up in nearby Silver City, and is a graduate of the Western High School (class of 1953). He received a B.S. degree in geology from the California Institute of Technology in 1957 and then spent a year studying geology at the University of Oslo in Norway. He received a Ph.D. in geology from Harvard University in 1964, based on his geological field studies in Norway. Before joining NASA as a member of the first group of scientist-astronauts in June 1965, he worked at the U.S. Geological Survey's Astrogeology Center at Flagstaff, Arizona, developing geological field techniques that would be used by the Apollo crews. 
Following his selection, Schmitt spent his first year at Air Force undergraduate pilot training (UPT), learning to become a jet pilot. Upon his return to the astronaut corps in Houston, he played a key role in training Apollo crews to be geologic observers when they were in lunar orbit and competent geologic field workers when they were on the lunar surface. After each of the landing missions, he participated in the examination and evaluation of the returned lunar samples and helped the crews with the scientific aspects of their mission reports. Schmitt spent considerable time becoming proficient in the Command/Service Module (CSM) and Lunar Module (LM) systems. In March 1970 he became the first of the scientist-astronauts to be assigned to space flight, joining Richard F. Gordon Jr. (Commander) and Vance Brand (Command Module Pilot) on the Apollo 15 backup crew. The flight rotation put these three in line to fly as prime crew on the third following mission, Apollo 18. When Apollo 18 and Apollo 19 were cancelled in September 1970, the community of lunar geologists supporting Apollo felt so strongly about the need to land a professional geologist on the Moon that they pressured NASA to reassign Schmitt to a remaining flight. As a result, Schmitt was assigned in August 1971 to fly on the last mission, Apollo 17, replacing Joe Engle as Lunar Module Pilot. Schmitt landed on the Moon with commander Gene Cernan in December 1972. Schmitt claims to have taken the photograph of the Earth known as "The Blue Marble", possibly one of the most widely distributed photographic images in existence. NASA officially credits the image to the entire Apollo 17 crew. While on the Moon's surface, Schmitt — the only geologist in the astronaut corps — collected the rock sample designated Troctolite 76535, which has been called "without doubt the most interesting sample returned from the Moon". Among other distinctions, it is the central piece of evidence suggesting that the Moon once possessed an active magnetic field. 
As he returned to the Lunar Module before Cernan, Schmitt is the next-to-last person to have walked on the Moon's surface. Since the death of Cernan in 2017, Schmitt is the most recent person to have walked on the Moon who is still alive. After the completion of Apollo 17, Schmitt played an active role in documenting the Apollo geologic results and also took on the task of organizing NASA's Energy Program Office. On August 30, 1975, Schmitt resigned from NASA to seek election as a Republican to the United States Senate representing New Mexico in the 1976 election. Schmitt campaigned for fourteen months, and his campaign focused on the future. In the Republican primary, held on June 1, 1976, Schmitt defeated Eugene Peirce. In the election, Schmitt opposed two-term Democratic incumbent Joseph Montoya. He defeated Montoya 57% to 42%. He served one term and, notably, was the chairman of the Science, Technology, and Space Subcommittee of the United States Senate Committee on Commerce. He sought a second term in 1982, facing state Attorney General Jeff Bingaman. Bingaman attacked Schmitt for not paying enough attention to local matters; his campaign slogan asked, "What on Earth has he done for you lately?" This, combined with the deep recession, proved too much for Schmitt to overcome; he was defeated, 54% to 46%. Following his Senate term, Schmitt has been a consultant in business, geology, space, and public policy. Schmitt is an adjunct professor of engineering physics at the University of Wisconsin–Madison, and has long been a proponent of lunar resource utilization. In 1997 he proposed the Interlune InterMars Initiative, listing among its goals the advancement of private-sector acquisition and use of lunar resources, particularly lunar helium-3 as a fuel for notional nuclear fusion reactors. 
Schmitt was chair of the NASA Advisory Council, whose mandate is to provide technical advice to the NASA Administrator, from November 2005 until his abrupt resignation on October 16, 2008. In November 2008, he quit the Planetary Society over policy advocacy differences, citing the organization's statements on "focusing on Mars as the driving goal of human spaceflight" (Schmitt said that going back to the Moon would speed progress toward a manned Mars mission), on "accelerating research into global climate change through more comprehensive Earth observations" (Schmitt voiced objections to the notion of a present "scientific consensus" on climate change as any policy guide), and on international cooperation (which he felt would retard rather than accelerate progress), among other points of divergence. In January 2011, he was appointed as secretary of the New Mexico Energy, Minerals and Natural Resources Department in the cabinet of Governor Susana Martinez, but was forced to give up the appointment the following month after refusing to submit to a required background investigation. "El Paso Times" called him the "most celebrated" candidate for New Mexico energy secretary. Schmitt wrote a book entitled "Return to the Moon: Exploration, Enterprise, and Energy in the Human Settlement of Space" in 2006. He lives in Silver City, New Mexico, and spends some of his summer at his northern Minnesota lake cabin. Schmitt is also involved in several civic projects, including the improvement of the Senator Harrison H. Schmitt Big Sky Hang Glider Park in Albuquerque, New Mexico. Schmitt's view on climate change emphasizes natural over human factors as driving climate. Schmitt has expressed the view that the risks posed by climate change are overrated and suggests instead that climate change is a tool for people who are trying to increase the size of government. 
He resigned his membership in the Planetary Society primarily because of its Mars-first policy, but also because of its stance on global warming, writing in his resignation letter that the "'global warming scare' is being used as a political tool to increase government control over American lives, incomes and decision making. It has no place in the Society's activities." Schmitt spoke at the March 2009 International Conference on Climate Change sponsored by the Heartland Institute. He appeared in December that year on the Fox Business Network, saying "[t]he CO2 scare is a red herring". In a 2009 interview with conspiracy theorist and radio host Alex Jones, Schmitt asserted a link between Soviet Communism and the American environmental movement: "I think the whole trend really began with the fall of the Soviet Union. Because the great champion of the opponents of liberty, namely communism, had to find some other place to go and they basically went into the environmental movement." At the Heartland Institute's sixth International Conference on Climate Change Schmitt said that climate change was a stalking horse for National Socialism. Schmitt co-authored a May 8, 2013 "Wall Street Journal" opinion column with William Happer, contending that increasing levels of carbon dioxide in the atmosphere are not significantly correlated with global warming, attributing the "single-minded demonization of this natural and essential atmospheric gas" to advocates of government control of energy production. Noting a positive relationship between crop resistance to drought and increasing carbon dioxide levels, the authors argued, "Contrary to what some would have us believe, increased carbon dioxide in the atmosphere will benefit the increasing population on the planet by increasing agricultural productivity." Schmitt was one of five inductees into the International Space Hall of Fame in 1977. He was one of 24 Apollo astronauts who were inducted into the U.S. 
Astronaut Hall of Fame in 1997. Schmitt is one of the astronauts featured in the 2007 documentary "In the Shadow of the Moon". He also contributed to the book "NASA's Scientist-Astronauts" by David Shayler and Colin Burgess.
https://en.wikipedia.org/wiki?curid=13793
Hilaire Rouelle Hilaire Marin Rouelle (15 February 1718 – 7 April 1779) was an 18th-century French chemist. Though commonly cited as having discovered urea in 1773, he was not the first to do so; the Dutch scientist Herman Boerhaave had discovered the chemical as early as 1727. Rouelle is known as "le cadet" (the younger) to distinguish him from his older brother, Guillaume-François Rouelle, who was also a chemist.
https://en.wikipedia.org/wiki?curid=13795
Hammer A hammer is a tool consisting of a weighted "head" fixed to a long handle that is swung to deliver an impact to a small area of an object. This can be, for example, to drive nails into wood, to shape metal (as with a forge), or to crush rock. Hammers are used for a wide range of driving, shaping, and breaking applications. The modern hammer head is typically made of steel which has been heat treated for hardness, and the handle (also known as a haft or helve) is typically made of wood or plastic. The claw hammer has a "claw" to pull nails out of wood, and is commonly found in an inventory of household tools in North America. Other types of hammer vary in shape, size, and structure, depending on their purposes. Hammers used in many trades include sledgehammers, mallets, and ball-peen hammers. Although most hammers are hand tools, powered hammers, such as steam hammers and trip hammers, are used to deliver forces beyond the capacity of the human arm. There are over 40 different types of hammers serving many different uses. The use of simple hammers dates to around 3.3 million years ago according to the 2012 find made by Sonia Harmand and Jason Lewis of Stony Brook University, who while excavating a site near Kenya's Lake Turkana discovered a very large deposit of variously shaped stones, including those used to strike wood, bone, or other stones to break them apart and shape them. The first hammers were made without handles. Stones attached to sticks with strips of leather or animal sinew were being used as hammers with handles by about 30,000 BCE, during the middle of the Paleolithic Stone Age. The addition of a handle gave the user better control and caused fewer accidents. The hammer became a fundamental tool, used for building, obtaining food, and protection. The archaeological record shows that the hammer may be the oldest tool for which definite evidence of early use exists. 
A traditional hand-held hammer consists of a separate head and a handle, which can be fastened together by means of a special wedge made for the purpose, or by glue, or both. This two-piece design is often used to combine a dense metallic striking head with a non-metallic mechanical-shock-absorbing handle (to reduce user fatigue from repeated strikes). If wood is used for the handle, it is often hickory or ash, which are tough and long-lasting materials that can dissipate shock waves from the hammer head. Rigid fiberglass resin may be used for the handle; this material does not absorb water or decay but does not dissipate shock as well as wood. A loose hammer head is hazardous because it can literally "fly off the handle" when in use, becoming a dangerous uncontrolled missile. Wooden handles can often be replaced when worn or damaged; specialized kits are available covering a range of handle sizes and designs, plus special wedges for attachment. Some hammers are one-piece designs made mostly of a single material. A one-piece metallic hammer may optionally have its handle coated or wrapped in a resilient material such as rubber, for improved grip and to reduce user fatigue. The hammer head may be surfaced with a variety of materials including brass, bronze, wood, plastic, rubber, or leather. Some hammers have interchangeable striking surfaces, which can be selected as needed or replaced when worn out. A large hammer-like tool is a "maul" (sometimes called a "beetle"), a wood- or rubber-headed hammer is a "mallet", and a hammer-like tool with a cutting blade is usually called a "hatchet". The essential part of a hammer is the head, a compact solid mass that is able to deliver a blow to the intended target without itself deforming. The impacting surface of the tool is usually flat or slightly rounded; the opposite end of the impacting mass may have a ball shape, as in the ball-peen hammer. Some upholstery hammers have a magnetized face, to pick up tacks. 
In the hatchet, the flat hammer head may be secondary to the cutting edge of the tool. The impact between steel hammer heads and the objects being hit can create sparks, which may ignite flammable or explosive gases. These are a hazard in some industries such as underground coal mining (due to the presence of methane gas), or in other hazardous environments such as petroleum refineries and chemical plants. In these environments, a variety of non-sparking metal tools are used, primarily made of aluminium or beryllium copper. In recent years, the handles have been made of durable plastic or rubber, though wood is still widely used because of its shock-absorbing qualities and repairability. Mechanically-powered hammers often look quite different from the hand tools, but nevertheless, most of them work on the same principle. They include: In professional framing carpentry, the manual hammer has almost been completely replaced by the nail gun. In professional upholstery, its chief competitor is the staple gun. A hammer is a simple force amplifier that works by converting mechanical work into kinetic energy and back. In the swing that precedes each blow, the hammer head stores a certain amount of kinetic energy—equal to the length "D" of the swing times the force "f" produced by the muscles of the arm and by gravity. When the hammer strikes, the head is stopped by an opposite force coming from the target, equal and opposite to the force applied by the head to the target. If the target is a hard and heavy object, or if it is resting on some sort of anvil, the head can travel only a very short distance "d" before stopping. Since the stopping force "F" times that distance must be equal to the head's kinetic energy, it follows that "F" is much greater than the original driving force "f"—roughly, by a factor "D"/"d". In this way, great strength is not needed to produce a force strong enough to bend steel, or crack the hardest stone. 
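The force-amplification argument above can be checked with a little arithmetic: the work put into the swing, f × D, must equal the work absorbed over the stopping distance, F × d, so F ≈ f × D/d. A minimal Python sketch with purely illustrative values (the forces and distances below are assumptions, not measurements):

```python
def impact_force(driving_force_n, swing_length_m, stopping_distance_m):
    """Energy balance f * D = F * d, solved for the stopping force F (newtons)."""
    return driving_force_n * swing_length_m / stopping_distance_m

# Illustrative numbers: 50 N of driving force over a 0.5 m swing,
# stopped in 5 mm by the target.
force = impact_force(50.0, 0.5, 0.005)
print(force)  # 5000.0 N: an amplification factor of D/d = 100
```

This is why great strength is not needed: the short stopping distance, not the arm, supplies the amplification.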
The amount of energy delivered to the target by the hammer-blow is equivalent to one half the mass of the head times the square of the head's speed at the time of impact (E = mv²/2). While the energy delivered to the target increases linearly with mass, it increases quadratically with the speed (see the effect of the handle, below). High-tech titanium heads are lighter and allow for longer handles, thus increasing velocity and delivering the same energy with less arm fatigue than that of a heavier steel-head hammer. A titanium head has about 3% recoil energy and can result in greater efficiency and less fatigue when compared to a steel head with up to 30% recoil. Dead blow hammers use special rubber or steel shot to absorb recoil energy, rather than bouncing the hammer head after impact. The handle of the hammer helps in several ways. It keeps the user's hands away from the point of impact. It provides a broad area that is better-suited for gripping by the hand. Most importantly, it allows the user to maximize the speed of the head on each blow. The primary constraint on additional handle length is the lack of space to swing the hammer. This is why sledgehammers, largely used in open spaces, can have handles that are much longer than a standard carpenter's hammer. The second most important constraint is more subtle. Even without considering the effects of fatigue, the longer the handle, the harder it is to guide the head of the hammer to its target at full speed. Most designs are a compromise between practicality and energy efficiency. With too long a handle, the hammer is inefficient because it delivers force to the wrong place, off-target. With too short a handle, the hammer is inefficient because it doesn't deliver enough force, requiring more blows to complete a given task. Modifications have also been made with respect to the effect of the hammer on the user. 
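The linear-in-mass, quadratic-in-speed scaling of blow energy can be illustrated with a short Python sketch (the masses and speeds are illustrative assumptions): doubling the head's mass doubles the energy, while doubling its speed quadruples it, which is why a lighter head on a longer, faster-swinging handle can match a heavier hammer.

```python
def blow_energy(head_mass_kg, head_speed_m_s):
    """Kinetic energy of the hammer head at impact: E = m * v**2 / 2 (joules)."""
    return 0.5 * head_mass_kg * head_speed_m_s ** 2

base = blow_energy(0.5, 10.0)      # 25.0 J
heavier = blow_energy(1.0, 10.0)   # 50.0 J: twice the mass, twice the energy
faster = blow_energy(0.5, 20.0)    # 100.0 J: twice the speed, four times the energy
print(base, heavier, faster)
```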
Handles made of shock-absorbing materials or set at varying angles attempt to make it easier for the user to continue to wield this age-old device, even as nail guns and other powered drivers encroach on its traditional field of use. As hammers must be used in many circumstances, where the position of the person using them cannot be taken for granted, trade-offs are made for the sake of practicality. In areas where one has plenty of room, a long handle with a heavy head (like a sledgehammer) can deliver the maximum amount of energy to the target. It is not practical to use such a large hammer for all tasks, however, and thus the overall design has been modified repeatedly to achieve the optimum utility in a wide variety of situations. Gravity exerts a force on the hammer head. If hammering downwards, gravity increases the acceleration during the hammer stroke and increases the energy delivered with each blow. If hammering upwards, gravity reduces the acceleration during the hammer stroke and therefore reduces the energy delivered with each blow. Some hammering methods, such as traditional mechanical pile drivers, rely entirely on gravity for acceleration on the down stroke. A hammer may cause significant injury if it strikes the body. Both manual and powered hammers can cause peripheral neuropathy or a variety of other ailments when used improperly. Awkward handles can cause repetitive stress injury (RSI) to hand and arm joints, and uncontrolled shock waves from repeated impacts can injure nerves and the skeleton. Additionally, striking metal objects with a hammer may produce small metallic projectiles which can become lodged in the eye; wearing safety glasses is therefore recommended. A war hammer is a late medieval weapon intended for close combat. The hammer, as one of the most widely used tools, features prominently in symbols such as flags and heraldry.
In the Middle Ages, it was often used in blacksmith guild logos, as well as in many family symbols. The hammer and pick are used as a symbol of mining. In mythology, the gods Thor, Hercules and Sucellus all had hammers that appear in their lore and carried different meanings. In Norse mythology, Thor, the god of thunder and lightning, wields a hammer named Mjölnir. Many decorative hammer artifacts have been found, leading modern practitioners of this religion to often wear reproductions as a sign of their faith. In American folklore, the hammer of John Henry represents the strength and endurance of a man. A well-known symbol containing a hammer is the hammer and sickle, which was the symbol of the former Soviet Union and is strongly linked to communism and early socialism. The hammer in this symbol represents the industrial working class (and the sickle the agricultural working class). The hammer appears in some coats of arms of former socialist countries, such as East Germany. Similarly, the hammer and sword symbolizes Strasserism, a strand of National Socialism seeking to appeal to the working class. In "Pink Floyd – The Wall", two crossed hammers are used as a symbol for the fascist takeover of the concert during "In the Flesh". This also carries the meaning of the hammer beating down any "nails" that stick out. The gavel, a small wooden mallet, is used to symbolize a mandate to preside over a meeting or judicial proceeding, and a graphic image of one is used as a symbol of legislative or judicial decision-making authority. Judah Maccabee was nicknamed "The Hammer", possibly in recognition of his ferocity in battle; the name "Maccabee" may derive from the Aramaic "maqqaba" ('hammer'). The hammer in the song "If I Had a Hammer" represents a relentless message of justice broadcast across the land. The song became a symbol of the civil rights movement.
https://en.wikipedia.org/wiki?curid=13802
Hiragana is a Japanese syllabary, one component of the Japanese writing system, along with "katakana", "kanji" and, in some cases, "rōmaji" (Latin script). It is a phonetic lettering system. The word "hiragana" literally means "ordinary" or "simple" kana ("simple" originally as contrasted with kanji). Hiragana and katakana are both kana systems. With one or two minor exceptions, each sound in the Japanese language (strictly, each mora) is represented by one character (or one digraph) in each system. This may be a vowel such as "a" (hiragana あ); a consonant followed by a vowel such as "ka" (か); or "n" (ん), a nasal sonorant which, depending on the context, sounds either like English "m", "n" or "ng" when syllable-final, or like the nasal vowels of French. Because the characters of the kana do not represent single consonants (except in the case of ん "n"), the kana are referred to as syllabic symbols and not alphabetic letters. Hiragana is used to write "okurigana" (kana suffixes following a kanji root, for example to inflect verbs and adjectives), various grammatical and function words including particles, as well as miscellaneous other native words for which there are no kanji or whose kanji form is obscure or too formal for the writing purpose. Words that do have common kanji renditions may also sometimes be written instead in hiragana, according to an individual author's preference, for example to impart an informal feel. Hiragana is also used to write "furigana", a reading aid that shows the pronunciation of kanji characters. There are two main systems of ordering hiragana: the old-fashioned iroha ordering and the more prevalent gojūon ordering. The modern hiragana syllabary consists of 46 base characters. These are conceived as a 5×10 grid ("gojūon", "Fifty Sounds"), as illustrated in the adjacent table, read あ ("a"), い ("i"), う ("u"), え ("e"), お ("o") and so forth, with the singular consonant ん ("n") appended to the end.
Of the 50 theoretically possible combinations, "yi" and "wu" do not exist in the language, and "ye", "wi" and "we" are obsolete (or virtually obsolete) in modern Japanese. "wo" (を) is usually pronounced as a vowel ("o") in modern Japanese and is preserved in only one use: as a particle. Romanization of the kana does not always strictly follow the consonant-vowel scheme laid out in the table. For example, ち, nominally "ti", is very often romanised as "chi" in an attempt to better represent the actual sound in Japanese. These basic characters can be modified in various ways. By adding a "dakuten" marker ( ゛), a voiceless consonant is turned into a voiced consonant: "k"→"g", "ts/s"→"z", "t"→"d", "h"→"b" and "ch"/"sh"→"j". For example, か ("ka") becomes が ("ga"). Hiragana beginning with an "h" can also add a "handakuten" marker ( ゜), changing the "h" to a "p". For example, は ("ha") becomes ぱ ("pa"). A small version of the hiragana for "ya", "yu", or "yo" (ゃ, ゅ or ょ respectively) may be added to hiragana ending in "i". This changes the "i" vowel sound to a glide (palatalization) to "a", "u" or "o". For example, き ("ki") plus ゃ (small "ya") becomes きゃ ("kya"). Addition of the small "y" kana is called "yōon". A small "tsu" っ, called a "sokuon", indicates that the following consonant is geminated (doubled). In Japanese this is an important distinction in pronunciation; for example, compare さか ("saka", "hill") with さっか ("sakka", "author"). The "sokuon" also sometimes appears at the end of utterances, where it denotes a glottal stop, as in exclamations such as "Ouch!". However, it cannot be used to double the "na", "ni", "nu", "ne", "no" syllables' consonants; to double these, the singular "n" (ん) is added in front of the syllable, as in みんな ("minna", "all"). Hiragana usually spells long vowels with the addition of a second vowel kana; for example, おかあさん ("o-ka-a-sa-n", "mother").
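The modifications above can be sketched as lookup tables. The dictionaries below are assumed, illustrative fragments of the syllabary, not a complete implementation:

```python
# Partial tables (illustrative fragments) of the kana modifications
# described above.
DAKUTEN = {"か": "が", "さ": "ざ", "た": "だ", "は": "ば"}  # voiceless -> voiced
HANDAKUTEN = {"は": "ぱ", "ひ": "ぴ", "ふ": "ぷ"}           # "h" -> "p"

print(DAKUTEN["か"])     # が: "ka" becomes "ga"
print(HANDAKUTEN["は"])  # ぱ: "ha" becomes "pa"
# yōon: き ("ki") plus small ゃ gives きゃ ("kya")
print("き" + "ゃ")       # きゃ
```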
The "chōonpu" (long vowel mark) (ー) used in katakana is rarely used with hiragana, for example in the word らーめん ("rāmen"), but this usage is considered non-standard in Japanese; the Okinawan language, however, uses chōonpu with hiragana. In informal writing, small versions of the five vowel kana are sometimes used to represent trailing-off sounds (はぁ "haa", ねぇ "nee"). Standard and voiced iteration marks are written in hiragana as ゝ and ゞ respectively. The following table shows the complete hiragana together with the Hepburn romanization and IPA transcription in the "gojūon" order. Hiragana with "dakuten" or "handakuten" follow the "gojūon" kana without them, with the "yōon" kana following. Obsolete and normally unused kana are shown in brackets. Those in bold do not use the initial sound for that row. For all syllables besides ん, the pronunciation indicated is for word-initial syllables; for mid-word pronunciations, see below. In the middle of words, the "g" sound (normally [ɡ]) may turn into a velar nasal [ŋ] or velar fricative [ɣ]. An exception to this is numerals; 15 "jūgo" is considered to be one word, but is pronounced as if it were "jū" and "go" stacked end to end. In many accents, the "j" and "z" sounds are pronounced as affricates ([dʑ] and [dz], respectively) at the beginning of utterances and as fricatives in the middle of words, for example in "sūji" 'number' and "zasshi" 'magazine'. In archaic forms of Japanese, there existed the "kwa" (くゎ) and "gwa" (ぐゎ) digraphs. In modern Japanese, these phonemes have been phased out of usage and only exist in the extended katakana digraphs for approximating foreign-language words. The singular "n" is pronounced [n] before "t", "ch", "ts", "n", "r", "z", "j" and "d"; [m] before "m", "b" and "p"; [ŋ] before "k" and "g"; [ɴ] at the end of utterances; and as some kind of high nasal vowel before vowels, palatal approximants ("y"), and the fricative consonants "s", "sh", "h", "f" and "w".
In kanji readings, the diphthongs "ou" and "ei" are today usually pronounced as long "o" and long "e" respectively. For example, とうきょう (lit. "toukyou") is pronounced "tōkyō" 'Tokyo', and "sensei" 'teacher' is pronounced "sensē". However, "tou" 'to inquire' is pronounced with two distinct vowels, because the "o" and "u" are considered distinct, "u" being the verb ending in the dictionary form. Similarly, "shite iru" 'is doing' keeps its distinct vowels. For a more thorough discussion on the sounds of Japanese, please refer to Japanese phonology. An early, now obsolete, hiragana-esque form of "ye" may have existed (𛀁) in pre-Classical Japanese (prior to the advent of kana), but it is generally represented for purposes of reconstruction by the kanji 江, and its hiragana form is not present in any known orthography. In modern orthography, "ye" can also be written as いぇ (イェ in katakana). In the early period of kana, hiragana and katakana letters for "ye" were in fact used, but the distinction between /ye/ and /e/ soon disappeared, before distinct letters and glyphs could become established. "Ye" did, however, appear in some textbooks during the Meiji period, along with another kana for "yi" in the form of cursive 以. Today it is considered a hentaigana by scholars and is encoded in Unicode 10. Hiragana "wu" also appeared in different Meiji-era textbooks. Although there are several possible source kanji, it is likely to have been derived from a cursive form of the man'yōgana 汙, although a related variant sometimes listed is from a cursive form of 紆. It was never commonly used. With a few exceptions for the sentence particles は, を and へ (normally "ha", "wo" and "he", but instead pronounced as "wa", "o" and "e", respectively), and a few other arbitrary rules, Japanese, when written in kana, is phonemically orthographic, i.e. there is a one-to-one correspondence between kana characters and sounds, leaving only words' pitch accent unrepresented.
This has not always been the case: a previous system of spelling, now referred to as historical kana usage, differed substantially from pronunciation; the three above-mentioned exceptions in modern usage are the legacy of that system. There are two hiragana pronounced "ji" (じ and ぢ) and two hiragana pronounced "zu" (ず and づ); to distinguish them, particularly when typing Japanese, ぢ is sometimes written as "di" and づ as "du". These pairs are not interchangeable. Usually, "ji" is written as じ and "zu" is written as ず, but there are some exceptions. If the first two syllables of a word consist of one syllable without a "dakuten" and the same syllable with a "dakuten", the same hiragana is used to write the sounds. For example, "chijimeru" ('to boil down' or 'to shrink') is spelled ちぢめる and "tsuzuku" ('to continue') is spelled つづく. For compound words where the dakuten reflects "rendaku" voicing, the original hiragana is used. For example, "chi" ('blood') is spelled ち in plain hiragana. When "hana" ('nose') and "chi" ('blood') combine to make "hanaji" ('nosebleed'), the sound of ち changes from "chi" to "ji". So "hanaji" is spelled はなぢ, according to ち, the basic hiragana used to transcribe the syllable. Similarly, "tsukau" ('to use') is spelled つかう in hiragana, so "kanazukai" ('kana use', or 'kana orthography') is spelled かなづかい in hiragana. However, this does not apply when kanji are used phonetically to write words that do not relate directly to the meaning of the kanji (see also ateji). The Japanese word for 'lightning', for example, is "inazuma". The first component means 'rice plant' and is pronounced "ina"; the second means 'wife' and is pronounced "tsuma" (つま) when written in isolation, or frequently as "zuma" when it follows another syllable. Neither of these components has anything to do with 'lightning', but together they compose the word for it. In this case, the default spelling いなずま (with ず) rather than いなづま is used.
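The spelling choice above amounts to a small rule: in rendaku compounds, the voiced kana keeps the base form of the original syllable (ち becomes ぢ, つ becomes づ) instead of the default じ/ず spellings. The function below is an illustrative sketch of that rule, not a complete model of Japanese orthography:

```python
# Default spellings for the "ji" and "zu" sounds:
DEFAULT = {"ji": "じ", "zu": "ず"}
# Rendaku preserves the base kana, so ち -> ぢ and つ -> づ:
RENDAKU = {"ち": "ぢ", "つ": "づ"}

def voiced_spelling(base_kana: str, sound: str) -> str:
    """Pick ぢ/づ when the voiced syllable arises from ち/つ in a
    compound; otherwise fall back to the default じ/ず spelling."""
    return RENDAKU.get(base_kana, DEFAULT[sound])

print("はな" + voiced_spelling("ち", "ji"))             # はなぢ ("hanaji")
print("かな" + voiced_spelling("つ", "zu") + "かい")    # かなづかい ("kanazukai")
```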
Officially, ぢ and づ do not occur word-initially under modern spelling rules. There were words such as "jiban" 'ground' in the historical kana usage, but they were unified under じ in the modern kana usage in 1946, so today the word is spelled exclusively じばん. However, "zura" 'wig' (from "katsura") and "zuke" (a sushi term for lean tuna soaked in soy sauce) are examples of word-initial づ today. Some people write the word for hemorrhoids as ぢ (normally じ) for emphasis. No standard Japanese words begin with the kana ん ("n"); this is the basis of the word game shiritori. ん "n" is normally treated as its own syllable and is separate from the other "n"-based kana ("na", "ni" etc.). A notable exception to this is the colloquial negative verb conjugation; for example, "wakaranai", meaning "[I] don't understand", is rendered as "wakaran". It is, however, not a contraction of the former, but instead comes from the classical negative verb conjugation ぬ "nu" ("wakaranu"). ん is sometimes directly followed by a vowel ("a", "i", "u", "e" or "o") or a palatal approximant ("ya", "yu" or "yo"). These are clearly distinct from the "na", "ni" etc. syllables, and there are minimal pairs such as きんえん "kin'en" 'smoking forbidden', きねん "kinen" 'commemoration' and きんねん "kinnen" 'recent years'. In Hepburn romanization, they are distinguished with an apostrophe, but not all romanization methods make the distinction. For example, former prime minister Junichiro Koizumi's first name is actually "Jun'ichirō". There are a few hiragana that are rarely used. ゐ "wi" and ゑ "we" are obsolete outside of Okinawan orthography. 𛀁 "e" was an alternate version of え "e" before spelling reform, and was briefly reused for "ye" during initial spelling reforms, but is now completely obsolete.
ゔ "vu" is a modern addition used to represent the /v/ sound in foreign languages such as English, but since Japanese from a phonological standpoint does not have a /v/ sound, it is pronounced as /b/ and mostly serves as a more accurate indicator of a word's pronunciation in its original language. However, it is rarely seen because loanwords and transliterated words are usually written in katakana, where the corresponding character would be written as ヴ. The digraphs ぢゃ, ぢゅ and ぢょ for "ja"/"ju"/"jo" are theoretically possible in rendaku, but are practically never used; for example, "nihonjū" 'throughout Japan' could be written にほんぢゅう, but is practically always written にほんじゅう. The "myu" kana みゅ is extremely rare in originally Japanese words; the linguist Haruhiko Kindaichi raises the example of the Japanese family name Omamyūda and claims it is the only occurrence amongst pure Japanese words. Its katakana counterpart is used in many loanwords, however. Hiragana developed from "man'yōgana", Chinese characters used for their pronunciations, a practice that started in the 5th century. The oldest examples of man'yōgana include the Inariyama Sword, an iron sword excavated at the Inariyama Kofun in 1968. This sword is thought to have been made in a year most commonly taken to be A.D. 471. The forms of the hiragana originate from the cursive script style of Chinese calligraphy. The figure below shows the derivation of hiragana from man'yōgana via cursive script: the upper part shows the character in the regular script form, the center character in red shows the cursive script form of the character, and the bottom shows the equivalent hiragana. The cursive script forms are not strictly confined to those in the illustration. Hiragana was used for unofficial writing such as personal letters, while katakana and Chinese were used for official documents; in time, male authors also came to write literature using hiragana. In modern times, the usage of hiragana has become mixed with katakana writing.
Katakana is now relegated to special uses such as recently borrowed words (i.e., since the 19th century), names in transliteration, the names of animals, telegrams, and emphasis. Originally, all syllables had more than one possible hiragana. In 1900, the system was simplified so that each syllable had only one hiragana. The deprecated hiragana are now known as "hentaigana". The pangram poem "Iroha-uta" ("ABC song/poem"), which dates to the 10th century, uses every hiragana once (except "n" ん, which was just a variant of む before the Muromachi era). The following table shows the method for writing each hiragana character. It is arranged in the traditional way, beginning top right and reading columns down. The numbers and arrows indicate the stroke order and direction respectively. Hiragana was added to the Unicode Standard in October 1991 with the release of version 1.0. The Unicode block for Hiragana is U+3040–U+309F. The Unicode hiragana block contains precomposed characters for all hiragana in the modern set, including small vowels and yōon kana for compound syllables, plus the archaic ゐ "wi" and ゑ "we" and the rare ゔ "vu"; the archaic 𛀁 "ye" is included in plane 1 at U+1B001 (see below). All combinations of hiragana with "dakuten" and "handakuten" used in modern Japanese are available as precomposed characters, and can also be produced by using a base hiragana followed by the combining dakuten and handakuten characters (U+3099 and U+309A, respectively). This method is used to add the diacritics to kana that are not normally used with them, for example applying the dakuten to a pure vowel or the handakuten to a kana not in the h-group. Characters U+3095 and U+3096 are small か ("ka") and small け ("ke"), respectively. U+309F is a ligature of より ("yori") occasionally used in vertical text. U+309B and U+309C are spacing (non-combining) equivalents to the combining dakuten and handakuten characters, respectively.
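The equivalence between precomposed kana and base-plus-combining-mark sequences can be checked with Python's standard unicodedata module; a minimal sketch:

```python
import unicodedata

# か (U+304B) followed by the combining dakuten (U+3099) is canonically
# equivalent to the precomposed が (U+304C): NFC composes the pair,
# and NFD decomposes the precomposed character back into it.
composed = unicodedata.normalize("NFC", "\u304b\u3099")
print(composed == "\u304c")  # True: か + combining dakuten -> が

decomposed = unicodedata.normalize("NFD", "\u304c")
print(decomposed == "\u304b\u3099")  # True: が -> か + combining dakuten
```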
Historic and variant forms of Japanese kana characters were first added to the Unicode Standard in October 2010 with the release of version 6.0, with significantly more added in 2017 as part of Unicode 10. The Unicode block for Kana Supplement is U+1B000–U+1B0FF, and is immediately followed by the Kana Extended-A block (U+1B100–U+1B12F). These blocks mainly include hentaigana (historic or variant hiragana). The Unicode block for Small Kana Extension is U+1B130–U+1B16F. In certain character sequences, a kana from the /k/ row is modified by a "handakuten" combining mark to indicate that a syllable starts with an initial nasal, known as "bidakuon". As of Unicode 13.0, these character combinations are explicitly called out as Named Sequences.
https://en.wikipedia.org/wiki?curid=13804
Hohenstaufen The Hohenstaufen, also called Staufer, was a noble dynasty of unclear origin that rose to rule the Duchy of Swabia from 1079 and to royal rule in the Holy Roman Empire during the Middle Ages, from 1138 until 1254. Its most prominent kings, Frederick I (1155), Henry VI (1191) and Frederick II (1220), ascended the imperial throne and also ruled Italy and Burgundy. The non-contemporary name is derived from a family castle on the Hohenstaufen mountain at the northern fringes of the Swabian Jura, near the town of Göppingen. Under Hohenstaufen reign, the Holy Roman Empire reached its greatest territorial extent from 1155 to 1268. The name Hohenstaufen was first used in the 14th century to distinguish the "high" ("hohen") conical hill named Staufen in the Swabian Jura, in the district of Göppingen, from the village of the same name in the valley below. The new name was only applied to the hill castle of Staufen by historians in the 19th century, to distinguish it from other castles of the same name. The name of the dynasty followed, but in recent decades the trend in German historiography has been to prefer the name Staufer, which is closer to contemporary usage. The name "Staufen" itself derives from "Stauf" (OHG "stouf", akin to Early Modern English "stoup"), meaning "chalice". This term was commonly applied to conical hills in Swabia in the Middle Ages. It is a contemporary term for both the hill and the castle, although its spelling in the Latin documents of the time varies considerably: "Sthouf", "Stophe", "Stophen", "Stoyphe", "Estufin", etc. The castle was built, or at least acquired, by Duke Frederick I of Swabia in the latter half of the 11th century. Members of the family occasionally used the toponymic surname "de Stauf" or variants thereof. Only in the 13th century did the name come to be applied to the family as a whole. Around 1215, a chronicler referred to the "emperors of Stauf".
In 1247, the Emperor Frederick II himself referred to his family as the "domus Stoffensis" (Staufer house), but this was an isolated instance. Otto of Freising (d. 1158) associated the Staufer with the town of Waiblingen, and around 1230 Burchard of Ursberg referred to the Staufer as of the "royal lineage of the Waiblingens" ("regia stirps Waiblingensium"). The exact connection between the family and Waiblingen is not clear, but as a name for the family it became very popular. The pro-imperial Ghibelline faction of the Italian civic rivalries of the 13th and 14th centuries derived its name from Waiblingen. In Italian historiography, the Staufer are known as the "Svevi" (Swabians). The family's origin remains unclear; however, Staufer counts are mentioned in a document of Emperor Otto III in 987 as descendants of counts of the region of "Riesgau" near Nördlingen in the Duchy of Swabia, who were related to the Bavarian "Sieghardinger" family. A local count Frederick (d. about 1075) is mentioned as progenitor in a pedigree drawn up by Abbot Wibald of Stavelot at the behest of Emperor Frederick Barbarossa in 1153. He held the office of a Swabian count palatine; his son Frederick of Buren (c. 1020–1053) married Hildegard of Egisheim-Dagsburg (d. 1094/95), a niece of Pope Leo IX. Their son Frederick I was appointed Duke of Swabia at Hohenstaufen Castle by the Salian king Henry IV of Germany in 1079. At the same time, Duke Frederick I was engaged to the king's approximately seventeen-year-old daughter, Agnes. Nothing is known about Frederick's life before this event, but he proved to be an imperial ally throughout Henry's struggles against other Swabian lords, namely Rudolf of Rheinfelden, Frederick's predecessor, and the Zähringen and Welf lords. Frederick's brother Otto was elevated to the Strasbourg bishopric in 1082. Upon Frederick's death in 1105, he was succeeded by his son, Duke Frederick II.
Frederick II remained a close ally of the Salians; he and his younger brother Conrad were named the king's representatives in Germany when the king was in Italy. Around 1120, Frederick II married Judith of Bavaria from the rival House of Welf. When the last male member of the Salian dynasty, Emperor Henry V, died without heirs in 1125, a controversy arose about the succession. Duke Frederick II and Conrad, the two current male Staufers, were, through their mother Agnes, grandsons of the late Emperor Henry IV and nephews of Henry V. Frederick attempted to succeed to the throne of the Holy Roman Emperor (formally known as the King of the Romans) through a customary election, but lost to the Saxon duke Lothair of Supplinburg. A civil war between Frederick's dynasty and Lothair's ended with Frederick's submission in 1134. After Lothair's death in 1137, Frederick's brother Conrad was elected king as Conrad III. Because the Welf duke Henry the Proud, son-in-law and heir of Lothair and the most powerful prince in Germany, who had been passed over in the election, refused to acknowledge the new king, Conrad III deprived him of all his territories, giving the Duchy of Saxony to Albert the Bear and that of Bavaria to Leopold IV, Margrave of Austria. In 1147, Conrad heard Bernard of Clairvaux preach the Second Crusade at Speyer, and he agreed to join King Louis VII of France in a great expedition to the Holy Land, which ultimately failed. Conrad's brother Duke Frederick II died in 1147 and was succeeded in Swabia by his son, Duke Frederick III. When King Conrad III died without an adult heir in 1152, Frederick also succeeded him, taking both the German royal and the Imperial titles. Frederick I (reigned 2 January 1155 – 10 June 1190), known as Frederick Barbarossa because of his red beard, struggled throughout his reign to restore the power and prestige of the German monarchy against the dukes, whose power had grown both before and after the Investiture Controversy under his Salian predecessors.
As royal access to the resources of the church in Germany was much reduced, Frederick was forced to go to Italy to find the finances needed to restore the king's power in Germany. He was soon crowned emperor in Italy, but decades of warfare on the peninsula yielded scant results. The Papacy and the prosperous city-states of the Lombard League in northern Italy were traditional enemies, but the fear of Imperial domination caused them to join ranks to fight Frederick. Under the skilled leadership of Pope Alexander III, the alliance suffered many defeats but ultimately was able to deny the emperor a complete victory in Italy. Frederick returned to Germany. He had vanquished one notable opponent, his Welf cousin, Duke Henry the Lion of Saxony and Bavaria in 1180, but his hopes of restoring the power and prestige of the monarchy seemed unlikely to be met by the end of his life. During Frederick's long stays in Italy, the German princes became stronger and began a successful colonization of Slavic lands. Offers of reduced taxes and manorial duties enticed many Germans to settle in the east in the course of the "Ostsiedlung". In 1163 Frederick waged a successful campaign against the Kingdom of Poland in order to re-install the Silesian dukes of the Piast dynasty. With the German colonization, the Empire increased in size and came to include the Duchy of Pomerania. A quickening economic life in Germany increased the number of towns and Imperial cities, and gave them greater importance. It was also during this period that castles and courts replaced monasteries as centers of culture. Growing out of this courtly culture, Middle High German literature reached its peak in lyrical love poetry, the Minnesang, and in narrative epic poems such as "Tristan", "Parzival", and the "Nibelungenlied". Frederick died in 1190 while on the Third Crusade and was succeeded by his son, Henry VI. Elected king even before his father's death, Henry went to Rome to be crowned emperor. 
He married Princess Constance of Sicily, and deaths in his wife's family gave him a claim to the succession and possession of the Kingdom of Sicily in 1189 and 1194 respectively, a source of vast wealth. Henry failed to make the royal and Imperial succession hereditary, but in 1196 he succeeded in gaining a pledge that his infant son Frederick would receive the German crown. Faced with difficulties in Italy and confident that he would realize his wishes in Germany at a later date, Henry returned to the south, where it appeared he might unify the peninsula under the Hohenstaufen name. After a series of military victories, however, he fell ill and died of natural causes in Sicily in 1197. His underage son Frederick could only succeed him in Sicily and Malta, while in the Empire the struggle between the House of Staufen and the House of Welf erupted once again. Because the election of a three-year-old boy as German king appeared likely to make orderly rule difficult, the boy's uncle, Duke Philip of Swabia, brother of the late Henry VI, was designated to serve in his place. Other factions, however, favoured a Welf candidate. In 1198, two rival kings were chosen: the Hohenstaufen Philip of Swabia and the Welf Otto IV, son of the deprived Duke Henry the Lion. A long civil war began; Philip was about to win when he was murdered by the Bavarian count palatine Otto VIII of Wittelsbach in 1208. Pope Innocent III had initially supported the Welfs, but when Otto, now the sole elected monarch, moved to appropriate Sicily, Innocent changed sides and accepted young Frederick II and his ally, King Philip II of France, who defeated Otto at the 1214 Battle of Bouvines. Frederick had returned to Germany in 1212 from Sicily, where he had grown up, and was elected king in 1215. When Otto died in 1218, Frederick became the undisputed ruler, and in 1220 he was crowned Holy Roman Emperor.
Philip changed the coat of arms from a black lion on a gold shield to three leopards, probably derived from the arms of his Welf rival Otto IV. The conflict between the Staufer dynasty and the Welfs had irrevocably weakened Imperial authority, and the Norman kingdom of Sicily became the base for Staufer rule. Emperor Frederick II spent little time in Germany, as his main concerns lay in southern Italy. He founded the University of Naples in 1224 to train future state officials and reigned over Germany primarily through the allocation of royal prerogatives, leaving sovereign authority and imperial estates to the ecclesiastical and secular princes. He made significant concessions to the German nobles, such as those put forth in an imperial statute of 1232, which made the princes virtually independent rulers within their territories. These measures favoured the further fragmentation of the Empire. By the 1226 Golden Bull of Rimini, Frederick had assigned the military order of the Teutonic Knights to complete the conquest and conversion of the Prussian lands. A reconciliation with the Welfs took place in 1235, whereby Otto the Child, grandson of the late Saxon duke Henry the Lion, was named Duke of Brunswick and Lüneburg. The power struggle with the popes continued and resulted in Frederick's excommunication in 1227. In 1239, Pope Gregory IX excommunicated Frederick again, and in 1245 he was condemned as a heretic by a church council. Although Frederick was one of the most energetic, imaginative, and capable rulers of the time, he was not concerned with drawing the disparate forces in Germany together. His legacy was thus that local rulers had more authority after his reign than before it. The clergy had also become more powerful. By the time of Frederick's death in 1250, little centralized power remained in Germany.
The Great Interregnum, a period in which there were several elected rival kings, none of whom was able to achieve any position of authority, followed the death of Frederick's son King Conrad IV of Germany in 1254. The German princes vied for individual advantage and managed to strip many powers away from the diminished monarchy. Rather than establishing sovereign states, however, many nobles tended to look after their families. Their many male heirs created more and smaller estates, and many officials, drawn from a previously largely free class, assumed or acquired hereditary rights to administrative and legal offices. These trends compounded the political fragmentation within Germany. The period ended in 1273 with the election of Rudolph of Habsburg, a godson of Frederick. Conrad IV was succeeded as Duke of Swabia by his only son, the two-year-old Conradin. By this time, the office of Duke of Swabia had been fully subsumed into the office of the king, and without royal authority it had become meaningless. In 1261, attempts to elect young Conradin king were unsuccessful. He also had to defend Sicily against an invasion by Charles of Anjou, a brother of the French king, sponsored by Pope Urban IV (Jacques Pantaléon) and Pope Clement IV (Guy Folques). Charles had been promised the Kingdom of Sicily by the popes, where he would replace the relatives of Frederick II. Charles defeated Conradin's uncle Manfred, King of Sicily, in the Battle of Benevento on 26 February 1266. Manfred himself, refusing to flee, rushed into the midst of his enemies and was killed. Conradin's campaign to retake control ended with his defeat in 1268 at the Battle of Tagliacozzo, after which he was handed over to Charles, who had him publicly executed at Naples. With Conradin, the direct line of the Dukes of Swabia finally ceased to exist, though most of the later emperors were descended from the Staufer dynasty indirectly.
During the political decentralization of the late Staufer period, the population had grown from an estimated 8 million in 1200 to about 14 million in 1300, and the number of towns increased tenfold. The most heavily urbanized areas of Germany were located in the south and the west. Towns often developed a degree of independence, but many were subordinate to local rulers if not immediate to the emperor. Colonization of the east also continued in the thirteenth century, most notably through the efforts of the Teutonic Knights. German merchants also began trading extensively on the Baltic. The Kyffhäuser Monument was erected to commemorate Frederick I, and was inaugurated in 1896. On October 29, 1968, the 700th anniversary of the death of Konradin, a society known as the "Society for Staufer History" () was founded in Göppingen. The Castel del Monte in Apulia, which was built during the 1240s by Emperor Frederick II, was designated a World Heritage Site in 1996. The German artist Hans Kloss painted his "" depicting in great detail the history of the House of Hohenstaufen, located in . From 2000 to 2018, the Committee of Staufer Friends () built thirty-eight Staufer steles () in Germany, France, Italy, Austria, the Czech Republic and the Netherlands. The first ruling Hohenstaufen, Conrad III, like the last one, Conrad IV, was never crowned emperor. After a 20-year period (the Great Interregnum, 1254–1273), the first Habsburg was elected king.
https://en.wikipedia.org/wiki?curid=13805
History of Malaysia Malaysia is located on a strategic sea-lane that exposes it to global trade and various cultures. Strictly speaking, the name "Malaysia" is a modern concept, created in the second half of the 20th century. However, contemporary Malaysia regards the entire history of Malaya and Borneo, spanning thousands of years back to prehistoric times, as its own history, and as such it is treated on this page. An early western account of the area appears in Ptolemy's book "Geographia", which mentions a "Golden Khersonese", now identified as the Malay Peninsula. Hinduism and Buddhism from India and China dominated early regional history, reaching their peak during the reign of the Sumatra-based Srivijaya civilisation, whose influence extended through Sumatra, Java, the Malay Peninsula and much of Borneo from the 7th to the 13th centuries. Although Muslims had passed through the Malay Peninsula as early as the 10th century, it was not until the 14th century that Islam first firmly established itself. The adoption of Islam in the 14th century saw the rise of a number of sultanates, the most prominent of which were the Sultanate of Malacca and the Sultanate of Brunei. Islam had a profound influence on the Malay people, but has also been influenced by them. The Portuguese were the first European colonial power to establish itself on the Malay Peninsula and in Southeast Asia, capturing Malacca in 1511, followed by the Dutch in 1641. However, it was the British who, after initially establishing bases at Jesselton, Kuching, Penang and Singapore, ultimately secured their hegemony across the territory that is now Malaysia. The Anglo-Dutch Treaty of 1824 defined the boundaries between British Malaya and the Netherlands East Indies (which became Indonesia), while the Anglo-Siamese Treaty of 1909 defined the boundaries between British Malaya and Siam (which became Thailand).
The fourth phase of foreign influence was an immigration of Chinese and Indian workers to meet the needs of the colonial economy created by the British in the Malay Peninsula and Borneo. The Japanese invasion during World War II ended British domination in Malaysia. The subsequent occupation of Malaya, North Borneo and Sarawak from 1942 to 1945 unleashed nationalism. In the Peninsula, the Malayan Communist Party took up arms against the British. A tough military response ended the insurgency and brought about the establishment of an independent, multi-racial Federation of Malaya on 31 August 1957. On 22 July 1963, Sarawak was granted self-governance. The following month, on 31 August 1963, both North Borneo and Singapore were also granted self-governance, and all the states formed Malaysia on 16 September 1963. Approximately two years later, the Malaysian parliament passed a bill, without the consent of the signatories of the Malaysia Agreement 1963, to separate Singapore from the Federation. A confrontation with Indonesia occurred in the early 1960s. Race riots in 1969 led to the imposition of emergency rule, and a curtailment of political life and civil liberties which has never been fully reversed. From 1970 the Barisan Nasional coalition headed by the United Malays National Organisation (UMNO) governed Malaysia until it was defeated by the Pakatan Harapan coalition, headed by ex-UMNO leader Mahathir Mohamad, on 10 May 2018. In March 2020, the Pakatan Harapan coalition fell when non-PKR, DAP, and AMANAH party members came together to form a government led by BERSATU leader Muhyiddin Yassin. Stone hand-axes from early hominids, probably Homo erectus, have been unearthed in Lenggong. They date back 1.83 million years, the oldest evidence of hominid habitation in Southeast Asia. The earliest evidence of modern human habitation in Malaysia is the 40,000-year-old skull excavated from the Niah Caves in today's Sarawak, nicknamed "Deep Skull".
It was excavated from a deep trench uncovered by Barbara and Tom Harrisson (a British ethnologist) in 1958. This is also the oldest modern human skull in Southeast Asia. The skull probably belongs to a 16- to 17-year-old adolescent girl. The first foragers visited the West Mouth of the Niah Caves (located southwest of Miri) 40,000 years ago, when Borneo was connected to the mainland of Southeast Asia. The landscape around the Niah Caves was drier and more exposed than it is now. Prehistorically, the Niah Caves were surrounded by a combination of closed forests with bush, parkland, swamps, and rivers. The foragers were able to survive in the rainforest through hunting, fishing, and gathering molluscs and edible plants. Mesolithic and Neolithic burial sites have also been found in the area. The area around the Niah Caves has been designated the Niah National Park. A study of Asian genetics points to the idea that the original humans in East Asia came from Southeast Asia. The oldest complete skeleton found in Malaysia is the 11,000-year-old Perak Man, unearthed in 1991. The indigenous groups on the peninsula can be divided into three ethnicities: the Negritos, the Senoi, and the Proto-Malays. The first inhabitants of the Malay Peninsula were most probably Negritos. These Mesolithic hunters were probably the ancestors of the Semang, an ethnic Negrito group who have a long history in the Malay Peninsula. The Senoi appear to be a composite group, with approximately half of the maternal mitochondrial DNA lineages tracing back to the ancestors of the Semang and about half to later ancestral migrations from Indochina. Scholars suggest they are descendants of early Austroasiatic-speaking agriculturalists, who brought both their language and their technology to the southern part of the peninsula approximately 4,000 years ago. They united and coalesced with the indigenous population.
The Proto-Malays have a more diverse origin and had settled in Malaysia by 1000 BC as a result of the Austronesian expansion. Although they show some connections with other inhabitants of Maritime Southeast Asia, some also have an ancestry in Indochina around the time of the Last Glacial Maximum, about 20,000 years ago. Anthropologists support the notion that the Proto-Malays originated from what is today Yunnan, China. This was followed by an early-Holocene dispersal through the Malay Peninsula into the Malay Archipelago. Around 300 BC, they were pushed inland by the Deutero-Malays, an Iron Age or Bronze Age people descended partly from the Chams of Cambodia and Vietnam. The first group in the peninsula to use metal tools, the Deutero-Malays were the direct ancestors of today's Malaysian Malays, and brought with them advanced farming techniques. The Malays remained politically fragmented throughout the Malay Archipelago, although a common culture and social structure was shared. In the first millennium CE, Malays became the dominant race on the peninsula. The small early states that were established were greatly influenced by Indian culture, as was most of Southeast Asia. Indian influence in the region dates back to at least the 3rd century BCE. South Indian culture was spread to Southeast Asia by the south Indian Pallava dynasty in the 4th and 5th centuries. In ancient Indian literature, the term "Suvarnadvipa", or the "Golden Peninsula", is used in the "Ramayana", and some have argued that it may be a reference to the Malay Peninsula. The ancient Indian text "Vayu Purana" also mentioned a place named "Malayadvipa" where gold mines could be found, and this term has been proposed to mean possibly Sumatra and the Malay Peninsula. The Malay Peninsula was shown on Ptolemy's map as the "Golden Khersonese". He referred to the Straits of Malacca as "Sinus Sabaricus". Trade relations with China and India were established in the 1st century BC.
Shards of Chinese pottery have been found in Borneo dating from the 1st century, following the southward expansion of the Han Dynasty. In the early centuries of the first millennium, the people of the Malay Peninsula adopted the Indian religions of Hinduism and Buddhism, religions which had a major effect on the language and culture of those living in Malaysia. The Sanskrit writing system was used as early as the 4th century. There were numerous Malay kingdoms in the 2nd and 3rd centuries, as many as 30, mainly based on the eastern side of the Malay Peninsula. Among the earliest kingdoms known to have been based in the Malay Peninsula is the ancient kingdom of Langkasuka, located in the northern Malay Peninsula and based somewhere on the west coast. It was closely tied to Funan in Cambodia, which also ruled part of northern Malaysia until the 6th century. In the 5th century, the Kingdom of Pahang was mentioned in the "Book of Song". According to the Sejarah Melayu ("Malay Annals"), the Khmer prince Raja Ganji Sarjuna founded the kingdom of Gangga Negara (modern-day Beruas, Perak) in the 700s. Chinese chronicles of the 5th century CE speak of a great port in the south called Guantoli, which is thought to have been in the Straits of Malacca. In the 7th century, a new port called Shilifoshi is mentioned, and this is believed to be a Chinese rendering of Srivijaya. Gangga Negara is believed to be a lost semi-legendary Hindu kingdom mentioned in the Malay Annals that covered present-day Beruas, Dinding and Manjung in the state of Perak, Malaysia, with Raja Gangga Shah Johan as one of its kings. Gangga Negara means "a city on the Ganges" in Sanskrit, the name derived from Ganganagar in northwest India, where the Kambuja peoples lived. Researchers believe that the kingdom was centred at Beruas.
According to another Malay annal, the Hikayat Merong Mahawangsa, also known as the Kedah Annals, Gangga Negara may have been founded by Merong Mahawangsa's son Raja Ganji Sarjuna of Kedah, allegedly a descendant of Alexander the Great, or by Khmer royalty, no later than the 2nd century. The first research into the Beruas kingdom was conducted by Colonel James Low in 1849 and, a century later, by H.G. Quaritch Wales. According to the Museum and Antiquities Department, both researchers agreed that the Gangga Negara kingdom existed between 100 and 1000 CE but could not ascertain the exact site. For years, villagers had unearthed artefacts believed to be from the ancient kingdoms, most of which are at present displayed at the Beruas Museum. Artefacts on display include a 128 kg cannon, swords, kris, coins, tin ingots, pottery from the Ming Dynasty and various eras, and large jars. They can be dated back to the 5th and 6th centuries. Through these artefacts, it has been postulated that Pengkalan (Ipoh), Kinta Valley, Tanjung Rambutan, Bidor and Sungai Siput were part of the kingdom. Artefacts also suggest that the kingdom's centre might have shifted several times. Gangga Negara was renamed Beruas after the establishment of Islam there. Ptolemy, a Greek geographer, astronomer, and astrologer, had written about the Golden Chersonese, which indicates that trade with India and China has existed since the 1st century AD. As early as the 1st century AD, Southeast Asia was the site of a network of coastal city-states, the centre of which was the ancient Khmer Funan kingdom in the south of what is now Vietnam. This network encompassed the southern part of the Indochinese peninsula and the western part of the Malay Archipelago. These coastal cities had continuous trade as well as tributary relations with China from a very early period, at the same time being in constant contact with Indian traders. They seem to have shared a common indigenous culture.
Gradually, the rulers of the western part of the archipelago adopted Indian cultural and political models; Indian influence on Indonesian art, for example, is evident from the 5th century. Three inscriptions found in Palembang (South Sumatra) and on Bangka Island, written in a form of Malay and in an alphabet derived from the Pallava script, are proof that the archipelago had definitely adopted Indian models while maintaining its indigenous language and social system. These inscriptions reveal the existence of a "Dapunta Hyang" (lord) of Srivijaya who led an expedition against his enemies and who curses those who will not obey his law. Being on the maritime route between China and South India, the Malay Peninsula was involved in this trade. The Bujang Valley, being strategically located at the northwest entrance of the Strait of Malacca as well as facing the Bay of Bengal, was continuously frequented by Chinese and south Indian traders. This is proven by the discovery of trade ceramics, sculptures, inscriptions and monuments dated from the 5th to the 14th century CE. The Bujang Valley was continuously administered by different thalassocratic powers, including Funan, Srivijaya, and Majapahit, before the trade declined. In Kedah there are remains showing Buddhist and Hindu influences, which have been known for about a century from the discoveries reported by Col. Low and have recently been subjected to a fairly exhaustive investigation by Dr. Quaritch Wales. Dr. Wales investigated no fewer than thirty sites around Kedah. An inscribed stone bar, rectangular in shape, bears the "ye-dharmma" formula in Pallava script of the 7th century, thus proclaiming the Buddhist character of the shrine near the find-spot (site I), of which only the basement survives. It is inscribed on three faces in "Pallava script" of the 6th century, possibly earlier.
Except for the Cherok Tokkun Inscription, which was engraved on a large boulder, the other inscriptions discovered in the Bujang Valley are comparatively small in size and were probably brought in by Buddhist pilgrims or traders. Between the 7th and the 13th century, much of the Malay Peninsula was under the Buddhist Srivijaya empire. The site of Srivijaya's centre is thought to be at a river mouth in eastern Sumatra, near what is now Palembang. For over six centuries the Maharajahs of Srivijaya ruled a maritime empire that became the main power in the archipelago. The empire was based around trade, with local kings (dhatus or community leaders) swearing allegiance to the central lord for mutual profit. The relation between Srivijaya and the Chola Empire of south India was friendly during the reign of Raja Raja Chola I, but during the reign of Rajendra Chola I the Chola Empire invaded Srivijaya cities (see Chola invasion of Srivijaya). In 1025 and 1026 Gangga Negara was attacked by Rajendra Chola I of the Chola Empire, the Tamil emperor who is now thought to have laid Kota Gelanggi to waste. Kedah—known as "Kedaram", "Cheh-Cha" (according to "I-Ching") or "Kataha" in ancient Pallava or Sanskrit—was in the direct route of the invasions and was ruled by the Cholas from 1025. A second invasion was led by Virarajendra Chola of the Chola dynasty, who conquered Kedah in the late 11th century. The senior Chola's successor, Virarajendra Chola, had to put down a Kedah rebellion to overthrow other invaders. The coming of the Cholas reduced the majesty of Srivijaya, which had exerted influence over Kedah, Pattani and as far as Ligor. During the reign of Kulothunga Chola I, Chola overlordship was established over the Srivijayan province of Kedah in the late 11th century.
The expedition of the Chola emperors made such a great impression on the Malay people of the medieval period that their name was mentioned, in the corrupted form Raja Chulan, in the medieval Malay chronicle Sejarah Melayu. Even today the Chola rule is remembered in Malaysia, as many Malaysian princes have names ending with Cholan or Chulan; one such was the Raja of Perak called Raja Chulan. Pattinapalai, a Tamil poem of the 2nd century CE, describes goods from Kedaram heaped in the broad streets of the Chola capital. A 7th-century Indian drama, "Kaumudhimahotsva", refers to Kedah as Kataha-nagari. The "Agnipurana" also mentions a territory known as Anda-Kataha with one of its boundaries delineated by a peak, which scholars believe is Gunung Jerai. Stories from the "Katasaritasagaram" describe the elegance of life in Kataha. The Buddhist kingdom of Ligor took control of Kedah shortly after. Its king Chandrabhanu used it as a base to attack Sri Lanka in the 11th century and ruled the northern parts, an event noted in a stone inscription in Nagapattinam in Tamil Nadu and in the Sri Lankan chronicle, the "Mahavamsa". At times, the Khmer kingdom, the Siamese kingdom, and even the Chola kingdom tried to exert control over the smaller Malay states. The power of Srivijaya declined from the 12th century as the relationship between the capital and its vassals broke down. Wars with the Javanese caused it to request assistance from China, and wars with Indian states are also suspected. In the 11th century, the centre of power shifted to Malayu, a port possibly located further up the Sumatran coast near the Jambi River. The power of the Buddhist Maharajas was further undermined by the spread of Islam. Areas which were converted to Islam early, such as Aceh, broke away from Srivijaya's control. By the late 13th century, the Siamese kings of Sukhothai had brought most of Malaya under their rule.
In the 14th century, the Hindu Java-based Majapahit empire came into possession of the peninsula. An excavation by Tom Harrisson in 1949 unearthed a series of Chinese ceramics at Santubong (near Kuching) that date to the Tang and Song dynasties of the 8th to 13th centuries AD. It is possible that Santubong was an important seaport in Sarawak during the period, but its importance declined during the Yuan dynasty, and the port was deserted during the Ming dynasty. Other archaeological sites in Sarawak can be found in the Kapit, Song, Serian, and Bau districts. After decades of Javanese domination, several last efforts were made by Sumatran rulers to revive the old prestige and fortune of the Malay-Srivijayan mandala. Several attempts to revive Srivijaya were made by the fleeing princes of Srivijaya. According to the Malay Annals, a new ruler named Sang Sapurba was promoted as the new paramount of the Srivijayan mandala. It was said that after his accession to Seguntang Hill with his two younger brothers, Sang Sapurba entered into a sacred covenant with Demang Lebar Daun, the native ruler of Palembang. The newly installed sovereign afterwards descended from the hill of Seguntang into the great plain of the Musi river, where he married Wan Sendari, the daughter of the local chief, Demang Lebar Daun. Sang Sapurba was said to have reigned in the Minangkabau lands. In 1324, a Srivijaya prince, Sri Maharaja Sang Utama Parameswara Batara Sri Tribuwana (Sang Nila Utama), founded the Kingdom of Singapura (Temasek). According to tradition, he was related to Sang Sapurba. He maintained control over Temasek for 48 years. He was recognized as ruler over Temasek by an envoy of the Chinese Emperor sometime around 1366. He was succeeded by his son Paduka Sri Pekerma Wira Diraja (1372–1386) and grandson, Paduka Seri Rana Wira Kerma (1386–1399). In 1401, the last ruler, Paduka Sri Maharaja Parameswara, was expelled from Temasek by forces from Majapahit or Ayutthaya.
He later headed north and founded the Sultanate of Malacca in 1402. The Sultanate of Malacca succeeded the Srivijaya Empire as a Malay political entity in the archipelago. Islam came to the Malay Archipelago through Arab and Indian traders in the 13th century, ending the age of Hinduism and Buddhism. It arrived in the region gradually, and became the religion of the elite before it spread to the commoners. Islam in Malaysia was influenced by previous religions and was originally not orthodox. The port of Malacca on the west coast of the Malay Peninsula was founded in 1402 by Parameswara, a Srivijaya prince fleeing Temasek (now Singapore); Parameswara had earlier sailed to Temasek to escape persecution. There he came under the protection of Temagi, a Malay chief from Patani who was appointed by the king of Siam as regent of Temasek. Within a few days, Parameswara killed Temagi and appointed himself regent. Some five years later he had to leave Temasek, due to threats from Siam. During this period, a Javanese fleet from Majapahit attacked Temasek. Parameswara headed north to found a new settlement. At Muar, Parameswara considered siting his new kingdom at either Biawak Busuk or Kota Buruk. Finding that the Muar location was not suitable, he continued his journey northwards. Along the way, he reportedly visited Sening Ujong (former name of present-day Sungai Ujong) before reaching a fishing village at the mouth of the Bertam River (former name of the Melaka River), and founded what would become the Malacca Sultanate. Over time this developed into modern-day Malacca Town. According to the "Malay Annals", here Parameswara saw a mouse deer outwitting a dog resting under a Malacca tree. Taking this as a good omen, he decided to establish a kingdom called Malacca. He built and improved facilities for trade. The Malacca Sultanate is commonly considered the first independent state in the peninsula.
In 1403, the first official Chinese trade envoy, led by Admiral Yin Qing, arrived in Malacca. Later, Parameswara was escorted by Zheng He and other envoys on his successful visits. Malacca's relationship with Ming China granted it protection against attacks from Siam and Majapahit, and Malacca officially submitted as a protectorate of Ming China. This encouraged the development of Malacca into a major trade settlement on the trade route between China and India, the Middle East, Africa and Europe. To prevent the Malaccan empire from falling to the Siamese and Majapahit, Parameswara forged a relationship with the Ming dynasty of China for protection. Following the establishment of this relationship, the prosperity of the Malacca entrepôt was recorded by the first Chinese visitor, Ma Huan, who travelled together with Admiral Zheng He. In Malacca during the early 15th century, Ming China actively sought to develop a commercial hub and a base of operations for their treasure voyages into the Indian Ocean. Malacca had been a relatively insignificant region, not even qualifying as a polity prior to the voyages, according to both Ma Huan and Fei Xin, and was a vassal region of Siam. In 1405, the Ming court dispatched Admiral Zheng He with a stone tablet enfeoffing the Western Mountain of Malacca, as well as an imperial order elevating the status of the port to a country. The Chinese also established a government depot (官廠) as a fortified cantonment for their soldiers. Ma Huan reported that Siam did not dare to invade Malacca thereafter. The rulers of Malacca, such as Parameswara in 1411, would pay tribute to the Chinese emperor in person. The emperor of Ming Dynasty China was sending out fleets of ships to expand trade. Admiral Zheng He called at Malacca and brought Parameswara with him on his return to China, a recognition of his position as legitimate ruler of Malacca.
In exchange for regular tribute, the Chinese emperor offered Melaka protection from the constant threat of a Siamese attack. Because of its strategic location, Malacca was an important stopping point for Zheng He's fleet. Due to Chinese involvement, Malacca grew into a key alternative to other important and established ports. The Chinese and Indians who settled in the Malay Peninsula before and during this period are the ancestors of today's Baba-Nyonya and Chitty communities. According to one theory, Parameswara became a Muslim when he married a princess of Pasai, and he took the fashionable Persian title "Shah", calling himself Iskandar Shah. Chinese chronicles mention that in 1414 the son of the first ruler of Malacca visited the Ming emperor to inform him that his father had died. Parameswara's son was then officially recognised as the second ruler of Melaka by the Chinese emperor and styled Raja Sri Rama Vikrama, Raja of Parameswara of Temasek and Malacca, and he was known to his Muslim subjects as Sultan Sri Iskandar Zulkarnain Shah or Sultan Megat Iskandar Shah. He ruled Malacca from 1414 to 1424. Through the influence of Indian Muslims and, to a lesser extent, Hui people from China, Islam became increasingly common during the 15th century. After an initial period paying tribute to Ayutthaya, the kingdom rapidly assumed the place previously held by Srivijaya, establishing independent relations with China, and exploiting its position dominating the Straits to control the China-India maritime trade, which became increasingly important when the Mongol conquests closed the overland route between China and the west. Within a few years of its establishment, Malacca officially adopted Islam. Parameswara became a Muslim, and because Malacca was under a Muslim prince, the conversion of Malays to Islam accelerated in the 15th century. The political power of the Malacca Sultanate helped Islam's rapid spread through the archipelago.
Malacca was an important commercial centre during this time, attracting trade from around the region. By the start of the 16th century, with the Malacca Sultanate in the Malay Peninsula and parts of Sumatra, the Demak Sultanate in Java, and other kingdoms around the Malay Archipelago increasingly converting to Islam, it had become the dominant religion among Malays, and reached as far as the modern-day Philippines, leaving Bali as an isolated outpost of Hinduism today. Malacca's reign lasted little more than a century, but during this time it became the established centre of Malay culture. Most future Malay states originated from this period. Malacca became a cultural centre, creating the matrix of the modern Malay culture: a blend of indigenous Malay and imported Indian, Chinese and Islamic elements. Malacca's fashions in literature, art, music, dance and dress, and the ornate titles of its royal court, came to be seen as the standard for all ethnic Malays. The court of Malacca also gave great prestige to the Malay language, which had originally evolved in Sumatra and been brought to Malacca at the time of its foundation. In time Malay came to be the official language of all the Malaysian states, although local languages survived in many places. After the fall of Malacca, the Sultanate of Brunei became the major centre of Islam. Before its conversion to Islam, Brunei was known as Poni and was a vassal state of the Majapahit Empire. By the 15th century, it had become a Muslim state, when the King of Brunei converted to Islam, which was brought by Muslim Indian and Arab merchants from other parts of Maritime Southeast Asia who came to trade and spread Islam. During the rule of Bolkiah, the fifth Sultan, the empire controlled the coastal areas of northwest Borneo (present-day Brunei, Sarawak and Sabah) and reached the Philippines at Seludong (present-day Manila) and the Sulu Archipelago, and included parts of the island of Mindanao.
In the 16th century, the Brunei empire's influence also extended as far as the Kapuas River delta in West Kalimantan. Other sultanates in the area had close relations with the Royal House of Brunei, being in some cases effectively under the hegemony of the Brunei ruling family for periods of time, such as the Malay sultans of Pontianak, Samarinda and as far as Banjarmasin, who treated the Sultan of Brunei as their leader. The Malay Sultanate of Sambas in present-day West Kalimantan and the Sultanate of Sulu in the southern Philippines in particular developed dynastic relations with the royal house of Brunei. The Sultanate of Sarawak (covering present-day Kuching, known to Portuguese cartographers as "Cerava" and one of the five great seaports on the island of Borneo), though under the influence of Brunei, was self-governed under Sultan Tengah before being fully integrated into the Bruneian Empire upon Tengah's death in 1641. The Bruneian empire began to decline with the arrival of Western powers. Spain sent several expeditions from Mexico to invade Brunei's territories in the Philippines. They conquered the Bruneian colony of Islamic Manila, Christianized its people, and laid siege to Sulu. Eventually the Spanish, their Visayan allies and their Latin-American recruits assaulted Brunei itself during the Castilian War. The invasion was only temporary, as the Spanish then retreated. However, Brunei was unable to regain the territory it lost in the Philippines. Yet it still maintained sway in Borneo. By the early 19th century, Sarawak had become a loosely governed territory under the control of the Brunei Sultanate. The Bruneian Empire had authority only along the coastal regions of Sarawak held by semi-independent Malay leaders. Meanwhile, the interior of Sarawak suffered from tribal wars fought by the Iban, Kayan, and Kenyah peoples, who aggressively fought to expand their territories.
Following the discovery of antimony ore in the Kuching region, Pangeran Indera Mahkota (a representative of the Sultan of Brunei) began to develop the territory between 1824 and 1830. When antimony production increased, the Brunei Sultanate demanded higher taxes from Sarawak; this led to civil unrest and chaos. In 1839, Sultan Omar Ali Saifuddin II (1827–1852) ordered his uncle Pengiran Muda Hashim to restore order. It was around this time that James Brooke (who would later become the first White Rajah of Sarawak) arrived in Sarawak, and Pengiran Muda Hashim requested his assistance in the matter, but Brooke refused. However, he agreed to a further request during his next visit to Sarawak in 1841. Pengiran Muda Hashim signed a treaty in 1841 surrendering Sarawak to Brooke. On 24 September 1841, Pengiran Muda Hashim bestowed the title of governor on James Brooke. This appointment was later confirmed by the Sultan of Brunei in 1842. In 1843, James Brooke decided to create a pro-British Brunei government by installing Pengiran Muda Hashim in the Brunei Court, as Hashim would be taking Brooke's advice. James Brooke forced Brunei to appoint Hashim under the guns of the East India Company's steamer "Phlegethon", an example of a wider policy of British gunboat diplomacy. The Brunei Court was unhappy with Hashim's appointment and had him assassinated in 1845. In retaliation, James Brooke attacked Kampong Ayer, the capital of Brunei. After the incident, the Sultan of Brunei sent a letter of apology to Queen Victoria. The sultan also confirmed James Brooke's possession of Sarawak and his mining rights to antimony without paying tribute to Brunei. In 1846 Brooke effectively became the Rajah of Sarawak and founded the White Rajah dynasty of Sarawak. From the 15th century onwards, the Portuguese started seeking a maritime route to Asia.
In 1511, Afonso de Albuquerque led an expedition to Malaya which seized Malacca with the intent of using it as a base for activities in southeast Asia. This was the first colonial claim on what is now Malaysia. The son of the last Sultan of Malacca, Sultan Alauddin Riayat Shah II, fled to the southern tip of the peninsula, where he founded a state that became the Sultanate of Johor. Another son created the Perak Sultanate to the north. By the late 16th century, the tin mines of northern Malaya had been discovered by European traders, and Perak grew wealthy on the proceeds of tin exports. Portuguese influence was strong, as they aggressively tried to convert the population of Malacca to Catholicism. In 1571, the Spanish captured Manila and established a colony in the Philippines, reducing the Sultanate of Brunei's power. After the fall of Malacca to Portugal, the Johor Sultanate on the southern Malay peninsula and the Sultanate of Aceh on northern Sumatra moved to fill the power vacuum left behind. The three powers struggled to dominate the Malay peninsula and the surrounding islands. Meanwhile, the importance of the Strait of Malacca as an East-West shipping route was growing, while the islands of Southeast Asia were themselves prized sources of natural resources (metals, spices, etc.) whose inhabitants were being drawn further into the global economy. After its founding in 1528 by Alauddin Riayat Shah II, the Johor Sultanate grew powerful enough to rival the Portuguese, though it was never able to recapture the city. Instead it expanded in other directions, building over 130 years one of the largest Malay states. During this time its numerous attempts to recapture Malacca led to a strong backlash from the Portuguese, whose raids even reached Johor's capital of Johor Lama in 1587. In 1607, the Sultanate of Aceh rose to become the most powerful and wealthiest state in the Malay archipelago. 
Under Iskandar Muda's reign, the sultanate's control was extended over a number of Malay states. A notable conquest was Perak, a tin-producing state on the Peninsula. In Iskandar Muda's disastrous campaign against Malacca in 1629, the combined Portuguese and Johor forces managed to destroy all the ships of his formidable fleet and 19,000 troops, according to a Portuguese account. Aceh's forces were not destroyed, however, as Aceh was able to conquer Kedah within the same year and carry off many of its citizens to Aceh. The Sultan's son-in-law, Iskandar Thani, the former prince of Pahang, later became Iskandar Muda's successor. The conflict over control of the straits went on until 1641, when the Dutch (allied to Johor) gained control of Malacca. In the early 17th century, the Dutch East India Company ("Vereenigde Oost-Indische Compagnie", or VOC) was established. During this time the Dutch were at war with Spain, which had absorbed the Portuguese Empire through the Iberian Union. The Dutch expanded across the archipelago, forming an alliance with Johor and using this to push the Portuguese out of Malacca in 1641. Backed by the Dutch, Johor established a loose hegemony over the Malay states, except Perak, which was able to play Johor off against the Siamese to the north and retain its independence. The Dutch did not interfere in local matters in Malacca, but at the same time diverted most trade to their colonies on Java. The weakness of the small coastal Malay states led to the immigration of the Bugis, who, escaping from the Dutch colonisation of Sulawesi, established numerous settlements on the peninsula which they used to interfere with Dutch trade. They seized control of Johor following the assassination of the last Sultan of the old Melaka royal line in 1699. The Bugis expanded their power in the states of Johor, Kedah, Perak, and Selangor. The Minangkabau from central Sumatra migrated into Malaya, and eventually established their own state in Negeri Sembilan. 
The fall of Johor left a power vacuum on the Malay Peninsula which was partly filled by the Siamese kings of the Ayutthaya kingdom, who made the five northern Malay states (Kedah, Kelantan, Patani, Perlis, and Terengganu) their vassals. Johor's eclipse also left Perak as the unrivalled leader of the Malay states. The economic importance of Malaya to Europe grew rapidly during the 18th century. The fast-growing tea trade between China and the United Kingdom increased the demand for high-quality Malayan tin, which was used to line tea-chests. Malayan pepper also had a high reputation in Europe, while Kelantan and Pahang had gold mines. The growth of tin and gold mining and the associated service industries led to the first influx of foreign settlers into the Malay world – initially Arabs and Indians, later Chinese. English traders had been present in Malay waters since the 17th century, but it was with the arrival of the British that European power became dominant in Malaya. Before the mid-19th century, British interests in the region were predominantly economic, with little interest in territorial control. Already the most powerful coloniser in India, the British were looking towards southeast Asia for new resources. The growth of the China trade in British ships increased the East India Company's desire for bases in the region. Various islands were used for this purpose, but the first permanent acquisition was Penang, leased from the Sultan of Kedah in 1786. This was followed soon after by the leasing of a block of territory on the mainland opposite Penang (known as Province Wellesley). In 1795, during the Napoleonic Wars, the British, with the consent of the Netherlands, occupied Dutch Melaka to forestall possible French encroachment in the area. When Malacca was handed back to the Dutch in 1815, the British governor, Stamford Raffles, looked for an alternative base, and in 1819 he acquired Singapore from the Sultan of Johor. 
The exchange of the British colony of Bencoolen for Malacca with the Dutch left the British as the sole colonial power on the peninsula. The British territories were set up as free ports in an attempt to break the monopoly held by other colonial powers at the time, and they became large bases of trade, allowing Britain to control all trade through the Straits of Malacca. British influence was increased by Malayan fears of Siamese expansionism, to which Britain made a useful counterweight. During the 19th century the Malay Sultans aligned themselves with the British Empire, due to the benefits of association with the British and a belief in superior British civilisation. In 1824, British hegemony in Malaya was formalised by the Anglo-Dutch Treaty, which divided the Malay archipelago between Britain and the Netherlands. The Dutch evacuated Melaka and renounced all interest in Malaya, while the British recognised Dutch rule over the rest of the East Indies. By 1826 the British controlled Penang, Malacca, Singapore, and the island of Labuan, which they established as the crown colony of the Straits Settlements, administered first by the East India Company and then, from 1867, by the Colonial Office in London. Initially, the British followed a policy of non-intervention in relations between the Malay states. The commercial importance of tin mining in the Malay states to merchants in the Straits Settlements led to infighting among the aristocracy on the peninsula. The destabilisation of these states damaged commerce in the area, prompting British intervention. The wealth of Perak's tin mines made political stability there a priority for British investors, and Perak was thus the first Malay state to agree to the supervision of a British resident. 
British gunboat diplomacy was employed to bring about a peaceful resolution to civil disturbances caused by Chinese and Malay gangsters employed in a political fight between Ngah Ibrahim and Raja Muda Abdullah. The Pangkor Treaty of 1874 paved the way for the expansion of British influence in Malaya. The British concluded treaties with some Malay states, installing "residents" who advised the Sultans and soon became the effective rulers of their states. These advisors held power in everything except matters of Malay religion and custom. Johor alone resisted, by modernising and giving British and Chinese investors legal protection. By the turn of the 20th century, the states of Pahang, Selangor, Perak, and Negeri Sembilan, known together as the Federated Malay States, had British advisors. In 1909 the Siamese kingdom was compelled to cede Kedah, Kelantan, Perlis and Terengganu, which already had British advisors, to the British. Sultan Abu Bakar of Johor and Queen Victoria were personal acquaintances who recognised each other as equals. It was not until 1914 that Sultan Abu Bakar's successor, Sultan Ibrahim, accepted a British adviser. The four previously Thai states and Johor were known as the Unfederated Malay States. The states under the most direct British control developed rapidly, becoming the world's largest suppliers of first tin, then rubber. By 1910, the pattern of British rule in the Malay lands was established. The Straits Settlements were a Crown colony, ruled by a governor under the supervision of the Colonial Office in London. Their population was about half Chinese, but all residents, regardless of race, were British subjects. The first four states to accept British residents, Perak, Selangor, Negeri Sembilan, and Pahang, were termed the Federated Malay States; while technically independent, they were placed under a Resident-General in 1895, making them British colonies in all but name. 
The Unfederated Malay States (Johore, Kedah, Kelantan, Perlis, and Terengganu) had a slightly larger degree of independence, although they were unable to resist the wishes of their British residents for long. Johor, as Britain's closest ally in Malay affairs, had the privilege of a written constitution, which gave the Sultan the right to appoint his own Cabinet, but he was generally careful to consult the British first. During the late 19th century the British also gained control of the north coast of Borneo, where Dutch rule had never been established. Development on the Peninsula and in Borneo remained generally separate until the 19th century. The eastern part of this region (now Sabah) was under the nominal control of the Sultan of Sulu, who later became a vassal of the Spanish East Indies. The rest was the territory of the Sultanate of Brunei. In 1841, the British adventurer James Brooke helped the Sultan of Brunei suppress a revolt, and in return received the title of raja and the right to govern the Sarawak River District. In 1846, his title was recognised as hereditary, and the "White Rajahs" began ruling Sarawak as a recognised independent state. The Brookes expanded Sarawak at the expense of Brunei. In 1881, the British North Borneo Company was granted control of the territory of British North Borneo, appointing a governor and legislature. It was ruled from its office in London. Its status was similar to that of a British protectorate, and like Sarawak it expanded at the expense of Brunei. Upon Philippine independence in 1946, seven British-controlled islands in the north-eastern part of Borneo, the Turtle Islands and Cagayan de Tawi-Tawi, were ceded to the Philippine government by the Crown colony government of North Borneo. Since the administration of President Diosdado Macapagal, the Philippines has laid an irredentist claim to eastern Sabah on the basis that the territory was part of the now-defunct Sultanate of Sulu's territory. 
In 1888, what was left of Brunei was made a British protectorate, and in 1891 another Anglo-Dutch treaty formalised the border between British and Dutch Borneo. Unlike some colonial powers, the British always saw their empire as primarily an economic concern, and its colonies were expected to turn a profit for British shareholders. Malaya's obvious attractions were its tin and gold mines, but British planters soon began to experiment with tropical plantation crops: tapioca, gambier, pepper, and coffee. In 1877 the rubber plant was introduced from Brazil, and rubber soon became Malaya's staple export, stimulated by booming demand from European industry. Rubber was later joined by palm oil as an export earner. All these industries required a large and disciplined labour force, and the British did not regard the Malays as reliable workers. The solution was the importation of plantation workers from India, mainly Tamil-speakers from South India. A small group of Malabaris were also brought from the region now called Kerala to help with the rubber plantations, the origin of the small Malabari population seen in Malaysia today. The mines, mills and docks also attracted a flood of immigrant workers from southern China. Soon towns like Singapore, Penang, and Ipoh were majority Chinese, as was Kuala Lumpur, founded as a tin-mining centre in 1857. By 1891, when Malaya's first census was taken, Perak and Selangor, the main tin-mining states, had Chinese majorities. The Chinese mostly arrived poor; yet their belief in industriousness and frugality, their emphasis on their children's education and their maintenance of the Confucian family hierarchy, as well as their voluntary connection with tightly knit networks of mutual aid societies (run by "Hui-Guan" 會館, non-profit organisations with nominal geographic affiliations to different parts of China), all contributed to their prosperity. 
In the 1890s Yap Ah Loy, who held the title of Kapitan China of Kuala Lumpur, was the richest man in Malaya, owning a chain of mines, plantations and shops. Malaya's banking and insurance industries were run by the Chinese from the start, and Chinese businesses, usually in partnership with London firms, soon had a stranglehold on the economy. Since the Malay Sultans tended to spend well beyond their means, they were soon indebted to Chinese bankers, and this gave the Chinese political as well as economic leverage. At first the Chinese immigrants were mostly men, and many intended to return home when they had made their fortunes. Many did go home, but many more stayed. At first they married Malay women, producing a community of Sino-Malayans or baba people, but soon they began importing Chinese brides, establishing permanent communities and building schools and temples. The Indians were initially less successful, since unlike the Chinese they came mainly as indentured labourers to work in the rubber plantations, and had few of the economic opportunities that the Chinese had. They were also a less united community, since they were divided between Hindus and Muslims and along lines of language and caste. An Indian commercial and professional class emerged during the early 20th century, but the majority of Indians remained poor and uneducated in rural ghettos in the rubber-growing areas. Traditional Malay society had great difficulty coping with both the loss of political sovereignty to the British and of economic power to the Chinese. By the early 20th century it seemed possible that the Malays would become a minority in their own country. The Sultans, who were seen as collaborators with both the British and the Chinese, lost some of their traditional prestige, particularly among the increasing number of Malays with a western education, but the mass of rural Malays continued to revere the Sultans and their prestige was thus an important prop for colonial rule. 
A small class of Malay nationalist intellectuals began to emerge during the early 20th century, and there was also a revival of Islam in response to the perceived threat of other imported religions, particularly Christianity. In fact few Malays converted to Christianity, although many Chinese did. The northern regions, which were less influenced by western ideas, became strongholds of Islamic conservatism, as they have remained. The one consolation to Malay pride was that the British allowed them a virtual monopoly of positions in the police and local military units, as well as a majority of those administrative positions open to non-Europeans. While the Chinese mostly built and paid for their own schools and colleges, importing teachers from China, the colonial government fostered education for Malays, opening Malay College in 1905 and creating the Malay Administrative Service in 1910. (The college was dubbed "Bab ud-Darajat" – the Gateway to High Rank.) A Malay Teachers College followed in 1922, and a Malay Women's Training College in 1935. All this reflected the official British policy that Malaya belonged to the Malays, and that the other races were but temporary residents. This view was increasingly out of line with reality, and contained the seeds of much future trouble. Lectures and writings at the Malay teachers' college nurtured Malay nationalism and anti-colonial sentiment, and the college is consequently known as the birthplace of Malay nationalism. In 1938, Ibrahim Yaacob, an alumnus of Sultan Idris College, established the Kesatuan Melayu Muda (Young Malays Union, or KMM) in Kuala Lumpur. It was the first nationalist political organisation in British Malaya, advocating the union of all Malays regardless of origin, and fighting for Malay rights and against British imperialism. A specific ideal the KMM held was "Panji Melayu Raya", which called for the unification of British Malaya and the Dutch East Indies. 
In the years before World War II, the British were concerned with finding the balance between a centralised state and maintaining the power of the Sultans in Malaya. There were no moves to give Malaya a unitary government, and in fact in 1935 the position of Resident-General of the Federated States was abolished, and its powers decentralised to the individual states. With their usual tendency to racial stereotyping, the British regarded the Malays as amiable but unsophisticated and rather lazy, incapable of self-government, although making good soldiers under British officers. They regarded the Chinese as clever but dangerous—and indeed during the 1920s and 1930s, reflecting events in China, the Chinese Nationalist Party (the Kuomintang) and the Communist Party of China built rival clandestine organisations in Malaya, leading to regular disturbances in the Chinese towns. The British saw no way that Malaya's disparate collection of states and races could become a nation, let alone an independent one. Although a belligerent as part of the British Empire, Malaya saw little action during World War I, except for the sinking of the Russian cruiser Zhemchug by the German cruiser SMS Emden on 28 October 1914 during the Battle of Penang. The outbreak of war in the Pacific in December 1941 found the British in Malaya completely unprepared. During the 1930s, anticipating the rising threat of Japanese naval power, they had built a great naval base at Singapore, but never anticipated an invasion of Malaya from the north. Because of the demands of the war in Europe, there was virtually no British air capacity in the Far East. The Japanese were thus able to attack from their bases in French Indo-China with impunity, and despite stubborn resistance from British, Australian, and Indian forces, they overran Malaya in two months. 
Singapore, with no landward defences, no air cover, and no water supply, was forced to surrender in February 1942, doing irreparable damage to British prestige. British North Borneo and Brunei were also occupied. The Japanese had a racial policy just as the British did. They regarded the Malays as a colonial people liberated from British imperialist rule, and fostered a limited form of Malay nationalism, which gained them some degree of collaboration from the Malay civil service and intellectuals. (Most of the Sultans also collaborated with the Japanese, although they maintained later that they had done so unwillingly.) The Malay nationalist Kesatuan Melayu Muda, advocates of "Melayu Raya", collaborated with the Japanese, based on the understanding that Japan would unite the Dutch East Indies, Malaya and Borneo and grant them independence. The occupiers regarded the Chinese, however, as enemy aliens, and treated them with great harshness: during the so-called "sook ching" (purification through suffering), up to 80,000 Chinese in Malaya and Singapore were killed. Chinese businesses were expropriated and Chinese schools either closed or burned down. Not surprisingly the Chinese, led by the Malayan Communist Party (MCP), became the backbone of the Malayan Peoples' Anti-Japanese Army (MPAJA), a force similar to the Soviet-supported Partisan rebel forces led by local Communist parties in the Eastern European theatre. With British assistance, the MPAJA became the most effective resistance force in the occupied Asian countries. Although the Japanese argued that they supported Malay nationalism, they offended Malay nationalism by allowing their ally Thailand to re-annex the four northern states, Kedah, Perlis, Kelantan, and Terengganu that had been surrendered to the British in 1909. The loss of Malaya's export markets soon produced mass unemployment which affected all races and made the Japanese increasingly unpopular. 
During the occupation, ethnic tensions rose and nationalism grew. The Malayans were thus on the whole glad to see the British back in 1945, but things could not remain as they were before the war, and a stronger desire for independence grew. Britain was bankrupt and the new Labour government was keen to withdraw its forces from the East as soon as possible. Colonial self-rule and eventual independence were now British policy. The tide of colonial nationalism sweeping through Asia soon reached Malaya. But most Malays were more concerned with defending themselves against the MCP, which was mostly made up of Chinese, than with demanding independence from the British; indeed, their immediate concern was that the British not leave and abandon the Malays to the armed Communists of the MPAJA, which was the largest armed force in the country. In 1944, the British drew up plans for a Malayan Union, which would turn the Federated and Unfederated Malay States, plus Penang and Malacca (but not Singapore), into a single Crown colony, with a view towards independence. The Bornean territories and Singapore were left out as it was thought this would make union more difficult to achieve. There was however strong opposition from the Malays, who opposed the weakening of the Malay rulers and the granting of citizenship to the ethnic Chinese and other minorities. The British had decided on equality between the races because they perceived the Chinese and Indians as having been more loyal to the British during the war than the Malays. The Sultans, who had initially supported the plan, backed down and placed themselves at the head of the resistance. In 1946, the United Malays National Organisation (UMNO) was founded by Malay nationalists led by Dato Onn bin Jaafar, the Chief Minister of Johor. UMNO favoured independence for Malaya, but only if the new state was run exclusively by the Malays. Faced with implacable Malay opposition, the British dropped the plan for equal citizenship. 
The Malayan Union was thus established in 1946, but was dissolved in 1948 and replaced by the Federation of Malaya, which restored the autonomy of the rulers of the Malay states under British protection. Meanwhile, the Communists were moving towards open insurrection. The MPAJA had been disbanded in December 1945, and the MCP organised as a legal political party, but the MPAJA's arms were carefully stored for future use. The MCP's policy was immediate independence with full equality for all races, which meant it recruited very few Malays. The Party's strength was in the Chinese-dominated trade unions, particularly in Singapore, and in the Chinese schools, where the teachers, mostly born in China, saw the Communist Party of China as the leader of China's national revival. In March 1947, reflecting the international Communist movement's turn to the left as the Cold War set in, the MCP leader Lai Tek was purged and replaced by the veteran MPAJA guerrilla leader Chin Peng, who turned the party increasingly to direct action. These rebels, under the leadership of the MCP, launched guerrilla operations designed to force the British out of Malaya. In July, following a string of assassinations of plantation managers, the colonial government struck back, declaring a State of Emergency, banning the MCP and arresting hundreds of its militants. The Party retreated to the jungle and formed the Malayan Peoples' Liberation Army, with about 13,000 men under arms, all Chinese. The Malayan Emergency, as it was known, lasted from 1948 to 1960 and involved a long anti-insurgency campaign by Commonwealth troops in Malaya. The British strategy, which proved ultimately successful, was to isolate the MCP from its support base by a combination of economic and political concessions to the Chinese and the resettlement of Chinese squatters into "New Villages" in "white areas" free of MCP influence. In December 1948, 24 villagers were killed by British troops at Batang Kali. 
From 1949 the MCP campaign lost momentum and the number of recruits fell sharply. Although the MCP succeeded in assassinating the British High Commissioner, Sir Henry Gurney, in October 1951, this turn to terrorist tactics alienated many moderate Chinese from the Party. The arrival of Lt.-Gen. Sir Gerald Templer as British commander in 1952 was the beginning of the end of the Emergency. Templer invented the techniques of counter-insurgency warfare in Malaya and applied them ruthlessly. Although the insurgency was defeated, Commonwealth troops remained in the region as the Cold War continued. Against this backdrop, independence for the Federation within the Commonwealth was granted on 31 August 1957, with Tunku Abdul Rahman as the first prime minister. Chinese reaction against the MCP was shown by the formation of the Malayan Chinese Association (MCA) in 1949 as a vehicle for moderate Chinese political opinion. Its leader Tan Cheng Lock favoured a policy of collaboration with UMNO to win Malayan independence on a policy of equal citizenship, but with sufficient concessions to Malay sensitivities to ease nationalist fears. Tan formed a close collaboration with Tunku (Prince) Abdul Rahman, the Chief Minister of Kedah and from 1951 successor to Datuk Onn as leader of UMNO. Since the British had announced in 1949 that Malaya would soon become independent whether the Malayans liked it or not, both leaders were determined to forge an agreement their communities could live with as a basis for a stable independent state. The UMNO-MCA Alliance, which was later joined by the Malayan Indian Congress (MIC), won convincing victories in local and state elections in both Malay and Chinese areas between 1952 and 1955. The introduction of elected local government was another important step in defeating the Communists. After Joseph Stalin's death in 1953, there was a split in the MCP leadership over the wisdom of continuing the armed struggle. 
Many MCP militants lost heart and went home, and by the time Templer left Malaya in 1954, the Emergency was over, although Chin Peng led a diehard group that lurked in the inaccessible country along the Thai border for many years. During 1955 and 1956 UMNO, the MCA and the British hammered out a constitutional settlement based on the principle of equal citizenship for all races. In exchange, the MCA agreed that Malaya's head of state would be drawn from the ranks of the Malay Sultans, that Malay would be the official language, and that Malay education and economic development would be promoted and subsidised. In effect, this meant that Malaya would be run by the Malays, particularly since they continued to dominate the civil service, the army and the police, but that the Chinese and Indians would have proportionate representation in the Cabinet and the parliament, would run those states where they were the majority, and would have their economic position protected. The difficult issue of who would control the education system was deferred until after independence. This came on 31 August 1957, when Tunku Abdul Rahman became the first Prime Minister of independent Malaya. This left the unfinished business of the other British-ruled territories in the region. After the Japanese surrender the Brooke family and the British North Borneo Company gave up their control of Sarawak and North Borneo respectively, and these became British Crown colonies. They were much less economically developed than Malaya, and their local political leaderships were too weak to demand independence. Singapore, with its large Chinese majority, achieved autonomy in 1955, and in 1959 its young leader Lee Kuan Yew became Prime Minister. The Sultan of Brunei remained as a British client in his oil-rich enclave. Between 1959 and 1962 the British government orchestrated complex negotiations between these local leaders and the Malayan government. 
On 24 April 1961, Lee Kuan Yew proposed the idea of forming Malaysia during a meeting with Tunku Abdul Rahman, after which the Tunku invited Lee to prepare a paper elaborating on the idea. On 9 May, Lee sent the final version of the paper to the Tunku and to the then deputy Malayan Prime Minister, Abdul Razak. There were doubts about the practicality of the idea, but Lee assured the Malayan government of continued Malay political dominance in the new federation. Razak supported the idea of the new federation and worked to convince the Tunku to back it. On 27 May 1961, Abdul Rahman proposed the idea of forming "Malaysia", which would consist of Brunei, Malaya, North Borneo, Sarawak, and Singapore, all except Malaya still under British rule. It was stated that this would allow the central government to better control and combat communist activities, especially in Singapore. It was also feared that if Singapore became independent, it would become a base for Chinese chauvinists to threaten Malayan sovereignty. The proposed inclusion of British territories besides Singapore was intended to keep the ethnic composition of the new nation similar to that of Malaya, with the Malay and indigenous populations of the other territories cancelling out the Chinese majority in Singapore. Although Lee Kuan Yew supported the proposal, his opponents from the Singaporean Socialist Front (Barisan Sosialis) resisted, arguing that this was a ploy for the British to continue controlling the region. Most political parties in Sarawak were also against the merger, and in North Borneo, where there were no political parties, community representatives also stated their opposition. Although the Sultan of Brunei supported the merger, the Parti Rakyat Brunei opposed it as well. At the Commonwealth Prime Ministers Conference in 1961, Abdul Rahman explained his proposal further to its opponents. 
In October, he obtained the British government's agreement to the plan, provided that feedback be obtained from the communities involved in the merger. The Cobbold Commission, named after its head, Lord Cobbold, conducted a study in the Borneo territories and approved a merger with North Borneo and Sarawak; however, it found that a substantial number of Bruneians opposed merger. North Borneo drew up a list of points, referred to as the 20-point agreement, proposing terms for its inclusion in the new federation. Sarawak prepared a similar memorandum, known as the 18-point agreement. Some of the points in these agreements were incorporated into the eventual constitution, while others were accepted only orally. These memoranda are often cited by those who believe that Sarawak's and North Borneo's rights have been eroded over time. A referendum was conducted in Singapore to gauge opinion, and 70% supported merger with substantial autonomy given to the state government. The Sultanate of Brunei withdrew from the planned merger due to opposition from certain segments of its population as well as arguments over the payment of oil royalties and the status of the sultan in the planned merger. Additionally, the Bruneian Parti Rakyat Brunei staged an armed revolt, which, though it was put down, was viewed as potentially destabilising to the new nation. After reviewing the Cobbold Commission's findings, the British government appointed the Lansdowne Commission to draft a constitution for Malaysia. The eventual constitution was essentially the same as the 1957 constitution, albeit with some rewording; for instance, it gave recognition to the special position of the natives of the Borneo states. North Borneo, Sarawak and Singapore were also granted some autonomy unavailable to the states of Malaya. After negotiations in July 1963, it was agreed that Malaysia would come into being on 31 August 1963, consisting of Malaya, North Borneo, Sarawak, and Singapore. 
The date was to coincide with the independence day of Malaya and the British grant of self-rule to Sarawak and North Borneo. However, Indonesia and the Philippines strenuously objected to the development, with Indonesia claiming Malaysia represented a form of "neocolonialism" and the Philippines claiming North Borneo as its territory. The opposition from the Indonesian government led by Sukarno, together with attempts by the Sarawak United People's Party, delayed the formation of Malaysia. Due to these factors, an eight-member UN team was formed to ascertain whether North Borneo and Sarawak truly wanted to join Malaysia. Malaysia formally came into being on 16 September 1963, consisting of Malaya, North Borneo, Sarawak, and Singapore. In 1963 the total population of Malaysia was about 10 million. At the time of independence, Malaya had great economic advantages. It was among the world's leading producers of three valuable commodities: rubber, tin, and palm oil. It was also a significant iron ore producer. These export industries gave the Malayan government a healthy surplus to invest in industrial development and infrastructure projects. Like other developing nations in the 1950s and 1960s, Malaya (and later Malaysia) placed great stress on state planning, although UMNO was never a socialist party. The First and Second Malayan Plans (1956–60 and 1961–65 respectively) stimulated economic growth through state investment in industry and the repair of infrastructure such as roads and ports, which had been damaged and neglected during the war and the Emergency. The government was keen to reduce Malaya's dependence on commodity exports, which put the country at the mercy of fluctuating prices. It was also aware that demand for natural rubber was bound to fall as the production and use of synthetic rubber expanded. Since a third of the Malay workforce worked in the rubber industry, it was important to develop alternative sources of employment. 
Competition for Malaya's rubber markets meant that the profitability of the rubber industry increasingly depended on keeping wages low, which perpetuated rural Malay poverty. Both Indonesia and the Philippines withdrew their ambassadors from Malaya on 15 September 1963, the day before Malaysia's formation. In Jakarta the British and Malayan embassies were stoned, and the British consulate in Medan was ransacked, with Malaya's consul taking refuge in the US consulate. Malaysia withdrew its ambassadors in response, and asked Thailand to represent Malaysia in both countries. Indonesian President Sukarno, backed by the powerful Communist Party of Indonesia (PKI), chose to regard Malaysia as a "neocolonialist" plot against his country, and backed a communist insurgency in Sarawak, mainly involving elements of the local Chinese community. Indonesian irregular forces were infiltrated into Sarawak, where they were contained by Malaysian and Commonwealth forces. This period of "Konfrontasi", an economic, political, and military confrontation, lasted until the downfall of Sukarno in 1966. The Philippines objected to the formation of the federation, claiming North Borneo was part of Sulu, and thus of the Philippines. In 1966 the new president, Ferdinand Marcos, dropped the claim, although it has since been revived and is still a point of contention marring Philippine–Malaysian relations. The Depression of the 1930s, followed by the outbreak of the Sino-Japanese War, had the effect of ending Chinese emigration to Malaya. This stabilised the demographic situation and ended the prospect of the Malays becoming a minority in their own country. At the time of independence in 1957, Malays comprised 55% of the population, Chinese 35% and Indians 10%. The inclusion of majority-Chinese Singapore altered this balance, upsetting many Malays; the federation increased the Chinese proportion to close to 40%. 
Both UMNO and the MCA were nervous about the possible appeal of Lee's People's Action Party (then seen as a radical socialist party) to voters in Malaya, and tried to organise a party in Singapore to challenge Lee's position there. Lee in turn threatened to run PAP candidates in Malaya at the 1964 federal elections, despite an earlier agreement that he would not do so (see PAP–UMNO relations). Racial tensions intensified as the PAP created an opposition alliance aiming for equality between the races. This provoked Tunku Abdul Rahman to demand that Singapore withdraw from Malaysia. While the Singaporean leaders attempted to keep Singapore in the Federation, the Malaysian Parliament voted 126–0 on 9 August 1965 in favour of the expulsion of Singapore. The most vexed issues of independent Malaysia were education and the disparity of economic power among the ethnic communities. The Malays were unhappy with the wealth of the Chinese community, even after the expulsion of Singapore, and Malay political movements emerged based on this grievance. However, since there was no effective opposition party, these issues were contested mainly within the coalition government, which won all but one seat in the first post-independence Malayan Parliament. The two issues were related, since the Chinese advantage in education played a large part in maintaining their control of the economy, which the UMNO leaders were determined to end. The MCA leaders were torn between the need to defend their own community's interests and the need to maintain good relations with UMNO. This produced a crisis in the MCA in 1959, in which a more assertive leadership under Lim Chong Eu defied UMNO over the education issue, only to be forced to back down when Tunku Abdul Rahman threatened to break up the coalition. The Education Act of 1961 put UMNO's victory on the education issue into legislative form. 
Henceforward Malay and English would be the only teaching languages in secondary schools, and state primary schools would teach in Malay only. Although the Chinese and Indian communities could maintain their own Chinese- and Tamil-language primary schools, all their students were required to learn Malay and to study an agreed "Malayan curriculum". Most importantly, the entry exam to the University of Malaya (which moved from Singapore to Kuala Lumpur in 1963) would be conducted in Malay, even though most teaching at the university was in English until the 1970s. This had the effect of excluding many Chinese students. At the same time Malay schools were heavily subsidised, and Malays were given preferential treatment. This obvious defeat for the MCA greatly weakened its support in the Chinese community. As in education, the UMNO government's unspoken agenda in the field of economic development aimed to shift economic power away from the Chinese and towards the Malays. The two Malayan Plans and the First Malaysian Plan (1966–1970) directed resources heavily into developments that would benefit the rural Malay community, such as village schools, rural roads, clinics, and irrigation projects. Several agencies were set up to enable Malay smallholders to upgrade their production and increase their incomes. The Federal Land Development Authority (FELDA) helped many Malays to buy farms or to upgrade ones they already owned. The state also provided a range of incentives and low-interest loans to help Malays start businesses, and government tendering systematically favoured Malay companies, leading many Chinese-owned businesses to "Malayanise" their management. All this certainly tended to reduce the gap between Chinese and Malay standards of living, although some argued that this would have happened anyway as Malaysia's trade and general prosperity increased. 
The collaboration of the MCA and the MIC in these policies weakened their hold on the Chinese and Indian electorates. At the same time, the effect of the government's affirmative action policies of the 1950s and 1960s had been to create a discontented class of educated but underemployed Malays. This was a dangerous combination, and led to the formation of a new party, the Malaysian People's Movement (Gerakan Rakyat Malaysia), in 1968. Gerakan was a deliberately non-communal party, bringing in Malay trade unionists and intellectuals as well as Chinese and Indian leaders. At the same time, an Islamist party, the Islamic Party of Malaysia (PAS), and a democratic socialist party, the Democratic Action Party (DAP), gained increasing support, at the expense of UMNO and the MCA respectively. Following the end of the Malayan Emergency in 1960, the predominantly ethnic Chinese Malayan National Liberation Army, the armed wing of the Malayan Communist Party, had retreated to the Malaysia–Thailand border, where it regrouped and retrained for future offensives against the Malaysian government. The insurgency officially began when the MCP ambushed security forces at Kroh–Betong, in the northern part of Peninsular Malaysia, on 17 June 1968. Instead of declaring a "state of emergency" as the British had done previously, the Malaysian government responded to the insurgency by introducing several policy initiatives, including the Security and Development Program (KESBAN), "Rukun Tetangga" (Neighbourhood Watch), and the RELA Corps (People's Volunteer Group). At the May 1969 federal elections, the UMNO-MCA-MIC Alliance polled only 48% of the vote, although it retained a majority in the legislature. The MCA lost most of the Chinese-majority seats to Gerakan or DAP candidates. The victorious opposition celebrated by holding a motorcade through the main streets of Kuala Lumpur, with supporters holding up brooms as a signal of their intention to make sweeping changes. 
Fearing what these changes might mean for them (much of the country's business was Chinese-owned), many Malays reacted with a backlash, leading rapidly to riots and inter-communal violence in which about 6,000 Chinese homes and businesses were burned and at least 184 people were killed, although Western diplomatic sources at the time suggested a toll closer to 600, with most of the victims Chinese. The government declared a state of emergency, and a National Operations Council, headed by Deputy Prime Minister Tun Abdul Razak, took power from the government of Tunku Abdul Rahman, who in September 1970 was forced to retire in favour of Abdul Razak. The Council consisted of nine members, mostly Malay, and wielded full political and military power. Using the Emergency-era Internal Security Act (ISA), the new government suspended Parliament and political parties, imposed press censorship and placed severe restrictions on political activity. The ISA gave the government power to intern any person indefinitely without trial. These powers were widely used to silence the government's critics, and have never been repealed. The Constitution was changed to make illegal any criticism, even in Parliament, of the Malaysian monarchy, the special position of Malays in the country, or the status of Malay as the national language. In 1971 Parliament reconvened, and a new government coalition, the National Front (Barisan Nasional), was formed in 1973 to replace the Alliance party. The coalition consisted of UMNO, the MCA, the MIC, Gerakan, the PPP, and regional parties in Sabah and Sarawak. The PAS also joined the Front but was expelled in 1977. The DAP was left outside as the only significant opposition party. Abdul Razak held office until his death in 1976. He was succeeded by Datuk Hussein Onn, the son of UMNO's founder Onn Jaafar, and then in 1981 by Tun Mahathir Mohamad, a former Education Minister, who held power for 22 years. 
During these years policies were put in place that led to the rapid transformation of Malaysia's economy and society, such as the controversial New Economic Policy, launched by Prime Minister Tun Abdul Razak, which was intended to increase the bumiputras' proportional share of the economic "pie" relative to other ethnic groups. Malaysia has since maintained a delicate ethno-political balance, with a system of government that has attempted to combine overall economic development with political and economic policies that promote equitable participation of all races. In 1970 three-quarters of Malaysians living below the poverty line were Malays, the majority of Malays were still rural workers, and Malays were still largely excluded from the modern economy. The government's response was the New Economic Policy of 1971, which was to be implemented through a series of four five-year plans from 1971 to 1990. The plan had two objectives: the elimination of poverty, particularly rural poverty, and the elimination of the identification of race with prosperity. This latter policy was understood to mean a decisive shift in economic power from the Chinese to the Malays, who until then made up only 5% of the professional class. Poverty was tackled through an agricultural policy that resettled 250,000 Malays on newly cleared farmland, greater investment in rural infrastructure, and the creation of free trade zones in rural areas to create new manufacturing jobs. Little was done to improve the living standards of the low-paid workers in plantation agriculture, although this group steadily declined as a proportion of the workforce. By 1990 the poorest parts of Malaysia were rural Sabah and Sarawak, which lagged significantly behind the rest of the country. 
During the 1970s and '80s rural poverty did decline, particularly in the Malayan Peninsula, but critics of the government's policy contend that this was mainly due to the growth of overall national prosperity (due in large part to the discovery of important oil and gas reserves) and migration of rural people to the cities rather than to state intervention. These years saw rapid growth in Malaysian cities, particularly Kuala Lumpur, which became a magnet for immigration both from rural Malaya and from poorer neighbours such as Indonesia, Bangladesh, Thailand and the Philippines. Urban poverty became a problem for the first time, with shanty towns growing up around the cities. The second arm of government policy, driven mainly by Mahathir first as Education Minister and then as Prime Minister, was the transfer of economic power to the Malays. Mahathir greatly expanded the number of secondary schools and universities throughout the country, and enforced the policy of teaching in Malay rather than English. This had the effect of creating a large new Malay professional class. It also created an unofficial barrier against Chinese access to higher education, since few Chinese are sufficiently fluent in Malay to study at Malay-language universities. Chinese families therefore sent their children to universities in Singapore, Australia, Britain or the United States – by 2000, for example, 60,000 Malaysians held degrees from Australian universities. This had the unintended consequence of exposing large numbers of Malaysians to life in Western countries, creating a new source of discontent. Mahathir also greatly expanded educational opportunities for Malay women – by 2000 half of all university students were women. To find jobs for all these new Malay graduates, the government created several agencies for intervention in the economy. 
The most important of these were PERNAS (National Corporation Ltd.), PETRONAS (National Petroleum Ltd.), and HICOM (Heavy Industry Corporation of Malaysia), which not only directly employed many Malays but also invested in growing areas of the economy to create new technical and administrative jobs which were preferentially allocated to Malays. As a result, the share of Malay equity in the economy rose from 1.5% in 1969 to 20.3% in 1990, and the percentage of businesses of all kinds owned by Malays rose from 39 percent to 68 percent. This latter figure was deceptive because many businesses that appeared to be Malay-owned were still indirectly controlled by Chinese, but there is no doubt that the Malay share of the economy considerably increased. The Chinese remained disproportionately powerful in Malaysian economic life, but by 2000 the distinction between Chinese and Malay business was fading as many new corporations, particularly in growth sectors such as information technology, were owned and managed by people from both ethnic groups. Malaysia's rapid economic progress since 1970, which was only temporarily disrupted by the Asian financial crisis of 1997, has not been matched by change in Malaysian politics. The repressive measures passed in 1970 remain in place. Malaysia has had regular elections since 1974, and although campaigning is reasonably free at election time, it is in effect a one-party state, with the UMNO-controlled National Front usually winning nearly all the seats, while the DAP wins some Chinese urban seats and the PAS some rural Malay ones. Since the DAP and the PAS have diametrically opposed policies, they have been unable to form an effective opposition coalition. There is almost no criticism of the government in the media and public protest remains severely restricted. The ISA continues to be used to silence dissidents, and the members of the UMNO youth movement are deployed to physically intimidate opponents. 
The restoration of democracy after the 1969 crisis provoked disputes within UMNO, a power struggle that intensified after the death of Tun Abdul Razak. The ailing Datuk Hussein bin Onn replaced him, but the fight for control shifted to the appointment of the deputy prime minister. Mahathir Mohamad, an advocate of bumiputra interests who also sought to benefit the other ethnic communities, was chosen. Under the premiership of Mahathir Mohamad, Malaysia experienced economic growth from the 1980s, a 1985–86 property market depression, and a return to growth through to the mid-1990s. Mahathir increased privatisation and introduced the New Development Policy (NDP), designed to increase economic wealth for all Malaysians, rather than just the Malays. The period saw a shift from an agriculture-based economy to one based on manufacturing and industry in areas such as computers and consumer electronics. It was during this period, too, that the physical landscape of Malaysia changed with the emergence of numerous mega-projects. Notable amongst these projects were the construction of the Petronas Twin Towers (at the time the tallest building in the world and, as of 2016, still the tallest twin towers), Kuala Lumpur International Airport (KLIA), the North–South Expressway, the Sepang International Circuit, the Multimedia Super Corridor (MSC), the Bakun hydroelectric dam, and Putrajaya, the new federal administrative capital. Under Mahathir Mohamad's long prime ministership (1981–2003), Malaysia's political culture became increasingly centralised and authoritarian, owing to Mahathir's belief that multiethnic Malaysia could only remain stable through controlled democracy. In 1986–87, he faced leadership challenges within his own party. There were also attacks by the government on several non-governmental organisations (NGOs) that were critical of various government policies. 
There were also issues such as the questioning by the MCA's Lee Kim Sai of the use of the term "pendatang" (immigrants), which was seen as challenging the Malays' bumiputra status, as well as rumours of forced conversion to or from Islam. Mahathir initiated a crackdown on opposition dissidents, known as Operation Lalang, using the Internal Security Act: in October 1987 the ISA was invoked to arrest 106 people, including opposition leaders. The head of the judiciary and five members of the supreme court who had questioned his use of the ISA were also suspended, and a clampdown on Malaysia's press occurred. This culminated in the dismissal and imprisonment, on unsubstantiated charges, of the Deputy Prime Minister, Anwar Ibrahim, in 1998 after an internal dispute within the government. The complicity of the judiciary in this persecution was seen as a particularly clear sign of the decline of Malaysian democracy. The Anwar affair led to the formation of a new party, the People's Justice Party, or Keadilan, led by Anwar's wife, Wan Azizah Wan Ismail. At the 1999 elections Keadilan formed a coalition with the DAP and the PAS known as the Alternative Front (Barisan Alternatif). As a result, the PAS won a number of Malay seats from UMNO, but many Chinese voters disapproved of the DAP's unnatural alliance with the Islamist PAS, causing the DAP to lose many of its seats to the MCA, including that of its veteran leader, Lim Kit Siang. Wan Azizah won her husband's former constituency in Penang, but otherwise Keadilan made little impact. In the late 1990s, Malaysia was shaken by the Asian financial crisis, which damaged Malaysia's assembly line-based economy. Mahathir combated it initially with IMF-approved policies. 
However, the devaluation of the ringgit and the deepening recession led him to create his own programme, based on protecting Malaysia from foreign investors and reinvigorating the economy through construction projects and the lowering of interest rates. The policies caused Malaysia's economy to rebound by 2002, but brought disagreement between Mahathir and his deputy, Anwar Ibrahim, who backed the IMF policies. This led to the sacking of Anwar, causing political unrest. Anwar was arrested and banned from politics on what are widely considered trumped-up charges. In 2003 Mahathir, Malaysia's longest-serving prime minister, voluntarily retired in favour of his deputy, Abdullah Ahmad Badawi. Dato Seri Abdullah Ahmad Badawi freed Anwar, which was seen as a portent of a mild liberalisation. At the 2004 election, the National Front led by Abdullah won a massive victory, virtually wiping out the PAS and Keadilan, although the DAP recovered the seats it had lost in 1999. This victory was seen as the result mainly of Abdullah's personal popularity and the strong recovery of Malaysia's economy, which had lifted the living standards of many Malaysians to almost first-world levels, coupled with an ineffective opposition. The government's objective, as expressed in "Wawasan 2020", is for Malaysia to become a fully developed country by 2020. It leaves unanswered, however, the question of when and how Malaysia will acquire a first-world political system (a multi-party democracy, a free press, an independent judiciary and the restoration of civil and political liberties) to go with its new economic maturity. In November 2007, Malaysia was rocked by two anti-government rallies. 
The 2007 Bersih Rally, attended by some 40,000 people, was held in Kuala Lumpur on 10 November 2007 to campaign for electoral reform. It was precipitated by allegations of corruption and discrepancies in the Malaysian election system that heavily favoured the ruling coalition, Barisan Nasional, which had been in power since Malaya achieved independence in 1957. Another rally was held on 25 November 2007 in Kuala Lumpur, led by HINDRAF. The rally organiser, the Hindu Rights Action Force, had called the protest over alleged discriminatory policies favouring ethnic Malays. The crowd was estimated at between 5,000 and 30,000. In both cases the government and police tried to prevent the gatherings from taking place. On 16 October 2008, HINDRAF was banned when the government labelled the organisation "a threat to national security". Najib Razak entered office as Prime Minister with a sharp focus on domestic economic issues and political reform. On his first day as Prime Minister, Najib announced the removal of bans on two opposition newspapers, "Suara Keadilan" and "Harakahdaily", run respectively by opposition leader Datuk Seri Anwar Ibrahim's People's Justice Party and the Pan-Malaysian Islamic Party, and the release of 13 people held under the Internal Security Act. Among the released detainees were two ethnic Indian activists who had been arrested in December 2007 for leading an anti-government campaign, three foreigners and eight suspected Islamic militants. Najib also pledged to conduct a comprehensive review of the much-criticised law, which allows for indefinite detention without trial. In the speech, he emphasised his commitment to tackling poverty, restructuring Malaysian society, expanding access to quality education for all, and promoting renewed "passion for public service". He also deferred and later abandoned the digital television transition plan for free-to-air broadcasters such as Radio Televisyen Malaysia. 
Malaysia Day, celebrating the formation of Malaysia on 16 September 1963, was declared a public holiday in 2010, complementing the existing 31 August celebration of Hari Merdeka. In September 2016 Mahathir submitted a request to the King asking that Najib be dismissed, although no action was taken on it. Tun Dr Mahathir Mohamad, who had left UMNO in 2016 and formed his own political party, the Malaysian United Indigenous Party, which teamed up with three other political parties to form Pakatan Harapan, was sworn in as Prime Minister of Malaysia after winning the election of 10 May 2018. He defeated Najib Razak, who led the Barisan Nasional coalition that had ruled Malaysia for 61 years, since 1957. Najib Razak's defeat was attributed to factors such as the 1Malaysia Development Berhad scandal, which had been unfolding since 2015; the introduction of a 6% Goods and Services Tax on 1 April 2015; the high cost of living; and the open clash with Tun Dr Mahathir Mohamad. The unpopular tax was reduced to 0% on 1 June 2018. The government under Tun Dr Mahathir tabled a bill to repeal the GST for its first reading in the Dewan Rakyat on 31 July 2018, and the GST was replaced by the Sales and Service Tax with effect from 1 September 2018.
https://en.wikipedia.org/wiki?curid=13806
Kiwi Kiwi or kiwis are flightless birds native to New Zealand, in the genus Apteryx and family Apterygidae. Approximately the size of a domestic chicken, kiwi are by far the smallest living ratites (a group that also includes ostriches, emus, rheas, and cassowaries). DNA sequence comparisons have yielded the surprising conclusion that kiwi are much more closely related to the extinct Malagasy elephant birds than to the moa with which they shared New Zealand. There are five recognised species, four of which are currently listed as vulnerable and one of which is near-threatened. All species have been negatively affected by historic deforestation, but the remaining large areas of their forest habitat are now well protected in reserves and national parks. At present, the greatest threat to their survival is predation by invasive mammalian predators. The kiwi's egg is one of the largest in proportion to body size (up to 20% of the female's weight) of any species of bird in the world. Other unique adaptations of kiwi, such as their hairlike feathers, short and stout legs, and the use of nostrils at the end of their long beak to detect prey before they ever see it, have helped the bird to become internationally well known. The kiwi is recognised as an icon of New Zealand, and the association is so strong that the term "Kiwi" is used internationally as the colloquial demonym for New Zealanders. The Māori-language word "kiwi" is generally accepted to be "of imitative origin", from the bird's call. However, some linguists derive the word from Proto-Nuclear Polynesian "*kiwi", which refers to "Numenius tahitiensis", the bristle-thighed curlew, a migratory bird that winters in the tropical Pacific islands. With its long decurved bill and brown body, the curlew resembles the kiwi, so the first Polynesian settlers may have applied the word kiwi to the new-found bird. 
The genus name "Apteryx" is derived from Ancient Greek "without wing": "a-", "without" or "not"; "pterux", "wing". The name is usually uncapitalised, with the plural either the anglicised "kiwis" or, consistent with the Māori language, "kiwi" without an "‑s". Although it was long presumed that the kiwi was closely related to the other New Zealand ratites, the moa, recent DNA studies have identified its closest relative as the extinct elephant bird of Madagascar; among extant ratites, the kiwi is more closely related to the emu and the cassowaries than to the moa. Research published in 2013 on an extinct genus, "Proapteryx", known from the Miocene deposits of the Saint Bathans Fauna, found that it was smaller and probably capable of flight, supporting the hypothesis that the ancestor of the kiwi reached New Zealand independently from moa, which were already large and flightless by the time kiwi appeared. There are five known species of kiwi, as well as a number of subspecies. The kiwi's adaptation to a terrestrial life is extensive: like all the other ratites (ostrich, emu, rhea and cassowary), they have no keel on the sternum to anchor wing muscles. The vestigial wings are so small that they are invisible under the bristly, hair-like, two-branched feathers. While most adult birds have bones with hollow insides to minimise weight and make flight practicable, kiwi have marrow, like mammals and the young of other birds. With no constraints on weight imposed by flight requirements, brown kiwi females carry and lay a single egg that may weigh as much as a quarter of their own body weight. Like most other ratites, they have no uropygial gland (preen gland). Their bill is long, pliable and sensitive to touch, and their eyes have a reduced pecten. Their feathers lack barbules and aftershafts, and they have large vibrissae around the gape. They have 13 flight feathers, no tail and a small pygostyle. Their gizzard is weak and their caecum is long and narrow. 
The eye of the kiwi is the smallest relative to body mass of any avian species, resulting in the smallest visual field as well. The eye has small specialisations for a nocturnal lifestyle, but kiwi rely more heavily on their other senses (auditory, olfactory, and somatosensory). The sight of the kiwi is so underdeveloped that blind specimens have been observed in nature, showing how little they rely on sight for survival and foraging. In one study, it was observed that one-third of a population of "A. rowi" in New Zealand under no environmental stress had ocular lesions in one or both eyes. The same study examined three specimens that were completely blind and found them to be in good physical condition apart from their ocular abnormalities. A 2018 study revealed that the kiwi's closest relatives, the extinct elephant birds, also shared this trait despite their great size. Unlike virtually every other palaeognath, which are generally small-brained by bird standards, kiwi have proportionally large encephalisation quotients. Hemisphere proportions are even similar to those of parrots and songbirds, though there is no evidence of similarly complex behaviour. Before the arrival of humans in the 13th century or earlier, New Zealand's only endemic mammals were bats, and the ecological niches that in other parts of the world were filled by creatures as diverse as horses, wolves and mice were taken up by birds (and, to a lesser extent, reptiles, insects and gastropods). The kiwi's mostly nocturnal habits may be a result of habitat intrusion by predators, including humans. In areas of New Zealand where introduced predators have been removed, such as sanctuaries, kiwi are often seen in daylight. They prefer subtropical and temperate podocarp and beech forests, but they are being forced to adapt to different habitats, such as sub-alpine scrub, tussock grassland, and the mountains. 
Kiwi have a highly developed sense of smell, unusual in a bird, and are the only birds with nostrils at the end of their long beaks. Kiwi eat small invertebrates, seeds, grubs, and many varieties of worms. They also may eat fruit, small crayfish, eels and amphibians. Because their nostrils are located at the end of their long beaks, kiwi can locate insects and worms underground using their keen sense of smell, without actually seeing or feeling them. This sense of smell is due to a highly developed olfactory chamber and surrounding regions. It is a common belief that the kiwi relies solely on its sense of smell to catch prey, but this has not been scientifically observed. Laboratory experiments have suggested that "A. australis" can rely on olfaction alone, but it does not do so consistently under natural conditions. Instead, the kiwi may rely on auditory and/or vibrotactile cues. Once bonded, a male and female kiwi tend to live their entire lives as a monogamous couple. During the mating season, June to March, the pair call to each other at night, and meet in the nesting burrow every three days. These relationships may last for up to 20 years. They are unusual among birds in that, along with some raptors, they have a functioning pair of ovaries. (In most birds, and in platypuses, the right ovary never matures, so that only the left is functional.) Kiwi eggs can weigh up to one-quarter the weight of the female. Usually, only one egg is laid per season. The kiwi lays one of the largest eggs in proportion to its size of any bird in the world, so even though the kiwi is about the size of a domestic chicken, it is able to lay eggs that are about six times the size of a chicken's egg. The eggs are smooth in texture, and are ivory or greenish white. The male incubates the egg, except for the great spotted kiwi, "A. haastii", in which both parents are involved. The incubation period is 63–92 days. 
Producing the huge egg places significant physiological stress on the female; for the thirty days it takes to grow the fully developed egg, she must eat three times her normal amount of food. Two to three days before the egg is laid, there is little space left inside the female for her stomach, and she is forced to fast. Lice in the genus "Apterygon" and in the subgenus "Rallicola" ("Aptericola") are exclusively ectoparasites of kiwi species. Nationwide studies show that only around 5–10% of kiwi chicks survive to adulthood without management. However, in areas under active pest management, survival rates for North Island brown kiwi can be far higher. For example, prior to a joint 1080 poison operation undertaken by DOC and the Animal Health Board in Tongariro Forest in 2006, 32 kiwi chicks were radio-tagged; 57% of the radio-tagged chicks survived to adulthood. Efforts to protect kiwi have had some success, and in 2017 two species were downlisted from endangered to vulnerable by the IUCN. In 2000, the Department of Conservation set up five kiwi sanctuaries focused on developing methods to protect kiwi and to increase their numbers. A number of other mainland conservation islands and fenced sanctuaries also have significant populations of kiwi. North Island brown kiwi were introduced to the Cape Sanctuary in Hawke's Bay between 2008 and 2011, which in turn provided captive-raised chicks that were released back into Maungataniwha Native Forest. Operation Nest Egg is a programme run by the BNZ Save the Kiwi Trust—a partnership between the Bank of New Zealand, the Department of Conservation and the Royal Forest and Bird Protection Society. Kiwi eggs and chicks are removed from the wild and hatched and/or raised in captivity until big enough to fend for themselves—usually when they weigh around 1200 grams (42 ounces). They are then returned to the wild. 
An Operation Nest Egg bird has a 65% chance of surviving to adulthood—compared to just 5% for wild-hatched and raised chicks. The tool is used on all kiwi species except the little spotted kiwi. In 2004, anti-1080 activist Phillip Anderton posed for the New Zealand media with a kiwi he claimed had been poisoned. An investigation revealed that Anderton had lied to journalists and the public: he had used a kiwi that had been caught in a possum trap. Extensive monitoring shows that kiwi are not at risk from the use of biodegradable 1080 poison. Introduced mammalian predators, namely stoats, dogs, ferrets, and cats, are the principal threats to kiwi. The biggest threat to kiwi chicks is stoats, while dogs are the biggest threat to adult kiwi. Stoats are responsible for approximately half of kiwi chick deaths in many areas throughout New Zealand. Young kiwi chicks are vulnerable to stoat predation until they reach about in weight, at which time they can usually defend themselves. Cats also prey on kiwi chicks, to a lesser extent. These predators can cause large and abrupt declines in populations. In particular, dogs find the distinctive strong scent of kiwi irresistible and easy to track, such that they can catch and kill kiwi in seconds. Motor vehicle strike is a threat to all kiwi where roads cross through their habitat. Badly set possum traps often kill or maim kiwi. Habitat destruction is another major threat to kiwi; the restricted distribution and small size of some kiwi populations increase their vulnerability to inbreeding. Research has shown that the combined effect of predators and other mortality (accidents, etc.) results in less than 5% of kiwi chicks surviving to adulthood. The Māori traditionally believed that kiwi were under the protection of Tane Mahuta, god of the forest. They were used as food and their feathers were used for kahu kiwi—ceremonial cloaks. 
Today, while kiwi feathers are still used, they are gathered from birds that die naturally, through road accidents or predation, or from captive birds. Kiwi are no longer hunted and some Māori consider themselves the birds' guardians. In 1813, George Shaw named the genus "Apteryx" in his species description of the southern brown kiwi, which he called "the southern apteryx". Captain Andrew Barclay of the ship "Providence" provided Shaw with the specimen. Shaw's description was accompanied by two plates, engraved by Frederick Polydore Nodder; they were published in volume 24 of "The Naturalist's Miscellany". In 1851, London Zoo became the first zoo to keep kiwi. The first captive breeding took place in 1945. As of 2007, only 13 zoos outside New Zealand held kiwi. The Frankfurt Zoo has 12, the Berlin Zoo has seven, Walsrode Bird Park has one, the Avifauna Bird Park in the Netherlands has three, the San Diego Zoo has five, the San Diego Zoo Safari Park has one, the National Zoo in Washington, DC has eleven, the Smithsonian Conservation Biology Institute has one, and the Columbus Zoo and Aquarium has three. The kiwi as a symbol first appeared in the late 19th century in New Zealand regimental badges. It featured in the badges of the South Canterbury Battalion in 1886 and the Hastings Rifle Volunteers in 1887. Soon after, the kiwi appeared in many military badges; and in 1906, when Kiwi Shoe Polish was widely sold in the UK and the US, the symbol became more widely known. During the First World War, the name "kiwi" for New Zealand soldiers came into general use, and a giant kiwi (now known as the Bulford kiwi) was carved on the chalk hill above Sling Camp in England. Usage has become so widespread that all New Zealanders overseas and at home are now commonly referred to as "Kiwis". 
The kiwi has since become the most well-known national symbol for New Zealand, and the bird is prominent in the coat of arms, crests and badges of many New Zealand cities, clubs and organisations; at the national level, the red silhouette of a kiwi is in the centre of the roundel of the Royal New Zealand Air Force. The kiwi is featured in the logo of the New Zealand Rugby League, and the New Zealand national rugby league team are nicknamed the Kiwis. A kiwi has featured on the reverse side of three New Zealand coins: the one florin (two-shilling) coin from 1933 to 1966, the twenty-cent coin from 1967 to 1990, and the one-dollar coin since 1991. In currency trading the New Zealand dollar is often referred to as "the kiwi".
https://en.wikipedia.org/wiki?curid=17362
Kiwifruit Kiwifruit (often shortened to kiwi outside Australia and New Zealand), or Chinese gooseberry, is the edible berry of several species of woody vines in the genus "Actinidia". The most common cultivar group of kiwifruit ("Actinidia deliciosa" 'Hayward') is oval, about the size of a large hen's egg: in length and in diameter. It has a thin, hair-like, fibrous, sour-but-edible light brown skin and light green or golden flesh with rows of tiny, black, edible seeds. The fruit has a soft texture with a sweet and unique flavour. In 2017, China produced 50% of the world total of kiwifruit. Kiwifruit is native to central and eastern China. The first recorded description of the kiwifruit dates to the 12th century during the Song dynasty. In the early 20th century, cultivation of kiwifruit spread from China to New Zealand, where the first commercial plantings occurred. The fruit became popular with British and American servicemen stationed in New Zealand during World War II, and later became commonly exported, first to Great Britain and then to California in the 1960s. Early varieties were described in a 1904 nursery catalogue as having "...edible fruits the size of walnuts, and the flavour of ripe gooseberries", leading to the name "Chinese gooseberry". In 1962, New Zealand growers began calling it "kiwifruit" for export marketing, a name commercially adopted in 1974. In New Zealand and Australia, the word "kiwi" alone refers to the kiwi bird or is used as a nickname for New Zealanders; it is almost never used to refer to the fruit. Kiwifruit has since become a common name for all commercially grown green kiwifruit from the genus "Actinidia". In China, the fruit was usually collected from the wild and consumed for medicinal purposes, and the plant was rarely cultivated or bred. 
In New Zealand during the 1940s and 1950s, the fruit became an agricultural commodity through the development of commercially viable cultivars, agricultural practices, shipping, storage, and marketing. The genus "Actinidia" comprises around 60 species. Their fruits are quite variable, although most are easily recognised as kiwifruit because of their appearance and shape. The skin of the fruit varies in size, hairiness and colour. The flesh varies in colour, juiciness, texture and taste. Some fruits are unpalatable, while others taste considerably better than the majority of commercial cultivars. The most commonly sold kiwifruit is derived from "A. deliciosa" (fuzzy kiwifruit). Other species that are commonly eaten include "A. chinensis" (golden kiwifruit), "A. coriacea" (Chinese egg gooseberry), "A. arguta" (hardy kiwifruit), "A. kolomikta" (Arctic kiwifruit), "A. melanandra" (purple kiwifruit), "A. polygama" (silver vine) and "A. purpurea" (hearty red kiwifruit). Most kiwifruit sold belongs to a few cultivars of "A. deliciosa" (fuzzy kiwifruit): 'Hayward', 'Blake' and 'Saanichton 12'. They have a fuzzy, dull brown skin and bright green flesh. The familiar cultivar 'Hayward' was developed by Hayward Wright in Avondale, New Zealand, around 1924. It was initially grown in domestic gardens, but commercial planting began in the 1940s. 'Hayward' is the most commonly available cultivar in stores. It is a large, egg-shaped fruit with a sweet flavour. 'Saanichton 12', from British Columbia, is somewhat more rectangular than 'Hayward' and comparably sweet, but the inner core of the fruit can be tough. 
'Blake' can self-pollinate, but it has a smaller, more oval fruit and the flavour is considered inferior. Kiwi berries are edible fruits the size of a large grape, similar to fuzzy kiwifruit in taste and internal appearance, but the thin, smooth green skin and lack of fuzz makes eating the entire fruit more pleasant. They are primarily produced by three species: "Actinidia arguta" (hardy kiwi), "A. kolomikta" (Arctic kiwifruit) and "A. polygama" (silver vine). They are fast-growing, climbing vines, durable over their growing season. They are referred to as kiwi berry, baby kiwi, dessert kiwi, grape kiwi, or cocktail kiwi. The cultivar 'Issai' is a hybrid of hardy kiwi and silver vine which can self-pollinate. Grown commercially because of its relatively large fruit, 'Issai' is less hardy than most hardy kiwi. "Actinidia chinensis" (golden kiwifruit) has a smooth, bronze skin, with a beak shape at the stem attachment. Flesh colour varies from bright green to a clear, intense yellow. This species is sweeter and more aromatic in flavour compared to "A. deliciosa", similar to some subtropical fruits. One of the most attractive varieties has a red 'iris' around the centre of the fruit and yellow flesh outside. The yellow fruit obtains a higher market price and, being less hairy than the fuzzy kiwifruit, is more palatable for consumption without peeling. A commercially viable variety of this red-ringed kiwifruit, patented as EnzaRed, is a cultivar of the Chinese "hong yang" variety. 'Hort16A' is a golden kiwifruit cultivar marketed worldwide as "Zespri Gold". This cultivar has suffered significant losses in New Zealand from late 2010 to 2013 due to the PSA bacterium. A new cultivar of golden kiwifruit, "Gold3", has been found to be more disease-resistant and most growers have now changed to this cultivar. 
'Gold3', marketed by Zespri as "SunGold", is not quite as sweet as the previous 'Hort16A', having a sour flavour, and lacks the usually slightly pointed tip of 'Hort16A'. Kiwifruit can be grown in most temperate climates with adequate summer heat. Where fuzzy kiwifruit ("A. deliciosa") is not hardy, other species can be grown as substitutes. In commercial farming, different breeds are often used for rootstock, fruit-bearing plants and pollinators; the seeds produced are therefore crossbreeds of their parents. Even if the same breeds are used for pollinators and fruit-bearing plants, there is no guarantee that the fruit will have the same quality as the parent. Additionally, seedlings take seven years before they flower, so determining whether a vine is fruit-bearing or a pollinator is time-consuming. Therefore, most kiwifruits, with the exception of rootstock and new cultivars, are propagated asexually. This is done by grafting the fruit-producing plant onto rootstock grown from seedlings or, if the plant is desired to be a true cultivar, onto rootstock grown from cuttings of a mature plant. Kiwifruit plants generally are dioecious, meaning a plant is either male or female. The male plants have flowers that produce pollen; the females receive the pollen to fertilise their ovules and grow fruit; most kiwifruit requires a male plant to pollinate the female plant. For a good yield of fruit, one male vine for every three to eight female vines is considered adequate. Some varieties can self-pollinate, but even they produce a greater and more reliable yield when pollinated by male kiwifruit. Cross-species pollination is often (but not always) successful as long as bloom times are synchronised. In nature, the species are pollinated by birds and native bumblebees, which visit the flowers for pollen, not nectar. 
The female flowers produce fake anthers with what appears to be pollen on the tips in order to attract pollinators, although these fake anthers lack the DNA and food value of the male anthers. Kiwifruit growers rely on honey bees, the principal "for-hire" pollinator, but commercially grown kiwifruit is notoriously difficult to pollinate. The flowers are not very attractive to honey bees, in part because they do not produce nectar, and bees quickly learn to prefer flowers with nectar. Honey bees are also inefficient cross-pollinators for kiwifruit because they practice "floral fidelity": each honey bee visits only a single type of flower in any foray and maybe only a few branches of a single plant. The pollen needed from a different plant (such as a male for a female kiwifruit) might never reach it were it not for the cross-pollination that principally occurs in the crowded colony, where bees laden with different pollen literally cross paths. To deal with these pollination challenges, some producers blow collected pollen over the female flowers. Most common, though, is saturation pollination, where the honey bee populations are made so large (by placing hives in the orchards at a concentration of about 8 hives per hectare) that bees are forced to use these flowers because of intense competition for all flowers within flight distance. Kiwifruit is picked by hand and commercially grown on sturdy support structures, as it can produce several tonnes per hectare, more than the rather weak vines can support. These are generally equipped with a watering system for irrigation and frost protection in the spring. Kiwifruit vines require vigorous pruning, similar to that of grapevines. Fruit is borne on one-year-old and older canes, but production declines as each cane ages. Canes should be pruned off and replaced after their third year. In the northern hemisphere the fruit ripens in November, while in the southern it ripens in May. 
Four-year-old plants can produce up to 14,000 lbs per acre, while eight-year-old plants can produce 18,000 lbs per acre. The plants produce their maximum at 8 to 10 years old. Seasonal yields are variable: a heavy crop on a vine one season is generally followed by a light crop the following season. Fruits harvested when firm will ripen when stored properly for long periods. This allows fruit to be sent to market up to 8 weeks after harvest. Firm kiwifruit ripen after a few days to a week when stored at room temperature, but should not be kept in direct sunlight. Faster ripening occurs when placed in a paper bag with an apple, pear, or banana. Once a kiwifruit is ripe, however, it is preserved optimally when stored far from other fruits, as it is very sensitive to the ethylene gas they may emit, thereby tending to over-ripen even in the refrigerator. If stored appropriately, ripe kiwifruit normally keep for about one to two weeks. "Pseudomonas syringae actinidiae" (PSA) was first identified in Japan in the 1980s. This bacterial strain has been controlled and managed successfully in orchards in Asia. In 1992, it was found in northern Italy. In 2007/2008, economic losses were observed as a more virulent strain became dominant (PSA V). In 2010 it was found in New Zealand's Bay of Plenty kiwifruit orchards in the North Island. Scientists reported they had worked out that the strain of PSA affecting kiwifruit in New Zealand, Italy and Chile originated in China. In 2017, global production of kiwifruit was 4.04 million tonnes, led by China with 50% of the world total (table). Italy, New Zealand, Iran, and Chile were other major producers. In China, kiwifruit is grown mainly in the mountainous area upstream of the Yangtze River, as well as in Sichuan. Kiwifruit exports from New Zealand rapidly increased from the late 1960s to the early 1970s. By 1976, exports exceeded the amount consumed domestically. 
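To put the per-acre yield figures above in metric terms, they can be converted to tonnes per hectare. The short sketch below is purely illustrative arithmetic on the numbers quoted in the text, using standard conversion factors:

```python
# Convert the quoted kiwifruit yields from lb/acre to tonnes per hectare.
LB_TO_KG = 0.45359237     # kilograms per pound (exact definition)
ACRE_TO_HA = 0.40468564   # hectares per acre

def lbs_per_acre_to_tonnes_per_ha(lbs_per_acre: float) -> float:
    """Convert a yield in pounds per acre to metric tonnes per hectare."""
    kg_per_ha = (lbs_per_acre * LB_TO_KG) / ACRE_TO_HA
    return kg_per_ha / 1000.0

for yield_lb in (14000, 18000):
    t_ha = lbs_per_acre_to_tonnes_per_ha(yield_lb)
    print(f"{yield_lb:,} lb/acre ≈ {t_ha:.1f} t/ha")
```

So 14,000 and 18,000 lb/acre correspond to roughly 15.7 and 20.2 t/ha, consistent with the earlier remark that vines can produce several tonnes per hectare.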
Outside of Australasia, New Zealand kiwifruit are marketed under the brand-name label Zespri. The general name "Zespri" has been used for marketing of all cultivars of kiwifruit from New Zealand since 2012. In the 1980s, countries outside New Zealand began to grow and export kiwifruit. In Italy, the infrastructure and techniques required to support grape production were adapted to the kiwifruit. This, coupled with proximity to the European kiwifruit market, led to Italy becoming the leading producer of kiwifruit in 1989. The growing season of Italian kiwifruit does not overlap much with the New Zealand or the Chilean growing seasons, so direct competition with New Zealand or Chile was not a significant factor. Much of the breeding to refine the green kiwifruit was undertaken by the Plant & Food Research Institute (formerly HortResearch) during the decades of 1970–1999. In 1990, the New Zealand Kiwifruit Marketing Board opened an office for Europe in Antwerp, Belgium. Kiwifruit may be eaten raw, made into juices, used in baked goods, prepared with meat or used as a garnish. The whole fruit, including the skin, is suitable for human consumption; however, the skin of the fuzzy varieties is often discarded due to its texture. Sliced kiwifruit has long been used as a garnish atop whipped cream on pavlova, a meringue-based dessert. Traditionally in China, kiwifruit was not eaten for pleasure, but was given as medicine to children to help them grow and to women who had given birth to help them recover. Raw kiwifruit contains actinidain (also spelled "actinidin"), which is commercially useful as a meat tenderizer and possibly as a digestive aid. Actinidain also makes raw kiwifruit unsuitable for use in desserts containing milk or other dairy products, because the enzyme digests milk proteins. 
The same applies to gelatin-based desserts: actinidain dissolves the proteins in gelatin, either liquefying the dessert or preventing it from solidifying. In a 100-gram amount, green kiwifruit provides 61 calories, is 83% water and 15% carbohydrates, with negligible protein and fat (table). It is particularly rich (20% or more of the Daily Value, DV) in vitamin C (112% DV) and vitamin K (38% DV), and has a moderate content of vitamin E (10% DV), with no other micronutrients in significant content. Gold kiwifruit has similar nutritional value, although only vitamin C has high content in a 100-gram amount (194% DV, table). Kiwifruit seed oil contains on average 62% alpha-linolenic acid, an omega-3 fatty acid. Kiwifruit pulp contains carotenoids, such as provitamin A beta-carotene, lutein and zeaxanthin. The actinidain found in kiwifruit can be an allergen for some individuals, including children. The most common symptoms are unpleasant itching and soreness of the mouth, with wheezing as the most common severe symptom; anaphylaxis may occur.
https://en.wikipedia.org/wiki?curid=17363
Kiel Canal The Kiel Canal (, literally "North-[to]-Baltic Sea canal", formerly known as the Kaiser-Wilhelm-Kanal) is a freshwater canal in the German state of Schleswig-Holstein. The canal was finished in 1895, but later widened, and links the North Sea at Brunsbüttel to the Baltic Sea at Kiel-Holtenau. An average of is saved by using the Kiel Canal instead of going around the Jutland Peninsula. This not only saves time but also avoids storm-prone seas and having to pass through the Sound or Belts. Besides its two sea entrances, the Kiel Canal is linked, at Oldenbüttel, to the navigable River Eider by the short Gieselau Canal. The first connection between the North and Baltic Seas was constructed while the area was ruled by Denmark–Norway. Called the Eider Canal, it used stretches of the Eider River for the link between the two seas. Completed during the reign of Christian VII of Denmark in 1784, the "Eiderkanal" was a part of a waterway from Kiel to the Eider River's mouth at Tönning on the west coast. It was only wide with a depth of , which limited the vessels that could use the canal to 300 tonnes. After the 1864 Second Schleswig War put Schleswig-Holstein under the government of Prussia (from 1871 the German Empire), a new canal was sought by merchants and by the German navy, which wanted to link its bases in the Baltic and the North Sea without the need to sail around Denmark. In June 1887, construction started at Holtenau, near Kiel. The canal took over 9,000 workers eight years to build. On 20 June 1895 the canal was officially opened by Kaiser Wilhelm II for transiting from Brunsbüttel to Holtenau. The next day, a ceremony was held in Holtenau, where Wilhelm II named it the "Kaiser Wilhelm Kanal" (after his grandfather, Kaiser Wilhelm I), and laid the final stone. The opening of the canal was filmed by British director Birt Acres; surviving footage of this early film is preserved in the Science Museum in London. 
The first vessel to pass through the canal was the aviso ; she was sent through in late April, before the canal officially opened, to determine if it was ready for use. The first transatlantic sailing ship to pass through the canal was "Lilly", commanded by Johan Pitka. "Lilly", a barque, was a wooden sailing ship of about 390 tons, built in 1866 in Sunderland, U.K. She had a length of , beam , depth of and a keel. In order to meet the increasing traffic and the demands of the Imperial German Navy, the canal was widened between 1907 and 1914. The widening of the canal allowed the passage of a "Dreadnought"-sized battleship, meaning that these battleships could travel from the Baltic Sea to the North Sea without having to go around Denmark. The enlargement projects were completed by the installation of two larger canal locks in Brunsbüttel and Holtenau. After World War I, the Treaty of Versailles required the canal to be open to vessels of commerce and of war of any nation at peace with Germany, while leaving it under German administration. (The United States opposed this proposal to avoid setting a precedent for similar concessions on the Panama Canal.) The government under Adolf Hitler repudiated the canal's international status in 1936, but the canal was reopened to all traffic after World War II. In 1948, the current name was adopted. The canal was partially closed in March 2013 after two lock gates failed at the western end near Brunsbüttel. Ships larger than were forced to navigate via the Skagerrak, a detour. The failure was blamed on neglect and a lack of funding by the German Federal Government, which had been in a financial dispute with the state of Schleswig-Holstein regarding the canal. Germany's Transport Ministry promised rapid repairs. There are detailed traffic rules for the canal. Each vessel in passage is classified in one of six traffic groups according to its dimensions. 
Larger ships are obliged to accept pilots and specialised canal helmsmen, in some cases even the assistance of a tugboat. Furthermore, there are regulations regarding the passing of oncoming ships. Larger ships may also be required to moor at the bollards provided at intervals along the canal to allow the passage of oncoming vessels. Special rules apply to pleasure craft. All permanent, fixed bridges crossing the canal since its construction have a clearance of . Maximum length for ships passing the Kiel Canal is , with the maximum width (beam) of ; these ships can have a draught of up to . Ships up to a length of may have a draught up to . The bulker "Ever Leader" (deadweight 74001 t) is considered to be the cargo ship that to date has come closest to the overall limits. Several railway lines and federal roads (Autobahnen and Bundesstraßen) cross the canal on eleven fixed links. The bridges have a clearance of allowing for ship heights up to . The oldest bridge still in use is the "Levensau High Bridge" from 1893; however, the bridge will be replaced in the course of a canal expansion already underway. In sequence and in the direction of the official kilometre count from west (Brunsbüttel) to east (Holtenau) these crossings are: Local traffic is also served by 14 ferry lines. Most noteworthy is the “hanging ferry” () beneath the "Rendsburg High Bridge" which needs to be replaced after a collision with a ship in 2016. All ferries are run by the Canal Authority and their use is free of charge.
https://en.wikipedia.org/wiki?curid=17364
Konrad Emil Bloch Konrad Emil Bloch, ForMemRS (21 January 1912 – 15 October 2000) was a German American biochemist. Bloch received the Nobel Prize in Physiology or Medicine in 1964 (jointly with Feodor Lynen) for discoveries concerning the mechanism and regulation of cholesterol and fatty acid metabolism. Bloch was born in Neisse (Nysa), in the German Empire's Prussian Province of Silesia. He was the second child of middle-class parents Hedwig (Striemer) and Frederich D. "Fritz" Bloch. From 1930 to 1934, he studied chemistry at the Technical University of Munich. In 1934, due to the Nazi persecution of Jews, he fled to the "Schweizerische Forschungsinstitut" in Davos, Switzerland, before moving to the United States in 1936. Later he was appointed to the department of biological chemistry at Yale Medical School. In the United States, Bloch enrolled at Columbia University, and received a Ph.D. in biochemistry in 1938. He taught at Columbia from 1939 to 1946. From there he went to the University of Chicago and then to Harvard University as Higgins Professor of Biochemistry in 1954, a post he held until 1982. After retiring from Harvard, he served as the Mack and Effie Campbell Tyner Eminent Scholar Chair in the College of Human Sciences at Florida State University. Bloch shared the Nobel Prize in Physiology or Medicine in 1964 with Feodor Lynen, for their discoveries concerning the mechanism and regulation of cholesterol and fatty acid metabolism. Their work showed that the body first makes squalene from acetate over many steps and then converts the squalene to cholesterol. He traced all the carbon atoms in cholesterol back to acetate. Some of his research was conducted using radioactive acetate in bread mold; this was possible because fungi also produce squalene. He confirmed his results using rats. He was one of several researchers who showed that acetyl Coenzyme A is turned into mevalonic acid. 
Both Bloch and Lynen then showed that mevalonic acid is converted into chemically active isoprene, the precursor to squalene. Bloch also discovered that bile and a female sex hormone were made from cholesterol, which led to the discovery that all steroids were made from cholesterol. His Nobel Lecture was "The Biological Synthesis of Cholesterol." In 1985, Bloch became a Fellow of the Royal Society. In 1988, he was awarded the National Medal of Science. Bloch and his wife Lore Teutsch first met in Munich. They married in the U.S. in 1941. They had two children, Peter Conrad Bloch and Susan Elizabeth Bloch, and two grandchildren, Benjamin Nieman Bloch and Emilie Bloch Sondel. He was fond of skiing, tennis, and music. Bloch died in Lexington, Massachusetts, of congestive heart failure in 2000, aged 88. Lore Bloch died in 2010, aged 98.
https://en.wikipedia.org/wiki?curid=17367
Klement Gottwald Klement Gottwald (23 November 1896 – 14 March 1953) was a Czechoslovak communist politician who was the leader of the Communist Party of Czechoslovakia from 1929 until his death in 1953, titled General Secretary until 1945 and Chairman from 1945 to 1953. He was the first leader of Communist Czechoslovakia from 1948 to 1953. He was the 14th Prime Minister of Czechoslovakia from July 1946 until June 1948, the first Communist to hold the post. In June 1948, he was elected as Czechoslovakia's first Communist president, four months after the 1948 coup d'état in which his party seized power with the backing of the Soviet Union. He held the post until his death. Klement Gottwald was born in Heroltice as the illegitimate son of a poor peasant. Before the First World War he trained in Vienna as a carpenter and also actively participated in the activities of the Social Democratic youth movement. Klement Gottwald was married to who, like him, came from a poor family and was an illegitimate child. Although his wife stood by him through his endeavours, and was his faithful companion, she never joined the Communist Party. They had one daughter, Marta. From 1915 to 1918 Gottwald was a soldier in the Austro-Hungarian Army. It is believed that he fought in the Battle of Zborov, which would mean that he fought there against future General and President Ludvík Svoboda, who fought on the side of the Czechoslovak Legion. Thomas Jakl of the Military History Institute called Gottwald's participation at Zborov a legend: Gottwald was in a hospital in Vienna at the time of the battle. In the summer of 1918, Gottwald deserted from the army. After the establishment of the first Czechoslovak Republic, he served for two years in the Czechoslovak Army. From 1920 to 1921 he worked in Rousínov as a cabinetmaker. 
After the collapse of the Union of Workers' Sports Associations (SDTJ), the Communist-oriented part of the organization split off in 1921 and created the Federation of Workers' Sports Unions (FDTJ). Gottwald helped unify the organization, gained considerable influence in the local districts, and became head of the 20th district of the FDTJ. In June 1921, he participated in the first Spartakiada in Prague. In September 1921 he moved from Rousínov to Banská Bystrica, where he became the editor of the communist magazine "Hlas Ľudu" (Voice of the People). At the same time, he organized FDTJ events in the Banská Bystrica district, becoming head of the local branch and managing director of the 47th district of the FDTJ. Later, he moved to Žilina and became editor-in-chief of the magazine "Spartacus". In 1922 he moved to Vrútky, where, by decision of the KSČ Central Committee, a number of communist magazines and their editorial staffs were merged. In 1924, the editorial staff moved to Ostrava, where Gottwald finally settled. In 1926, Gottwald became a functionary of the Communist Party and an editor of the Communist press. From 1926 to 1929 he worked in Prague, where he helped the Secretariat of the KSČ form a pro-Moscow opposition to the party's then anti-Moscow leadership. From 1928 he was a member of the Comintern. Following Comintern policy initiated by Stalin, he carried out the Bolshevization of the party. In February 1929, at the Fifth Congress of the KSČ, Gottwald was elected party general secretary, alongside Guttmann, Šverma, Slánský, Kopecký and the Reimans. In the second half of 1930 the Communist Party carried out a number of reforms in response to changes in the foreign policy of the Soviet Union, namely the introduction of the policy of forming a "Popular Front against Fascism".
In September and October 1938 Gottwald was one of the main leaders of the opposition to the adoption of the Munich Agreement. After the banning of the Communist Party, Gottwald emigrated to the Soviet Union in November 1938. While there, he opposed the party policy of backing the Molotov–Ribbentrop Pact of 1939. After the attack on the Soviet Union in June 1941, the Soviet leadership saw the front against fascism as a great opportunity to assert itself in Czechoslovakia, and took an interest in supporting Gottwald after the country's liberation. In 1943 Gottwald agreed with representatives of the Czechoslovak government-in-exile in London, headed by President Edvard Beneš, to unify the domestic and foreign anti-fascist resistance and form the National Front. This proved helpful for Gottwald, as it helped secure Communist influence in post-war Czechoslovakia. In 1945, Gottwald gave up the general secretary's post to Rudolf Slánský and was elected to the new position of party chairman. On 10 May 1945 Gottwald returned to Prague as deputy premier under Zdeněk Fierlinger and as chairman of the National Front. In the 1946 elections, he led the party to a 38% share of the vote, easily the KSČ's best performance in an election. Gottwald was a firm supporter of the expulsion of ethnic Germans from Czechoslovakia, gaining mainstream credibility with many Czechs through the use of nationalist rhetoric, exhorting the population to "prepare for the final retribution for White Mountain, for the return of the Czech lands to the Czech people. We will expel for good all descendants of the alien German nobility." By the summer of 1947, however, the KSČ's popularity had significantly dwindled, particularly after the Soviets pressured Czechoslovakia to turn down Marshall Plan aid after initially accepting it. Most observers believed Gottwald would be turned out of office at the elections due in May 1948.
The Communists' dwindling popularity, combined with France and Italy dropping the Communists from their coalition governments, prompted Joseph Stalin to order Gottwald to begin efforts to set up an undisguised Communist regime in Czechoslovakia. Outwardly, though, Gottwald kept up the appearance of working within the system, announcing that he intended to lead the Communists to an absolute majority in the upcoming election—something no Czechoslovak party had ever done. The endgame began in February 1948, when a majority of the Cabinet directed the Communist interior minister, Václav Nosek, to stop packing the police force with Communists. Nosek ignored this directive, with Gottwald's support. In response, 12 non-Communist ministers resigned. They believed that without their support, Gottwald would be unable to govern and be forced to either give way or resign. Beneš initially supported their position, and refused to accept their resignations. Gottwald not only refused to resign, but demanded the appointment of a Communist-dominated government under threat of a general strike. His Communist colleagues occupied the offices of the non-Communist ministers. On 25 February, Beneš, fearing Soviet intervention, gave in. He accepted the resignations of the non-Communist ministers and appointed a new government in accordance with Gottwald's specifications. Although ostensibly still a coalition, it was dominated by Communists and pro-Moscow Social Democrats. The other parties were still nominally represented, but with the exception of Foreign Minister Jan Masaryk they were fellow travellers handpicked by the Communists. From this date forward, Gottwald was effectively the most powerful man in Czechoslovakia. On 9 May, the National Assembly, now a docile tool of the Communists, approved the so-called Ninth-of-May Constitution. While it was not a completely Communist document, its Communist imprint was strong enough that Beneš refused to sign it. 
Later that month, elections were held in which voters were presented with a single list from the National Front, now a Communist-controlled patriotic organization. Beneš resigned on 2 June. In accordance with the 1920 Constitution, Gottwald took over most presidential functions until 14 June, when he was formally elected as President. Gottwald initially tried to take a semi-independent line. However, that changed shortly after a meeting with Stalin. Under Stalin's direction, Gottwald imposed the Soviet model of government on the country. He nationalized the country's industry and collectivized its farms. There was considerable resistance within the government to Soviet influence on Czechoslovak politics. In response, Gottwald instigated a series of purges, first to remove non-communists, later to remove some communists as well. Prominent Communists who became victims of these purges and were defendants in the Prague Trials included Rudolf Slánský, the party's general secretary, Vlado Clementis (the Foreign Minister) and Gustáv Husák (the leader of an administrative body responsible for Slovakia), who was dismissed from office for "bourgeois nationalism". Slánský and Clementis were executed in December 1952, and hundreds of other government officials were sent to prison. Husák was rehabilitated in the 1960s and became the leader of Czechoslovakia in 1969. In a famous photograph from 21 February 1948, described also in "The Book of Laughter and Forgetting" by Milan Kundera, Clementis stands next to Gottwald. When Clementis was charged in 1950, he was erased from the photograph (along with the photographer Karel Hájek) by the state propaganda department. Gottwald was a long-time alcoholic and suffered from heart disease caused by syphilis that had gone untreated for several years. Shortly after attending Stalin's funeral on 9 March 1953, one of his arteries burst. He died five days later, on 14 March 1953, aged 56.
He was the first Czechoslovak president to die in office. Gottwald's embalmed body was initially displayed in a mausoleum at the Jan Žižka national monument in the Žižkov district of Prague. By 1962 the personality cult had ended, and it was no longer deemed appropriate to display Gottwald's body. There are accounts that by 1962 the body had blackened and was decomposing because of a botched embalming, although other witnesses have disputed this. The body was cremated, and the ashes were returned to the Žižka monument and placed in a sarcophagus. After the end of the communist period, Gottwald's ashes were removed from the monument in 1990 and placed in a common grave at Prague's Olšany Cemetery, together with the ashes of about 20 other communist leaders that had also originally been kept in the monument. The Communist Party of Bohemia and Moravia now maintains that common grave. He was succeeded as "de facto" leader of Czechoslovakia by Antonín Novotný, who became First Secretary of the KSČ. Antonín Zápotocký, who had been prime minister since 1948, succeeded Gottwald as president. In tribute, Zlín, a city in Moravia (now in the Czech Republic), was renamed "Gottwaldov" from 1949 to 1990. Zmiiv, a city in Kharkiv Oblast, Ukrainian SSR, was named "Gotvald" after him from 1976 to 1990. A major square and park in Bratislava was named "Gottwaldovo námestie" after him; it became Námestie slobody ("Freedom Square") immediately after the Velvet Revolution, though the original eponym persists and locals still refer to the square as "Gottko". A bridge in Prague now called Nuselský most was once called Gottwaldův most, and the abutting metro station now called Vyšehrad was called Gottwaldova. A Czechoslovak 100-koruna banknote issued on 1 October 1989, as part of the 1985–89 banknote series, included a portrait of Gottwald.
This note was so poorly received by Czechoslovaks that it was removed from official circulation on 31 December 1990 and promptly replaced with the previous banknote issue of the same denomination. All Czechoslovak banknotes were removed from circulation in 1993 and replaced by separate Czech and Slovak notes. In 2005 he was voted the "Worst Czech" in a ČT poll (a programme produced under licence from the BBC's "100 Greatest Britons" format). He received 26% of the votes.
https://en.wikipedia.org/wiki?curid=17372
Kettlebaston Kettlebaston is a village and civil parish with just over 30 inhabitants in the Babergh district of Suffolk, England, located east of Lavenham. Since the 2011 Census, the population of the village has not been counted separately but is included in that of the civil parish of Chelsworth. The village derives its name from Kitelbeornastuna (Kitelbjorn's farmstead, from the Old Scandinavian personal name plus Old English "tun"), later evolving to Kettlebarston (which is how the name is still pronounced), and finally to the current spelling. Its existence was first recorded in 1086 in the "Domesday Book". Once in an area of great wealth, the village as we know it today was indirectly saved by the demise of the mediaeval wool trade, since the locals could not afford to upgrade their houses with the latest architectural fashions. The number of timber-framed houses slowly declined over the years, as did the population, from over 200 at its peak to the point when the village was on the brink of extinction. By the 1960s, with the road no more than an unmade track and no electricity or mains water supply (it still has no gas or mains drainage), Kettlebaston was barely standing. A "Spotlight On The Suffolk Scene" article in the "Chronicle & Mercury" of June 1949 noted that a great many houses were category five: derelict and ready for demolition. As agricultural workers left the land in search of other jobs, owing to the increased mechanisation of farm work, "outsiders" discovered the secluded beauty of the rural Suffolk countryside, and a new age dawned. The tiny workmen's cottages, which once housed huge families (and, according to local accounts, some stock and chickens), were lovingly renovated and converted. The village was reborn, going on to win Babergh's Best Kept Village award and to finish runner-up in the Suffolk Community Council Best Kept Village Competition in 1989.
The village sign, bearing two crossed sceptres topped with doves, was erected to mark the coronation of King George VI and Queen Elizabeth. It also commemorates that, in 1445, Henry VI granted the manor of Kettlebaston to William de la Pole, 1st Marquess of Suffolk, in return for the service of carrying a golden sceptre at the coronation of all future Kings of England, and an ivory sceptre at the coronation of Margaret of Anjou and all future Queens. This honour continued until Henry VIII resumed the manor; although the manor was later regranted, the regrant was without the royal service. The parish church of St Mary the Virgin has Norman origins and features a font from around 1200. The building is listed Grade I. It is recorded that the church was "built anew" in 1342, remaining largely unchanged until it was targeted by Protestant iconoclasts in the 1540s. Today it features one of Suffolk's finest post-Reformation rood screens, designed by Father Ernest Geldart and decorated by Patrick Osborne and Enid Chadwick, and a rare Sacred Heart altar upon a Stuart holy table. It now lacks the small lead spire which once topped the tower. Regarded as a place of pilgrimage by followers of the Anglo-Catholic movement from all over the UK, Kettlebaston was the liturgically highest of all Suffolk's Anglican churches. From 1930 until his retirement in 1964, Reverend Father Harold Clear Butler said a Roman Mass every day and celebrated High Mass and Benediction on Sundays. He also removed state notices from the porch, refused to keep registers, and refused to recognise the office of the local Archdeacon of Sudbury. Despite opposition, the church finally received electric heating and lighting in 2014. The current village has no shop, school, or pub.
https://en.wikipedia.org/wiki?curid=17375
Karl Amadeus Hartmann Karl Amadeus Hartmann (2 August 1905 – 5 December 1963) was a German composer. Sometimes described as the greatest German symphonist of the 20th century, he is now largely overlooked, particularly in English-speaking countries. Born in Munich, the son of Friedrich Richard Hartmann, and the youngest of four brothers of whom the elder three became painters, Hartmann was himself torn, early in his career, between music and the visual arts. He was much affected in his early political development by the events of the unsuccessful Workers’ Revolution in Bavaria that followed the collapse of the German empire at the end of World War I (see Bavarian Soviet Republic). He remained an idealistic socialist for the rest of his life. At the Munich Academy in the 1920s, Hartmann studied with Joseph Haas, a pupil of Max Reger, and later received intellectual stimulus and encouragement from the conductor Hermann Scherchen, an ally of the Schoenberg school, with whom he had a nearly lifelong mentor-protégé relationship. He voluntarily withdrew completely from musical life in Germany during the Nazi era, while remaining in Germany, and refused to allow his works to be played there. An early symphonic poem, "Miserae" (1933–1934, first performed in Prague, 1935) was condemned by the Nazi regime but his work continued to be performed, and his fame grew, abroad. During World War II, though already an experienced composer, Hartmann submitted to a course of private tuition in Vienna by Schoenberg’s pupil Anton Webern (with whom he often disagreed on a personal and political level). Although stylistically their music had little in common, he clearly felt that he needed, and benefited from, Webern's acute perfectionism. After the fall of Hitler, Hartmann was one of the few prominent surviving anti-fascists in Bavaria whom the postwar Allied administration could appoint to a position of responsibility. 
In 1945, he became a "dramaturge" at the Bavarian State Opera and there, as one of the few internationally recognized figures who had survived untainted by any collaboration with the Nazi regime, he became a vital figure in the rebuilding of (West) German musical life. Perhaps his most notable achievement was the Musica Viva concert series, which he founded and ran for the rest of his life in Munich. Beginning in November 1945, the concerts reintroduced the German public to 20th-century repertoire, which had been banned since 1933 under National Socialist aesthetic policy. Hartmann also provided a platform for the music of young composers in the late 1940s and early 1950s, helping to establish such figures as Hans Werner Henze, Luigi Nono, Luigi Dallapiccola, Carl Orff, Iannis Xenakis, Olivier Messiaen, Luciano Berio, Bernd Alois Zimmermann and many others. Hartmann also involved artists and architects such as Jean Cocteau, Le Corbusier, and Joan Miró in exhibitions at Musica Viva. He was accorded numerous honours after the war, including the Musikpreis of the city of Munich in March 1949. This was followed by the Kunstpreis of the Bayrische Akademie der Schönen Künste (1950), the Arnold Schönberg Medal of the IGNM (1954), the Große Kunstpreis of the Land Nordrhein-Westfalen (1957), as well as the Ludwig Spohr Award of the city of Braunschweig, the Schwabing Kunstpreis (1961) and the Bavarian Medal of Merit (1959). Hartmann became a member of the Academy of Arts in Munich (1952) and Berlin (1955) and received an honorary doctorate from Spokane Conservatory, Washington (1962). His socialist sympathies did not extend to the Soviet Union's variety of communism, and in the 1950s, he refused an offer to move to East Germany. Hartmann continued to base his activities in Munich for the remainder of his life, and his administrative duties came to absorb much of his time and energy. This reduced his time for composition, and his last years were dogged by serious illness.
In 1963, he died of stomach cancer at the age of 58, leaving his last work – an extended symphonic "Gesangsszene" for voice and orchestra on words from Jean Giraudoux’s apocalyptic drama "Sodom and Gomorrah" – unfinished. Hartmann completed a number of works, most notably eight symphonies. The first of these, and perhaps emblematic of the difficult genesis of many of his works, is Symphony No. 1, "Essay for a Requiem" ("Versuch eines Requiems"). It began in 1936 as a cantata for alto solo and orchestra loosely based on a few poems by Walt Whitman. It soon became known as "Our Life: Symphonic Fragment" ("Unser Leben: Symphonisches Fragment") and was intended as a comment on the generally miserable conditions for artists and liberal-minded people under the early Nazi regime. After the defeat of the Third Reich in World War II, the regime's real victims had become clear, and the cantata's title was changed to "Symphonic Fragment: Attempt at a Requiem" to honor the millions killed in the Holocaust. Hartmann revised the work in 1954–55 as his Symphony No. 1, and published it in 1956. As this example indicates, he was a highly self-critical composer and many of his works went through successive stages of revision. He also suppressed most of his substantial orchestral works of the late 1930s and the war years, either allowing them to remain unpublished or, in several cases, reworking them – or portions of them – into the series of numbered symphonies that he produced in the late 1940s and early 1950s. Perhaps the most frequently performed of his symphonies are No. 4, for strings, and No. 6; probably his most widely known work, through performances and recordings, is his Concerto funebre for violin and strings, composed at the beginning of World War II and making use of a Hussite chorale and a Russian revolutionary song of 1905. 
Hartmann attempted a synthesis of many different idioms, including musical expressionism and jazz stylization, into organic symphonic forms in the tradition of Bruckner and Mahler. His early works are both satirical and politically engaged. But he admired the polyphonic mastery of J.S. Bach, the profound expressive irony of Mahler, and the neoclassicism of Igor Stravinsky and Paul Hindemith. In the 1930s he developed close ties with Béla Bartók and Zoltán Kodály in Hungary, and this is reflected in his music to some extent. In the 1940s, he began to take an interest in Schoenbergian twelve-tone technique; though he studied with Webern, his own idiom was closer to that of Alban Berg. In the 1950s, Hartmann started to explore the metrical techniques pioneered by Boris Blacher and Elliott Carter. Among his most-used forms are three-part adagio slow movements, fugues, variations and toccatas. Significantly, few championed his music following his death: Scherchen, his most noted advocate, died in 1966. Some have suggested that this accelerated the disappearance of Hartmann's music from public view in the years following his death. Conductors who regularly performed Hartmann's music include Rafael Kubelik and Ferdinand Leitner, who recorded the third and sixth symphonies. More recent champions of works by Hartmann include Ingo Metzmacher and Mariss Jansons. Hans Werner Henze said of Hartmann's music: "Symphonic architecture was essential for him... as a suitable medium for reflecting the world as he experienced and understood it – as an agonizingly dramatic battle, as contradiction and conflict – in order to be able to achieve self-realization in its dialectic and to portray himself as a man among men, a man of this world, and not out of this world." The English composer John McCabe wrote his "Variations on a Theme of Karl Amadeus Hartmann" (1964) in tribute. It uses the opening of Hartmann's Fourth Symphony as its theme. Henze made a version of Hartmann's Piano Sonata No.
2 for full orchestra. His output is usually divided into two periods: (i) works written up to 1945, mostly later suppressed, and (ii) works written after 1945.
https://en.wikipedia.org/wiki?curid=17378
Kami In Shinto, kami are not separate from nature but are of nature, possessing both positive and negative, good and evil, characteristics. They are manifestations of the interconnecting energy of the universe, and are considered exemplary of what humanity should strive towards. Kami are believed to be "hidden" from this world, inhabiting a complementary existence that mirrors our own; to be in harmony with the awe-inspiring aspects of nature is to be conscious of that existence. Though the word kami is translated multiple ways into English, no English word expresses its full meaning. "Kami" is the Japanese word for a god, deity, divinity, or spirit. It has been used to describe mind (心霊), God (ゴッド), supreme being (至上者), one of the Shinto deities, an effigy, a principle, and anything that is worshipped. Although "deity" is the common interpretation of "kami", some Shinto scholars argue that such a translation can cause a misunderstanding of the term. The wide variety of usage of the word "kami" can be compared to the Sanskrit "Deva" and the Hebrew "Elohim", which also refer to God, gods, angels, or spirits. Because Japanese does not normally distinguish grammatical number in nouns (the singular and plural forms of nouns in Japanese are the same), it is sometimes unclear whether "kami" refers to a single entity or to multiple entities; when a singular concept is needed, a suffix is used. The reduplicated term generally used to refer to multiple kami is kamigami. Gender is also not implied in the word "kami", and as such it can be used to refer to either male or female. While Shinto has no founder, no overarching doctrine, and no religious texts, the "Kojiki" (Records of Ancient Matters), written in 712 CE, and the "Nihon Shoki" (Chronicles of Japan), written in 720 CE, contain the earliest records of Japanese creation myths. The "Kojiki" also includes descriptions of various kami.
In the ancient traditions, kami had five defining characteristics. Kami are an ever-changing concept, but their presence in Japanese life has remained constant. The kami's earliest roles were as earth-based spirits, assisting the early hunter-gatherer groups in their daily lives. They were worshipped as gods of the earth (mountains) and sea. As the cultivation of rice became increasingly important and predominant in Japan, the kami's identity shifted to more sustaining roles directly involved in the growth of crops, such as rain, earth, and rice. This relationship between early Japanese people and the kami was manifested in rituals and ceremonies meant to entreat the kami to grow and protect the harvest. These rituals also became a symbol of power and strength for the early Emperors. There is a strong tradition of myth-histories in the Shinto faith; one such myth details the appearance of the first emperor, grandson of the Sun Goddess Amaterasu. In this myth, when Amaterasu sent her grandson to earth to rule, she gave him five rice grains, which had been grown in the fields of heaven (Takamagahara). This rice made it possible for him to transform the "wilderness". Social and political strife have played a key role in the development of new sorts of kami, specifically the goryō-shin (the sacred spirit kami). Goryō are the vengeful spirits of the dead whose lives were cut short, but they were calmed by the devotion of Shinto followers and are now believed to punish those who do not honor the kami. The pantheon of kami, like the kami themselves, is forever changing in definition and scope. As the needs of the people have shifted, so too have the domains and roles of the various kami. Some examples of this are related to health, such as the kami of smallpox, whose role was expanded to include all contagious diseases, or the kami of boils and growths, who has also come to preside over cancers and cancer treatments.
In the ancient animistic religions, kami were understood as simply the divine forces of nature. Worshippers in ancient Japan revered creations of nature which exhibited a particular beauty and power, such as waterfalls, mountains, boulders, animals, trees, grasses, and even rice paddies. They strongly believed the spirits, or resident kami, deserved respect. In 927 CE, a ritual code in fifty volumes was promulgated. This, the first formal codification of Shinto rites and "norito" (liturgies and prayers) to survive, became the basis for all subsequent Shinto liturgical practice and efforts. It listed all of the 2,861 Shinto shrines existing at the time, and the 3,131 officially recognized and enshrined kami. The number of kami has grown far beyond this figure in the generations since: there are over 2,446,000 individual kami enshrined in Tokyo's Yasukuni Shrine alone. Kami are the central objects of worship for the Shinto belief. The ancient animistic spirituality of Japan was the beginning of modern Shinto, which became a formal spiritual institution later, in an effort to preserve the traditional beliefs from the encroachment of imported religious ideas. As a result, the nature of what can be called kami is very general and encompasses many different concepts and phenomena. Some of the objects or phenomena designated as kami are qualities of growth, fertility, and production; natural phenomena like wind and thunder; natural objects like the sun, mountains, rivers, trees, and rocks; some animals; and ancestral spirits. Included within the designation of ancestral spirits are spirits of the ancestors of the Imperial House of Japan, but also ancestors of noble families, as well as the spirits of the ancestors of all people, who upon death were believed to become the guardians of their descendants. There are other spirits designated as kami as well.
For example, the guardian spirits of the land, occupations, and skills; spirits of Japanese heroes, men of outstanding deeds or virtues, and those who have contributed to civilization, culture, and human welfare; those who have died for the state or the community; and the pitiable dead. Not only spirits superior to man can be considered kami; spirits that are considered pitiable or weak have also been considered kami in Shinto. The concept of kami has been changed and refined since ancient times, although anything that was considered to be kami by ancient people will still be considered kami in modern Shinto. Even within modern Shinto, there are no clearly defined criteria for what should or should not be worshipped as kami. The difference between modern Shinto and the ancient animistic religions is mainly a refinement of the kami concept, rather than a difference in definitions. Although the ancient designations are still adhered to, in modern Shinto many priests also consider kami to be anthropomorphic spirits, with nobility and authority. One such example is the mythological figure Amaterasu-ōmikami, the sun goddess of the Shinto pantheon. Although these kami can be considered deities, they are not necessarily considered omnipotent or omniscient, and like the Greek gods, they had flawed personalities and were quite capable of ignoble acts. In the myths of Amaterasu, for example, she could see the events of the human world, but had to use divination rituals to see the future. There are considered to be three main variations of kami. (One of these terms literally means eight million, but idiomatically it expresses "uncountably many" and "all-around"; like many East Asian cultures, the Japanese often use the number 8, representing the cardinal and ordinal directions, to symbolize ubiquity.) These classifications of kami are not considered strictly divided, due to the fluid and shifting nature of kami, but are instead held as guidelines for grouping them.
The ancestors of a particular family can also be worshipped as kami. In this sense, these kami are worshipped not because of their godly powers, but because of a distinctive quality or virtue. These kami are celebrated regionally, and several miniature shrines ("hokora") have been built in their honor. In many cases, people who once lived are thus revered; an example of this is Tenjin, who was Sugawara no Michizane (845–903 CE) in life. Within Shinto it is believed that the nature of life is sacred because the kami began human life. Yet people cannot perceive this divine nature, which the kami created, on their own; therefore, magokoro, or purification, is necessary in order to see the divine nature. This purification can only be granted by the kami. In order to please the kami and earn magokoro, Shinto followers are taught to uphold the four affirmations of Shinto. The first affirmation is to hold fast to tradition and the family. Family is seen as the main mechanism by which traditions are preserved. For instance, in marriage or birth, tradition is potentially observed and passed on to future generations. The second affirmation is to have a love of nature. Natural objects are worshipped as sacred because the kami inhabit them. Therefore, to be in contact with nature means to be in contact with the gods. The third affirmation is to maintain physical cleanliness. Followers of Shinto take baths, wash their hands, and rinse out their mouths often. The last affirmation is to practice matsuri, which is the worship and honor given to the kami and ancestral spirits. Shinto followers also believe that the kami can grant either blessings or curses to a person. Shinto believers desire to appease the evil kami to "stay on their good side", and also to please the good kami. In addition to practicing the four affirmations daily, Shinto believers also wear "omamori" to aid them in remaining pure and protected.
These omamori are charms that keep the evil kami from striking a human with sickness or causing disaster to befall them. The kami are both worshipped and respected within the religion of Shinto. The goal of life for Shinto believers is to obtain "magokoro", a pure sincere heart, which can only be granted by the kami. As a result, Shinto followers are taught that humankind should venerate both the living and the nonliving, because both possess a divine superior spirit within: the kami. One of the first recorded rituals we know of is Niiname-sai, the ceremony in which the Emperor offers newly harvested rice to the kami to secure their blessing for a bountiful harvest. A yearly festival, Niiname-sai is also performed, under a special name, when a new Emperor comes to power. In the ceremony, the Emperor offers crops from the new harvest to the kami, including rice, fish, fruits, soup, and stew. The Emperor first feasts with the deities, then the guests. The feast could go on for some time; for example, Emperor Shōwa's feast spanned two days. Visitors to a Shinto shrine follow a purification ritual before presenting themselves to the kami. This ritual begins with hand washing, and taking in and then spitting out a small amount of water in front of the shrine, to purify the body, heart, and mind. Once this is complete, they turn their focus to gaining the kami's attention. The traditional method of doing this is to bow twice, clap twice and bow again, alerting the kami to their presence and their desire to commune with them. During the last bow, the supplicant offers words of gratitude and praise to the kami; if they are offering a prayer for aid, they will also state their name and address. After the prayer and/or worship, they repeat the two bows, two claps and a final bow in conclusion. Shinto practitioners also worship at home. This is done at a "kamidana" (household shrine), on which an "ofuda" with the name of their protector or ancestral kami is positioned.
Their protector kami is determined by their or their ancestors' relationship to the kami. Ascetic practices, shrine rituals and ceremonies, and Japanese festivals are the most public ways that Shinto devotees celebrate and offer adoration for the kami. Kami are celebrated during their distinct festivals that usually take place at the shrines dedicated to their worship. Many festivals involve believers, who are usually intoxicated, parading, sometimes running, toward the shrine while carrying mikoshi (portable shrines) as the community gathers for the festival ceremony. Yamamoto Guji, the high priest at the Tsubaki Grand Shrine, explains that this practice honors the kami because "it is in the festival, the matsuri, the greatest celebration of life can be seen in the world of Shinto and it is the people of the community who attend festivals as groups, as a whole village who are seeking to unlock the human potential as children of kami." During the New Year Festival, families purify and clean their houses in preparation for the upcoming year. Offerings are also made to the ancestors so that they will bless the family in the coming year. Shinto ceremonies are so long and complex that in some shrines it can take ten years for the priests to learn them. The priesthood was traditionally hereditary. Some shrines have drawn their priests from the same families for over a hundred generations. It is not uncommon for the clergy to be women. The priests ("kannushi") may be assisted by "miko", young unmarried women acting as shrine maidens. Neither priests nor priestesses live as ascetics; in fact, it is common for them to be married, and they are not traditionally expected to meditate. Rather, they are considered specialists in the arts of maintaining the connection between the kami and the people. In addition to these festivals, ceremonies marking rites of passage are also performed within the shrines. 
Two such ceremonies are the birth of a child and the Shichi-Go-San. When a child is born they are brought to a shrine so that they can be initiated as a new believer and the kami can bless them and their future life. The Shichi-Go-San (the Seven-Five-Three) is a rite of passage for five-year-old boys and three- or seven-year-old girls. It is a time for these young children to personally offer thanks for the kami's protection and to pray for continued health. Many other rites of passage are practiced by Shinto believers, and there are also many other festivals. The main reason for these ceremonies is so that Shinto followers can appease the kami in order to reach magokoro. Magokoro can only be received through the kami. Ceremonies and festivals are long and complex because they need to be perfect to satisfy the kami. If the kami are not pleased with these ceremonies, they will not grant a Shinto believer magokoro.
https://en.wikipedia.org/wiki?curid=17379
Koalang Koalang is a term coined by Janusz A. Zajdel, a Polish science fiction writer. It is a language used by people in a totalitarian world called "Paradyzja" in his 1984 novel of the same name. The "ko-al" in "koalang" derives from the Polish words "kojarzeniowo-aluzyjny" ("associative-allusive"). Because Paradyzja is a space station, and activity is tracked by automatic cameras and analysed, mostly, by computers, its people created an Aesopian language, full of metaphors that are impossible for computers to grasp. The meaning of every sentence depends on its context. For example, "I dreamt about blue angels last night" means "I was visited by the police last night." The software that analyzes sentences is self-learning, so a phrase that has been used to describe something metaphorically should not be used again in the same context. Zajdel paid tribute to George Orwell's Newspeak and to Aldous Huxley by naming one of the main characters "Nikor Orley Huxwell". In the 1980s, the youth magazine "Na Przełaj" (Short Cut) printed rock lyrics in a column titled "KOALANG", hinting that the songs' texts contained content camouflaged from censorship.
https://en.wikipedia.org/wiki?curid=17381
Kobellite Kobellite (Pb22Cu4(Bi,Sb)30S69) is a gray, fibrous, metallic mineral. It is also a sulfide mineral consisting of antimony, bismuth, and lead. It is a member of the izoklakeite–berryite series, with silver and iron substituting in the copper site and a varying ratio of bismuth, antimony, and lead. It crystallizes with monoclinic pyramidal crystals. The mineral can be found in ores and deposits of Hvena, Sweden; Ouray, Colorado; and Wake County, North Carolina, US. The mineral was named after Wolfgang Franz von Kobell (1803–1882), a German mineralogist.
https://en.wikipedia.org/wiki?curid=17382
Kayak A kayak is a small, narrow watercraft which is typically propelled by means of a double-bladed paddle. The word kayak originates from the Greenlandic word "qajaq". The traditional kayak has a covered deck and one or more cockpits, each seating one paddler. The cockpit is sometimes covered by a spray deck that prevents the entry of water from waves or spray, differentiating the craft from a canoe. The spray deck makes it possible for suitably skilled kayakers to roll the kayak: that is, to capsize and right it without it filling with water or ejecting the paddler. Some modern boats vary considerably from a traditional design but still claim the title "kayak", for instance in eliminating the cockpit by seating the paddler on top of the boat ("sit-on-top" kayaks); having inflated air chambers surrounding the boat; replacing the single hull with twin hulls; and replacing paddles with other human-powered propulsion methods, such as foot-powered rotational propellers and "flippers". Kayaks are also sailed, as well as propelled by small electric motors and even outboard gas engines. The kayak was first used by the indigenous Aleut, Inuit, Yupik and possibly Ainu hunters in subarctic regions of the world. Kayaks (Inuktitut: "qajaq" (ᖃᔭᖅ), Yup'ik: "qayaq" (from "qai-" "surface; top"), Aleut: "Iqyax") were originally developed by the Inuit, Yup'ik, and Aleut. They used the boats to hunt on inland lakes, rivers and coastal waters of the Arctic Ocean, North Atlantic, Bering Sea and North Pacific oceans. These first kayaks were constructed from stitched seal or other animal skins stretched over a wood or whalebone skeleton frame. (Western Alaskan Natives used wood, whereas the eastern Inuit used whalebone due to the treeless landscape.) Kayaks are believed to be at least 4,000 years old. The oldest existing kayaks are exhibited in the North America department of the State Museum of Ethnology in Munich, with the oldest dating from 1577. 
Native people made many types of boats for different purposes. The Aleut baidarka was made in double or triple cockpit designs, for hunting and transporting passengers or goods. An umiak is a large open sea canoe, ranging from , made with seal skins and wood. It is considered a kayak although it was originally paddled with single-bladed paddles, and typically had more than one paddler. Native builders designed and built their boats based on their own experience and that of the generations before them, passed on through oral tradition. The word "kayak" means "man's boat" or "hunter's boat", and native kayaks were a personal craft, each built by the man who used it—with assistance from his wife, who sewed the skins—and closely fitting his size for maximum maneuverability. The paddler wore a tuilik, a garment that was stretched over the rim of the kayak coaming, and sealed with drawstrings at the coaming, wrists, and hood edges. This enabled the "eskimo roll" and rescue to become the preferred methods of recovery after capsizing, especially as few Inuit could swim; their waters are too cold for a swimmer to survive for long. Instead of a "tuilik", most traditional kayakers today use a spray deck made of waterproof synthetic material stretchy enough to fit tightly around the cockpit rim and body of the kayaker, and which can be released rapidly from the cockpit to permit easy exit. Inuit kayak builders had specific measurements for their boats. The length was typically three times the span of his outstretched arms. The width at the cockpit was the width of the builder's hips plus two fists (and sometimes less). The typical depth was his fist plus the outstretched thumb (hitch hiker). Thus typical dimensions were about long by wide by deep. This measurement system confounded early European explorers who tried to duplicate the kayak, because each kayak was a little different. 
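The traditional sizing rules described above are simple arithmetic on the builder's own body. As a rough illustration (the function name and the sample measurements are hypothetical, not taken from any historical source):

```python
# Hypothetical sketch of the traditional Inuit sizing rules described above.
# All names and sample numbers are illustrative assumptions.

def traditional_kayak_dimensions(arm_span_cm, hip_width_cm, fist_cm, thumb_cm):
    """Apply the rule-of-thumb proportions: length = 3 x arm span,
    beam = hip width + two fists, depth = fist + outstretched thumb."""
    length = 3 * arm_span_cm
    beam = hip_width_cm + 2 * fist_cm
    depth = fist_cm + thumb_cm
    return length, beam, depth

# Example: a paddler with a 170 cm arm span, 36 cm hips,
# a 10 cm fist, and a 7 cm outstretched thumb.
print(traditional_kayak_dimensions(170, 36, 10, 7))  # (510, 56, 17)
```

Because every input comes from one individual's body, no two boats built this way come out identical, which is exactly why early European explorers struggled to copy the design.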
Traditional kayaks encompass three types: "Baidarkas", from the Bering Sea and Aleutian Islands, the oldest design, whose rounded shape and numerous chines give them an almost blimp-like appearance; "West Greenland" kayaks, with fewer chines and a more angular shape, with gunwales rising to a point at the bow and stern; and "East Greenland" kayaks, which appear similar to the West Greenland style but often fit more snugly to the paddler and possess a steeper angle between gunwale and stem, which lends maneuverability. Native peoples from the Aleutian Islands eastward to Greenland relied on the kayak for hunting a variety of prey—primarily seals, though whales and caribou were important in some areas. Skin-on-frame kayaks are still being used for hunting by Inuit people in Greenland, because the smooth and flexible skin glides silently through the waves. In other parts of the world, home builders continue the tradition of skin-on-frame kayaks, usually with modern skins of canvas or synthetic fabric such as ballistic nylon. Contemporary traditional-style kayaks trace their origins primarily to the native boats of Alaska, northern Canada, and Southwest Greenland. Wooden kayaks and fabric kayaks on wooden frames dominated the market up until the 1950s, when fiberglass boats were first introduced in the US, and inflatable rubberized fabric boats were first introduced in Europe. Rotomolded plastic kayaks first appeared in 1973, and most kayaks today are made from rotomolded polyethylene resins. The development of plastic and rubberized inflatable kayaks arguably initiated the development of freestyle kayaking as we see it today, since these boats could be made smaller, stronger and more resilient than fiberglass boats. Typically, kayak design is largely a matter of trade-offs: directional stability ("tracking") vs maneuverability; stability vs speed; and primary vs secondary stability. Multihull kayaks face a different set of trade-offs. 
The paddler's body shape and size is an integral part of the structure, and will also affect the trade-offs made. If the displacement of a kayak is not enough to support the passenger(s) and gear, it will sink. If the displacement is excessive, the kayak will float too high, catch the wind and waves uncomfortably, and handle poorly; it will probably also be bigger and heavier than it needs to be. Being excessively big will create more drag, and the kayak will move more slowly and take more effort. Rolling is easier in lower-displacement kayaks. On the other hand, a higher deck will keep the paddler(s) drier and make self-rescue and coming through surf easier. Many paddlers who use a sit-in kayak feel more secure in a kayak with a weight capacity substantially more than their own weight. Maximum volume in a sit-in kayak is helped by a wide hull with high sides, but paddling ease is helped by lower sides where the paddler sits and a narrower width. Most manufacturers make kayaks for paddlers weighing , with some kayaks for paddlers down to . Kayaks made for paddlers under 100 pounds are almost all very beamy and intended for beginners. There seem to be no anthropometry statistics for kayakers, who may not be representative of the general population. In the American civilian population of the early 1960s, about 0.7% of men and 9% of women weighed under ; 20% of men and 7% of women weighed over . In the same population in the late sixties, the average weight of both male and female children crossed at age thirteen. In the early 2000s, it was a year or two earlier, and the mean weight of adults was over heavier. Also in the early 2000s, the mean weight of men was , and the mean weight of women was . As a general rule, a longer kayak is faster: it has a higher hull speed. It can also be narrower for a given displacement, reducing the drag, and it will generally track (follow a straight line) better than a shorter kayak. On the other hand, it is less maneuverable. 
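The displacement constraint described above is just Archimedes' principle: a hull can carry no more than the mass of water it can displace, minus its own mass. A minimal sketch (the function name and figures are illustrative assumptions, not from this article):

```python
# Minimal buoyancy sketch of the displacement trade-off described above
# (Archimedes' principle; names and numbers are illustrative).

FRESH_WATER_DENSITY = 1000.0  # kg per cubic metre

def max_load_kg(hull_volume_m3, boat_mass_kg, density=FRESH_WATER_DENSITY):
    """Greatest paddler-plus-gear mass the hull could support before being
    fully submerged: displaced water mass minus the boat's own mass."""
    return hull_volume_m3 * density - boat_mass_kg

# A hypothetical 0.35 m^3 hull weighing 25 kg could, at the absolute limit,
# support about 325 kg; in practice only part of the hull is ever immersed,
# so usable capacity is far lower.
print(round(max_load_kg(0.35, 25.0), 1))
```

Too little volume and the boat sinks; far too much and it floats high, catches wind, and drags, which is the trade-off the paragraph describes.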
Very long kayaks are less robust, and may be harder to store and transport. Some recreational kayak makers try to maximize hull volume (weight capacity) for a given length, as shorter kayaks are easier to transport and store. Kayaks that are built to cover longer distances, such as touring and sea kayaks, are longer, generally . With touring kayaks the keel is generally more defined (helping the kayaker track in a straight line). Whitewater kayaks, which generally depend upon river current for their forward motion, are short, to maximize maneuverability. These kayaks rarely exceed in length, and "play boats" may be only long. Recreational kayak designers try to provide more stability at the price of reduced speed, and compromise between tracking and maneuverability, ranging from . Length alone does not fully predict a kayak's maneuverability: a second design element is "rocker", i.e. its lengthwise curvature. A heavily rockered boat curves more, shortening its effective waterline. For example, a kayak with no rocker is in the water from end to end. In contrast, the bow and stern of a rockered boat are out of the water, shortening its lengthwise waterline to only . Rocker is generally most evident at the ends, and in moderation improves handling. Similarly, although a rockered whitewater boat may only be a few feet shorter than a typical recreational kayak, its waterline is far shorter and its maneuverability far greater. When surfing, a heavily rockered boat is less likely to lock into the wave, as the bow and stern are still above water. A boat with less rocker cuts into the wave and makes it harder to turn while surfing. The overall width of a kayak's cross section is its "beam". A wide hull is more stable, and packs more displacement into a shorter length. A narrow hull has less drag and is generally easier to paddle; in waves it will ride more easily and stay drier. 
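The "longer is faster" rule and the effect of rocker on waterline can be made concrete with the common displacement-hull rule of thumb (a general naval-architecture approximation, not a figure given in this article): hull speed in knots ≈ 1.34 × √(waterline length in feet).

```python
import math

# Common displacement-hull rule of thumb: hull speed (knots) ~ 1.34 * sqrt(LWL in feet).
# The constant 1.34 and the sample lengths are illustrative, not from this article.

def hull_speed_knots(waterline_ft):
    """Approximate maximum efficient speed for a displacement hull."""
    return 1.34 * math.sqrt(waterline_ft)

# A straight-keeled kayak uses its whole length as waterline; heavy rocker
# lifts the bow and stern clear of the water, shortening the effective
# waterline and lowering the theoretical hull speed.
print(round(hull_speed_knots(16.0), 2))  # 5.36
print(round(hull_speed_knots(9.0), 2))   # 4.02
```

The square root also explains why the speed penalty of a shorter waterline is real but not proportional: halving the waterline does not halve the hull speed.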
A narrower kayak makes a somewhat shorter paddle appropriate, and a shorter paddle puts less strain on the shoulder joints. Some paddlers are comfortable with a sit-in kayak so narrow that their legs extend fairly straight out. Others want sufficient width to permit crossing their legs inside the kayak. "Primary" (sometimes called "initial") stability describes how much a boat tips, or rocks back and forth, when displaced from level by paddler weight shifts. "Secondary" stability describes how stable a kayak feels when put on edge or when waves are passing under the hull perpendicular to the length of the boat. For kayak rolling, "tertiary" stability, or the stability of an upside-down kayak, is also important (lower tertiary stability makes rolling up easier). Primary stability is often a big concern to a beginner, while secondary stability matters both to beginners and experienced travelers. For example, a wide, flat-bottomed kayak will have high primary stability and feel very stable on flat water. However, when a steep wave breaks on such a boat, it can be easily overturned because the flat bottom is no longer level. By contrast, a kayak with a narrower, more rounded hull with more hull flare can be edged or leaned into waves and (in the hands of a skilled kayaker) provides a safer, more comfortable response on stormy seas. Kayaks with only moderate primary, but excellent secondary stability are, in general, considered more seaworthy, especially in challenging conditions. The shape of the cross-section affects stability, maneuverability, and drag. Hull shapes are categorized by roundness/flatness, and by the presence and angle of chines. This cross-section may vary along the length of the boat. A chine typically increases secondary stability by effectively widening the beam of the boat when it heels (tips). A V-shaped hull tends to travel straight (track) well, but makes turning harder. V-shaped hulls also have the greatest secondary stability. 
Conversely, flat-bottomed hulls are easy to turn, but harder to hold on a constant course. A round-bottomed boat has minimal wetted area, and thus minimizes drag; however, it may be so unstable that it will not remain upright when floating empty, and needs continual effort to keep it upright. In a skin-on-frame kayak, chine placement may be constrained by the need to avoid the bones of the pelvis. Sea kayaks, designed for open water and rough conditions, are generally narrower at and have more secondary stability than recreational kayaks, which are wider, have a flatter hull shape, and more primary stability. The body of the paddler must also be taken into account. A paddler with a low center of gravity will find all boats more stable; for a paddler with a high center of gravity, all boats will feel tippier. On average, women and children have a lower center of gravity than men. Unisex kayaks are built for men. A paddler with narrow shoulders will also want a narrower kayak. Newcomers will often want a craft with high primary stability (see above), which can be provided in two ways. The southern method is a wider kayak. The northern method is a removable pair of outriggers, lashed across the stern deck. Such an outrigger pair is often homemade of a small plank and found floats such as empty bottles or plastic ducks. Outriggers are also made commercially, especially for fishing kayaks and sailing. If the floats are set so that they are both in the water, they give primary stability, but produce more drag. If they are set so that they are both out of the water when the kayak is balanced, they give secondary stability. Some kayak hulls are also categorized according to their shape from bow to stern. Traditional-style and some modern types of kayaks (e.g. sit-on-top) require that the paddler be seated with their legs stretched in front of them, at a right angle, in a position called the "L" kayaking position. 
Other kayaks offer a different sitting position, in which the paddler's legs are not stretched out in front of them, and the thigh brace bears more on the inside than the top of the thighs (see diagram). A kayaker must be able to move the hull of their kayak by moving their lower body, and brace themselves against the hull (mostly with the feet) on each stroke. Most kayaks therefore have footrests and a backrest. Some kayaks fit snugly on the hips; others rely more on thigh braces. Mass-produced kayaks generally have adjustable bracing points. Many paddlers also customize their kayaks by putting in shims of closed-cell foam, or more elaborate structure, to make it fit more tightly. Paddling puts substantial force through the legs, alternately with each stroke. The knees should therefore not be hyperextended. Separately, if the kneecap is in contact with the boat, this will cause pain and may injure the knee. Insufficient foot space will cause painful cramping and inefficient paddling. The paddler should generally be in a comfortable position. Attempting to lift and carry a kayak by oneself or improperly is a significant cause of kayaking injuries. Good lifting technique, sharing loads, and not using needlessly large and heavy kayaks prevent injuries. Today almost all kayaks are commercial products intended for sale rather than for the builder's personal use. Fiberglass hulls are stiffer than polyethylene hulls, but they are more prone to damage from impact, including cracking. Most modern kayaks have steep V sections at the bow and stern, and a shallow V amidships. Fiberglass kayaks need to be "laid up" in a mold by hand, so are usually more expensive than polyethylene kayaks, which are rotationally molded in a machine. Plastic kayaks are rotationally molded ("rotomolded") from various grades and types of polyethylene resin, ranging from soft to hard. Such kayaks are particularly resistant to impact. 
Wooden hulls don't necessarily require significant skill and handiwork, depending on how they are made. Kayaks made from thin strips of wood sheathed in fiberglass have proven successful, especially as the price of epoxy resin has decreased in recent years. A plywood stitch-and-glue (S&G) kayak doesn't need fiberglass sheathing, though some builders add it. Three main types are popular, especially for the home builder: stitch-and-glue, strip-built, and hybrids, which have a stitch-and-glue hull and a strip-built deck. Stitch-and-glue designs typically use modern, marine-grade plywood — eighth-inch, or up to quarter-inch, thick. After cutting out the required pieces of hull and deck (kits often have these pre-cut), a series of small holes is drilled along the edges. Copper wire is then used to "stitch" the pieces together through the holes. After the pieces are temporarily stitched together, they are glued with epoxy and the seams reinforced with fiberglass. When the epoxy dries, the copper stitches are removed. Sometimes the entire boat is then covered in fiberglass for additional strength and waterproofing, though this adds greatly to the weight and is unnecessary. Construction is fairly straightforward, but because plywood does not bend to form compound curves, design choices are limited. This is a good choice for the first-time kayak builder, as the labor and skills required (especially for kit versions) are considerably less than for strip-built boats, which can take three times as long to build. Strip-built designs are similar in shape to rigid fiberglass kayaks but are generally both lighter and tougher. Like their fiberglass counterparts, the shape and size of the boat determines performance and optimal uses. The hull and deck are built with thin strips of lightweight wood, often cedar, pine, or redwood. The strips are edge-glued together around a form, stapled or clamped in place, and allowed to dry. 
Structural strength comes from a layer of fiberglass cloth and epoxy resin, layered inside and outside the hull. Strip-built kayaks are sold commercially by a few companies, priced US$4,000 and up. An experienced woodworker can build one for about US$400 in 200 hours, though the exact cost and time depend on the builder's skill, the materials and the size and design. As a second kayak project, or for the serious builder with some woodworking expertise, a strip-built boat can be an impressive piece of work. Kits with pre-cut and milled wood strips are commercially available. Skin-on-frame boats are more traditional in design, materials, and construction. They were traditionally made of driftwood, pegged or lashed together, and stretched seal skin, as those were the most readily available materials in the Arctic regions. Today, seal skin is usually replaced with canvas or nylon cloth covered with paint, polyurethane, or a Hypalon rubber coating, over a wooden or aluminum frame. Modern skin-on-frame kayaks often possess greater impact resistance than their fiberglass counterparts, but are less durable against abrasion or sharp objects. They are often the lightest kayaks. A special type of skin-on-frame kayak is the folding kayak. It has a collapsible frame, of wood, aluminum or plastic, or a combination thereof, and a skin of water-resistant and durable fabric. Many types have air sponsons built into the hull, making the kayak float even if flooded. Modern kayaks differ greatly from native kayaks in every aspect—from initial form through conception, design, manufacturing and usage. Modern kayaks are designed with computer-aided design (CAD) software, often in combination with CAD customized for naval design. Modern kayaks serve diverse purposes, ranging from slow and easy touring on placid water, to racing and complex maneuvering in fast-moving whitewater, to fishing and long-distance ocean excursions. 
Modern forms, materials and construction techniques make it possible to effectively serve these needs while continuing to leverage the insights of the original Arctic inventors. Kayaks may be long or short, wide or as narrow as the paddler's hips. They may attach one or two stabilizing hulls (outriggers), have twin hulls like catamarans, inflate or fold. They move via paddles, pedals that turn propellers or underwater flippers, under sail, or by motor. They're made of wood/canvas, wood, carbon fiber, fiberglass, Kevlar, polyethylene, polyester, rubberized fabric, neoprene, nitrylon, polyvinyl chloride (PVC), polyurethane, and aluminum. They may sport rudders, fins, bulkheads, seats, eyelets, foot braces and cargo hatches. They accommodate one, two, three or more paddlers. Modern kayaks have evolved into specialized types that may be broadly categorized according to their application as "sea or touring kayaks", "whitewater" (or "river") "kayaks", "surf kayaks", "racing kayaks", "fishing kayaks," and "recreational" kayaks. The broader kayak categories today are 'sit-in', which is inspired mainly by traditional kayak forms; 'sit-on-top' (SOT), which evolved from paddle boards that were outfitted with footrests and a backrest; 'hybrid', which are essentially canoes featuring a narrower beam and a reduced freeboard, enabling the paddler to propel them from the middle of the boat using a double-bladed 'kayak' paddle; and twin-hull kayaks, offering each of the paddler's legs a narrow hull of its own. In recent decades, kayak designs have proliferated to the point where the only broadly accepted common denominator is that they are designed mainly for paddling with a double-bladed kayak paddle. However, even this inclusive definition is being challenged by other means of human-powered propulsion, such as foot-activated pedal drives combined with rotating or sideways-moving propellers, electric motors, and even outboard motors. 
Recreational kayaks are designed for the casual paddler interested in fishing, photography, or a peaceful paddle on a lake, flatwater stream or protected salt water away from strong ocean waves. These boats presently make up the largest segment of kayak sales. Compared to other kayaks, recreational kayaks have a larger cockpit for easier entry and exit and a wider beam for more stability. They are generally less than in length and have limited cargo capacity. Less expensive materials like polyethylene and fewer options keep these boats relatively inexpensive. Most canoe/kayak clubs offer introductory instruction in recreational boats. They do not perform as well at sea. The recreational kayak is usually a type of touring kayak. "Sea kayaks" are typically designed for travel by one, two or even three paddlers on open water, and in many cases trade maneuverability for seaworthiness, stability, and cargo capacity. Sea-kayak sub-types include "skin-on-frame" kayaks with traditionally constructed frames, open-deck "sit-on-top" kayaks, and recreational kayaks. The sea kayak, though descended directly from traditional types, is implemented in a variety of materials. Sea kayaks typically have a longer waterline, and provisions for below-deck storage of cargo. Sea kayaks may also have rudders or skegs (fixed rudders) and upturned bow or stern profiles for wave shedding. Modern sea kayaks usually have two or more internal bulkheads. Some models can accommodate two or sometimes three paddlers. Sealed-hull (unsinkable) craft were developed for leisure use, as derivatives of surfboards (e.g. paddle or wave skis), or for surf conditions. Variants include planing surf craft, touring kayaks, and sea marathon kayaks. Increasingly, manufacturers build leisure 'sit-on-top' variants of extreme sports craft, typically using polyethylene to ensure strength and affordability, often with a skeg for directional stability. 
Water that enters the cockpit drains out through scupper holes—tubes that run from the cockpit to the bottom of the hull. Sit-on-top kayaks come in one- to four-paddler configurations. Sit-on-top kayaks are particularly popular for fishing and scuba diving, since participants need to easily enter and exit the water, change seating positions, and access hatches and storage wells. Ordinarily the seat of a sit-on-top is slightly above water level, so the center of gravity for the paddler is higher than in a traditional kayak. To compensate for the higher center of gravity, sit-on-tops are often wider and slower than a traditional kayak of the same length. Contrary to popular belief, the sit-on-top kayak hull is not self-bailing, since water penetrating it does not drain out automatically, as it does in bigger boats equipped with self-bailing systems. Furthermore, the sit-on-top hull cannot be molded in a way that would assure watertightness, and water may get in through various holes in its hull, usually around hatches and deck accessories. If the sit-on-top kayak is loaded to a point where such perforations are covered with water, or if the water paddled is rough enough that such perforations often go under water, the sit-on-top hull may fill with water without the paddler noticing it in time. Specialty surf boats typically have flat bottoms and hard edges, similar to surfboards. The design of a surf kayak promotes the use of an ocean surf wave (moving wave) as opposed to a river or feature wave (moving water). They are typically made from rotomolded plastic or fiberglass. Surf kayaking comes in two main varieties, High Performance (HP) and International Class (IC). HP boats tend to have a lot of nose rocker, little to no tail rocker, flat hulls, sharp rails and up to four fins set up as either a three-fin thruster or a quad fin. This enables them to move at high speed and maneuver dynamically. 
IC boats have to be at least long and until a recent rule change had to have a convex hull; now flat and slightly concave hulls are also allowed, although fins are not. Surfing on international boats tends to be smoother and more flowing, and they are thought of as kayaking's "longboarding". Surf boats come in a variety of materials, ranging from tough but heavy plastics to super-light, super-stiff but fragile foam-cored carbon fiber. Surf kayaking has become popular in traditional surfing locations, as well as new locations such as the Great Lakes. A variation on the closed-cockpit surf kayak is called a waveski. Although the waveski offers dynamics similar to a sit-on-top, its paddling technique, surfing performance and construction can be similar to surfboard designs. Whitewater kayaks are rotomolded in a semi-rigid, high-impact plastic, usually polyethylene. Careful construction ensures that the boat remains structurally sound when subjected to fast-moving water. The plastic hull allows these kayaks to bounce off rocks without leaking, although they scratch and eventually puncture with enough use. Whitewater kayaks range from long. There are two main types of whitewater kayak: One type, the "playboat", is short, with a scooped bow and blunt stern. These trade speed and stability for high maneuverability. Their primary use is performing tricks in individual water features or short stretches of river. In playboating or "freestyle" competition (also known as "rodeo" boating), kayakers exploit the complex currents of rapids to execute a series of tricks, which are scored for skill and style. The other primary type is the creek boat, which gets its name from its purpose: running narrow, low-volume waterways. Creekboats are longer and have far more volume than playboats, which makes them more stable, faster and higher-floating. 
Many paddlers use creekboats in "short boat" downriver races, and they are often seen on large rivers where their extra stability and speed may be necessary to get through rapids. Between the creekboat and playboat extremes is a category called "river-running" kayaks. These medium-sized boats are designed for rivers of moderate to high volume, and some, known as "river-running playboats", are capable of basic playboating moves. They are typically owned by paddlers who do not have enough whitewater involvement to warrant the purchase of more specialized boats. Squirt boating involves paddling both on the surface of the river and underwater. Squirt boats must be custom-fitted to the paddler to ensure comfort while maintaining the low interior volume necessary to allow the paddler to submerge completely in the river. Whitewater racers combine a fast, unstable lower hull portion with a flared upper hull portion to combine flat-water racing speed with extra stability in open water: they are not fitted with rudders and have similar maneuverability to flat-water racers. They usually require substantial skill to achieve stability, due to extremely narrow hulls. Whitewater racing kayaks, like all racing kayaks, are made to regulation lengths, usually of fiber-reinforced resin (usually epoxy or polyester reinforced with Kevlar, glass fiber, carbon fiber, or some combination). This form of construction is stiffer and has a harder skin than non-reinforced plastic construction such as rotomolded polyethylene: stiffer means faster, and harder means fewer scratches and therefore also faster. Sprint kayak is a sport held on calm water. Crews or individuals race over 200 m, 500 m, 1000 m or 5000 m, with the winning boat being the first to cross the finish line. The paddler is seated, facing forward, and uses a double-bladed paddle, pulling the blade through the water on alternate sides to propel the boat forward. 
In competition, the number of paddlers in a boat is indicated by a figure beside the type of boat: K1 signifies an individual kayak race, K2 pairs, and K4 four-person crews. Kayak sprint has featured in every Summer Olympics since it debuted at the 1936 Summer Olympics. Racing is governed by the International Canoe Federation. Slalom kayaks are flat-hulled, and—since the early 1970s—feature low-profile decks. They are highly maneuverable and stable, but not fast in a straight line. A specialized variant of racing craft called a "surf ski" has an open cockpit and can be up to long but only wide, requiring expert balance and paddling skill. Surf skis were originally created for surf and are still used in races in New Zealand, Australia, and South Africa. They have become popular in the United States for ocean races, lake races and even downriver races. Marathon races vary in distance from ten kilometres to over 1,000 kilometres for multi-day stage races. The term "kayak" is increasingly applied to craft that look little like traditional kayaks. Inflatables, also known as "duckies" or "IKs", can usually be transported by hand using a carry bag. They are generally made of Hypalon (a kind of neoprene), Nitrylon (a rubberized fabric), PVC, or polyurethane-coated cloth. They can be inflated with foot, hand or electric pumps. Multiple compartments in all but the least expensive models increase safety. They generally use low-pressure air, almost always below 3 psi. While many inflatables are non-rigid, essentially pointed rafts best suited for use on rivers and calm water, the higher-end inflatables are designed to be hardy, seaworthy vessels. Recently some manufacturers have added an internal frame (folding-style) to a multi-section inflatable sit-on-top kayak to produce a seaworthy boat. The appeal of inflatable kayaks lies in their portability, their durability (they don't dent), their ruggedness in whitewater (they bounce off rocks rather than break) and their easy storage.
In addition, inflatable kayaks are generally stable, have a small turning radius and are easy to master, although some models take more effort to paddle and are slower than traditional kayaks. Because inflatable kayaks are not as sturdy as traditional hard-shelled kayaks, many paddlers steer away from them; however, there have been considerable advances in inflatable kayak technology in recent years. Folding kayaks are direct descendants of the skin-on-frame boats used by the Inuit and Greenlandic peoples. Modern folding kayaks are constructed from a wooden or aluminum frame over which is placed a synthetic skin made of polyester, cotton canvas, polyurethane, or Hypalon. They are more expensive than inflatable kayaks, but have the advantage of greater stiffness and consequently better seaworthiness. Walter Höhn (anglicized Hoehn) built, developed and tested his design for a folding kayak in the whitewater rivers of Switzerland from 1924 to 1927. In 1928, on emigrating to Australia, he brought two of them with him, lodged a patent for the design and proceeded to manufacture them. In 1942 the Australian Director of Military Operations approached him to develop them for military use. Orders were placed and eventually a total of 1,024, notably the MKII and MKIII models, were produced by him and another enterprise, based on his 1942 patent (No. 117779). A kayak with pedals allows the kayaker to propel the vessel with a rotating propeller or underwater "flippers" rather than with a paddle; pedal kayakers use their legs rather than their arms. Traditional multi-hull vessels such as catamarans and outrigger canoes benefit from increased lateral stability without sacrificing speed, and these advantages have been successfully applied in twin-hull kayaks. "Outrigger kayaks" attach one or two smaller hulls to the main hull to enhance stability, especially for fishing, touring, kayak sailing and motorized kayaking.
Twin-hull kayaks feature two long and narrow hulls, and since all their buoyancy is distributed as far as possible from their centre line, they are more stable than mono-hull kayaks outfitted with outriggers. While native people of the Arctic regions hunted rather than fished from kayaks, in recent years kayak sport fishing has become popular in both fresh and salt water, especially in warmer regions. Traditional fishing kayaks are characterized by wide beams of up to that increase their lateral stability. Some are equipped with outriggers that increase their stability, and others feature twin hulls enabling stand-up paddling and fishing. Compared with motorboats, fishing kayaks are inexpensive and have few maintenance costs. Many kayak anglers like to customize their kayaks for fishing, a process known as 'rigging'. Kayaks were adapted for military use in the Second World War. They were used mainly by British Commando and special forces, principally the Combined Operations Pilotage Parties (COPPs), the Special Boat Service and the Royal Marines Boom Patrol Detachment. The latter made perhaps the best-known use of them in the Operation Frankton raid on Bordeaux harbor. Both the Special Air Service (SAS) and the Special Boat Service (SBS) used kayaks for reconnaissance in the 1982 Falklands War. US Navy SEALs reportedly used them at the start of Unified Task Force operations in Somalia in 1992. The SBS currently use Klepper two-man folding kayaks that can be launched from surfaced submarines or carried to the surface by divers from submerged ones. They can be parachuted from transport aircraft into the ocean or dropped from the back of Chinook helicopters. US Special Forces have used Kleppers but now primarily use Long Haul folding kayaks, which are made in the US. The Australian military MKII and MKIII folding kayaks were extensively used during the 1941–1945 Pacific War for some 33 raids and missions on and around the South-East Asian islands.
Documentation for this can be found in the National Archives of Australia official records, reference No. NAA K1214-123/1/06. They were deployed from disguised watercraft, submarines, Catalina aircraft, PT boats, motor launches and by parachute.
Imperial German Navy The Imperial German Navy ("Kaiserliche Marine") was the navy created at the time of the formation of the German Empire. It existed between 1871 and 1919, growing out of the small Prussian Navy (from 1867 the North German Federal Navy), which primarily had the mission of coastal defence. Kaiser Wilhelm II greatly expanded the navy and enlarged its mission. The key leader was Admiral Alfred von Tirpitz, who greatly expanded the size and quality of the navy while adopting the sea power theories of American strategist Alfred Thayer Mahan. The result was a naval arms race with Britain, as the German navy grew to become one of the greatest maritime forces in the world, second only to the Royal Navy. The German surface navy proved ineffective during World War I; its only major engagement, the Battle of Jutland, was indecisive. However, the submarine fleet was greatly expanded and posed a major threat to the British supply system. The Imperial Navy's main ships were to be turned over to the Allies, but were scuttled at Scapa Flow in 1919 by their German crews. All ships of the Imperial Navy were designated "SMS", for "Seiner Majestät Schiff" (His Majesty's Ship). The Imperial Navy achieved some important operational feats. At the Battle of Coronel it inflicted the first major defeat on the Royal Navy in over one hundred years, although the German squadron was subsequently defeated at the Battle of the Falkland Islands, with only one ship escaping destruction. The navy also emerged from the fleet action of the Battle of Jutland having destroyed more ships than it lost, although the strategic value of both of these encounters was minimal. The Imperial Navy was the first to operate submarines successfully on a large scale in wartime, with 375 submarines commissioned by the end of the First World War, and it also operated zeppelins.
Although it was never able to match the number of ships of the Royal Navy, it had technological advantages, such as better shells and propellant for much of the Great War. As a result, it never lost a ship to a catastrophic magazine explosion from an above-water attack, although the elderly pre-dreadnought sank rapidly at Jutland after a magazine explosion caused by an underwater attack. The unification of Germany under Prussian leadership was the defining point for the creation of the Imperial Navy in 1871. The newly proclaimed emperor, Wilhelm I, as King of Prussia, had previously been head of state of the strongest state forming part of the new empire. The navy remained the same as that operated by the empire's predecessor in the unification of Germany, the North German Federation, which itself in 1867 had inherited the navy of the Kingdom of Prussia. Article 53 of the new empire's constitution recognised the navy as an independent organisation, but until 1888 it was commanded by army officers and initially adopted the same regulations as the Prussian army. Supreme command was vested in the emperor, but its first appointed chief was "General der Infanterie" (General of the Infantry) Albrecht von Stosch. Kiel on the Baltic Sea and Wilhelmshaven on the North Sea served as the navy's principal naval bases. The former Navy Ministry became the Imperial Admiralty on 1 February 1872, while Stosch formally became an admiral in 1875. Initially the main task of the new Imperial Navy was coastal protection, with France and Russia seen as Germany's most likely future enemies. The Imperial Navy's tasks were then to prevent any invasion force from landing and to protect coastal towns from possible bombardment. In March 1872 a German Imperial Naval Academy was created at Kiel for training officers, followed in May by the creation of a 'Machine Engineer Corps', and in February 1873 a 'Medical Corps'.
In July 1879 a separate 'Torpedo Engineer Corps' was created to deal with torpedoes and mines. In May 1872 a ten-year building programme was instituted to modernise the fleet. This called for eight armoured frigates, six armoured corvettes, twenty light corvettes, seven monitors, two floating batteries, six avisos, eighteen gunboats and twenty-eight torpedo boats, at an estimated cost of 220 million gold marks. The building plan had to be approved by the "Reichstag", which controlled the allocation of funds, although one-quarter of the money came from French war reparations. In 1883 Stosch was replaced by another general, Count Leo von Caprivi. At this point the navy had seven armoured frigates and four armoured corvettes, 400 officers and 5,000 ratings. The objectives of coastal defence remained largely unchanged, but there was a new emphasis on development of the torpedo, which offered the possibility of relatively small ships successfully attacking much larger ones. In October 1887 the first torpedo division was created at Wilhelmshaven, and the second was based at Kiel. In 1887 Caprivi requested the construction of ten armoured frigates. Greater importance was placed at this time on development of the army, which was expected to be more important in any war. However, work on the Kiel Canal commenced in June 1887; the canal connected the North Sea with the Baltic through the Jutland peninsula, allowing German ships to travel between the two seas while avoiding waters controlled by other countries. This shortened the journey for commercial ships, but, more importantly, it united the two areas principally of concern to the German navy, at a cost of 150 million marks. Later, the protection of German maritime trade routes became important.
This soon involved the setting up of overseas supply stations, so-called foreign stations, and in the 1880s the Imperial Navy played a part in helping to secure the establishment of German colonies and protectorates in Africa, Asia and Oceania. In June 1888 Wilhelm II became emperor after the death of his father, Frederick III, who had ruled for only 99 days. He started his reign with the intention of doing for the navy what his grandfather Wilhelm I had done for the army. The creation of a maritime empire to rival the British and French empires became an ambition to mark Germany as a truly global great power. Wilhelm became Grand Admiral of the German Navy, and was also awarded honorific titles from all over Europe, becoming an admiral in the British, Russian, Swedish, Danish, Norwegian, Austro-Hungarian and Greek navies. On one occasion he wore the uniform of a British admiral to receive the visiting British ambassador. At this time the Imperial Navy had 534 officers and 15,480 men. The concept of expanding naval power, inevitably at the cost of not expanding other forces, was opposed by the three successive heads of the German armed forces, Waldersee, Schlieffen and Moltke, between 1888 and 1914. It would also have been more widely opposed had the Kaiser's intentions been widely known. Instead, he proceeded with a plan to expand the navy slowly, justifying enlargement step by step. In July 1888 Wilhelm II appointed Vice-Admiral Alexander von Monts as head of the admiralty. Monts oversaw the design of the , four of which were constructed by 1894 at a cost of 16 million marks each and a displacement of 10,000 tons. In 1889 Wilhelm II reorganised top-level control of the navy by creating a Navy Cabinet ("Marine-Kabinett") equivalent to the German Imperial Military Cabinet, which had previously functioned in the same capacity for both the army and navy.
The head of the Navy Cabinet was responsible for promotions, appointments, administration and issuing orders to naval forces. Captain Gustav von Senden-Bibran was appointed as its first head and remained so until 1906, when he was replaced by the long-serving Admiral Georg Alexander von Müller. The existing Imperial Admiralty was abolished and its responsibilities divided between two organisations. A new position of Chief of the Imperial Naval High Command was created, responsible for ship deployments, strategy and tactics, an equivalent to the supreme commander of the army. Vice Admiral Max von der Goltz was appointed in 1889 and remained in post until 1895. Construction and maintenance of ships and obtaining supplies was the responsibility of the State Secretary of the Imperial Navy Office ("Reichsmarineamt"), responsible to the chancellor and advising the "Reichstag" on naval matters. The first appointee was Rear Admiral Karl Eduard Heusner, followed shortly by Rear Admiral Friedrich von Hollmann from 1890 to 1897. Each of these three heads of department reported separately to Wilhelm II. In 1895 funding was agreed for five battleships of the , completed by 1902. The ships were innovative for their time, introducing a complex system of watertight compartments and storing coal along the sides of the ship to help absorb explosions. However, the ships went against the trend for increasingly larger main guns, having smaller-diameter guns than the "Brandenburg" design, but with a quick-loading design and more powerful secondary armament. Costs rose to 21 million marks each, as had displacement, to 11,500 tons. In 1892 Germany had launched the protected cruiser , the first navy ship to have triple propellers. She was succeeded by five protected cruisers, the last 'protected', as distinct from 'armoured', cruiser class constructed by Germany. The ships, completed between 1898 and 1900, had deck armour but not side armour and were intended for overseas duties.
Shortages of funding meant it was not possible to create several designs of cruiser specialised for long-range work, or more heavily armoured for fleet work. Work on an armoured cruiser design began in 1896, and the ship was commissioned in 1900. On 18 June 1897 Rear Admiral Alfred von Tirpitz was appointed State Secretary of the Navy, where he remained for nineteen years. Tirpitz advocated the cause of an expanded navy, necessary for Germany to defend her territories abroad. He had great success in persuading parliament to pass successive navy bills authorising expansions of the fleet. German foreign policy as espoused by Otto von Bismarck had been to deflect the interest of the great powers abroad while Germany consolidated her integration and military strength. Now Germany was to compete with the rest. Tirpitz started with a publicity campaign aimed at popularising the navy. He created popular magazines about the navy, arranged for Alfred Thayer Mahan's "The Influence of Sea Power upon History", which argued the importance of naval forces, to be translated into German and serialised in newspapers, arranged rallies in support and invited politicians and industrialists to naval reviews. Various pressure groups were formed to lobby politicians and spread publicity. One such organisation, the Navy League or "Flottenverein", was organised by principals in the steel industry (Alfred Krupp), shipyards and banks, gaining more than one million members. Political parties were offered concessions, such as taxes on imported grain, in exchange for their support for naval bills. On 10 April 1898 the first Navy Bill was passed by the "Reichstag". It authorised the maintenance of a fleet of 19 battleships, 8 armoured cruisers, 12 large cruisers and 30 light cruisers, to be constructed by 1 April 1904. Existing ships were counted in the total, but the bill provided for ships to be replaced every 25 years on an indefinite basis.
Five million marks annually were allocated to run the navy, with a total budget of 408 million marks for shipbuilding. This would bring the German fleet to a strength where it could contemplate challenging France or Russia, but it would remain clearly inferior to the world's largest fleet, the Royal Navy. Following the Boxer Rebellion in China and the Boer War, a second navy bill was passed on 14 June 1900. This approximately doubled the allocated number of ships, to 38 battleships, 20 armoured cruisers and 38 light cruisers. Significantly, the bill set no overall cost limit for the building programme. Expenditure for the navy was too great to be met from taxation: the "Reichstag" had limited powers to extend taxation without entering into negotiations with the constituent German states, and this was considered politically unviable. Instead, the bill was financed by massive loans. In 1899 Tirpitz was already exploring the possibility of extending the battleship total to 45, a target which rose to 48 by 1909. Tirpitz's ultimate goal was a fleet capable of rivalling the Royal Navy. As British public opinion turned against Germany, Admiral Sir John Fisher twice – in 1904 and 1908 – proposed using Britain's current naval superiority to 'Copenhagen' the German fleet, that is, to launch pre-emptive strikes against the Kiel and Wilhelmshaven naval bases as the Royal Navy had done against the Danish navy in 1801 and 1807. Tirpitz argued that if the fleet could achieve two-thirds the number of capital ships possessed by Britain, then it stood a chance of winning in a conflict. Britain had to maintain a fleet throughout the world and consider other naval powers, whereas the German fleet could be concentrated in German waters. Attempts were made to play down the perceived threat to Britain, but once the German fleet reached the position of equalling the other second-rank navies, it became impossible to avoid mention of the one great fleet it was intended to challenge.
Tirpitz hoped that other second-rank powers might ally with Germany, attracted by its navy. The policy of commencing what amounted to a naval arms race did not properly consider how Britain might respond. British policy, stated in the Naval Defence Act of 1889, was to maintain a navy superior to Britain's two largest rivals combined. The British Admiralty estimated that the German navy would be the world's second largest by 1906. Major reforms of the Royal Navy were undertaken, particularly by Fisher as First Sea Lord from 1904 to 1909. 154 older ships, including 17 battleships, were scrapped to make way for newer vessels. Reforms in training and gunnery were introduced to make good perceived deficiencies, which in part Tirpitz had counted upon to provide his ships with a margin of superiority. More capital ships were stationed in British home waters. A treaty with Japan in 1902 meant that ships could be withdrawn from East Asia, while the "Entente Cordiale" with France in 1904 meant that Britain could concentrate on guarding Channel waters, including the French coast, while France would protect British interests in the Mediterranean. By 1906 it was considered that Britain's only likely naval enemy was Germany. Five battleships of the were constructed from 1899 to 1904 at a cost of 22 million marks per ship. Five ships of the were built between 1901 and 1906 for a slightly greater 24 million marks each. Technological improvements meant that rapid-fire guns could be made larger, so the "Braunschweig" class had a main armament of guns. Due to torpedo improvements in range and accuracy, emphasis was placed on a secondary armament of smaller guns to defend against them. The five s constructed between 1903 and 1908 had similar armament to the "Braunschweig" class, but heavier armour, for the slightly greater sum of 24.5 million marks each. Development of armoured cruisers also continued. The design of "Fürst Bismarck" was improved upon in the subsequent , completed in 1902.
Two ships of the were commissioned in 1904, followed by two similar armoured cruisers commissioned in 1905 and 1906, at costs of around 17 million marks each. and followed, between 1904 and 1908, at an estimated cost of 20.3 million marks. Main armament was eight guns, but with six and eighteen guns for smaller targets. Eight light cruisers were constructed between 1902 and 1907, developed from the earlier . The ships had ten guns and were named after German towns. was the first German cruiser to be fitted with turbine engines, which were also trialled in the torpedo boat "S-125". Turbines were faster, quieter, lighter, more reliable and more fuel-efficient at high speeds. The first British experimental design (the destroyer ) had been constructed in 1901, and as a result Tirpitz had set up a special commission to develop turbines. No reliable German design was available by 1903, so British Parsons turbines were purchased. In 1899 the Imperial Naval High Command was replaced by the German Imperial Admiralty Staff ("Admiralstab"), responsible for planning, the training of officers, and naval intelligence. In time of war it was to assume overall command, but in peace it acted only in an advisory capacity. Direct control of various elements of the fleet was subordinated to the officers commanding those elements, accountable to the Kaiser. The reorganisation suited the Kaiser, who wanted to maintain direct control of his ships. A disadvantage was that it split apart the integrated military command structure which had previously balanced the importance of the navy within overall defence considerations. It suited Alfred von Tirpitz, because it removed the influence of the Admiralty Staff from naval planning, but left him the possibility, in wartime, of reorganising command around himself. Wilhelm II, however, never agreed to relinquish direct control of his fleet. On 3 December 1906 the Royal Navy received a new battleship, .
She became famous as the first of a new concept in battleship design, using an all-big-gun armament of a single calibre. She used turbine propulsion for greater speed and less space required by the machinery, and guns arranged so that three times as many could be brought to bear when firing ahead, and twice as many when firing broadside. The design was not a uniquely British concept, as similar ships were being built around the world, nor was it uniquely intended as a counter to German naval expansion, but its effect was to immediately require Germany to reconsider its naval building programme. The battleship design was complemented by the introduction of a variant with lighter armour and greater speed, which became the battlecruiser. The revolution in design, together with improvements in personnel and training, severely called into question the German assumption that a fleet of two-thirds the size of the Royal Navy would at least stand a chance in an engagement. By 1906 Germany was already spending 60% of revenue on the army. Either an enormous sum now had to be found to develop the navy further, or naval expansion had to be abandoned. The decision to continue was taken by Tirpitz in September 1905 and agreed by Chancellor Bernhard von Bülow and the Kaiser, while "Dreadnought" was still at the planning stage. The larger ships would naturally be more expensive, but would also require the enlargement of harbours, locks and the Kiel Canal, all of which would be enormously expensive. The estimated cost for new dreadnoughts was placed at 36.5 million marks for 19,000-ton displacement ships (larger than "Dreadnought" at 17,900 tons), and 27.5 million marks for battlecruisers. 60 million marks were allocated for dredging the canal. The "Reichstag" was persuaded to agree to the programme and passed a "Novelle" (a supplementary law) amending the navy bills and allocating 940 million marks for a dreadnought programme and the necessary infrastructure.
Two dreadnoughts and one battlecruiser were to be built each year. Construction of four s began in 1907 under the greatest possible secrecy. The chief German naval designer was Hans Bürkner. A principle was introduced that the thickness of side armour on a ship would equal the calibre of its large guns, while ships were increasingly divided internally into watertight compartments to make them more resistant to flooding when damaged. The design was hampered by the necessity of using reciprocating engines instead of the smaller turbines, since no sufficiently powerful design was available and acceptable to the German navy. Turrets could not be placed above the centre of the ship and instead had to be placed at the sides, meaning two of the six turrets would always be on the wrong side of the ship when firing broadsides. Main armament was twelve 28 cm guns. The ships were all completed by 1910, over budget, averaging 37.4 million marks each. In 1910 they were transferred from Kiel to Wilhelmshaven, where two new large docks had been completed and more were under construction. The first German battlecruiser, , was commenced in March 1908. Four Parsons turbines were used, improving speed to 27 knots and reducing weight. Four twin turrets mounted 28 cm guns; although the two centre turrets were still placed one on either side of the ship, they were offset so that both could now fire to either side. The design was considered a success, but the cost, at 35.5 million marks, was significantly above the 1906 allocation. Light cruiser development continued with the light cruisers, which were to become famous for their actions at the start of World War I in the Pacific. The ships were 3,300 tons, armed with ten 10.5 cm rapid-fire guns, with a speed of around 24 knots. cost 7.5 million marks, and 6 million marks. Four were produced between 1907 and 1911 at 4,400 tons and around 8 million marks each.
These had turbines and twelve 10.5 cm guns as main armament, but were also equipped to carry and lay 100 mines. From 1907 onward, all torpedo boats were constructed with turbine engines. Despite their ultimate importance, the German navy declined to take up the cause of another experiment, the submarine, until 1904. The first submarine, , was delivered in December 1906, built by Krupp's Germania yard in Kiel. She displaced 238 tons on the surface and 283 tons submerged. The kerosene engine gave 10 knots on the surface, with a range of . Submerged, the boat could manage 50 nautical miles at 5 knots using battery-electric propulsion. The boat followed a design by Maxime Laubeuf, first used successfully in 1897, having a double hull and flotation tanks around the outside of the main crew compartments. The submarine had just one torpedo tube at the front and a total of three torpedoes. The early engines were noisy and smoky, so a considerable boost to the usefulness of the submarine came with the introduction of quieter and cleaner diesel engines in 1910, which were much more difficult for an enemy to detect. German expenditure on ships was steadily rising. In 1907, 290 million marks were spent on the fleet, rising to 347 million marks, or 24 percent of the national budget, in 1908, with a predicted budget deficit of 500 million marks. By the outbreak of World War I, one billion marks had been added to Germany's national debt because of naval expenditure. While each German ship was more expensive than the last, the British managed to reduce the cost of the succeeding generations of (3 ships) and (3) battleships. Successive British battlecruisers were more expensive, but less so than their German equivalents. Overall, German ships were some 30% more expensive than the British.
This all contributed to growing opposition in the "Reichstag" to any further expansion, particularly when it was clear that Britain intended to match and exceed any German expansion programme. In the fleet itself, complaints began to be made in 1908 about underfunding and shortages of crews for the new ships. The State Secretary of the Treasury, Hermann von Stengel, resigned because he could see no way to resolve the budget deficit. The elections of 1907 had returned a "Reichstag" more favourable to military exploits, following the refusal of the previous parliament to grant funds to suppress uprisings in the colony of German South-West Africa. Despite the difficulties, Tirpitz persuaded the "Reichstag" to pass a further "Novelle" in March 1908. This reduced the service life of ships from 25 years to 20 years, allowing for faster modernisation, and increased the building rate to four capital ships per year. Tirpitz's target was a fleet of 16 battleships and 5 battlecruisers by 1914, and 38 battleships and 20 battlecruisers by 1920. There were also to be 38 light cruisers and 144 torpedo boats. The bill contained a restriction that building would fall to two ships per year in 1912, but Tirpitz was confident of changing this at a later date. He anticipated that German industry, now heavily involved in shipbuilding, would back a campaign to maintain a higher construction rate. Four battleships of the were laid down in 1909–10, with displacements of 22,800 tons, twelve guns in six turrets, reciprocating engines generating a maximum speed of 21 knots, and a price tag of 46 million marks. Again, the turret configuration was dictated by the need to use the centre of the ship for machinery, despite the disadvantages of the turret layout. The ships were now equipped with torpedoes. The s built between 1909 and 1913 introduced a change in design, as turbine engines were finally approved.
The ships had ten 30.5 cm guns, losing two of the centre side turrets but gaining an additional turret astern on the centre line. As with the "Von der Tann" design, which was drawn up at a similar time, all guns could be fired to either side in broadsides, meaning more guns could be brought to bear than with the "Helgoland" design, despite there being fewer in total. Five ships were constructed rather than the usual four, one to act as a fleet flagship. One ship, the , was equipped with only two turbines rather than three, with the intention of adding a diesel engine for cruising, but the Howaldt engine could not be developed in time. "Luitpold" had a top speed of 20 knots as a result, compared to 22 knots for the other ships. The ships were larger than the preceding class at 24,700 tons, but cheaper at 45 million marks. They formed part of the Third Squadron of the High Seas Fleet as it was constituted for World War I. Between 1908 and 1912 two s were constructed, adding an extra turret on the centre line astern, raised above the aft turret, but still using 28 cm guns. became part of the High Seas Fleet, but became part of the Mediterranean Squadron and spent World War I as part of the Ottoman navy. The ships cost 42.6 and 41.6 million marks, with a maximum speed of 28 knots. was constructed as a slightly enlarged version of the "Moltke" design, reaching a maximum speed of 29 knots. All cruisers were equipped with turbine engines from 1908 onwards. Between 1910 and 1912 four light cruisers of 4,600 tons were constructed, at around 7.4 million marks each. The ships were fitted with oil burners to supplement their main coal firing. These were followed by the similar but slightly enlarged and marginally faster and light cruisers. In 1907 a naval artillery school was established at Sonderburg, north of Kiel.
This aimed to address the difficulties with the new generation of guns, whose potentially greater range required aiming devices capable of directing them onto targets at extreme distances. By 1914, experiments were being conducted with guns of increasing size. Capital ships were fitted with spotting tops high up on masts with range-finding equipment, while ship design was altered to place turrets on the centre line of the ship for improved accuracy. The four "König"-class battleships were commenced between October 1911 and May 1912 and entered service in 1914 at a cost of 45 million marks, forming the other part of the Third Squadron of the High Seas Fleet. They were 28,500 tons, with a maximum speed of 21 knots from three triple-stage Brown-Boveri-Parsons turbines. Main armament was ten 30.5 cm guns in five twin turrets, arranged with two turrets fore, two aft and one in the centre of the ship. The second turret at either end was raised higher than the outer so that it could fire over the top (superfiring). As with "Prinzregent Luitpold", the ships were originally intended to have one diesel engine for cruising, but these were never developed and turbines were fitted instead. The ships were equipped with torpedo nets trailed along the hull, intended to stop incoming torpedoes, but these reduced maximum speed to an impractical 8 knots and were later removed. Construction began in 1910 of the first submarine powered by twin diesel engines. "U-19" was twice the size of the first German submarine and had five times the range, cruising at 8 knots with a maximum speed of 15 knots. There were now two bow and two stern torpedo tubes, with six torpedoes carried. The boats could also dive deeper than their designed operating depth. Spending on the navy increased inexorably year by year. In 1909 Chancellor Bernhard von Bülow and Treasury Secretary Reinhold von Sydow attempted to pass a new budget boosting taxes in an attempt to reduce the deficit. 
The Social Democratic parties refused to accept the increased taxes on goods, while the conservatives opposed increases in inheritance taxes. Bülow and Sydow resigned in defeat and Theobald von Bethmann-Hollweg became Chancellor. His attempted solution was to initiate negotiations with Britain for an agreed slowdown in naval building. Negotiations came to nothing when in 1911 the Agadir Crisis brought France and Germany into conflict. Germany attempted to 'persuade' France to cede territory in the Middle Congo in return for giving France a free hand in Morocco. The effect was to raise concerns in Britain over Germany's expansionist aims, and to encourage Britain to form a closer relationship with France, including naval cooperation. Tirpitz saw this once again as an opportunity to press for naval expansion and the continuation of the four capital ships per year building rate into 1912. The January 1912 elections brought a Reichstag where the Social Democrats, opposed to military expansion, became the largest party. The German army, mindful of the steadily increasing proportion of spending going to the navy, demanded an increase of 136,000 men to bring its size closer to that of France. In February 1912 the British war minister, Viscount Haldane, came to Berlin to discuss possible limits to naval expansion. Meanwhile, in Britain, the First Lord of the Admiralty, Winston Churchill, made a speech describing the German navy as a 'luxury', which was considered an insult when reported in Germany. The talks came to nothing, ending in recriminations over who had offered what. Bethmann-Hollweg argued for a guaranteed proportion of expenditure for the army, but failed when army officers refused to support him publicly. Tirpitz argued for six new capital ships, and got three, together with 15,000 additional sailors, in a new combined military budget passed in April 1912. 
The new ships, together with the existing reserve flagship and four reserve battleships, were to become one new squadron for the High Seas Fleet. In all, the fleet would have five squadrons of eight battleships, twelve large cruisers and thirty small, plus additional cruisers for overseas duties. Tirpitz intended that, with the rolling program of replacements, the existing coastal defence squadron of old ships would become a sixth fleet squadron, while the eight existing battlecruisers would be joined by eight more as replacements for the large cruisers presently in the overseas squadrons. The plan envisaged a main fleet of 100,000 men, 49 battleships and 28 battlecruisers by 1920. The Kaiser commented of the British, "... we have them up against the wall." Although Tirpitz had succeeded in getting more ships, the proportion of military expenditure devoted to the navy declined in 1912 and thereafter, from 35% in 1911 to 33% in 1912 and 25% in 1913. This reflected a growing conviction among military planners that a land war in Europe was increasingly likely, and a turning away from Tirpitz's scheme for worldwide expansion using the navy. In 1912 General von Moltke commented, "I consider war to be unavoidable, and the sooner the better." The Kaiser's younger brother, Admiral Prince Heinrich of Prussia, considered that the cost of the navy was now too great. In Britain, Churchill announced an intention to build two capital ships for every one constructed by Germany, and reorganised the fleet to move battleships from the Mediterranean to Channel waters. A policy was introduced of promoting British naval officers by merit and ability rather than time served, which saw rapid promotions for Jellicoe and Beatty, both of whom had important roles in the forthcoming World War I. By 1913 the French and British had plans in place for joint naval action against Germany, and France moved its Atlantic fleet from Brest to Toulon, replacing British ships. 
Britain also escalated the arms race by expanding the capabilities of its new battleships. The five "Queen Elizabeth"-class battleships of the 1912 programme, of 32,000 tons, would have 15-inch guns and would be completely oil-fuelled, allowing a speed of 25 knots. For 1912–13 Germany concentrated on battlecruisers, with three "Derfflinger"-class ships of 27,000 tons and 26–27 knots maximum speed, costing 56–59 million marks each. These had eight 30.5 cm guns in four twin turrets, two at either end, with the inner turret superfiring over the outer. "Derfflinger" was the first German ship to have anti-aircraft guns fitted. In 1913, Germany responded to the British challenge by laying down two "Bayern"-class battleships. These did not enter service until after the Battle of Jutland, so failed to take part in any major naval action of the war. They had a displacement of 28,600 tons, a crew of 1,100 and a speed of 22 knots, costing 50 million marks. Guns were arranged in the same pattern as on the preceding battlecruisers, but were now increased to 38 cm diameter. The ships had four 8.8 cm anti-aircraft guns and sixteen lighter 15 cm guns, but were coal fuelled. It was considered that coal bunkers at the sides of the ship added protection against penetrating shells, but Germany also did not have a reliable supply of fuel oil. Two more ships of the class were later laid down, but never completed. Three light cruisers ordered by the Russian Navy, costing around 9 million marks, commenced construction in German yards in 1912–1913. The ships were seized at the outbreak of World War I and taken into German service. Two larger cruisers were also commenced, and entered service in 1915. More torpedo boats were constructed, gradually increasing in size and reaching 800 tons with the V-25 to V-30 craft constructed by AG Vulcan before 1914. In 1912 Germany created a Mediterranean squadron consisting of the battlecruiser "Goeben" and the light cruiser "Breslau". Naval trials of balloons began in 1891, but the results were unsatisfactory and none were purchased by the navy. 
In 1895 Count Ferdinand von Zeppelin attempted to interest both the army and the navy in his new rigid airships, but without success. The Zeppelin rigids were considered too slow and there were concerns over their reliability when operating over water. In 1909 the navy rejected proposals for aircraft to be launched from ships, and in 1910 it again declined Zeppelin's airships. Finally, in 1911, trials with aircraft began, and in 1912 Tirpitz agreed to purchase the first airship for naval reconnaissance at a cost of 850,000 marks. The machine had insufficient range to operate over Britain, but had machine guns for use against aircraft and experimental bombs. The following year ten more were ordered and a new naval air division was created at Johannisthal, near Berlin. However, in September 1913 L 1 was destroyed in a storm, while the following month L 2 was lost in a gas explosion. Orders for the undelivered machines were cancelled, leaving the navy with one machine, the L 3. In 1910 Prince Heinrich had learned to fly and supported the cause of naval aviation. In 1911 experiments took place with Albatros seaplanes and in 1912 Tirpitz authorized 200,000 marks for seaplane trials. The Curtiss seaplane was adopted. By 1913 there were four aeroplanes, now including a British Sopwith, and long-term plans to create six naval air stations by 1918. By 1914, the "Marine-Fliegerabteilung", the naval counterpart to the well-established "Fliegertruppe" land-based aviation units of the Army, comprised twelve seaplanes and one landplane, and had a budget of 8.5 million marks. Trials in 1914 using seaplanes operating with the fleet were less than impressive; of the four taking part, one crashed, one was unable to take off and only one succeeded in all its tasks. The most successful aircraft had been the British design; experiments in Britain had meanwhile been proceeding with the support of Winston Churchill, and included converting ferries and liners into seaplane carriers. 
By the start of the First World War, the German Imperial Navy possessed 22 pre-dreadnoughts, 14 dreadnought battleships and 4 battlecruisers. A further three ships of the "König" class were completed between August and November 1914, and two "Bayern"-class battleships entered service in 1916. The battlecruisers "Derfflinger", "Lützow" and "Hindenburg" were completed in September 1914, March 1916 and May 1917 respectively. Admiral von Tirpitz became the commander of the Navy. The main fighting forces of the navy were to become the High Seas Fleet and the U-boat fleet. Smaller fleets were deployed to the German overseas protectorates, the most prominent being assigned to the East Asia Station at Tsingtao. The German Navy's U-boats were also instrumental in the sinking of the passenger liner and auxiliary cruiser "Lusitania" on 7 May 1915, one of the main events that led to the USA joining the war two years later in 1917. Minor engagements included the commerce raiding carried out by the "Emden" and by the sailing ship and commerce raider "Seeadler". The Imperial Navy also carried out land operations, e.g. operating the long-range Paris Gun, which was based on a naval gun. The Siege of Tsingtao used naval troops, as Tsingtao was a naval base and the Imperial Navy was directly under the Imperial Government (the German Army was made up of regiments from the various states). Following the Battle of Jutland, the capital ships of the Imperial Navy were confined to inactive service in harbour. In October 1918, the Imperial Naval Command in Kiel under Admiral Franz von Hipper, without authorization, planned to dispatch the fleet for a last battle against the Royal Navy in the English Channel. The naval order of 24 October 1918 and the preparations to sail first triggered the Kiel Mutiny among the affected sailors and then a general revolution which was to sweep aside the monarchy within a few days. 
The Marines were referred to as "Seebataillone" (sea battalions). They served in the Prussian Navy, the North German Federal Navy, the Imperial German Navy and in the modern German Navy. The "Marine-Fliegerabteilung" consisted of Zeppelins (airships), observation balloons and fixed-wing aircraft. The main use of the Zeppelins was in reconnaissance over the North Sea and the Baltic, where the endurance of the craft allowed them to guide German warships to a number of Allied vessels. Zeppelin patrolling had priority over any other airship activity. During the entire war around 1,200 scouting flights were made. During 1915 the German Navy had some 15 Zeppelins in commission and was able to have two or more patrolling continuously at any one time. They kept British ships from approaching Germany, spotted when and where the British were laying sea mines, and later aided in the destruction of those mines. Zeppelins would sometimes land on the sea surface next to a minesweeper, bring aboard an officer and show him the lie of the mines. The Naval and Army Air Services also directed a number of strategic raids against Britain, leading the way in bombing techniques and forcing the British to bolster their anti-aircraft defences. Airship raids on Britain were approved by the Kaiser on 9 January 1915, although he excluded London as a target and further demanded that no attacks be made on historic or government buildings or museums. The night-time raids were intended to target only military sites on the east coast and around the Thames estuary, but difficulties in navigation and the height from which the bombs were dropped made accurate bombing impossible, and most bombs fell on civilian targets or open countryside. Stationed in North Sea coastal airfields, German naval aircraft often fought against their British counterparts of the Royal Naval Air Service. Naval pilots flew aircraft that were also used by the German Army's "Luftstreitkräfte", in addition to seaplanes. 
Theo Osterkamp was one of the original naval pilots, the first German pilot to fly a land-based aircraft to England on a reconnaissance mission, and the naval air service's leading ace with 32 victories. By war's end, the roster of German naval flying aces also included such luminaries as Gotthard Sachsenberg (31 victories), Alexander Zenzes (18 victories), Friedrich Christiansen (13 victories), Karl Meyer (8 victories), Karl Scharon (8 victories), and Hans Goerth (7 victories). Another decorated aviator was Gunther Plüschow, who shot down a Japanese plane during the Siege of Tsingtao and was the only German combatant to escape from a prison camp in Britain. Naval air service units included the "Marine Jagdgruppe Flandern". After the end of World War I, the bulk of the navy's modern ships (74 in all) were interned at Scapa Flow (November 1918), where the entire fleet (with a few exceptions) was scuttled by its crews on 21 June 1919 on orders from its commander, Rear Admiral Ludwig von Reuter. Ernest Cox subsequently salvaged many of the Scapa Flow ships. The surviving ships of the Imperial Navy became the basis for the "Reichsmarine" of the Weimar Republic. The Imperial German Navy's rank and rating system combined that of Prussia with those of the navies of the other northern states.
Kriegsmarine The "Kriegsmarine" (literally "war navy") was the navy of Nazi Germany from 1935 to 1945. It superseded the Imperial German Navy of the German Empire (1871–1918) and the inter-war "Reichsmarine" (1919–1935) of the Weimar Republic. The "Kriegsmarine" was one of the three official branches of the "Wehrmacht", the German armed forces from 1933 to 1945, along with the "Heer" (army) and the "Luftwaffe" (air force). In violation of the Treaty of Versailles, the "Kriegsmarine" grew rapidly during German naval rearmament in the 1930s. The 1919 treaty had limited the size of the German navy and prohibited the building of submarines. "Kriegsmarine" ships were deployed to the waters around Spain during the Spanish Civil War (1936–1939) under the guise of enforcing non-intervention, but in reality supported the Nationalists against the Spanish Republicans. In January 1939, Plan Z was ordered, calling for surface naval parity with the British Royal Navy by 1944. When World War II broke out in September 1939, Plan Z was shelved in favour of a crash building program for submarines (U-boats) instead of capital surface warships, and the land and air forces were given priority for strategic resources. The Commander-in-Chief of the "Kriegsmarine" (as for all branches of the armed forces during the period of absolute Nazi power) was Adolf Hitler, who exercised his authority through the "Oberkommando der Marine". The "Kriegsmarine"'s most significant ships were the U-boats, most of which were constructed after Plan Z was abandoned at the beginning of World War II. Wolfpacks were rapidly assembled groups of submarines which attacked British convoys during the first half of the Battle of the Atlantic, but this tactic was largely abandoned by May 1943 as U-boat losses mounted. 
Along with the U-boats, surface commerce raiders (including auxiliary cruisers) were used to disrupt Allied shipping in the early years of the war, the most famous of these being the heavy cruisers "Admiral Graf Spee" and "Admiral Scheer" and the battleship "Bismarck". However, the adoption of convoy escorts, especially in the Atlantic, greatly reduced the effectiveness of surface commerce raiders against convoys. Following the end of World War II in 1945, the "Kriegsmarine"'s remaining ships were divided up among the Allied powers and were used for various purposes, including minesweeping. Under the terms of the 1919 Treaty of Versailles, Germany was allowed only a minimal navy of 15,000 personnel, six capital ships of no more than 10,000 tons, six cruisers, twelve destroyers, twelve torpedo boats, and no submarines or aircraft carriers. Military aircraft were also banned, so Germany could have no naval aviation. Under the treaty Germany could only build new ships to replace old ones. All the ships allowed and all personnel were taken over from the "Kaiserliche Marine", renamed the "Reichsmarine". From the outset, Germany worked to circumvent the military restrictions of the Treaty of Versailles. The Germans continued to develop U-boats through a submarine design office in the Netherlands ("NV Ingenieurskantoor voor Scheepsbouw") and a torpedo research program in Sweden, where the G7e torpedo was developed. Even before the Nazi seizure of power on 30 January 1933, the German government had decided on 15 November 1932 to launch a prohibited naval re-armament program that included U-boats, airplanes and an aircraft carrier. The launching of the first pocket battleship, the "Deutschland", in 1931 (as a replacement for the old pre-dreadnought battleship "Preussen") was a step in the formation of a modern German fleet. 
The building of the "Deutschland" caused consternation among the French and the British as they had expected that the restrictions of the Treaty of Versailles would limit the replacement of the pre-dreadnought battleships to coastal defence ships, suitable only for defensive warfare. By using innovative construction techniques, the Germans had built a heavy ship suitable for offensive warfare on the high seas while still abiding by the letter of the treaty. When the Nazis came to power in 1933, Adolf Hitler soon began to more brazenly ignore many of the Treaty restrictions and accelerated German naval rearmament. The Anglo-German Naval Agreement of 18 June 1935 allowed Germany to build a navy equivalent to 35% of the British surface ship tonnage and 45% of British submarine tonnage; battleships were to be limited to no more than 35,000 tons. That same year the "Reichsmarine" was renamed as the "Kriegsmarine". In April 1939, as tensions escalated between the United Kingdom and Germany over Poland, Hitler unilaterally rescinded the restrictions of the Anglo-German Naval Agreement. The building-up of the German fleet in the time period of 1935–1939 was slowed by problems with marshaling enough manpower and material for ship building. This was because of the simultaneous and rapid build-up of the German army and air force which demanded substantial effort and resources. Some projects, like the D-class cruisers and the P-class cruisers, had to be cancelled. The first military action of the "Kriegsmarine" came during the Spanish Civil War (1936–1939). Following the outbreak of hostilities in July 1936 several large warships of the German fleet were sent to the region. The heavy cruisers and , and the light cruiser were the first to be sent in July 1936. These large ships were accompanied by the 2nd Torpedo-boat Flotilla. 
The German presence was used to covertly support Franco's Nationalists, although the ships' immediate involvement was in humanitarian relief operations and in evacuating 9,300 refugees, including 4,550 German citizens. Following the brokering of the International Non-Intervention Patrol to enforce an international arms embargo, the "Kriegsmarine" was allotted the patrol area between Cabo de Gata (Almeria) and Cabo de Oropesa, and numerous vessels served as part of these duties. On 29 May 1937 the "Deutschland" was attacked off Ibiza by two bombers of the Republican Air Force. Total casualties from the Republican attack were 31 dead and 110 wounded, 71 of them seriously, mostly burn victims. In retaliation the "Admiral Scheer" shelled Almeria on 31 May, killing 19–20 civilians, wounding 50 and destroying 35 buildings. Following further attacks by Republican submarines on German ships off the port of Oran between 15 and 18 June 1937, Germany withdrew from the Non-Intervention Patrol. U-boats also participated in covert action against Republican shipping as part of Operation Ursula. At least eight U-boats engaged a small number of targets in the area throughout the conflict. (By comparison, the Italian "Regia Marina" operated 58 submarines in the area as part of the "Sottomarini Legionari".) The "Kriegsmarine" saw its main tasks as controlling the Baltic Sea and winning a war against France in conjunction with the German army, because France was seen as the most likely enemy in the event of war. But in 1938 Hitler wanted the possibility of winning a war at sea against Great Britain in the coming years, and therefore ordered plans for such a fleet from the "Kriegsmarine". From the three proposed plans (X, Y and Z) he approved Plan Z in January 1939. This blueprint for the new German naval construction program envisaged building a navy of approximately 800 ships during the period 1939–1947. Hitler demanded that the program be completed by 1945. 
The main force of Plan Z was to be six H-class battleships. In the version of Plan Z drawn up in August 1939, the fleet was to be greatly enlarged by 1945, and personnel strength was planned to rise to over 200,000. The planned naval program was not very far advanced by the time World War II began. In 1939 two cruisers and two H-class battleships were laid down, and parts for two further H-class battleships and three battlecruisers were in production. The strength of the German fleet at the beginning of the war was not even 20% of Plan Z. On 1 September 1939, the navy still had a total personnel strength of only 78,000, and it was not at all ready for a major role in the war. Because of the long time it would take to get the Plan Z fleet ready for action, and the shortage of workers and material in wartime, Plan Z was essentially shelved in September 1939 and the resources allocated for its realization were largely redirected to the construction of U-boats, which would be ready for war against the United Kingdom more quickly. The "Kriegsmarine" participated in the Battle of Westerplatte and the Battle of the Danzig Bay during the invasion of Poland. In 1939, major events for the "Kriegsmarine" were the sinking of the British aircraft carrier HMS "Courageous" and the British battleship HMS "Royal Oak", and the loss of the "Admiral Graf Spee" at the Battle of the River Plate. Submarine attacks on Britain's vital maritime supply routes (the Battle of the Atlantic) started immediately at the outbreak of war, although they were hampered by the lack of well-placed ports from which to operate. Throughout the war the "Kriegsmarine" was responsible for coastal artillery protecting major ports and important coastal areas. It also operated anti-aircraft batteries protecting major ports. In April 1940, the German Navy was heavily involved in the invasion of Norway, where it suffered significant losses, including the heavy cruiser "Blücher", sunk by artillery and torpedoes from Norwegian shore batteries at the Oscarsborg Fortress in the Oslofjord. 
Ten destroyers were lost in the Battles of Narvik (half of German destroyer strength at the time), along with two light cruisers: the "Königsberg", bombed and sunk by Royal Navy aircraft in Bergen, and the "Karlsruhe", sunk off the coast of Kristiansand by a British submarine. The "Kriegsmarine" did in return sink some British warships during this campaign, including the aircraft carrier HMS "Glorious". The losses in the Norwegian Campaign left only a handful of undamaged heavy ships available for the planned, but never executed, invasion of the United Kingdom (Operation Sea Lion) in the summer of 1940. There were serious doubts that the invasion sea routes could have been protected against British naval interference. The Fall of France and the conquest of Norway gave German submarines greatly improved access to British shipping routes in the Atlantic. At first, British convoys lacked escorts that were adequate either in numbers or equipment and, as a result, the submarines had much success for few losses (this period was dubbed the First Happy Time by the Germans). Italy entered the war in June 1940, and the Battle of the Mediterranean began: from September 1941 to May 1944 some 62 German submarines were transferred there, sneaking past the British naval base at Gibraltar. The Mediterranean submarines sank 24 major Allied warships (including 12 destroyers, 4 cruisers, 2 aircraft carriers and 1 battleship) and 94 merchant ships (449,206 tons of shipping). None of the Mediterranean submarines made it back to their home bases, as they were all either sunk in battle or scuttled by their crews at the end of the war. In 1941 the "Bismarck", one of the four modern German battleships, sank the British battlecruiser HMS "Hood" while breaking out into the Atlantic for commerce raiding. "Bismarck" was in turn hunted down by much superior British forces after being crippled by an air-launched torpedo, and was scuttled after being rendered a burning wreck by two British battleships. 
During 1941, the "Kriegsmarine" and the United States Navy became de facto belligerents, although war was not formally declared, leading to the sinking of the . This course of events were the result of the American decision to support Britain with its Lend-Lease program and the subsequent decision to escort Lend-Lease convoys with American war ships through the western part of the Atlantic. The Japanese attack on Pearl Harbor and the subsequent German declaration of war against the United States in December 1941 led to another phase of the Battle of the Atlantic. In Operation Drumbeat and subsequent operations until August 1942, a large number of Allied merchant ships were sunk by submarines off the American coast as the Americans had not prepared for submarine warfare, despite clear warnings (this was the so-called Second Happy Time for the German Navy). The situation became so serious that military leaders feared for the whole Allied strategy. The vast American ship building capabilities and naval forces were however now brought into the war and soon more than offset any losses inflicted by the German submariners. In 1942, the submarine warfare continued on all fronts, and when German forces in the Soviet Union reached the Black Sea, a few submarines were eventually transferred there. In February 1942, the three large warships stationed on the Atlantic coast at Brest were evacuated back to German ports for deployment to Norway. The ships had been repeatedly damaged by air attacks by the RAF, the supply ships to support Atlantic sorties had been destroyed by the Royal Navy, and Hitler now felt that Norway was the "zone of destiny" for these ships. The two battleships and and the heavy cruiser passed through the English Channel (Channel Dash) on their way to Norway despite British efforts to stop them. Not since the Spanish Armada in 1588 had any warships in wartime done this. 
It was a tactical victory for the "Kriegsmarine" and a blow to British morale, but the withdrawal removed the possibility of attacking Allied convoys in the Atlantic with heavy surface ships. With the German attack on the Soviet Union in June 1941, Britain started to send Arctic convoys with military goods around Norway to support its new ally. In 1942 German forces began heavily attacking these convoys, mostly with bombers and U-boats. The big ships of the "Kriegsmarine" in Norway were seldom involved in these attacks, because of the inferiority of German radar technology, and because Hitler and the leadership of the "Kriegsmarine" feared losses of these precious ships. The most effective of these attacks was the near destruction of Convoy PQ 17 in July 1942. Later in the war, German attacks on these convoys were mostly reduced to U-boat activities, and the mass of the Allied freighters reached their destinations in Soviet ports. The Battle of the Barents Sea in December 1942 was an attempt by a German naval surface force to attack an Allied Arctic convoy; however, the advantage was not pressed home and the force returned to base. There were serious implications: this failure infuriated Hitler, who nearly enforced a decision to scrap the surface fleet. Instead, resources were diverted to new U-boats, and the surface fleet became a lesser threat to the Allies. After December 1943, when the "Scharnhorst" had been sunk by HMS "Duke of York" in an attack on an Arctic convoy at the Battle of North Cape, most German surface ships in Atlantic bases were blockaded in, or close to, their ports as a "fleet in being", for fear of losing them in action and to tie up British naval forces. The largest of these ships, the battleship "Tirpitz", was stationed in Norway as a threat to Allied shipping and also as a defence against a potential Allied invasion. When she was sunk, after several attempts, by British bombers in November 1944 (Operation Catechism), several British capital ships could be moved to the Far East. 
From late 1944 until the end of the war, the surviving surface fleet of the "Kriegsmarine", a handful of heavy and light cruisers, was heavily engaged in providing artillery support to the retreating German land forces along the Baltic coast and in ferrying civilian refugees to the western Baltic parts of Germany (Mecklenburg, Schleswig-Holstein) in large rescue operations. Large parts of the population of eastern Germany fled the approaching Red Army out of fear of Soviet retaliation (mass rapes, killings and looting by Soviet troops did occur). The "Kriegsmarine" evacuated two million civilians and troops in the evacuation of East Prussia and Danzig from January to May 1945. It was during this activity that several catastrophic sinkings of large passenger ships occurred: the "Wilhelm Gustloff" and "Goya" were sunk by Soviet submarines, while the "Cap Arcona" was sunk by British bombers, each sinking claiming thousands of civilian lives. The "Kriegsmarine" also provided important assistance in the evacuation of fleeing German civilians from Pomerania and Stettin in March and April 1945. A desperate measure of the "Kriegsmarine" to fight the superior strength of the Western Allies from 1944 was the formation of the "Kleinkampfverbände" (Small Battle Units). These were special naval units of frogmen, manned torpedoes, motorboats laden with explosives, and so on. The most effective of these weapons were the midget submarines "Molch" and "Seehund". In the last stage of the war, the "Kriegsmarine" also organized a number of infantry divisions from its personnel. Between 1943 and 1945, a group of U-boats known as the "Monsun" Boats ("Monsun Gruppe") operated in the Indian Ocean from Japanese bases in the occupied Dutch East Indies and Malaya. Allied convoys had not yet been organized in those waters, so initially many ships were sunk. However, this situation was soon remedied. 
During the later war years, the "Monsun" boats were also used as a means of exchanging vital war supplies with Japan. During 1943 and 1944, owing to improved Allied anti-submarine tactics and equipment, the U-boat fleet started to suffer heavy losses. The turning point of the Battle of the Atlantic came during Black May in 1943, when the U-boat fleet started suffering heavy losses and the number of Allied ships sunk started to decrease. Radar, longer-range air cover, sonar, improved tactics and new weapons all contributed. German technical developments, such as the "Schnorchel", attempted to counter these. Near the end of the war a small number of the new "Elektroboot" U-boats (Types XXI and XXIII) became operational, the first submarines designed to operate submerged at all times. The "Elektroboote" had the potential to negate the Allied technological and tactical advantage, although they were deployed too late to see combat in the war. Following the capture of the naval base at Liepāja by the Germans on 29 June 1941, Latvia came under the command of the "Kriegsmarine". On 1 July 1941, the town commandant, "Korvettenkapitän" Stein, ordered that ten hostages be shot for every act of sabotage, and further placed civilians in the target zone by declaring that Red Army soldiers were hiding among them in civilian attire. On 5 July 1941 "Korvettenkapitän" Brückner, who had taken over from Stein, issued a set of anti-Jewish regulations in the local newspaper, "Kurzemes Vārds". On 16 July 1941, "Fregattenkapitän" Dr. Hans Kawelmacher was appointed the German naval commandant in Liepāja. On 22 July, Kawelmacher sent a telegram to the German Navy's Baltic Command in Kiel, stating that he wanted 100 SS and fifty "Schutzpolizei" (protective police) men sent to Liepāja for "quick implementation Jewish problem". Kawelmacher hoped to accelerate the killings, complaining: "Here about 8,000 Jews... 
with present SS-personnel, this would take one year, which is untenable for [the] pacification of Liepāja." Kawelmacher reported on 27 July 1941: "Jewish problem Libau largely solved by execution of about 1,100 male Jews by Riga SS commando on 24 and 25.7." After the war, in 1945, U-boat commander Heinz-Wilhelm Eck of "U-852" was tried and executed with two of his crewmen for shooting at survivors; the crew of another U-boat was likewise involved in shooting at the survivors of a sunken ship, but was never tried, having been lost at sea. After the war, the German surface ships that remained afloat (only the cruisers "Prinz Eugen" and "Nürnberg" and a dozen destroyers were operational) were divided among the victors by the "Tripartite Naval Commission". The US used the heavy cruiser "Prinz Eugen" as a target ship in nuclear testing at Bikini Atoll in 1946. Some ships (like the unfinished aircraft carrier "Graf Zeppelin") were used for target practice with conventional weapons, while others (mostly destroyers and torpedo boats) were put into the service of Allied navies that lacked surface ships after the war. The training barque SSS "Horst Wessel" was recommissioned as USCGC "Eagle" and remains in active service, assigned to the United States Coast Guard Academy. The British, French and Soviet navies received the destroyers, and some torpedo boats went to the Danish and Norwegian navies. For the purpose of mine clearing, the Royal Navy employed German crews and minesweepers from June 1945 to January 1948, organized in the German Mine Sweeping Administration (GMSA), which consisted of 27,000 members of the former "Kriegsmarine" and 300 vessels. The destroyers and the Soviet-allotted light cruiser "Nürnberg" were all retired by the end of the 1950s, but five escort destroyers were returned by the French to the new West German navy in the 1950s, and three Type XXI and XXIII U-boats scuttled in 1945 were raised by West Germany and integrated into its new navy. 
In 1956, with West Germany's accession to NATO, a new navy was established, referred to as the "Bundesmarine" (Federal Navy). Some "Kriegsmarine" commanders, such as Erich Topp and Otto Kretschmer, went on to serve in the "Bundesmarine". In East Germany the "Volksmarine" (People's Navy) was established in 1956. With the reunification of Germany in 1990, it was decided to use the name "Deutsche Marine" (German Navy). By the start of World War II, much of the "Kriegsmarine" consisted of modern ships: fast, well-armed and well-armoured. This had been achieved by concealment, but also by deliberately flouting the World War I peace terms and those of various naval treaties. However, the war started with the German Navy still at a distinct disadvantage in sheer size against what were expected to be its primary adversaries – the navies of France and Great Britain. Although a major re-armament of the navy (Plan Z) was planned, and initially begun, the start of the war in 1939 meant that the vast amounts of materiel required for the project were diverted to other areas. The sheer disparity in size compared to the other European powers' navies prompted the German naval commander-in-chief, Grand Admiral Erich Raeder, to write of his own navy once the war began: "The surface forces can do no more than show that they know how to die gallantly." A number of captured ships from occupied countries were added to the German fleet as the war progressed. Though six major units of the "Kriegsmarine" were sunk during the war (both "Bismarck"-class battleships and both "Scharnhorst"-class battleships, as well as two heavy cruisers), there were still many ships afloat (including four heavy cruisers and four light cruisers) as late as March 1945. Some ship types do not fit clearly into the commonly used ship classifications. Where there is argument, this has been noted. 
The main combat ships of the "Kriegsmarine" (excluding U-boats) were as follows. Construction of the aircraft carrier "Graf Zeppelin" was started in 1936, and construction of an unnamed sister ship was started two years later in 1938, but neither ship was completed. In 1942 the conversion of three German passenger ships ("Europa", "Potsdam", "Gneisenau") and two unfinished cruisers, the captured French light cruiser "De Grasse" and the German heavy cruiser "Seydlitz", to auxiliary carriers was begun. In November 1942 the conversion of the passenger ships was stopped because these ships were now seen as too slow for operations with the fleet; instead, the conversion of one of them, the "Potsdam", to a training carrier was begun. In February 1943 all work on carriers was halted because the German failure in the Battle of the Barents Sea had convinced Hitler that big warships were useless. All carrier engineering, such as catapults and arresting gear, was tested and developed at the "Erprobungsstelle See" Travemünde (naval test station at Travemünde), as were the aeroplanes intended for the carriers: the Fieseler Fi 167 ship-borne biplane torpedo and reconnaissance bomber and the navalized versions of two key early-war "Luftwaffe" aircraft, the Messerschmitt Bf 109T fighter and the Junkers Ju 87C "Stuka" dive bomber. The "Kriegsmarine" completed four battleships during its existence. The first pair were the 11-inch-gun "Scharnhorst" class, consisting of the "Scharnhorst" and "Gneisenau", which participated in the invasion of Norway (Operation Weserübung) in 1940 and then in commerce raiding, until the "Gneisenau" was heavily damaged by a British air raid in 1942 and the "Scharnhorst" was sunk in the Battle of the North Cape in late 1943. The second pair were the 15-inch-gun "Bismarck" class, consisting of the "Bismarck" and "Tirpitz". 
The "Bismarck" was sunk on her first sortie into the Atlantic in 1941 (Operation Rheinübung), although she did sink the battlecruiser "Hood" and severely damage the battleship "Prince of Wales", while the "Tirpitz" was based in Norwegian ports during most of the war as a fleet in being, tying up Allied naval forces and subject to a number of attacks by British aircraft and submarines. More battleships were planned (the H class), but construction was abandoned in September 1939. The pocket battleships were the "Deutschland" (renamed "Lützow"), "Admiral Scheer", and "Admiral Graf Spee". Modern commentators favour classifying these as "heavy cruisers", and the "Kriegsmarine" itself reclassified these ships as such ("Schwere Kreuzer") in 1940. In German usage these three ships were designed and built as "armoured ships" ("Panzerschiffe") – "pocket battleship" is an English label. The "Graf Spee" was scuttled by her own crew after the Battle of the River Plate, in the Río de la Plata estuary, in December 1939. "Admiral Scheer" was bombed on 9 April 1945 in port at Kiel, damaged essentially beyond repair, and rolled over at her moorings. After the war that part of the harbour was filled in with rubble and the hulk buried. "Lützow" (ex-"Deutschland") was bombed on 16 April 1945 in the Baltic off Swinemünde, just west of Stettin, and settled on the shallow bottom. With the Soviet Army advancing across the Oder, the ship was destroyed in place to prevent the Soviets from capturing anything useful. The wreck was dismantled and scrapped in 1948–1949. The World War I-era pre-dreadnought battleships "Schlesien" and "Schleswig-Holstein" were used mainly as training ships, although they also participated in several military operations, with the latter bearing the distinction of firing the opening shots of World War II. Two older pre-dreadnoughts were converted into radio-guided target ships in 1928 and 1930 respectively. Another was decommissioned in 1931 and struck from the naval register in 1936. 
Plans to convert her into a radio-controlled target ship for aircraft were cancelled with the outbreak of war in 1939. Three O-class battlecruisers were ordered in 1939, but after the war began the same year there were not enough resources to build them. The completed heavy cruisers were the "Admiral Hipper", "Blücher" and "Prinz Eugen"; the "Seydlitz" and "Lützow" were never completed. The term "light cruiser" is a shortening of the phrase "light armoured cruiser". Light cruisers were defined under the Washington Naval Treaty by gun calibre. "Light cruiser" describes a small ship that was armoured in the same way as an armoured cruiser: in other words, like standard cruisers, light cruisers possessed a protective belt and a protective deck. Prior to this, smaller cruisers tended to follow the protected cruiser model and possessed only an armoured deck. Among the "Kriegsmarine" light cruisers, three M-class cruisers were never completed, nor were the KH-1 and KH-2 (Kreuzer Holland 1 and 2), captured on the stocks in the Netherlands in 1940, on which building continued for the "Kriegsmarine". In addition, the former "Kaiserliche Marine" light cruiser "Niobe" was captured by the Germans on 11 September 1943 after the capitulation of Italy. She was pressed into "Kriegsmarine" service for a brief time before being destroyed by British MTBs. During the war, some merchant ships were converted into "auxiliary cruisers", and nine were used as commerce raiders, sailing under false flags to avoid detection and operating in all oceans with considerable effect. The German designation for these ships was "Handelsstörkreuzer" (commerce-disrupting cruiser), hence the HSK serial numbers assigned. Each also had a more commonly used administrative label, e.g. Schiff 16 = "Atlantis", Schiff 41 = "Kormoran", etc. Although the German World War II destroyer ("Zerstörer") fleet was modern and the ships were larger than conventional destroyers of other navies, it had problems: early classes were unstable, wet in heavy weather, suffered from engine problems and had short range. 
Some problems were solved with the evolution of later designs, but further developments were curtailed by the war and, ultimately, by Germany's defeat. In the first year of World War II, the destroyers were used mainly to sow offensive minefields in shipping lanes close to the British coast. Torpedo boats evolved through the 1930s from small vessels relying almost entirely on torpedoes into what were effectively small destroyers with mines, torpedoes and guns. Two classes of fleet torpedo boats were planned, but not built, in the 1940s. The E-boats were fast attack craft with torpedo tubes; over 200 boats of this type were built for the "Kriegsmarine". Thousands of smaller warships and auxiliaries served in the "Kriegsmarine", including minelayers, minesweepers, mine transports, netlayers, floating AA and torpedo batteries, command ships, decoy ships (small merchantmen with hidden weaponry), gunboats, monitors, escorts, patrol boats, sub-chasers, landing craft, landing support ships, training ships, test ships, torpedo recovery boats, dispatch boats, avisos, fishery protection ships, survey ships, harbour defence boats, target ships and their radio-control vessels, motor explosive boats, weather ships, tankers, colliers, tenders, supply ships, tugs, barges, icebreakers, hospital and accommodation ships, floating cranes and docks, and many others. The "Kriegsmarine" employed hundreds of auxiliary "Vorpostenboote" during the war, mostly civilian ships that were drafted and fitted with military equipment, for use in coastal operations. The U-boat arm of the "Kriegsmarine" was titled the "U-Bootwaffe" ("U-boat weapon"). At the outbreak of war, it had a fleet of 57 submarines (U-boats). This was increased steadily until mid-1943, when losses from Allied counter-measures matched the rate of new vessels launched. 
The principal types were the Type IX, a long-range type used in the western and southern Atlantic and the Indian and Pacific Oceans; the Type VII, the most numerous type, used principally in the North Atlantic; and the small Type II, for coastal waters. The Type X was a small class of minelayers, and the Type XIV was a specialized type used to support distant U-boat operations – the "Milchkuh" (milk cow). The Types XXI and XXIII, the "Elektroboote", could have negated much of the Allied anti-submarine tactics and technology, but only a few of this new type of U-boat became ready for combat by the end of the war. Post-war, they became the prototype for modern conventional submarines such as the Soviet "Whiskey" class. During World War II, about 60% of all U-boats commissioned were lost in action; 28,000 of the 40,000 U-boat crewmen were killed during the war and 8,000 were captured. The remaining U-boats were either surrendered to the Allies or scuttled by their own crews at the end of the war. The military campaigns in Europe yielded a large number of captured vessels, many of which were under construction. Nations represented included Austria (riverine craft), Czechoslovakia (riverine craft), Poland, Norway, Denmark, the Netherlands, Belgium, France, Yugoslavia, Greece, the Soviet Union, the United Kingdom, the United States (several landing craft) and Italy (after the armistice). Few of the incomplete ships of destroyer size or above were completed, but many smaller warships and auxiliaries were completed and commissioned into the "Kriegsmarine" during the war. Additionally, many captured or confiscated foreign civilian ships (merchantmen, fishing boats, tugboats, etc.) were converted into auxiliary warships or support ships. The first warship sunk in World War II was the Polish Navy destroyer "Wicher", sunk on 3 September 1939 by Junkers Ju 87 dive bombers of the air group intended for the aircraft carrier "Graf Zeppelin". 
This carrier air group (Trägergeschwader 186) was part of the "Luftwaffe" but at that time under the command of the "Kriegsmarine". Adolf Hitler was the Commander-in-Chief of all German armed forces, including the "Kriegsmarine". His authority was exercised through the "Oberkommando der Marine", or OKM, with a Commander-in-Chief ("Oberbefehlshaber der Kriegsmarine"), a Chief of Naval General Staff ("Chef des Stabes der Seekriegsleitung") and a Chief of Naval Operations ("Chef der Operationsabteilung"). The first Commander-in-Chief of the OKM was Erich Raeder, who had been Commander-in-Chief of the "Reichsmarine" when it was renamed and reorganized in 1935. Raeder held the post until falling out with Hitler after the German failure in the Battle of the Barents Sea. He was replaced on 30 January 1943 by Karl Dönitz, who held the command until he was appointed President of Germany upon Hitler's suicide in April 1945. Hans-Georg von Friedeburg was then Commander-in-Chief of the OKM for the short period until Germany surrendered in May 1945. Subordinate to these were regional, squadron and temporary flotilla commands. Regional commands covered significant naval regions and were themselves sub-divided as necessary. They were commanded by a "Generaladmiral" or an Admiral. There was a "Marineoberkommando" for the Baltic Fleet, Nord, Nordsee, Norwegen, Ost/Ostsee (formerly Baltic), Süd and West. The "Kriegsmarine" used a grid-reference system called "Gradnetzmeldeverfahren" to denote regions on a map. Each squadron (organized by type of ship) also had a command structure with its own flag officer. The commands were Battleships, Cruisers, Destroyers, Submarines ("Führer der Unterseeboote"), Torpedo Boats, Minesweepers, Reconnaissance Forces, Naval Security Forces, Big Guns and Hand Guns, and Midget Weapons. Major naval operations were commanded by a "Flottenchef", who controlled a flotilla and organized its actions during the operation. 
The commands were, by their nature, temporary. The "Kriegsmarine"'s ship design bureau, known as the "Marineamt", was administered by officers with experience in sea duty but not in ship design, while the naval architects who did the actual design work had only a theoretical understanding of design requirements. As a result, the German surface fleet was plagued by design flaws throughout the war. Communication was encrypted using the naval version of the Enigma machine, which drew its rotors from a set of eight. The "Luftwaffe" had a near-complete monopoly on all German military aviation, including naval aviation – a major source of ongoing interservice rivalry with the "Kriegsmarine". Catapult-launched spotter planes such as the Arado Ar 196 twin-float seaplane were manned by the so-called "Bordfliegergruppen" (shipboard flying groups). In addition, "Trägergeschwader 186" (Carrier Air Wing 186) operated two "Gruppen" ("Trägergruppe I/186" and "Trägergruppe II/186") equipped with the navalized Messerschmitt Bf 109T and Junkers Ju 87C Stuka; these units were intended to serve aboard the aircraft carrier "Graf Zeppelin", which was never completed, yet they provided the "Kriegsmarine" with some air power from bases on land. Furthermore, five coastal groups ("Küstenfliegergruppen") with reconnaissance aircraft, torpedo bombers, "Minensuch" aerial minesweepers and air-sea rescue seaplanes supported the "Kriegsmarine", although with fewer resources as the war progressed. The coastal batteries of the "Kriegsmarine" were stationed on the German coasts. With the conquest and occupation of other countries, coastal artillery was stationed along their coasts as well, especially in France and Norway as part of the Atlantic Wall. Naval bases were protected against enemy air raids by flak batteries of the "Kriegsmarine". The "Kriegsmarine" also manned the "Seetakt" sea radars on the coasts. 
At the beginning of World War II, on 1 September 1939, the "Marine Stoßtrupp Kompanie" (Marine Assault Troop Company) landed in Danzig from the old battleship "Schleswig-Holstein" to capture the Polish bastion at Westerplatte. A reinforced platoon of the company landed with soldiers of the German Army from destroyers on 9 April 1940 in Narvik. In June 1940 the "Marine Stoßtrupp Abteilung" (Marine Assault Troop Battalion) was flown from France to the Channel Islands to occupy this British territory. In September 1944 amphibious units unsuccessfully tried to capture the strategic island of Suursaari in the Gulf of Finland from Germany's former ally Finland (Operation Tanne Ost). With the invasion of Normandy in June 1944 and the Soviet advance from the summer of 1944, the "Kriegsmarine" started to form regiments and divisions of surplus personnel for the battles on land. As naval bases were lost to the Allied advance, more and more navy personnel became available for the ground troops of the "Kriegsmarine". About 40 regiments were raised and, from January 1945, six divisions; half of the regiments were absorbed by the divisions. Many different types of uniforms were worn by the "Kriegsmarine".
https://en.wikipedia.org/wiki?curid=17386
Knights of Labor Knights of Labor (K of L), officially the Noble and Holy Order of the Knights of Labor, was an American labor federation active in the late 19th century, especially the 1880s. It operated in the United States as well as in Canada, and had chapters in Great Britain and Australia. Its most important leader was Terence V. Powderly. The Knights promoted the social and cultural uplift of the working man and demanded the eight-hour day. In some cases it acted as a labor union, negotiating with employers, but it was never well organized or funded. It was notable in its ambition to organize across lines of gender and race and in its inclusion of both skilled and unskilled labor. After a rapid expansion in the mid-1880s, it suddenly lost its new members and became a small operation again. It was founded by Uriah Smith Stephens on December 28, 1869, reached 28,000 members in 1880, then jumped to 100,000 in 1884. By 1886, 20% of all workers were affiliated – nearly 800,000 members. Its frail organizational structure could not cope as it was battered by charges of failure and violence and by calumny over its association with the Haymarket Square riot. Most members abandoned the movement in 1886–1887, leaving at most 100,000 in 1890. Many opted to join groups that addressed their specific needs, instead of the KOL, which took on many different types of issues. The Panic of 1893 ended the Knights of Labor's importance. Remnants of the Knights of Labor continued in existence until 1949, when the group's last 50-member local dropped its affiliation. In 1869, Uriah Smith Stephens, James L. Wright, and a small group of Philadelphia tailors founded a secret organization known as the Noble Order of the Knights of Labor. The collapse of the National Labor Union in 1873 left a vacuum for workers looking for organization. The Knights became better organized, with a national vision, when they replaced Stephens with Terence V. Powderly. 
The body became popular with Pennsylvania coal miners during the economic depression of the mid-1870s, then grew rapidly. The KOL was a diverse industrial union open to all workers; its leaders felt that a broad membership would bring in points of view from all walks of working life. The Knights of Labor barred five groups from membership: bankers, land speculators, lawyers, liquor dealers and gamblers. Its members included low-skilled workers, railroad workers, immigrants, and steel workers. As membership expanded, the Knights began to function more as a labor union and less as a secret organization. During the 1880s, the Knights of Labor played a huge role in independent and third-party movements. Local assemblies began not only to emphasize cooperative enterprises, but to initiate strikes to win concessions from employers. The Knights of Labor brought together workers of different religions, races and genders and helped unite them behind a common cause. The new leader, Powderly, opposed strikes as a "relic of barbarism", but the size and diversity of the Knights afforded local assemblies a great deal of autonomy. In 1882, the Knights ended their membership rituals and removed the words "Noble Order" from their name. This was intended to mollify the concerns of Catholic members and the bishops, who wanted to avoid any resemblance to freemasonry. Though initially averse to strikes as a means of advancing their goals, the Knights did aid various strikes and boycotts. The Wabash Railroad strike of 1885 saw Powderly finally adapt and support an eventually successful strike against Jay Gould's Wabash Line. Gould met with Powderly and agreed to call off his campaign against the Knights of Labor, which had caused the turmoil originally. This gave momentum to the Knights, and membership surged. By 1886, the Knights had more than 700,000 members. The Knights' primary demand was for the eight-hour workday. 
They also called for legislation to end child and convict labor, as well as a graduated income tax, and they supported cooperatives. The only woman to hold office in the Knights of Labor, Leonora Barry, worked as an investigator. She described the horrific conditions in factories employing women and children; these reports made Barry the first person to collect national statistics on the American working woman. Powderly and the Knights tried to avoid divisive political issues, but in the early 1880s many Knights had become followers of Henry George's radical ideology, known now as Georgism. In 1883, Powderly officially recommended George's book and announced his support of a "single tax" on land values. During the New York mayoral election of 1886, Powderly successfully pushed the organization to back Henry George. The Knights of Labor helped to bring together many different types of people from all walks of life – for example, Catholic and Protestant Irish-born workers. The KOL appealed to them because it worked very closely with the Irish Land League. The Knights had a mixed record on inclusiveness and exclusiveness. They accepted women and blacks (after 1878) and their employers as members, and advocated the admission of blacks into local assemblies. However, the organization tolerated the segregation of assemblies in the South. Bankers, doctors, lawyers, stockholders, and liquor manufacturers were excluded because they were considered unproductive members of society. Asians were also excluded, and in November 1885 a branch of the Knights in Tacoma, Washington violently expelled the city's Chinese workers, who amounted to nearly a tenth of the city's overall population at the time. The Union Pacific Railroad came into conflict with the Knights. When the Knights in Wyoming refused to work more hours in 1885, the railroad hired Chinese workers as strikebreakers and to stir up racial animosity. 
The result was the Rock Springs massacre, which killed scores of Chinese workers and drove the rest out of Wyoming. About 50 African-American sugar-cane laborers organized by the Knights went on strike and were murdered by strikebreakers in the 1887 Thibodaux massacre in Louisiana. The Knights strongly supported passage of the Chinese Exclusion Act of 1882 and the Contract Labor Law of 1885, as did many other labor groups, demonstrating the limits of their commitment to solidarity. While they claimed not to be "against immigration", their anti-Asian racism demonstrated the limits and inconsistency of their anti-racist platform. The Knights of Labor attracted many Catholics, who made up a large part of the membership – perhaps a majority. Powderly was also a Catholic. However, the Knights' use of secrecy, similar to that of the Masons, during its early years concerned many bishops of the church. The Knights used secrecy and deception to help prevent employers from firing members. After the Archbishop of Quebec condemned the Knights in 1884, twelve American archbishops voted 10 to 2 against doing likewise in the United States. Furthermore, Cardinal James Gibbons and Archbishop John Ireland defended the Knights, and Gibbons went to the Vatican to talk to the hierarchy. In 1886, right after their peak, the Knights of Labor began to lose members to the American Federation of Labor. Their fall has been attributed to their lack of adaptability and their attachment to old-style industrial capitalism. Though often overlooked, the Knights of Labor contributed to the tradition of labor protest songs in America. The Knights frequently included music in their regular meetings, and encouraged local members to write and perform their work. In Chicago, James and Emily Talmadge, printers and supporters of the Knights of Labor, published the songbook "Labor Songs Dedicated to the Knights of Labor" (1885). 
The song "Hold the Fort" [also "Storm the Fort"], a Knights of Labor pro-labor revision of the hymn by the same name, became the most popular labor song prior to Ralph Chaplin's IWW (Industrial Workers of the World) anthem "Solidarity Forever". Pete Seeger often performed this song and it appears on a number of his recordings. Songwriter and labor singer Bucky Halker includes the Talmadge version, entitled "Labor's Battle Song," on his CD "Don't Want Your Millions" (Revolting Records 2000). Halker also draws heavily on the Knights songs and poems in his book on labor song and poetry, "For Democracy, Workers and God: Labor Song-Poems and Labor Protest, 1865-1895" (University of Illinois Press, 1991). The Knights of Labor supported the Chinese Exclusion Act because it believed that industrialists were using Chinese workers as a wedge to keep wages low.
https://en.wikipedia.org/wiki?curid=17387
Kryptonite Kryptonite is a fictional material that appears primarily in Superman stories. In its best-known form, it is a green, crystalline material originating from Superman's home world of Krypton that emits a peculiar radiation which weakens Superman but is generally harmless to humans in short-term exposure. There are other varieties of kryptonite, such as red and gold kryptonite, which have different but still generally negative effects on Superman. Due to Superman's popularity, "kryptonite" has become a byword for an exploitable weakness, synonymous with "Achilles' heel". An unpublished 1940 story titled "The K-Metal from Krypton", written by Superman co-creator Jerry Siegel, featured a prototype of kryptonite: a mineral from the planet Krypton that drained Superman of his strength while giving superhuman powers to mortals. The story was rejected because in it Superman reveals his identity to Lois Lane. The mineral known as kryptonite was first officially introduced in the radio serial "The Adventures of Superman", in the story "The Meteor from Krypton", broadcast in June 1943. An apocryphal story claims that kryptonite was introduced to allow Superman's voice actor, Bud Collyer, to take vacations at a time when the radio serial was performed live: in an episode where Collyer would not be present, Superman would be incapacitated by kryptonite, and a substitute voice actor would make groaning sounds. This tale was recounted by Julius Schwartz in his memoir. However, the historian Michael J. Hayde disputes it: in "The Meteor from Krypton", Superman is never exposed to kryptonite. If kryptonite allowed Collyer to take vacations, that was a fringe benefit discovered later; more likely, kryptonite was introduced as a plot device for Superman to discover his origin. In the radio serial, Krypton was located in the same solar system as Earth, in the same orbit, but on the opposite side of the Sun. 
This provided an easy explanation for how kryptonite found its way to Earth. Kryptonite was incorporated into the comic-book mythos with "Superman" #61 (November 1949). Editor Dorothy Woolfolk stated in an interview with "Florida Today" in August 1993 that she "felt Superman's invulnerability was boring". Various forms of the fictional material have been created over the years in "Superman" publications. Columbia Pictures produced two 15-part motion picture serials that used kryptonite as a plot device: "Superman" (1948) and "Atom Man vs. Superman" (1950).
https://en.wikipedia.org/wiki?curid=17390
Kosovo Kosovo (Albanian: "Kosova"; Serbian: "Kosovo"), officially the Republic of Kosovo, is a partially-recognised state in Southeast Europe, subject to a territorial dispute with the Republic of Serbia. Covering an area of 10,887 square kilometres, Kosovo is landlocked in the centre of the Balkans and bordered by the uncontested territory of Serbia to the north and east, North Macedonia to the southeast, Albania to the southwest and Montenegro to the west. For its size, it possesses varied and diverse landscapes, climates, geology and hydrology. Most of central Kosovo is dominated by the vast plains and fields of Metohija and Kosovo. The Albanian Alps and the Šar Mountains rise in the southwest and southeast respectively. The earliest known human settlements in what is now Kosovo belonged to the Neolithic Starčevo and Vinča cultures. During the Classical period, it was inhabited by Illyrian-Dardanian and Celtic peoples. In 168 BC, the area was annexed by the Romans. In the Middle Ages, it was conquered by the Byzantine, Bulgarian, and Serbian Empires. The Battle of Kosovo of 1389 is considered one of the defining moments in Serbian medieval history. The region was the core of the Serbian medieval state, and it has been the seat of the Serbian Orthodox Church from the 14th century, when the church's status was upgraded to a patriarchate. Kosovo was part of the Ottoman Empire from the 15th to the early 20th century. In the late 19th century, it became the centre of the Albanian National Awakening. Following their defeat in the Balkan Wars, the Ottomans ceded Kosovo to Serbia and Montenegro. Both countries joined Yugoslavia after World War I and, following a period of Yugoslav unitarism in the Kingdom, the post-World War II Yugoslav constitution established the Autonomous Province of Kosovo and Metohija within the Yugoslav constituent republic of Serbia. 
Tensions between Kosovo's Albanian and Serb communities simmered through the 20th century and occasionally erupted into major violence, culminating in the Kosovo War of 1998 and 1999, which resulted in the withdrawal of the Yugoslav army and the establishment of the United Nations Interim Administration Mission in Kosovo. On 17 February 2008, Kosovo unilaterally declared its independence from Serbia. It has since gained diplomatic recognition as a sovereign state from a substantial number of UN member states, though some recognitions have since been withdrawn. Serbia does not recognise Kosovo as a sovereign state, although with the Brussels Agreement of 2013 it has accepted its institutions. While Serbia recognises administration of the territory by Kosovo's elected government, it continues to claim it as the Autonomous Province of Kosovo and Metohija. Kosovo has a lower-middle-income economy and, by the measures of international financial institutions, has experienced solid economic growth over the last decade, with growth every year since the onset of the financial crisis of 2007–2008. Kosovo is a member of the International Monetary Fund and World Bank, and has applied for membership of Interpol and for observer status in the Organisation of Islamic Cooperation. The entire region that today corresponds to the territory is commonly referred to in English simply as "Kosovo" and in Albanian as "Kosova" (definite form) or "Kosovë" (indefinite form). In Serbia, a formal distinction is made between the eastern and western areas; the term "Kosovo" is used for the eastern part centred on the historical Kosovo Field, while the western part is called "Metohija" (known as "Dukagjini" in Albanian). "Kosovo" is the Serbian neuter possessive adjective of "kos" (кос), "blackbird", an ellipsis for "Kosovo Polje", 'blackbird field', the name of a plain situated in the eastern half of today's Kosovo and the site of the 1389 Battle of Kosovo Field. The name of the plain was applied to the Kosovo Province created in 1864. 
Albanians also refer to Kosovo as Dardania, the name of an ancient kingdom and later Roman province, which covered the territory of modern-day Kosovo. The name is derived from the ancient tribe of the Dardani, possibly related to a Proto-Albanian word "dardā", which means "pear". The former Kosovo President Ibrahim Rugova had been an enthusiastic backer of a "Dardanian" identity, and the Kosovan flag and presidential seal refer to this national identity. However, the name "Kosova" remains more widely used among the Albanian population. The current borders of Kosovo were drawn while part of Yugoslavia in 1945, when the Autonomous Region of Kosovo and Metohija (1945–1963) was created as an administrative division of the new People's Republic of Serbia. In 1963, it was raised from the level of an autonomous region to the level of an autonomous province as the Autonomous Province of Kosovo and Metohija (1963–1968). In 1968, the dual name "Kosovo and Metohija" was reduced to a simple "Kosovo" in the name of the Socialist Autonomous Province of Kosovo. In 1990, the province was renamed the Autonomous Province of Kosovo and Metohija. The official conventional long name of the state is "Republic of Kosovo", as defined by the Constitution of Kosovo, and is used to represent Kosovo internationally. Additionally, as a result of an arrangement agreed between Pristina and Belgrade in talks mediated by the European Union, Kosovo has participated in some international forums and organisations under the title "Kosovo*", with a footnote stating "This designation is without prejudice to positions on status, and is in line with UNSC 1244 and the ICJ Opinion on the Kosovo declaration of independence". This arrangement, dubbed the "asterisk agreement", was set out in an 11-point accord of 24 February 2012. In prehistory, the Starčevo culture and the succeeding Vinča culture were active in the region. 
The area in and around Kosovo has been inhabited for nearly 10,000 years. During the Neolithic age, Kosovo lay within the area of the Vinča-Turdaş culture, which is characterised by West Balkan black and grey pottery. Bronze and Iron Age tombs have been found in Metohija. The region's favourable position and abundant natural resources were ideal for the development of life from prehistoric times, as attested by the hundreds of archaeological sites discovered and identified throughout Kosovo, which present its rich archaeological heritage. The number of sites with archaeological potential is increasing as a result of ongoing findings and investigations throughout Kosovo, as well as many surface traces that offer a new overview of the antiquity of Kosovo. The earliest traces documented in the territory of Kosovo belong to the Stone Age: there are indications that cave dwellings might have existed, for example at the Radivojce Cave near the spring of the Drin river, the Grnčar Cave in the Vitina municipality, and the Dema and Karamakaz Caves of Peć, among others. However, human settlement during the Paleolithic or Old Stone Age has not yet been scientifically confirmed. Therefore, until evidence of Paleolithic and Mesolithic settlement is confirmed, the Neolithic sites are considered the chronological beginning of population in Kosovo. From this period until today Kosovo has been inhabited, and traces of the activities of prehistoric, ancient and medieval societies are visible throughout its territory, while at some archaeological sites multilayered settlements clearly reflect the continuity of life through the centuries. During antiquity, the area which now makes up Kosovo was inhabited by various tribal ethnic groups, who were liable to move, enlarge, fuse and fissure with neighbouring groups. As such, it is difficult to locate any such group with precision. 
The Dardani were a prominent group in the region during the late Hellenistic and early Roman eras. Their ethno-linguistic affiliation as either Thracian or Illyrian is difficult to determine. The Dardani retained an individuality and succeeded in maintaining themselves as an ethnic unit, and they played an important role in the genesis of the new peoples in the region. The area was conquered by Rome in the 160s BC and incorporated into the Roman province of Illyricum in 59 BC. Subsequently, it became part of Moesia Superior in AD 87. The region was exposed to an increasing number of 'barbarian' raids from the 4th century AD onwards, culminating with the Slavic migrations of the 6th and 7th centuries. Archaeologically, the early Middle Ages represent a hiatus in the material record, and whatever was left of the native provincial population fused into the Slavs. The subsequent political and demographic history of Kosovo is not known with absolute certainty until the 13th century. Archaeological findings suggest that there was steady population recovery and progression of the Slavic culture seen elsewhere throughout the Balkans. The region was absorbed into the Bulgarian Empire in the 850s, where Byzantine culture was cemented in the region. It was re-taken by the Byzantines after 1018 and became part of the newly established Theme of Bulgaria. As the centre of Slavic resistance to Constantinople, the region often switched between Serbian and Bulgarian rule on one hand and Byzantine rule on the other, until the Serbian Grand Prince Stefan Nemanja secured it by the end of the 12th century. An insight into the region is provided by the Byzantine historian-princess Anna Comnena, who wrote of "Serbs" being the "main" inhabitants of the region. The zenith of Serbian power was reached in 1346, with the formation of the Serbian Empire. During the 13th and 14th centuries, Kosovo became a political, cultural and religious centre of the Serbian Kingdom. 
In the late 13th century, the seat of the Serbian Archbishopric was moved to Peć, and rulers centred themselves between Prizren and Skopje, during which time thousands of Christian monasteries and feudal-style forts and castles were erected. Stefan Dušan used Prizren Fortress as the capital of the Empire. When the Serbian Empire fragmented into a conglomeration of principalities in 1371, Kosovo became the hereditary land of the House of Branković. In the late 14th and the 15th centuries, parts of Kosovo, the easternmost area of which was located near Pristina, were part of the Principality of Dukagjini, which was later incorporated into an anti-Ottoman federation of all Albanian principalities, the League of Lezhë. Medieval Monuments in Kosovo is today a combined UNESCO World Heritage Site consisting of four Serbian Orthodox churches and monasteries. The constructions were founded by members of the Nemanjić dynasty, the most important dynasty of Serbia in the Middle Ages. In the 1389 Battle of Kosovo, Ottoman forces defeated a coalition led by Lazar Hrebeljanović. Some historians, most notably Noel Malcolm, argue that the battle of Kosovo in 1389 did not end with an Ottoman victory and that "Serbian statehood did survive for another seventy years." Soon after, Lazar's son accepted nominal Turkish vassalage (as did some other Serbian principalities) and Lazar's daughter was married to the Sultan to seal the peace. By 1459, the Ottomans had conquered the new Serbian capital of Smederevo, leaving Belgrade and Vojvodina under Hungarian rule until the second quarter of the 16th century. Kosovo was part of the Ottoman Empire from 1455 to 1912, at first as part of the "eyalet" of Rumelia, and from 1864 as a separate province ("vilayet"). During this time, Islam was introduced to the population. 
The Vilayet of Kosovo was an area much larger than today's Kosovo; it included all of today's Kosovo territory, sections of the Sandžak region cutting into present-day Šumadija and Western Serbia and Montenegro, along with the Kukës municipality and the surrounding region in present-day northern Albania, and parts of north-western North Macedonia, with the city of Skopje (then Üsküp) as its capital. Between 1881 and 1912 (its final phase), it was internally expanded to include other regions of present-day North Macedonia, including larger urban settlements such as Štip ("İştip"), Kumanovo ("Kumanova") and Kratovo ("Kratova"). According to some historians, Serbs likely formed a majority of Kosovo's population from the 8th to the mid-19th century. Nevertheless, this claim is difficult to prove, as historians who base their works on Ottoman sources of the time give solid evidence that at least the western and central parts of Kosovo had an Albanian majority. The scholar Fredrick F. Anscombe shows that Prizren and Vučitrn ("Vulçitrin") had no Serbian population in the early 17th century. Prizren was inhabited by a mix of Catholic and Muslim Albanians, while Vučitrn had a mix of Albanian and Turkish speakers, alongside a tiny Serbian minority. Gjakova was founded by Albanians in the 16th century, and Peć ("İpek") had a continuous presence of the Albanian Kelmendi tribe. Central Kosovo was mixed, but large parts of the Drenica Valley were ethnically Albanian. Central Kosovo, as well as the cities of Prizren and Gjakova and the region of Has, regularly supplied the Ottoman forces with levies and mercenaries. Kosovo was part of the wider Ottoman region occupied by Austrian forces during the Great Turkish War of 1683–99, but the Ottomans re-established their rule of the region. Such acts of assistance by the Austrian Empire (then arch-rivals of the Ottoman Empire), or Russia, were always abortive or temporary at best. 
In 1690, the Serbian Patriarch Arsenije III led thousands of people from Kosovo to the Christian north, in what came to be known as the Great Serb Migration. Anscombe casts doubt on whether this exodus affected Kosovo, since there is no evidence that parts of Kosovo were depopulated; evidence of depopulation can only be found in areas between Niš and Belgrade. Some Albanians from Skopje and other regions were displaced in order to fill some areas around Niš, but there is no evidence that such events took place in Kosovo. In 1766, the Ottomans abolished the Patriarchate of Peć and fully imposed the "jizya" on its non-Muslim population. Although initially stout opponents of the advancing Turks, Albanian chiefs ultimately came to accept the Ottomans as sovereigns. The resulting alliance facilitated the mass conversion of Albanians to Islam. Given that the Ottoman Empire's subjects were divided along religious (rather than ethnic) lines, Islamisation greatly elevated the status of Albanian chiefs. Prior to this, they were organised along simple tribal lines, living in the mountainous areas of modern Albania (from Kruje to the Šar range). Soon, they expanded into a depopulated Kosovo, as well as northwestern Macedonia, although some might have been autochthonous to the region. However, Banac favours the idea that the main settlers of the time were Vlachs. Many Albanians gained prominent positions in the Ottoman government. "Albanians had little cause of unrest", according to author Dennis Hupchik. "If anything, they grew important in Ottoman internal affairs." In the 19th century, there was an awakening of ethnic nationalism throughout the Balkans. The underlying ethnic tensions became part of a broader struggle of Christian Serbs against Muslim Albanians. The ethnic Albanian nationalist movement was centred in Kosovo. In 1878 the League of Prizren was formed. 
This was a political organisation that sought to unify all the Albanians of the Ottoman Empire in a common struggle for autonomy and greater cultural rights, although they generally desired the continuation of the Ottoman Empire. The League was disbanded in 1881 but enabled the awakening of a national identity among Albanians. Albanian ambitions competed with those of the Serbs. The Kingdom of Serbia wished to incorporate this land that had formerly been within its empire. The modern Albanian-Serbian conflict has its roots in the expulsion of the Albanians in 1877–1878 from areas that became incorporated into the Principality of Serbia. During and after the Serbian–Ottoman War of 1876–78, between 30,000 and 70,000 Muslims, mostly Albanians, were expelled by the Serb army from the Sanjak of Niš and fled to the Kosovo Vilayet. At the turn of the century, in 1901, widespread massacres were committed against the Serbian population by the Albanian population across the Kosovo Vilayet. The Young Turk movement took control of the Ottoman Empire after the revolution of 1908 and deposed Sultan Abdul Hamid II in 1909. The movement supported a centralised form of government and opposed any sort of autonomy desired by the various nationalities of the Ottoman Empire. An allegiance to Ottomanism was promoted instead. An Albanian uprising in 1912 exposed the empire's northern territories in Kosovo and Novi Pazar, which led to an invasion by the Kingdom of Montenegro. The Ottomans suffered a serious defeat at the hands of Albanians in 1912, culminating in the Ottoman loss of most of its Albanian-inhabited lands. The Albanians threatened to march all the way to Salonika and reimpose Abdul Hamid. A wave of Albanians in the Ottoman army ranks also deserted during this period, refusing to fight their own kin. In September 1912, a joint Balkan force made up of Serbian, Montenegrin, Bulgarian and Greek forces drove the Ottomans out of most of their European possessions. 
The rise of nationalism hampered relations between Albanians and Serbs in Kosovo, due to influence from Russians, Austrians and Ottomans. After the Ottomans' defeat in the First Balkan War, the 1913 Treaty of London was signed, with Western Kosovo (Metohija) ceded to the Kingdom of Montenegro and Eastern Kosovo ceded to the Kingdom of Serbia. There were concerted Serbian colonisation efforts in Kosovo during various periods between Serbia's 1912 takeover of the province and World War II, so that the Serb population of Kosovo increased considerably before World War II, though it fell afterwards. An exodus of the local Albanian population occurred. Serbian authorities promoted the creation of new Serb settlements in Kosovo as well as the assimilation of Albanians into Serbian society. Numerous colonist Serb families moved into Kosovo, shifting the demographic balance between Albanians and Serbs. In the winter of 1915–16, during World War I, Kosovo saw the retreat of the Serbian army as Kosovo was occupied by Bulgaria and Austria-Hungary. In 1918, the Allied Powers pushed the Central Powers out of Kosovo. After the end of World War I, the Kingdom of Serbia was transformed into the Kingdom of Serbs, Croats and Slovenes on 1 December 1918. Kosovo was split into four counties, three being a part of Serbia (Zvečan, Kosovo and southern Metohija) and one of Montenegro (northern Metohija). However, a new administration system from 26 April 1922 split Kosovo among three districts (oblasts) of the Kingdom: Kosovo, Raška and Zeta. In 1929, the country was transformed into the Kingdom of Yugoslavia, and the territories of Kosovo were reorganised among the Banate of Zeta, the Banate of Morava and the Banate of Vardar. In order to change the ethnic composition of Kosovo, between 1912 and 1941 a large-scale Serbian re-colonisation of Kosovo was undertaken by the Belgrade government. 
Meanwhile, Kosovar Albanians were denied the right to receive education in their own language, alongside other non-Slavic or unrecognised Slavic nations of Yugoslavia: the kingdom recognised only the Croat, Serb, and Slovene nations as constituent nations, while other Slavs had to identify as one of the three official Slavic nations and non-Slavic nations were deemed only minorities. Albanians and other Muslims were forced to emigrate, mainly with the land reform which struck Albanian landowners in 1919, but also with direct violent measures. In 1935 and 1938, two agreements between the Kingdom of Yugoslavia and Turkey were signed on the expatriation of 240,000 Albanians to Turkey; the programme was not completed because of the outbreak of World War II. After the Axis invasion of Yugoslavia in 1941, most of Kosovo was assigned to Italian-controlled Albania, with the rest being controlled by Germany and Bulgaria. A three-dimensional conflict ensued, involving inter-ethnic, ideological, and international affiliations, with the first being most important. Nonetheless, these conflicts were relatively low-level compared with other areas of Yugoslavia during the war years, with one Serb historian estimating that 3,000 Albanians and 4,000 Serbs and Montenegrins were killed, and two others estimating war dead at 12,000 Albanians and 10,000 Serbs and Montenegrins. An official investigation conducted by the Yugoslav government in 1964 recorded nearly 8,000 war-related fatalities in Kosovo between 1941 and 1945, 5,489 of whom were Serb and Montenegrin and 2,177 of whom were Albanian. It is not disputed that between 1941 and 1945 tens of thousands of Serbs, mostly recent colonists, fled from Kosovo. Estimates range from 30,000 to 100,000. There had also been large-scale Albanian immigration from Albania to Kosovo, which some scholars estimate in the range of 72,000 to 260,000 people (with a tendency to escalate, the last figure appearing in a petition of 1985). 
Some historians and contemporary references emphasise that a large-scale migration of Albanians from Albania to Kosovo is not recorded in Axis documents. The province in its present outline first took shape in 1945 as the Autonomous Kosovo-Metohian Area. Until World War II, the only entity bearing the name of Kosovo had been a political unit carved from the former vilayet, which bore no special significance to its internal population. In the Ottoman Empire (which previously controlled the territory), it had been a vilayet whose borders were revised on several occasions. When the Ottoman province had last existed, it included areas which were by then either ceded to Albania or found within the newly created Yugoslav republics of Montenegro or Macedonia (including its previous capital, Skopje), with another part in the Sandžak region of southwest Serbia. Tensions between ethnic Albanians and the Yugoslav government were significant, not only because of ethnic frictions but also because of political and ideological concerns, especially regarding relations with neighbouring Albania. Harsh repressive measures were imposed on Kosovo Albanians due to suspicions that there were sympathisers of the Stalinist regime of Enver Hoxha of Albania. In 1956, a show trial was held in Pristina in which multiple Albanian Communists of Kosovo were convicted of being infiltrators from Albania and were given long prison sentences. The high-ranking Serbian communist official Aleksandar Ranković sought to secure the position of the Serbs in Kosovo and gave them dominance in Kosovo's nomenklatura. Islam in Kosovo at this time was repressed, and both Albanians and Muslim Slavs were encouraged to declare themselves Turkish and emigrate to Turkey. At the same time, Serbs and Montenegrins dominated the government, security forces, and industrial employment in Kosovo. 
Albanians resented these conditions and protested against them in the late 1960s, denouncing the actions taken by the authorities in Kosovo as colonialist and demanding that Kosovo be made a republic, or declaring support for Albania. After the ouster of Ranković in 1966, the agenda of pro-decentralisation reformers in Yugoslavia, especially from Slovenia and Croatia, succeeded in the late 1960s in attaining substantial decentralisation of powers, creating substantial autonomy in Kosovo and Vojvodina and recognising a Muslim Yugoslav nationality. As a result of these reforms, there was a massive overhaul of Kosovo's nomenklatura and police, which shifted from being Serb-dominated to ethnic Albanian-dominated through large-scale dismissals of Serbs. Further concessions were made to the ethnic Albanians of Kosovo in response to unrest, including the creation of the University of Pristina as an Albanian-language institution. These changes created widespread fear among Serbs that they were being made second-class citizens in Yugoslavia. By the 1974 Constitution of Yugoslavia, Kosovo was granted major autonomy, allowing it to have its own administration, assembly, and judiciary, as well as membership in the collective presidency and the Yugoslav parliament, in which it held veto power. In the aftermath of the 1974 constitution, concerns over the rise of Albanian nationalism in Kosovo rose with the widespread celebrations in 1978 of the 100th anniversary of the founding of the League of Prizren. Albanians felt that their status as a "minority" in Yugoslavia had made them second-class citizens in comparison with the "nations" of Yugoslavia and demanded that Kosovo be a constituent republic, alongside the other republics of Yugoslavia. Protests by Albanians in 1981 over the status of Kosovo resulted in Yugoslav territorial defence units being brought into Kosovo and a state of emergency being declared, resulting in violence and the protests being crushed. 
In the aftermath of the 1981 protests, purges took place in the Communist Party, and rights that had been recently granted to Albanians were rescinded, including ending the provision of Albanian professors and Albanian-language textbooks in the education system. Due to very high birth rates, the proportion of Albanians increased from 75% to over 90%. In contrast, the number of Serbs barely increased, and in fact dropped from 15% to 8% of the total population, since many Serbs departed from Kosovo in response to the tight economic climate and increased incidents with their Albanian neighbours. While there was tension, charges of "genocide" and planned harassment have been debunked as an excuse to revoke Kosovo's autonomy. For example, in 1986 the Serbian Orthodox Church published an official claim that Kosovo Serbs were being subjected to an Albanian program of 'genocide'. Even though such claims were disproved by police statistics, they received wide attention in the Serbian press, which led to further ethnic problems and the eventual removal of Kosovo's status. Beginning in March 1981, Kosovar Albanian students of the University of Pristina organised protests seeking that Kosovo become a republic within Yugoslavia and demanding their human rights. The protests were brutally suppressed by the police and army, with many protesters arrested. During the 1980s, ethnic tensions continued with frequent violent outbreaks against Yugoslav state authorities, resulting in a further increase in the emigration of Kosovo Serbs and other ethnic groups. The Yugoslav leadership tried to suppress protests of Kosovo Serbs seeking protection from ethnic discrimination and violence. After the Tito-Stalin split in 1948, relations between Stalinist Albania and Yugoslavia were also broken. 
Language policy was of utmost importance in communist Yugoslavia, which after World War II was reorganised as a federation of ethnolinguistically defined nations, in emulation of the interwar Soviet nationalities policy. For instance, in 1944, the Macedonian language was proclaimed for the sake of distancing the former Vardar Banovina, which had been incorporated into wartime Bulgaria, from Bulgarian language and culture. Likewise, in postwar Yugoslavia's Socialist Autonomous Province of Kosovo, the local Albanian language was distanced from Albania's Tosk-based standard by basing it on the Kosovar dialect of Gheg. As a result, a standard Kosovar language was formed. However, after the rapprochement between Albania and Yugoslavia at the turn of the 1970s, Belgrade adopted Albania's Tosk-based standard of the Albanian language, which ended the brief flourishing of the Gheg-based Kosovar language. Inter-ethnic tensions continued to worsen in Kosovo throughout the 1980s. In 1989, Serbian President Slobodan Milošević, employing a mix of intimidation and political maneuvering, drastically reduced Kosovo's special autonomous status within Serbia and started the cultural oppression of the ethnic Albanian population. Kosovar Albanians responded with a non-violent separatist movement, employing widespread civil disobedience and the creation of parallel structures in education, medical care, and taxation, with the ultimate goal of achieving the independence of Kosovo. In July 1990, the Kosovo Albanians proclaimed the existence of the Republic of Kosova, and declared it a sovereign and independent state in September 1992. In May 1992, Ibrahim Rugova was elected its president in an election in which only Kosovo Albanians participated. During its lifetime, the Republic of Kosova was only officially recognised by Albania. 
By the mid-1990s, the Kosovo Albanian population was growing restless, as the status of Kosovo was not resolved as part of the Dayton Agreement of November 1995, which ended the Bosnian War. By 1996, the Kosovo Liberation Army (KLA), an ethnic Albanian guerrilla paramilitary group that sought the separation of Kosovo and the eventual creation of a Greater Albania,
https://en.wikipedia.org/wiki?curid=17391
Konqueror Konqueror is a free and open-source web browser and file manager that provides web access and file-viewer functionality for file systems (such as local files, files on a remote FTP server and files in a disk image). It forms a core part of the KDE Software Compilation. Developed by volunteers, Konqueror can run on most Unix-like operating systems. The KDE community licenses and distributes Konqueror under the GNU General Public License version 2. The name "Konqueror" is a play on the names of the two primary competing browsers at the time of its first release: "first comes the Navigator, then Explorer, and then the Konqueror". It also follows the KDE naming convention: the names of most KDE programs begin with the letter K. Konqueror first appeared with version 2 of KDE on October 23, 2000, replacing its predecessor, KFM (the KDE file manager). With the release of KDE 4, Dolphin replaced Konqueror as the default KDE file manager, but the KDE community continues to maintain Konqueror as the default KDE web browser. Konqueror can utilize all KIOslaves installed on the user's system; a complete list is available in the KDE Info Center's Protocols section. Konqueror supports a tabbed document interface and split views, wherein a window can contain multiple documents in tabs. Multiple document interfaces are not supported; however, it is possible to recursively divide a window to view multiple documents simultaneously, or simply to open another window. Konqueror's user interface is somewhat reminiscent of Microsoft's Internet Explorer, though it is more customizable. It works extensively with "panels", which can be rearranged or added. For example, one could have an Internet bookmarks panel on the left side of the browser window, and by clicking a bookmark, the respective web page would be viewed in the larger panel to the right. 
Alternatively, one could display a hierarchical list of folders in one panel and the content of the selected folder in another. Panels are quite flexible and can even include, among other KParts (components), a console window, a text editor, or a media player. Panel configurations can be saved, and there are some default configurations. (For example, "Midnight Commander" displays a screen split into two panels, where each one contains a folder, web site, or file view.) Navigation functions (back, forward, history, etc.) are available during all operations. Most keyboard shortcuts can be remapped using a graphical configuration, and navigation can be conducted by assigning letters to nodes in the active file view via the control key. The address bar has extensive autocompletion support for local directories, past URLs, and past search terms. Konqueror has been developed as an autonomous web browser project. It uses KHTML as its browser engine, which is compliant with HTML and supports JavaScript, Java applets, CSS, SSL, and other relevant open standards. An alternative layout engine, "kwebkitpart", is available from Extragear. While KHTML is the default web-rendering engine, Konqueror is a modular application and other rendering engines are available. In particular, the WebKitPart, which uses the KHTML-derived WebKit engine, saw considerable support in the KDE 4 series. When the KHTML rendering backend is chosen, the user can make a full archive of any given webpage, which is stored in an archive file with the ".war" extension. Konqueror integrates several customizable search services which can be accessed by entering the service's abbreviation code (for example, gg: for Google, or wp: for Wikipedia) followed by the search term(s). Users can add their own search services; for instance, to retrieve English Wikipedia articles, a shortcut may be added with the corresponding search URL as its template. 
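The web-shortcut mechanism described above (an abbreviation code followed by search terms) can be sketched in a few lines. This is a simplified, hypothetical illustration: the shortcut table and URL templates below are examples chosen for the sketch, not Konqueror's actual configuration format.

```python
# Hypothetical sketch of Konqueror-style web shortcuts: an abbreviation
# maps to a URL template; the search terms are percent-encoded and
# substituted for the placeholder. Anything without a known prefix is
# treated as an ordinary address.
from urllib.parse import quote

SHORTCUTS = {
    "gg": "https://www.google.com/search?q={}",
    "wp": "https://en.wikipedia.org/wiki/Special:Search?search={}",
}

def expand(address: str) -> str:
    """Expand 'abbrev:terms' into a full search URL, if known."""
    prefix, sep, terms = address.partition(":")
    if sep and prefix in SHORTCUTS:
        return SHORTCUTS[prefix].format(quote(terms))
    return address  # not a shortcut; leave the address untouched

print(expand("gg:KDE Konqueror"))
# https://www.google.com/search?q=KDE%20Konqueror
```

Note that `"http://example.org"` also contains a colon, but its prefix (`http`) is not in the shortcut table, so it passes through unchanged.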
KHTML's rendering speed is on par with that of competing browsers, but sites with customized JavaScript are often problematic due to KHTML's much smaller mind- and market-share, resulting in fewer JavaScript features built into the JS engine. Kubuntu's 10.10 Maverick Meerkat release switched the default browser from Konqueror to rekonq. Kubuntu subsequently switched from rekonq to Firefox, with the release of 14.04 Trusty Tahr. Konqueror also allows browsing the local directory hierarchy—either by entering locations in the address bar, or by selecting items in the file browser window. It allows browsing in different views, which differ in their usage of icons and layout. Files can also be executed, viewed, copied, moved, and deleted. The user can also open an embedded version of Konsole, via KDE's KParts technology, in which they can directly execute shell commands. In addition to the Konsole KPart, Konqueror can also use a Filelight KPart, to view a radial diagram of the user's filesystem. Although this functionality has not been removed from Konqueror, as of KDE 4, Dolphin has replaced Konqueror as the default file manager. Dolphin can – like Konqueror – divide each window or tab into multiple panes. Konqueror makes more powerful use of this feature, allowing as many vertically and horizontally divided panes as desired. Each can link to different content or even remote locations, so that Konqueror becomes a powerful graphical tool to manage content on multiple servers all in one window, "dragging and dropping" files between locations. Using the KParts object model, Konqueror executes components that are capable of viewing (and sometimes editing) specific filetypes and embeds their client area directly into the Konqueror panel in which the respective files have been opened. This makes it possible to, for example, view an OpenDocument (via Calligra) or PDF document directly within Konqueror. 
Any application that implements the KParts model correctly can be embedded in this fashion. KParts can also be used to embed certain types of multimedia content into HTML pages; for example, the KMPlayer KPart enables Konqueror to show embedded video on web pages. In addition to browsing files and web sites, Konqueror uses KIO plugins to extend its capabilities well beyond those of other browsers and file managers. It uses components of KIO, the KDE I/O plugin system, to access different protocols such as HTTP and FTP (support for these is built in), WebDAV, SMB (Windows shares), SFTP, and FISH (a handy replacement for the latter when the SFTP subsystem is disabled on the remote host). Similarly, Konqueror can use KIO plugins (called IOslaves) to access ZIP files and other archives, to process ed2k links (eDonkey/eMule), or even to browse audio CDs ("audiocd:/") and rip them via drag-and-drop. Likewise, the "man:" and "info:" IOslaves can be used to fetch man- and info-formatted documentation. An embedded-systems version, Konqueror Embedded, is available. Unlike the full version of Konqueror, Konqueror Embedded is purely a web browser. It does not require KDE or even the X Window System. A single static library, it is designed to be as small as possible while providing all necessary functions of a web browser, such as support for HTML 4, CSS, JavaScript, cookies, and SSL. KGet is a free download manager for KDE and is part of the KDE Network package. By default it is the download manager used by Konqueror, but it can also be used with Mozilla Firefox and rekonq. KGet was featured by "Tux Magazine" and "Free Software Magazine". On KDE 3, KGet 0.8.x supported HTTP/FTP downloads. On KDE Software Compilation 4, KGet 2 was released; it supported bandwidth throttling, segmented and multi-threaded downloads, and the BitTorrent protocol.
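The IOslave idea (one uniform interface keyed on the URL scheme, so http:, man:, audiocd: and the rest all go through the same dispatch) can be sketched as a simple handler registry. This illustrates the concept only; the registry and handler names below are hypothetical, not part of the KIO API:

```python
# Conceptual sketch of scheme-based dispatch, as KIO does with IOslaves.
# HANDLERS, ioslave, and open_url are illustrative names, not KIO API.
from urllib.parse import urlparse

HANDLERS = {}

def ioslave(scheme):
    """Register a handler for a URL scheme (decorator)."""
    def register(fn):
        HANDLERS[scheme] = fn
        return fn
    return register

@ioslave("man")
def man_slave(url):
    # e.g. "man:ls" -> render the ls manual page
    return f"render manual page for {url.path.lstrip('/')}"

@ioslave("http")
def http_slave(url):
    return f"fetch {url.geturl()} over HTTP"

def open_url(raw):
    """Route any URL to the handler registered for its scheme."""
    url = urlparse(raw)
    handler = HANDLERS.get(url.scheme)
    if handler is None:
        raise ValueError(f"no IOslave for scheme {url.scheme!r}")
    return handler(url)
```

The point of the design is that the caller (the file manager, the browser pane) never cares which protocol is in use; adding a new protocol means registering one new handler.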
https://en.wikipedia.org/wiki?curid=17392
Key signature In musical notation, a key signature is a set of sharp (♯), flat (♭), and, rarely, natural (♮) symbols placed together on the staff. Key signatures are generally written immediately after the clef at the beginning of a line of musical notation, although they can appear in other parts of a score, notably after a double barline. A key signature designates notes that are to be played higher or lower than the corresponding natural notes and applies through to the end of the piece or up to the next key signature. A sharp symbol on a line or space in the key signature raises the notes on that line or space one semitone above the natural, and a flat lowers such notes one semitone. Further, a symbol in the key signature affects all the notes of one letter: for instance, a sharp on the top line of the treble staff applies to Fs not only on that line, but also to Fs in the bottom space of the staff, and to any other Fs. This convention was not universal until the late Baroque and early Classical period, however; music published in the 1720s and 1730s, for example, uses key signatures showing sharps or flats on both octaves for notes which fall within the staff. An accidental is an exception to the key signature, applying only to the measure and octave in which it appears. Although a key signature may be written using any combination of sharp and flat symbols, the most common series of fifteen homogeneous key signatures—ranging from seven flats to seven sharps, with each successive flat or sharp placed on the note a perfect fifth below or above, respectively, the previous one—is assumed in much of this article. A piece scored using a single diatonic key signature and no accidentals contains notes of at most seven of the twelve pitch classes, which seven are determined by the particular key signature. Each major and minor key has an associated key signature that sharpens or flattens the notes which are used in its scale. 
However, it is not uncommon for a piece to be written with a key signature that does not match its key, for example, in some Baroque pieces, or in transcriptions of traditional modal folk tunes. Later on, this use of a key signature that is theoretically incorrect for a piece as a whole or a self-contained section of a piece became less common (in contrast to brief passages within a piece, which, as they modulate from key to key, often temporarily disagree with the key signature); but it can be found at least as late as one of Beethoven's very late piano sonatas. For example, in his Sonata No. 31 in A♭ major, Op. 110, the first appearance of the Arioso section in the final movement is notated throughout in six flats; but it both begins and ends in A♭ minor and has a significant modulation to C♭ major, and both these keys theoretically require seven flats in their key signature. (The second appearance later in the movement of this same section, a semitone lower, in G minor, uses the correct key signature of two flats.) In principle, any piece can be written with any key signature, using accidentals to correct the pattern of whole and half steps. The purpose of the key signature is to minimize the number of such accidentals required to notate the music. The sequence of sharps or flats in key signatures is generally rigid in modern music notation. This allows musicians to identify the key simply by the number of sharps or flats (which is the same in any clef), rather than their position on the staff. For example, if a key signature has only one sharp, it must be an F♯, which corresponds to a G major or an E minor key. However, 20th-century music contains occasional exceptions to this: where a piece uses an unorthodox or synthetic scale, a key signature may be invented to reflect it. 
This may consist of a number of sharps or flats that are not the normal ones (such as a signature of just C♯ or E♭), or it may consist of one or more sharps combined with one or more flats (such as a signature containing both F♯ and B♭). Key signatures of this kind can be found in the music of Béla Bartók, for example. The effect of a key signature continues throughout a piece or movement, unless explicitly cancelled by another key signature. For example, if a five-sharp key signature is placed at the beginning of a piece, every A in the piece in any octave will be played as A♯, unless preceded by an accidental (for instance, the A in scale 2 illustrated – the next-to-last note – is played as A♯ even though the A♯ in the key signature (the last sharp sign) is written an octave lower). In a score containing more than one instrument, all the instruments are usually written with the same key signature. Exceptions include: The convention for the notation of key signatures follows the circle of fifths. Starting from C major (or equivalently A minor), which has no sharps or flats, successively raising the key by a fifth adds a sharp, going clockwise round the circle of fifths. The new sharp is placed on the new key's leading note (seventh degree) for major keys or supertonic (second degree) for minor keys. Thus G major (E minor) has one sharp, which is on F; then D major (B minor) has two sharps, on F and C, and so on. Similarly, successively lowering the key by a fifth adds a flat, going counterclockwise around the circle of fifths. The new flat is placed on the subdominant (fourth degree) for major keys or submediant (sixth degree) for minor keys. Thus F major (D minor) has one flat, which is on B; then B♭ major (G minor) has two flats (on B and E), and so on. 
Put another way: for key signatures with sharps, the first sharp is placed on F, with subsequent sharps on C, G, D, A, E and B; for key signatures with flats, the first flat is placed on B, with subsequent flats on E, A, D, G, C and F. There are thus 15 conventional key signatures, with up to seven sharps or flats and including the "empty" signature of C major (A minor). Corollaries: The relative minor is a minor third down from the major, regardless of whether it is a "flat" or a "sharp" key signature. The key signatures with seven flats (C♭ major) and seven sharps (C♯ major) are rarely used because they have simpler enharmonic equivalents. For example, the key of C♯ major (seven sharps) is more simply represented as D♭ major (five flats). For modern practical purposes these keys are (in twelve-tone equal temperament) the same, because C♯ and D♭ are enharmonically the same note. Pieces "are" written in these "extreme" sharp or flat keys, however: for example, Bach's Prelude and Fugue No. 3 from Book 1 of The Well-Tempered Clavier, BWV 848, is in C♯ major. The modern musical "Seussical" by Flaherty and Ahrens also has several songs written in these extreme keys. Key signatures can be further extended through double sharps and double flats (for example, a piece in the key of G♯ major can be expressed with a double sharp on F and six sharps on the other six pitches). As with the seven-sharp and seven-flat examples, it is rarely necessary to express music in such keys when simpler enharmonic equivalents can instead be used (in the case of G♯, the same passage could be expressed in A♭ with only four flats). The key signature may be changed at any time in a piece, usually at the beginning of a measure, simply by notating the new signature; although if the new signature has no sharps or flats, a signature of naturals, as shown, is needed to cancel the preceding signature. 
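The circle-of-fifths rule described above is mechanical enough to compute. A short Python sketch covering the 15 conventional major-key signatures (note names are plain ASCII, with "#" and "b" standing in for ♯ and ♭):

```python
# Each fifth above C adds one sharp (F#, C#, G#, ...);
# each fifth below C adds one flat (Bb, Eb, Ab, ...).
SHARP_ORDER = ["F#", "C#", "G#", "D#", "A#", "E#", "B#"]
FLAT_ORDER  = ["Bb", "Eb", "Ab", "Db", "Gb", "Cb", "Fb"]

# Major keys along the circle of fifths, from Cb (7 flats) to C# (7 sharps).
CIRCLE = ["Cb", "Gb", "Db", "Ab", "Eb", "Bb", "F",
          "C", "G", "D", "A", "E", "B", "F#", "C#"]

def key_signature(major_key):
    """Return the list of accidentals in a major key's signature."""
    steps = CIRCLE.index(major_key) - CIRCLE.index("C")
    if steps >= 0:
        return SHARP_ORDER[:steps]   # one sharp per fifth above C
    return FLAT_ORDER[:-steps]       # one flat per fifth below C
```

For example, D major, two fifths above C, gets F♯ and C♯, matching the convention described in the text.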
If a change in signature occurs at the start of a new line on the page, where a signature would normally appear anyway, the new signature is customarily repeated at the end of the previous line to make the change more conspicuous. In traditional use, when the key signature change goes from sharps to flats or vice versa, the old key signature is cancelled with the appropriate number of naturals before the new one is inserted; but many more recent publications (whether of newer music or newer editions of older music) dispense with the naturals and simply insert the new signature. Similarly, when a signature with either flats or sharps in it changes to a smaller signature of the same type, strict application of tradition or convention would require that naturals first be used to cancel just those flats or sharps that are being subtracted in the new signature before the new signature itself is written; but, again, more modern usage often dispenses with these naturals. When the signature changes from a smaller to a larger signature of the same type, the new signature is simply written in by itself, in both traditional and newer styles. At one time it was usual to precede the new signature with a double barline (provided the change occurred between bars and not inside a bar), even if it was not required by the structure of the music to mark sections within the movement; but more recently it has increasingly become usual to use just a single barline. The courtesy signature that appears at the end of a line immediately before a change is usually preceded by an additional barline; the line at the very end of the staff is omitted in this case. If both naturals and a new key signature appear at a key signature change, there is also more recent variation in where the barline is placed (in the case where the change occurs between bars). For example, in some scores by Debussy, in this situation the barline is placed after the naturals but before the new key signature. 
Previously, it would have been more usual to place all the symbols after the barline. In key signatures of five or more sharps or of seven flats, one occasionally encounters variant positions of particular symbols in the key signatures, both of them in the bass clef. The A♯ which is the fifth sharp in the sharp signatures may occasionally be notated on the top line of the bass staff, whereas it is more usually found in the lowest space on that staff. An example of this can be seen in the full score of Ottorino Respighi's "Pines of Rome", in the third section, "Pines of the Janiculum" (which is in B major), in the bass-clef instrumental parts. In the case of seven-flat key signatures, the final F♭ may occasionally be seen on the second-top line of the bass staff, whereas it would more usually appear below the bottom line. An example of this can be seen in Isaac Albéniz's "Iberia": first movement, "Evocación", which is in A♭ minor. Except for C major, key signatures appear in two varieties, "sharp key signatures" ("sharp keys") and "flat key signatures" ("flat keys"), so called because they contain only one or the other. Sharp key signatures consist of a number of sharps between one and seven, applied in this order: F♯, C♯, G♯, D♯, A♯, E♯, B♯. A mnemonic device often used to remember this is "Father Charles Goes Down And Ends Battle", or "Father Christmas Gave Dad An Electric Blanket". The key note or tonic of a piece in a major key is immediately above the last sharp in the signature. For example, one sharp (F♯) in the key signature of a piece in a major key indicates the key of G major, the next note above F♯. (Six sharps, the last one being E♯ (an enharmonic spelling of F), indicate the key of F♯ major, since F has already been sharped in the key signature.) This table shows that each scale starting on the fifth scale degree of the previous scale has one new sharp, added in the order given above. 
"Flat key signatures" consist of one to seven flats, applied as: B♭, E♭, A♭, D♭, G♭, C♭, F♭. The mnemonic device is then reversed for use in the flat keys: "Battle Ends And Down Goes Charles' Father", or "Blanket Exploded And Dad Got Cold Feet". The major scale with one flat is F major. In all other "flat major scales", the tonic or key note of a piece in a major key is four notes below the last flat, which is the same as the second-to-last flat in the signature. In the major key with four flats (B♭, E♭, A♭, D♭), for example, the penultimate flat is A♭, indicating a key of A♭ major. In this case each new scale starts a fifth "below" (or a fourth above) the previous one. A key signature is not the same as a key; key signatures are merely notational devices. They are convenient principally for diatonic or tonal music. The key signature defines the diatonic scale that a piece of music uses without the need for accidentals. Most scales require that some notes be consistently sharped or flatted. For example, the only sharp in the G major scale is F♯, so the key signature associated with the G major key is the one-sharp key signature. However, it is only a notational convenience; a piece with a one-sharp key signature is not necessarily in the key of G major, and likewise, a piece in G major may not always be written with a one-sharp key signature; this is particularly true in pre-Baroque music, when the concept of key had not yet evolved to its present state. In any case, more extensive pieces often change key ("modulate") during contrasting sections, and only sometimes is this change indicated with a change of key signature; if not, the passage in the second key will not have a matching key signature. The Toccata and Fugue in D minor, BWV 538, by Bach has no key signature, leading it to be called the "Dorian", but it is still in D minor; the B♭s that occur in the piece are written with accidentals. Keys which are associated with the same key signature are called relative keys. 
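The two identification rules above (the tonic sits a semitone above the last sharp; the tonic equals the penultimate flat, with F major as the single-flat special case) can be sketched in Python; as before, "#" and "b" stand in for ♯ and ♭:

```python
SHARP_ORDER = ["F#", "C#", "G#", "D#", "A#", "E#", "B#"]
FLAT_ORDER  = ["Bb", "Eb", "Ab", "Db", "Gb", "Cb", "Fb"]
# Semitone above each possible "last sharp" (E# and B# resolve enharmonically).
SEMITONE_UP = {"F#": "G", "C#": "D", "G#": "A",
               "D#": "E", "A#": "B", "E#": "F#", "B#": "C#"}

def major_key(n_sharps=0, n_flats=0):
    """Name the major key for a conventional signature of sharps or flats."""
    if n_sharps:
        return SEMITONE_UP[SHARP_ORDER[n_sharps - 1]]  # above the last sharp
    if n_flats == 1:
        return "F"         # one flat: F major (no penultimate flat to read)
    if n_flats:
        return FLAT_ORDER[n_flats - 2]                 # the penultimate flat
    return "C"             # empty signature
```

So four flats (B♭ E♭ A♭ D♭) names A♭ major, the penultimate flat, exactly as in the worked example in the text.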
When musical modes, such as Lydian or Dorian, are written using key signatures, they are called "transposed modes". Exceptions to common-practice-period use may be found in Klezmer scales, such as Freygish (Phrygian). In the 20th century, composers such as Bartók and Rzewski (see below) began experimenting with unusual key signatures that departed from the standard order. Because of the limitations of the traditional highland bagpipe scale, key signatures are often omitted from written pipe music, which otherwise would be written with two sharps, the usual F♯ and C♯. (In this case the pipes are incapable of playing F♮ and C♮, so it is not necessary to specify the sharps.) The above 15 key signatures only express the diatonic scale, and are therefore sometimes called "standard key signatures". Other scales are written either with a standard key signature and use of accidentals as required, or with a nonstandard key signature. Examples of the latter include the E♭ (right hand), and F♯ and G♯ (left hand) used for the E diminished (E octatonic) scale in Bartók's "Crossed Hands" (no. 99, vol. 4, "Mikrokosmos"), or the B♭, E♭ and F♯ used for the D Phrygian dominant scale in Frederic Rzewski's "God to a Hungry Child". The absence of a key signature does not always mean that the music is in the key of C major / A minor, as each accidental may be notated explicitly as required, or the piece may be modal or atonal. An example is Bartók's Piano Sonata, which has no fixed key and is highly chromatic. If not bound by common practice conventions, microtones can also be notated in a key signature; which microtonal symbols are used depends on the system. The common practice period conventions are so firmly established that some musical notation programs have been unable to show nonstandard key signatures until recently. 
The use of a one-flat signature developed in the Medieval period, but signatures with more than one flat did not appear until the 16th century, and signatures with sharps not until the mid-17th century. When signatures with multiple flats first came in, the order of the flats was not standardized, and often a flat appeared in two different octaves, as shown at right. In the late 15th and early 16th centuries, it was common for different voice parts in the same composition to have different signatures, a situation called a partial signature or conflicting signature. This was actually more common than complete signatures in the 15th century. The 16th-century motet "Absolon fili mi" by Pierre de La Rue (formerly attributed to Josquin des Prez) features two voice parts with two flats, one part with three flats, and one part with four flats. Baroque music written in minor keys often was written with a key signature with fewer flats than we now associate with those keys; for example, movements in C minor often had only two flats (because the A♭ would frequently have to be raised to A♮ in the ascending melodic minor scale, as would the B♭).
https://en.wikipedia.org/wiki?curid=17394
Kutia Kutia or kutya is a ceremonial grain dish with sweet gravy traditionally served by Eastern Orthodox Christians in Ukraine, Belarus and Russia during the Christmas to Feast of Jordan holiday season and/or as part of a funeral feast. The word with a descriptor is also used to describe the eves of Christmas, New Year, and Feast of Jordan days. The word kutia is a borrowing from the Greek κουκκί (bean) or κόκκος (grain). In Ukraine kutia is an essential dish at the Ukrainian Christmas Eve Supper (also known as "Sviata vecheria" or "Svyata vecherya"). It is believed that kutia has been known to Ukrainians' ancestors since prehistoric times. The main ingredients used to make traditional kutia are wheatberries, poppy seeds and honey. At times, walnuts, dried fruit and raisins are added as well. Kutia is a Lenten dish, and no milk or egg products can be used. There are known kutia recipes that use pearl barley instead of wheatberries. Kutia, as a part of the Ukrainian Christmas Eve Supper, is used in a number of rituals performed that night. Kutia is the first of the twelve dishes served at Sviata Vecheria to be tasted. Everyone present must have at least a spoonful of kutia. In the past, the head of the household used kutia to foretell whether the upcoming year's harvest would be plentiful, and to bargain with the forces of nature, asking for good weather. Kolyvo is a Ukrainian ritual dish similar to kutia, but includes no poppy seeds. Kolyvo is served at remembrance services. A dish of boiled grains (usually wheat berries) mixed with honey, nuts, spices, and a few other ingredients is traditional in other countries as well:
https://en.wikipedia.org/wiki?curid=17395
Kid Rock Robert James Ritchie Sr. (born January 17, 1971), better known by his stage names Kid Rock and Bobby Shazam, is an American singer-songwriter, rapper, disc jockey, musician, record producer, and actor. In a career spanning 30 years, Rock's musical style alternates between rock, hip hop, and country. A multi-instrumentalist, he has overseen his own production on nine of his eleven studio albums. Kid Rock started his professional music career as a self-taught rapper and DJ, releasing his debut album "Grits Sandwiches for Breakfast" in 1990 on Jive Records. His subsequent independent releases "The Polyfuze Method" (1993) and "Early Mornin' Stoned Pimp" (1996) saw him developing a more distinctive style, which was fully realized on "Devil Without a Cause" (1998), his breakthrough album, which sold 14 million copies. "Devil Without a Cause" and his subsequent album, "Cocky" (2001), were noted for blending elements of hip hop, country, rock and heavy metal. Starting with his 2010 album "Born Free", the country music style has dominated his musical direction. He has not rapped on his albums since the 2003 album "Kid Rock", with the exception of one track each on "Rock n Roll Jesus" (2007) and "Sweet Southern Sugar" (2017). He was born as Robert James Ritchie on January 17, 1971, in Romeo, Michigan, to father William Ritchie, owner of multiple car dealerships, and mother Susan Ritchie. Ritchie's father owned a six-acre estate and 5,628-square-foot home where Ritchie grew up, regularly helping his family pick apples and caring for their horses. In the 1980s, Ritchie became interested in hip hop, began to breakdance and taught himself how to rap and DJ, while performing in talent shows in and around Detroit. A self-taught musician, Ritchie plays every instrument in his backing band. Kid Rock began his professional music career as a member of the hip hop music group The Beast Crew in the late 1980s. During this time, he met rapper D-Nice. 
That relationship would eventually lead to him becoming the opening act at local shows for Boogie Down Productions. During this time, Kid Rock began a professional association with producer Mike E. Clark, who, after some initial skepticism about the idea of a "white" rapper, found himself impressed with Kid Rock's energetic and well-received performance, in which the artist, using his own turntables and equipment, prepared his own beats to demonstrate his skills for Clark. In 1988, Clark produced a series of demos with Kid Rock that eventually led to offers from six major record labels, including Atlantic and CBS Records. In 1989, Kid Rock became a shareholder in an independent record label that was formed by Alvin Williams and Earl Blunt of EB-Bran Productions, called "Top Dog" Records. Later, that investment would become a 25% ownership stake. With the help of D-Nice, Kid Rock signed with Jive Records at the age of 17, releasing his debut studio album, "Grits Sandwiches for Breakfast", in 1990. According to Kid Rock, the contract with Jive resulted in animosity from fellow rapper Vanilla Ice, who Kid Rock claimed felt that he should have been signed with Jive instead. The album made Kid Rock one of the two biggest rap stars in Detroit in 1990, along with local independent rapper Esham. To promote the album, Kid Rock toured nationally with Ice Cube, D-Nice, Yo-Yo and Too Short; Detroit artist Champtown served as Kid Rock's DJ on this tour. During in-store promotions for the album, Kid Rock met and developed a friendship with local rapper Eminem, who frequently challenged Kid Rock to rap battles. Ultimately, unfavorable comparisons to Vanilla Ice led to Jive dropping Kid Rock, according to Mike E. Clark. In 1992, Kid Rock signed with local independent record label Continuum. Around this time, Kid Rock met local hip hop duo Insane Clown Posse through Mike E. Clark, who was producing the duo. 
While ICP member Violent J disliked Kid Rock's music, he wanted the rapper to appear on ICP's debut album, "Carnival of Carnage", believing the appearance would gain ICP notice, since Kid Rock was a nationally successful artist. Noting that local rapper Esham was paid $500 to appear on ICP's album, Violent J claims that Kid Rock demanded $600 to record his guest appearance, alleging that Esham and Kid Rock had a feud over who was the bigger rapper. Kid Rock showed up to record the song "Is That You?" intoxicated, but re-recorded his vocals and record scratching the following day. In 1993, Kid Rock recorded his second studio album, "The Polyfuze Method", with producer Mike E. Clark, who worked with Kid Rock to help give the album more of a rock-oriented sound than his debut. Kid Rock also began releasing his "Bootleg" cassette series to keep local interest in his music. Later in the year, Kid Rock recorded the EP "Fire It Up" at White Room Studios in downtown Detroit, run by brothers Michael and Andrew Nehra, who were forming the rock-soul band Robert Bradley's Blackwater Surprise. The EP featured the heavy rock song "I Am the Bullgod" and a cover of Hank Williams Jr.'s country song "A Country Boy Can Survive". By 1994, Kid Rock's live performances had mostly been backed by DJs Blackman and Uncle Kracker, but Kid Rock soon began to incorporate more and more live instrumentation into his performances, and formed the rock band Twisted Brown Trucker. After breaking up with his girlfriend, Kid Rock moved engineer Bob Ebeling into his apartment. During a recording session with Mike E. Clark, the producer discovered that Kid Rock could sing when he recorded a reworked cover of Billy Joel's "It's Still Rock and Roll to Me", entitled "It's Still East Detroit to Me", which Clark claims led him to encourage Kid Rock to sing more. During this time, Kid Rock developed animosity towards other Detroit artists, including Insane Clown Posse; according to Mike E. 
Clark, who worked with both artists, Kid Rock was frustrated with ICP's local success, as Kid Rock disliked ICP's music and wanted to become more successful than ICP. Through extensive promoting, including distributing tapes on consignment to local stores and giving away free samplers of his music, Kid Rock developed a following among an audience which DJ Uncle Kracker described as "white kids who dropped acid and liked listening to gangsta rap"; this following included local rapper Joe C, who had been attending Kid Rock concerts as a fan but, upon meeting Kid Rock, was invited to perform on stage as Kid Rock's hype man. Kid Rock's stage presence became honed with the addition of a light show, pyrotechnics, dancers and a light-up backdrop bearing the name "Kid Rock", and 1996 saw the release of his most rock-oriented album to date, "Early Mornin' Stoned Pimp"; the album's title came from Bob Ebeling, who told a sleepless, alcoholic, drug-using Kid Rock, "Dude, you are the early-morning, stoned pimp." According to Kid Rock, who distributed the album himself, "Early Mornin' Stoned Pimp" sold 14,000 copies. Kid Rock developed his stage persona, performing dressed in 1970s pimp clothing with a real, possibly loaded, gun down the front of his pants. Though Kid Rock became known for frequent partying and for using drugs and alcohol, he was primarily focused on increasing his success and fame, placing himself as a businessman first; this drive led to increased success locally. Kid Rock's attorney, Tommy Valentino, increased his stature by helping get articles about Kid Rock and Twisted Brown Trucker written in major publications, including the Beastie Boys' "Grand Royal" magazine; but though his management tried to interest local record labels in his music, the labels said they were not interested in signing a white rapper, to which Valentino told them, "He's not a white rapper. He's a rock star and everything in between." 
In 1997, Jason Flom, head of Lava Records, attended one of Kid Rock's performances and met with Kid Rock, who later gave him a demo containing the songs "Somebody's Gotta Feel This" and "I Got One for Ya", which led to Kid Rock signing with Atlantic Records. As part of his recording deal, Kid Rock received $150,000 from the label. By this time, Kid Rock had fully developed his stage persona and musical style and wanted to make a "redneck, shit-kicking rock 'n' roll rap" album, resulting in his fourth studio album, "Devil Without a Cause", recorded at the White Room in Detroit and mixed at the Mix Room in Los Angeles. Through extensive promotion, including an appearance at the 1999 MTV Video Music Awards (including a performance alongside Aerosmith and Run-DMC) and a performance at Woodstock 1999, "Devil Without a Cause" sold 14 million copies, the album's success spurred by Kid Rock's breakthrough hit single "Bawitdaba". In 2000, Kid Rock was nominated for a Grammy Award for Best New Artist, despite having been active in the music industry for over 10 years. Kid Rock's success, however, was marked by tragedy, with the death of friend and collaborator Joe C. In May 2000, Kid Rock released the compilation album "The History of Rock", led by the single "American Bad Ass". The song, which sampled Metallica's 1991 song "Sad but True", peaked at No. 20 on the mainstream rock chart. Kid Rock joined Metallica on their 2000 Summer Sanitarium Tour along with Korn and System of a Down. Kid Rock and Jonathan Davis filled in on vocals for an injured James Hetfield in Atlanta on July 7, 2000. Kid Rock performed "American Bad Ass" along with the Metallica classics "Sad but True", "Nothing Else Matters", "Fuel" and "Enter Sandman", as well as covers of "Turn the Page" and "Fortunate Son". "The History of Rock" was certified double platinum, and Kid Rock embarked on the History of Rock Tour in 2000 and the American Bad Ass Tour in 2001. 
"American Bad Ass" was nominated for the Grammy for Best Hard Rock Performance in 2001, losing out to Rage Against the Machine's "Guerrilla Radio". His song with Robert Bradley, "Higher", was featured in a TV spot for Gatorade. Kid Rock made his voice acting debut playing himself, alongside Joe C, in "The Simpsons" episode "Kill the Alligator and Run". Kid Rock also appeared in the comedy film "Joe Dirt", starring David Spade. Kid Rock was in the live-action/animated film "Osmosis Jones", voicing a bacterial cell version of himself named "Kidney Rock"; Kid Rock and Joe C had also recorded the song "Cool Daddy Cool" for the film's soundtrack album before Joe C's death. In November 2001, Kid Rock released his fifth studio album, "Cocky". The album became a hit, spurred by the crossover success of the single "Picture", a country ballad featuring Sheryl Crow which introduced Kid Rock to a wider audience and was ultimately the most successful single on the album. In support of the album, Kid Rock performed on the Cocky Tour in 2002 and opened for Aerosmith, alongside Run-DMC, on the Girls of Summer Tour. During this period, Uncle Kracker began his solo career full-time. He was replaced by underground Detroit rapper Paradime. In 2001, Kid Rock filed a lawsuit to gain full control over the Top Dog record label, resulting in his receiving full ownership of the label in 2003. In 2002, Kid Rock covered ZZ Top's "Legs" to serve as WWE Diva Stacy Keibler's theme song; it also appeared on the album "WWF Forceable Entry". The same year, Kid Rock performed alongside Chuck D and Grandmaster Flash in tribute to slain DJ Jam Master Jay. 2003 saw the release of Kid Rock's self-titled sixth album, which shifted his music further away from hip hop; the lead single was a cover of Bad Company's "Feel Like Makin' Love". 
The same year, Kid Rock contributed to the tribute album "I've Always Been Crazy: A Tribute to Waylon Jennings", honoring the late country singer by covering the song "Luckenbach, Texas" in collaboration with country singer Kenny Chesney. In 2004, he performed at the Super Bowl XXXVIII halftime show, in a controversial appearance that spurred criticism from Veterans of Foreign Wars and Senator Zell Miller for wearing an American flag with a slit in the middle as a poncho; Kid Rock was accused of "desecrating" the flag. Kid Rock also appeared on the track "My Name is Robert Too" on American blues artist R. L. Burnside's final studio album, "A Bothered Mind". In September 2005, Kid Rock filled in for Johnny Van Zant, the lead singer of Lynyrd Skynyrd, on the band's hit "Sweet Home Alabama" at a Hurricane Katrina benefit concert. In 2006, Kid Rock stopped displaying the Confederate flag at his concerts. The following year, Kid Rock released his seventh studio album, "Rock N Roll Jesus", which was his first release to chart at No. 1 on the "Billboard" 200, selling 172,000 copies in its first week and going on to sell over 5 million copies. In July 2007, Kid Rock was featured on the cover of "Rolling Stone" magazine for the second time. The album's third single, "All Summer Long", became a global hit, built on a mash-up of Lynyrd Skynyrd's "Sweet Home Alabama" and Warren Zevon's "Werewolves of London". In 2008, Kid Rock recorded and made a music video for the song "Warrior" for a National Guard advertising campaign. In 2010, Kid Rock released his country-oriented eighth studio album, "Born Free", produced by Rick Rubin and featuring guest appearances by Sheryl Crow and Bob Seger. In 2011, Kid Rock was honored by the NAACP, which sparked protests stemming from his past display of the Confederate flag at his concerts. During the ceremony, Kid Rock elaborated on his display of the flag, stating, "[I] never flew the flag with hate in my heart [...] 
I love America, I love Detroit, and I love black people." Kid Rock's publicist announced that 2011 was the year he officially distanced himself from the flag. The following year, Kid Rock performed alongside Travie McCoy and The Roots in honor of the Beastie Boys during the band's induction into the Rock & Roll Hall of Fame. 2012 also saw the release of Kid Rock's ninth studio album, "Rebel Soul"; he said that he wanted the album to feel like a greatest hits album, but with new songs. One of the songs on the album, "Cucci Galore", introduced Kid Rock's alter ego, Bobby Shazam. In 2013, Kid Rock performed on the "Best Night Ever" tour, where he pledged to charge no more than $20 for his tickets. The following year, he moved to Warner Bros. Records, releasing his only album on the label, "First Kiss", which he self-produced. Subsequently, after leaving Warner Bros., Kid Rock signed with the country label Broken Bow Records. In 2015, following the Charleston church shooting, the Michigan chapter of the National Action Network protested outside the Detroit Historical Museum, which was honoring Kid Rock; activists urged Kid Rock to renounce the Confederate flag. Kid Rock wrote an email to Fox News Channel host Megyn Kelly, stating, "Please tell the people who are protesting to kiss my ass". The same day, the National Action Network protested Chevrolet for sponsoring Kid Rock's tour. On July 12, 2017, Kid Rock shared a photo of a "Kid Rock for US Senate" yard sign on Twitter. However, he denied that he was running, citing his upcoming album release and tour. He later clarified that the campaign was a hoax. He donated $122,000, raised by selling "Kid Rock for U.S. Senate" merchandise, to a voter registration group. Also in July, he released two singles from his next album, "Po-Dunk" and "Greatest Show on Earth", on the same day. In November of that year, he released his eleventh studio album, "Sweet Southern Sugar". 
The same year also saw Kid Rock publicly advocate measures against ticket scalpers at his shows by making tickets more affordable for fans. Instead of getting paid for the show, he receives a percentage of concession and ticket sales. In November 2017, Kid Rock fired his publicist, Kirt Webster, after Webster was accused of sexual misconduct. On December 22, 2017, Kid Rock was sued by Ringling Bros. and Barnum & Bailey Circus (which had closed seven months earlier) for using their slogan "Greatest Show on Earth" as the name of his 2018 tour. After the lawsuit, Kid Rock changed the tour's name to "American Rock N' Roll Tour". In January 2018, the National Hockey League announced Kid Rock as the headlining entertainer for its January 28 All-Star Game, sparking negative online responses from hockey fans. Hockey player Jeremy Roenick praised the choice and condemned Kid Rock's critics, saying, "Kid Rock is the most talented musician, I think ever, on the planet, because you can put any instrument in your hand or on your mouth and you can play anything and rock a house and sing any kind of genre." It was also announced that, in March 2018, Kid Rock would perform on Lynyrd Skynyrd's final tour before the Southern rock band retired, alongside Hank Williams Jr., Bad Company, the Marshall Tucker Band and 38 Special. Kid Rock released his first greatest hits album on September 21, 2018. On March 29, 2020, Kid Rock released his first single under the name "DJ Bobby Shazam", entitled "Quarantine", which featured an old-school hip hop sound. The artist stated that all proceeds from the single's sales would go toward fighting COVID-19. Kid Rock's music is noted for its eclectic sound, which draws from genres such as hip hop, rap rock, rap metal, hard rock, heavy metal, Southern rock, country, nu metal, blues, funk and soul. Kid Rock's music has been described by "Pitchfork" as a cross between Run-DMC, Lynyrd Skynyrd and AC/DC. 
MTV compared Kid Rock's songs "I Am the Bullgod" and "Roving Gangster (Rollin')" to a cross between Alice in Chains and Public Enemy. Kid Rock's debut album "Grits Sandwiches for Breakfast" featured a straightforward hip hop sound. With the recording of his follow-up album, "The Polyfuze Method", Kid Rock began to feature more of a rap rock sound; the album served as a crossroads between his hip hop and rock careers, maintaining a strong hip hop sound while introducing rock and roll and country music influences. His third album, "Early Morning Stoned Pimp", featured what MTV described as "a more eclectic collection of funk, rap, soul, and rock". Beginning with "The Polyfuze Method" and "Early Morning Stoned Pimp", Kid Rock utilized sampling of country and rock music to shape his sound. "Devil Without a Cause" saw Kid Rock's sound shift to rap metal, while "Cocky" shifted his sound yet again, featuring more ballads. "Entertainment Weekly" described the album's sound as a "blend of low-rider hip-hop and strip-mall heavy metal". His 2003 self-titled album saw his sound shift once again, being described by critics as hard rock, swamp rock and outlaw country. "Rock n Roll Jesus" and "Born Free" were described as heartland rock. "Born Free", "First Kiss" and "Sweet Southern Sugar" were also noted for having a predominantly country sound. Kid Rock's lyricism ranges from braggadocio to introspection; many of his raps consist of broad, humorous boasting, while other songs in his catalog have dealt with more serious topics, including poverty, war, race relations, interracial dating, abortion and patriotism. Kid Rock's influences include Bob Seger and the Beastie Boys. "Cowboys & Indians" claims that Kid Rock's song "Cowboy" had a major impact on the country music scene; the magazine alleges that artists Jason Aldean and Big & Rich, among others, were influenced by the song's country rap style. 
In eighth grade, Robert James Ritchie began a ten-year on-and-off relationship with a classmate named Kelley South Russell. In summer 1993, Russell gave birth to Ritchie's son, Robert James Ritchie Jr. While living with Russell, the two raised three children, but Ritchie discovered that one of them was not his, which led to the couple splitting in late 1993; Ritchie raised his son as a single father. In both March 1991 and September 1997, Ritchie faced misdemeanor charges stemming from alcohol-related arrests in Michigan. In 2000, "Rolling Stone" reported that Ritchie was dating model Jaime King. Ritchie began dating Pamela Anderson in 2001; they became engaged in April 2002, but ended their relationship in 2003. In 2005, Ritchie was charged with assaulting a DJ in a strip club. In July 2006, Ritchie married Anderson. On November 10, 2006, it was announced that Anderson, who had been pregnant with Ritchie's second child, had miscarried. Seventeen days later, on November 27, 2006, Anderson filed for divorce from Ritchie in Los Angeles County Superior Court, citing irreconcilable differences. Ritchie later claimed that the divorce was due to Anderson openly criticizing his mother and sister in front of his son. In 2006, the California pornographic film company Red Light District attempted to distribute a 1999 sex tape in which Kid Rock and Scott Stapp, singer of the band Creed, are seen partying and receiving oral sex from groupies; both Rock and Stapp filed suit in California courts to stop the tape's distribution. The following year, Ritchie physically fought with Mötley Crüe drummer Tommy Lee, another former spouse of Anderson's, at the 2007 Video Music Awards, and was charged with assault. A month later, Ritchie was arrested and charged with battery after fighting with a Waffle House customer. 
He pleaded nolo contendere ("no contest") to one count, was fined $1,000, and was required to perform 80 hours of community service and complete a six-hour course on anger management. In 2014, Ritchie became a grandfather when his son's girlfriend gave birth to a daughter, Skye. In November 2017, Ritchie became engaged to longtime girlfriend Audrey Berry. In 1989, Ritchie became a shareholder of the independent record label Top Dog Records, formed in 1988 by Alvin Williams and Earl Blunt of EB-Bran Productions; Ritchie's investment in the company gave him 25% ownership. In 2001, Ritchie filed a lawsuit to gain full control over the Top Dog record label, resulting in his receiving full ownership of the label in 2003. Ritchie also founded Kid Rock's Made in Detroit restaurant and bar, which specializes in Southern-style cuisine. In 2013, Ritchie criticized Republican lawmakers in New York for passing laws which made it difficult for him to keep concert ticket prices low. In January 2015, Ritchie was criticized by fans for appearing in a photograph holding up a dead cougar that was killed on a hunting trip with Ted Nugent. 
In September 2016, Ritchie was criticized for allegedly saying "Man, fuck Colin Kaepernick" during a live performance of his song "Born Free". On April 6, 2018, Ritchie was inducted into the Celebrity Wing of the WWE Hall of Fame during the weekend of WrestleMania 34. A philanthropist, Ritchie oversees The Kid Rock Foundation, a charity which raises funds for multiple causes, including campaigns which sent "Kid Rock care packages" to U.S. military personnel stationed overseas. Ritchie is an advocate for affordable concert tickets, and makes an effort to sell tickets to his performances for as little as possible to encourage concert attendance by lower-income consumers and to discourage scalping. Ritchie is an ordained minister, and collects guns. In January 2005, Ritchie performed at the inaugural address of reelected President George W. Bush, sparking criticism from conservative groups due to Ritchie's lyrics. 
In 2007 and 2008, Ritchie toured for the United Service Organizations. On November 30, 2019, an intoxicated Kid Rock went on a rant at his Nashville restaurant, Kid Rock's Big Ass Honky Tonk and Steakhouse, making crude comments about Oprah Winfrey and calling her "weird". This led to accusations of racism, and Kid Rock subsequently decided to close his Detroit restaurant at Little Caesars Arena. Many fans protested to keep the Detroit restaurant open; Kid Rock responded, "it's wise to go where you're celebrated, not tolerated." Regarding political issues, Ritchie is a Republican, although he has routinely described himself as philosophically libertarian, holding socially liberal views on abortion and gay marriage but fiscally conservative views on economics. Ritchie has advocated legalizing and taxing marijuana, cocaine and heroin. Ritchie has also stated, "I don't think crazy people should have guns." Ritchie was a vocal supporter of American military involvement in the Iraq War. Ritchie has met Presidents Bill Clinton, Barack Obama, and Donald Trump while in office. Regarding his political views, Ritchie said, "I have friends everywhere. Democrat, Republican, this that and the other. [...] 
We're all human beings first, Americans second, let's find some common ground and get along," while also stating in the same interview that he wanted "to bodyslam some Democrats." Ritchie supported Bill Clinton and George W. Bush during their presidencies. In 2008, Ritchie supported newly elected President Barack Obama, saying that the president's election was "a great thing for black people." In 2012, Ritchie campaigned for Republican presidential candidate Mitt Romney, who used Ritchie's song "Born Free" as his campaign theme. In 2015, Ritchie publicly endorsed Ben Carson for the Republican nomination for President of the United States in the 2016 election. In February 2016, he voiced approval for Donald Trump's campaign for the same office. In December, Kid Rock sparked controversy for selling vulgar T-shirts supporting Trump at concerts, including one showing a map of the United States which labelled the states that had voted against Trump as "Dumbfuckistan". On July 12, 2017, Ritchie shared a photo of a "Kid Rock for US Senate" yard sign on Twitter. He also launched a website at kidrockforsenate.com, which sold merchandise bearing that inscription. Several weeks later, he wrote a post on his blog stating that he was still "exploring my candidacy", and that, whether or not he ran, he wanted to register people to vote, because "although people are unhappy with the government, too few are even registered to vote or do anything about it." He added that he wanted "to help working class people in Michigan and America all while still calling out these jackass lawyers who call themselves politicians." 
His statements sparked media speculation that he would try to run on the Republican ticket against sitting Michigan senator Debbie Stabenow, as well as enthusiasm from some prominent Republicans, including former New York Governor George Pataki, who wrote on Twitter, "Kid Rock is exactly the kind of candidate the GOP needs right now." In an October 2017 interview with Howard Stern, Ritchie put an end to the speculation, saying that he had never intended to run for Senate, adding rhetorically, "Who couldn't figure that out?". He later clarified that the campaign was a joke that he had started after a Michigan state legislator encouraged him to run for Senate. He expressed surprise at the interest his potential candidacy had received, but also disappointment that some opposed to his candidacy had brought up his previous use of the Confederate flag to label him a racist. He donated the $122,000 he had raised by selling "Kid Rock for U.S. Senate" merchandise to a voter registration group.
https://en.wikipedia.org/wiki?curid=17396
Knaresborough Castle Knaresborough Castle is a ruined fortress overlooking the River Nidd in the town of Knaresborough, North Yorkshire, England. The castle was first built by a Norman baron on a cliff above the River Nidd. There is documentary evidence dating from 1130 referring to works carried out at the castle by Henry I. In the 1170s Hugh de Moreville and his followers took refuge there after assassinating Thomas Becket. In 1205 King John took control of Knaresborough Castle. He regarded Knaresborough as an important northern fortress and spent £1,290 on improvements to the castle. The castle was later rebuilt at a cost of £2,174 between 1307 and 1312 by Edward I and later completed by Edward II, including the great keep. Edward II gifted the castle to Piers Gaveston, and stayed there himself when the unpopular nobleman was besieged nearby at Scarborough Castle. Philippa of Hainault took possession of the castle in 1331, at which point it became a royal residence. The queen often spent summers there with her family. Her son, John of Gaunt, acquired the castle in 1372, adding it to the vast holdings of the Duchy of Lancaster. Katherine Swynford, Gaunt's third wife, obtained the castle upon his death. The castle was taken by Parliamentarian troops in 1644 during the Civil War, and largely destroyed in 1648, not as the result of warfare but because of an order from Parliament to dismantle all Royalist castles. Indeed, many town centre buildings are built of 'castle stone'. The remains of the castle are open to the public and there is a charge for entry to the interior remains. The grounds are used as a public leisure space, with a bowling green and putting green open during summer. It is also used as a performing space, with bands playing most afternoons through the summer. It plays host to frequent events, such as the annual FEVA (Festival of Entertainment and Visual Arts). 
The property is owned by the monarch as part of the Duchy of Lancaster holdings, but is administered by Harrogate Borough Council. Knaresborough Castle has had ravens since 2000, one of which was donated by the Tower of London, and another of which is an African pied crow named Mourdour. In 2018, Mourdour was filmed greeting people at the castle in a Yorkshire accent, saying "Y'alright love?" The video subsequently went viral and was reported by various news broadcasters. The castle, now much ruined, comprised two walled baileys set one behind the other, with the outer bailey on the town side and the inner bailey on the cliff side. The enclosure wall was punctuated by solid towers along its length, and a pair, visible today, formed the main gate. At the junction between the inner and outer baileys, on the north side of the castle, stood a tall five-sided keep, the eastern parts of which have been pulled down. The keep had a vaulted basement and at least three upper storeys, and served as a residence for the lord of the castle throughout the castle's history. The castle baileys contained residential buildings, and some foundations have survived. In 1789, historian Ely Hargrove wrote that the castle contained "only three rooms on a floor, and measures, in front, only fifty-four feet." The upper storey of the Courthouse features a museum that includes furniture from the original Tudor Court, as well as exhibits about the castle and the town. Some of the surviving areas of the castle keep wall also bear impact scars left by bullets fired during the Civil War siege.
https://en.wikipedia.org/wiki?curid=17398
Calligra Calligra Suite is a graphic art and office suite by KDE. It is available for desktop PCs, tablet computers, and smartphones. It contains applications for word processing, spreadsheets, presentations, databases, vector graphics, and digital painting. Calligra uses the OpenDocument format as its default file format for most applications and can import other formats, such as Microsoft Office formats. Calligra relies on KDE technology and is often used in combination with KDE Plasma Workspaces. Calligra's main platform is desktop PCs running Linux, FreeBSD, macOS, and Windows, of which Linux is the best supported system. On desktop systems, the whole range of features is available. Calligra's efforts to create touchscreen-friendly versions are centered on reusable Qt Quick components. For smartphone-like form factors, third-party document viewers that make use of these components are available: Coffice for Android and Sailfish Office for Sailfish OS. The Calligra project shipped Krita Sketch/Gemini and the tablet-focused Plasma Active document viewer with Calligra 2.8. Calligra 2.9 ships Calligra Gemini, an enhanced version of Calligra Active with added document editing features and runtime switching between desktop and touchscreen interfaces. Calligra was created after disagreements within the KOffice community in 2010 between KWord maintainer Thomas Zander and the other core developers. Following arbitration with the community members, several applications were renamed by both parties. Most developers, including all maintainers of particular applications except KWord maintainer Thomas Zander, joined the Calligra project. Three applications (Kexi, Krita and KPlato) and the user interfaces for mobile devices were moved out of KOffice completely and are only available within Calligra. A new application called Braindump was added to Calligra after the split, and KWord was replaced by the new word processor Calligra Words. 
KOffice 2.3, along with subsequent bugfix releases (2.3.1–2.3.3), was still a collaborative effort of both the KOffice and Calligra development teams. According to its developers, this version was stable enough for real use, with Karbon14, Krita and KSpread recommended for production work. The Calligra team began releasing monthly snapshots while preparing for the release of Calligra 2.4. When the first version of the Calligra Suite for Windows was released, the package was labeled as "highly experimental" and "not yet suitable for daily use". The Calligra team originally scheduled the final 2.4 release for January 2012, but problems in the undo/redo feature of Words and Stage required a partial rewrite and caused a delay; Calligra 2.4 was eventually released. Calligra 2.4 launched with two mobile-oriented user interfaces: Calligra Mobile and Calligra Active. Calligra Mobile's development was initiated in summer 2009 and first shown during Akademy / Desktop Summit 2009 by KO GmbH as a simple port of KOffice to Maemo. Later Nokia hired KO GmbH to assist with a full-fledged mobile version, including a touchscreen-friendly user interface, which was presented by Nokia during the Maemo Conference in October 2009. The first alpha version was made available in January 2010. Along with the launch of the Nokia N9 smartphone, Nokia released its own Poppler- and Calligra-based office document viewer under the GPL. "Calligra Active" was launched in 2011 after the Plasma Active initiative to provide a document viewer similar to "Calligra Mobile" but for tablet computers. In December 2012 KDE, KO GmbH, and Intel released Krita Sketch, a variant of Calligra's Krita painting application, for Windows 7 and 8. On 24 March 2013 KDE developer Sebastian Sauer released "Coffice", a Calligra-based document viewer, for Android. Jolla continued Nokia's efforts on a smartphone version, launching Sailfish Office in 2013. 
Sailfish Office reuses the Qt Quick components from Calligra Active. In September 2013 a merger of Krita and Krita Sketch, named Krita Gemini, was launched on Windows 8.1. Development was funded by Intel to promote "2in1" convertible notebooks. On 5 March 2014 Krita Sketch and Gemini were also released as part of Calligra 2.8 for non-Windows platforms. In April 2014 Intel and KO GmbH extended the promotion deal to Gemini versions of Stage and Words. On 28 August 2014 the first snapshot of Calligra Gemini was released by KO GmbH for Windows. On 21 November 2014 KDE announced that Calligra Gemini would officially be released as part of Calligra 2.9. As with Krita, this Gemini release adds a touchscreen interface to Words and Stage, and users can switch between desktop and touch mode at runtime. Calligra Gemini is a continuation of the Calligra Active and Sailfish Office developments, but with added editing capabilities. On 19 October 2014 a Linux version was presented. The koffice.org website was replaced by a placeholder in early September 2012, and KOffice was declared unmaintained by KDE. The koffice.org domain now redirects to Calligra.org. In autumn 2015 Krita was split off into a project independent from Calligra, though the then-current 2.9 versions were still developed as part of Calligra 2.9. Initial reception shortly after the 2.4 release was positive. Linux Pro Magazine Online's Bruce Byfield wrote "Calligra needed an impressive first release. Perhaps surprisingly, and to the development team's credit, it has managed one in 2.4", but also noted that "Words in particular is still lacking features". He concluded that Calligra is "worth keeping an eye on". The German sister publication LinuxUser reviewed Calligra 2.5 in issue 10/2012. Its reception was mostly positive. Negative criticism centered on Words' stability: "During our review no Calligra module was completely free of crashes; however, Words' crashes reached an amount at which we cannot recommend it for general use." 
The reviewer, Thomas Drilling, on the other hand praised Calligra's usability, writing: "The consistent work flow, often stunningly intuitive workflows, and clear menu structure are well received." He then concluded: "The individual modules' quality varies: While Words shows weakness, image editor Krita, spreadsheet application Sheets, and presentation program Stage completely won us over. Flowcharting application Flow allures with its wide range of stencils, which makes drawing flow charts easy." LinuxUser reviewed Calligra 2.6 in issue 3/2013. Reviewer Vincze-Aron Szabo reiterated the positive criticism of Calligra's user interface and noted increased stability of Words compared to Calligra 2.5. Szabo's major point of negative criticism was Author's and Words' handling of long documents, resulting in decreased performance and crashes. The other reviewed components – Plan, Stage, Sheets, and Krita – were praised in terms of stability and intuitiveness. Calligra 2.7 was reviewed by LinuxUser in its October 2013 issue. Thomas Drilling, the reviewer, drew a positive conclusion overall. Among the positive aspects he pointed out were better ".docx" file import than LibreOffice and the number of new features gained in the new version of the suite. The source of negative criticism was once again Words' stability, although Drilling noted improvements in this regard. Network World editor Bryan Lunduke wrote about Calligra 2.8 in March 2014: “Karbon is an astoundingly nice vector design tool, and Flow, a diagramming tool, is incredibly handy from the design point of view as well. […] And Words is a great word processor.” In August 2014 he wrote: “Calligra Suite has become a staple of my workflow even on non-KDE desktops.” Linux Insider also reviewed Calligra 2.8, concluding “Calligra Suite is a solid offering that has grown considerably since branching out from its traditional KOffice roots. It has something for everyone. 
Its tools fill the needs of writers, artists, content designers and office workers.” In 2017, sempreupdate.com.br wrote: “[If you do not] depend on proprietary formats [...] especially .xls, .xlsx and .doc [...] and you use KDE it's worth trying. Yes, regarding LibreOffice Calligra is still two steps behind [but] it also brings small differentials that would be welcomed in LibreOffice.” Calligra is written with dependencies on KDE Frameworks 5 and Qt 5. Older versions depend on KDE Platform 4 and Qt 4, and even older versions of KOffice depend on KDElibs and Qt 3. Despite this, Calligra Suite is released independently of the KDE Software Compilation and the KDE Applications. All components of the Calligra Suite are released under free software licenses and use OpenDocument as their native file format when applicable. The developers of Calligra plan to share as much infrastructure as possible between applications to reduce bugs and improve the user experience. This is done through common technologies like Flake and Pigment. Flake provides a way to handle shapes, which can contain text, images, formulas (via KFormula), charts (via KChart) or other objects, in a consistent way across all applications. The Calligra team also wants to create an OpenDocument library for use in other KDE applications that will allow developers to easily add support for reading and outputting OpenDocument files to their applications. Automating tasks and extending the suite with custom functionality can be done with D-Bus or with scripting languages like Python, Ruby, and JavaScript through the Kross scripting framework.
https://en.wikipedia.org/wiki?curid=17399
Fumimaro Konoe Prince Fumimaro Konoe was a Japanese politician and Prime Minister who presided over Japan's invasion of China in 1937 and the deterioration in relations with the United States and its allies. He also played a central role in Japan's transformation into a totalitarian state by passing the National Mobilization Law and founding the Imperial Rule Assistance Association. Despite Konoe's attempts to resolve tensions with the United States, the rigid timetable imposed on negotiations by the military and his government's inflexibility regarding potential resolution terms set Japan on the path to war. After failing to reach a peace agreement, Konoe resigned as Prime Minister on 18 October 1941, prior to the outbreak of hostilities. However, he remained a close advisor to the Emperor until the end of World War II. Following the end of the war, he committed suicide on 16 December 1945. Fumimaro Konoe was born in Tokyo on 12 October 1891 to the prominent Konoe family, one of the main branches of the ancient Fujiwara clan. The Konoe were the "head of the most prestigious, and highest ranking noble house in the realm". They became independent of the Fujiwara in the 12th century, when Minamoto no Yoritomo divided the Fujiwara into five separate houses (go-sekke). "First among the go-sekke was the Konoe", and Fumimaro was its 29th leader. His younger brother Hidemaro Konoye was a symphony conductor. Konoe's father, Atsumaro, had been politically active, having organized the Anti-Russia Society in 1903. Fumimaro's real mother died shortly after his birth; his father then married her younger sister. Fumimaro was misled into thinking she was his real mother, and found out the truth when he was 12 years old, after his father's death. Fumimaro inherited the family's debt when his father died. Thanks to the financial support of the zaibatsu Sumitomo, which he received throughout his career, and the auction of Fujiwara heirlooms, the family was able to become solvent. 
He studied socialism at Kyoto Imperial University, where he translated Oscar Wilde's "The Soul of Man Under Socialism" into Japanese and met Saionji Kinmochi. Konoe became Saionji's protégé; after graduation he went to Saionji for advice about starting a political career, and worked briefly in the Home Ministry before accompanying his mentor to Versailles as part of the Japanese peace delegation. In December 1918, prior to Versailles, Konoe published an essay titled "Reject the Anglo-American-Centered Peace" (英米本位の平和主義を排す). In it he attacked the Western democracies for hypocritically professing support for democracy, peace, and self-determination while actually undermining these ideals through their own version of racially discriminatory imperialism. He argued that the League of Nations was in fact an effort to institutionalize the status quo, the colonial hegemony of the Western powers. American journalist Thomas Franklin Fairfax Millard translated the essay and published a rebuttal in his journal, "Millard's Review of the Far East". Saionji deemed Konoe's writing to be reckless, but after it became internationally read, Konoe was invited to dinner by Sun Yat-sen, who admired Japan's quick modernization, and the two discussed pan-Asian nationalism. During the Paris peace conference, Konoe was one of the Japanese diplomats who proposed the Racial Equality Proposal for the Covenant of the League of Nations. When the Racial Equality Clause came up before the committee, it received the support of Japan, France, Serbia, Greece, Italy, Brazil, Czechoslovakia, and China. However, U.S. President Woodrow Wilson overturned the vote, declaring that the clause needed unanimous support. Konoe took the rejection of the Racial Equality Clause very badly, and was afterwards known to have held a grudge against the white people who, he felt, had humiliated Japan by rejecting it. 
Upon his return to Japan he published a booklet describing his travels in France, England, and America, his anger at the rising anti-Japanese sentiment he encountered there, and the government policies that discriminated against Japanese immigration to America; he also described China as a rival to Japan in international relations. Before Konoe left for Europe he had fathered an illegitimate child with the geisha Kiku. In 1916, while at university, Fumimaro took his father's seat in the House of Peers. After his return from Europe he was aggressively recruited by the most powerful political faction of Japan's budding Taisho democracy of the 1920s, the "kenkyukai", led by Yamagata Aritomo, which he joined in September 1922. The "kenkyukai" was a conservative, militaristic faction, generally opposed to democratic reform. The opposing faction was the "seiyukai", led by Hara Kei, which drew its strength from the lower house. Eventually the "seiyukai" was able to gain Yamagata's support, and Hara became premier in 1918. Konoe believed the House of Peers should stay neutral in factional party politics, fearing that the peerage would have its privileges restricted if it was seen as too partisan. He therefore supported Hara's "seiyukai" government, as did most of the "kenkyukai". However, by 1923 the "seiyukai" had split into two factions and could no longer control the government. As different factions rose to control the government, Konoe supported universal male suffrage during the premiership of Kato Komei and his party, the "kenseikai", to forestall any serious curtailment of the privileges of the nobility by his government. He committed to universal male suffrage because he believed it was the best way to channel popular discontent into the existing political system, thereby reducing the chances of violent revolution. As the House of Peers became allied with different political factions in the lower house, Konoe left the "kenkyukai" in November 1927. 
As with his position on the nobility, he believed that the emperor should not take political positions, since in his eyes doing so would diminish imperial prestige, undermine the unifying power of the throne, expose the emperor to criticism, and potentially undermine domestic tranquillity. His greatest fear in this period of rapid industrialization became the threat of left-wing revolution, facilitated by the petty factionalism of Taisho democracy's political factions. He saw the peerage as a bulwark of stability committed to tranquillity, harmony, and the maintenance of the status quo; its function was to restrain the excesses of the elected government, but its power was to be used sparingly. The Japanese Home Ministry was extremely powerful: it was in charge of the police, elections, public works, Shinto shrines, and land development. It was also abused to influence elections in favour of the ruling party. Konoe entered into an alliance with important Home Ministry officials, despite having once believed such alliances to be beneath the dignity of a nobleman; the most important among these officials was Yoshiharu Tazawa, whom he met after becoming managing director of the Japan Youth Hall ("Nippon Seinenkan") in 1921. Konoe and these officials saw the influence of local "meiboka" political bosses as a threat to Japan's political stability: universal suffrage had opened the vote to the peasantry, which was undereducated and controlled by these local bosses, who relied on pork-barrel politics. The officials also shared Konoe's concern about party influence within the Home Ministry, which had seen great turnover mirroring the political upheaval in the Diet. Konoe's association with the Youth Hall began two months after the publication of an article in July 1921, in which he stressed educating the electorate in political wisdom and morality, and lamented that schooling only taught youth to accept ideas passively from their superiors. 
The Youth Corps ("Seinendan") was thereafter created to foster a moral sense of civic duty among the people, with the overall purpose of destroying the "meiboka" system. In 1925 Konoe and these officials formed the Alliance for a New Japan ("Shin Nippon Domei"), which endorsed the concept of representative government but rejected the value of parties and local village bosses, instead advocating that new candidates from outside the parties run for office. They also created the Association for Election Purification ("Senkyo Shukusei Domeikai"), an organization whose purpose was to circumvent and weaken pork-barrel local politics by supporting candidates who were not beholden to "meiboka" bosses. The alliance even formed a political party (the "meiseikai"), which was unable to secure backing and was dissolved in 1928, within two years of its formation. In the 1920s Japanese foreign policy was largely in line with Anglo-American policy, the Treaty of Versailles, and the Washington Naval Conference treaty, and there was agreement among the great powers on the establishment of an independent Chinese state. A flourishing party system controlled the cabinet in alliance with industry. The Great Depression of the 1930s, the rise of Soviet military power in the east, further insistence on limitations to Japanese naval power, and increased Chinese resistance to Japanese aggression in Asia marked the abandonment of Japanese cooperation with the Anglo-American powers. The Japanese government began to seek autonomy in foreign policy, and as the sense of crisis deepened, unity and mobilization became overriding imperatives. Konoe assumed the vice presidency of the House of Peers in 1931. In 1932 the political parties lost control of the cabinet; henceforth cabinets were formed by alliances of representatives from the political elites and military factions, and the government increased its suppression of the political parties and what remained of the left wing as Japan mobilized its resources for war. 
Konoe ascended to the presidency of the House of Peers in 1933, his efforts in the mid-1930s focusing on political mediation among elite political factions, elite policy consensus, and national unity. He sent his eldest son Fumitaka to study in the US, at Princeton, wishing to prepare him for politics and make him an able proponent of Japan in America. Fumimaro himself had not been educated abroad, owing to his father's poor financial situation, although most of his elite contemporaries were. He visited Fumitaka in 1934 and was shocked by rising anti-Japanese sentiment; the experience deepened his resentment of the US, which he perceived as selfish and racist, and he additionally blamed the US for its failure to avert economic disaster. In a speech in 1935 Konoe said that the "monopolization" of resources by the Anglo-American alliance must end and be replaced by an "international new deal" to help countries like Japan take care of their growing populations. Konoe had remained consistent since Versailles: he saw Japan as the equal and the rival of the western powers, believed that Japan had a right to expansion in China, equated expansion with survival, and still held that the "Anglo-American powers were hypocrites seeking to enforce their economic dominance of the world while denying Japan the right to survive as a great power". Despite his tutelage under the liberal-leaning Saionji Kinmochi, his study of socialism at university, and his support of universal suffrage, he seemed to have a contradictory attraction to fascism, which angered and alarmed the ageing "genro"; for instance, he was reported to have dressed as Hitler at a costume party before Saionji's daughter was married in 1937. Despite these misgivings Saionji nominated Konoe to the emperor, and in June 1937 Konoe became Prime Minister. 
Konoe spent the short time between then and the war with China attempting to secure pardons for the ultra-nationalist leaders of the 26 February incident, leaders who had attempted to assassinate his mentor Saionji. Konoe retained the military and legal ministers from the previous cabinet upon assuming the premiership, and refused to take ministers from the political parties; he was not interested in resurrecting party government. One month later, Japanese troops clashed with Chinese troops near Peking in the Marco Polo Bridge Incident. A consensus emerged among the Japanese military leadership that the nation was not ready for war with China, and a truce was made on 11 July. The ceasefire was broken by 20 July, after Konoe's government sent more divisions to China, causing full-scale war to erupt. In November 1937 Konoe instituted a new system of joint conferences between the civil government and the military, called liaison conferences. In attendance at these liaison meetings were the prime minister, the foreign minister, the ministers of the army and navy, and their chiefs of general staff. This arrangement resulted in an imbalance in favor of the military, since each member in attendance had an equal say in policymaking. Prior to the capture of Nanjing, Chiang Kai-shek attempted to negotiate through the German ambassador in China, but Konoe rejected the overture. Shortly after the Nanjing massacre, Konoe issued a statement in January 1938 declaring that "Kuomintang aggression had not ceased despite its defeat", that it was "subjecting its people to great misery", and that Japan would no longer deal with Chiang; six days later he gave a speech in which he blamed China for the continued conflict. After taking Nanking, the Japanese Army was doubtful of its ability to advance up the Yangtze river valley, and favoured taking up a German offer of mediation to end the war with China. 
Konoe, by contrast, was not interested in peace, and instead chose to escalate the war by suggesting deliberately humiliating terms that he knew Chiang Kai-shek would never accept, in order to win a "total victory" over China. In January 1938, Konoe's government announced that it would no longer deal with Chiang, but would await the development of a new regime. When later asked for clarification, Konoe said he meant more than mere non-recognition of Chiang's regime: Japan "rejected it" and would "eradicate it". The American historian Gerhard Weinberg wrote of Konoe's escalation of the war: "The one time in the decade between 1931 and 1941 that the civilian authorities in Tokyo mustered the energy, courage and ingenuity to overrule the military on a major peace issue they did so with fatal results-fatal for Japan, fatal for China, and for Konoe himself". Japan had lost a large amount of its gold reserves by late 1937 due to its trade imbalance, and Konoe believed that a new economic system geared toward the exploitation of northern China's resources was the only way to stop the economic deterioration; he rejected the US "open door" policy, as he had since Versailles, but left open the possibility of western interests in southern China. In a declaration on November 3, 1938, Konoe said that Japan sought a new order in east Asia, that Chiang no longer spoke for China, that Japan would reconstruct China without help from foreign powers, and that a "tripartite relationship of . . . Japan, Manchukuo, and China" would "create a new culture, and realize close economic cohesion throughout east Asia". In April 1938 Konoe and the military pushed a National Mobilization Law through the Diet, which declared a state of emergency, allowed the central government to control all manpower and materiel, and rationed the flow of raw materials into the Japanese market. Japanese victories continued at Xuzhou, Hankow, Canton, Wuchang, and Hanyang – but still the Chinese kept on fighting. 
Konoe resigned in January 1939 and was appointed chairman of the Privy Council. He left the war that he had had a large part in making to be finished by someone else, which bewildered the Japanese public, who had been told that the war was an endless series of victories. Kiichirō Hiranuma succeeded him as Prime Minister. Konoe was awarded the 1st class of the Order of the Rising Sun in 1939. Due to dissatisfaction with the policies of Prime Minister Mitsumasa Yonai, the Japanese Army demanded Konoe's recall as Prime Minister: Yonai had refused to align Japan with the Nazis, and in response army minister Shunroku Hata resigned and the army refused to nominate a replacement. On 23 June, Konoe resigned his position as Chairman of the Privy Council, and on 16 July 1940 the Yonai cabinet resigned and Konoe was appointed Prime Minister. One of his first moves was to launch the League of Diet Members Supporting the Prosecution of the Holy War, to counter opposition from politicians such as deputy Saitō Takao, who had spoken against the Second Sino-Japanese War in the Diet on 2 February. Konoe was recalled after Saionji, shortly before his death later that year, endorsed him one last time. Upon his return Konoe set out to end the war with China and, by way of the New Order Movement, to remove the political parties from control of the government. Konoe succeeded in dissolving the political parties that year, thereby aiding the pro-war factions in the military; he deemed the parties too liberal and divisive. The Imperial Rule Assistance Association (IRAA) was created in 1940 under Konoe as a wartime mobilization organization – ironically, in alliance with the local "meiboka", since their cooperation was required to mobilize the rural population. 
Konoe's government pressured the political parties to dissolve into the IRAA. He resisted calls to form a political party akin to the Nazi Party, believing it would revive the political strife of the 1920s; instead he worked to promote the IRAA as the sole political order, additionally believing that becoming the head of a political party would be beneath the dignity of a nobleman. The Japanese invasion of French Indochina had been planned by the army before Konoe was recalled as prime minister. The invasion would secure resources needed to wage war with China, cut off the western supply of the Kuomintang armies, put the Japanese military in a strategic location from which to threaten more territory, and, it was hoped, intimidate the Dutch East Indies into supplying Japan with oil. The US responded with the Export Control Act and increased aid to Chiang. Despite this response, foreign minister Yosuke Matsuoka signed the Tripartite Pact on 27 September 1940, over the objection of some of Konoe's advisors, including the former Japanese ambassador to the US Kikujiro Ishii. In a 4 October press conference Konoe said that the US should not misunderstand the intentions of the tripartite powers and should help them to build a new world order; he added that if the US did not end its provocative actions and deliberately chose to misunderstand the actions of the tripartite powers, there would be no option left but war. In November 1940 Japan signed the Sino-Japanese treaty with Wang Jingwei, a former disciple of Sun Yat-sen and the head of a rival Kuomintang government in Nanjing. Konoe's government did not relinquish all held territory to Wang's government and undercut its authority, and it was widely seen as an illegitimate puppet. In December 1940 the British reopened the Burma Road and lent 10 million pounds to Chiang's Kuomintang. Konoe recommenced negotiations with the Dutch in January 1941 in an attempt to secure an alternative source of oil. 
In February 1941 Konoe chose Admiral Nomura as Japanese ambassador to the US. Matsuoka and Stalin signed the Soviet–Japanese Neutrality Pact in Moscow on 13 April 1941, which made it clear that the Soviets would not help the allies in the event of war with Japan. On 18 April 1941 word arrived from Nomura of a diplomatic breakthrough: a draft of understanding between the US and Japan. The basis of this agreement had been drafted by two American Maryknoll priests, James Edward Walsh and James M. Drought, who had met with Roosevelt through the postmaster general, Frank C. Walker. The outline of the proposal, which had been drafted in consultation with the banker Tadao Ikawa, Colonel Hideo Iwakura, and Nomura, included American recognition of Manchukuo, the merging of Chiang's government with the Japanese-backed Reorganized National Government of China, normalization of trade relations, withdrawal of Japanese troops from China, mutual respect for Chinese independence, and an agreement that Japanese immigration to the United States would proceed on the basis of equality with other nationals, free from discrimination. When Matsuoka returned to Tokyo a liaison conference was held, during which he voiced his opposition to the draft of understanding, believing it would betray their Nazi allies and that Japan should let Germany see the draft; he then left the meeting citing exhaustion. Konoe, instead of forcing the issue, also retreated to his villa claiming a fever. Matsuoka pushed for an immediate attack on British Singapore and began to openly criticise Konoe and his cabinet; it came to be suspected that he wanted to replace Konoe as prime minister. Matsuoka changed the US draft into a counteroffer which essentially gutted most of the Japanese concessions in regard to China and expansion in the Pacific, then had Nomura deliver it to Washington. 
On Sunday, 22 June 1941, Hitler broke the Molotov-Ribbentrop pact by invading the Soviet Union. Coinciding with the invasion, Cordell Hull delivered another amendment of the draft of understanding to the Japanese, but this time there was no recognition of a Japanese right to control Manchukuo, and the new draft completely rejected any Japanese right of military expansion in the Pacific. Hull included a statement which said, in summary, that as long as Japan was allied to Hitler an agreement would be next to impossible to achieve. He did not specifically mention Matsuoka, but it was implied that the foreign minister would have to be removed; Matsuoka was by then advocating an immediate attack on the Soviet Union, and did so directly to the emperor. Konoe was forced to apologize to the emperor and assure him that Japan was not about to go to war with the Soviet Union. Matsuoka was convinced that Barbarossa would be a quick German victory, and he now opposed attacking Singapore because he believed it would provoke war with the western allies. After a series of liaison conferences at which Matsuoka argued forcefully in favour of an attack on the Soviet Union and against further expansion southward, the decision was made to invade and occupy the southern half of French Indochina, which was formalized at an imperial conference on 2 July. Included in this imperial conference resolution was a statement that Japan would not flinch from war with the US and Britain if necessary. Beginning on 10 July, Konoe held a series of liaison conferences to discuss the Japanese response to Hull's latest amendment to the draft of understanding. It was decided that a reply would not be given until the Japanese takeover of southern Indochina was complete, in the hope that if it went peacefully the US might be convinced to tolerate the occupation without intervention. 
On 14 July Matsuoka, despite illness, drafted a response which said Japan would not abandon the tripartite pact and attacked Hull's statement, which had been aimed largely at him; the next day he sent the response to Germany for approval. Sending the draft to the Germans without the cabinet's permission was the final straw: on 16 July Konoe and his entire cabinet resigned en masse and reformed the government without Matsuoka, who did not attend the emergency cabinet meeting due to illness. Konoe's third government was formally created on 18 July, with admiral Teijiro Toyoda as foreign minister. The Roosevelt administration hoped that Matsuoka's dismissal meant Japan was standing down from continued aggressive action. These hopes were dashed when the French government, after being threatened with military action, allowed the Japanese army to occupy all of French Indochina on 22 July. Two days later the US cut off negotiations and froze Japanese assets, with the British, Dutch, and Canadian governments following suit shortly thereafter. The same day Roosevelt cut off negotiations, he met with Nomura and told the ambassador that if Japan would agree to pull out of Indochina and to its being granted a status of neutrality, Japanese assets could be unfrozen. Roosevelt implied that Japanese expansion in China would be tolerated, but that Indochina was a red line; he also expressed how disturbed he was that Japan could not see that Hitler was bent on world domination. Konoe did not take aggressive action on Roosevelt's offer; he was not able to restrain the militarists, led by Hideki Tojo, who as minister of war regarded the seizure as irreversible because it had been approved by the emperor. On 28 July the Japanese began to formally occupy southern Indochina. In response, on 1 August the US embargoed oil exports to Japan. Finding a replacement source of petroleum was paramount, as the US had supplied 93% of Japan's oil in 1940. 
Navy chief of staff Osami Nagano informed the Emperor that Japan's oil stockpiles would be completely depleted in two years. Konoe's cabinet had failed to foresee that the US would embargo oil as a result of the occupation of southern French Indochina. On 1 August Hachiro Arita wrote Konoe a letter telling him that he should not have let the military occupy southern Indochina while negotiations with the US were still ongoing; Konoe responded that the ships had already been dispatched and could not turn back in time, and that all he could do was pray for "divine intervention". On 6 August Konoe's government announced that it would only pull out of Indochina when the war in China was concluded, rejected Roosevelt's neutralization proposal, but promised not to expand further and asked for US mediation in ending the war in China. On 8 August Konoe requested, through Nomura, a meeting with Roosevelt. The suggestion came from Kinkazu Saionji, the grandson of his deceased mentor Saionji Kinmochi, who advised Konoe through a monthly informal breakfast club at which Konoe consulted civilian elites about policy. Hotsumi Ozaki, a friend and advisor to Konoe, was a member of this same breakfast club; he was also a member of Richard Sorge's Soviet spy ring. Nomura met with Roosevelt and told him about Konoe's summit proposal; after condemning Japanese aggression in Indochina, Roosevelt said he was open to the meeting and suggested they could meet in Juneau, Alaska. On 3 September a liaison conference was held at which it was decided that Konoe would continue to seek peace with Roosevelt, but that at the same time Japan would commit to war if a peace agreement did not materialize by mid-October, and that Japan would not abandon the tripartite pact. 
Konoe, Saionji, and his supporters had drafted a proposal which emphasized a willingness to withdraw troops from China; however, Konoe did not introduce this proposal, but instead acceded to a proposal from the foreign ministry, the difference being that the foreign ministry's proposal was conditioned on an agreement being reached between China and Japan before troops would be withdrawn. On 5 September Konoe met the emperor, together with chiefs of staff General Hajime Sugiyama and Admiral Osami Nagano, to inform him of the cabinet's decision to commit to war in the absence of a diplomatic breakthrough. Alarmed, the emperor asked what had happened to the negotiations with Roosevelt and asked Konoe to change the emphasis from war to negotiation; Konoe replied that that would be politically impossible. The emperor then asked why he had been kept in the dark about these military preparations, and questioned Sugiyama about the chances of success of an open war with the Occident. After Sugiyama answered positively, Hirohito scolded him, remembering that the Army had predicted that the invasion of China would be completed in only three months. On 6 September the emperor approved the cabinet's decision at an imperial conference, after being given assurance by the two chiefs of staff that diplomacy was the primary emphasis, with war only as a fallback option in the event of diplomatic failure. That same evening, Konoe arranged a dinner in secrecy with the US ambassador to Japan, Joseph Grew. (On 15 August, Hiranuma Kiichiro, a member of his cabinet and a previous prime minister, had been shot six times by an ultra-nationalist because he was seen as too close to Grew.) Konoe told Grew that he was prepared to travel to meet Roosevelt at a moment's notice, and Grew then urged his superiors to advise Roosevelt to accept the summit proposal. 
The day after the imperial conference Konoe arranged a meeting between Prince Naruhiko Higashikuni and army minister Tojo, in an attempt to bring the war hawk into line. Higashikuni told Tojo that since the emperor and Konoe favoured negotiation over war, the army minister should too, and that he should quit if he could not follow a policy of non-confrontation. Tojo replied that if the western encirclement of Japan were accepted, Japan would cease to exist; he believed that even if there was only a small chance of winning a war with the US, Japan must prepare for it and wage it rather than be encircled and destroyed. On 10 September Nomura met with Hull and was told that the latest Japanese offer was a non-starter, and that Japan would have to make concessions in regard to China before the summit meeting could take place. On 20 September a liaison meeting passed a revised proposal that actually hardened the conditions for a withdrawal from China. At the liaison conference of 25 September, sensing that summit negotiations were stalling, Tojo and the militarists pressed the cabinet to commit to an actual deadline for war of 15 October. After this meeting Konoe told the lord keeper of the privy seal, Koichi Kido, that he was going to resign, but Kido talked him out of it; Konoe then secluded himself in a villa at Kamakura until 2 October, leaving foreign minister Toyoda to take charge of the negotiations in his absence. Toyoda asked ambassador Grew to tell Roosevelt that Konoe would only be able to grant concessions at the summit itself, and could not commit beforehand, owing to the influence of the militarists and the risk that any prior conciliation would be leaked to the Germans in an effort to bring down the Konoe cabinet. Grew argued in favour of the summit to Roosevelt in a communication of 29 September. 
On 1 October, Konoe summoned navy minister Oikawa to Kamakura, where he secured his commitment to cooperate in accepting the American demands, the navy being acutely aware of the long odds of victory in the event of war with the US. Oikawa returned to Tokyo and seemed to secure the cooperation of navy chief of staff Nagano; together with foreign minister Toyoda, they formed a potential majority in the next liaison conference. On 2 October Hull delivered to Nomura a statement setting out the preconditions for a summit meeting; Hull made it clear that the Japanese army would have to demonstrate that it was going to pull troops out of French Indochina and China. At the liaison conference of 4 October, Hull's response was still being processed and could not be fully discussed. Nagano changed his position and now agreed with the army, advocating a deadline for war; Konoe and Oikawa were largely silent and did not try to bring him back to the side of negotiation, and a final decision was further postponed. The army and the navy were in opposition to each other and held separate high-level meetings, each respectively confirming its resolve to either go to war or pull back from the brink; however, Nagano remained unwilling to openly confront the army, while Oikawa did not want to take the lead as the only member of the liaison conference to oppose war. Konoe met privately with Tojo twice, on 5 and 7 October, in a failed attempt to convince him to agree to a troop withdrawal and to take the war option off the table. 
In the 7 October meeting Konoe told Tojo that "military men take wars too lightly". Tojo's response was that "occasionally one must gather up enough courage, close one's eyes and jump off the platform of the Kiyomizu". Konoe responded that that might be fine for the individual, "but if I think of the national polity that has lasted twenty six hundred years and of the hundred million Japanese belonging to this nation, I, as a person in the position of great responsibility, cannot do such a thing". The next day Tojo met with Oikawa and showed some doubt: he told him that it would be a betrayal of those who had already died in the war for the army to pull troops out of China, but that he was also worried about the many more who would die in an eventual war with the US, and that he was considering a troop withdrawal. Konoe held a meeting on 12 October with the military ministers Tojo and Oikawa and the foreign minister Toyoda, which became known as the Tekigaiso conference. Konoe began by saying that he had no confidence in the war they were about to wage and would not lead it, but neither Oikawa nor Konoe was willing to take the lead in demanding that the army agree to take the war option off the table. Toyoda was the only member willing to declare that the imperial conference of 6 September had been a mistake, implying that the war option should be taken off the table, while Tojo forcefully argued that an imperial resolution could not be violated. On 14 October, one day before the deadline, Konoe and Tojo met one last time. Konoe attempted to impress upon Tojo the need to stand down from war and accede to US demands for a military withdrawal from China and Indochina; Tojo ruled a troop withdrawal out of the question. 
In the cabinet meeting that followed, Tojo declared that the decision of the imperial conference had been thoroughly deliberated, that hundreds of thousands of troops were being moved south as they spoke, that if diplomacy were to continue they must be sure it would result in success, and that the imperial edict had specifically declared that negotiations must bear fruit by early October (which meant the deadline had already passed). After this conference Tojo went to see the lord keeper of the privy seal, Kido, to push for Konoe's resignation. That same evening Tojo sent Teiichi Suzuki (at that time the head of the cabinet planning board) to Konoe with a message urging him to resign, stating that if he did so Tojo would endorse Prince Higashikuni as the next prime minister. Suzuki told Konoe that Tojo now realized the navy was unwilling to admit that it could not fight the US; he also told Konoe that Tojo believed the current cabinet must resign and bear the responsibility for wrongfully calling for the imperial edict, and that only someone of Higashikuni's imperial background could reverse it. The next day, 15 October, Konoe's friend and advisor Hotsumi Ozaki was exposed and arrested as a Soviet spy. Konoe resigned on 16 October 1941, one day after having recommended Prince Naruhiko Higashikuni to the Emperor as his successor. Two days later, Hirohito chose General Tōjō as Prime Minister. In 1946, Hirohito explained this decision: "I actually thought Prince Higashikuni suitable as chief of staff of the Army; but I think the appointment of a member of the imperial house to a political office must be considered very carefully. Above all, in time of peace this is fine, but when there is a fear that there may even be a war, then more importantly, considering the welfare of the imperial house, I wonder about the wisdom of a member of the imperial family serving [as prime minister]." Six weeks later, Japan attacked Pearl Harbor. 
Konoe justified his resignation to his secretary Kenji Tomita: "Of course His Imperial Majesty is a pacifist and he wished to avoid war. When I told him that to initiate war was a mistake, he agreed. But the next day, he would tell me: 'You were worried about it yesterday but you do not have to worry so much.' Thus, gradually he began to lead to war. And the next time I met him, he leaned even more to war. I felt the Emperor was telling me: 'My prime minister does not understand military matters. I know much more.' In short, the Emperor had absorbed the view of the army and the navy high commands." On 29 November 1941, at a luncheon with the emperor attended by all living former prime ministers, Konoe voiced his objection to war. Upon hearing of the attack on Pearl Harbor, Konoe said, referring to Japan's early military successes: "What on earth? I really feel a miserable defeat coming, this will only last 2 or 3 months." Konoe played a role in the fall of the Tōjō government in 1944. In February 1945, during the first private audience he had been allowed in three years, he advised the Emperor to begin negotiations to end World War II. According to Grand Chamberlain Hisanori Fujita, Hirohito, still looking for a "tennozan" (a great victory), firmly rejected Konoe's recommendation. After the beginning of the American occupation, Konoe served in the cabinet of Prince Naruhiko Higashikuni, the first post-war government. Having refused to collaborate with U.S. Army officer Bonner Fellers in "Operation Blacklist" to exonerate Hirohito and the imperial family of criminal responsibility, he came under suspicion of war crimes. In December 1945, during the last American call for alleged war criminals to report, he took potassium cyanide and committed suicide. His grave is at the Konoe clan cemetery at the temple of Daitoku-ji in Kyoto. His grandson, Morihiro Hosokawa, became prime minister fifty years later.
https://en.wikipedia.org/wiki?curid=17401
Kowtow Kowtow, a word borrowed from "koutou" in Mandarin Chinese ("kau tau" in Cantonese), is an act of deep respect shown by prostration, that is, kneeling and bowing so low as to have one's head touching the ground. In East Asian culture, the kowtow is the highest sign of reverence. It was widely used to show reverence for one's elders, superiors, and especially the Emperor, as well as for religious and cultural objects of worship. In modern times, usage of the kowtow has declined. An alternative Chinese term is "ketou"; however, the meaning is somewhat altered: "kou" () has the general meaning of "knock", whereas "ke" () has the general meaning of "touch upon (a surface)", "tou" () meaning head. The custom probably originated sometime in the Spring and Autumn period or the Warring States period of China's history, because it was established by the time of the Qin dynasty (221 BC – 206 BC). In Imperial Chinese protocol, the kowtow was performed before the Emperor of China. Depending on the solemnity of the occasion, different grades of kowtow would be used. In the most solemn of ceremonies, for example at the coronation of a new Emperor, the Emperor's subjects would undertake the ceremony of the "three kneelings and nine kowtows", the so-called grand kowtow, which involves kneeling from a standing position three times, and each time performing the kowtow three times while kneeling. Immanuel Hsu describes the "full kowtow" as "three kneelings and nine knockings of the head on the ground". As government officials represented the majesty of the Emperor while carrying out their duties, commoners were also required to kowtow to them in formal situations. For example, a commoner brought before a local magistrate would be required to kneel and kowtow. The commoner was then required to remain kneeling, whereas a person who had earned a degree in the Imperial examinations was permitted a seat. 
Since one is required by Confucian philosophy to show great reverence to one's parents and grandparents, children may also be required to kowtow to their elderly ancestors, particularly on special occasions. For example, at a wedding, the marrying couple was traditionally required to kowtow to both sets of parents, in acknowledgement of the debt owed for their nurturing. Confucius believed there was a natural harmony between the body and mind: whatever actions were expressed through the body would be transferred to the mind. Because the body is placed in a low position in the kowtow, the idea is that a feeling of respect naturally arises in the mind; what one does with the body influences the mind. Confucian philosophy held that respect was important for a society, making bowing an important ritual. The kowtow, and other traditional forms of reverence, were much maligned after the May Fourth Movement. Today, only vestiges of the traditional usage of the kowtow remain. In many situations, the standing bow has replaced the kowtow. For example, some, but not all, people would choose to kowtow before the grave of an ancestor, or while making traditional offerings to an ancestor. Direct descendants may also kowtow at the funeral of an ancestor, while others simply bow. During a wedding, some couples may kowtow to their respective parents, though the standing bow is more common today. In extreme cases, the kowtow can be used to express profound gratitude or apology, or to beg for forgiveness. The kowtow remains alive as part of formal induction ceremonies in certain traditional trades that involve apprenticeship or discipleship. For example, Chinese martial arts schools often require a student to kowtow to a master. Likewise, traditional performing arts often also require the kowtow. Prostration is a general practice in Buddhism, and is not restricted to China. 
The kowtow is often performed in groups of three before Buddhist statues and images or tombs of the dead. In Buddhism it is more commonly termed either "worship with the crown (of the head)" (頂禮 ding li) or "casting the five limbs to the earth" (五體投地 wuti tou di), referring to the two arms, two legs and forehead. For example, in certain ceremonies, a person would perform a sequence of three sets of three kowtows, standing up and kneeling down again between each set, as an extreme gesture of respect; hence the term "three kneelings and nine head knockings" (). Also, some Buddhist pilgrims would kowtow once for every three steps made during their long journeys, the number three referring to the Triple Gem of Buddhism: the Buddha, the Dharma, and the Sangha. Prostration is widely practiced in India by Hindus to show the utmost respect to their deities in temples and to parents and elders; in modern times, people show their regard for elders by bowing down and touching their feet. The word "kowtow" came into English in the early 19th century to describe the bow itself, but its meaning soon shifted to describe any abject submission or groveling. The term is still commonly used in English with this meaning, disconnected from the physical act and the East Asian context. The Dutch ambassador Isaac Titsingh did not refuse to kowtow during the course of his 1794–1795 mission to the imperial court of the Qianlong Emperor. The members of the Titsingh mission, including Andreas Everardus van Braam Houckgeest and Chrétien-Louis-Joseph de Guignes, made every effort to conform with the demands of the complex Imperial court etiquette. On two occasions, the kowtow was performed by Chinese envoys to a foreign ruler, specifically the Russian Tsar. T'o-Shih, the Qing emissary to Russia whose mission to Moscow took place in 1731, kowtowed before Tsarina Anna, as per instructions from the Yongzheng Emperor, as did Desin, who led another mission the following year to the new Russian capital at St. Petersburg. 
Hsu notes that the Kangxi Emperor, Yongzheng's predecessor, explicitly ordered that Russia be given a special status in Qing foreign relations by not being included among tributary states, i.e. recognition as an implicit equal of China. The kowtow was often performed in intra-Asian diplomatic relations as well. In 1636, after being defeated by the invading Manchus, King Injo of Joseon (Korea) was forced to surrender by kowtowing three times to pledge tributary status to the Qing Emperor, Hong Taiji. As was customary of all Asian envoys to Qing China, Joseon envoys kowtowed three times to the Qing emperor during their visits to China, continuing until 1896, when the Korean Empire withdrew its tributary status from Qing as a result of the First Sino-Japanese War. The King of the Ryukyu Kingdom also had to kneel three times on the ground and touch his head nine times to the ground (), to show his allegiance to the Chinese emperors.
https://en.wikipedia.org/wiki?curid=17405
Kamchatka Oblast Kamchatka Oblast (, "Kamchatskaya oblast") was, until its incorporation into Kamchatka Krai on July 1, 2007, a federal subject of Russia (an oblast). To the north, it bordered Magadan Oblast and Chukotka Autonomous Okrug. Koryak Autonomous Okrug was located in the northern part of the oblast. Including the autonomous okrug, the total area of the oblast was , encompassing the southern half of the Kamchatka Peninsula. The administrative center of Kamchatka Oblast was the city of Petropavlovsk-Kamchatsky. Kamchatka's natural resources include coal, gold, mica, pyrites, and natural gas. Most of the inhabitants live in the administrative center, Petropavlovsk-Kamchatsky. The main employment sectors are fishing, forestry, tourism (a growing industry), and the Russian military. There is still a large military presence on the peninsula; the home base of Russia's Pacific submarine fleet is across Avacha Bay from Petropavlovsk-Kamchatsky, at the Rybachy base. There are also several air force bases and radar sites in Kamchatka. As of the 2002 All-Russian Population Census, the majority of the oblast's 358,801 inhabitants were Russian (290,108); the largest minorities were Ukrainian (20,870) and Koryak (7,328). The northern part of the peninsula is occupied by Koryak Autonomous Okrug, where around 6,700 Koryaks live; a small number of Evens also live there. The oblast was established on October 20, 1932.
https://en.wikipedia.org/wiki?curid=17407
Kołobrzeg Kołobrzeg (pronounced ; , ; ) is a city in the West Pomeranian Voivodeship in north-western Poland with about 47,000 inhabitants (). Kołobrzeg is located on the Parsęta River on the south coast of the Baltic Sea (in the middle of the section divided by the Oder and Vistula Rivers). It has been the capital of Kołobrzeg County in the West Pomeranian Voivodeship since 1999, and was in the Koszalin Voivodeship from 1950 to 1998. During the Early Middle Ages, Slavic Pomeranians founded a settlement at the site of modern Budzistowo. Thietmar of Merseburg first mentioned the site as "Salsa Cholbergiensis". Around the year 1000, when the city was part of Poland, it became the seat of the Diocese of Kołobrzeg, one of the five oldest Polish dioceses. During the High Middle Ages, the town was expanded with an additional settlement, inhabited by German settlers and chartered with Lübeck law, a few kilometers north of the stronghold; this settlement eventually superseded the original Slavic one. The indigenous Slavic population later faced discrimination from the Germans. The city joined the Hanseatic League. Within the Duchy of Pomerania, the town was the urban center of the secular reign of the prince-bishops of Cammin and their residence throughout the High and Late Middle Ages. As part of Brandenburgian Pomerania during the Early Modern Age, it withstood Polish and Napoleonic troops in the Siege of Kolberg. From 1815, it was part of the Prussian province of Pomerania. In the late 19th century Kolberg became a popular spa town on the Baltic Sea. In 1945, Polish and Soviet troops captured the town, and the remaining German population which had not fled the advancing Red Army was expelled in accordance with the Potsdam Agreement. Kołobrzeg, now part of post-war Poland and devastated in the preceding Battle of Kolberg, was rebuilt, but lost its status as the regional center to the nearby city of Koszalin. 
"Kołobrzeg" means "by the shore" in Polish; "koło" translates as "by" and "brzeg" means "coast" or "shore". The Kashubian name has a similar etymology. Polish and Kashubian linguists in the 19th and 20th centuries used the original name, "Cholberg", to reconstruct the modern name. After German settlement, "Cholberg" evolved into (). According to Piskorski (1999) and Kempke (2001), Slavic and Lechitic immigration reached Farther Pomerania in the 7th century. The first Slavic settlements in the vicinity of Kołobrzeg were centered around nearby deposits of salt and date to the 6th and 7th centuries. In the late 9th century, the Pomeranian tribes erected a fortified settlement at the site of the modern part of Kołobrzeg County called Budzistowo, near modern Kołobrzeg, replacing nearby Bardy-Świelubie, a multi-ethnic emporium, as the center of the region. The Parsęta valley, where both the emporium and the stronghold were located, was one of the Pomeranians' core settlement areas. The stronghold consisted of a fortified burgh with a suburbium. The Pomeranians mined salt in salt pans located on two downstream hills. They also engaged in fishing, and used the salt to preserve foodstuffs, primarily herring, for trade. Other important occupations were metallurgy and smithery, based on local iron ore reserves, other crafts such as the production of combs from horn, and, in the surrounding areas, agriculture. Important sites in the settlement were a place for periodic markets and a tavern, mentioned as "forum et taberna" in 1140. In the 9th and 10th centuries, the Budzistowo stronghold was the largest of several in the Parsęta area, and as such is thought to have functioned as the center of the local Pomeranian subtribe. By the turn of the 10th to the 11th century, the smaller burghs in the Parsęta area were given up. 
With the area coming under the control of the Polish Duke Mieszko I, only two strongholds remained and were enlarged: the one at Budzistowo and a predecessor of later Białogard. These developments were most likely associated with the establishment of Polish power over this part of the Baltic coast. In the 10th century the trade in salt and fish led to the development of the settlement into a town. During Polish rule of the area in the late 10th century, the chronicle of Thietmar of Merseburg (975–1018) mentions "salsa Cholbergiensis" as the see of the Bishopric of Kołobrzeg, set up during the Congress of Gniezno in 1000 and placed under the Archdiocese of Gniezno. The congress was organized by the Polish duke Bolesław Chrobry and Holy Roman Emperor Otto III, and also led to the establishment of bishoprics in Kraków and Wrocław, connecting the territories of the Polish state. It was an important event not only in the religious but also in the political dimension of the history of the early Polish state, as it unified and organized medieval Polish territories. The missionary efforts of bishop Reinbern were not successful; the Pomeranians revolted in 1005 and regained political and spiritual independence. In 1013 Bolesław Chrobry withdrew his troops from Pomerania in the face of war with Holy Roman Emperor Henry II. The Polish–German war ended with a Polish victory, confirmed by the 1018 Peace of Bautzen. During his campaigns in the early 12th century, Bolesław III Wrymouth reacquired Pomerania for Poland and made the local "Griffin" dynasty his vassals. The stronghold was captured by the Polish army in the winter of 1107/08, when the inhabitants ("cives et oppidani"), including a duke ("dux Pomeranorum"), surrendered without resistance. A previous Polish siege of the burgh had been unsuccessful; although the duke had fled the burgh, the Polish army was unable to break through the fortifications and the two gates. 
The army had, however, looted and burned the suburbium, which was not, or only lightly, fortified. The descriptions given by contemporary chroniclers suggest that a second, purely military castle may have existed near the settlement, but this is not certain, and archaeological efforts have been unable to locate traces of it. In the 12th-century Polish chronicle "Gesta principum Polonorum", Kołobrzeg was named a significant and "famous city". During the subsequent Christianization of the area by Otto of Bamberg at the behest of Bolesław, a St. Mary's church was built. This marked the first beginnings of German influence in the area. After Bolesław's death, as a result of the fragmentation of Poland, the Duchy of Pomerania became independent, before its dukes became vassals of Denmark and the Holy Roman Empire in the late 12th century. Besides St. Mary's, a St. John's church and a St. Petri's chapel were built. A painting of the town of Kołobrzeg from the 13th century is held in the Museum of Polish Arms in the city. During the Ostsiedlung, a settlement was founded by German settlers a few kilometres from the site of the Slavic/Lechitic one. It was located within the boundaries of today's downtown Kołobrzeg, and some of the inhabitants of the Polish town moved to the new settlement. On 23 May 1255 it was chartered under Lübeck law by Duke Wartislaw III of Pomerania, and more settlers arrived, attracted by the duke. Hermann von Gleichen, the German bishop of Kammin, also supported the German colonisation of the region. The settlers received several privileges and benefits, such as exemption from certain taxes, making it difficult for the indigenous Pomeranian population to compete with the Germans. Henceforth, the nearby former stronghold was turned into a village and renamed "Old Town" (, , ), first documented in 1277 and used until 1945, when it was renamed "Budzistowo". A new St. Mary's church was built within the new town before the 1260s, while St. 
Mary's in the former Pomeranian stronghold was turned into a nuns' abbey. In 1277 St. Benedict's monastery for nuns was founded, which during the Pomeranian Reformation of 1545 was changed into an educational institution for noble Protestant ladies. Already in 1248, the Kammin bishops and the Pomeranian dukes had exchanged the "terrae" of Stargard and Kolberg, leaving the bishops in charge of the latter. When in 1276 they also became the sovereigns of the town, they moved their residence there, while the administration of the diocese was carried out from nearby Köslin (Koszalin). In 1345, the bishops became Imperial immediate dukes in their secular reign. In 1361, the city joined the Hanseatic League. In 1446 it fought a battle against the nearby rival city of Koszalin. When the property of the Bishopric of Kammin was secularized during the Protestant Reformation in 1534, its secular reign, including the Kolberg area, was intermediately ruled by a Lutheran titular bishop before being turned into a "Sekundogenitur" of the House of Pomerania. In the 15th century the city traded with Scotland, Amsterdam and Scandinavia. Beer, salt, honey, wool and flour were exported, while merchants imported textiles from England, southern fruits, and cod liver oil. In the 16th century, the city reached 5,000 inhabitants. The indigenous Slavs in the city were discriminated against, and their rights in trade and crafts were limited, with bans on performing certain professions and holding certain positions in the city; for instance, in 1564 it was forbidden to admit native Slavs to the blacksmiths' guild. During the Thirty Years' War, Kolberg was occupied by imperial forces from 1627 to 1630, and thereafter by Swedish forces. Kolberg, with most of Farther Pomerania, was granted to Brandenburg-Prussia in 1648 by the Treaty of Westphalia and, after the signing of the Treaty of Stettin (1653), was part of the Province of Pomerania. 
It became part of the Kingdom of Prussia in 1701. In the 18th century, trade with Poland declined, while the production of textiles developed. In 1761, during the Seven Years' War, the town was captured, after three successive sieges, by the Russian commander Peter Rumyantsev. At the end of the war, however, Kolberg was returned to Prussia. During Napoleon's invasion of Prussia in the War of the Fourth Coalition, the town was besieged from mid-March to 2 July 1807 by the Grande Armée and by Polish forces drawn from insurgents against Prussian rule (a street named after General Antoni Paweł Sułkowski, who led the Polish soldiers, is located in the present-day city). As a result of forced conscription, Poles were also among the Prussian soldiers during the battle. The city's defense, led by then Lieutenant-Colonel August von Gneisenau, held out until the war was ended by the Treaty of Tilsit. Kolberg became part of the Prussian province of Pomerania in 1815, after the final defeat of Napoleon; until 1872 it was administered within the Fürstenthum District ("Principality District", recalling the area's former special status), and then within Landkreis Kolberg-Körlin. Marcin Dunin, archbishop of Poznań and Gniezno and Roman Catholic primate of Poland, was imprisoned by the Prussian authorities for ten months in 1839–1840 in the city; after his release, he tried to organise a chaplaincy for the many Polish soldiers stationed in Kolberg. In the 19th century the city had a small but active Polish population, which grew over the century to account for 1.5% of the population by 1905. The Polish community funded a Catholic school and the Church of Saint Marcin, where masses in Polish were held (initially only during the season, after about 1890 all year round). 
Kolberg's Jewish community, dating back to 1261, amounted to 528 people in 1887, rising to 580 two years later, and although many moved to Berlin after that date, it still numbered around 500 at the end of the nineteenth century. Between 1924 and 1935, the American-German painter Lyonel Feininger, a tutor at the Staatliches Bauhaus, visited Kolberg repeatedly and painted the cathedral and the environs of the town. In the May 1933 elections, the Nazi Party received by far the most votes, 9,842 out of 19,607 cast. When the Nazis took power in Germany in 1933, the Jewish community in Kolberg comprised 200 people, and the antisemitic repression by Germany's ruling party led several of them to flee the country. A Nazi newspaper, the "Kolberger Beobachter", listed Jewish shops and businesses that were to be boycotted. The Nazis also engaged in hate propaganda against Jewish lawyers, doctors, and craftsmen. At the end of 1935, Jews were banned from working in the city's health spas. During Kristallnacht, the synagogue and Jewish homes were destroyed, and in 1938 the local Jewish cemetery was vandalised, while a cemetery shrine was turned into a stable by German soldiers. In 1938, all Jews in Kolberg, as all over Germany, were renamed in official German documents "Israel" (for males) or "Sarah" (for females). At the beginning of 1939, Jews were banned from attending German schools, and the entire adult Jewish population had its driving licenses revoked. After years of discrimination and harassment, the local Jews were deported by the German authorities to concentration camps in 1940. During World War II the German state brought numerous forced laborers to the city, among them many Poles. The city's economy was converted to military production, especially after the German invasion of the Soviet Union. 
The forced laborers faced everyday harassment and repression; they were forbidden from using telephones and from holding cultural and sports events, and they could not visit restaurants or swimming pools or have contact with the local German population. Poles were only allowed to attend a church mass once a month, and only in the German language. They also had smaller food rations than Germans, and had to wear a sign with the letter "P" on their clothes indicating their ethnic background. Additionally, medical help for Polish workers was limited by the authorities. Arrests and imprisonment for various offences, such as a "slow pace of work" or leaving the workplace, were everyday occurrences. In 1944, the city was designated a fortress, "Festung Kolberg". The 1807 siege was the subject of the last Nazi propaganda film, "Kolberg", commissioned by Joseph Goebbels shortly before the end of the war. It was meant to inspire the Germans with its depiction of the heroic Prussian defence during the Napoleonic Wars. Tremendous resources were devoted to filming this epic, even diverting tens of thousands of troops from the front lines to serve as extras in battle scenes. Ironically, the film was released in the final few weeks of Nazi Germany's existence, when most of the country's cinemas were already destroyed. On 10 February 1945, the German torpedo boat T-196 brought about 300 survivors of the , which had been sunk by the Soviet submarine S-13, to Kolberg. As the Red Army advanced on Kolberg, most of the inhabitants and tens of thousands of refugees from the surrounding areas (about 70,000 were trapped in the Kolberg Pocket), as well as 40,000 German soldiers, were evacuated from the besieged city by German naval forces in Operation Hannibal. Only about two thousand soldiers were left on 17 March to cover the last sea transports. Between 4 and 18 March 1945, there were major battles between Soviet and Polish forces and the German army. 
Because of a lack of anti-tank weapons, German destroyers used their guns to support the defenders of Kolberg until nearly all of the soldiers and civilians had been evacuated. During the fighting, Polish losses were 1,013 dead, 142 missing in action and 2,652 wounded. On 18 March, the Polish Army re-enacted "Poland's Wedding to the Sea" ceremony, which had been celebrated for the first time in 1920 by General Józef Haller. After the battle the city was under Soviet administration for several weeks; the Germans who had not yet fled were expelled, and the city was plundered by Soviet troops. Freed Polish forced laborers remained and were joined by Polish railwaymen from Warsaw, which had been destroyed by the Germans. After World War II the region became part of Poland, under territorial changes demanded by the Soviet Union and the Polish Communist regime at the Potsdam Conference. Most Germans who had not yet fled were expelled from their homes. The town was re-settled by Polish citizens, many of whom were themselves refugees from the regions east of the Curzon line, the Kresy, from where they had been displaced by the Soviet authorities. In 2000 the city business council of Kołobrzeg commissioned a monument called the Millennium Memorial to commemorate "1000 years of Christianity in Pomerania" and as a tribute to Polish-German reconciliation, celebrating the meeting of King Boleslaw I of Poland and King Otto III of Germany at the Congress of Gniezno in the year 1000. It was designed and built by the artist Wiktor Szostalo in welded stainless steel. The two figures sit at the base of a 5-meter cross, cleft in two and held together by a dove bearing an olive branch. It is installed outside the Basilica Cathedral in the city center. Before the end of World War II the town was predominantly German Protestant, with Polish and Jewish minorities. 
Almost all of the pre-war German population fled or was expelled, so that since 1945 Polish Catholics have made up the majority of the population. Around the turn of the 18th to the 19th century an increase in the number of Catholics was observed, because military personnel had been moved from West Prussia to the town; the mother tongue of a number of soldiers serving in the garrison of Kolberg was Polish. Kołobrzeg today is a popular tourist destination for Poles, Germans and, thanks to the ferry connection to Bornholm, Danes. It offers a unique combination of a seaside resort, a health resort, an old town full of historic monuments and tourist entertainment options (e.g. numerous "beer gardens"). The town is part of the European Route of Brick Gothic network. A seaside bike path from Kołobrzeg to Podczele was commissioned on 14 July 2004. The path was financed by the European Union and is intended to become part of a biking route that will ultimately circle the entire Baltic Sea. The path was breached on 24 March 2010 by the encroachment of the sea, associated with the draining of the adjacent Eco-Park marsh area; the government of Poland allocated PLN 90,000 to repair the breach, and the path re-opened within a year. In 2011 it was extended eastwards to connect with Ustronie Morskie. South of Bagicz, some from Kołobrzeg, stands an oak dated in 2000 as the oldest in Poland, 806 years old as of 2008; it was named Bolesław to commemorate King Bolesław the Brave. Kołobrzeg is also a regional cultural center. In the summer a number of concerts by popular singers, musicians and cabaret performers take place. The Municipal Cultural Center, located in the "Park teatralny", supports artistic, theater and dance groups and acts as patron of youth ensembles and a vocal choir. The annual Interfolk festival, an international meeting of folklore, and other cultural events are also organized there. 
The cinema serves as a meeting place for the Piast Discussion Film Club. In Kołobrzeg there are many permanent and temporary exhibitions of artistic and historical interest. The town hall houses the Gallery of Modern Art, which exhibits artists from Kołobrzeg as well as from outside the local artistic circles. The gallery also conducts educational activities, including art lessons for children and young people from schools. The Kołobrzeg Pier is currently the second longest pier in the West Pomeranian Voivodeship, after the pier in Międzyzdroje. A jetty positioned at the end of the pier enables small ships to sail for sightseeing excursions. The town also has a museum of Polish weapons (Muzeum Oręża Polskiego), which presents collections of militaria from the early Middle Ages to the present. The Braunschweig Palace houses a branch of the museum dedicated to the history of the city; its collections include rare and common measurement tools as well as workshop measuring instruments. Moored at the port is the patrol ship ORP Fala, built in 1964 and transformed into a museum ship after leaving service. Kołobrzeg has connections, among others, to Szczecin, "Solidarity" Szczecin–Goleniów Airport, Gdańsk, Poznań, Warsaw, Kraków and Lublin. Kołobrzeg is twinned with:
https://en.wikipedia.org/wiki?curid=17410
Konix Multisystem The Konix Multisystem was a cancelled video game system under development by Konix, a British manufacturer of computer peripherals. The Konix Multisystem began life in 1988 as an advanced Konix peripheral design intended to build on the success of the company's range of joysticks. The design, codenamed Slipstream, resembled a dashboard-style games controller, and could be configured with a steering wheel, a flight yoke, or motorbike handles. It promised advanced features such as force feedback, hitherto unheard of in home gaming. However, it soon became apparent that the Slipstream project had the potential to be much more than a peripheral. Konix turned to their sister company Creative Devices Ltd, a computer hardware developer, to design a gaming computer to be put inside the controller, making it a stand-alone console in its own right. Shortly after this development began, Konix founder and chairman Wyn Holloway came across a magazine article describing the work of a British group of computer hardware designers whose latest design was looking for a home. The article in question, published in issue 10 of "ACE" magazine in July 1988, featured Flare Technology, a group of computer hardware designers who, having split from Sinclair Research (creators of the ZX81 and ZX Spectrum home computers), had built on their work on Sinclair's aborted Loki project to create a system known as the Flare One. Flare's prototype system was Z80-based but featured four custom chips to give it the power to compete with peers such as the Commodore Amiga and Atari ST. The 1MB machine (128k of ROM, 128k of video RAM, 768k of system RAM) promised graphics with 256 colours on screen simultaneously, could handle 3 million pixels per second, output 8-channel stereo sound, and had a blitter chip that allowed vertical and horizontal hardware scrolling. 
Flare were specifically aiming their machine at the gaming market, eschewing features such as an 80-column text display (considered a requisite for business applications such as word processing) in favour of faster graphics handling. This meant that, in spite of its modest 8-bit CPU, the system compared well against the 16-bit machines on the market at the time. It could move sprites and block graphics faster than an Atari ST, and in 256 colours under conditions in which the ST would only show 16 colours. It could also draw lines three times faster than an Amiga and even handle the maths of 3D models faster than the 32-bit Acorn Archimedes. In spite of these specifications, and bearing in mind their target gaming market, Flare aimed to retail their machine for around £200, half of what the Amiga and ST were selling for. Ultimately, however, Flare's resources to put it into mass production were limited. Holloway approached Flare and proposed a merger of their respective technologies to create an innovative new kind of gaming console, with the computer hardware built into the main controller, and in July 1988 a partnership was formed. Development work was carried out by Flare, with assistance from British games programmer Jeff Minter. Konix wanted the machine to use a 16-bit processor, so the Z80 was removed and replaced with an 8086 processor. They also demanded that the colour palette be expanded to 4096 colours, the same as that of the Amiga. To reduce manufacturing costs, the Flare One's four custom chips were integrated into one large chip. In order to keep the cost of software down, it was decided that the software medium would be 3.5" floppy discs rather than the ROM cartridges used universally by consoles up to that time. The embryonic console was revealed to the computing press at a toy fair held at Earls Court Exhibition Centre in February 1989. 
It boasted market leading performance, MIDI support and revolutionary peripherals including a light gun with recoil action and the Power Chair, a motorised seat designed to reproduce in the home what "sit-in" arcade games such as "After Burner" and "Out Run" delivered in the arcades using hydraulics. Another innovative feature was the ability to link two MultiSystems together to allow for head-to-head two player gaming. Release was slated for August that year. Several games in development had a version produced for the Konix Multisystem, including Vivid Image's "Hammerfist". An optional 512K RAM cartridge was considered to boost the total RAM for the machine to 768K. Despite the impressive specification on paper, the design did suffer from some limitations. Nick Speakman of software developer Binary Designs pointed out that "the custom chips are very powerful, but they require a lot of programming talent to get anything out of them. The screen handling [also] isn't as fast as we anticipated it to be." Brian Pollock of software publisher Logotron highlighted the limitations caused by the shortage of RAM (kept low to keep prices down), “My only concern is memory, or lack of it. For instance, in the game that I'm writing I am using six-channel FM synthesized sound. Now that takes up a hell of a lot of memory. I couldn't usefully fit any more samples, and that's sad.” The memory issue was also flagged by "Crash" magazine, which pointed out that the floppy disk format meant that games had to be loaded into the machine's RAM (originally intended to be 128k) in turn requiring the system to be constantly accessing the disk drive. Konix intended to remedy the problem with RAM upgrade cartridges, provided that the price of RAM fell in the future. Overall though, programmers received the system positively. 
Jeff Minter described the controller itself as "superb," while Chris Walsh of Argonaut Games stated that "Polygon based games like "Starglider 2" are going to be easy to program. The machine is geared up to rotating masses of vertices at incredible rates." Numerous game developers were recruited to produce games for the system, including Jeff Minter's Llamasoft, Electronic Arts, Psygnosis, Ocean, Palace and U.S. Gold, with Konix promising 40 games to be available by Christmas. Lucasfilm was mooted as a developer with the possibility of releasing their own branded version of the machine in the US, but nothing was ever confirmed. Games known to be in development for the system during 1988 included Llamasoft's "Attack of the Mutant Camels", System 3's "Last Ninja 2", Vivid Image's "Hammerfist", and Logotron's "Star Ray". Signs of trouble in the progress to the release of the console did not take long to arrive. By May the release date had slipped from August to October. By October, a first quarter 1990 release was envisaged. The December edition of "The Games Machine" magazine revealed the scale of the problem. According to company sources, Konix had been on the brink of calling in receivers. Cheques had bounced, employees hadn't been paid and software development had been brought to a halt in mid-October as developers had reached the stage where they could continue no further without a finished machine. In March 1990 it was revealed that Konix had sold the rights to sell their joystick range in the UK to Spectravision, which also manufactured the rival QuickShot joystick range. They had effectively sold off the family silver in order to keep the MultiSystem project alive. Autumn 1990 was to be the new release time. Eventually, beset by delays and in spite of all of the media coverage and apparent demand for the machine, the project went under when Konix ran out of cash without a completed system ever being released. 
Some people, including Holloway, contend that this was due to major international competitors leaning on Konix's suppliers and financiers to prevent the project reaching the market. After the project was abandoned, Flare Technology began work on a new project, "Flare Two", which was eventually bought by Atari and, after further development, formed the basis for the Atari Jaguar game console. The original Flare One technology was purchased by arcade gambling machine manufacturer Bellfruit for use in their quiz machines. Drivers for these games are also included in the multi-system emulator MAME. The Konix Multisystem's design was later released independently by a Chinese company called MSC (MultiSystem China) as the MSC Super MS-200E Multi-System, although this was simply an inexpensive PC games controller, without any special internal hardware. In terms of its long-lasting impact on the video game industry, perhaps the most intriguing aspect is Wyn Holloway's claim that Lucasfilm had their frequent partner Sony lined up to manufacture their version of the system, this being contemporaneous with Sony's development of the SNES-CD for Nintendo, which ultimately led to the first PlayStation. Videotaped footage showing several games being worked on for the system survives. Excerpts from the footage were later issued on the cover disc of issue 8 of "Retro Gamer" magazine.
https://en.wikipedia.org/wiki?curid=17411
Klein bottle In topology, a branch of mathematics, the Klein bottle () is an example of a non-orientable surface; it is a two-dimensional manifold against which a system for determining a normal vector cannot be consistently defined. Informally, it is a one-sided surface which, if traveled upon, could be followed back to the point of origin while flipping the traveler upside down. Other related non-orientable objects include the Möbius strip and the real projective plane. Whereas a Möbius strip is a surface with boundary, a Klein bottle has no boundary (for comparison, a sphere is an orientable surface with no boundary). The Klein bottle was first described in 1882 by the German mathematician Felix Klein. It may have been originally named the "Kleinsche Fläche" ("Klein surface") and then misinterpreted as "Kleinsche Flasche" ("Klein bottle"), which ultimately may have led to the adoption of this term in the German language as well. The following square is a fundamental polygon of the Klein bottle. The idea is to 'glue' together the corresponding coloured edges with the arrows matching, as in the diagrams below. Note that this is an "abstract" gluing in the sense that trying to realize this in three dimensions results in a self-intersecting Klein bottle. To construct the Klein bottle, glue the red arrows of the square together (left and right sides), resulting in a cylinder. To glue the ends of the cylinder together so that the arrows on the circles match, one would pass one end through the side of the cylinder. This creates a circle of self-intersection – this is an immersion of the Klein bottle in three dimensions. This immersion is useful for visualizing many properties of the Klein bottle. For example, the Klein bottle has no "boundary", where the surface stops abruptly, and it is non-orientable, as reflected in the one-sidedness of the immersion. The common physical model of a Klein bottle is a similar construction. 
The Science Museum in London has a collection of hand-blown glass Klein bottles on display, exhibiting many variations on this topological theme. The bottles date from 1995 and were made for the museum by Alan Bennett. The Klein bottle, proper, does not self-intersect. Nonetheless, there is a way to visualize the Klein bottle as being contained in four dimensions. By adding a fourth dimension to the three-dimensional space, the self-intersection can be eliminated. Gently push a piece of the tube containing the intersection along the fourth dimension, out of the original three-dimensional space. A useful analogy is to consider a self-intersecting curve on the plane; self-intersections can be eliminated by lifting one strand off the plane. Suppose for clarification that we adopt time as that fourth dimension. Consider how the figure could be constructed in "xyzt"-space. The accompanying illustration ("Time evolution...") shows one useful evolution of the figure. At "t" = 0 the wall sprouts from a bud somewhere near the "intersection" point. After the figure has grown for a while, the earliest section of the wall begins to recede, disappearing like the Cheshire Cat but leaving its ever-expanding smile behind. By the time the growth front gets to where the bud had been, there’s nothing there to intersect and the growth completes without piercing existing structure. The four-dimensional figure as defined cannot exist in 3-space but is easily understood in 4-space. More formally, the Klein bottle is the quotient space described as the square [0,1] × [0,1] with sides identified by the relations (0, "y") ~ (1, "y") for 0 ≤ "y" ≤ 1 and ("x", 0) ~ (1 − "x", 1) for 0 ≤ "x" ≤ 1. Like the Möbius strip, the Klein bottle is a two-dimensional manifold which is not orientable. Unlike the Möbius strip, the Klein bottle is a "closed" manifold, meaning it is a compact manifold without boundary. While the Möbius strip can be embedded in three-dimensional Euclidean space R3, the Klein bottle cannot. It can be embedded in R4, however. 
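The edge identifications of the square can be made concrete with a small sketch that maps any point of the unit square to a canonical representative of its equivalence class; `canonical` is a hypothetical helper written here for illustration, assuming the standard Klein bottle identifications (0, "y") ~ (1, "y") and ("x", 0) ~ (1 − "x", 1):

```python
def canonical(x, y, eps=1e-9):
    """Return a canonical representative of (x, y) in [0,1] x [0,1]
    under the Klein bottle identifications
    (0, y) ~ (1, y)      (left and right edges glued straight) and
    (x, 0) ~ (1 - x, 1)  (bottom and top edges glued with a flip)."""
    if abs(y - 1.0) < eps:      # fold the top edge onto the bottom, flipped
        x, y = 1.0 - x, 0.0
    if abs(x - 1.0) < eps:      # fold the right edge onto the left
        x = 0.0
    return (round(x, 9), round(y, 9))

# All four corners of the square are a single point of the Klein bottle:
assert canonical(0.0, 0.0) == canonical(1.0, 1.0) == canonical(0.0, 1.0)
```

The corner check reflects why a CW structure on the Klein bottle needs only one 0-cell: the identifications collapse all four corners of the square to one point.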
The Klein bottle can be seen as a fiber bundle over the circle "S"1, with fibre "S"1, as follows: one takes the square (modulo the edge identifying equivalence relation) from above to be "E", the total space, while the base space "B" is given by the unit interval in "y", modulo "1~0". The projection π:"E"→"B" is then given by π(["x", "y"]) = ["y"]. The Klein bottle can be constructed (in a four dimensional space, because in three dimensional space it cannot be done without allowing the surface to intersect itself) by joining the edges of two Möbius strips together, as described in the following limerick by Leo Moser: "A mathematician named Klein / Thought the Möbius band was divine. / Said he: 'If you glue / The edges of two, / You'll get a weird bottle like mine.'" The initial construction of the Klein bottle by identifying opposite edges of a square shows that the Klein bottle can be given a CW complex structure with one 0-cell "P", two 1-cells "C"1, "C"2 and one 2-cell "D". Its Euler characteristic is therefore 1 − 2 + 1 = 0. The boundary homomorphism is given by ∂"D" = 2"C"1 and ∂"C"1 = ∂"C"2 = 0, yielding the homology groups of the Klein bottle "K" to be "H"0("K", Z) = Z, "H"1("K", Z) = Z×Z/2Z and "H""n"("K", Z) = 0 for "n" > 1. There is a 2-1 covering map from the torus to the Klein bottle, because two copies of the fundamental region of the Klein bottle, one being placed next to the mirror image of the other, yield a fundamental region of the torus. The universal cover of both the torus and the Klein bottle is the plane R2. The fundamental group of the Klein bottle can be determined as the group of deck transformations of the universal cover and has the presentation ⟨"a", "b" | "ab" = "b"−1"a"⟩. Six colors suffice to color any map on the surface of a Klein bottle; this is the only exception to the Heawood conjecture, a generalization of the four color theorem, which would require seven. A Klein bottle is homeomorphic to the connected sum of two projective planes. It is also homeomorphic to a sphere plus two cross caps. When embedded in Euclidean space, the Klein bottle is one-sided. 
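The cellular homology computation can be written out explicitly. The following LaTeX sketch assumes the standard CW structure with one 0-cell "P", two 1-cells "C"1, "C"2 and one 2-cell "D", and the standard boundary maps for that structure:

```latex
% Cellular chain complex of the Klein bottle K:
%   0 --> Z<D> --d2--> Z<C_1> (+) Z<C_2> --d1--> Z<P> --> 0
\begin{align*}
  \partial_2 D &= 2C_1, \qquad \partial_1 C_1 = \partial_1 C_2 = 0,\\
  H_0(K;\mathbb{Z}) &= \mathbb{Z}, \qquad
  H_1(K;\mathbb{Z})
    = \frac{\ker \partial_1}{\operatorname{im} \partial_2}
    = \frac{\mathbb{Z}C_1 \oplus \mathbb{Z}C_2}{\langle 2C_1 \rangle}
    \cong \mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z},\\
  \chi(K) &= 1 - 2 + 1 = 0.
\end{align*}
```

The torsion factor Z/2Z in "H"1 records the orientation-reversing gluing: the 2-cell wraps twice around the 1-cell "C"1, so "C"1 is not a boundary but 2"C"1 is.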
However, there are other topological 3-spaces, and in some of the non-orientable examples a Klein bottle can be embedded such that it is two-sided, though due to the nature of the space it remains non-orientable. Dissecting a Klein bottle into halves along its plane of symmetry results in two mirror image Möbius strips, i.e. one with a left-handed half-twist and the other with a right-handed half-twist (one of these is pictured on the right). Remember that the intersection pictured is not really there. One description of the types of simple-closed curves that may appear on the surface of the Klein bottle is given by the use of the first homology group of the Klein bottle calculated with integer coefficients. This group is isomorphic to Z×Z2. Up to reversal of orientation, the only homology classes which contain simple-closed curves are as follows: (0,0), (1,0), (1,1), (2,0), (0,1). Up to reversal of the orientation of a simple closed curve, if it lies within one of the two crosscaps that make up the Klein bottle, then it is in homology class (1,0) or (1,1); if it cuts the Klein bottle into two Möbius strips, then it is in homology class (2,0); if it cuts the Klein bottle into an annulus, then it is in homology class (0,1); and if it bounds a disk, then it is in homology class (0,0). To make the "figure 8" or "bagel" immersion of the Klein bottle, one can start with a Möbius strip and curl it to bring the edge to the midline; since there is only one edge, it will meet itself there, passing through the midline. It has a particularly simple parametrization as a "figure-8" torus with a half-twist: "x" = ("r" + cos("θ"/2)sin("v") − sin("θ"/2)sin(2"v")) cos("θ"), "y" = ("r" + cos("θ"/2)sin("v") − sin("θ"/2)sin(2"v")) sin("θ"), "z" = sin("θ"/2)sin("v") + cos("θ"/2)sin(2"v"), for 0 ≤ "θ" < 2π, 0 ≤ "v" < 2π and "r" > 2. In this immersion, the self-intersection circle (where sin("v") is zero) is a geometric circle in the "xy" plane. The positive constant "r" is the radius of this circle. The parameter "θ" gives the angle in the "xy" plane as well as the rotation of the figure 8, and "v" specifies the position around the 8-shaped cross section. 
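The figure-8 immersion can be sketched numerically. The parametrization below is the standard half-twisted figure-8 form (assumed here; sign and angle conventions vary between sources), and the final assertion checks the self-intersection where sin("v") = 0:

```python
import math

def figure8_klein(theta, v, r=3.0):
    """Figure-8 ("bagel") immersion of the Klein bottle in R^3:
    a figure-8 cross section, rotated by theta/2 (the half-twist),
    swept around a circle of radius r (r > 2 keeps the tube off the axis)."""
    eight = math.cos(theta / 2) * math.sin(v) - math.sin(theta / 2) * math.sin(2 * v)
    x = (r + eight) * math.cos(theta)
    y = (r + eight) * math.sin(theta)
    z = math.sin(theta / 2) * math.sin(v) + math.cos(theta / 2) * math.sin(2 * v)
    return (x, y, z)

# On the self-intersection circle sin(v) = 0, the two lobes of the
# figure 8 meet: v = 0 and v = pi land on the same point in the xy plane.
p0, p1 = figure8_klein(1.0, 0.0), figure8_klein(1.0, math.pi)
assert all(abs(a - b) < 1e-9 for a, b in zip(p0, p1))
```

Sampling this function over a grid in ("θ", "v") is enough to plot the surface; the half-angle "θ"/2 in the cross-section terms is what makes the figure 8 flip over once per revolution.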
With the above parametrization the cross section is a 2:1 Lissajous curve. A non-intersecting 4-D parametrization can be modeled after that of the flat torus, where "R" and "P" are constants that determine aspect ratio, and "θ" and "v" are similar to those defined above. "v" determines the position around the figure-8 as well as the position in the x-y plane. "θ" determines the rotational angle of the figure-8 as well as the position around the z-w plane. "ε" is any small constant and "ε" sin "v" is a small "v"-dependent bump in "z-w" space to avoid self-intersection. The "v" bump causes the self-intersecting 2-D/planar figure-8 to spread out into a 3-D stylized "potato chip" or saddle shape in the x-y-w and x-y-z space viewed edge on. When "ε" = 0 the self-intersection is a circle in the z-w plane <0, 0, cos"θ", sin"θ">. The pinched torus is perhaps the simplest parametrization of the Klein bottle in both three and four dimensions. It is a torus that, in three dimensions, flattens and passes through itself on one side. Unfortunately, in three dimensions this parametrization has two pinch points, which makes it undesirable for some applications. In four dimensions the "z" amplitude rotates into the "w" amplitude and there are no self-intersections or pinch points. One can view this as a tube or cylinder that wraps around, as in a torus, but its circular cross section flips over in four dimensions, presenting its "backside" as it reconnects, just as a Möbius strip cross section rotates before it reconnects. The 3D orthogonal projection of this is the pinched torus shown above. Just as a Möbius strip is a subset of a solid torus, the Möbius tube is a subset of a toroidally closed spherinder (solid spheritorus). The parametrization of the 3-dimensional immersion of the bottle itself is much more complicated; it is defined for 0 ≤ "u" < π and 0 ≤ "v" < 2π. Regular 3D embeddings of the Klein bottle fall into three regular homotopy classes (four if one paints them). 
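The idea of a cross section that flips over in four dimensions can also be sketched numerically. The parametrization below is one standard non-self-intersecting R4 embedding, chosen here for simplicity rather than being the flat-torus-style formula discussed above: the component of the cross-section circle normal to the x-y plane rotates in the z-w plane at half speed, so the tube reconnects to itself mirror-reversed:

```python
import math

def klein_4d(theta, v, R=2.0):
    """One standard embedding of the Klein bottle in R^4: a unit circle
    (parametrized by theta) swept around a circle of radius R in the
    x-y plane (parametrized by v), with the out-of-plane part of the
    cross section rotating in the z-w plane at half speed (v/2)."""
    x = (R + math.cos(theta)) * math.cos(v)
    y = (R + math.cos(theta)) * math.sin(v)
    z = math.sin(theta) * math.cos(v / 2)
    w = math.sin(theta) * math.sin(v / 2)
    return (x, y, z, w)

# Going once around in v reconnects the cross section with a flip:
# f(theta, v + 2*pi) == f(-theta, v), the Klein bottle identification.
p1 = klein_4d(0.7, 1.3)
p2 = klein_4d(-0.7, 1.3 + 2 * math.pi)
assert all(abs(a - b) < 1e-9 for a, b in zip(p1, p2))
```

Because the half-speed z-w rotation only reverses the sign of sin("θ") after a full turn in "v", the image closes up as a Klein bottle rather than a torus, with no pinch points or self-intersections for "R" > 1.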
The three are represented by the traditional Klein bottle, the left-handed figure-8 Klein bottle and the right-handed figure-8 Klein bottle. The traditional Klein bottle embedding is achiral. The figure-8 embedding is chiral (the pinched torus embedding above is not regular, as it has pinch points, so it is not relevant in this section). The three embeddings above cannot be smoothly transformed into each other in three dimensions. If the traditional Klein bottle is cut lengthwise it deconstructs into two oppositely chiral Möbius strips. If a left-handed figure-8 Klein bottle is cut it deconstructs into two left-handed Möbius strips, and similarly for the right-handed figure-8 Klein bottle. If the traditional Klein bottle is painted in two colors, this induces chirality on it, creating four homotopy classes. The generalization of the Klein bottle to higher genus is given in the article on the fundamental polygon. In another order of ideas, constructing 3-manifolds, it is known that a solid Klein bottle is homeomorphic to the Cartesian product of a Möbius strip and a closed interval. The "solid Klein bottle" is the non-orientable version of the solid torus. A Klein surface is, as for Riemann surfaces, a surface with an atlas allowing the transition maps to be composed using complex conjugation. One can obtain the so-called dianalytic structure of the space.
https://en.wikipedia.org/wiki?curid=17412
Icehenge Icehenge is a science fiction novel by American author Kim Stanley Robinson, published in 1984. Though published almost ten years before Robinson's Mars trilogy, and taking place in a different version of the future, "Icehenge" contains elements that also appear in his Mars series, such as extreme human longevity, Martian political revolution, historical revisionism, and shifts between primary characters. "Icehenge" is set at three distinct time periods, and told from the perspective of three different characters. The first narrative is the diary of an engineer caught up in a Martian political revolution in 2248. Effectively kidnapped aboard a mutinous Martian spaceship, she provides assistance to the revolutionaries in their quest for interstellar travel, but ultimately chooses not to travel with them but to return to the doomed revolution on Mars. The second narrative is told from the perspective of an archaeologist three centuries later. He is involved in a project investigating the failed revolution, and during this finds the engineer's diary buried near the remains of a ruined city. At the same time, a mysterious monument is found at the north pole of Pluto, tying up with a passing mention in the engineer's diary. In the final narrative, the great-grandson of the archaeologist visits the monument on Pluto, a scaled-up version of Stonehenge carved in ice. He is investigating the possibility that both the diary and the monument were planted by a reclusive and wealthy businesswoman who lives in the orbit of Saturn. The first part of this novel was originally published as the novella "To Leave a Mark" in the November 1982 issue of "The Magazine of Fantasy & Science Fiction". The third part of "Icehenge" was originally published as the novella "On the North Pole of Pluto" in 1980 in the anthology "Orbit" 18 edited by Damon Knight. Robinson gave the novella in rough form to Ursula K. 
Le Guin to read and edit while he was enrolled in her writing workshop at UCSD in the spring of 1977. Views of Saturn from the space station visited by the narrator of the novel's third section were inspired by images of Saturn taken during the Voyager flybys in 1980–1981.
https://en.wikipedia.org/wiki?curid=17414
Knights Who Say "Ni!" The Knights Who Say "Ni!", also called the Knights of Ni, are a band of knights encountered by King Arthur and his followers in the film "Monty Python and the Holy Grail". They demonstrate their power by shouting "Ni!" (pronounced "nee"), terrifying the party, whom they refuse to allow passage through their forest unless appeased through the gift of a shrubbery. The knights appear silhouetted in a misty wood, wearing robes and horned helmets; their full number is never apparent, but there are at least six. The leader of the knights, played by Michael Palin, is the only one who speaks to the party. He is nearly double Arthur's height, and wears a great helm decorated with long antlers. The other knights are large, but of human proportions, and wear visored sallet helmets decorated with cow horns. The knight explains that they are the "keepers of the sacred words 'Ni', 'Peng', and 'Neee-Wom'." Arthur confides to Sir Bedivere, "those who hear them seldom live to tell the tale!" The knights demand a sacrifice, and when Arthur states that he merely wishes to pass through the woods, the knights begin shouting "Ni!", forcing the party to shrink back in fear. After this demonstration of their power, the head knight threatens to say "Ni!" again unless the travellers appease them with a shrubbery; otherwise they shall never pass through the wood alive. When Arthur questions the demand, the knights again shout "Ni!" until the travellers agree to bring them a shrubbery, which the head knight specifies must be "one that looks nice. And not too expensive." In order to fulfill their promise to the Knights of Ni, the party visits a small village, where Arthur and Bedivere ask an old crone where they can obtain a shrubbery. The woman questions them, and Arthur admits that it is for the Knights who say "Ni!", whereupon she refuses to cooperate. Arthur then threatens to say "Ni!" 
to the old woman unless she helps them, and when she still refuses, begins shouting "Ni!". Bedivere has trouble saying the sacred word, which he pronounces "Nu!" until Arthur demonstrates the correct technique. As the crone shrinks back from their combined assault, they are interrupted by Roger the Shrubber, who laments the lack of law and order that allows ruffians to say "Ni!" to an old woman. Arthur obtains a shrubbery from Roger, and brings it to the Knights of Ni. The head knight acknowledges that "it is a good shrubbery", but asserts that the knights cannot allow Arthur and his followers to pass through the wood because they are no longer the Knights who say "Ni!" They are now the Knights who say "Ekke Ekke Ekke Ekke Ptang Zoo Boing!" and must therefore give Arthur a test. Unable to pronounce the new name, Arthur addresses them as "Knights who until recently said 'Ni!'", inquiring as to the nature of the test. The head knight demands "another" shrubbery, to be placed next to but slightly higher than the first; and then Arthur "must cut down the mightiest tree in the forest—with a herring!" The knight presents a herring to be used. Arthur objects, asserting that "it can't be done!" upon which the knights recoil as though in fear and pain. It soon emerges that the knights are unable to withstand the word "it", which Arthur's party is unable to avoid saying. The knights are soon incapacitated by the word, which even the head knight cannot stop repeating, allowing Arthur and his followers to make their escape. In the original screenplay, it was suggested that the head knight be played by "Mike standing on John's shoulders". In the DVD commentary for the film, Michael Palin states that their use of the word "Ni!" was derived from "The Goon Show". Upon Arthur's return, the knights were to have said, "Neeeow...wum...ping!" 
have been cited as an example of intentional disregard for historical accuracy in neo-medievalism, which may be contrasted with the casual disregard for historical accuracy inherent in more traditional works of the fantasy genre. However, in "Medievalisms: Making the Past in the Present", the authors suggest that the original characters of "Monty Python and the Holy Grail" actually represent medievalism, rather than neomedievalism, as many of the film's details are in fact based on authentic medieval texts and ideas. With respect to the Knights who say "Ni!", the authors suggest that Sir Bedivere's difficulty pronouncing "Ni!", despite its levity, "carries a very learned joke about the difficulties of pronouncing Middle English", alluding to the Great Vowel Shift, which occurred in English during the late medieval period.
https://en.wikipedia.org/wiki?curid=17415
Kingdom of Judah The Kingdom of Judah ( "Mamléḵeṯ Yehudāh"; "Ya'uda"; "Bēyt Dāwīḏ") was an Iron Age kingdom of the Southern Levant. The Hebrew Bible depicts it as the successor to the United Monarchy, a term denoting the Kingdom of Israel under biblical kings Saul, David and Solomon and covering the territory of two historical kingdoms, Judah and Israel; but some scholars, including Israel Finkelstein and Alexander Fantalkin, believe that the existent archaeological evidence for an extensive Kingdom of Judah before the late 8th century BCE is too weak, and that the methodology used to obtain the evidence is flawed. Such scholars believe that, prior to this era, the kingdom was no more than a small tribal entity which was limited to Jerusalem and its immediate surroundings. In the 10th and early 9th centuries BCE, the territory of Judah appears to have been sparsely populated, limited to small rural settlements, most of them unfortified. Jerusalem, the kingdom's capital, likely did not emerge as a significant administrative center until the end of the 8th century; before this the archaeological evidence suggests its population was too small to sustain a viable kingdom. In the 7th century its population increased greatly, prospering under Assyrian vassalage (despite Hezekiah's revolt against the Assyrian king Sennacherib), but in 605 the Assyrian Empire was defeated, and the ensuing competition between the Twenty-sixth Dynasty of Egypt and the Neo-Babylonian Empire for control of the Eastern Mediterranean led to the destruction of the kingdom in a series of campaigns between 597 and 582, the deportation of the elite of the community, and the incorporation of Judah into a province of the Neo-Babylonian Empire. The legendary history of David and Solomon in the 10th century BCE tells little about the origins of Judah. There is no archaeological evidence of an extensive, powerful Kingdom of Judah before the late 8th century BCE; Nimrud Tablet K.3751, dated c. 
733 BCE, is the earliest known record of the name Judah (written in Assyrian cuneiform as Yaudaya or KUR.ia-ú-da-a-a). Prior to this the kingdom was no more than a small tribal entity which was limited to Jerusalem and its immediate surroundings. The status of Jerusalem in the 10th century BCE is a major subject of debate. The oldest part of Jerusalem and its original urban core is the City of David, which does not show evidence of significant Israelite residential activity until the 9th century. However, unique administrative structures such as the Stepped Stone Structure and the Large Stone Structure, which originally formed one structure, contain material culture dated to Iron I. On account of the apparent lack of settlement activity in the 10th century BCE, Israel Finkelstein argues that Jerusalem in that century was a small country village in the Judean hills, not a national capital, and Ussishkin argues that the city was entirely uninhabited. Amihai Mazar contends that if the Iron I/Iron IIa dating of administrative structures in the City of David is correct (as he believes), "Jerusalem was a rather small town with a mighty citadel, which could have been a center of a substantial regional polity." A collection of military orders found in the ruins of a military fortress in the Negev dating to the period of the Kingdom of Judah indicates widespread literacy, given that based on the inscriptions, the ability to read and write extended throughout the chain of command, from commanders to petty officers. According to Professor Eliezer Piasetsky, who participated in analyzing the texts, "Literacy existed at all levels of the administrative, military and priestly systems of Judah. Reading and writing were not limited to a tiny elite." This indicates the presence of a substantial educational infrastructure in Judah at the time. 
According to the Hebrew Bible, the kingdom of Judah resulted from the break-up of the United Kingdom of Israel (1020 to about 930 BCE) after the northern tribes refused to accept Rehoboam, the son of Solomon, as their king. At first, only the tribe of Judah remained loyal to the house of David, but soon after the tribe of Benjamin joined Judah. The two kingdoms, Judah in the south and Israel in the north, coexisted uneasily after the split until the destruction of the Kingdom of Israel by Assyria in c. 722/721. The major theme of the Hebrew Bible's narrative is the loyalty of Judah, and especially its kings, to Yahweh, which it states is the God of Israel. Accordingly, all the kings of Israel and many of the kings of Judah were "bad", which in terms of Biblical narrative means that they failed to enforce monotheism. Of the "good" kings, Hezekiah (727–698 BCE) is noted for his efforts at stamping out idolatry (in this case, the worship of Baal and Asherah, among other traditional Near Eastern divinities), but his successors, Manasseh of Judah (698–642 BCE) and Amon (642–640 BCE), revived idolatry, drawing down on the kingdom the anger of Yahweh. King Josiah (640–609 BCE) returned to the worship of Yahweh alone, but his efforts were too late and Israel's unfaithfulness caused God to permit the kingdom's destruction by the Neo-Babylonian Empire in the Siege of Jerusalem (587 BCE). However it is now fairly well established among academic scholars that the Books of Kings is not an accurate reflection of religious views in either Judah or particularly Israel during this period. For the first sixty years, the kings of Judah tried to re-establish their authority over the northern kingdom, and there was perpetual war between them. Israel and Judah were in a state of war throughout Rehoboam's seventeen-year reign. Rehoboam built elaborate defenses and strongholds, along with fortified cities. 
In the fifth year of Rehoboam's reign, Shishak, pharaoh of Egypt, brought a huge army and took many cities. In the sack of Jerusalem (10th century BCE), Rehoboam gave them all of the treasures out of the temple as a tribute and Judah became a vassal state of Egypt. Rehoboam's son and successor, Abijah of Judah, continued his father's efforts to bring Israel under his control. He fought the Battle of Mount Zemaraim against Jeroboam of Israel and was victorious with a heavy loss of life on the Israel side. According to the books of Chronicles, Abijah and his people defeated them with a great slaughter, so that 500,000 chosen men of Israel fell slain after which Jeroboam posed little threat to Judah for the rest of his reign and the border of the tribe of Benjamin was restored to the original tribal border. Abijah's son and successor, Asa of Judah, maintained peace for the first 35 years of his reign, during which time he revamped and reinforced the fortresses originally built by his grandfather, Rehoboam. 2 Chronicles states that at the Battle of Zephath, the Egyptian-backed chieftain Zerah the Ethiopian and his million men and 300 chariots was defeated by Asa's 580,000 men in the Valley of Zephath near Maresha. The Bible does not state whether Zerah was a pharaoh or a general of the army. The Ethiopians were pursued all the way to Gerar, in the coastal plain, where they stopped out of sheer exhaustion. The resulting peace kept Judah free from Egyptian incursions until the time of Josiah some centuries later. In his 36th year, Asa was confronted by Baasha of Israel, who built a fortress at Ramah on the border, less than ten miles from Jerusalem. The result was that the capital was under pressure and the military situation was precarious. Asa took gold and silver from the Temple and sent them to Ben-Hadad I, king of Aram-Damascus, in exchange for the Damascene king canceling his peace treaty with Baasha. 
Ben-Hadad attacked Ijon, Dan, and many important cities of the tribe of Naphtali, and Baasha was forced to withdraw from Ramah. Asa tore down the unfinished fortress and used its raw materials to fortify Geba and Mizpah in Benjamin on his side of the border. Asa's successor, Jehoshaphat, changed the policy towards Israel and instead pursued alliances and co-operation with the northern kingdom. The alliance with Ahab was based on marriage. This alliance led to disaster for the kingdom with the battle of Ramoth-Gilead. He then entered into an alliance with Ahaziah of Israel for the purpose of carrying on maritime commerce with Ophir. But the fleet that was then equipped at Ezion-Geber was immediately wrecked. A new fleet was fitted out without the cooperation of the king of Israel, and although it was successful, the trade was not prosecuted. He subsequently joined Jehoram of Israel in a war against the Moabites, who were under tribute to Israel. This war was successful, with the Moabites being subdued. However, the sight of Mesha offering his own son as a human sacrifice on the walls of Kir-haresheth filled Jehoshaphat with horror, and he withdrew and returned to his own land. Jehoshaphat's successor, Jehoram of Judah, formed an alliance with Israel by marrying Athaliah, the daughter of Ahab. Despite this alliance with the stronger northern kingdom, Jehoram's rule of Judah was shaky. Edom revolted, and he was forced to acknowledge their independence. A raid by Philistines, Arabs and Ethiopians looted the king's house and carried off all of his family except for his youngest son, Ahaziah of Judah. After Hezekiah became sole ruler in c. 715 BCE, he formed alliances with Ashkelon and Egypt, and made a stand against Assyria by refusing to pay tribute. In response, Sennacherib of Assyria attacked the fortified cities of Judah. 
Hezekiah paid three hundred talents of silver and thirty talents of gold to Assyria – requiring him to empty the temple and royal treasury of silver and strip the gold from the doorposts of Solomon's Temple. However, Sennacherib besieged Jerusalem in 701 BCE, though the city was never taken. During the long reign of Manasseh (c. 687/686 – 643/642 BCE), Judah was a vassal of Assyrian rulers – Sennacherib and his successors, Esarhaddon and Ashurbanipal after 669 BCE. Manasseh is listed as being required to provide materials for Esarhaddon's building projects, and as one of a number of vassals who assisted Ashurbanipal's campaign against Egypt. When Josiah became king of Judah in c. 641/640 BCE, the international situation was in flux. To the east, the Neo-Assyrian Empire was beginning to disintegrate, the Neo-Babylonian Empire had not yet risen to replace it, and Egypt to the west was still recovering from Assyrian rule. In this power vacuum, Judah was able to govern itself for the time being without foreign intervention. However, in the spring of 609 BCE, Pharaoh Necho II personally led a sizable army up to the Euphrates to aid the Assyrians.
https://en.wikipedia.org/wiki?curid=17423
Calligra Words Calligra Words is a word processor, which is part of Calligra Suite and developed by KDE as free software. When the Calligra Suite was formed, Words, unlike the other Calligra applications, was not a continuation of the corresponding KOffice application, KWord. Words was instead largely written from scratch – in May 2011 a completely new layout engine was announced. The first release was made available on , using the version number 2.4 to match the rest of Calligra Suite. Initial reception of Calligra Words shortly after the 2.4 release was mixed. While Linux Pro Magazine Online's Bruce Byfield wrote “Calligra needed an impressive first release. Perhaps surprisingly, and to the development team’s credit, it has managed one in 2.4.”, he also noted that “Words in particular is still lacking features”. He concluded that Calligra is “worth keeping an eye on”. On the other hand, Calligra Words became the default word processor in Kubuntu 12.04 – replacing LibreOffice Writer. Formulas in Calligra Words are provided by the Formula plugin, a formula editor with a WYSIWYG interface.
https://en.wikipedia.org/wiki?curid=17430
Kenneth Lee Pike Kenneth Lee Pike (June 9, 1912 – December 31, 2000) was an American linguist and anthropologist. He was the originator of the theory of tagmemics, the coiner of the terms "emic" and "etic" and the developer of the constructed language Kalaba-X for use in teaching the theory and practice of translation. In addition, he was the first president of the Bible-translating organization Summer Institute of Linguistics (SIL), with which he was associated from 1942 until his death. Pike was born in Woodstock, Connecticut, and studied theology at Gordon College, graduating with a B.A. in 1933. He initially wanted to do missionary work in China. When this was denied him, he studied linguistics with the Summer Institute of Linguistics. He went to Mexico with SIL, learning Mixtec from native speakers there in 1935. In 1937 Pike went to the University of Michigan, where he worked for his doctorate in linguistics under Charles C. Fries. His research involved living among the Mixtecs and developing a written system for the Mixtec language with his wife, Evelyn. After gaining his Ph.D. in 1942, Pike became the first president of the Summer Institute of Linguistics. Its main function was to produce translations of the Bible in unwritten languages, and in 1951 Pike published the "Mixtec New Testament". He was president of SIL International from 1942 to 1979. In parallel with his role at SIL, Pike spent thirty years at the University of Michigan, during which time he served as chairman of its linguistics department, professor of linguistics, and director of its English Language Institute (he did pioneering work in the field of English language learning and teaching), and was later Professor Emeritus of the university. Pike is best known for his distinction between the "emic" and the "etic". 
"Emic" (as in "phonemics") refers to the role of cultural and linguistic categories as understood from within the cultural or linguistic system that they are a part of, while "etic" (as in "phonetics") refers to the analytical study of those sounds grounded outside of the system itself. Pike argued that only native speakers are competent judges of emic descriptions, and are thus crucial in providing data for linguistic research, while investigators from outside the linguistic group apply scientific methods in the analysis of language, producing etic descriptions which are verifiable and reproducible. Pike himself carried out studies of indigenous languages in Australia, Bolivia, Ecuador, Ghana, Java, Mexico, Nepal, New Guinea, Nigeria, the Philippines, and Peru. Pike developed his theory of "tagmemics" to help with the analysis of languages from Central and South America, by identifying (using both semantic and syntactic elements) strings of linguistic elements capable of playing a number of different roles. Pike's approach to the study of language put him outside the circle of the "generative" movement begun by Noam Chomsky, then the dominant linguist, since Pike believed that the structure of language should be studied in context, not just in single sentences, as seen in the title of his magnum opus "Language in relation to a unified theory of the structure of human behavior" (1967). He became well known for his "monolingual demonstrations". He would stand before an audience, with a large number of chalkboards. A speaker of a language unknown to him would be brought in to work with Pike. Using gestures and objects, not asking questions in a language that the person might know, Pike would begin to analyze the language before the audience. He was a member of the National Academy of Sciences, the Linguistic Society of America (LSA), the Linguistic Association of Canada and the United States (LACUS), and the American Anthropological Association. 
He served as president of LSA and LACUS, and was later nominated for the Templeton Prize three years in a row. When he was named to the Charles Carpenter Fries Professorship of Linguistics at the University of Michigan in 1974, the Dean's citation noted that "his lifelong originality and energetic activity verge on the legendary". Pike was awarded honorary degrees by a number of institutions, including Huntington College, the University of Chicago, Georgetown University, L'Université René Descartes (Sorbonne), and the Albert-Ludwigs-Universität. Though the Nobel Prize committee did not publicize nominations, in 1983 US Senator Alan J. Dixon and US Congressman Paul Simon announced that they had nominated Pike for the Nobel Peace Prize. Academic sponsors for his nomination included Charles F. Hockett, Sydney Lamb (Rice University), Gordon J. van Wylen (Hope College), Frank H. T. Rhodes (Cornell University), André Martinet (Sorbonne), David C.C. Li (National Taiwan Normal University), and Ming Liu (Chinese University of Hong Kong).
https://en.wikipedia.org/wiki?curid=17431
K-Meleon K-Meleon is an open-source web browser for Microsoft Windows. Based on the same Gecko layout engine as Mozilla Firefox and SeaMonkey, K-Meleon's design goal is to provide a fast and reliable web browser while providing a highly customizable interface and using system resources efficiently. It is released under the GNU General Public License. K-Meleon uses the native Windows application programming interface (API) to create its user interface instead of Mozilla's cross-platform XML User Interface Language (XUL) layer, and as a result, is tightly integrated into the look and feel of the Windows desktop. This approach is similar to that of Galeon and Epiphany (for the GNOME desktop), and Camino (for Mac OS X). Omitting XUL makes K-Meleon less resource-intensive than other Gecko-based browsers on Windows. The first version, K-Meleon 0.1, was originally written by Christophe Thibault and released to the public on August 21, 2000. A flurry of development happened until 2003 when a number of developers stopped working on it. Dorian Boissonnade eventually took over as the primary developer of the project, and continues to maintain the project to date. After many major release versions from 0.1 to 0.9.x, K-Meleon 1.0 introduced major modifications. The most notable change was the main K-Meleon code being updated to accommodate the Gecko 1.8.0.x rendering engine, as used in the latest releases of Mozilla Firefox and SeaMonkey. This update to the layout engine brought significant improvements to security and usability, including support for favicons and multi-user environments. Some themes and macros from version 0.9 were still compatible with 1.0, although the macro system was updated. The macro system was updated further in K-Meleon 1.1, which was based on the Gecko 1.8.1 rendering engine that was used in Mozilla Firefox 2.0 and SeaMonkey 1.1. A true tabbed interface was introduced in version 1.5. 
Prior to this update, multiple web pages were only accessible within the same browser window using the included but optional "layers" plugin, which enabled a toolbar containing buttons representing each open page in a way that functionally mimicked tabbed browsing in every way other than appearance. These open pages were called "layers" instead of tabs. In 2010, K-Meleon was one of the twelve browsers offered to European Economic Area users of Microsoft Windows. As of 2012, the project was incorrectly reported as being on indefinite hold, presumably because Mozilla had stopped providing an embeddable version of the Gecko engine; in fact, development continued. In late 2013, the K-Meleon group began developing new versions based on Mozilla's XULRunner 24 runtime environment in place of the discontinued Gecko Runtime Environment. K-Meleon 74 was the first stable release to use updated versions of this environment. K-Meleon 75 was released in mid-2015 with a Mozilla 31 backend, new skin and toolbar implementation, spellcheck, and form autocompletion. K-Meleon 77 was planned for release in 2019 with a Mozilla 52 backend, a new Goanna engine, and some additional features. In the absence of new releases from the core team since December 2016, two unofficial versions have been developed that integrate bug fixes and other updates and enhancements, K-Meleon Pro and K-Meleon on Goanna, with the latter being updated on a regular basis and representing a major shift from the previous Gecko layout engine. K-Meleon has a highly flexible interface design. All the menus and toolbar buttons can be customized using text-format configuration files. This feature is useful in environments where the browser must be customized for general public use, such as in a public library or Internet café. 
Although individual toolbars can be repositioned, users must edit toolbar configuration files to make any changes to button layouts, as there is no graphical user interface (GUI) for customizing them. The use of the native Windows interface means that K-Meleon does not support Mozilla-formatted browser themes. Compatibility with Mozilla extensions is also limited, with only a few extensions that can be integrated. However, K-Meleon has its own plugins (called "kplugins") and browser themes (using Lim Chee Aun's "Phoenity" by default), which can extend the functionality and customize the appearance of the browser. There is also a macro plugin which allows users to extend the browser's functionality without having to know the C programming language.
https://en.wikipedia.org/wiki?curid=17439
Klaus Maria Brandauer Klaus Maria Brandauer (born Klaus Georg Steng; 22 June 1943) is an Austrian actor and director. He is also a professor at the Max Reinhardt Seminar. Brandauer is known internationally for his roles in "Mephisto" (1981), "Never Say Never Again" (1983), "Out of Africa" (1985), "Hanussen" (1988), "Burning Secret" (1988), and "Introducing Dorothy Dandridge" (1999). For his supporting role as Bror von Blixen-Finecke in the drama film "Out of Africa" (1985), Brandauer was nominated for an Academy Award and won a Golden Globe Award. Brandauer was born as Klaus Georg Steng in Bad Aussee, Austria. He is the son of Maria Brandauer and Georg Steng (or Stenj), a civil servant. He subsequently took his mother's first name as part of his professional name, Klaus Maria Brandauer. Brandauer began acting on stage in 1962. After working in national theatre and television, he made his film debut in English in 1972, in "The Salzburg Connection". In 1975 he appeared in "Derrick", in the season 2 episode "Pfandhaus". His starring, award-winning role as a self-absorbed actor in István Szabó's "Mephisto" (1981) launched his international career. Following his role in "Mephisto", Brandauer appeared as Maximillian Largo in "Never Say Never Again" (1983), a remake of the 1965 James Bond film "Thunderball". Roger Ebert said of his performance: "For one thing, there's more of a human element in the movie, and it comes from Klaus Maria Brandauer, as Largo. Brandauer is a wonderful actor, and he chooses not to play the villain as a cliché. Instead, he brings a certain poignancy and charm to Largo, and since Connery always has been a particularly human James Bond, the emotional stakes are more convincing this time." He starred in "Out of Africa" (1985), opposite Meryl Streep and Robert Redford, for which he was nominated for an Oscar and won a Golden Globe, and in Szabó's "Oberst Redl" (1985). 
In 1987, he was the Head of the Jury at the 37th Berlin International Film Festival. In 1988 he appeared in "Hanussen" opposite Erland Josephson and Ildikó Bánsági. Brandauer was originally cast as Marko Ramius in "The Hunt for Red October". That role eventually went to Oscar nominee Sean Connery, who had played James Bond to Brandauer's Largo in "Never Say Never Again" (1983). He co-starred with Connery again in "The Russia House" (1990). His other film roles have been in "The Lightship" (1986), "Streets of Gold" (1986), "Burning Secret" (1988), "White Fang" (1991), "Becoming Colette" (1992), "Introducing Dorothy Dandridge" (1999), and "Everyman's Feast" (2002). In 1989 he took part in the French television channel TF1's large-scale production for the bicentennial of the French Revolution, "La Révolution française", playing the role of Georges Danton. Brandauer's first work as a film director was, in 1989, "", with himself in the title role. In August 2006, Brandauer's much-awaited production of "The Threepenny Opera" gained a mixed reception. Brandauer had resisted questions about how his production of Bertolt Brecht and Kurt Weill's classic musical comedy about the criminal MacHeath would differ from earlier versions; his production featured Mack the Knife in a three-piece suit and white gloves, stuck to Brecht's text, and avoided any references to contemporary politics or issues. Brandauer has at least a working knowledge of five languages: German, Italian, Hungarian, English and French, and has acted in each. He was married to his first wife, Karin Katharina Müller (14 October 1945 – 13 November 1992), an Austrian film and television director and screenwriter, from 1963 until her death from cancer in 1992, aged 47. Both were teenagers when they married. They had one son, Christian. Brandauer married Natalie Krenn in 2007.
https://en.wikipedia.org/wiki?curid=17440
Boeing KC-135 Stratotanker The Boeing KC-135 Stratotanker is a military aerial refueling aircraft that was developed from the Boeing 367-80 prototype, alongside the Boeing 707 airliner. It is the predominant variant of the C-135 Stratolifter family of transport aircraft. The KC-135 was the US Air Force's first jet-powered refueling tanker and replaced the KC-97 Stratofreighter. The KC-135 was initially tasked with refueling strategic bombers, but it was used extensively in the Vietnam War and later conflicts such as Operation Desert Storm to extend the range and endurance of US tactical fighters and bombers. The KC-135 entered service with the United States Air Force (USAF) in 1957; it is one of six military fixed-wing aircraft with over 50 years of continuous service with its original operator. The KC-135 is supplemented by the larger KC-10. Studies have concluded that many of the aircraft could be flown until 2030, although maintenance costs have greatly increased. The KC-135 is to be partially replaced by the Boeing KC-46 Pegasus. Like its sibling, the commercial Boeing 707 jet airliner, the KC-135 was derived from the Boeing 367-80 jet transport "proof of concept" demonstrator, which was commonly called the "Dash-80". The KC-135 is similar in appearance to the 707, but has a narrower fuselage and is shorter than the 707. The KC-135 predates the 707, and is structurally quite different from the civilian airliner. Boeing gave the future KC-135 tanker the initial designation Model 717. In 1954 USAF's Strategic Air Command (SAC) held a competition for a jet-powered aerial refueling tanker. Lockheed's tanker version of the proposed Lockheed L-193 airliner with rear fuselage-mounted engines was declared the winner in 1955. Since Boeing's proposal was already flying, the KC-135 could be delivered two years earlier and Air Force Secretary Harold E. Talbott ordered 250 KC-135 tankers until Lockheed's design could be manufactured. 
In the end, orders for the Lockheed tanker were dropped rather than supporting two tanker designs. Lockheed never produced its jet airliner, while Boeing would eventually dominate the market with a family of airliners based on the 707. In 1954, the Air Force placed an initial order for 29 KC-135As, the first of an eventual 820 of all variants of the basic C-135 family. The first aircraft flew in August 1956 and the initial production Stratotanker was delivered to Castle Air Force Base, California, in June 1957. The last KC-135 was delivered to the Air Force in 1965. Developed in the early 1950s, the basic airframe is characterized by 35-degree aft swept wings and tail, four underwing-mounted engine pods, a horizontal stabilizer mounted on the fuselage near the bottom of the vertical stabilizer with positive dihedral on the two horizontal planes, and a high-frequency radio antenna which protrudes forward from the top of the vertical fin or stabilizer. These basic features make it strongly resemble the commercial Boeing 707 and 720 aircraft, although it is actually a different aircraft. Reconnaissance and command post variants of the aircraft, including the RC-135 Rivet Joint and EC-135 Looking Glass aircraft, were operated by SAC from 1963 through 1992, when they were reassigned to the Air Combat Command (ACC). The USAF EC-135 Looking Glass was subsequently replaced in its role by the U.S. Navy E-6 Mercury aircraft, a new-build airframe based on the Boeing 707-320B. All KC-135s were originally equipped with Pratt & Whitney J57-P-59W turbojet engines, which produced of thrust dry, and approximately of thrust wet. Here, wet thrust is achieved through the use of water injection on takeoff, as distinct from the "wet thrust" of an afterburning engine. of water are injected into the engines over the course of three minutes. The water is injected into the inlet and the diffuser case in front of the combustion case. 
The water cools the air in the engine to increase its density; it also reduces the turbine gas temperature, which is a primary limitation on many jet engines. This allows the use of more fuel for proper combustion and creates more thrust for short periods of time, similar in concept to "War Emergency Power" in a piston-engined aircraft. In the 1980s the first modification program retrofitted 157 Air Force Reserve (AFRES) and Air National Guard (ANG) tankers with the Pratt & Whitney TF33-PW-102 turbofan engines from 707 airliners retired in the late 1970s and early 1980s. The modified tanker, designated the KC-135E, was 14% more fuel-efficient than the KC-135A and could offload 20% more fuel on long-duration flights. Only the KC-135E aircraft were equipped with thrust-reversers for aborted takeoffs and shorter landing roll-outs. The KC-135E fleet has since either been retrofitted as the R-model configuration or placed into long-term storage ("XJ"), as Congress has prevented the Air Force from formally retiring them. The final KC-135E, tail number "56-3630", was delivered by the 101st Air Refueling Wing of the Maine Air National Guard to the 309th Aerospace Maintenance and Regeneration Group (AMARG) at Davis–Monthan Air Force Base in September 2009. The second modification program retrofitted 500 aircraft with new CFM International CFM56 (military designation: F108) high-bypass turbofan engines produced by General Electric and Snecma. The CFM56 engine produces approximately of thrust, nearly a 100% increase compared to the original J57 engine. The modified tanker, designated KC-135R (modified KC-135A or E) or KC-135T (modified KC-135Q), can offload up to 50% more fuel (on a long-duration sortie), is 25% more fuel-efficient, and costs 25% less to operate than with the previous engines. It is also significantly quieter than the KC-135A, with noise levels at takeoff reduced from 126 to 99 decibels. 
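As a rough arithmetic check (illustrative, not from the source), the quoted noise figures can be converted into a sound-power ratio, since the decibel scale is logarithmic and each 10 dB corresponds to a tenfold change in acoustic power:

```python
# Reported takeoff noise levels: KC-135A vs. re-engined KC-135R, in decibels
db_before, db_after = 126, 99

# A difference of N dB corresponds to a power ratio of 10^(N/10)
power_ratio = 10 ** ((db_before - db_after) / 10)
print(round(power_ratio))  # prints 501
```

On this reading, the 27 dB reduction corresponds to roughly a 500-fold drop in radiated acoustic power, which is why the re-engined aircraft is described as significantly quieter.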
The KC-135R's operational range is 60% greater than the KC-135E's for comparable fuel offloads, providing a wider range of basing options. Upgrading the remaining KC-135Es into KC-135Rs, an option no longer under consideration, would have cost about US$3 billion, about $24 million per aircraft. According to Air Force data, the KC-135 fleet had a total operation and support cost in fiscal year 2001 of about $2.2 billion. The older E model aircraft averaged total costs of about $4.6 million per aircraft, while the R models averaged about $3.7 million per aircraft. Those costs include personnel, fuel, maintenance, modifications, and spare parts. In order to expand the KC-135's capabilities and improve its reliability, the aircraft has undergone a number of avionics upgrades. Among these was the Pacer-CRAG program (compass, radar and GPS), which ran from 1999 to 2002 and modified all the aircraft in the inventory to eliminate the navigator position from the flight crew. The fuel management system was also replaced. The program development was done by Rockwell Collins in Iowa and installation was performed by BAE Systems at the Mojave Airport in California. Block 40.6 allows the KC-135 to comply with global air-traffic management requirements. The latest block upgrade to the KC-135, the Block 45 program, is underway, with the first 45 upgraded aircraft delivered by January 2017. Block 45 adds a new glass cockpit digital display, radio altimeter, digital autopilot, digital flight director and computer updates. The original, no longer procurable, analog instruments, including all engine gauges, were replaced. Rockwell Collins again supplied the major avionic modules and the modification work is being done at Tinker AFB. The KC-135Q variant was modified to carry JP-7 fuel necessary for the Lockheed SR-71 Blackbird by separating the JP-7 from the KC-135's own fuel supply (the body tanks carrying JP-7, and the wing tanks carrying JP-4 or JP-8). 
The tanker also had special fuel systems for moving the different fuels between different tanks. When the KC-135Q model received the CFM56 engines, it was redesignated the KC-135T model, which was capable of separating the main body tanks from the wing tanks, from which the KC-135 draws its engine fuel. The only external difference between a KC-135R and a KC-135T is the presence of a clear window on the underside of the empennage of the KC-135T, where a remote-controlled searchlight is mounted. It also has two ground refueling ports, located in each rear wheel well, so ground crews can fuel the body tanks and wing tanks separately. Eight KC-135R aircraft are receiver-capable tankers, commonly referred to as KC-135R(RT). All eight aircraft were with the 22d Air Refueling Wing at McConnell AFB, Kansas, in 1994. They are primarily used for force extension and Special Operations missions, and are crewed by highly qualified receiver-capable crews. If not used for the receiver mission, these aircraft can be flown just like any other KC-135R. The Multi-point Refueling Systems (MPRS) modification adds refueling pods to the KC-135's wings. The pods allow refueling of U.S. Navy, U.S. Marine Corps and most NATO tactical jet aircraft while keeping the tail-mounted refueling boom. The pods themselves are Flight Refueling Limited (FRL) MK.32B model pods, and refuel via the probe-and-drogue method common to USN/USMC tactical jets, rather than the primary "flying boom" method used by USAF fixed-wing aircraft. This allows the tanker to refuel two receivers at the same time, which increases throughput compared to the boom-drogue adapter. A number of KC-135A and KC-135B aircraft have been modified to EC-135, RC-135 and OC-135 configurations for use in several different roles (although these could also be considered variants of the C-135 Stratolifter family). The KC-135R has four turbofan engines, mounted under 35-degree swept wings, which power it to takeoffs at gross weights up to . 
Nearly all internal fuel can be pumped through the tanker's flying boom, the KC-135's primary fuel transfer method. A special shuttlecock-shaped drogue, attached to and trailing behind the flying boom, may be used to refuel aircraft fitted with probes. This apparatus is significantly more unforgiving of pilot error in the receiving aircraft than conventional trailing hose arrangements; an aircraft so fitted is also incapable of refueling by the normal flying boom method until the attachment is removed. A boom operator stationed in the rear of the aircraft controls the boom while lying prone. A cargo deck above the refueling system can hold a mixed load of passengers and cargo. Depending on fuel storage configuration, the KC-135 can carry up to of cargo. The KC-135 was initially purchased to support bombers of the Strategic Air Command, but by the late 1960s, in the Southeast Asia theater, the KC-135 Stratotanker's ability as a force multiplier came to the fore. Midair refueling of F-105 and F-4 fighter-bombers as well as B-52 bombers brought far-flung bombing targets within reach, and allowed fighter missions to spend hours at the front rather than the few minutes their limited fuel reserves and high fuel consumption would otherwise permit. KC-135 crews refueled both Air Force and Navy / Marine Corps aircraft, though they had to change to probe-and-drogue adapters depending upon the mission, the Navy and Marine Corps not having fitted their aircraft with flying boom receptacles, since the USAF boom system was impractical for aircraft carrier operations. Crews also helped to bring in damaged aircraft, particularly those with punctured fuel tanks, which could sometimes stay airborne while being continuously fed fuel until they reached a landing site or ditched over water. KC-135s continued in this tactical support role in later conflicts such as Operation Desert Storm, and the type remains part of current aerial refueling strategy. 
The Strategic Air Command (SAC) had the KC-135 Stratotanker in service with Regular Air Force SAC units from 1957 through 1992 and with SAC-gained Air National Guard (ANG) and Air Force Reserve (AFRES) units from 1975 through 1992. Following a major USAF reorganization that resulted in the inactivation of SAC in 1992, most KC-135s were reassigned to the newly created Air Mobility Command (AMC). While AMC gained the preponderance of the aerial refueling mission, a small number of KC-135s were also assigned directly to United States Air Forces in Europe (USAFE), Pacific Air Forces (PACAF) and the Air Education and Training Command (AETC). All Air Force Reserve Command (AFRC) KC-135s and most of the Air National Guard (ANG) KC-135 fleet became operationally-gained by AMC, while Alaska Air National Guard and Hawaii Air National Guard KC-135s became operationally-gained by PACAF. Air Mobility Command (AMC) manages 414 Stratotankers, of which the Air Force Reserve Command (AFRC) and Air National Guard (ANG) fly 247 in support of AMC's mission as of May 2014. The KC-135 is one of a few military aircraft types with over 50 years of continuous service with its original operator as of 2009. Israel was offered KC-135s again in 2013, after turning down the aging aircraft twice due to expense of keeping them flying. The IAF again rejected the offered KC-135Es, but said that it would consider up to a dozen of the newer KC-135Rs. Besides its primary role as an inflight aircraft refueler, the KC-135, designated NKC-135, has assisted in several research projects at the NASA Armstrong Flight Research Center at Edwards Air Force Base, California. One such project occurred between 1979 and 1980 when special wingtip "winglets", developed by Richard Whitcomb of the Langley Research Center, were tested at Armstrong, using an NKC-135A tanker loaned to NASA by the Air Force. Winglets are small, nearly vertical fins installed on an aircraft's wing tips. 
The results of the research showed that drag was reduced and range could be increased by as much as 7 percent at cruise speeds. Winglets are now being incorporated into most new commercial and military transport/passenger jets, as well as business aviation jets. NASA also has operated several KC-135 aircraft (without the tanker equipment installed) as its famed Vomit Comet zero-gravity simulator aircraft. The longest-serving (1973 to 1995) version was KC-135A, AF Ser. No. "59-1481", named "Weightless Wonder IV" and registered as N930NA. Between 1993 and 2003, the amount of KC-135 depot maintenance work doubled, and the overhaul cost per aircraft tripled. In 1996, it cost $8,400 per flight hour for the KC-135, and in 2002 this had grown to $11,000. The Air Force's 15-year estimates project further significant cost growth through fiscal year 2017. KC-135 fleet operations and support costs are estimated to grow from about $2.2 billion in fiscal year 2003 to $5.1 billion (2003 dollars) in fiscal year 2017, an increase of over 130 percent, which represents an annual growth rate of about 6.2 percent. The Air Force projected that E and R models have lifetime flying hour limits of 36,000 and 39,000 hours, respectively. According to the Air Force, only a few KC-135s would reach these limits by 2040, when some aircraft would be about 80 years old. A later 2005 Air Force study estimated that KC-135Es upgraded to the R standard could remain in use until 2030. In 2006, the KC-135E fleet was flying an annual average of 350 hours per aircraft and the KC-135R fleet was flying an annual average of 710 hours per aircraft. The KC-135 fleet is currently flying double its planned yearly flying hour program to meet airborne refueling requirements, which has resulted in higher than forecast usage and sustainment costs. In March 2009, the Air Force indicated that KC-135s would require additional skin replacement to allow their continued use beyond 2018. 
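The cost-growth figures quoted above are internally consistent, as a quick compound-growth calculation shows (the dollar amounts are taken from the text; the formula is the standard one for annualized growth):

```python
# Projected KC-135 fleet operations and support costs, billions of FY2003 dollars
cost_2003, cost_2017 = 2.2, 5.1
years = 2017 - 2003

total_increase = (cost_2017 - cost_2003) / cost_2003        # over 130%
annual_growth = (cost_2017 / cost_2003) ** (1 / years) - 1  # about 6.2% per year

print(f"{total_increase:.0%} total, {annual_growth:.1%} per year")
# prints: 132% total, 6.2% per year
```

The roughly 132% total increase over 14 years compounds to about 6.2% per year, matching the Air Force estimate cited above.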
The USAF decided to replace the KC-135 fleet. However, the KC-135 fleet is large and will need to be replaced gradually. Initially the first batch of replacement planes was to be an air tanker version of the Boeing 767, leased from Boeing. In 2003, this was changed to a contract under which the Air Force would purchase 80 KC-767 aircraft and lease 20 more. In December 2003, the Pentagon froze the contract, and in January 2006 the KC-767 contract was canceled. This followed public revelations of corruption in how the contract was awarded, as well as controversy over the original plan to lease rather than purchase the aircraft outright. Then-Secretary of Defense Donald Rumsfeld stated that this move would in no way impair the Air Force's ability to deliver the KC-767's intended mission, which would be accomplished by continuing upgrades to the KC-135 and KC-10 Extender fleets. In January 2007, the U.S. Air Force formally launched the KC-X program with a request for proposal (RFP). KC-X is the first phase of three acquisition programs to replace the KC-135 fleet. On 29 February 2008, the US Defense Department announced that it had selected the EADS/Northrop Grumman "KC-30" (to be designated the KC-45A) over the Boeing KC-767. Boeing protested the award on 11 March 2008, citing irregularities in the competition and bid evaluation. On 18 June 2008, the US Government Accountability Office sustained Boeing's protest of the selection of the Northrop Grumman/EADS tanker. In February 2010, the US Air Force restarted the KC-X competition with the release of a revised request for proposal (RFP). After evaluating bids, the USAF selected Boeing's 767-based tanker design, with the military designation KC-46, as a replacement in February 2011. The first KC-46A Pegasus was delivered to the U.S. Air Force on 10 January 2019. As of April 2020, plans called for retiring KC-135s as they are replaced by KC-46s, but the KC-46 has been challenged by development delays that have left it non-operational. 
These delays could leave a gap in the USAF's future ability to meet operational refueling requirements. Two foreign users of the KC-135, the French Air Force and the Republic of Singapore Air Force, are taking deliveries of Airbus A330 MRTTs as replacements for their Stratotankers. Original production version powered by four Pratt & Whitney J57s, 732 built. Given the Boeing model numbers 717-100A, 717-146 and 717-148. Test-configured KC-135A. Airborne command post version equipped with turbofan engines, 17 built. Provided with in-flight refueling capability and redesignated EC-135C. Given the model number 717-166. All four RC-135As ("Pacer Swan") were modified to partial KC-135A configuration in 1979. The four aircraft (serial numbers "63-8058, 63-8059, 63-8060" and "63-8061") were given the unique designation KC-135D as they differed from the KC-135A in that they were built with a flight engineer's position on the flight deck. The flight engineer's position was removed when the aircraft were modified to KC-135 standards, but they retained their electrically powered wing flap secondary (emergency) drive mechanism and second air conditioning pack, which had been used to cool the RC-135A's on-board photo-mapping systems. They were later re-engined with Pratt & Whitney TF33 engines and given a cockpit update to KC-135E standards in 1990, and were retired to the 309th AMARG at Davis-Monthan AFB, AZ in 2007. Air National Guard and Air Force Reserve KC-135As re-engined with Pratt & Whitney TF33-PW-102 engines from retired 707 airliners (161 modified). All E-model aircraft were retired to the 309th AMARG at Davis-Monthan AFB by September 2009 and replaced with R models. Test-configured KC-135E. 55-3132 NKC-135E "Big Crow I" and 63-8050 NKC-135B "Big Crow II" were used as airborne targets for the Boeing YAL-1 Airborne Laser carrier aircraft. KC-135As modified to carry the JP-7 fuel necessary for the SR-71 Blackbird, 56 modified; survivors became KC-135Ts. 
Four JC/KC-135As converted to "Rivet Stand" (later "Rivet Quick") configuration for reconnaissance and evaluation of above-ground nuclear tests (55-3121, 59-1465, 59-1514, 58-0126; 58-0126 replaced 59-1465 after it crashed in 1967). These aircraft were powered by Pratt & Whitney J57 engines and were based at Offutt AFB, Nebraska. KC-135As and some KC-135Es re-engined with CFM56 engines, at least 361 converted. Receiver-capable KC-135R Stratotanker; eight modified with either a Boeing or LTV receiver system and a secure-voice SATCOM radio. Three of the aircraft (60-0356, -0357, and -0362) were converted to tankers from RC-135Ds, from which they retained their added equipment. KC-135Q re-engined with CFM56 engines, 54 modified. A new-build variant for France as a dual-role tanker/cargo and troop-carrier aircraft. 12 were built for the French Air Force with the addition of a drogue adapter on the refueling boom. Given Boeing model numbers 717-164 and 717-165. 11 surviving C-135Fs upgraded with CFM International F108 turbofans between 1985 and 1988. Later modified with MPRS wing pods. An airborne command post modified in 1984 to support CINCCENT. Aircraft 55-3125 was the only EC-135Y. Unlike its sister EC-135N, it was a true tanker that could also receive in-flight refueling. Pratt & Whitney TF33-PW-102. Retired to the 309th AMARG at Davis-Monthan AFB, AZ. Note: Italy has been reported in some sources as operating several KC-135s; however, these are actually Boeing 707-300s converted to tanker configuration. As of 2020, 52 Stratotankers had been lost to accidents during the type's more than sixty years of service, involving 385 fatalities.
https://en.wikipedia.org/wiki?curid=17441
Katsuhiro Otomo Katsuhiro Otomo was born in Tome, Miyagi Prefecture and grew up in Tome-gun. While he was in high school, he was fascinated with movies, often taking a three-hour train ride during school holidays just to see them. In 1973 he graduated from high school and left Miyagi, heading to Tokyo with hopes of becoming a manga artist. On October 4, 1973, he published his first work, a manga adaptation of Prosper Mérimée's short story "Mateo Falcone", titled "A Gun Report". In 1979, after writing multiple short stories for the magazine "Action", Otomo created his first science-fiction work, titled "Fireball". Although the manga was never completed, it is regarded as a milestone in Otomo's career, as it contained many of the same themes he would explore in his later, more successful manga. "Dōmu" began serialization in January 1980 and ran for two years until its completion. In 1983, it was published in book form and went on to win the Nihon SF Taisho Award, the Japanese equivalent of the Nebula Award. In 1982, Otomo made his anime debut, working as character designer for the animated film "Harmagedon". The next year, Otomo began work on the manga that would become his most acclaimed and famous work: "Akira". It took eight years to complete and would eventually culminate in 2,000 pages of artwork. In 1987, Otomo continued working in anime, directing an animated work for the first time: a segment of the anthology feature "Neo Tokyo", for which he also wrote the screenplay and drew animation. He followed this up with two segments in another anthology, "Robot Carnival". While the serialization of "Akira" was taking place, Otomo decided to adapt it into an animated feature film, although the comic was yet to be finished. In 1988, the animated film "Akira" was released. In 1990, Otomo did a brief interview with MTV for a general segment on the Japanese manga scene at the time. Otomo has recently worked extensively with the noted studio Sunrise. 
The studio has animated and produced his recent projects, including the 2004 feature film "Steamboy", 2006's "Freedom Project" and his latest project, released in 2007. Reports have suggested that Otomo will be the executive producer of the live-action adaptation of his manga series "Akira". In a 2012 interview, Otomo said he would start a new manga series, set during Japan's Meiji period (late 1800s to early 1900s). It will be his first long-form work since "Akira". In 2013, Otomo released "Short Peace", his first film in the nine years since "Steamboy": an anthology consisting of four shorts. His own short, "Combustible", based on one of his stories, is a tragic love story set in the Edo period; "Tsukumo", directed by Shuhei Morita, sees everyday tools metamorphose into supernatural things; "Gambo", directed by Hiroaki Ando, features a battle between an oni goblin and a polar bear; and "Buki yo Saraba", directed by Hajime Katoki, depicts a battle in a ruined Tokyo. "Combustible" won the Grand Prize of the Cultural Affairs Agency's Japan Media Arts Festival animation awards in 2012, and was shortlisted for Best Animated Short at the 85th Academy Awards in 2013, but was not nominated. "Tsukumo", under the title "Possessions", was nominated for Best Animated Short at the 86th Academy Awards in 2014. Katsuhiro Otomo was later nominated for Angoulême's top prize for 2014 (Grand Prix de la ville d'Angoulême). Besides his own animation, Otomo has contributed his art to anime as varied as the "Genma Taisen" movie, "Harmagedon", the "Crusher Joe" movie, a special Gundam anniversary short film, the seven-part OVA series "Freedom Project", and "Space Dandy" episode 22. Otomo is married to Yoko Otomo. Together they have one child, a son named Shohei Otomo, who is also an artist.
https://en.wikipedia.org/wiki?curid=17442
Kate Bush Catherine Bush (born 30 July 1958) is an English singer, dancer, songwriter, and record producer. In 1978, aged 19, she topped the UK Singles Chart for four weeks with her debut single "Wuthering Heights", becoming the first female artist to achieve a UK number one with a self-written song. She has since released 25 UK Top 40 singles, including the Top 10 hits "The Man with the Child in His Eyes", "Babooshka", "Running Up That Hill", "Don't Give Up" (a duet with Peter Gabriel), and "King of the Mountain". All 10 of her studio albums reached the UK Top 10, including the UK number-one albums "Never for Ever" (1980), "Hounds of Love" (1985), and the compilation "The Whole Story" (1986). She was the first British solo female artist to top the UK album charts and the first female artist to enter the album chart at number one. Bush began writing songs at 11. She was signed to EMI Records after Pink Floyd guitarist David Gilmour helped produce a demo tape. Her debut album, "The Kick Inside", was released in 1978. Bush slowly gained artistic independence in album production and has produced all her studio albums since "The Dreaming" (1982). She took a hiatus between her seventh and eighth albums, "The Red Shoes" (1993) and "Aerial" (2005). She drew attention again in 2014 with her concert residency Before the Dawn, her first shows since 1979's The Tour of Life. Bush's eclectic and experimental musical style, unconventional lyrics, and literary themes have influenced a diverse range of artists. She has been nominated for 13 British Phonographic Industry accolades, winning Best British Female Artist in 1987, and has been nominated for three Grammy Awards. In 2002, she was recognised with an Ivor Novello Award for Outstanding Contribution to British Music. In October 2017 she was nominated for 2018 induction into the Rock and Roll Hall of Fame. 
Bush was appointed Commander of the Order of the British Empire (CBE) in the 2013 New Year Honours for services to music. Bush was born in Bexleyheath, Kent, to an English father, general practitioner Robert Bush (1920–2008), and an Irish mother, Hannah (1918–1992), née Daly, a staff nurse and the daughter of a farmer in County Waterford. She was raised as a Roman Catholic in their farmhouse in East Wickham, an urban village in the neighbouring town of Welling, with her elder brothers, John and Paddy. Bush came from an artistic background: her mother was an amateur traditional Irish dancer, her father was an amateur pianist, Paddy worked as a musical instrument maker, and John was a poet and photographer. Both brothers were involved in the local folk music scene. Bush trained at the Goldsmiths College karate club, where her brother John was a karateka. There, she became known as "Ee-ee" because of her squeaky kiai. Her family's musical influence inspired Bush to teach herself the piano at the age of 11. She also played the organ in a barn behind her parents' house and studied the violin. She soon began composing songs, eventually adding her own lyrics. Bush attended St Joseph's Convent Grammar School, a Catholic girls' school in nearby Abbey Wood which, in 1975, after she had left, became part of St Mary's and St Joseph's School in Sidcup. During this time her family produced a demo tape with over 50 of her compositions, which was turned down by record labels. Pink Floyd guitarist David Gilmour received the demo from Ricky Hopper, a mutual friend of Gilmour and the Bush family. Impressed, Gilmour helped the sixteen-year-old Bush record a more professional demo tape. Three tracks in total were recorded and paid for by Gilmour. The tape was produced by Gilmour's friend Andrew Powell, who went on to produce Bush's first two albums, and sound engineer Geoff Emerick, who had worked with the Beatles. The tape was sent to EMI executive Terry Slater, who signed her. 
The British record industry was reaching a point of stagnation. Progressive rock was very popular, and visually oriented rock performers were growing in popularity, so record labels looking for the next big thing were willing to consider experimental acts. Bush was put on retainer for two years by Bob Mercer, managing director of EMI's group-repertoire division. According to Mercer, he felt Bush's material was good enough to release, but worried that if the album failed it would be demoralising, and that if it succeeded Bush was too young to handle it. However, in a 1987 interview, Gilmour disputed this version of events, blaming EMI for initially using the "wrong" producers. After the contract signing, EMI gave her a large advance, which she used to enroll in interpretive dance classes taught by Lindsay Kemp, a former teacher of David Bowie, and mime training with Adam Darius. For the first two years of her contract, Bush spent more time on schoolwork than recording. She left school after taking her mock A-levels, having gained ten GCE O-Level qualifications. Bush wrote and made demos of almost 200 songs, some of which circulated as bootlegs. From March to August 1977, she fronted the KT Bush Band at public houses in London. The band included Del Palmer (bass), Brian Bath (guitar), and Vic King (drums). She began recording her first album in August 1977. For her début album, "The Kick Inside" (1978), Bush was persuaded to use established session musicians instead of the KT Bush Band. She retained some of these even after she had brought her bandmates back on board. Her brother Paddy played the harmonica and mandolin. Stuart Elliott played some of the drums and became her main drummer on subsequent albums. "The Kick Inside" was released when Bush was 19, with some songs written when she was as young as 13. 
EMI originally wanted the more rock-oriented track "James and the Cold Gun" to be her debut single, but Bush, who already had a reputation for asserting herself in decisions about her work, insisted that it should be "Wuthering Heights". In the United Kingdom alone, "The Kick Inside" sold over a million copies. "Wuthering Heights" topped the UK and Australian charts and became an international hit. Bush became the first British woman to reach number one on the UK charts with a self-written song. A second single, "The Man with the Child in His Eyes", reached number six in the UK charts. It also made the American "Billboard" Hot 100, where it reached number 85 in early 1979, and went on to win her an Ivor Novello Award in 1979 for Outstanding British Lyric. According to "Guinness World Records", Bush was the first female artist in pop history to have written every track on a million-selling debut album. Bob Mercer blamed Bush's lesser success in the United States on American radio formats, saying there were no outlets for Bush's visual presentation. EMI capitalised on Bush's appearance by promoting the album with a poster of her in a tight pink top that emphasised her breasts. In an interview with "NME" in 1982, Bush criticised the choice: "People weren't even generally aware that I wrote my own songs or played the piano. The media just promoted me as a female body. It's like I've had to prove that I'm an artist in a female body." In late 1978, EMI persuaded Bush to quickly record a follow-up album, "Lionheart", to take advantage of the success of "The Kick Inside". The album was produced by Andrew Powell, assisted by Bush. While it sold well and spawned the hit single "Wow", it did not match the success of "The Kick Inside", reaching number six in the UK album charts. She went on to express dissatisfaction with "Lionheart", feeling that it had needed more time. 
Bush set up her own publishing company, Kate Bush Music, and her own management company, Novercia, to maintain control of her work. Members of her family, along with Bush herself, composed the board of directors. Following the release of "Lionheart", she was required by EMI to undertake heavy promotional work and an exhausting tour. The Tour of Life began in April 1979 and lasted six weeks. It was described by "The Guardian" as "an extraordinary, hydra-headed beast, combining music, dance, poetry, mime, burlesque, magic and theatre". The show was co-devised and performed on stage with magician Simon Drake. Bush was involved in every aspect of the production: choreography, set design, costume design and hiring. The shows were noted for her dancing, complex lighting and her 17 costume changes per show. Because of her need to dance as she sang, sound engineers used a wire coat hanger and a radio microphone to fashion a headset microphone; it was the first used by a rock performer since the Spotnicks used a rudimentary version in the early 1960s. Released in September 1980, "Never for Ever" was Bush's second foray into production, this time co-producing with Jon Kelly. Her first experience as a producer had been on her "Live on Stage" EP, released after her tour the previous year. The first two albums had a definitive sound evident in every track, with orchestral arrangements supporting the live band sound. The range of styles on "Never for Ever" is much more diverse, veering from the straightforward rocker "Violin" to the wistful waltz of hit single "Army Dreamers". "Never for Ever" was her first album to feature synthesisers and drum machines, in particular the Fairlight CMI, to which she was introduced when providing backing vocals on Peter Gabriel's eponymous third album in early 1980. 
It was her first record to reach the top position in the UK album charts, also making her the first female British artist to achieve that status, and the first female artist ever to enter the album chart at the top. The top-selling single from the album was "Babooshka", which reached number five in the UK singles chart. In November 1980, she released the standalone Christmas single "December Will Be Magic Again", which reached number 29 in the UK charts. September 1982 saw the release of "The Dreaming", the first album Bush produced by herself. With her new-found freedom, she experimented with production techniques, creating an album that features a diverse blend of musical styles and is known for its near-exhaustive use of the Fairlight CMI. "The Dreaming" received a mixed reception in the UK, with critics baffled by the dense soundscapes Bush had created and finding the album "less accessible". In a 1993 interview with "Q" magazine, Bush stated: "That was my 'She's gone mad' album." However, the album became her first to enter the US "Billboard" 200 chart, albeit only reaching number 157. The album entered the UK album chart at number three, but is to date her lowest-selling album, garnering "only" a silver disc. "Sat in Your Lap" was the first single from the album to be released. It pre-dated the album by over a year and peaked at number 11 in the UK. The title track, featuring Rolf Harris and Percy Edwards, stalled at number 48, while the third single, "There Goes a Tenner", stalled at number 93, despite promotion from EMI and Bush. The track "Suspended in Gaffa" was released as a single in Europe, but not in the UK. Continuing in her storytelling tradition, Bush looked far outside her own personal experience for sources of inspiration. She drew on old crime films for "There Goes a Tenner", a documentary about the Vietnam War for "Pull Out the Pin", and the plight of Indigenous Australians for "The Dreaming". 
"Houdini" is about the magician's death, and "Get Out of My House" was inspired by Stephen King's novel "The Shining". "Hounds of Love" was released in 1985. Because of the high cost of hiring studio space for her previous album, she built a private studio near her home, where she could work at her own pace. "Hounds of Love" ultimately topped the charts in the UK, knocking Madonna's "Like a Virgin" from the number-one position. The album takes advantage of the vinyl and cassette formats with two very different sides. The first side, "Hounds of Love", contains five "accessible" pop songs, including the four singles "Running Up that Hill", "Cloudbusting", "Hounds of Love", and "The Big Sky". "Running Up that Hill" reached number three in the UK charts and re-introduced Bush to American listeners, climbing to number 30 on the "Billboard" Hot 100 in November 1985. The second side of the album, "The Ninth Wave", takes its name from Tennyson's poem "Idylls of the King", about the legendary King Arthur's reign, and consists of seven interconnecting songs joined in one continuous piece of music. The album earned Bush nominations for Best Female Solo Artist, Best Album, Best Single, and Best Producer at the 1986 BRIT Awards. In the same year, Bush and Peter Gabriel had a UK Top 10 hit with the duet "Don't Give Up" (Dolly Parton, Gabriel's original choice to sing the female vocal, had turned his offer down), and EMI released her "greatest hits" album, "The Whole Story". Bush provided a new lead vocal and refreshed backing track for "Wuthering Heights", and recorded a new single, "Experiment IV", for inclusion on the compilation. Dawn French and Hugh Laurie were among those featured in the video for "Experiment IV". At the 1987 BRIT Awards, Bush won the award for Best Female Solo Artist. Released in 1989, "The Sensual World" was described by Bush herself as "her most honest, personal album". 
One of the tracks, "Heads We're Dancing", inspired by her own black humour, is about a woman who dances all night with a charming stranger, only to discover in the morning that he is Adolf Hitler. The title track drew its inspiration from James Joyce's novel "Ulysses". "The Sensual World" went on to become her biggest-selling album in the US, receiving an RIAA Gold certification four years after its release for 500,000 copies sold. In the United Kingdom album charts, it reached the number-two position. Another single from the album, "This Woman's Work", was featured in the John Hughes film "She's Having a Baby", with a slightly remixed version appearing on the album. The song reached number eight on the UK download chart in 2005 after featuring in a British television advertisement for the charity NSPCC. In 1990, the boxed set "" was released; it included all of her albums with their original cover art, as well as two discs of all her singles' B-sides recorded from 1978 to 1990. In 1991, Bush released a cover of Elton John's "Rocket Man", which reached number 12 in the UK singles chart and number two in Australia. In 2007, it was voted the greatest cover ever by readers of "The Observer" newspaper. Another John cover, "Candle in the Wind", was the B-side. Also in 1991, she starred in the black comedy film "Les Dogs", produced by "The Comic Strip" for BBC television. Bush plays the bride Angela at a wedding set in a post-apocalyptic Britain. Bush's seventh studio album, "The Red Shoes", was released in November 1993. The album gave Bush her highest chart position in the US, reaching number 28, although the only song from the album to make the US singles chart was "Rubberband Girl", which peaked at number 88 in January 1994. In the UK, the album reached number two, and the singles "Rubberband Girl", "The Red Shoes", "Moments of Pleasure", and "And So Is Love" all reached the top 30. 
Bush directed and starred in the short film "The Line, the Cross and the Curve", which featured music from her album "The Red Shoes", itself inspired by the 1948 film of that name. It was released on VHS in the UK in 1994 and also received a small number of cinema screenings around the world. The initial plan had been to tour alongside "The Red Shoes" release, but this did not come to fruition. Bush had deliberately produced the tracks to be performable live, with less of the studio production that had typified her previous three albums and that would have been too difficult to re-create on stage. The result polarised her fan base: some missed the intricacy of her earlier compositions, while others claimed to have found new complexities in the lyrics and the emotions they expressed. During this period, Bush suffered a series of bereavements, including the loss of guitarist Alan Murphy, who had started working with her on The Tour of Life in 1979, and of her mother Hannah, to whom she was exceptionally close. The people she lost were honoured in the ballad "Moments of Pleasure". However, Bush's mother was still alive when "Moments of Pleasure" was written and recorded. Bush describes playing the song to her mother, who found the line in which Bush quotes her, "Every old sock meets an old shoe", hilarious and "couldn't stop laughing". After the release of "The Red Shoes", Kate Bush dropped out of the public eye. She had originally intended to take one year off, but despite working on material, twelve years passed before her next album release. Her name occasionally cropped up in the media with rumours of a new album release. The press often viewed her as an eccentric recluse, sometimes drawing a comparison with Miss Havisham from Charles Dickens's "Great Expectations". In 1998, Bush gave birth to Albert, known as "Bertie", fathered by guitarist Dan McIntosh, whom she married in 1992. In 2001, Bush was awarded a Q Award as Classic Songwriter. 
In 2002, she was awarded an Ivor Novello Award for Outstanding Contribution to Music, and performed "Comfortably Numb" at David Gilmour's concert at the Royal Festival Hall in London. Kate Bush's eighth studio album, "Aerial", was released on double CD and vinyl in November 2005. The album's single, "King of the Mountain", had its premiere on BBC Radio 2 two months prior. The single entered the UK Downloads Chart at number six, and would become Bush's third-highest-charting single ever in the UK, peaking at number four on the full chart. "Aerial" entered the UK albums chart at number three, and the US chart at number 48. Like "Hounds of Love" (1985), "Aerial" is divided into two sections, each with its own theme and mood. The first disc, subtitled "A Sea of Honey", features a set of unrelated themed songs, including "King of the Mountain"; "Bertie", a Renaissance-style ode to her son; and "Joanni", based on the story of Joan of Arc. In the song "π", Bush sings 117 digits of the number pi. The second disc, subtitled "A Sky of Honey", features one continuous piece of music describing the experience of 24 hours passing. "Aerial" earned Bush two nominations at the 2006 BRIT Awards, for Best British Female Solo Artist and Best British Album. In 2007, Bush was asked to write a song for "The Golden Compass" soundtrack that made reference to the lead character, Lyra Belacqua. The song, "Lyra", was used in the closing credits of the film, reached number 187 in the UK Singles Chart, and was nominated for the International Press Academy's Satellite Award for original song in a motion picture. According to Del Palmer, Bush was asked to compose the song on short notice and the project was completed in 10 days. In May 2011, Bush released the album "Director's Cut", comprising 11 reworked tracks from "The Sensual World" and "The Red Shoes", recorded using analogue rather than digital equipment. All the tracks have new lead vocals, drums, and instrumentation. 
Some were transposed to a lower key to accommodate her deeper voice. Three of the songs, including "This Woman's Work", were completely rerecorded, with lyrics changed in places. Bush described the album as a new project rather than a collection of remixes. It was the first album on her new label, "Fish People", a division of EMI Records. In addition to "Director's Cut" in its single-CD form, the album was released in a box set containing the albums "The Sensual World" and the analogue-remastered "The Red Shoes". It debuted at number two on the United Kingdom chart. Bush's next studio album, "50 Words for Snow", was released on 21 November 2011. It features a high-profile cameo appearance by Elton John on the duet "Snowed in at Wheeler Street". The album contains seven new songs "set against a backdrop of falling snow", with a total running time of 65 minutes. The album's songs are built around Bush's quietly jazzy piano and Steve Gadd's drums, and utilise both sung and spoken-word vocals in what "Classic Rock" critic Stephen Dalton calls "a ... supple and experimental affair, with a contemporary chamber pop sound grounded in crisp piano, minimal percussion and light-touch electronics ... billowing jazz-rock soundscapes, interwoven with fragmentary narratives delivered in a range of voices from shrill to Laurie Anderson-style cooing". Bassist Danny Thompson appears on the album, which also features a performance by Stephen Fry. "50 Words for Snow" received general acclaim from music critics. At Metacritic, which assigns a normalised rating out of 100 to reviews from mainstream critics, the album received an average score of 88, based on 26 reviews, indicating "universal acclaim". She was nominated for a Brit Award in the Best Female Artist category, and the album won Best Album at the 2012 South Bank Arts Awards and was also nominated for Best Album at the Ivor Novello Awards. 
Bush turned down an invitation to perform at the 2012 Summer Olympics closing ceremony; instead, a new remix of her 1985 single "Running Up that Hill" was played. In 2013, Bush became the only female artist to have had top-five albums in the UK charts in five successive decades. In March 2014, Bush announced her first live concerts in decades: Before the Dawn, a 22-night residency in London running from 26 August to 1 October 2014 at the Hammersmith Apollo. Tickets sold out in 15 minutes. The concerts received positive reviews. An album of recordings from the concerts, "Before the Dawn", was released on 25 November 2016. Bolstered by publicity around "Before the Dawn", Bush became the first female performer to have eight albums in the UK Top 40 Albums Chart simultaneously, putting her at number three for simultaneous UK Top 40 albums. The only artists ahead of Bush were Elvis Presley, who had 12 entries in the top 40 after his death in 1977, and the Beatles, who had 11 in 2009. She had 11 albums in the top 50. On 6 December 2018, Bush published her first book, a compilation of lyrics, "How to Be Invisible: Selected Lyrics". In October 2018, Bush had announced two box sets of remasters of her studio albums, released on 16 and 30 November. Vocals from Rolf Harris, who was convicted of multiple sexual assault charges in 2014, were replaced by vocals from Bush's son Bertie. A compilation of rare tracks, cover versions and remixes, "The Other Sides", was released separately on 8 March 2019. It includes the previously unreleased track "Humming", recorded in 1975. Bush's musical aesthetic is eclectic: she is known to employ varied influences and to meld disparate styles, often within a single song or over the course of an album. Even in her earliest works, with piano the primary instrument, she wove together diverse influences, drawing on classical music, glam rock, and a wide range of ethnic and folk sources. This would continue throughout her career. 
By the time of "Never for Ever", Bush had begun to make prominent use of the Fairlight CMI synthesizer, which allowed her to sample and manipulate sounds, expanding her sonic palette. She has been compared with other "'arty' 1970s and '80s British pop rock artists" such as Roxy Music and Peter Gabriel. "The Guardian" called Bush "the queen of art-pop". Bush has a dramatic soprano vocal range. Her vocals contain elements of British, Anglo-Irish and most prominently (southern) English accents and, in its utilisation of musical instruments from various periods and cultures, her music has differed from American pop norms. Reviewers have used the term "surreal" to describe her music. Her songs explore melodramatic emotional and musical surrealism that defies easy categorisation. It has been observed that even her more joyous pieces are often tinged with traces of melancholy, and even the most sorrowful pieces have elements of vitality struggling against all that would oppress them. Elements of Bush's lyrics employ historical or literary references, as embodied in her first single "Wuthering Heights", which is based on Emily Brontë's novel of the same name. She has described herself as a storyteller who embodies the character singing the song and has dismissed efforts by others to conceive of her work as autobiographical. Bush's lyrics have been known to touch on obscure or esoteric subject matter, and "New Musical Express" noted that Bush was not afraid to tackle sensitive and taboo subjects in her work. "The Kick Inside" is based on a traditional English folk song ("The Ballad of Lucy Wan") about an incestuous pregnancy and a resulting suicide. "Kashka from Baghdad" is a song about a homosexual male couple; "Out" magazine listed two of her albums in their "Top 100 Greatest Gayest Albums" list. She has referenced G. I. 
Gurdjieff in the song "Them Heavy People", while "Cloudbusting" was inspired by Peter Reich's autobiography, "A Book of Dreams", about his relationship with his father, Wilhelm Reich. "Breathing" explores the results of nuclear fallout from the perspective of a foetus. Other non-musical sources of inspiration for Bush include horror films, which have influenced the gothic nature of her songs, such as "Hounds of Love", which samples the 1957 horror film "Night of the Demon". "The Infant Kiss" is a song about a haunted, unstable woman's paedophilic infatuation with a young boy in her care, inspired by Jack Clayton's film "The Innocents" (1961), which was based on Henry James's novella "The Turn of the Screw". Her songs have occasionally combined comedy and horror to form dark humour, such as murder by poisoning in "Coffee Homeground", an alcoholic mother in "Ran Tan Waltz" and the upbeat "The Wedding List", a song inspired by François Truffaut's 1967 film of Cornell Woolrich's "The Bride Wore Black", about the death of a groom and the bride's subsequent revenge against the killer. Bush has also cited comedy as a significant influence, naming Woody Allen, "Monty Python", "Fawlty Towers", and "The Young Ones" as particular favourites. Bush is regarded as the first artist to have a wireless headset microphone built for use in music. For her "Tour of Life" in 1979, she combined a compact microphone with a self-made frame of wire clothes hangers so that she did not have to hold a hand microphone, leaving her hands free to perform her rehearsed expressionist choreography on stage while singing. Her idea was later adopted by other artists such as Madonna and Peter Gabriel. Musicians who have cited Bush as an influence include Beverley Craven, Regina Spektor, Ellie Goulding, Charli XCX, Tegan and Sara, k.d.
lang, Paula Cole, Kate Nash, Bat for Lashes, Erasure, Alison Goldfrapp of Goldfrapp, Rosalía, Tim Bowness of No-Man, Chris Braide, Kyros, Aisles, Darren Hayes, Grimes, and Solange Knowles. Nerina Pallot was inspired to become a songwriter after seeing Bush play "This Woman's Work" on "Wogan". Coldplay took inspiration from "Running Up That Hill" to compose their single "Speed of Sound". In 2015, Adele stated that the release of her third studio album was inspired by Bush's 2014 comeback to the stage. In addition to those artists who state that Bush has been a direct influence on their own careers, other artists have been quoted expressing admiration for her work, including Tori Amos, Annie Lennox, Björk, Florence Welch, Little Boots, Elizabeth Fraser of Cocteau Twins, Dido, Sky Ferreira, St. Vincent, Lily Allen, Anohni of Antony and the Johnsons, Big Boi of OutKast, Stevie Nicks, Steven Wilson, Steve Rothery of Marillion, and André Matos. According to an unauthorised biography, Courtney Love of Hole listened to Bush among other artists as a teenager. Tricky wrote an article about "The Kick Inside", saying: "Her music has always sounded like dreamland to me... I don't believe in God, but if I did, her music would be my bible". Suede frontman Brett Anderson said of "Hounds of Love": "I love the way it's a record of two halves, and the second half is a concept record about fear of drowning. It's an amazing record to listen to really late at night, unsettling and really jarring". John Lydon, better known as Johnny Rotten of the Sex Pistols, declared her work to be "beauty beyond belief". Rotten once wrote a song for her, titled "Bird in Hand" (about the exploitation of parrots), which Bush rejected. Bush was one of the singers whom Prince thanked in the liner notes of 1991's "Diamonds and Pearls".
In December 1989, Robert Smith of The Cure chose "The Sensual World" as his favourite single of the year, "The Sensual World" as his favourite album of the year, and included "all of Kate Bush" plus other artists in his list of "the best things about the eighties". Kele Okereke of Bloc Party said about "Hounds of Love": "The first time I heard it I was sitting in a reclining sofa. As the beat started I was transported somewhere else. Her voice, the imagery, the huge drum sound: it seemed to capture everything for me. As a songwriter you're constantly chasing that feeling". Rufus Wainwright named Bush as one of his top ten gay icons. Outside music, Bush has been an inspiration to several fashion designers, including Hussein Chalayan. An asteroid has borne her name since 1998. In 2019, Pone, a former member of Fonky Family, released "Kate and me", an album created entirely from samples of Kate Bush's work. According to "The Guardian", it is the "first album in history to be entirely produced through an eye-tracking device". On its release, Pone declared Bush the greatest artist of the past 40 years. In 2020, "Grazia" magazine conducted an interview with UK Prime Minister Boris Johnson. When asked about the five most influential women in his life, Johnson placed Kate Bush at the fifth spot after deliberating between nominating Queen Elizabeth II, Margaret Thatcher, and Bush. Bush's only tour, the Tour of Life, ran for six weeks in May 1979, covering Britain and mainland Europe. The BBC suggested that she may have quit touring due to a fear of flying, or because of the death of a lighting engineer, Bill Duffield, who was killed in an accident during a warm-up concert. Mercer, who signed Bush to EMI, said touring was "just too hard ... I think [Bush] liked it but the equation didn't work ... I could see at the end of the show that she was completely wiped out." Bush described the tour as "enormously enjoyable" but "absolutely exhausting".
During the same period as the Tour of Life, Bush performed on television programs including "Top of the Pops" in the UK, "Bios Bahnhof" in Germany, and "Saturday Night Live" in the United States (performing "Them Heavy People" with Paul Shaffer on piano), which remains her only American television appearance. On 28 December 1979, BBC TV aired the "Kate Bush Christmas Special". Bush participated in the first benefit concert in aid of The Prince's Trust in July 1982, at which she sang "Breathing" for the first time. She performed live for British charity event Comic Relief in 1986, singing "Do Bears... ?", a humorous duet with Rowan Atkinson, and a rendition of "Breathing". In March 1987, Bush sang "Running Up That Hill" at The Secret Policeman's Third Ball accompanied by David Gilmour. She appeared with Gilmour again in 2002, singing the Pink Floyd song "Comfortably Numb" at the Royal Festival Hall in London. Bush returned to headline performance with a 22-night residency, Before the Dawn, which ran from 26 August to 1 October 2014 at the London Hammersmith Apollo. The set list encompassed most of "Hounds of Love" featuring the entire Ninth Wave suite, most of "Aerial", two songs from "The Red Shoes", and one song from "50 Words for Snow". Bush provided vocals on two of Peter Gabriel's albums, including the hits "Games Without Frontiers" and "Don't Give Up", as well as "No Self-Control". Gabriel appeared on Bush's 1979 television special, where they sang a duet of Roy Harper's "Another Day". She has sung on two Roy Harper tracks, "You", on his 1979 album, "The Unknown Soldier"; and "Once", the title track of his 1990 album. 
She has also sung on the title song of the 1986 Big Country album "The Seer"; the Midge Ure song "Sister and Brother" from his 1988 album "Answers to Nothing"; Go West's 1987 single "The King Is Dead"; and two songs with Prince – "Why Should I Love You?", from her 1993 album "The Red Shoes", and "My Computer" from Prince's 1996 album "Emancipation". In 1987, she sang a verse on the Beatles cover charity single "Let It Be" by Ferry Aid. She sang a line on the charity single "Spirit of the Forest" by Spirit of the Forest in 1989. In 1990, Bush produced a song for another artist for the only time to date: Alan Stivell's "Kimiad", for his album "Again". Stivell had appeared on "The Sensual World". In 1991, Kate Bush was invited to perform a cover of Elton John's 1972 song "Rocket Man" for an Elton John tribute album. In 2011, Elton John collaborated with Bush once again on "Snowed in at Wheeler Street" for her most recent album, "50 Words for Snow". In 1994, Bush covered George Gershwin's "The Man I Love" for the tribute album "The Glory of Gershwin". In 1996, Bush contributed a version of "Mná na hÉireann" (Irish for "Women of Ireland") to the Anglo-Irish folk-rock compilation project "Common Ground: The Voices of Modern Irish Music". Bush had to sing the song in Irish, which she learned to do phonetically. Artists who have contributed to Bush's own albums include Elton John, Eric Clapton, Jeff Beck, David Gilmour, Nigel Kennedy, Gary Brooker, Danny Thompson, and Prince. Bush provided backing vocals for a song recorded during the 1990s, "Wouldn't Change a Thing", by Lionel Azulay, the drummer of the original band that later became the KT Bush Band. The song, which was engineered and produced by Del Palmer, was released on Azulay's album "Out of the Ashes". Bush declined a request by Erasure to produce one of their albums because, according to Vince Clarke, "she didn't feel that that was her area".
Bush was in a long-term relationship with bassist and engineer Del Palmer from the late 1970s to the early 1990s. Bush is a former resident of Eltham, southeast London. In the 1990s she moved to a canalside residence in Sulhamstead, Berkshire, and then to Devon in 2004. Bush is a vegetarian. Raised a Roman Catholic, she spoke about her beliefs in 1999. The length of time between albums has led to rumours concerning Bush's health or appearance. In 2011, she told BBC Radio 4 that the amount of time between albums was stressful: "It's very frustrating the albums take as long as they do ... I wish there weren't such big gaps between them". In the same interview, she denied that she was a perfectionist, saying: "I think it's important that things are flawed ... That's what makes a piece of art interesting sometimes – the bit that's wrong or the mistake you've made that's led onto an idea you wouldn't have had otherwise." She reiterated her prioritisation of her family life. Bush's son, Bertie, featured prominently in the 2014 concert Before the Dawn. Her nephew, Raven Bush, is a violinist in the English indie band Syd Arthur. In the "Comic Strip Presents" film "GLC", she produced and sang on the theme song "Ken". The song was written about Ken Livingstone, the leader of the Greater London Council and future mayor of London, who at the time was working with musicians to help the Labour Party garner the youth vote. In 2016, the Canadian news magazine "Maclean's" published an interview in which Bush was asked about Theresa May, then Conservative Prime Minister of the United Kingdom. It quoted Bush as saying: "I actually really like her and think she's wonderful. I think it's the best thing that's happened to us in a long time ... It is great to have a woman in charge of the country. She's very sensible and I think that's a good thing at this point in time." In 2019, Bush published a statement on her website saying she did not support the Conservative Party.
She wrote: "My response to the interviewer was not meant to be political but rather was in the defence of women in power ... I said that we had a woman in charge of our country, and that I felt it was a good thing to have women in power … it could make it seem like I am a Tory supporter which I want to make clear I am not."
https://en.wikipedia.org/wiki?curid=17443
Kittiwake The kittiwakes (genus Rissa) are two closely related seabird species in the gull family Laridae, the black-legged kittiwake ("Rissa tridactyla") and the red-legged kittiwake ("Rissa brevirostris"). The epithets "black-legged" and "red-legged" are used to distinguish the two species in North America, but in Europe, where "Rissa brevirostris" is not found, the black-legged kittiwake is often known simply as kittiwake, or more colloquially in some areas as tickleass or tickleace. The name is derived from its call, a shrill 'kittee-wa-aaake, kitte-wa-aaake'. The genus name "Rissa" is from the Icelandic name "Rita" for the black-legged kittiwake. The two species are physically very similar. They have a white head and body, grey back, grey wings tipped solid black, and a yellow bill. Black-legged kittiwake adults are somewhat larger than red-legged kittiwakes. Other differences include a shorter bill, larger eyes, a larger, rounder head and darker grey wings in the red-legged kittiwake. While most black-legged kittiwakes do, indeed, have dark-grey legs, some have pinkish-grey to reddish legs, making colouration a somewhat unreliable identifying marker. In contrast to the dappled chicks of other gull species, kittiwake chicks are downy and white, since they are under relatively little threat of predation, as the nests are on extremely steep cliffs. Unlike other gull chicks, which wander around as soon as they can walk, kittiwake chicks instinctively sit still in the nest to avoid falling off. Juveniles take three years to reach maturity. When in winter plumage, both birds have a dark grey smudge behind the eye and a grey hind-neck collar. The sexes are visually indistinguishable. Kittiwakes are coastal breeding birds ranging in the North Pacific, North Atlantic, and Arctic oceans.
They form large, dense, noisy colonies during the summer reproductive period, often sharing habitat with murres. They are the only gull species that are exclusively cliff-nesting. A colony of kittiwakes living in Newcastle upon Tyne and Gateshead in the north east of England has made homes on both the Tyne Bridge and Baltic Centre for Contemporary Art. This colony is notable because it is the furthest inland colony of kittiwakes in the world.
https://en.wikipedia.org/wiki?curid=17445