Casino game

Games available in most casinos are commonly called casino games. In a casino game, the players gamble casino chips on various possible random outcomes or combinations of outcomes. Casino games are also available in online casinos, where permitted by law. Casino games can also be played outside casinos for entertainment purposes, as at parties or in school competitions, some on machines that simulate gambling.

There are three general categories of casino games: table games, electronic gaming machines, and random number ticket games such as keno. Gaming machines, such as slot machines and pachinko, are usually played by one player at a time and do not require the involvement of casino employees to play. Random number games are based upon the selection of random numbers, either from a computerized random number generator or from other gaming equipment. Random number games may be played at a table, such as roulette, or through the purchase of paper tickets or cards, such as keno or bingo.

Casino games typically provide a predictable long-term advantage to the casino, or "house", while offering the players the possibility of a short-term gain that in some cases can be large. Some casino games have a skill element, where the players' decisions have an impact on the results. Players possessing sufficient skills to eliminate the inherent long-term disadvantage (the "house edge" or vigorish) in a casino game are referred to as advantage players.

The players' disadvantage is a result of the casino not paying winning wagers according to the game's "true odds", which are the payouts that would be expected considering the odds of a wager either winning or losing. For example, if a game is played by wagering on the number that would result from the roll of one die, true odds would be 5 times the amount wagered, since there is a 1 in 6 chance of any single number appearing, assuming that the player gets the original amount wagered back. However, the casino may only pay 4 times the amount wagered for a winning wager. The house edge or vigorish is defined as the casino profit expressed as a percentage of the player's original bet. (In games such as blackjack or Spanish 21, the final bet may be several times the original bet, if the player doubles and splits.)

In American roulette, there are two "zeroes" (0, 00) and 36 non-zero numbers (18 red and 18 black). This leads to a higher house edge compared to European roulette. The chance that a player who bets 1 unit on red wins is 18/38, and the chance of losing 1 unit is 20/38. The player's expected value is EV = (18/38 × 1) + (20/38 × (−1)) = 18/38 − 20/38 = −2/38 = −5.26%. Therefore, the house edge is 5.26%. After 10 spins, betting 1 unit per spin, the average house profit will be 10 × 1 × 5.26% = 0.53 units. European roulette wheels have only one "zero", and therefore the house advantage (ignoring the en prison rule) is equal to 1/37 = 2.7%.

The house edge of casino games varies greatly with the game, with some games having an edge as low as 0.3%. Keno can have house edges up to 25%, and slot machines up to 15%. The calculation of the roulette house edge was a trivial exercise; for other games, this is not usually the case. Combinatorial analysis and/or computer simulation is necessary to complete the task.
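The roulette and die arithmetic above is easy to reproduce in a few lines of code. The following is a minimal illustrative sketch (the function name and structure are my own, not from the article) that recomputes both examples using exact rational arithmetic:

```python
# Illustrative sketch: house edge as the negative expected value of a wager,
# per the definitions above. Names and structure are illustrative only.

from fractions import Fraction

def expected_value(p_win: Fraction, payout: Fraction,
                   loss: Fraction = Fraction(1)) -> Fraction:
    """EV of a 1-unit bet that pays `payout` units of profit on a win."""
    return p_win * payout - (1 - p_win) * loss

# Die example: true odds would pay 5:1, but the house pays only 4:1.
die_ev = expected_value(Fraction(1, 6), Fraction(4))
print(f"Die bet EV: {die_ev} = {float(die_ev):.2%}")   # -1/6, i.e. a 16.67% house edge

# American roulette, 1 unit on red: 18 winning numbers out of 38, even-money payout.
red_ev = expected_value(Fraction(18, 38), Fraction(1))
print(f"Red bet EV: {red_ev} = {float(red_ev):.2%}")   # -1/19 ≈ -5.26%

# Average house profit after 10 spins of 1 unit each.
print(f"Expected loss over 10 spins: {float(-red_ev * 10):.2f} units")  # ≈ 0.53
```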
In games which have a skill element, such as blackjack or Spanish 21, the house edge is defined as the house advantage from optimal play (without the use of advanced techniques such as card counting), on the first hand of the shoe (the container that holds the cards). The set of optimal plays for all possible hands is known as "basic strategy" and is highly dependent on the specific rules and even the number of decks used. Good blackjack and Spanish 21 games have house edges below 0.5%.

Traditionally, the majority of casinos have refused to reveal the house edge information for their slot games, and because of the unknown number of symbols and weightings of the reels, in most cases the house edge is much more difficult to calculate than in other casino games. However, due to some online properties revealing this information and some independent research conducted by Michael Shackleford in the offline sector, this pattern is slowly changing.

The luck factor in a casino game is quantified using standard deviation (SD). The standard deviation of a simple game like roulette can be calculated using the binomial distribution. In the binomial distribution, SD = √(npq), where "n" = number of rounds played, "p" = probability of winning, and "q" = probability of losing. The binomial distribution assumes a result of 1 unit for a win, and 0 units for a loss, rather than −1 units for a loss, which doubles the range of possible outcomes. Furthermore, if we flat bet at 10 units per round instead of 1 unit, the range of possible outcomes increases 10-fold. The standard deviation of the net result is therefore SD = 2b√(npq), where "b" = flat bet per round. For example, after 10 rounds at 1 unit per round, the standard deviation will be 2 × 1 × √(10 × 18/38 × 20/38) ≈ 3.16 units. After 10 rounds, the expected loss will be 10 × 1 × 5.26% = 0.53 units. As you can see, the standard deviation is many times the magnitude of the expected loss.

The standard deviation for pai gow poker is the lowest of all common casino games. Many casino games, particularly slot machines, have extremely high standard deviations. The bigger the size of the potential payouts, the more the standard deviation may increase.

As the number of rounds increases, eventually the expected loss will exceed the standard deviation, many times over. From the formula, we can see that the standard deviation is proportional to the square root of the number of rounds played, while the expected loss is proportional to the number of rounds played. As the number of rounds increases, the expected loss increases at a much faster rate. This is why it is impossible for a gambler to win in the long term. It is the high ratio of short-term standard deviation to expected loss that fools gamblers into thinking that they can win.

It is important for a casino to know both the house edge and the variance for all of its games. The house edge tells them what kind of profit they will make as a percentage of turnover, and the variance tells them how much they need in the way of cash reserves. The mathematicians and computer programmers who do this kind of work are called gaming mathematicians and gaming analysts. Casinos do not have in-house expertise in this field, so they outsource their requirements to experts in the gaming analysis field.
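As a concrete illustration of the short-term-luck versus long-term-edge point above, the sketch below (an illustrative toy, not from the article) tabulates expected loss and standard deviation for the even-money red bet at increasing round counts, using the SD = 2b√(npq) formula reconstructed above:

```python
# Expected loss grows linearly in the number of rounds n, while the standard
# deviation grows only with sqrt(n). Flat betting b units on red in American
# roulette: p = 18/38, q = 20/38, SD = 2 * b * sqrt(n * p * q).

import math

p, q, b = 18 / 38, 20 / 38, 1      # win probability, loss probability, flat bet
house_edge = q - p                  # 2/38 ≈ 5.26%

for n in (10, 100, 10_000, 1_000_000):
    expected_loss = n * b * house_edge
    sd = 2 * b * math.sqrt(n * p * q)
    print(f"n={n:>9}: expected loss {expected_loss:>10.2f} units, "
          f"SD {sd:>8.2f} units, SD/loss ratio {sd / expected_loss:5.2f}")

# At n=10 the SD (≈3.16) dwarfs the expected loss (≈0.53); by n=1,000,000 the
# expected loss (≈52,632) far exceeds the SD (≈999), so long-run play loses.
```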
https://en.wikipedia.org/wiki?curid=5362
Video game

A video game is an electronic game that involves interaction with a user interface to generate visual feedback on a two- or three-dimensional video display device such as a touchscreen, virtual reality headset or monitor/TV set. Since the 1980s, video games have become an increasingly important part of the entertainment industry, and whether they are also a form of art is a matter of dispute.

The electronic systems used to play video games are called platforms. Video games are developed and released for one or several platforms and may not be available on others. Specialized platforms such as arcade games, which present the game in a large, typically coin-operated chassis, were common in the 1980s in video arcades, but declined in popularity as other, more affordable platforms became available. These include dedicated devices such as video game consoles, as well as general-purpose computers like laptop, desktop or handheld computing devices. The input device used for games, the game controller, varies across platforms. Common controllers include gamepads, joysticks, mouse devices, keyboards, the touchscreens of mobile devices, or even a person's body, using a Kinect sensor. Players view the game on a display device such as a television or computer monitor or sometimes on virtual reality head-mounted display goggles. There are often game sound effects, music and voice actor lines which come from loudspeakers or headphones. Some games in the 2000s include haptic, vibration-creating effects, force feedback peripherals and virtual reality headsets.

Since the 2010s, the commercial importance of the video game industry has been increasing. The emerging Asian markets and mobile games on smartphones in particular are driving the growth of the industry. As of 2018, video games generated sales of US$134.9 billion annually worldwide, and were the third-largest segment in the U.S. entertainment market, behind broadcast and cable TV.

Early games used interactive electronic devices with various display formats. The earliest example is from 1947: a patent for a "Cathode ray tube Amusement Device" was filed on 25 January 1947 by Thomas T. Goldsmith Jr. and Estle Ray Mann, and issued on 14 December 1948 as U.S. Patent 2455992. Inspired by radar display technology, it consisted of an analog device that allowed a user to control a vector-drawn dot on the screen to simulate a missile being fired at targets, which were drawings fixed to the screen. Other early examples include: the Nimrod computer at the 1951 Festival of Britain; "OXO", a tic-tac-toe computer game by Alexander S. Douglas for the EDSAC in 1952; "Tennis for Two", an electronic interactive game engineered by William Higinbotham in 1958; "Spacewar!", written by MIT students Martin Graetz, Steve Russell, and Wayne Wiitanen on a DEC PDP-1 computer in 1961; and the hit ping-pong-style "Pong", a 1972 game by Atari. Each game used different means of display: Nimrod used a panel of lights to play the game of Nim, "OXO" used a graphical display to play tic-tac-toe, "Tennis for Two" used an oscilloscope to display a side view of a tennis court, and "Spacewar!" used the DEC PDP-1's vector display to have two spaceships battle each other.

In 1971, "Computer Space", created by Nolan Bushnell and Ted Dabney, was the first commercially sold, coin-operated video game. It used a black-and-white television for its display, and the computer system was made of 74-series TTL chips. The game was featured in the 1973 science fiction film "Soylent Green".
"Computer Space" was followed in 1972 by the Magnavox Odyssey, the first home console. Modeled after a late 1960s prototype console developed by Ralph H. Baer called the "Brown Box", it also used a standard television. These were followed by two versions of Atari's "Pong"; an arcade version in 1972 and a home version in 1975 that dramatically increased video game popularity. The commercial success of "Pong" led numerous other companies to develop "Pong" clones and their own systems, spawning the video game industry. A flood of "Pong" clones eventually led to the video game crash of 1977, which came to an end with the mainstream success of Taito's 1978 shooter game "Space Invaders", marking the beginning of the golden age of arcade video games and inspiring dozens of manufacturers to enter the market. The game inspired arcade machines to become prevalent in mainstream locations such as shopping malls, traditional storefronts, restaurants, and convenience stores. The game also became the subject of numerous articles and stories on television and in newspapers and magazines, establishing video gaming as a rapidly growing mainstream hobby. "Space Invaders" was soon licensed for the Atari VCS (later known as Atari 2600), becoming the first "killer app" and quadrupling the console's sales. This helped Atari recover from their earlier losses, and in turn the Atari VCS revived the home video game market during the second generation of consoles, up until the North American video game crash of 1983. The home video game industry was revitalized shortly afterwards by the widespread success of the Nintendo Entertainment System, which marked a shift in the dominance of the video game industry from the United States to Japan during the third generation of consoles. A number of video game developers emerged in Britain in the late 1970s and early 1980s. The term "platform" refers to the specific combination of electronic components or computer hardware which, in conjunction with software, allows a video game to operate. The term "system" is also commonly used. The distinctions below are not always clear and there may be games that bridge one or more platforms. In addition to laptop/desktop computers and mobile devices, there are other devices which have the ability to play games but are not primarily video game machines, such as PDAs and graphing calculators. In common use a "PC game" refers to a form of media that involves a player interacting with a personal computer connected to a video monitor. Personal computers are not dedicated game platforms, so there may be differences running the same game on different hardware. Also, the openness allows some features to developers like reduced software cost, increased flexibility, increased innovation, emulation, creation of modifications ("mods"), open hosting for online gaming (in which a person plays a video game with people who are in a different household) and others. A gaming computer is a PC or laptop intended specifically for gaming. A "console game" is played on a specialized electronic device ("home video game console") that connects to a common television set or composite video monitor, unlike PCs, which can run all sorts of computer programs, a console is a dedicated video game platform manufactured by a specific company. Usually consoles only run games developed for it, or games from other platform made by the same company, but never games developed by its direct competitor, even if the same game is available on different platforms. 
It often comes with a specific game controller. Major console platforms include Xbox, PlayStation, and Nintendo.

A "handheld" gaming device is a small, self-contained electronic device that is portable and can be held in a user's hands. It features the console, a small screen, speakers and buttons, joystick or other game controllers in a single unit. Like consoles, handhelds are dedicated platforms and share almost the same characteristics. Handheld hardware is usually less powerful than PC or console hardware. Some handheld games from the late 1970s and early 1980s could only play one game. In the 1990s and 2000s, a number of handheld games used cartridges, which enabled them to be used to play many different games.

"Arcade game" generally refers to a game played on an even more specialized type of electronic device that is typically designed to play only one game and is encased in a special, large coin-operated cabinet which has one built-in console, controllers (joystick, buttons, etc.), a CRT screen, and an audio amplifier and speakers. Arcade games often have brightly painted logos and images relating to the theme of the game. While most arcade games are housed in a vertical cabinet, which the user typically stands in front of to play, some arcade games use a tabletop approach, in which the display screen is housed in a table-style cabinet with a see-through table top. With table-top games, the users typically sit to play. In the 1990s and 2000s, some arcade games offered players a choice of multiple games. In the 1980s, video arcades were businesses in which game players could use a number of arcade video games. In the 2010s, there are far fewer video arcades, but some movie theaters and family entertainment centers still have them.

The web browser has also established itself as a platform in its own right in the 2000s, providing a cross-platform environment for video games designed to be played on a wide spectrum of platforms. In turn, this has generated new terms to qualify classes of web browser-based games. These games may be identified based on the website on which they appear, as with "Miniclip" games. Others are named based on the programming platform used to develop them, such as Java and Flash games.

With the advent of standard operating systems for mobile devices such as iOS and Android and devices with greater hardware performance, mobile gaming has become a significant platform. These games may utilize unique features of mobile devices that are not necessarily present on other platforms, such as global positioning information and camera devices, to support augmented reality gameplay. Mobile games also led to the development of microtransactions as a valid revenue model for casual games.

Virtual reality (VR) games generally require players to use a special head-mounted unit that provides stereoscopic screens and motion tracking to immerse the player within a virtual environment that responds to their head movements. Some VR systems include control units for the player's hands to provide a direct way to interact with the virtual world. VR systems generally require a separate computer, console, or other processing device that couples with the head-mounted unit.

A new platform of video games emerged in late 2017 in which users could take ownership of game assets (digital assets) using blockchain technologies. An example of this is CryptoKitties.

A video game, like most other forms of media, may be categorized into genres.
Video game genres are used to categorize video games based on their gameplay interaction rather than visual or narrative differences. A video game genre is defined by a set of gameplay challenges and is classified independently of setting or game-world content, unlike other works of fiction such as films or books. For example, a shooter game is still a shooter game, regardless of whether it takes place in a fantasy world or in outer space. Because genres depend on content for definition, genres have changed and evolved as newer styles of video games have come into existence. Ever-advancing technology and production values related to video game development have fostered more lifelike and complex games which have in turn introduced or enhanced genre possibilities (e.g., virtual pets), pushed the boundaries of existing video gaming, or in some cases added new possibilities in play (such as that seen with games specifically designed for devices like Sony's EyeToy). Some genres represent combinations of others, such as multiplayer online battle arena (MOBA) and massively multiplayer online role-playing games (MMORPG). It is also common to see higher-level genre terms that are collective in nature across all other genres, as with action, music/rhythm or horror-themed video games.

Casual games derive their name from their ease of accessibility, simple-to-understand gameplay and quick-to-grasp rule sets. Additionally, casual games frequently support the ability to jump in and out of play on demand. Casual games as a format existed long before the term was coined, and include video games such as Solitaire or Minesweeper, which can commonly be found pre-installed with many versions of the Microsoft Windows operating system. Examples of genres within this category are match three, hidden object, time management, puzzle, and many of the tower defense style games. Casual games are generally available through app stores and online retailers such as PopCap and GameHouse, or provided for free play through web portals such as Newgrounds. While casual games are most commonly played on personal computers, phones or tablets, they can also be found on many of the online console download services (e.g., the PlayStation Network, WiiWare or Xbox Live).

Serious games are games that are designed primarily to convey information or a learning experience to the player. Some serious games may even fail to qualify as a video game in the traditional sense of the term. Educational software does not typically fall under this category (e.g., touch typing tutors, language learning programs, etc.), and the primary distinction would appear to be based on the game's primary goal as well as target age demographics. As with the other categories, this description is more of a guideline than a rule. Serious games are generally made for reasons beyond simple entertainment and, as with the core and casual games, may include works from any given genre, although some, such as exercise games, educational games, or propaganda games, may have a higher representation in this group due to their subject matter. These games are typically designed to be played by professionals as part of a specific job or for skill set improvement. They can also be created to convey social-political awareness on a specific subject. One of the longest-running serious games franchises is "Microsoft Flight Simulator", first published in 1982 under that name.
The United States military uses virtual reality-based simulations, such as VBS1, for training exercises, as do a growing number of first responder roles (e.g., police, firefighters, EMTs). One example of a non-game environment utilized as a platform for serious game development would be the virtual world of "Second Life", which is currently used by several United States governmental departments (e.g., NOAA, NASA, JPL) and universities (e.g., Ohio University, MIT) for educational and remote learning programs, and by businesses (e.g., IBM, Cisco Systems) for meetings and training.

Tactical media in video games plays a crucial role in making a statement or conveying a message on important relevant issues. This form of media allows a broader audience to receive and gain access to certain information that otherwise may not have reached such people. An example of tactical media in video games would be newsgames: short games related to contemporary events, designed to illustrate a point. For example, Take Action Games is a game studio collective that was co-founded by Susana Ruiz and has made successful serious games, including "Darfur is Dying", "Finding Zoe", and "In The Balance". All of these games bring awareness to important issues and events.

On 23 September 2009, U.S. President Barack Obama launched a campaign called "Educate to Innovate" aimed at improving the technological, mathematical, scientific and engineering abilities of American students. The campaign states that it plans to harness the power of interactive games to help students excel in these areas. It has opened many new opportunities for the video game realm and has contributed to many new competitions, including the STEM National Video Game Competition and the Imagine Cup, both of which bring a focus to relevant and important current issues through gaming. www.NobelPrize.org entices the user to learn about the Nobel Prize achievements while engaging in a fun video game. There are many different types and styles of educational games, ranging from counting to spelling, and from games for kids to games for adults. Other games do not have any particular target audience in mind and are intended simply to educate or inform whoever views or plays them.

Video games can use several types of input devices to translate human actions into a game. The most common game controllers are keyboard and mouse for PC games; consoles usually come with specific gamepads, and handheld consoles have built-in buttons. Other game controllers are commonly used for specific games, such as racing wheels, light guns or dance pads. Digital cameras can also be used as game controllers, capturing movements of the player's body.

As technology continues to advance, more can be added onto the controller to give the player a more immersive experience when playing different games. There are some controllers that have presets so that the buttons are mapped a certain way to make playing certain games easier. Along with the presets, a player can sometimes custom-map the buttons to better accommodate their play style. On keyboard and mouse, different actions in the game are already preset to keys on the keyboard. Most games allow the player to change that, so that the actions are mapped to different keys that are more to their liking.
The companies that design the controllers try to make the controller visually appealing and comfortable in the hands of the consumer. An example of a technology incorporated into the controller is the touchscreen. It allows the player to interact with the game differently than before: the person can move around in menus more easily and is also able to interact with different objects in the game. They can pick up some objects, equip others, or even just move the objects out of the player's path. Another example is the motion sensor, where a person's movement is captured and put into a game. Some motion sensor games are based on where the controller is: a signal is sent from the controller to the console or computer, so that the actions being performed can create certain movements in the game. Another type of motion sensor game is webcam-style, where the player moves around in front of the camera and the actions are repeated by a game character.

Video game development and authorship, much like any other form of entertainment, is frequently a cross-disciplinary field. Video game developers, as employees within this industry are commonly called, primarily include programmers and graphic designers. Over the years this has expanded to include almost every type of skill that one might see prevalent in the creation of any movie or television program, including sound designers, musicians, and other technicians, as well as skills that are specific to video games, such as the game designer. All of these are managed by producers.

In the early days of the industry, it was more common for a single person to manage all of the roles needed to create a video game. As platforms have become more complex and powerful in the type of material they can present, larger teams have been needed to generate all of the art, programming, cinematography, and more. This is not to say that the age of the "one-man shop" is gone, as this is still sometimes found in the casual gaming and handheld markets, where smaller games are prevalent due to technical limitations such as limited RAM or lack of dedicated 3D graphics rendering capabilities on the target platform (e.g., some PDAs).

With the growth of the size of development teams in the industry, the problem of cost has increased. Development studios need to be able to pay their staff a competitive wage in order to attract and retain the best talent, while publishers are constantly looking to keep costs down in order to maintain profitability on their investment. Typically, a video game console development team ranges in size from 5 to 50 people, with some teams exceeding 100. In May 2009, one game project was reported to have a development staff of 450. The growth of team size, combined with greater pressure to get completed projects into the market to begin recouping production costs, has led to a greater occurrence of missed deadlines, rushed games and the release of unfinished products.

The phenomenon of releasing additional game content at a later date, often for additional funds, began with digital distribution and is known as downloadable content (DLC). Developers can use digital distribution to issue new storylines after the main game is released, such as Rockstar Games with "Grand Theft Auto IV", or Bethesda with "Fallout 3" and its expansions.
New gameplay modes can also become available, for instance "Call of Duty" and its zombie modes, a multiplayer mode for "Mushroom Wars", or a higher difficulty level. Smaller packages of DLC are also common, ranging from better in-game weapons ("Dead Space", "Just Cause 2") and character outfits ("LittleBigPlanet", "Minecraft") to new songs to perform ("SingStar", "Rock Band", "Guitar Hero").

A variation of downloadable content is the expansion pack. Unlike DLC, expansion packs add a whole section to the game that either already exists in the game's code or is developed after the game is released. Expansions add new maps, missions, weapons, and other things that weren't previously accessible in the original game. An example of an expansion is Bungie's "Destiny", whose expansion added new weapons, new maps, and higher levels, and remade old missions. Expansions are added to the base game to help prolong the life of the game itself until the company is able to produce a sequel or a new game altogether. Developers may plan out their game's life and already have the code for the expansion in the game, but inaccessible to players, who later unlock these expansions, sometimes for free and sometimes at an extra cost. Some developers make games and add expansions later, so that they can see what additions the players would like to have. There are also expansions that are set apart from the original game and are considered a stand-alone game, such as Ubisoft's expansion "Freedom Cry", which features a different character than the original game.

Many games produced for the PC are designed such that technically oriented consumers can modify the game. These mods can add an extra dimension of replayability and interest. Developers such as id Software, Valve, Crytek, Bethesda, Epic Games and Blizzard Entertainment ship their games with some of the development tools used to make the game, along with documentation to assist mod developers. The Internet provides an inexpensive medium to promote and distribute mods, and they may be a factor in the commercial success of some games. This allows for the kind of success seen by popular mods such as the "Half-Life" mod "Counter-Strike".

Cheating in computer games may involve cheat codes and hidden spots implemented by the game developers, modification of game code by third parties, or players exploiting a software glitch. Modifications are facilitated by either cheat cartridge hardware or a software trainer. Cheats usually make the game easier by providing an unlimited amount of some resource, for example weapons, health, or ammunition, or perhaps the ability to walk through walls. Other cheats might give access to otherwise unplayable levels or provide unusual or amusing features, like altered game colors or other graphical appearances.

Software errors not detected by software testers during development can find their way into released versions of computer and video games. This may happen because the glitch only occurs under unusual circumstances in the game, was deemed too minor to correct, or because the game development was hurried to meet a publication deadline. Glitches can range from minor graphical errors to serious bugs that can delete saved data or cause the game to malfunction. In some cases publishers will release updates (referred to as "patches") to repair glitches. Sometimes a glitch may be beneficial to the player; these are often referred to as exploits.
Easter eggs are hidden messages or jokes left in games by developers that are not part of the main game. Easter eggs are secret responses that occur as a result of an undocumented set of commands. The results can vary from a simple printed message or image to a page of programmer credits or a small video game hidden inside an otherwise serious piece of software. Video game cheat codes are a specific type of Easter egg, in which entering a secret command will unlock special powers or new levels for the player.

Although departments of computer science have been studying the technical aspects of video games for years, theories that examine games as an artistic medium are a relatively recent development in the humanities. The two most visible schools in this emerging field are ludology and narratology. Narrativists approach video games in the context of what Janet Murray calls "Cyberdrama". That is to say, their major concern is with video games as a storytelling medium, one that arises out of interactive fiction. Murray puts video games in the context of the Holodeck, a fictional piece of technology from "Star Trek", arguing for the video game as a medium in which the player is allowed to become another person, and to act out in another world. This image of video games received early widespread popular support, and forms the basis of films such as "Tron", "eXistenZ" and "The Last Starfighter".

Ludologists break sharply and radically from this idea. They argue that a video game is first and foremost a game, which must be understood in terms of its rules, interface, and the concept of play that it deploys. Espen J. Aarseth argues that, although games certainly have plots, characters, and aspects of traditional narratives, these aspects are incidental to gameplay. For example, Aarseth is critical of the widespread attention that narrativists have given to the heroine of the game "Tomb Raider", saying that "the dimensions of Lara Croft's body, already analyzed to death by film theorists, are irrelevant to me as a player, because a different-looking body would not make me play differently... When I play, I don't even see her body, but see through it and past it." Simply put, ludologists reject traditional theories of art because they claim that the artistic and socially relevant qualities of a video game are primarily determined by the underlying set of rules, demands, and expectations imposed on the player.

While many games rely on emergent principles, video games commonly present simulated story worlds where emergent behavior occurs within the context of the game. The term "emergent narrative" has been used to describe how, in a simulated environment, storyline can be created simply by "what happens to the player." However, emergent behavior is not limited to sophisticated games. In general, any place where event-driven instructions occur for AI in a game, emergent behavior will exist. For instance, take a racing game in which cars are programmed to avoid crashing, and they encounter an obstacle on the track: the cars might then maneuver to avoid the obstacle, causing the cars behind them to slow and/or maneuver to accommodate the cars in front of them and the obstacle. The programmer never wrote code to specifically create a traffic jam, yet one now exists in the game.

An emulator is a program that replicates the behavior of a video game console, allowing games to run on a different platform from the original hardware. Emulators exist for PCs, smartphones and consoles other than the original.
Emulators are generally used to play old games, hack existing games, translate games that were never released in a given region, or add enhanced features to games such as improved graphics, speed adjustment, bypassing of regional lockouts, or online multiplayer support. Some manufacturers have released official emulators for their own consoles. For example, Nintendo's Virtual Console allows users to play games for old Nintendo consoles on the Wii, Wii U, and 3DS. Virtual Console is part of Nintendo's strategy for deterring video game piracy. In November 2015, Microsoft launched backwards compatibility of Xbox 360 games on the Xbox One console via emulation, and Sony announced the relaunch of PS2 games on the PS4 via emulation. According to "Sony Computer Entertainment America v. Bleem", creating an emulator for a proprietary video game console is legal. However, Nintendo claims that emulators promote the distribution of illegally copied games.

The November 2005 Nielsen Active Gamer Study, a survey of 2,000 regular gamers, found that the U.S. games market is diversifying. Male players have expanded significantly into the 25–40 age group. For casual online puzzle-style and simple mobile cell phone games, the gender divide is more or less equal between men and women. More recently, there has been a growing segment of female players engaged with the aggressive style of games historically considered to fall within traditionally male genres (e.g., first-person shooters). According to the ESRB, almost 41% of PC gamers are women. Participation among African-Americans is lower; one survey of over 2,000 game developers returned responses from only 2.5% who identified as black.

When comparing today's industry climate with that of 20 years ago, women and many adults are more inclined to be using products in the industry. While the market for teen and young adult men is still a strong market, it is the other demographics which are posting significant growth. The Entertainment Software Association (ESA) provides a summary for 2011 based on a study of almost 1,200 American households carried out by Ipsos MediaCT. A 2006 academic study, based on a survey answered by 10,000 gamers, identified gaymers (gamers who identify as gay) as a demographic group, and a follow-up survey in 2009 studied the purchase habits and content preferences of people in the group. Based on a study by the NPD Group in 2011, approximately 91 percent of children aged 2–17 play games.

Video game culture is a worldwide new media subculture formed around video games and game playing. As computer and video games have increased in popularity over time, they have had a significant influence on popular culture. Video game culture has also evolved over time hand in hand with internet culture, as well as with the increasing popularity of mobile games. Many people who play video games identify as gamers, which can mean anything from someone who enjoys games to someone who is passionate about them. As video games become more social with multiplayer and online capability, gamers find themselves in growing social networks. Gaming can be both entertainment and competition, as a new trend known as electronic sports is becoming more widely accepted. In the 2010s, video games and discussions of video game trends and topics can be seen in social media, politics, television, film and music.
Multiplayer video games are those that can be played either competitively, sometimes in electronic sports, or cooperatively, by using either multiple input devices or by hotseating. "Tennis for Two", arguably the first video game, was a two-player game, as was its successor "Pong". The first commercially available game console, the Magnavox Odyssey, had two controller inputs. Since then, most consoles have shipped with two or four controller inputs. Some have had the ability to expand to four, eight or as many as 12 inputs with additional adapters, such as the Multitap. Multiplayer arcade games typically feature play for two to four players, sometimes tilting the monitor on its back for a top-down viewing experience that allows players to sit opposite one another.

Many early computer games for non-PC platforms featured multiplayer support. Personal computer systems from Atari and Commodore regularly featured at least two game ports. PC-based computer games started with a lower availability of multiplayer options because of technical limitations; PCs typically had either one game port or none at all. Network games for these early personal computers were generally limited to text-based adventures or MUDs that were played remotely on a dedicated server. This was due both to the slow speed of modems (300–1200 bit/s) and to the prohibitive cost involved with putting a computer online in such a way that multiple visitors could make use of it. However, with the advent of widespread local area networking technologies and Internet-based online capabilities, the number of players in modern games can be 32 or higher, sometimes featuring integrated text and/or voice chat. Massively multiplayer online games (MMOs) can offer extremely high numbers of simultaneous players; "Eve Online" set a record with 65,303 players on a single server in 2013.

It has been shown that action video game players have better hand–eye coordination and visuo-motor skills, such as resistance to distraction, sensitivity to information in the peripheral vision and the ability to count briefly presented objects, than nonplayers. Researchers found that such enhanced abilities could be acquired by training with action games, involving challenges that switch attention between different locations, but not with games requiring concentration on single objects.

In Steven Johnson's book "Everything Bad Is Good for You", he argues that video games in fact demand far more from a player than traditional games like "Monopoly". To experience the game, the player must first determine the objectives, as well as how to complete them. They must then learn the game controls and how the human-machine interface works, including menus and HUDs. Beyond such skills, which after some time become quite fundamental and are taken for granted by many gamers, video games are based upon the player navigating (and eventually mastering) a highly complex system with many variables. This requires a strong analytical ability, as well as flexibility and adaptability. He argues that the process of learning the boundaries, goals, and controls of a given game is often a highly demanding one that calls on many different areas of cognitive function. Indeed, most games require a great deal of patience and focus from the player, and, contrary to the popular perception that games provide instant gratification, games actually delay gratification far longer than other forms of entertainment such as film or even many books.
Some research suggests video games may even increase players' attention capacities. Learning principles found in video games have been identified as possible techniques with which to reform the U.S. education system. It has been noticed that gamers adopt an attitude of such high concentration while playing that they do not realize they are learning, and that if the same attitude could be adopted at school, education would enjoy significant benefits. Students are found to be "learning by doing" while playing video games, which fosters creative thinking.

The U.S. Army has deployed machines such as the PackBot and UAV vehicles, which make use of a game-style hand controller to make them more familiar for young people. According to research discussed at the 2008 Convention of the American Psychological Association, certain types of video games can improve gamers' dexterity as well as their ability to solve problems. A study of 33 laparoscopic surgeons found that those who played video games were 27 percent faster at advanced surgical procedures and made 37 percent fewer errors compared to those who did not play video games. A second study of 303 laparoscopic surgeons (82 percent men, 18 percent women) also showed that surgeons who played video games requiring spatial skills and hand dexterity, and then performed a drill testing these skills, were significantly faster at their first attempt and across all 10 trials than the surgeons who did not play the video games first.

An experiment carried out by Richard De Lisi and Jennifer Woldorf demonstrates the positive effect that video games may have on spatial skills. De Lisi and Woldorf took two groups of third graders, one control group and one experiment group. Both groups took a paper-and-pencil test of mental rotation skills. After this test, only the experiment group played 11 sessions of the game "Tetris", chosen because it requires mental rotation. Both groups then took the test again. The scores of the experiment group rose more than those of the control group, supporting the theory.

The research showing benefits from action games has been questioned due to methodological shortcomings, such as recruitment strategies and selection bias, potential placebo effects, and lack of baseline improvements in control groups. In addition, many of the studies are cross-sectional, and of the longitudinal interventional trials, not all have found effects. A response to this pointed out that the skill improvements from action games are broader than predicted, such as mental rotation, which is not a common task in action games. Action gamers are not only better at ignoring distractions, but also at focusing on the main task.

Like other media, video games have been the subject of objections, controversies, and censorship, for depictions of violence, criminal activities, sexual themes, alcohol, tobacco and other drugs, propaganda, profanity, or advertisements. Critics of video games include parents' groups, politicians, religious groups, scientists and other advocacy groups. Claims that some video games cause addiction or violent behavior continue to be made and to be disputed. There have been a number of societal and scientific arguments about whether the content of video games changes the behavior and attitudes of a player, and whether this is reflected in video game culture overall.
Since the early 1980s, advocates of video games have emphasized their use as an expressive medium, arguing for their protection under the laws governing freedom of speech and also as an educational tool. Detractors argue that video games are harmful and therefore should be subject to legislative oversight and restrictions. The positive and negative characteristics and effects of video games are the subject of scientific study. Results of investigations into links between video games and addiction, aggression, violence, social development, and a variety of stereotyping and sexual morality issues are debated. One study claimed that young people with greater exposure to violence in video games ended up behaving more aggressively towards people in a social environment. In 2018, the World Health Organization declared "gaming disorder" a mental disorder for people who are addicted to video games. Some studies have claimed that video games can negatively affect the health and mental state of some players.

In spite of the alleged negative effects of video games, certain studies indicate that they may have value in terms of academic performance, perhaps because of the skills that are developed in the process. "When you play games you're solving puzzles to move to the next level and that involves using some of the general knowledge and skills in maths, reading and science that you've been taught during the day", said Alberto Posso, an Associate Professor at the Royal Melbourne Institute of Technology, after analysing data from the results of standardized testing completed by over 12,000 high school students across Australia. As summarized by "The Guardian", the study (published in the "International Journal of Communication") "found that students who played online games almost every day scored 15 points above average in maths and reading tests and 17 points above average in science." However, the reporter added an important caveat that was omitted by some of the numerous websites that published a brief summary of the Australian study: "[the] methodology cannot prove that playing video games were the cause of the improvement." "The Guardian" also reported that a Columbia University study indicated that extensive video gaming by students in the 6 to 11 age group provided a greatly increased chance of high intellectual functioning and overall school competence.

In an interview with CNN, Edward Castronova, a professor of telecommunications at Indiana University Bloomington, said he was not surprised by the outcome of the Australian study but also discussed the issue of causal connection. "Though there is a link between gaming and higher math and science scores, it doesn't mean playing games caused the higher scores. It could just be that kids who are sharp are looking for a challenge, and they don't find it on social media, and maybe they do find it on board games and video games," he explained.

Video games have also been shown to raise self-esteem and build confidence. They give people an opportunity to do things that they cannot do offline, and to discover new things about themselves. There is a social aspect to gaming as well: research has shown that a third of video game players make good friends online. In addition, many video games can be considered therapeutic, as they can help to relieve stress.
Studies have shown that children with developmental delays gain a temporary improvement in physical health when they play video games on a regular, consistent basis, due to the cognitive benefits and the use of hand-eye coordination, though the effect is short-term.

Self-determination theory (SDT) is a macro theory of human motivation based around competence, autonomy, and relatedness to facilitate positive outcomes. SDT provides a framework for understanding the effects of playing video games: well-being, problem solving, group relations, and physical activity. These factors can be measured to determine the effect video games can have on people. The ability to create an ideal image of one's self, and being given multiple options to change that image, gives a sense of satisfaction. This topic carries much controversy; it is unknown whether this freedom can be beneficial or detrimental to one's character. With increased game usage, a player can become too invested in a fictionally generated character, where the desire to look that way overpowers the enjoyment of the game. Players see this character creation as entertainment and a release, creating a self-image they could not obtain in reality, bringing comfort outside of the game despite the lack of investment in the fictional character. Problems that arise based on character design may be linked to personality disorders.

Cognitive skills can be enhanced through repetition of puzzles, memory games, spatial abilities and attention control. Most video games present opportunities to use these skills, with the ability to try multiple times even after failure. Many of these skills can be translated to reality and problem solving. This allows the player to learn from mistakes and fully understand how and why a solution to a problem may work. Some researchers believe that continual exposure to challenges may lead players to develop greater persistence over time, after a study showed that frequent players spent more time on puzzles in tasks that did not involve video games. Although players were shown to spend more time on puzzles, much of that could have been due to the positive effects of problem solving in games, which involves forming strategies and weighing options before testing a solution.

Representatives of Game Academy claim that games such as Civilization, Total War, or X-Com, where strategy and resource management are key, help players to develop skills that are of great use to managers. They also found that IT workers play unusual puzzle games like Portal, or tower defense games like Defense Grid, more often than specialists from other fields. In a study that followed students through school, students who played video games showed higher levels of problem solving than students who did not. This contradicts the previous study in that a higher success rate was seen in video game players. Time being a factor in problem solving led the different studies to different conclusions. See video game controversies for more.

The rise of online gaming allows video game players to communicate and work together in order to accomplish a certain task. Being able to work as a group in a game translates well to reality and jobs, where people must work together to accomplish a task. Research on players of violent and non-violent games shows similar results, with players' relations improving as their synergy increases.
With the introduction of Wii Fit and VR (virtual reality), "exergame" popularity has been increasing, allowing video game players to experience active rather than sedentary game play. Mobile apps have expanded this concept with the introduction of "Pokémon Go", which involves walking to progress in the game. Because "exergaming" is relatively new, there is still much to be researched. No major differences were seen in tests of children who played on the Wii versus a non-active game after 12 weeks; testing a larger range of ages may show better results.

Cognitive remediation therapies that use tailored video games to improve cognitive deficits, which are associated with poorer outcomes, have well-established efficacy. Recent studies show that commercial video games modify brain areas similar to those targeted by these specialized training programs. Such games may help in the treatment of schizophrenia.

Video game laws vary from country to country. Console manufacturers usually exercise tight control over the games that are published on their systems, so unusual or special-interest games are more likely to appear as PC games. Free, casual, and browser-based games are usually played on available computers, mobile phones, tablet computers or PDAs.

Various organisations in different regions are responsible for giving content ratings to video games. The Entertainment Software Rating Board (ESRB) gives video games maturity ratings based on their content. For example, a game might be rated "T" for "Teen" if the game contains obscene words or violence. If a game contains explicit violence or sexual themes, it is likely to receive an "M" for "Mature" rating, which means that no one under 17 should play it. The "AO" rating, for "Adults Only", indicates games with graphic violence or nudity. There are no laws that prohibit children from purchasing "M" rated games in the United States. Laws attempting to prohibit minors from purchasing "M" rated games were established in California, Illinois, Michigan, Minnesota, and Louisiana, but all were overturned on the grounds that these laws violated the First Amendment. However, many stores have opted not to sell such games to children anyway. One of the most controversial games of all time, "Manhunt 2" by Rockstar Games, was given an AO rating by the ESRB until Rockstar could make the content more suitable for a mature audience.

Pan European Game Information (PEGI) is a system that was developed to standardize game ratings across Europe (not just the European Union, although the majority of its members are EU members). The current members are all EU members except Germany and the 10 accession states, plus Norway and Switzerland; Iceland is expected to join soon, as are the 10 EU accession states. All PEGI members use it as their sole system, with the exception of the UK, where a game containing certain material must also be rated by the BBFC. The PEGI ratings are legally binding in Vienna, where it is a criminal offence to sell a game to someone if it is rated above their age.

Stricter game rating laws mean that Germany does not operate within PEGI. Instead, it uses its own system of certification, which is required by law. The Unterhaltungssoftware Selbstkontrolle (USK) checks every game before release and assigns an age rating to it: either none (white), 6 years of age (yellow), 12 years of age (green), 16 years of age (blue) or 18 years of age (red).
It is forbidden for anyone, retailers, friends or parents alike, to allow a child access to a game for which he or she is underage. If a game is considered harmful to young people (for example because of extremely violent, pornographic or racist content), it may be referred to the Bundesprüfstelle für jugendgefährdende Medien (BPjM), which may opt to place it on the Index, upon which the game may not be sold openly or advertised in the open media. It is considered a felony to supply these games to a child.

The Computer Entertainment Rating Organization (CERO) rates video games and PC games (except dating sims, visual novels, and eroge) in Japan, with rating levels that inform the customer of the nature of the product and the age group for which it is suitable. It was established in July 2002 as a branch of the Computer Entertainment Supplier's Association, and became an officially recognized non-profit organization in 2003.

According to the market research firm SuperData, as of May 2015 the global games market was worth US$74.2 billion. By region, North America accounted for $23.6 billion, Asia for $23.1 billion, Europe for $22.1 billion and South America for $4.5 billion. By market segment, mobile games were worth $22.3 billion, retail games $19.7 billion, free-to-play MMOs $8.7 billion, social games $7.9 billion, PC DLC $7.5 billion, and other categories $3 billion or less each. In the United States, also according to SuperData, the share of video games in the entertainment market grew from 5% in 1985 to 13% in 2015, making them the third-largest market segment behind broadcast and cable television. The research firm anticipated that Asia would soon overtake North America as the largest video game market due to the strong growth of free-to-play and mobile games.

Sales of different types of games vary widely between countries due to local preferences. Japanese consumers tend to purchase many more handheld games than console games, and especially PC games, with a strong preference for games catering to local tastes. Another key difference is that, despite the decline of arcades in the West, arcade games remain an important sector of the Japanese gaming industry. In South Korea, computer games are generally preferred over console games, especially MMORPGs and real-time strategy games. Computer games are also popular in China.

Gaming conventions are an important showcase of the industry. The annual gamescom in Cologne in August is the world's leading expo for video games by attendance. E3 in June in Los Angeles is also of global importance, but is an event for industry insiders only. The Tokyo Game Show in September is the main fair in Asia. Other notable conventions and trade fairs include Brasil Game Show in October, Paris Games Week in October–November, EB Games Expo (Australia) in October, KRI, ChinaJoy in July and the annual Game Developers Conference. Some publishers, developers and technology producers also host their own regular conventions, with BlizzCon, QuakeCon, Nvision and the X shows being prominent examples.

Esports, short for electronic sports, are video game competitions played mostly by professional players, individually or in teams, that gained popularity from the late 2000s. The most common genres are fighting, first-person shooter (FPS), multiplayer online battle arena (MOBA) and real-time strategy. There are certain games that are made just for competitive multiplayer purposes.
With those types of games, players focus entirely on choosing the right character or obtaining the right equipment in the game to help them when facing other players. Tournaments are held so that people in the area or from different regions can play against other players of the same game and see who is the best. Major League Gaming (MLG) is a company that organizes tournaments held across the country. The players who compete in these tournaments are given a rank according to their skill level in the game they choose to play, and face other players of that game. Competitors are usually called professional players because of the many hours they have devoted to the game they compete in. Those players develop different strategies for facing different characters; a professional player can pick a character to their liking and master how to use that character very effectively. With strategy games, players tend to know how to gather resources quickly and are able to make rapid decisions about where their troops are to be deployed and what kinds of troops to create. Creators will nearly always copyright their games. Laws that define copyright, and the rights that are conveyed over a video game, vary from country to country. Usually a fair use copyright clause allows consumers some ancillary rights, such as for a player of the game to stream a game online. This is a vague area in copyright law, as these laws predate the advent of video games. This means that rightsholders often must define what they will allow a consumer to do with the video game. There are many video game museums around the world, including the National Videogame Museum in Frisco, Texas, which serves as the largest museum wholly dedicated to the display and preservation of the industry's most important artifacts. Europe hosts video game museums such as the Computer Games Museum in Berlin and the Museum of Soviet Arcade Machines in Moscow and Saint Petersburg. The Museum of Art and Digital Entertainment in Oakland, California is a dedicated video game museum focusing on playable exhibits of console and computer games. The Video Game Museum of Rome is also dedicated to preserving video games and their history. The International Center for the History of Electronic Games at The Strong in Rochester, New York contains one of the largest collections of electronic games and game-related historical materials in the world, including an exhibit which allows guests to play their way through the history of video games. The Smithsonian Institution in Washington, DC has three video games on permanent display: "Pac-Man", "Dragon's Lair", and "Pong". The Museum of Modern Art has added a total of 20 video games and one video game console to its permanent Architecture and Design Collection since 2012. In 2012, the Smithsonian American Art Museum ran an exhibition on "The Art of Video Games"; however, reviews of the exhibit were mixed, with some questioning whether video games belong in an art museum.
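The age-gating logic described above for the German USK ratings amounts to a simple threshold check. The following minimal sketch illustrates it; the color-to-age mapping comes from the scheme summarized earlier, while the data structure and function name are hypothetical, for illustration only:

```python
# Illustrative sketch of the USK age-rating check described in the text.
# The color/age mapping follows the USK scheme as summarized above;
# the dict and function are hypothetical helpers, not an official API.

USK_RATINGS = {
    "white": 0,    # no age restriction
    "yellow": 6,   # 6 years of age and over
    "green": 12,   # 12 years of age and over
    "blue": 16,    # 16 years of age and over
    "red": 18,     # 18 years of age and over
}

def may_purchase(buyer_age: int, rating_color: str) -> bool:
    """Return True if a buyer of the given age may be given access
    to a game carrying the given USK rating color."""
    return buyer_age >= USK_RATINGS[rating_color]

assert may_purchase(14, "green")       # 12+ game, 14-year-old: allowed
assert not may_purchase(14, "blue")    # 16+ game, 14-year-old: refused
```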
https://en.wikipedia.org/wiki?curid=5363
Cambrian The Cambrian Period was the first geological period of the Paleozoic Era, and of the Phanerozoic Eon. The Cambrian lasted 55.6 million years, from the end of the preceding Ediacaran Period 541 million years ago (mya) to the beginning of the Ordovician Period 485.4 mya. Its subdivisions, and its base, are somewhat in flux. The period was established (as the "Cambrian series") by Adam Sedgwick, who named it after Cambria, the Latin name of Wales, where Britain's Cambrian rocks are best exposed. The Cambrian is unique in its unusually high proportion of sedimentary deposits that are sites of exceptional preservation, where "soft" parts of organisms are preserved as well as their more resistant shells. As a result, our understanding of Cambrian biology surpasses that of some later periods. The Cambrian marked a profound change in life on Earth; prior to the Cambrian, the majority of living organisms were small, unicellular and simple, the Precambrian "Charnia" being an exception. Complex, multicellular organisms gradually became more common in the millions of years immediately preceding the Cambrian, but it was not until this period that mineralized—hence readily fossilized—organisms became common. The rapid diversification of life forms in the Cambrian, known as the Cambrian explosion, produced the first representatives of all modern animal phyla. Phylogenetic analysis has supported the view that during the Cambrian radiation, metazoa (animals) evolved monophyletically from a single common ancestor: flagellated colonial protists similar to modern choanoflagellates. Although diverse life forms prospered in the oceans, the land is thought to have been comparatively barren—with nothing more complex than a microbial soil crust and a few molluscs that emerged to browse on the microbial biofilm. Most of the continents were probably dry and rocky due to a lack of vegetation. Shallow seas flanked the margins of several continents created during the breakup of the supercontinent Pannotia. The seas were relatively warm, and polar ice was absent for much of the period. The base of the Cambrian lies atop a complex assemblage of trace fossils known as the "Treptichnus pedum" assemblage. The use of "Treptichnus pedum", a reference ichnofossil, to mark the lower boundary of the Cambrian is difficult because very similar trace fossils belonging to the treptichnid group are found well below "T. pedum" in Namibia, Spain and Newfoundland, and possibly in the western USA. The stratigraphic range of "T. pedum" overlaps the range of the Ediacaran fossils in Namibia, and probably in Spain. The Cambrian Period followed the Ediacaran Period and was followed by the Ordovician Period. The Cambrian is divided into four epochs (series) and ten ages (stages). Currently only three series and six stages are named and have a GSSP (an internationally agreed-upon stratigraphic reference point). Because the international stratigraphic subdivision is not yet complete, many local subdivisions are still widely used. In some of these subdivisions the Cambrian is divided into three series (epochs) with locally differing names – the Early Cambrian (Caerfai or Waucoban), the Middle Cambrian (St Davids or Albertan) and the Furongian (also known as Late Cambrian, Merioneth or Croixan). Rocks of these epochs are referred to as belonging to the Lower, Middle, or Upper Cambrian. Trilobite zones allow biostratigraphic correlation in the Cambrian. Each of the local series is divided into several stages.
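Before turning to the regional stage schemes, note that the boundary dates quoted above are mutually consistent: the end of the period follows from its start date and stated duration. A trivial check, using only the figures given in the text:

```python
# Sanity check of the Cambrian boundary dates quoted above
# (values in millions of years ago, taken from the text).
start_mya = 541.0      # base of the Cambrian
duration_my = 55.6     # stated length of the period
end_mya = start_mya - duration_my
print(end_mya)         # 485.4 -> base of the Ordovician
```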
The Cambrian is divided into several regional faunal stages, of which the Russian-Kazakhian system is most used in international parlance. Most Russian paleontologists define the lower boundary of the Cambrian at the base of the Tommotian Stage, characterized by the diversification and global distribution of organisms with mineral skeletons and the appearance of the first Archaeocyath bioherms. The International Commission on Stratigraphy lists the Cambrian Period as beginning 541 million years ago and ending 485.4 million years ago. The lower boundary of the Cambrian was originally held to represent the first appearance of complex life, represented by trilobites. The recognition of small shelly fossils before the first trilobites, and Ediacara biota substantially earlier, led to calls for a more precisely defined base to the Cambrian period. Despite the long recognition of its distinction from younger Ordovician rocks and older Precambrian rocks, it was not until 1994 that the Cambrian system/period was internationally ratified. After decades of careful consideration, a continuous sedimentary sequence at Fortune Head, Newfoundland was settled upon as a formal base of the Cambrian period, which was to be correlated worldwide by the earliest appearance of "Treptichnus pedum". Discovery of this fossil a few metres below the GSSP led to the refinement of this statement, and it is the "T. pedum" ichnofossil assemblage that is now formally used to correlate the base of the Cambrian. This formal designation allowed radiometric dates to be obtained from samples across the globe that corresponded to the base of the Cambrian. Early dates quickly gained favour, though the methods used to obtain them are now considered to be unsuitable and inaccurate. A more precise date was later obtained by modern radiometric dating of an ash horizon in Oman. This horizon corresponds to a marked fall in the abundance of carbon-13 that correlates to equivalent excursions elsewhere in the world, and to the disappearance of distinctive Ediacaran fossils ("Namacalathus", "Cloudina"). Nevertheless, there are arguments that the dated horizon in Oman does not correspond to the Ediacaran-Cambrian boundary, but represents a facies change from marine to evaporite-dominated strata, which would mean that dates from other sections, ranging from 544 to 542 Ma, are more suitable. Plate reconstructions suggest a global supercontinent, Pannotia, was in the process of breaking up early in the period, with Laurentia (North America), Baltica, and Siberia having separated from the main supercontinent of Gondwana to form isolated land masses. Most continental land was clustered in the Southern Hemisphere at this time, but was drifting north. Large, high-velocity rotational movement of Gondwana appears to have occurred in the Early Cambrian. With a lack of sea ice – the great glaciers of the Marinoan Snowball Earth were long melted – the sea level was high, which led to large areas of the continents being flooded in warm, shallow seas ideal for sea life. The sea levels fluctuated somewhat, suggesting there were 'ice ages', associated with pulses of expansion and contraction of a south polar ice cap. In Baltoscandia a Lower Cambrian transgression transformed large swathes of the Sub-Cambrian peneplain into an epicontinental sea. The Earth was generally cold during the early Cambrian, probably due to the ancient continent of Gondwana covering the South Pole and cutting off polar ocean currents.
However, average temperatures were 7 degrees Celsius higher than today. There were likely polar ice caps and a series of glaciations, as the planet was still recovering from an earlier Snowball Earth. It became warmer towards the end of the period; the glaciers receded and eventually disappeared, and sea levels rose dramatically. This trend would continue into the Ordovician period. Although there were a variety of macroscopic marine plants, no land plant (embryophyte) fossils are known from the Cambrian. However, biofilms and microbial mats were well developed on Cambrian tidal flats and beaches 500 mya, and microbes formed microbial Earth ecosystems, comparable with the modern soil crusts of desert regions, contributing to soil formation. Most animal life during the Cambrian was aquatic. Trilobites were once assumed to be the dominant life form at that time, but this has proven to be incorrect. Arthropods were by far the most dominant animals in the ocean, but trilobites were only a minor part of the total arthropod diversity. What made them so apparently abundant was their heavy armor reinforced by calcium carbonate (CaCO3), which fossilized far more easily than the fragile chitinous exoskeletons of other arthropods, leaving numerous preserved remains. The period marked a steep change in the diversity and composition of Earth's biosphere. The Ediacaran biota suffered a mass extinction at the start of the Cambrian Period, which corresponded with an increase in the abundance and complexity of burrowing behaviour. This behaviour had a profound and irreversible effect on the substrate, transforming the seabed ecosystems. Before the Cambrian, the sea floor was covered by microbial mats. By the end of the Cambrian, burrowing animals had destroyed the mats in many areas through bioturbation, and gradually turned the seabeds into what they are today. As a consequence, many of the organisms that were dependent on the mats became extinct, while other species adapted to the changed environment, which now offered new ecological niches. Around the same time there was a seemingly rapid appearance of representatives of all the mineralized phyla except the Bryozoa, which appeared in the Lower Ordovician. However, many of those phyla were represented only by stem-group forms; and since mineralized phyla generally have a benthic origin, they may not be a good proxy for (more abundant) non-mineralized phyla. While the early Cambrian showed such diversification that it has been named the Cambrian Explosion, this changed later in the period, when a sharp drop in biodiversity occurred. About 515 million years ago, the number of species going extinct exceeded the number of new species appearing. Five million years later, the number of genera had dropped from an earlier peak of about 600 to just 450. Also, the speciation rate in many groups was reduced to between a fifth and a third of previous levels. 500 million years ago, oxygen levels fell dramatically in the oceans, leading to hypoxia, while the level of poisonous hydrogen sulfide simultaneously increased, causing another extinction. The latter half of the Cambrian was surprisingly barren and shows evidence of several rapid extinction events; the stromatolites, which had been replaced by reef-building sponges known as Archaeocyatha, returned once more as the archaeocyathids became extinct. This declining trend did not change until the Great Ordovician Biodiversification Event.
Some Cambrian organisms ventured onto land, producing the trace fossils "Protichnites" and "Climactichnites". Fossil evidence suggests that euthycarcinoids, an extinct group of arthropods, produced at least some of the "Protichnites". Fossils of the track-maker of "Climactichnites" have not been found; however, fossil trackways and resting traces suggest a large, slug-like mollusc. In contrast to later periods, the Cambrian fauna was somewhat restricted; free-floating organisms were rare, with the majority living on or close to the sea floor; and mineralizing animals were rarer than in future periods, in part due to the unfavourable ocean chemistry. Many modes of preservation are unique to the Cambrian, and some preserve soft body parts, resulting in an abundance of lagerstätten (sites of exceptional preservation). The United States Federal Geographic Data Committee uses a "barred capital C" character to represent the Cambrian Period; the Unicode character is U+A792 (Ꞓ, LATIN CAPITAL LETTER C WITH BAR).
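The code point can be checked programmatically. A minimal sketch (the U+A792 assignment is stated here to the best of my knowledge, so treat it as an assumption to verify):

```python
# Look up the "barred capital C" used as the Cambrian map symbol.
# Assumption: it is encoded at U+A792; unicodedata lets us verify.
import unicodedata

cambrian_symbol = "\uA792"
print(cambrian_symbol)                    # Ꞓ
print(unicodedata.name(cambrian_symbol))  # LATIN CAPITAL LETTER C WITH BAR
```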
https://en.wikipedia.org/wiki?curid=5367
Category of being In ontology, the different kinds or ways of being are called categories of being, or simply categories. To investigate the categories of being is to determine the most fundamental and the broadest classes of entities. A distinction between such categories, in making the categories or applying them, is called an ontological distinction. The process of abstraction required to discover the number and names of the categories has been undertaken by many philosophers since Aristotle and involves the careful inspection of each concept to ensure that there is no higher category or categories under which that concept could be subsumed. The scholars of the twelfth and thirteenth centuries developed Aristotle's ideas. Firstly, for example, Gilbert of Poitiers divided Aristotle's ten categories into two sets, primary and secondary, according to whether they inhere in the subject or not. Secondly, following Porphyry's likening of the classificatory hierarchy to a tree, they concluded that the major classes could be subdivided to form subclasses; for example, Substance could be divided into Genus and Species, and Quality could be subdivided into Property and Accident, depending on whether the property was necessary or contingent. An alternative line of development was taken by Plotinus in the third century, who by a process of abstraction reduced Aristotle's list of ten categories to five: Substance, Relation, Quantity, Motion and Quality. Plotinus further suggested that the latter three categories of his list, namely Quantity, Motion and Quality, correspond to three different kinds of relation and that these three categories could therefore be subsumed under the category of Relation. This was to lead to the supposition that there were only two categories at the top of the hierarchical tree, namely Substance and Relation, and, if relations exist only in the mind, as many supposed, to the two highest categories, Mind and Matter, reflected most clearly in the dualism of René Descartes. An alternative conclusion, however, began to be formulated in the eighteenth century by Immanuel Kant, who realised that we can say nothing about Substance except through the relation of the subject to other things. In the sentence "This is a house" the substantive subject "house" only gains meaning in relation to human use patterns or to other similar houses. The category of Substance disappears from Kant's tables, and under the heading of Relation, Kant lists "inter alia" the three relationship types of Disjunction, Causality and Inherence. The three older concepts of Quantity, Motion and Quality, as Peirce discovered, could be subsumed under these three broader headings in that Quantity relates to the subject through the relation of Disjunction; Motion relates to the subject through the relation of Causality; and Quality relates to the subject through the relation of Inherence. Sets of three continued to play an important part in the nineteenth-century development of the categories, most notably in G. W. F. Hegel's extensive tabulation of categories, and in C. S. Peirce's categories set out in his work on the logic of relations. One of Peirce's contributions was to call the three primary categories Firstness, Secondness and Thirdness, which both emphasises their general nature and avoids the confusion of having the same name for both the category itself and for a concept within that category.
In a separate development, and building on the notion of primary and secondary categories introduced by the Scholastics, Kant introduced the idea that secondary or "derivative" categories could be derived from the primary categories through the combination of one primary category with another. This would result in the formation of three secondary categories: the first, "Community", was an example that Kant gave of such a derivative category; the second, "Modality", introduced by Kant, was a term which Hegel, in developing Kant's dialectical method, showed could also be seen as a derivative category; and the third, "Spirit" or "Will", were terms that Hegel and Schopenhauer were developing separately for use in their own systems. Karl Jaspers in the twentieth century, in his development of existential categories, brought the three together, allowing for differences in terminology, as Substantiality, Communication and Will. This pattern of three primary and three secondary categories was used most notably in the nineteenth century by Peter Mark Roget to form the six headings of his Thesaurus of English Words and Phrases. The headings used were the three objective categories of Abstract Relation, Space (including Motion) and Matter, and the three subjective categories of Intellect, Feeling and Volition; he found that under these six headings all the words of the English language, and hence any possible predicate, could be assembled. In the twentieth century the primacy of the division between the subjective and the objective, or between mind and matter, was disputed by, among others, Bertrand Russell and Gilbert Ryle. Philosophy began to move away from the metaphysics of categorisation towards the linguistic problem of trying to differentiate between, and define, the words being used. Ludwig Wittgenstein's conclusion was that there were no clear definitions which we can give to words and categories, but only a "halo" or "corona" of related meanings radiating around each term. Gilbert Ryle thought the problem could be seen in terms of dealing with "a galaxy of ideas" rather than a single idea, and suggested that category mistakes are made when a concept (e.g. "university"), understood as falling under one category (e.g. abstract idea), is used as though it falls under another (e.g. physical object). With regard to the visual analogies being used, Peirce and Lewis, just like Plotinus earlier, likened the terms of propositions to points, and the relations between the terms to lines. Peirce, taking this further, talked of univalent, bivalent and trivalent relations linking predicates to their subject, and it is just the number and types of relation linking subject and predicate that determine the category into which a predicate might fall. Primary categories contain concepts where there is one dominant kind of relation to the subject. Secondary categories contain concepts where there are two dominant kinds of relation. Examples of the latter were given by Heidegger in his two propositions "the house is on the creek", where the two dominant relations are spatial location (Disjunction) and cultural association (Inherence), and "the house is eighteenth century", where the two relations are temporal location (Causality) and cultural quality (Inherence). A third example may be inferred from Kant in the proposition "the house is impressive or sublime", where the two relations are spatial or mathematical disposition (Disjunction) and dynamic or motive power (Causality).
Both Peirce and Wittgenstein introduced the analogy of colour theory in order to illustrate the shades of meanings of words. Primary categories, like primary colours, are analytical, representing the furthest we can go in terms of analysis and abstraction, and include Quantity, Motion and Quality. Secondary categories, like secondary colours, are synthetic and include concepts such as Substance, Community and Spirit. One of Aristotle's early interests lay in the classification of the natural world, how for example the genus "animal" could be first divided into "two-footed animal" and then into "wingless, two-footed animal". He realised that the distinctions were being made according to the qualities the animal possesses, the quantity of its parts and the kind of motion that it exhibits. To fully complete the proposition "this animal is ...", Aristotle stated in his work on the Categories that there were ten kinds of predicate, where "... each signifies either substance or quantity or quality or relation or where or when or being-in-a-position or having or acting or being acted upon". He realised that predicates could be simple or complex. The simple kinds consist of a subject and a predicate linked together by the "categorical" or inherent type of relation. For Aristotle the more complex kinds were limited to propositions where the predicate is compounded of two of the above categories, for example "this is a horse running". More complex kinds of proposition were only discovered after Aristotle by the Stoic, Chrysippus, who developed the "hypothetical" and "disjunctive" types of syllogism, terms which were to be developed through the Middle Ages and were to reappear in Kant's system of categories. "Category" came into use with Aristotle's essay "Categories", in which he discussed univocal and equivocal terms, predication, and the ten categories. Plotinus, in writing his "Enneads" around AD 250, recorded that "philosophy at a very early age investigated the number and character of the existents ... some found ten, others less ... to some the genera were the first principles, to others only a generic classification of existents". He realised that some categories were reducible to others, saying "why are not Beauty, Goodness and the virtues, Knowledge and Intelligence included among the primary genera?" He concluded that such transcendental categories, and even the categories of Aristotle, were in some way posterior to the three Eleatic categories first recorded in Plato's dialogue "Parmenides", which comprised three coupled terms. Plotinus called these "the hearth of reality", deriving from them not only the three categories of Quantity, Motion and Quality but also what came to be known as "the three moments of the Neoplatonic world process". Plotinus likened the three to the centre, the radii and the circumference of a circle, and clearly thought that the principles underlying the categories were the first principles of creation. "From a single root all being multiplies". Similar ideas were to be introduced into Early Christian thought by, for example, Gregory of Nazianzus, who summed it up saying "Therefore Unity, having from all eternity arrived by motion at duality, came to rest in trinity". In the "Critique of Pure Reason" (1781), Immanuel Kant argued that the categories are part of our own mental structure and consist of a set of "a priori" concepts through which we interpret the world around us.
These concepts correspond to twelve logical functions of the understanding which we use to make judgements, and there are therefore two tables given in the "Critique", one of the Judgements and a corresponding one for the Categories. To give an example, the logical function behind our reasoning from ground to consequence (based on the Hypothetical relation) underlies our understanding of the world in terms of cause and effect (the Causal relation). In each table the number twelve arises from, firstly, an initial division into two, the Mathematical and the Dynamical; a second division of each of these headings into a further two, Quantity and Quality, and Relation and Modality respectively; and, thirdly, the division of each of these into a further three subheadings. In the Table of Judgements these subheadings are: under Quantity, the universal, particular and singular; under Quality, the affirmative, negative and infinite; under Relation, the categorical, hypothetical and disjunctive; and under Modality, the problematic, assertoric and apodictic. In the Table of Categories they are: under Quantity, unity, plurality and totality; under Quality, reality, negation and limitation; under Relation, inherence, causality and community; and under Modality, possibility, existence and necessity. Criticism of Kant's system followed, firstly, by Arthur Schopenhauer, who amongst other things was unhappy with the term "Community", and declared that the tables "do open violence to truth, treating it as nature was treated by old-fashioned gardeners", and secondly, by W. T. Stace, who in his book "The Philosophy of Hegel" suggested that in order to make Kant's structure completely symmetrical a third category would need to be added to the Mathematical and the Dynamical. This, he said, Hegel was to do with his category of Notion. G.W.F. Hegel in his "Science of Logic" (1812) attempted to provide a more comprehensive system of categories than Kant and developed a structure that was almost entirely triadic. So important were the categories to Hegel that he claimed "the first principle of the world, the Absolute, is a system of categories ... the categories must be the reason of which the world is a consequent". Using his own logical method of combination, later to be called the Hegelian dialectic, of arguing from thesis through antithesis to synthesis, he arrived, as shown in W. T. Stace's work cited, at a hierarchy of some 270 categories. The three very highest categories were Logic, Nature and Spirit; the three highest categories of Logic, however, he called Being, Essence and Notion. Schopenhauer's category that corresponded with Notion was that of Idea, which in his "Four-Fold Root of Sufficient Reason" he complemented with the category of the Will. The title of his major work was "The World as Will and Idea". The two other complementary categories, reflecting one of Hegel's initial divisions, were those of Being and Becoming. At around the same time, Goethe was developing his colour theories in the "Farbenlehre" of 1810, and introduced similar principles of combination and complementation, symbolising, for Goethe, "the primordial relations which belong both to nature and vision". Hegel in his "Science of Logic" accordingly asks us to see his system not as a tree but as a circle. Charles Sanders Peirce, who had read Kant and Hegel closely, and who also had some knowledge of Aristotle, proposed a system of merely three phenomenological categories, Firstness, Secondness, and Thirdness, which he repeatedly invoked in his subsequent writings. Like Hegel, C. S. Peirce attempted to develop a system of categories from a single indisputable principle, in Peirce's case the notion that in the first instance he could only be aware of his own ideas.
Although Peirce's three categories correspond to the three concepts of relation given in Kant's tables, the sequence is now reversed and follows that given by Hegel, and indeed, before Hegel, by the three moments of the world-process given by Plotinus. Later, Peirce gave a mathematical reason for there being three categories: although monadic, dyadic and triadic nodes are irreducible, every node of a higher valency is reducible to a "compound of triadic relations". Ferdinand de Saussure, who was developing "semiology" in France just as Peirce was developing "semiotics" in the US, likened each term of a proposition to "the centre of a constellation, the point where other coordinate terms, the sum of which is indefinite, converge". Edmund Husserl (1962, 2000) wrote extensively about categorial systems as part of his phenomenology. For Gilbert Ryle (1949), a category (in particular a "category mistake") is an important semantic concept, but one having only loose affinities to an ontological category. Contemporary systems of categories have been proposed by John G. Bennett (The Dramatic Universe, 4 vols., 1956–65), Wilfrid Sellars (1974), Reinhardt Grossmann (1983, 1992), Johansson (1989), Hoffman and Rosenkrantz (1994), Roderick Chisholm (1996), Barry Smith (2003), and Jonathan Lowe (2006).
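Peirce's reducibility claim can be illustrated computationally. The following toy sketch is my own encoding, not Peirce's notation; the relations and the arithmetic example are invented for illustration. It expresses a four-place relation as a compound of two three-place relations that share a hidden intermediate node, and checks the two agree over a small domain:

```python
# A toy illustration (not Peirce's notation) of reducing a tetradic
# relation to a compound of two triadic relations sharing a hidden node.
from itertools import product

DOMAIN = range(3)

def R(a, b, c, d):
    """An arbitrary 4-place relation used as the example."""
    return a + b == c + d

# Encode R via two triads P and Q linked by an intermediate value x.
def P(a, b, x):
    return x == a + b

def Q(x, c, d):
    return x == c + d

def R_compound(a, b, c, d):
    # R(a,b,c,d) holds iff some shared x satisfies both triads.
    return any(P(a, b, x) and Q(x, c, d) for x in range(7))

assert all(R(*t) == R_compound(*t) for t in product(DOMAIN, repeat=4))
print("tetradic relation reproduced from a compound of two triads")
```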
https://en.wikipedia.org/wiki?curid=5370
Concrete Concrete is a composite material composed of fine and coarse aggregate bonded together with a fluid cement (cement paste) that hardens (cures) over time. In the past, lime-based cement binders, such as lime putty, were often used, sometimes together with other hydraulic cements, such as calcium aluminate cement, or with Portland cement to form Portland cement concrete (named for its visual resemblance to Portland stone). Many other non-cementitious types of concrete exist with other methods of binding aggregate together, including asphalt concrete with a bitumen binder, which is frequently used for road surfaces, and polymer concretes that use polymers as a binder. When aggregate is mixed with dry Portland cement and water, the mixture forms a fluid slurry that is easily poured and molded into shape. The cement reacts with the water and other ingredients to form a hard matrix that binds the materials together into a durable stone-like material that has many uses. Often, additives (such as pozzolans or superplasticizers) are included in the mixture to improve the physical properties of the wet mix or the finished material. Most concrete is poured with reinforcing materials (such as rebar) embedded to provide tensile strength, yielding reinforced concrete. Because concrete cures (which is not the same as drying, as with paint), how concrete is handled after it is poured is just as important as how it is prepared before. Concrete is one of the most frequently used building materials. Its usage worldwide, ton for ton, is twice that of steel, wood, plastics, and aluminum combined. Globally, the ready-mix concrete industry, the largest segment of the concrete market, is projected to exceed $600 billion in revenue by 2025. Concrete is distinct from mortar: whereas concrete is itself a building material, mortar is a bonding agent that typically holds bricks, tiles and other masonry units together. The word concrete comes from the Latin word "concretus" (meaning compact or condensed), the perfect passive participle of "concrescere", from "con-" (together) and "crescere" (to grow). Mayan concrete at the ruins of Uxmal is referenced in Incidents of Travel in the Yucatán by John L. Stephens: "The roof is flat and had been covered with cement". "The floors were cement, in some places hard, but, by long exposure, broken, and now crumbling under the feet." "But throughout the wall was solid, and consisting of large stones imbedded in mortar, almost as hard as rock." Small-scale production of concrete-like materials was pioneered by the Nabataean traders who occupied and controlled a series of oases and developed a small empire in the regions of southern Syria and northern Jordan from the 4th century BC. They discovered the advantages of hydraulic lime, with some self-cementing properties, by 700 BC. They built kilns to supply mortar for the construction of rubble masonry houses, concrete floors, and underground waterproof cisterns. They kept the cisterns secret, as these enabled the Nabataeans to thrive in the desert. Some of these structures survive to this day. In the Ancient Egyptian and later Roman eras, builders discovered that adding volcanic ash to the mix allowed it to set underwater. Concrete floors were found in the royal palace of Tiryns, Greece, which dates roughly to 1400–1200 BC. Lime mortars were used in Greece, Crete, and Cyprus in 800 BC. The Assyrian Jerwan Aqueduct (688 BC) made use of waterproof concrete. Concrete was used for construction in many ancient structures.
The Romans used concrete extensively from 300 BC to 476 AD. During the Roman Empire, Roman concrete (or "opus caementicium") was made from quicklime, pozzolana and an aggregate of pumice. Its widespread use in many Roman structures, a key event in the history of architecture termed the Roman architectural revolution, freed Roman construction from the restrictions of stone and brick materials. It enabled revolutionary new designs in terms of both structural complexity and dimension. The Colosseum in Rome was built largely of concrete, and the concrete dome of the Pantheon is the world's largest unreinforced concrete dome. Concrete, as the Romans knew it, was a new and revolutionary material. Laid in the shape of arches, vaults and domes, it quickly hardened into a rigid mass, free from many of the internal thrusts and strains that troubled the builders of similar structures in stone or brick. Modern tests show that "opus caementicium" had as much compressive strength as modern Portland-cement concrete. However, due to the absence of reinforcement, its tensile strength was far lower than that of modern reinforced concrete, and its mode of application also differed. Modern structural concrete differs from Roman concrete in two important details. First, its mix consistency is fluid and homogeneous, allowing it to be poured into forms rather than requiring hand-layering together with the placement of aggregate, which, in Roman practice, often consisted of rubble. Second, integral reinforcing steel gives modern concrete assemblies great strength in tension, whereas Roman concrete could depend only upon the strength of the concrete bonding to resist tension. The long-term durability of Roman concrete structures has been found to be due to its use of pyroclastic (volcanic) rock and ash, whereby crystallization of strätlingite (a specific and complex calcium aluminosilicate hydrate) and the coalescence of this and similar calcium-aluminum-silicate-hydrate cementing binders helped give the concrete a greater degree of fracture resistance even in seismically active environments. Roman concrete is significantly more resistant to erosion by seawater than modern concrete; it used pyroclastic materials which react with seawater to form Al-tobermorite crystals over time. The widespread use of concrete in many Roman structures ensured that many survive to the present day. The Baths of Caracalla in Rome are just one example. Many Roman aqueducts and bridges, such as the magnificent Pont du Gard in southern France, have masonry cladding on a concrete core, as does the dome of the Pantheon. After the Roman Empire collapsed, use of concrete became rare until the technology was redeveloped in the mid-18th century. Worldwide, concrete has overtaken steel in tonnage of material used. After the Roman Empire, the use of burned lime and pozzolana was greatly reduced. Low kiln temperatures in the burning of lime, lack of pozzolana and poor mixing all contributed to a decline in the quality of concrete and mortar. From the 11th century, the increased use of stone in church and castle construction led to an increased demand for mortar. Quality began to improve in the 12th century through better grinding and sieving. Medieval lime mortars and concretes were non-hydraulic and were used for binding masonry, "hearting" (binding rubble masonry cores) and foundations. Bartholomaeus Anglicus in his "De proprietatibus rerum" (1240) describes the making of mortar. In an English translation of 1397, it reads "lyme ...
is a stone brent; by medlynge thereof with sonde and water sement is made". From the 14th century the quality of mortar was again excellent, but only from the 17th century was pozzolana commonly added. The "Canal du Midi" was built using concrete in 1670. Perhaps the greatest step forward in the modern use of concrete was Smeaton's Tower, built by British engineer John Smeaton in Devon, England, between 1756 and 1759. This third Eddystone Lighthouse pioneered the use of hydraulic lime in concrete, using pebbles and powdered brick as aggregate. A method for producing Portland cement was developed in England and patented by Joseph Aspdin in 1824. Aspdin chose the name for its similarity to Portland stone, which was quarried on the Isle of Portland in Dorset, England. His son William continued developments into the 1840s, earning him recognition for the development of "modern" Portland cement. Reinforced concrete was invented in 1849 by Joseph Monier, and the first house was built by François Coignet in 1853. The first reinforced concrete bridge was designed and built by Joseph Monier in 1875. Concrete is a composite material, comprising a matrix of aggregate (typically a rocky material) and a binder (typically Portland cement or asphalt), which holds the matrix together. Many types of concrete are available, determined by the formulations of binders and the types of aggregate used to suit the application for the material. These variables determine strength and density, as well as the chemical and thermal resistance, of the finished product. Aggregate consists of large chunks of material in a concrete mix, generally coarse gravel or crushed rock such as limestone or granite, along with finer materials such as sand. Cement, most commonly Portland cement, is the most prevalent kind of concrete binder. For cementitious binders, water is mixed with the dry powder and aggregate, which produces a semi-liquid slurry that can be shaped, typically by pouring it into a form. The concrete solidifies and hardens through a chemical process called hydration. The water reacts with the cement, which bonds the other components together, creating a robust stone-like material. Other cementitious materials, such as fly ash and slag cement, are sometimes added—either pre-blended with the cement or directly as a concrete component—and become a part of the binder for the aggregate. Fly ash and slag can enhance some properties of concrete, such as fresh-state properties and durability. Admixtures are added to modify the cure rate or properties of the material. Mineral admixtures use recycled materials as concrete ingredients. Notable materials include fly ash, a by-product of coal-fired power plants; ground granulated blast furnace slag, a byproduct of steelmaking; and silica fume, a byproduct of industrial electric arc furnaces. Structures employing Portland cement concrete usually include steel reinforcement because this type of concrete can be formulated with high compressive strength but always has lower tensile strength. Therefore, it is usually reinforced with materials that are strong in tension, typically steel rebar. Other materials can also be used as a concrete binder; the most prevalent alternative is asphalt, which is used as the binder in asphalt concrete. The "mix design" depends on the type of structure being built, how the concrete is mixed and delivered, and how it is placed to form the structure. Portland cement is the most common type of cement in general usage.
It is a basic ingredient of concrete, mortar and many plasters. British masonry worker Joseph Aspdin patented Portland cement in 1824. It was so named because of the similarity of its color to Portland limestone, quarried from the English Isle of Portland and used extensively in London architecture. It consists of a mixture of calcium silicates (alite, belite), aluminates and ferrites—compounds which combine calcium, silicon, aluminum and iron in forms which will react with water. Portland cement and similar materials are made by heating limestone (a source of calcium) with clay or shale (a source of silicon, aluminum and iron) and grinding this product (called "clinker") with a source of sulfate (most commonly gypsum). In modern cement kilns many advanced features are used to lower the fuel consumption per ton of clinker produced. Cement kilns are extremely large, complex, and inherently dusty industrial installations, and have emissions which must be controlled. Of the various ingredients used to produce a given quantity of concrete, the cement is the most energetically expensive. Even complex and efficient kilns require 3.3 to 3.6 gigajoules of energy to produce a ton of clinker and then grind it into cement. Many kilns can be fueled with difficult-to-dispose-of wastes, the most common being used tires. The extremely high temperatures and long periods of time at those temperatures allow cement kilns to efficiently and completely burn even difficult-to-use fuels. Combining water with a cementitious material forms a cement paste by the process of hydration. The cement paste glues the aggregate together, fills voids within it, and makes it flow more freely. As stated by Abrams' law, a lower water-to-cement ratio yields a stronger, more durable concrete, whereas more water gives a freer-flowing concrete with a higher slump. Impure water used to make concrete can cause problems during setting or premature failure of the structure. Hydration involves many reactions, often occurring at the same time. As the reactions proceed, the products of the cement hydration process gradually bond together the individual sand and gravel particles and other components of the concrete to form a solid mass. Fine and coarse aggregates make up the bulk of a concrete mixture. Sand, natural gravel, and crushed stone are used mainly for this purpose. Recycled aggregates (from construction, demolition, and excavation waste) are increasingly used as partial replacements for natural aggregates, while a number of manufactured aggregates, including air-cooled blast furnace slag and bottom ash, are also permitted. The size distribution of the aggregate determines how much binder is required. Aggregate with a very even size distribution has the biggest gaps, whereas adding aggregate with smaller particles tends to fill these gaps. The binder must fill the gaps between the aggregate as well as paste the surfaces of the aggregate together, and is typically the most expensive component. Thus, variation in the sizes of the aggregate reduces the cost of concrete. The aggregate is nearly always stronger than the binder, so its use does not negatively affect the strength of the concrete. Redistribution of aggregates after compaction often creates inhomogeneity due to the influence of vibration. This can lead to strength gradients.
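Abrams' law, cited above, is commonly written as an inverse exponential in the water-cement ratio. A minimal sketch follows; the coefficients A and B below are illustrative placeholders, since real values are fitted empirically to specific materials and test ages:

```python
# Minimal sketch of Abrams' law: compressive strength falls as the
# water-cement ratio rises. The constants A and B are illustrative
# placeholders; real values are fitted to specific materials and ages.
def abrams_strength(w_c_ratio: float, A: float = 96.5, B: float = 8.2) -> float:
    """Estimated compressive strength in MPa for a given water/cement
    ratio (by mass), using S = A / B**(w/c)."""
    return A / (B ** w_c_ratio)

for wc in (0.40, 0.50, 0.60):
    print(f"w/c = {wc:.2f}  ->  ~{abrams_strength(wc):.0f} MPa")
# Lower w/c gives higher strength, as the text states.
```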
Decorative stones such as quartzite, small river stones or crushed glass are sometimes added to the surface of concrete for a decorative "exposed aggregate" finish, popular among landscape designers. Concrete is strong in compression, as the aggregate efficiently carries the compression load. However, it is weak in tension, as the cement holding the aggregate in place can crack, allowing the structure to fail. Reinforced concrete adds either steel reinforcing bars, steel fibers, aramid fibers, carbon fibers, glass fibers, or plastic fibers to carry tensile loads. Admixtures are materials in the form of powder or fluids that are added to the concrete to give it certain characteristics not obtainable with plain concrete mixes. Admixtures are defined as additions "made as the concrete mix is being prepared". The most common admixtures are retarders and accelerators. In normal use, admixture dosages are less than 5% by mass of cement and are added to the concrete at the time of batching/mixing. Mineral admixtures are inorganic materials that have pozzolanic or latent hydraulic properties; these very fine-grained materials are added to the concrete mix to improve the properties of the concrete, or as a replacement for Portland cement (blended cements). Products which incorporate limestone, fly ash, blast furnace slag, and other useful materials with pozzolanic properties into the mix are being tested and used. This development is due to cement production being one of the largest producers (at about 5 to 10%) of global greenhouse gas emissions, as well as to the goals of lowering costs, improving concrete properties, and recycling wastes. Concrete production is the process of mixing together the various ingredients—water, aggregate, cement, and any additives—to produce concrete. Concrete production is time-sensitive: once the ingredients are mixed, workers must put the concrete in place before it hardens. In modern usage, most concrete production takes place in a large type of industrial facility called a concrete plant, or often a batch plant. In general usage, concrete plants come in two main types, ready-mix plants and central-mix plants. A ready-mix plant mixes all the ingredients except water, while a central-mix plant mixes all the ingredients including water. A central-mix plant offers more accurate control of the concrete quality through better measurements of the amount of water added, but must be placed closer to the work site where the concrete will be used, since hydration begins at the plant. A concrete plant consists of large storage hoppers for various reactive ingredients like cement, storage for bulk ingredients like aggregate and water, mechanisms for the addition of various additives and amendments, machinery to accurately weigh, move, and mix some or all of those ingredients, and facilities to dispense the mixed concrete, often to a concrete mixer truck. Modern concrete is usually prepared as a viscous fluid, so that it may be poured into forms, which are containers erected in the field to give the concrete its desired shape. Concrete formwork can be prepared in several ways, such as slip forming and steel plate construction. Alternatively, concrete can be mixed into dryer, non-fluid forms and used in factory settings to manufacture precast concrete products. A wide variety of equipment is used for processing concrete, from hand tools to heavy industrial machinery.
Whichever equipment builders use, however, the objective is to produce the desired building material; ingredients must be properly mixed, placed, shaped, and retained within time constraints. Any interruption in pouring the concrete can cause the initially placed material to begin to set before the next batch is added on top. This creates a horizontal plane of weakness called a "cold joint" between the two batches. Once the mix is where it should be, the curing process must be controlled to ensure that the concrete attains the desired attributes. During concrete preparation, various technical details may affect the quality and nature of the product. Thorough mixing is essential to produce uniform, high-quality concrete. Concrete mixes are primarily divided into two types, "nominal mix" and "design mix". Nominal mix ratios are given by volume as cement : sand : coarse aggregate. Nominal mixes are a simple, fast way of getting a basic idea of the properties of the finished concrete without having to perform testing in advance. Various governing bodies (such as British Standards) define nominal mix ratios into a number of grades, usually ranging from lower compressive strength to higher compressive strength. The grades usually indicate the 28-day cube strength. For example, in Indian standards, the mixes of grades M10, M15, M20 and M25 correspond approximately to the mix proportions (1:3:6), (1:2:4), (1:1.5:3) and (1:1:2) respectively (a short illustrative sketch of these ratios follows the discussion of workability and curing below). Design mix ratios are decided by an engineer after analyzing the properties of the specific ingredients being used. Instead of using a 'nominal mix' of 1 part cement, 2 parts sand, and 4 parts aggregate (the second example from above), a civil engineer will custom-design a concrete mix to exactly meet the requirements of the site and conditions, setting material ratios and often designing an admixture package to fine-tune the properties or increase the performance envelope of the mix. Design-mix concrete can have very broad specifications that cannot be met with more basic nominal mixes, but the involvement of the engineer often increases the cost of the concrete mix. Workability is the ability of a fresh (plastic) concrete mix to fill the form/mold properly with the desired work (pouring, pumping, spreading, tamping, vibration) and without reducing the concrete's quality. Workability depends on water content, aggregate (shape and size distribution), cementitious content and age (level of hydration), and can be modified by adding chemical admixtures, like superplasticizer. Raising the water content or adding chemical admixtures increases concrete workability. Excessive water leads to increased bleeding or segregation of aggregates (when the cement and aggregates start to separate), with the resulting concrete having reduced quality. The use of an aggregate blend with an undesirable gradation can result in a very harsh mix design with a very low slump, which cannot readily be made more workable by addition of reasonable amounts of water. An undesirable gradation can mean using a large aggregate that is too large for the size of the formwork, or which has too few smaller aggregate grades to serve to fill the gaps between the larger grades, or using too little or too much sand for the same reason, or using too little water, or too much cement, or even using jagged crushed stone instead of smoother round aggregate such as pebbles.
Any combination of these factors and others may result in a mix which is too harsh, i.e., which does not flow or spread out smoothly, is difficult to get into the formwork, and is difficult to surface finish. Workability can be measured by the concrete slump test, a simple measure of the plasticity of a fresh batch of concrete following the ASTM C 143 or EN 12350-2 test standards. Slump is normally measured by filling an "Abrams cone" with a sample from a fresh batch of concrete. The cone is placed with the wide end down onto a level, non-absorptive surface. It is then filled in three layers of equal volume, with each layer being tamped with a steel rod to consolidate the layer. When the cone is carefully lifted off, the enclosed material slumps a certain amount, owing to gravity. A relatively dry sample slumps very little, having a slump value of one or two inches (25 or 50 mm) out of one foot (305 mm). A relatively wet concrete sample may slump as much as eight inches (200 mm). Workability can also be measured by the flow table test. Slump can be increased by addition of chemical admixtures such as plasticizer or superplasticizer without changing the water-cement ratio. Some other admixtures, especially air-entraining admixtures, can increase the slump of a mix. High-flow concrete, like self-consolidating concrete, is tested by other flow-measuring methods. One of these methods includes placing the cone on the narrow end and observing how the mix flows through the cone while it is gradually lifted. After mixing, concrete is a fluid and can be pumped to the location where it is needed. Concrete must be kept moist during curing in order to achieve optimal strength and durability. During curing, hydration occurs, allowing calcium-silicate hydrate (C-S-H) to form. Over 90% of a mix's final strength is typically reached within four weeks, with the remaining 10% achieved over years or even decades. The conversion of calcium hydroxide in the concrete into calcium carbonate from absorption of CO2 over several decades further strengthens the concrete and makes it more resistant to damage. This carbonation reaction, however, lowers the pH of the cement pore solution and can corrode the reinforcement bars. Hydration and hardening of concrete during the first three days is critical. Abnormally fast drying and shrinkage due to factors such as evaporation from wind during placement may lead to increased tensile stresses at a time when the concrete has not yet gained sufficient strength, resulting in greater shrinkage cracking. The early strength of the concrete can be increased if it is kept damp during the curing process. Minimizing stress prior to curing minimizes cracking. High-early-strength concrete is designed to hydrate faster, often by increased use of cement, which increases shrinkage and cracking. The strength of concrete continues to increase for up to three years, depending on the cross-sectional dimensions of the elements and the service conditions of the structure. The addition of short-cut polymer fibers can reduce shrinkage-induced stresses during curing and increase early and ultimate compression strength. Properly curing concrete leads to increased strength and lower permeability, and avoids cracking where the surface dries out prematurely. Care must also be taken to avoid freezing or overheating due to the exothermic setting of cement. Improper curing can cause scaling, reduced strength, poor abrasion resistance and cracking.
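As promised above, here is a minimal sketch of the nominal mix grades quoted earlier. The grade-to-ratio table comes straight from the text; the helper function and the simple volume split are illustrative assumptions (real batching corrects for aggregate bulking and voids):

```python
# Nominal mix grades quoted in the text (Indian standards), expressed as
# cement : sand : coarse-aggregate proportions by volume. The grade number
# is the approximate 28-day cube strength in MPa.
NOMINAL_MIXES = {
    "M10": (1, 3, 6),
    "M15": (1, 2, 4),
    "M20": (1, 1.5, 3),
    "M25": (1, 1, 2),
}

def batch_volumes(grade: str, total_m3: float):
    """Split a required concrete volume into ingredient volumes
    according to the nominal mix ratio (a rough field approximation)."""
    ratio = NOMINAL_MIXES[grade]
    parts = sum(ratio)
    return {name: total_m3 * r / parts
            for name, r in zip(("cement", "sand", "aggregate"), ratio)}

print(batch_volumes("M20", 1.0))
# cement ~0.18 m^3, sand ~0.27 m^3, aggregate ~0.55 m^3 per cubic metre
```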
During the curing period, concrete is ideally maintained at controlled temperature and humidity. To ensure full hydration during curing, concrete slabs are often sprayed with "curing compounds" that create a water-retaining film over the concrete. Typical films are made of wax or related hydrophobic compounds. After the concrete is sufficiently cured, the film is allowed to abrade from the concrete through normal use. Traditional conditions for curing involve spraying or ponding the concrete surface with water. One of many ways to achieve this is ponding: submerging the setting concrete in water and wrapping it in plastic to prevent dehydration. Additional common curing methods include wet burlap and plastic sheeting covering the fresh concrete. For higher-strength applications, accelerated curing techniques may be applied to the concrete. A common technique involves heating the poured concrete with steam, which serves to both keep it damp and raise the temperature, so that the hydration process proceeds more quickly and more thoroughly. "Asphalt concrete" (commonly called "asphalt", "blacktop", or "pavement" in North America, and "tarmac", "bitumen macadam", or "rolled asphalt" in the United Kingdom and the Republic of Ireland) is a composite material commonly used to surface roads, parking lots and airports, as well as the core of embankment dams. Asphalt mixtures have been used in pavement construction since the beginning of the twentieth century. It consists of mineral aggregate bound together with asphalt, laid in layers, and compacted. The process was refined and enhanced by Belgian inventor and U.S. immigrant Edward De Smedt. The terms "asphalt" (or "asphaltic") "concrete", "bituminous asphalt concrete", and "bituminous mixture" are typically used only in engineering and construction documents, which define concrete as any composite material composed of mineral aggregate adhered with a binder. The abbreviation "AC" is sometimes used for "asphalt concrete" but can also denote "asphalt content" or "asphalt cement", referring to the liquid asphalt portion of the composite material. Pervious concrete is a mix of specially graded coarse aggregate, cement, water and little-to-no fine aggregates. This concrete is also known as "no-fines" or porous concrete. Mixing the ingredients in a carefully controlled process creates a paste that coats and bonds the aggregate particles. The hardened concrete contains interconnected air voids totaling approximately 15 to 25 percent. Water runs through the voids in the pavement to the soil underneath. Air-entrainment admixtures are often used in freeze-thaw climates to minimize the possibility of frost damage. Pervious concrete also permits rainwater to filter through roads and parking lots, to recharge aquifers, instead of contributing to runoff and flooding. Nanoconcrete (also spelled "nano concrete" or "nano-concrete") is a class of materials that contains Portland cement particles that are no greater than 100 μm and particles of silica no greater than 500 μm, which fill voids that would otherwise occur in normal concrete, thereby substantially increasing the material's strength. It is widely used in foot bridges and highway bridges where high flexural and compressive strength are indicated. Bacteria such as "Bacillus pasteurii", "Bacillus pseudofirmus", "Bacillus cohnii", "Sporosarcina pasteuri", and "Arthrobacter crystallopoietes" increase the compression strength of concrete through their biomass.
Not all bacteria increase the strength of concrete significantly with their biomass. Bacillus sp. CT-5 can reduce corrosion of reinforcement in reinforced concrete by up to four times. "Sporosarcina pasteurii" reduces water and chloride permeability. "B. pasteurii" increases resistance to acid. "Bacillus pasteurii" and "B. sphaericus" can induce calcium carbonate precipitation on the surface of cracks, adding compressive strength. Polymer concretes are mixtures of aggregate and any of various polymers and may be reinforced. The polymer binder is costlier than lime-based cements, but polymer concretes nevertheless have advantages; they have significant tensile strength even without reinforcement, and they are largely impervious to water. Polymer concretes are frequently used for repairs and for special applications, such as drains. Grinding of concrete can produce hazardous dust. Exposure to cement dust can lead to issues such as silicosis, kidney disease, skin irritation and similar effects. The U.S. National Institute for Occupational Safety and Health recommends attaching local exhaust ventilation shrouds to electric concrete grinders to control the spread of this dust. In addition, the Occupational Safety and Health Administration (OSHA) has placed more stringent regulations on companies whose workers regularly come into contact with silica dust. An updated silica rule, which OSHA put into effect 23 September 2017 for construction companies, restricted the amount of respirable crystalline silica workers could legally come into contact with to 50 micrograms per cubic meter of air per 8-hour workday. That same rule went into effect 23 June 2018 for general industry, hydraulic fracturing and maritime; the deadline was extended to 23 June 2021 for engineering controls in the hydraulic fracturing industry. Companies which fail to meet the tightened safety regulations can face fines and extensive penalties.
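The 8-hour time-weighted average behind the 50-microgram limit is straightforward to compute from task-level measurements. A minimal sketch, with a hypothetical shift breakdown (a real compliance assessment would follow the sampling and analysis methods specified in the rule):

```python
# Checking task-level silica samples against the OSHA limit of
# 50 micrograms per cubic meter, expressed as an 8-hour
# time-weighted average (TWA). The shift breakdown is hypothetical.

OSHA_PEL_UG_M3 = 50.0  # respirable crystalline silica, 8-hour TWA

def eight_hour_twa(samples: list[tuple[float, float]]) -> float:
    """samples are (concentration in ug/m^3, duration in hours) pairs;
    unsampled time within the 8-hour shift counts as zero exposure."""
    return sum(conc * hours for conc, hours in samples) / 8.0

shift = [(120.0, 2.0),  # grinding without ventilation
         (30.0, 4.0),   # grinding with an exhaust shroud
         (0.0, 2.0)]    # no dust exposure
twa = eight_hour_twa(shift)
print(f"TWA = {twa:.1f} ug/m^3 ({'over' if twa > OSHA_PEL_UG_M3 else 'within'} the limit)")
```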
Concrete has relatively high compressive strength, but much lower tensile strength. Therefore, it is usually reinforced with materials that are strong in tension (often steel). The elasticity of concrete is relatively constant at low stress levels but starts decreasing at higher stress levels as matrix cracking develops. Concrete has a very low coefficient of thermal expansion and shrinks as it matures. All concrete structures crack to some extent, due to shrinkage and tension. Concrete that is subjected to long-duration forces is prone to creep. Tests can be performed to ensure that the properties of concrete correspond to specifications for the application. The ingredients affect the strength of the material. Concrete strength values are usually specified as the lower-bound compressive strength of either a cylindrical or cubic specimen as determined by standard test procedures. The strength required of concrete is dictated by its function. Very low-strength concrete may be used when the concrete must be lightweight. Lightweight concrete is often achieved by adding air, foams, or lightweight aggregates, with the side effect that the strength is reduced. For most routine uses, moderate-strength concrete is employed, with more durable, although more expensive, higher-strength mixes readily available commercially. Higher-strength concrete is often used for larger civil projects, and the highest strengths are reserved for specific building elements. For example, the lower floor columns of high-rise concrete buildings may use very high-strength concrete to keep the size of the columns small. Bridges may use long beams of high-strength concrete to lower the number of spans required. Occasionally, other structural needs may require high-strength concrete. If a structure must be very rigid, concrete of very high strength may be specified, even much stronger than is required to bear the service loads. Very high strengths have been used commercially for these reasons. Concrete is one of the most durable building materials. It provides superior fire resistance compared with wooden construction and gains strength over time. Structures made of concrete can have a long service life. Concrete is used more than any other artificial material in the world. As of 2006, about 7.5 billion cubic meters of concrete are made each year, more than one cubic meter for every person on Earth. Due to cement's exothermic chemical reaction while setting up, large concrete structures such as dams, navigation locks, large mat foundations, and large breakwaters generate excessive heat during hydration and associated expansion. To mitigate these effects, "post-cooling" is commonly applied during construction. An early example at Hoover Dam used a network of pipes between vertical concrete placements to circulate cooling water during the curing process to avoid damaging overheating. Similar systems are still used; depending on the volume of the pour, the concrete mix used, and ambient air temperature, the cooling process may last for many months after the concrete is placed. Various methods also are used to pre-cool the concrete mix in mass concrete structures. Another approach to mass concrete structures that minimizes cement's thermal byproduct is the use of roller-compacted concrete, which uses a dry mix with a much lower cooling requirement than conventional wet placement. It is deposited in thick layers as a semi-dry material, then roller compacted into a dense, strong mass. Raw concrete surfaces tend to be porous and have a relatively uninteresting appearance. Many finishes can be applied to improve the appearance and preserve the surface against staining, water penetration, and freezing. Examples of improved appearance include stamped concrete, where the wet concrete has a pattern impressed on the surface to give a paved, cobbled or brick-like effect, which may be accompanied by coloration. Another popular effect for flooring and table tops is polished concrete, where the concrete is polished optically flat with diamond abrasives and sealed with polymers or other sealants. Other finishes can be achieved with chiseling, or with more conventional techniques such as painting or covering the surface with other materials. The proper treatment of the surface of concrete, and therefore its characteristics, is an important stage in the construction and renovation of architectural structures. Prestressed concrete is a form of reinforced concrete that builds in compressive stresses during construction to oppose tensile stresses experienced in use. This can greatly reduce the weight of beams or slabs, by better distributing the stresses in the structure to make optimal use of the reinforcement. For example, a horizontal beam tends to sag. Prestressed reinforcement along the bottom of the beam counteracts this.
In pre-tensioned concrete, the prestressing is achieved by using steel or polymer tendons or bars that are subjected to a tensile force prior to casting; in post-tensioned concrete, they are tensioned after casting. A large share of the highways in the United States are paved with this material. Reinforced concrete, prestressed concrete and precast concrete are the most widely used functional extensions of concrete in modern times (see Brutalism). Extreme weather conditions (extreme heat or cold, wind, and humidity variations) can significantly alter the quality of concrete. Many precautions are observed in cold weather placement. Low temperatures significantly slow the chemical reactions involved in hydration of cement, thus affecting the strength development. Preventing freezing is the most important precaution, as formation of ice crystals can cause damage to the crystalline structure of the hydrated cement paste. If the surface of the concrete pour is insulated from the outside temperatures, the heat of hydration will prevent freezing. The American Concrete Institute standard ACI 306 defines cold weather placement. In Canada, where temperatures tend to be much lower during the cold season, CSA A23.1 sets the criteria: the minimum strength before exposing concrete to extreme cold is 500 psi (3.5 MPa), and a compressive strength of 7.0 MPa is specified as safe for exposure to freezing. Concrete roads are more fuel efficient to drive on, more reflective and last significantly longer than other paving surfaces, yet have a much smaller market share than other paving solutions. Modern paving methods and design practices have changed the economics of concrete paving, so that a well-designed and placed concrete pavement will be less expensive in initial cost and significantly less expensive over the life cycle. Another major benefit is that pervious concrete can be used, which eliminates the need to place storm drains near the road and reduces the need for slightly sloped roadways to help rainwater run off. Dispensing with storm drains also means that less electricity is needed (pumping is otherwise required in the water-distribution system), and no rainwater gets polluted, as it no longer mixes with polluted water; rather, it is immediately absorbed by the ground. Energy requirements for transportation of concrete are low because it is produced locally from local resources, typically manufactured within 100 kilometers of the job site. Similarly, relatively little energy is used in producing and combining the raw materials (although large amounts of CO2 are produced by the chemical reactions in cement manufacture). The overall embodied energy of concrete, at roughly 1 to 1.5 megajoules per kilogram, is therefore lower than for most structural and construction materials.
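That embodied-energy figure lends itself to a quick order-of-magnitude check. The sketch below assumes a typical normal-weight density of about 2,400 kg per cubic meter; both the density and the pour volume are rough illustrative assumptions.

```python
# Order-of-magnitude embodied-energy check using the 1.0-1.5 MJ/kg
# range quoted above. Density and pour volume are assumed values.

DENSITY_KG_M3 = 2400.0        # typical normal-weight concrete
EMBODIED_MJ_PER_KG = (1.0, 1.5)

def embodied_energy_gj(volume_m3: float) -> tuple[float, float]:
    mass_kg = volume_m3 * DENSITY_KG_M3
    low, high = EMBODIED_MJ_PER_KG
    return mass_kg * low / 1000.0, mass_kg * high / 1000.0

low, high = embodied_energy_gj(10.0)  # a hypothetical 10 m^3 pour
print(f"10 m^3 pour: roughly {low:.0f}-{high:.0f} GJ embodied energy")
```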
Once in place, concrete offers great energy efficiency over the lifetime of a building. Concrete walls leak air far less than those made of wood frames. Air leakage accounts for a large percentage of energy loss from a home. The thermal mass properties of concrete increase the efficiency of both residential and commercial buildings. By storing and releasing the energy needed for heating or cooling, concrete's thermal mass delivers year-round benefits by reducing temperature swings inside and minimizing heating and cooling costs. While insulation reduces energy loss through the building envelope, thermal mass uses walls to store and release energy. Modern concrete wall systems use both external insulation and thermal mass to create an energy-efficient building. Insulating concrete forms (ICFs) are hollow blocks or panels made of either insulating foam or rastra that are stacked to form the shape of the walls of a building and then filled with reinforced concrete to create the structure. Concrete buildings are more resistant to fire than those constructed using steel frames, since concrete has lower heat conductivity than steel and can thus last longer under the same fire conditions. Concrete is sometimes used as fire protection for steel frames, for the same reason. Concrete as a fire shield, for example Fondu fyre, can also be used in extreme environments like a missile launch pad. Options for non-combustible construction include floors, ceilings and roofs made of cast-in-place and hollow-core precast concrete. For walls, concrete masonry technology and the insulating concrete forms described above are additional options. Concrete also provides good resistance against externally applied forces such as high winds, hurricanes, and tornadoes, owing to its lateral stiffness, which results in minimal horizontal movement. However, this stiffness can work against certain types of concrete structures, particularly where a relatively more flexible structure is required to resist extreme forces. As discussed above, concrete is very strong in compression, but weak in tension. Larger earthquakes can generate very large shear loads on structures. These shear loads subject the structure to both tensile and compressive loads. Concrete structures without reinforcement, like other unreinforced masonry structures, can fail during severe earthquake shaking. Unreinforced masonry structures constitute one of the largest earthquake risks globally. These risks can be reduced through seismic retrofitting of at-risk buildings (e.g., school buildings in Istanbul, Turkey). Concrete can be damaged by many processes, such as the expansion of corrosion products of the steel reinforcement bars, freezing of trapped water, fire or radiant heat, aggregate expansion, sea water effects, bacterial corrosion, leaching, erosion by fast-flowing water, physical damage and chemical damage (from carbonation, chlorides, sulfates and distilled water). The microfungi Aspergillus, Alternaria, and Cladosporium were able to grow on samples of concrete used as a radioactive waste barrier in the Chernobyl reactor, leaching aluminum, iron, calcium, and silicon. The manufacture and use of concrete produce a wide range of environmental and social consequences. Some are harmful, some welcome, and some both, depending on circumstances. A major component of concrete is cement, which similarly exerts environmental and social effects. The cement industry is one of the three primary producers of carbon dioxide, a major greenhouse gas (the other two being the energy production and transportation industries). Every tonne of cement produced releases approximately one tonne of CO2 into the atmosphere. As of 2019, the production of Portland cement contributed eight percent to global anthropogenic CO2 emissions, largely due to the sintering of limestone and clay at high temperatures. Researchers have suggested a number of approaches to improving carbon sequestration relevant to concrete production.
In August 2019, a reduced-CO2 cement was announced which "reduces the overall carbon footprint in precast concrete by 70%." Concrete is used to create hard surfaces that contribute to surface runoff, which can cause heavy soil erosion, water pollution, and flooding, but conversely can be used to divert, dam, and control flooding. Concrete dust released by building demolition and natural disasters can be a major source of dangerous air pollution. Concrete is a contributor to the urban heat island effect, though less so than asphalt. Workers who cut, grind or polish concrete are at risk of inhaling airborne silica, which can lead to silicosis. This includes crew members who work in concrete chipping. The presence of some substances in concrete, including useful and unwanted additives, can cause health concerns due to toxicity and radioactivity. Fresh concrete (before curing is complete) is highly alkaline and must be handled with proper protective equipment. Concrete recycling is an increasingly common method for disposing of concrete structures. Concrete debris was once routinely shipped to landfills for disposal, but recycling is increasing due to improved environmental awareness, governmental laws and economic benefits. The world record for the largest concrete pour in a single project is held by the Three Gorges Dam in Hubei Province, China, built by the Three Gorges Corporation. The amount of concrete used in the construction of the dam is estimated at 16 million cubic meters over 17 years. The previous record, 12.3 million cubic meters, was held by the Itaipu hydropower station in Brazil. The world record for concrete pumping was set on 7 August 2009 during the construction of the Parbati Hydroelectric Project, near the village of Suind, Himachal Pradesh, India, when the concrete mix was pumped to a record vertical height. The Polavaram dam works in Andhra Pradesh entered the Guinness World Records on 6 January 2019 by pouring 32,100 cubic meters of concrete in 24 hours. The world record for the largest continuously poured concrete raft was achieved in August 2007 in Abu Dhabi by contracting firm Al Habtoor-CCC Joint Venture, with Unibeton Ready Mix as the concrete supplier. The pour (part of the foundation for Abu Dhabi's Landmark Tower) was 16,000 cubic meters of concrete poured within a two-day period. The previous record, 13,200 cubic meters poured in 54 hours despite a severe tropical storm requiring the site to be covered with tarpaulins to allow work to continue, was achieved in 1992 by joint Japanese and South Korean consortiums Hazama Corporation and the Samsung C&T Corporation for the construction of the Petronas Towers in Kuala Lumpur, Malaysia. The world record for largest continuously poured concrete floor was completed 8 November 1997, in Louisville, Kentucky, by design-build firm EXXCEL Project Management. The monolithic placement was completed in 30 hours and finished to a flatness tolerance of FF 54.60 and a levelness tolerance of FL 43.83. This surpassed the previous record by 50% in total volume and 7.5% in total area. The record for the largest continuously placed underwater concrete pour was completed 18 October 2010, in New Orleans, Louisiana, by contractor C. J. Mahan Construction Company, LLC of Grove City, Ohio. The placement consisted of 10,251 cubic yards of concrete placed in 58.5 hours using two concrete pumps and two dedicated concrete batch plants.
Upon curing, this placement allows the cofferdam to be dewatered to well below sea level so that the construction of the Inner Harbor Navigation Canal Sill & Monolith Project can be completed in the dry.
https://en.wikipedia.org/wiki?curid=5371
Condom
A condom is a sheath-shaped barrier device used during sexual intercourse to reduce the probability of pregnancy or a sexually transmitted infection (STI). There are both male and female condoms. With proper use, and use at every act of intercourse, women whose partners use male condoms experience a 2% per-year pregnancy rate. With typical use the rate of pregnancy is 18% per-year. Their use greatly decreases the risk of gonorrhea, chlamydia, trichomoniasis, hepatitis B, and HIV/AIDS. To a lesser extent, they also protect against genital herpes, human papillomavirus (HPV), and syphilis. The male condom is rolled onto an erect penis before intercourse and works by forming a physical barrier which blocks semen from entering the body of a sexual partner. Male condoms are typically made from latex and, less commonly, from polyurethane, polyisoprene, or lamb intestine. Male condoms have the advantages of ease of use, ease of access, and few side effects. In those with a latex allergy, a polyurethane or other synthetic version should be used. Female condoms are typically made from polyurethane and may be used multiple times. Condoms as a method of preventing STIs have been used since at least 1564. Rubber condoms became available in 1855, followed by latex condoms in the 1920s. The condom is on the World Health Organization's List of Essential Medicines, which lists the safest and most effective medicines needed in a health system. The wholesale cost in the developing world is about US$0.03 to US$0.08 each. In the United States condoms usually cost less than US$1.00. Globally less than 10% of those using birth control are using the condom. Rates of condom use are higher in the developed world. In the United Kingdom the condom is the second most common method of birth control (22%) while in the United States it is the third most common (15%). About six to nine billion are sold a year. The effectiveness of condoms, as of most forms of contraception, can be assessed two ways. "Perfect use" or "method" effectiveness rates only include people who use condoms properly and consistently. "Actual use", or "typical use", effectiveness rates are those of all condom users, including those who use condoms incorrectly or do not use condoms at every act of intercourse. Rates are generally presented for the first year of use. Most commonly the Pearl Index is used to calculate effectiveness rates, but some studies use decrement tables. The typical use pregnancy rate among condom users varies depending on the population being studied, ranging from 10 to 18% per year.
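The Pearl Index mentioned above is the number of unintended pregnancies per 100 woman-years of exposure, commonly computed as pregnancies divided by woman-months of exposure, times 1,200. A minimal sketch with hypothetical inputs:

```python
# The Pearl Index: unintended pregnancies per 100 woman-years of
# exposure. The observation figures below are hypothetical.

def pearl_index(pregnancies: int, woman_months: float) -> float:
    """pregnancies / woman-months of exposure * 1200."""
    return pregnancies / woman_months * 1200.0

# 100 women using a method for 12 months each is 1,200 woman-months;
# 18 pregnancies in that time gives an index of 18, matching the
# 18%-per-year typical-use figure quoted above for condoms.
print(pearl_index(18, 1200.0))  # 18.0
```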
https://en.wikipedia.org/wiki?curid=5374
Country code
Country codes are short alphabetic or numeric geographical codes (geocodes) developed to represent countries and dependent areas, for use in data processing and communications. Several different systems have been developed to do this. The term "country code" frequently refers to ISO 3166-1 alpha-2 codes or to international dialing codes, the E.164 country calling codes. The ISO 3166-1 standard defines two-letter, three-letter, and numeric codes for most of the countries and dependent areas in the world. The two-letter codes are used as the basis for some other codes and applications, for example country-code top-level internet domains; for more applications see ISO 3166-1 alpha-2. The developers of ISO 3166 intended that in time it would replace other coding systems in existence. Besides ISO 3166, countries can be represented by several other systems, including the E.164 calling codes and the codes maintained by bodies such as the IOC and FIFA.
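As an illustration of how the two-letter codes anchor other applications, the sketch below wraps a small subset of real ISO 3166-1 alpha-2 assignments in a lookup function; the table is deliberately incomplete, and production code would use a maintained dataset rather than a hand-written dict.

```python
# A toy lookup over a small, real subset of ISO 3166-1 alpha-2 codes.

ISO_3166_1_ALPHA2 = {
    "DE": "Germany",
    "FR": "France",
    "JP": "Japan",
    "NL": "Netherlands",
    "US": "United States of America",
}

def country_name(code: str) -> str:
    """Resolve a two-letter country code, case-insensitively."""
    name = ISO_3166_1_ALPHA2.get(code.upper())
    if name is None:
        raise KeyError(f"unknown or unsupported country code: {code!r}")
    return name

print(country_name("jp"))  # Japan
```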
https://en.wikipedia.org/wiki?curid=5375
Cladistics
Cladistics (from Greek "kládos", "branch") is an approach to biological classification in which organisms are categorized in groups ("clades") based on the most recent common ancestor. Hypothesized relationships are typically based on shared derived characteristics (synapomorphies) that can be traced to the most recent common ancestor and are not present in more distant groups and ancestors. A key feature of a clade is that a common ancestor and all its descendants are part of the clade. Importantly, all descendants stay in their overarching ancestral clade. For example, if within a "strict" cladistic framework the terms "animals", "bilateria/worms", "fishes/vertebrata", or "monkeys/anthropoidea" were used, these terms would include humans. Many of these terms are normally used paraphyletically, outside of cladistics, e.g. as a 'grade'. Radiation results in the generation of new subclades by bifurcation, but in practice sexual hybridization may blur very closely related groupings. The techniques and nomenclature of cladistics have been applied to disciplines other than biology. (See phylogenetic nomenclature.) Cladistics is now the most commonly used method to classify organisms. The original methods used in cladistic analysis and the school of taxonomy derive from the work of the German entomologist Willi Hennig, who referred to it as phylogenetic systematics (also the title of his 1966 book); the terms "cladistics" and "clade" were popularized by other researchers. Cladistics in the original sense refers to a particular set of methods used in phylogenetic analysis, although it is now sometimes used to refer to the whole field. What is now called the cladistic method appeared as early as 1901 with a work by Peter Chalmers Mitchell for birds and was subsequently applied by Robert John Tillyard (for insects) in 1921, and W. Zimmermann (for plants) in 1943. The term "clade" was introduced in 1958 by Julian Huxley after having been coined by Lucien Cuénot in 1940, "cladogenesis" in 1958, "cladistic" by Arthur Cain and Harrison in 1960, "cladist" (for an adherent of Hennig's school) by Ernst Mayr in 1965, and "cladistics" in 1966. Hennig referred to his own approach as "phylogenetic systematics". From the time of his original formulation until the end of the 1970s, cladistics competed as an analytical and philosophical approach to systematics with phenetics and so-called evolutionary taxonomy. Phenetics was championed at this time by the numerical taxonomists Peter Sneath and Robert Sokal, and evolutionary taxonomy by Ernst Mayr. Originally conceived, if only in essence, by Willi Hennig in a book published in 1950, cladistics did not flourish until its translation into English in 1966 (Lewin 1997). Today, cladistics is the most popular method for constructing phylogenies from morphological data. In the 1990s, the development of effective polymerase chain reaction techniques allowed the application of cladistic methods to biochemical and molecular genetic traits of organisms, vastly expanding the amount of data available for phylogenetics. At the same time, cladistics rapidly became popular in evolutionary biology, because computers made it possible to process large quantities of data about organisms and their characteristics. The cladistic method interprets each character state transformation implied by the distribution of shared character states among taxa (or other terminals) as a potential piece of evidence for grouping.
The outcome of a cladistic analysis is a cladogram, a tree-shaped diagram (dendrogram) that is interpreted to represent the best hypothesis of phylogenetic relationships. Although traditionally such cladograms were generated largely on the basis of morphological characters and originally calculated by hand, genetic sequencing data and computational phylogenetics are now commonly used in phylogenetic analyses, and the parsimony criterion has been abandoned by many phylogeneticists in favor of more "sophisticated" but less parsimonious evolutionary models of character state transformation. Cladists contend that these models are unjustified. Every cladogram is based on a particular dataset analyzed with a particular method. Datasets are tables consisting of molecular, morphological, ethological and/or other characters and a list of operational taxonomic units (OTUs), which may be genes, individuals, populations, species, or larger taxa that are presumed to be monophyletic and therefore to form, all together, one large clade; phylogenetic analysis infers the branching pattern within that clade. Different datasets and different methods, not to mention violations of the mentioned assumptions, often result in different cladograms. Only scientific investigation can show which is more likely to be correct. Until recently, for example, the generally accepted cladograms of turtles, lizards, crocodilians, and birds implied that the last common ancestor of turtles and birds lived earlier than the last common ancestor of lizards and birds. Most molecular evidence, however, produces cladograms under which the last common ancestor of turtles and birds lived later than the last common ancestor of lizards and birds. Since the cladograms provide competing accounts of real events, at most one of them is correct. A standard primate cladogram represents the current universally accepted hypothesis that all primates, including strepsirrhines like the lemurs and lorises, had a common ancestor all of whose descendants were primates, and so form a clade; the name Primates is therefore recognized for this clade. Within the primates, all anthropoids (monkeys, apes and humans) are hypothesized to have had a common ancestor all of whose descendants were anthropoids, so they form the clade called Anthropoidea. The "prosimians", on the other hand, form a paraphyletic taxon. The name Prosimii is not used in phylogenetic nomenclature, which names only clades; the "prosimians" are instead divided between the clades Strepsirhini and Haplorhini, where the latter contains Tarsiiformes and Anthropoidea. Several terms coined by Hennig are used to identify shared or distinct character states among groups. The terms plesiomorphy and apomorphy are relative; their application depends on the position of a group within a tree. For example, when trying to decide whether the tetrapods form a clade, an important question is whether having four limbs is a synapomorphy of the earliest taxa to be included within Tetrapoda: did all the earliest members of the Tetrapoda inherit four limbs from a common ancestor, whereas all other vertebrates did not, or at least not homologously? By contrast, for a group within the tetrapods, such as birds, having four limbs is a plesiomorphy.
Using these two terms allows greater precision in the discussion of homology, in particular allowing clear expression of the hierarchical relationships among different homologous features. It can be difficult to decide whether a character state is in fact the same and thus can be classified as a synapomorphy, which may identify a monophyletic group, or whether it only appears to be the same and is thus a homoplasy, which cannot identify such a group. There is a danger of circular reasoning: assumptions about the shape of a phylogenetic tree are used to justify decisions about character states, which are then used as evidence for the shape of the tree. Phylogenetics uses various forms of parsimony to decide such questions; the conclusions reached often depend on the dataset and the methods. Such is the nature of empirical science, and for this reason, most cladists refer to their cladograms as hypotheses of relationship. Cladograms that are supported by a large number and variety of different kinds of characters are viewed as more robust than those based on more limited evidence. Mono-, para- and polyphyletic taxa can be understood based on the shape of the tree (as done above), as well as based on their character states. Cladistics, either generally or in specific applications, has been criticized from its beginnings. Decisions as to whether particular character states are homologous, a precondition of their being synapomorphies, have been challenged as involving circular reasoning and subjective judgements. Transformed cladistics arose in the late 1970s in an attempt to resolve some of these problems by removing phylogeny from cladistic analysis, but it has remained unpopular. However, homology is usually determined from analysis of the results, which are evaluated with homology measures, mainly the consistency index (CI) and retention index (RI); it has been claimed that this makes the process objective. Also, homology can be equated to synapomorphy, which is what Patterson has done. In organisms with sexual reproduction, incomplete lineage sorting may result in inconsistent phylogenetic trees, depending on which genes are assessed. It is also possible that multiple surviving lineages are generated while interbreeding is still significantly occurring (polytomy). Interbreeding is possible over periods of about 10 million years. Typically speciation occurs over only about 1 million years, which makes it less likely that multiple long-surviving lineages developed "simultaneously". Even so, interbreeding can result in a lineage being overwhelmed and absorbed by a related, more numerous lineage. Simulation studies suggest that phylogenetic trees are most accurately recovered from data that is morphologically coherent (i.e. where closely related organisms share the highest proportion of characters). This relationship is weaker in data generated under selection, potentially due to convergent evolution. The cladistic method does not typically identify fossil species as actual ancestors of a clade. Instead, they are identified as belonging to separate extinct branches. While a fossil species could be the actual ancestor of a clade, the default assumption is that it is more likely a related species.
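The homology measures mentioned in this section, the consistency index and the retention index, have simple closed forms. The sketch below computes both for a single hypothetical character; the step counts are invented for illustration.

```python
# Per-character homology measures:
#   consistency index CI = m / s
#   retention index   RI = (g - s) / (g - m)
# where m is the minimum number of state changes the character could
# require on any tree, s the number required on the tree at hand, and
# g the maximum possible. Assumes g > m (a parsimony-informative
# character); all values below are hypothetical.

def consistency_index(m: int, s: int) -> float:
    return m / s

def retention_index(m: int, s: int, g: int) -> float:
    return (g - s) / (g - m)

m, s, g = 1, 2, 5  # a binary character needing 2 steps on this tree
print(f"CI = {consistency_index(m, s):.2f}")   # 0.50
print(f"RI = {retention_index(m, s, g):.2f}")  # 0.75
```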
The comparisons used to acquire data on which cladograms can be based are not limited to the field of biology. Any group of individuals or classes that are hypothesized to have a common ancestor, and to which a set of common characteristics may or may not apply, can be compared pairwise. Cladograms can be used to depict the hypothetical descent relationships within groups of items in many different academic realms. The only requirement is that the items have characteristics that can be identified and measured. Anthropology and archaeology: Cladistic methods have been used to reconstruct the development of cultures or artifacts using groups of cultural traits or artifact features. Comparative mythology and folktale studies: Cladistic methods are used to reconstruct the protoversion of many myths. Mythological phylogenies constructed from mythemes support low rates of horizontal transmission (borrowing), historical (sometimes Palaeolithic) diffusion, and punctuated evolution. They are also a powerful way to test hypotheses about cross-cultural relationships among folktales. Literature: Cladistic methods have been used in the classification of the surviving manuscripts of the "Canterbury Tales", and the manuscripts of the Sanskrit "Charaka Samhita". Historical linguistics: Cladistic methods have been used to reconstruct the phylogeny of languages using linguistic features. This is similar to the traditional comparative method of historical linguistics, but is more explicit in its use of parsimony and allows much faster analysis of large datasets (computational phylogenetics). Textual criticism or stemmatics: Cladistic methods have been used to reconstruct the phylogeny of manuscripts of the same work (and reconstruct the lost original) using distinctive copying errors as apomorphies. This differs from traditional historical-comparative linguistics in enabling the editor to evaluate and place in genetic relationship large groups of manuscripts with large numbers of variants that would be impossible to handle manually. It also enables parsimony analysis of contaminated traditions of transmission that would be impossible to evaluate manually in a reasonable period of time. Astrophysics: Cladistic methods have been used to infer the history of relationships between galaxies and to create branching-diagram hypotheses of galaxy diversification.
https://en.wikipedia.org/wiki?curid=5376
Calendar
A calendar is a system of organizing days for social, religious, commercial or administrative purposes. This is done by giving names to periods of time, typically days, weeks, months and years. A date is the designation of a single, specific day within such a system. A calendar is also a physical record (often paper) of such a system. A calendar can also mean a list of planned events, such as a court calendar, or a partly or fully chronological list of documents, such as a calendar of wills. Periods in a calendar (such as years and months) are usually, though not necessarily, synchronized with the cycle of the sun or the moon. The most common type of pre-modern calendar was the lunisolar calendar, a lunar calendar that occasionally adds one intercalary month to remain synchronized with the solar year over the long term. The term "calendar" is taken from "calendae", the term for the first day of the month in the Roman calendar, related to the verb "calare" "to call out", referring to the "calling" of the new moon when it was first seen. Latin "calendarium" meant "account book, register" (as accounts were settled and debts were collected on the calends of each month). The Latin term was adopted in Old French as "calendier" and from there in Middle English as "calender" by the 13th century (the spelling "calendar" is early modern). The course of the sun and the moon are the most salient natural, regularly recurring events useful for timekeeping, and so in pre-modern societies worldwide the lunation and the year were most commonly used as time units. Nevertheless, the Roman calendar contained remnants of a very ancient pre-Etruscan 10-month solar year. The first recorded physical calendars, dependent on the development of writing in the Ancient Near East, are the Bronze Age Egyptian and Sumerian calendars. A large number of Ancient Near East calendar systems based on the Babylonian calendar date from the Iron Age, among them the calendar system of the Persian Empire, which in turn gave rise to the Zoroastrian calendar and the Hebrew calendar. A great number of Hellenic calendars developed in Classical Greece, and in the Hellenistic period gave rise to both the ancient Roman calendar and to various Hindu calendars. Calendars in antiquity were lunisolar, depending on the introduction of intercalary months to align the solar and the lunar years. This was mostly based on observation, but there may have been early attempts to model the pattern of intercalation algorithmically, as evidenced in the fragmentary 2nd-century Coligny calendar. The Roman calendar was reformed by Julius Caesar in 46 BC. The Julian calendar was no longer dependent on the observation of the new moon but simply followed an algorithm of introducing a leap day every four years. This created a dissociation of the calendar month from the lunation. The Islamic calendar is based on the prohibition of intercalation ("nasi'") by Muhammad, in Islamic tradition dated to a sermon held on 9 Dhu al-Hijjah AH 10 (Julian date: 6 March 632). This resulted in an observation-based lunar calendar that shifts relative to the seasons of the solar year. The first calendar reform of the early modern era was the Gregorian calendar, introduced in 1582 based on the observation of a long-term shift between the Julian calendar and the solar year. There have been a number of modern proposals for reform of the calendar, such as the World Calendar, International Fixed Calendar, Holocene calendar, and, recently, the Hanke-Henry Permanent Calendar.
Such ideas are mooted from time to time but have failed to gain traction because of the loss of continuity, massive upheaval in implementation, and religious objections. A full calendar system has a different calendar date for every day. Thus the week cycle is by itself not a full calendar system; neither is a system to name the days within a year without a system for identifying the years. The simplest calendar system just counts time periods from a reference date. This applies to the Julian day and to Unix time. Virtually the only possible variation is using a different reference date, in particular, one less distant in the past to make the numbers smaller. Computations in these systems are just a matter of addition and subtraction, as the sketch below illustrates.
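A minimal sketch of such a reference-date system, using the Unix epoch of 1 January 1970:

```python
# Date arithmetic in a reference-date system: a single running count
# of days from an epoch. The epoch here is the Unix reference date.

from datetime import date, timedelta

EPOCH = date(1970, 1, 1)

def days_since_epoch(d: date) -> int:
    return (d - EPOCH).days

def date_from_count(n: int) -> date:
    return EPOCH + timedelta(days=n)

print(days_since_epoch(date(2000, 1, 1)))  # 10957
print(date_from_count(10957))              # 2000-01-01
```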
Other calendars have one (or multiple) larger units of time: some contain a single level of cycles, such as days grouped into years, while others contain two levels, such as days grouped into months that are in turn grouped into years. Cycles can also be synchronized with periodic natural phenomena such as the solar year or the lunation. Very commonly a calendar includes more than one type of cycle or has both cyclic and non-cyclic elements. Most calendars incorporate more complex cycles. For example, the vast majority of them track years, months, weeks and days. The seven-day week is practically universal, though its use varies. It has run uninterrupted for millennia. Solar calendars assign a "date" to each solar day. A day may consist of the period between sunrise and sunset, with a following period of night, or it may be a period between successive events such as two sunsets. The length of the interval between two such successive events may be allowed to vary slightly during the year, or it may be averaged into a mean solar day. Other types of calendar may also use a solar day. Not all calendars use the solar year as a unit. A lunar calendar is one in which days are numbered within each lunar phase cycle. Because the length of the lunar month is not an even fraction of the length of the tropical year, a purely lunar calendar quickly drifts against the seasons, which do not vary much near the equator. It does, however, stay constant with respect to other phenomena, notably tides. An example is the Islamic calendar. Alexander Marshack, in a controversial reading, believed that marks on a bone baton (c. 25,000 BC) represented a lunar calendar. Other marked bones may also represent lunar calendars. Similarly, Michael Rappenglueck believes that marks on a 15,000-year-old cave painting represent a lunar calendar. A lunisolar calendar is a lunar calendar that compensates by adding an extra month as needed to realign the months with the seasons. Prominent examples of lunisolar calendars are the Hindu calendar and the Buddhist calendar, which are popular in South Asia and Southeast Asia. Another example is the Hebrew calendar, which uses a 19-year cycle. Nearly all calendar systems group consecutive days into "months" and also into "years". In a "solar calendar" a "year" approximates Earth's tropical year (that is, the time it takes for a complete cycle of seasons), traditionally used to facilitate the planning of agricultural activities. In a "lunar calendar", the "month" approximates the cycle of the moon phase. Consecutive days may be grouped into other periods such as the week. Because the number of days in the "tropical year" is not a whole number, a solar calendar must have a different number of days in different years. This may be handled, for example, by adding an extra day in leap years. The same applies to months in a lunar calendar and also to the number of months in a year in a lunisolar calendar. This is generally known as intercalation. Even if a calendar is solar, but not lunar, the year cannot be divided entirely into months that never vary in length. Cultures may define other units of time, such as the week, for the purpose of scheduling regular activities that do not easily coincide with months or years. Many cultures use different baselines for their calendars' starting years. Historically, several countries have based their calendars on regnal years, a calendar based on the reign of their current sovereign. For example, the year 2006 in Japan is year 18 Heisei, with Heisei being the era name of Emperor Akihito. An "astronomical calendar" is based on ongoing observation; examples are the religious Islamic calendar and the old religious Jewish calendar in the time of the Second Temple. Such a calendar is also referred to as an "observation-based" calendar. The advantage of such a calendar is that it is perfectly and perpetually accurate. The disadvantage is that working out when a particular date would occur is difficult. An "arithmetic calendar" is one that is based on a strict set of rules; an example is the current Jewish calendar. Such a calendar is also referred to as a "rule-based" calendar. The advantage of such a calendar is the ease of calculating when a particular date occurs. The disadvantage is imperfect accuracy. Furthermore, even if the calendar is very accurate, its accuracy diminishes slowly over time, owing to changes in Earth's rotation. This limits the lifetime of an accurate arithmetic calendar to a few thousand years. After that, the rules would need to be modified from observations made since the invention of the calendar. Calendars may be either complete or incomplete. Complete calendars provide a way of naming each consecutive day, while incomplete calendars do not. The early Roman calendar, which had no way of designating the days of the winter months other than to lump them together as "winter", is an example of an incomplete calendar, while the Gregorian calendar is an example of a complete calendar. The primary practical use of a calendar is to identify days: to be informed about or to agree on a future event and to record an event that has happened. Days may be significant for agricultural, civil, religious or social reasons. For example, a calendar provides a way to determine when to start planting or harvesting, which days are religious or civil holidays, which days mark the beginning and end of business accounting periods, and which days have legal significance, such as the day taxes are due or a contract expires. A calendar may also, by identifying a day, provide other useful information about it, such as its season. Calendars are also used to help people manage their personal schedules, time and activities, particularly when individuals have numerous work, school, and family commitments. People frequently use multiple systems and may keep both a business and family calendar to help prevent them from overcommitting their time. Calendars are also used as part of a complete timekeeping system: date and time of day together specify a moment in time. In the modern world, timekeepers can show time, date and weekday. Some may also show lunar phase. The Gregorian calendar is the "de facto" international standard and is used almost everywhere in the world for civil purposes. It is a purely solar calendar, with a cycle of leap days in a 400-year cycle designed to keep the duration of the year aligned with the solar year.
Each Gregorian year has either 365 or 366 days (the leap day being inserted as 29 February), amounting to an average Gregorian year of 365.2425 days (compared to a solar year of 365.2422 days). It was introduced in 1582 as a refinement to the Julian calendar, which had been in use throughout the European Middle Ages, amounting to a 0.002% correction in the length of the year. During the Early Modern period its adoption was mostly limited to Roman Catholic nations, but by the 19th century it had become widely adopted worldwide for the sake of convenience in international trade. The last European country to adopt the reform was Greece, in 1923. The calendar epoch used by the Gregorian calendar is inherited from the medieval convention established by Dionysius Exiguus and associated with the Julian calendar. The year number is variously given as AD (for "Anno Domini") or CE (for "Common Era" or, indeed, "Christian Era"). The most important use of pre-modern calendars is keeping track of the liturgical year and the observation of religious feast days. While the Gregorian calendar is itself historically motivated in relation to the calculation of the Easter date, it is now in worldwide secular use as the "de facto" standard. Alongside the use of the Gregorian calendar for secular matters, there remain a number of calendars in use for religious purposes. Eastern Christians, including the Orthodox Church, use the Julian calendar. The Islamic calendar, or Hijri calendar, is a lunar calendar consisting of 12 lunar months in a year of 354 or 355 days. It is used to date events in most of the Muslim countries (concurrently with the Gregorian calendar), and is used by Muslims everywhere to determine the proper day on which to celebrate Islamic holy days and festivals. Its epoch is the Hijra (corresponding to AD 622). With an annual drift of 11 or 12 days, the seasonal relation is repeated approximately every 33 Islamic years. Various Hindu calendars remain in use in the Indian subcontinent, including the Nepali calendar, Bengali calendar, Malayalam calendar, Tamil calendar, Vikrama Samvat used in Northern India, and Shalivahana calendar in the Deccan states. The Buddhist calendar and the traditional lunisolar calendars of Cambodia, Laos, Myanmar, Sri Lanka and Thailand are also based on an older version of the Hindu calendar. Most of the Hindu calendars are inherited from a system first enunciated in Vedanga Jyotisha of Lagadha, standardized in the "Sūrya Siddhānta" and subsequently reformed by astronomers such as Āryabhaṭa (AD 499), Varāhamihira (6th century) and Bhāskara II (12th century). The Hebrew calendar is used by Jews worldwide for religious and cultural affairs; it also influences civil matters in Israel (such as national holidays) and can be used in business dealings (such as for the dating of cheques). Bahá'ís worldwide use the Bahá'í calendar. The Bahá'í calendar, also known as the Badi calendar, was first established by the Bab in the Kitab-i-Asma. The Bahá'í calendar is a purely solar calendar and comprises 19 months, each having nineteen days. The Chinese, Hebrew, Hindu, and Julian calendars are widely used for religious and social purposes. The Iranian (Persian) calendar is used in Iran and some parts of Afghanistan. The Assyrian calendar is in use by the members of the Assyrian community in the Middle East (mainly Iraq, Syria, Turkey and Iran) and the diaspora. The first year of the calendar is exactly 4750 years prior to the start of the Gregorian calendar.
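The 365.2425-day average year quoted above follows directly from the Gregorian leap-day rule, which fits in a few lines of code; the rule itself is standard, and only the 400-year window chosen for the check is arbitrary.

```python
# The Gregorian leap-day rule: every 4th year is a leap year, except
# century years not divisible by 400. Counting leap days over any
# 400-year cycle reproduces the 365.2425-day average year.

def is_leap(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

leap_days = sum(is_leap(y) for y in range(2000, 2400))
print(leap_days)              # 97 leap days per 400 years
print(365 + leap_days / 400)  # 365.2425
```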
The Ethiopian calendar or Ethiopic calendar is the principal calendar used in Ethiopia and Eritrea, with the Oromo calendar also in use in some areas. In neighboring Somalia, the Somali calendar co-exists alongside the Gregorian and Islamic calendars. In Thailand, where the Thai solar calendar is used, the months and days have adopted the western standard, although the years are still based on the traditional Buddhist calendar. A fiscal calendar generally means the accounting year of a government or a business. It is used for budgeting, keeping accounts and taxation. It is a set of 12 months that may start at any date in a year. The US government's fiscal year starts on 1 October and ends on 30 September. The government of India's fiscal year starts on 1 April and ends on 31 March. Small traditional businesses in India start the fiscal year on the Diwali festival and end the day before the next year's Diwali festival. In accounting (and particularly accounting software), a fiscal calendar (such as a 4/4/5 calendar) fixes each month at a specific number of weeks to facilitate comparisons from month to month and year to year. January always has exactly 4 weeks (Sunday through Saturday), February has 4 weeks, March has 5 weeks, etc. Note that this calendar will normally need to add a 53rd week to every 5th or 6th year, which might be added to December or might not be, depending on how the organization uses those dates. There exists an international standard way to do this (the ISO week). The ISO week starts on a Monday and ends on a Sunday. Week 1 is always the week that contains 4 January in the Gregorian calendar. The term "calendar" applies not only to a given scheme of timekeeping but also to a specific record or device displaying such a scheme, for example, an appointment book in the form of a pocket calendar (or personal organizer), desktop calendar, a wall calendar, etc. In a paper calendar, one or two sheets can show a single day, a week, a month, or a year. If a sheet is for a single day, it easily shows the date and the weekday. If a sheet is for multiple days it shows a conversion table to convert from weekday to date and back. With a special pointing device, or by crossing out past days, it may indicate the current date and weekday. This is the most common usage of the word. In the US, Sunday is considered the first day of the week and so appears on the far left, with Saturday, the last day of the week, appearing on the far right. In Britain, the weekend may appear at the end of the week, so the first day is Monday and the last day is Sunday. The US calendar display is also used in Britain. It is common to display the Gregorian calendar in separate monthly grids of seven columns (from Monday to Sunday, or Sunday to Saturday, depending on which day is considered to start the week; this varies according to country) and five to six rows (or rarely, four rows when the month of February contains 28 days in common years beginning on the first day of the week), with the day of the month numbered in each cell, beginning with 1. The sixth row is sometimes eliminated by marking 23/30 and 24/31 together as necessary. When working with weeks rather than months, a continuous format is sometimes more convenient, where no blank cells are inserted to ensure that the first day of a new month begins on a fresh row.
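The ISO week rule described above is implemented in many standard libraries and can be checked directly; the sketch below uses Python's datetime, with arbitrary sample dates.

```python
# The ISO week rule: weeks run Monday to Sunday, and week 1 is the
# week containing 4 January.

from datetime import date

y, w, d = date(2021, 1, 4).isocalendar()
print(y, w, d)  # 2021 1 1 -- 4 January 2021 is the Monday of week 1

_, week, _ = date(2020, 12, 31).isocalendar()
print(week)     # 53 -- 2020 is one of the years with a 53rd ISO week
```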
Calendaring software provides users with an electronic version of a calendar, and may additionally provide an appointment book, address book or contact list. Calendaring is a standard feature of many PDAs, EDAs, and smartphones. The software may be a local package designed for individual use (e.g., Lightning extension for Mozilla Thunderbird, Microsoft Outlook without Exchange Server, or Windows Calendar) or may be a networked package that allows for the sharing of information between users (e.g., Mozilla Sunbird, Windows Live Calendar, Google Calendar, or Microsoft Outlook with Exchange Server).
https://en.wikipedia.org/wiki?curid=5377
Physical cosmology
Physical cosmology is a branch of cosmology concerned with the study of cosmological models. A cosmological model, or simply cosmology, provides a description of the largest-scale structures and dynamics of the universe and allows study of fundamental questions about its origin, structure, evolution, and ultimate fate. Cosmology as a science originated with the Copernican principle, which implies that celestial bodies obey identical physical laws to those on Earth, and Newtonian mechanics, which first allowed those physical laws to be understood. Physical cosmology, as it is now understood, began with the development in 1915 of Albert Einstein's general theory of relativity, followed by major observational discoveries in the 1920s: first, Edwin Hubble discovered that the universe contains a huge number of external galaxies beyond the Milky Way; then, work by Vesto Slipher and others showed that the universe is expanding. These advances made it possible to speculate about the origin of the universe, and allowed the establishment of the Big Bang theory, by Georges Lemaître, as the leading cosmological model. A few researchers still advocate a handful of alternative cosmologies; however, most cosmologists agree that the Big Bang theory best explains the observations. Dramatic advances in observational cosmology since the 1990s, including the cosmic microwave background, distant supernovae and galaxy redshift surveys, have led to the development of a standard model of cosmology. This model requires the universe to contain large amounts of dark matter and dark energy whose nature is currently not well understood, but the model gives detailed predictions that are in excellent agreement with many diverse observations. Cosmology draws heavily on the work of many disparate areas of research in theoretical and applied physics. Areas relevant to cosmology include particle physics experiments and theory, theoretical and observational astrophysics, general relativity, quantum mechanics, and plasma physics. Modern cosmology developed along tandem tracks of theory and observation. In 1916, Albert Einstein published his theory of general relativity, which provided a unified description of gravity as a geometric property of space and time. At the time, Einstein believed in a static universe, but found that his original formulation of the theory did not permit it. This is because masses distributed throughout the universe gravitationally attract, and move toward each other over time. However, he realized that his equations permitted the introduction of a constant term which could counteract the attractive force of gravity on the cosmic scale. Einstein published his first paper on relativistic cosmology in 1917, in which he added this "cosmological constant" to his field equations in order to force them to model a static universe. The Einstein model describes a static universe; space is finite and unbounded (analogous to the surface of a sphere, which has a finite area but no edges). However, this so-called Einstein model is unstable to small perturbations—it will eventually start to expand or contract. It was later realized that Einstein's model was just one of a larger set of possibilities, all of which were consistent with general relativity and the cosmological principle. The cosmological solutions of general relativity were found by Alexander Friedmann in the early 1920s.
His equations describe the Friedmann–Lemaître–Robertson–Walker universe, which may expand or contract, and whose geometry may be open, flat, or closed. In the 1910s, Vesto Slipher (and later Carl Wilhelm Wirtz) interpreted the red shift of spiral nebulae as a Doppler shift that indicated they were receding from Earth. However, it is difficult to determine the distance to astronomical objects. One way is to compare the physical size of an object to its angular size, but a physical size must be assumed to do this. Another method is to measure the brightness of an object and assume an intrinsic luminosity, from which the distance may be determined using the inverse-square law. Due to the difficulty of using these methods, they did not realize that the nebulae were actually galaxies outside our own Milky Way, nor did they speculate about the cosmological implications. In 1927, the Belgian Roman Catholic priest Georges Lemaître independently derived the Friedmann–Lemaître–Robertson–Walker equations and proposed, on the basis of the recession of spiral nebulae, that the universe began with the "explosion" of a "primeval atom"—which was later called the Big Bang. In 1929, Edwin Hubble provided an observational basis for Lemaître's theory. Hubble showed that the spiral nebulae were galaxies by determining their distances using measurements of the brightness of Cepheid variable stars. He discovered a relationship between the redshift of a galaxy and its distance. He interpreted this as evidence that the galaxies are receding from Earth in every direction at speeds proportional to their distance. This fact is now known as Hubble's law, though the numerical factor Hubble found relating recessional velocity and distance was off by a factor of ten, due to not knowing about the types of Cepheid variables. Given the cosmological principle, Hubble's law suggested that the universe was expanding. Two primary explanations were proposed for the expansion. One was Lemaître's Big Bang theory, advocated and developed by George Gamow. The other explanation was Fred Hoyle's steady state model in which new matter is created as the galaxies move away from each other. In this model, the universe is roughly the same at any point in time. For a number of years, support for these theories was evenly divided. However, the observational evidence began to support the idea that the universe evolved from a hot dense state. The discovery of the cosmic microwave background in 1965 lent strong support to the Big Bang model, and since the precise measurements of the cosmic microwave background by the Cosmic Background Explorer in the early 1990s, few cosmologists have seriously proposed other theories of the origin and evolution of the cosmos. One consequence of this is that in standard general relativity, the universe began with a singularity, as demonstrated by Roger Penrose and Stephen Hawking in the 1960s. An alternative view to extend the Big Bang model, suggesting the universe had no beginning or singularity and the age of the universe is infinite, has been presented. The lightest chemical elements, primarily hydrogen and helium, were created during the Big Bang through the process of nucleosynthesis. In a sequence of stellar nucleosynthesis reactions, smaller atomic nuclei are then combined into larger atomic nuclei, ultimately forming stable iron group elements such as iron and nickel, which have the highest nuclear binding energies. 
The net process results in a "later energy release", meaning subsequent to the Big Bang. Such reactions of nuclear particles can lead to "sudden energy releases" from cataclysmic variable stars such as novae. Gravitational collapse of matter into black holes also powers the most energetic processes, generally seen in the nuclear regions of galaxies, forming "quasars" and "active galaxies". Cosmologists cannot explain all cosmic phenomena exactly, such as those related to the accelerating expansion of the universe, using conventional forms of energy. Instead, cosmologists propose a new form of energy called dark energy that permeates all space. One hypothesis is that dark energy is just the vacuum energy, a component of empty space that is associated with the virtual particles that exist due to the uncertainty principle. There is no clear way to define the total energy in the universe using the most widely accepted theory of gravity, general relativity. Therefore, it remains controversial whether the total energy is conserved in an expanding universe. For instance, each photon that travels through intergalactic space loses energy due to the redshift effect. This energy is not obviously transferred to any other system, so seems to be permanently lost. On the other hand, some cosmologists insist that energy is conserved in a suitably generalized sense. Thermodynamics of the universe is a field of study that explores which form of energy dominates the cosmos – relativistic particles, which are referred to as radiation, or non-relativistic particles, referred to as matter. Relativistic particles are particles whose rest mass is zero or negligible compared to their kinetic energy, and so move at the speed of light or very close to it; non-relativistic particles have a rest mass much higher than their kinetic energy and so move much slower than the speed of light. As the universe expands, both matter and radiation in it become diluted. However, the energy densities of radiation and matter dilute at different rates. As a particular volume expands, mass energy density is changed only by the increase in volume, but the energy density of radiation is changed both by the increase in volume and by the increase in the wavelength of the photons that make it up. Thus the energy of radiation becomes a smaller part of the universe's total energy than that of matter as the universe expands. The very early universe is said to have been 'radiation dominated', and radiation controlled the deceleration of expansion. Later, as the average energy per photon drops to roughly 10 eV and lower, matter dictates the rate of deceleration and the universe is said to be 'matter dominated'. The intermediate case is not treated well analytically. As the expansion of the universe continues, matter dilutes even further and the cosmological constant becomes dominant, leading to an acceleration in the universe's expansion. The history of the universe is a central issue in cosmology. It is divided into different periods called epochs, according to the dominant forces and processes in each period. The standard cosmological model is known as the Lambda-CDM model. Within the standard cosmological model, the equations of motion governing the universe as a whole are derived from general relativity with a small, positive cosmological constant. The solution is an expanding universe; due to this expansion, the radiation and matter in the universe cool down and become diluted.
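The different dilution rates described above can be made concrete with a minimal numerical sketch. It assumes toy present-day densities in arbitrary units (only their ratio matters) and applies the standard scalings: matter dilutes as a^−3 with the scale factor a, radiation as a^−4.

```python
import numpy as np

rho_m0, rho_r0 = 1.0, 3.0e-4   # assumed present-day densities, arbitrary units

a = np.logspace(-6, 0, 7)      # scale factor from 1e-6 up to 1 (today)
rho_m = rho_m0 * a**-3.0       # diluted by volume only
rho_r = rho_r0 * a**-4.0       # diluted by volume plus photon redshift

for ai, m, r in zip(a, rho_m, rho_r):
    era = "radiation dominated" if r > m else "matter dominated"
    print(f"a = {ai:.0e}: rho_m = {m:.2e}, rho_r = {r:.2e}  ({era})")

# The crossover (matter-radiation equality) falls at a_eq = rho_r0 / rho_m0:
print("equality at a_eq =", rho_r0 / rho_m0)
```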
At first, the expansion is slowed down by gravitation attracting the radiation and matter in the universe. However, as these become diluted, the cosmological constant becomes more dominant and the expansion of the universe starts to accelerate rather than decelerate. In our universe this happened billions of years ago. During the earliest moments of the universe the average energy density was very high, making knowledge of particle physics critical to understanding this environment. Hence, scattering processes and decay of unstable elementary particles are important for cosmological models of this period. As a rule of thumb, a scattering or a decay process is cosmologically important in a certain epoch if the time scale describing that process is smaller than, or comparable to, the time scale of the expansion of the universe. The time scale that describes the expansion of the universe is 1/H, with H being the Hubble parameter, which varies with time. The expansion timescale 1/H is roughly equal to the age of the universe at each point in time. Observations suggest that the universe began around 13.8 billion years ago. Since then, the evolution of the universe has passed through three phases. The very early universe, which is still poorly understood, was the split second in which the universe was so hot that particles had energies higher than those currently accessible in particle accelerators on Earth. Therefore, while the basic features of this epoch have been worked out in the Big Bang theory, the details are largely based on educated guesses. Following this, in the early universe, the evolution of the universe proceeded according to known high energy physics. This is when the first protons, electrons and neutrons formed, then nuclei and finally atoms. With the formation of neutral hydrogen, the cosmic microwave background was emitted. Finally, the epoch of structure formation began, when matter started to aggregate into the first stars and quasars, and ultimately galaxies, clusters of galaxies and superclusters formed. The future of the universe is not yet firmly known, but according to the ΛCDM model it will continue expanding forever. Below, some of the most active areas of inquiry in cosmology are described, in roughly chronological order. This does not include all of the Big Bang cosmology, which is presented in "Timeline of the Big Bang." The early, hot universe appears to be well explained by the Big Bang from roughly 10^−33 seconds onwards, but there are several problems. One is that there is no compelling reason, using current particle physics, for the universe to be flat, homogeneous, and isotropic (see the cosmological principle). Moreover, grand unified theories of particle physics suggest that there should be magnetic monopoles in the universe, which have not been found. These problems are resolved by a brief period of cosmic inflation, which drives the universe to flatness, smooths out anisotropies and inhomogeneities to the observed level, and exponentially dilutes the monopoles. The physical model behind cosmic inflation is extremely simple, but it has not yet been confirmed by particle physics, and there are difficult problems reconciling inflation and quantum field theory. Some cosmologists think that string theory and brane cosmology will provide an alternative to inflation. Another major problem in cosmology is what caused the universe to contain far more matter than antimatter.
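The claim that the expansion timescale 1/H is roughly the age of the universe can be checked with a one-line estimate. The present-day Hubble parameter used below (about 67.7 km/s/Mpc) is an assumed representative value, not a figure from the text.

```python
KM_PER_MPC = 3.086e19      # kilometres per megaparsec
SEC_PER_GYR = 3.156e16     # seconds per billion years

H0 = 67.7 / KM_PER_MPC     # assumed Hubble parameter, converted to 1/s
hubble_time = (1.0 / H0) / SEC_PER_GYR
print(f"1/H0 ~ {hubble_time:.1f} Gyr")   # ~14.4 Gyr, close to the 13.8 Gyr age
```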
Cosmologists can observationally deduce that the universe is not split into regions of matter and antimatter. If it were, there would be X-rays and gamma rays produced as a result of annihilation, but this is not observed. Therefore, some process in the early universe must have created a small excess of matter over antimatter, and this (currently not understood) process is called "baryogenesis". Three required conditions for baryogenesis were derived by Andrei Sakharov in 1967; they require a violation of the particle physics symmetry between matter and antimatter called CP-symmetry. However, particle accelerators measure too small a violation of CP-symmetry to account for the baryon asymmetry. Cosmologists and particle physicists look for additional violations of the CP-symmetry in the early universe that might account for the baryon asymmetry. Both the problems of baryogenesis and cosmic inflation are very closely related to particle physics, and their resolution might come from high energy theory and experiment, rather than through observations of the universe. Big Bang nucleosynthesis is the theory of the formation of the elements in the early universe. It finished when the universe was about three minutes old and its temperature dropped below that at which nuclear fusion could occur. Big Bang nucleosynthesis had a brief period during which it could operate, so only the very lightest elements were produced. Starting from hydrogen ions (protons), it principally produced deuterium, helium-4, and lithium. Other elements were produced in only trace abundances. The basic theory of nucleosynthesis was developed in 1948 by George Gamow, Ralph Asher Alpher, and Robert Herman. It was used for many years as a probe of physics at the time of the Big Bang, as the theory of Big Bang nucleosynthesis connects the abundances of primordial light elements with the features of the early universe. Specifically, it can be used to test the equivalence principle, to probe dark matter, and to test neutrino physics. Some cosmologists have proposed that Big Bang nucleosynthesis suggests there is a fourth "sterile" species of neutrino. The ΛCDM (Lambda cold dark matter) or Lambda-CDM model is a parametrization of the Big Bang cosmological model in which the universe contains a cosmological constant, denoted by Lambda (Greek Λ), associated with dark energy, and cold dark matter (abbreviated CDM). It is frequently referred to as the standard model of Big Bang cosmology. The cosmic microwave background is radiation left over from decoupling after the epoch of recombination, when neutral atoms first formed. At this point, radiation produced in the Big Bang stopped Thomson scattering from charged ions. The radiation, first observed in 1965 by Arno Penzias and Robert Woodrow Wilson, has a perfect thermal black-body spectrum. It has a temperature of 2.7 kelvins today and is isotropic to one part in 10^5. Cosmological perturbation theory, which describes the evolution of slight inhomogeneities in the early universe, has allowed cosmologists to precisely calculate the angular power spectrum of the radiation, and it has been measured by the recent satellite experiments (COBE and WMAP) and many ground and balloon-based experiments (such as the Degree Angular Scale Interferometer, the Cosmic Background Imager, and Boomerang). One of the goals of these efforts is to measure the basic parameters of the Lambda-CDM model with increasing accuracy, as well as to test the predictions of the Big Bang model and look for new physics.
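As a small aside on the 2.7-kelvin black-body spectrum mentioned above, Wien's displacement law locates its peak; the sketch below uses the standard displacement constant and shows that the peak indeed falls in the microwave band, which is why the radiation is called the cosmic microwave background.

```python
WIEN_B = 2.898e-3   # Wien displacement constant, metre-kelvins
T_CMB = 2.725       # measured CMB temperature in kelvins

peak_wavelength = WIEN_B / T_CMB
print(f"black-body peak ~ {peak_wavelength * 1e3:.2f} mm")   # ~1.06 mm, microwave
```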
The results of measurements made by WMAP, for example, have placed limits on the neutrino masses. Newer experiments, such as QUIET and the Atacama Cosmology Telescope, are trying to measure the polarization of the cosmic microwave background. These measurements are expected to provide further confirmation of the theory as well as information about cosmic inflation, and the so-called secondary anisotropies, such as the Sunyaev-Zel'dovich effect and Sachs-Wolfe effect, which are caused by interaction between galaxies and clusters with the cosmic microwave background. On 17 March 2014, astronomers of the BICEP2 Collaboration announced the apparent detection of "B"-mode polarization of the CMB, considered to be evidence of primordial gravitational waves that are predicted by the theory of inflation to occur during the earliest phase of the Big Bang. However, later that year the Planck collaboration provided a more accurate measurement of cosmic dust, concluding that the B-mode signal from dust is the same strength as that reported from BICEP2. On 30 January 2015, a joint analysis of BICEP2 and Planck data was published and the European Space Agency announced that the signal can be entirely attributed to interstellar dust in the Milky Way. Understanding the formation and evolution of the largest and earliest structures (i.e., quasars, galaxies, clusters and superclusters) is one of the largest efforts in cosmology. Cosmologists study a model of hierarchical structure formation in which structures form from the bottom up, with smaller objects forming first, while the largest objects, such as superclusters, are still assembling. One way to study structure in the universe is to survey the visible galaxies, in order to construct a three-dimensional picture of the galaxies in the universe and measure the matter power spectrum. This is the approach of the Sloan Digital Sky Survey and the 2dF Galaxy Redshift Survey. Another tool for understanding structure formation is simulations, which cosmologists use to study the gravitational aggregation of matter in the universe, as it clusters into filaments, superclusters and voids. Most simulations contain only non-baryonic cold dark matter, which should suffice to understand the universe on the largest scales, as there is much more dark matter in the universe than visible, baryonic matter. More advanced simulations are starting to include baryons and study the formation of individual galaxies. Cosmologists study these simulations to see if they agree with the galaxy surveys, and to understand any discrepancy. Other, complementary observations that measure the distribution of matter in the distant universe and probe reionization will help cosmologists settle the question of when and how structure formed in the universe. Evidence from Big Bang nucleosynthesis, the cosmic microwave background, structure formation, and galaxy rotation curves suggests that about 23% of the mass of the universe consists of non-baryonic dark matter, whereas only 4% consists of visible, baryonic matter. The gravitational effects of dark matter are well understood, as it behaves like a cold, non-radiative fluid that forms haloes around galaxies. Dark matter has never been detected in the laboratory, and the particle physics nature of dark matter remains completely unknown.
Without observational constraints, there are a number of candidates, such as a stable supersymmetric particle, a weakly interacting massive particle, a gravitationally-interacting massive particle, an axion, and a massive compact halo object. Alternatives to the dark matter hypothesis include a modification of gravity at small accelerations (MOND) or an effect from brane cosmology. If the universe is flat, there must be an additional component making up 73% (in addition to the 23% dark matter and 4% baryons) of the energy density of the universe. This is called dark energy. In order not to interfere with Big Bang nucleosynthesis and the cosmic microwave background, it must not cluster in haloes like baryons and dark matter. There is strong observational evidence for dark energy, as the total energy density of the universe is known through constraints on the flatness of the universe, but the amount of clustering matter is tightly measured, and is much less than this. The case for dark energy was strengthened in 1999, when measurements demonstrated that the expansion of the universe has begun to gradually accelerate. Apart from its density and its clustering properties, nothing is known about dark energy. Quantum field theory predicts a cosmological constant (CC) much like dark energy, but 120 orders of magnitude larger than that observed. Steven Weinberg and a number of string theorists (see string landscape) have invoked the 'weak anthropic principle': i.e. the reason that physicists observe a universe with such a small cosmological constant is that no physicists (or any life) could exist in a universe with a larger cosmological constant. Many cosmologists find this an unsatisfying explanation, in part because, while the weak anthropic principle is self-evident (given that living observers exist, there must be at least one universe with a cosmological constant that allows for life), it does not attempt to explain the context of that universe. For example, the weak anthropic principle alone does not distinguish between the possible contexts in which such a universe could arise. Other possible explanations for dark energy include quintessence or a modification of gravity on the largest scales. The effect on cosmology of the dark energy that these models describe is given by the dark energy's equation of state, which varies depending upon the theory. The nature of dark energy is one of the most challenging problems in cosmology. A better understanding of dark energy is likely to solve the problem of the ultimate fate of the universe. In the current cosmological epoch, the accelerated expansion due to dark energy is preventing structures larger than superclusters from forming. It is not known whether the acceleration will continue indefinitely, perhaps even increasing until a big rip; whether expansion will merely continue forever, ending in a big freeze; or whether it will eventually reverse, leading to a big crunch or some other scenario. Gravitational waves are ripples in the curvature of spacetime that propagate as waves at the speed of light, generated in certain gravitational interactions that propagate outward from their source. Gravitational-wave astronomy is an emerging branch of observational astronomy which aims to use gravitational waves to collect observational data about sources of detectable gravitational waves such as binary star systems composed of white dwarfs, neutron stars, and black holes; and events such as supernovae, and the formation of the early universe shortly after the Big Bang.
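The role of the equation of state mentioned above can be sketched as follows: for a component with constant equation-of-state parameter w = p/ρ, the energy density scales as ρ(a) ∝ a^−3(1+w). The snippet below is a generic illustration of that scaling, not a model of any specific dark energy theory.

```python
import numpy as np

def density(a, w, rho0=1.0):
    """Density of a component with constant equation of state w:
    rho(a) = rho0 * a**(-3 * (1 + w))."""
    return rho0 * a**(-3.0 * (1.0 + w))

a = np.logspace(-2, 1, 4)   # scale factors from 0.01 to 10
for w, name in [(0.0, "matter"), (1.0 / 3.0, "radiation"), (-1.0, "cosmological constant")]:
    print(f"{name:22s}", density(a, w))
# w = -1 gives a density that never dilutes, which is why a cosmological
# constant eventually dominates an expanding universe.
```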
In 2016, the LIGO Scientific Collaboration and Virgo Collaboration teams announced that they had made the first observation of gravitational waves, originating from a pair of merging black holes, using the Advanced LIGO detectors. On 15 June 2016, a second detection of gravitational waves from coalescing black holes was announced. Besides LIGO, many other gravitational-wave observatories (detectors) are under construction.
Inflation (cosmology) In physical cosmology, cosmic inflation, cosmological inflation, or just inflation, is a theory of exponential expansion of space in the early universe. The inflationary epoch lasted from 10^−36 seconds after the conjectured Big Bang singularity to some time between 10^−33 and 10^−32 seconds after the singularity. Following the inflationary period, the universe continued to expand, but at a slower rate. The acceleration of this expansion due to dark energy began after the universe was already over 9 billion years old (~4 billion years ago). Inflation theory was developed in the late 1970s and early 1980s, with notable contributions by several theoretical physicists, including Alexei Starobinsky at the Landau Institute for Theoretical Physics, Alan Guth at Cornell University, and Andrei Linde at the Lebedev Physical Institute. Alexei Starobinsky, Alan Guth, and Andrei Linde won the 2014 Kavli Prize "for pioneering the theory of cosmic inflation." The theory explains the origin of the large-scale structure of the cosmos: quantum fluctuations in the microscopic inflationary region, magnified to cosmic size, become the seeds for the growth of structure in the Universe (see galaxy formation and evolution and structure formation). Many physicists also believe that inflation explains why the universe appears to be the same in all directions (isotropic), why the cosmic microwave background radiation is distributed evenly, why the universe is flat, and why no magnetic monopoles have been observed. The detailed particle physics mechanism responsible for inflation is unknown. The basic inflationary paradigm is accepted by most physicists, as a number of inflation model predictions have been confirmed by observation; however, a substantial minority of scientists dissent from this position. The hypothetical field thought to be responsible for inflation is called the inflaton. In 2002 three of the original architects of the theory were recognized for their major contributions; physicists Alan Guth of M.I.T., Andrei Linde of Stanford, and Paul Steinhardt of Princeton shared the prestigious Dirac Prize "for development of the concept of inflation in cosmology". In 2012 Alan Guth and Andrei Linde were awarded the Breakthrough Prize in Fundamental Physics for their invention and development of inflationary cosmology. Around 1930, Edwin Hubble discovered that light from remote galaxies was redshifted; the more remote, the more shifted. This was quickly interpreted as meaning galaxies were receding from Earth. If Earth is not in some special, privileged, central position in the universe, then it would mean all galaxies are moving apart, and the further away, the faster they are moving away. It is now understood that the universe is expanding, carrying the galaxies with it, and causing this observation. Many other observations agree, and also lead to the same conclusion. However, for many years it was not clear why or how the universe might be expanding, or what it might signify. Based on a huge amount of experimental observation and theoretical work, it is now believed that the reason for the observation is that "space itself is expanding", and that it expanded very rapidly within the first fraction of a second after the Big Bang. This kind of expansion is known as a "metric" expansion.
In the terminology of mathematics and physics, a "metric" is a measure of distance that satisfies a specific list of properties, and the term implies that "the sense of distance within the universe is itself changing", although at this time it is far too small an effect to see on less than an intergalactic scale. The modern explanation for the metric expansion of space was proposed by physicist Alan Guth in 1979, while investigating the problem of why no magnetic monopoles are seen today. He found that if the universe contained a field in a positive-energy false vacuum state, then according to general relativity it would generate an exponential expansion of space. It was very quickly realized that such an expansion would resolve many other long-standing problems. These problems arise from the observation that to look like it does "today", the Universe would have to have started from very finely tuned, or "special" initial conditions at the Big Bang. Inflation theory largely resolves these problems as well, thus making a universe like ours much more likely in the context of Big Bang theory. No physical field has yet been discovered that is responsible for this inflation. However, such a field would be scalar, and the first relativistic scalar field proven to exist, the Higgs field, was only discovered in 2012–2013 and is still being researched. So it is not seen as problematic that a field responsible for cosmic inflation and the metric expansion of space has not yet been discovered. The proposed field and its quanta (the subatomic particles related to it) have been named the inflaton. If this field did not exist, scientists would have to propose a different explanation for all the observations that strongly suggest a metric expansion of space has occurred, and is still occurring (much more slowly) today. An expanding universe generally has a cosmological horizon, which, by analogy with the more familiar horizon caused by the curvature of Earth's surface, marks the boundary of the part of the Universe that an observer can see. Light (or other radiation) emitted by objects beyond the cosmological horizon in an accelerating universe never reaches the observer, because the space in between the observer and the object is expanding too rapidly. The observable universe is one "causal patch" of a much larger unobservable universe; other parts of the Universe cannot communicate with Earth yet. These parts of the Universe are outside our current cosmological horizon. In the standard hot big bang model, without inflation, the cosmological horizon moves out, bringing new regions into view. Yet as a local observer sees such a region for the first time, it looks no different from any other region of space the local observer has already seen: its background radiation is at nearly the same temperature as the background radiation of other regions, and its space-time curvature is evolving in lock-step with the others. This presents a mystery: how did these new regions know what temperature and curvature they were supposed to have? They could not have learned it by exchanging signals, because they were not previously in communication with our past light cone. Inflation answers this question by postulating that all the regions come from an earlier era with a big vacuum energy, or cosmological constant. A space with a cosmological constant is qualitatively different: instead of moving outward, the cosmological horizon stays put. For any one observer, the distance to the cosmological horizon is constant.
With exponentially expanding space, two nearby observers are separated very quickly; so much so that the distance between them quickly exceeds the limits of communication. The spatial slices are expanding very fast to cover huge volumes. Things are constantly moving beyond the cosmological horizon, which is a fixed distance away, and everything becomes homogeneous. As the inflationary field slowly relaxes to the vacuum, the cosmological constant goes to zero and space begins to expand normally. The new regions that come into view during the normal expansion phase are exactly the same regions that were pushed out of the horizon during inflation, and so they are at nearly the same temperature and curvature, because they come from the same originally small patch of space. The theory of inflation thus explains why the temperatures and curvatures of different regions are so nearly equal. It also predicts that the total curvature of a space-slice at constant global time is zero. This prediction implies that the total ordinary matter, dark matter and residual vacuum energy in the Universe have to add up to the critical density, and the evidence supports this. More strikingly, inflation allows physicists to calculate the minute differences in temperature of different regions from quantum fluctuations during the inflationary era, and many of these quantitative predictions have been confirmed. In a space that expands exponentially (or nearly exponentially) with time, any pair of free-floating objects that are initially at rest will move apart from each other at an accelerating rate, at least as long as they are not bound together by any force. From the point of view of one such object, the spacetime is something like an inside-out Schwarzschild black hole—each object is surrounded by a spherical event horizon. Once the other object has fallen through this horizon it can never return, and even light signals it sends will never reach the first object (at least so long as the space continues to expand exponentially). In the approximation that the expansion is exactly exponential, the horizon is static and remains a fixed physical distance away. This patch of an inflating universe can be described by the following metric: ds^2 = −dt^2 + e^(2√(Λ/3) t) dx^2. This exponentially expanding spacetime is called a de Sitter space, and to sustain it there must be a cosmological constant, a vacuum energy density that is constant in space and time and proportional to Λ in the above metric. For the case of exactly exponential expansion, the vacuum energy has a negative pressure p equal in magnitude to its energy density ρ; the equation of state is p = −ρ. Inflation is typically not an exactly exponential expansion, but rather quasi- or near-exponential. In such a universe the horizon will slowly grow with time as the vacuum energy density gradually decreases. Because the accelerating expansion of space stretches out any initial variations in density or temperature to very large length scales, an essential feature of inflation is that it smooths out inhomogeneities and anisotropies, and reduces the curvature of space. This pushes the Universe into a very simple state in which it is completely dominated by the inflaton field and the only significant inhomogeneities are tiny quantum fluctuations. Inflation also dilutes exotic heavy particles, such as the magnetic monopoles predicted by many extensions to the Standard Model of particle physics.
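In the metric reconstructed above, the Hubble rate is the constant H = √(Λ/3) (in units with c = 1), so the horizon sits at a fixed proper distance 1/H while comoving separations grow as e^(Ht). The sketch below uses an arbitrary illustrative value of Λ purely to show this behaviour.

```python
import math

LAM = 3.0e-35                 # assumed cosmological constant, 1/s^2, illustrative only
H = math.sqrt(LAM / 3.0)      # constant Hubble rate of de Sitter space
print(f"H = {H:.2e} 1/s; horizon at fixed distance 1/H = {1/H:.2e} light-seconds")

# Separation of two comoving observers grows exponentially:
for t in (0.0, 1.0e17, 2.0e17):          # seconds
    print(f"t = {t:.0e} s: expansion factor = {math.exp(H * t):.2f}")
```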
If the Universe was only hot enough to form such particles "before" a period of inflation, they would not be observed in nature, as they would be so rare that it is quite likely that there are none in the observable universe. Together, these effects are called the inflationary "no-hair theorem" by analogy with the no hair theorem for black holes. The "no-hair" theorem works essentially because the cosmological horizon is no different from a black-hole horizon, except for philosophical disagreements about what is on the other side. The interpretation of the no-hair theorem is that the Universe (observable and unobservable) expands by an enormous factor during inflation. In an expanding universe, energy densities generally fall, or get diluted, as the volume of the Universe increases. For example, the density of ordinary "cold" matter (dust) goes down as the inverse of the volume: when linear dimensions double, the energy density goes down by a factor of eight; the radiation energy density goes down even more rapidly as the Universe expands since the wavelength of each photon is stretched (redshifted), in addition to the photons being dispersed by the expansion. When linear dimensions are doubled, the energy density in radiation falls by a factor of sixteen (see the solution of the energy density continuity equation for an ultra-relativistic fluid). During inflation, the energy density in the inflaton field is roughly constant. However, the energy density in everything else, including inhomogeneities, curvature, anisotropies, exotic particles, and standard-model particles is falling, and through sufficient inflation these all become negligible. This leaves the Universe flat and symmetric, and (apart from the homogeneous inflaton field) mostly empty, at the moment inflation ends and reheating begins. A key requirement is that inflation must continue long enough to produce the present observable universe from a single, small inflationary Hubble volume. This is necessary to ensure that the Universe appears flat, homogeneous and isotropic at the largest observable scales. This requirement is generally thought to be satisfied if the Universe expanded by a factor of at least 10^26 during inflation. Inflation is a period of supercooled expansion, when the temperature drops by a factor of 100,000 or so. (The exact drop is model-dependent, but in the first models it was typically from 10^27 K down to 10^22 K.) This relatively low temperature is maintained during the inflationary phase. When inflation ends the temperature returns to the pre-inflationary temperature; this is called "reheating" or thermalization because the large potential energy of the inflaton field decays into particles and fills the Universe with Standard Model particles, including electromagnetic radiation, starting the radiation dominated phase of the Universe. Because the nature of the inflation is not known, this process is still poorly understood, although it is believed to take place through a parametric resonance. Inflation resolves several problems in Big Bang cosmology that were discovered in the 1970s. Inflation was first proposed by Alan Guth in 1979 while investigating the problem of why no magnetic monopoles are seen today; he found that a positive-energy false vacuum would, according to general relativity, generate an exponential expansion of space. It was very quickly realised that such an expansion would resolve many other long-standing problems.
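The expansion factor of at least 10^26 quoted above is usually restated as a number of e-folds, N = ln(a_end/a_start). A quick check:

```python
import math

expansion_factor = 1e26
N = math.log(expansion_factor)
print(f"N = ln(1e26) ~ {N:.0f} e-folds")                 # ~60, the commonly quoted figure

# The supercooling quoted above, e.g. 1e27 K down to 1e22 K:
print(f"temperature drop factor ~ {1e27 / 1e22:.0e}")    # ~1e5, i.e. 100,000
```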
These problems arise from the observation that to look like it does "today", the Universe would have to have started from very finely tuned, or "special" initial conditions at the Big Bang. Inflation attempts to resolve these problems by providing a dynamical mechanism that drives the Universe to this special state, thus making a universe like ours much more likely in the context of the Big Bang theory. The horizon problem is the problem of determining why the Universe appears statistically homogeneous and isotropic in accordance with the cosmological principle. For example, molecules in a canister of gas are distributed homogeneously and isotropically because they are in thermal equilibrium: gas throughout the canister has had enough time to interact to dissipate inhomogeneities and anisotropies. The situation is quite different in the big bang model without inflation, because gravitational expansion does not give the early universe enough time to equilibrate. In a big bang with only the matter and radiation known in the Standard Model, two widely separated regions of the observable universe cannot have equilibrated because they move apart from each other faster than the speed of light and thus have never come into causal contact. In the early Universe, it was not possible to send a light signal between the two regions. Because they have had no interaction, it is difficult to explain why they have the same temperature (are thermally equilibrated). Historically, proposed solutions included the "Phoenix universe" of Georges Lemaître, the related oscillatory universe of Richard Chace Tolman, and the Mixmaster universe of Charles Misner. Lemaître and Tolman proposed that a universe undergoing a number of cycles of contraction and expansion could come into thermal equilibrium. Their models failed, however, because of the buildup of entropy over several cycles. Misner made the (ultimately incorrect) conjecture that the Mixmaster mechanism, which made the Universe "more" chaotic, could lead to statistical homogeneity and isotropy. The flatness problem is sometimes called one of the Dicke coincidences (along with the cosmological constant problem). It became known in the 1960s that the density of matter in the Universe was comparable to the critical density necessary for a flat universe (that is, a universe whose large scale geometry is the usual Euclidean geometry, rather than a non-Euclidean hyperbolic or spherical geometry). Therefore, regardless of the shape of the universe, the contribution of spatial curvature to the expansion of the Universe could not be much greater than the contribution of matter. But as the Universe expands, the curvature redshifts away more slowly than matter and radiation. Extrapolated into the past, this presents a fine-tuning problem because the contribution of curvature to the Universe must be exponentially small (sixteen orders of magnitude less than the density of radiation at Big Bang nucleosynthesis, for example). This problem is exacerbated by recent observations of the cosmic microwave background that have demonstrated that the Universe is flat to within a few percent. The magnetic monopole problem, sometimes called the exotic-relics problem, says that if the early universe were very hot, a large number of very heavy, stable magnetic monopoles would have been produced.
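The flatness extrapolation can be sketched numerically. In the Friedmann equations, |Ω − 1| = |k|/(aH)^2, which grows roughly as a during matter domination and as a^2 during radiation domination; running that backwards from today's near-flatness yields an exponentially small early curvature. The scale factors below (for matter-radiation equality and nucleosynthesis) are rough assumed values, so the result is an order-of-magnitude illustration only.

```python
omega_today = 0.01      # assumed |Omega - 1| today, within observational limits
a_eq = 3e-4             # assumed scale factor at matter-radiation equality
a_bbn = 1e-9            # assumed scale factor at Big Bang nucleosynthesis

omega_eq = omega_today * a_eq                  # matter era: |Omega - 1| ~ a
omega_bbn = omega_eq * (a_bbn / a_eq) ** 2     # radiation era: |Omega - 1| ~ a^2
print(f"|Omega - 1| at nucleosynthesis ~ {omega_bbn:.0e}")   # of order 1e-16 to 1e-17
```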
This is a problem with Grand Unified Theories, which propose that at high temperatures (such as in the early universe) the electromagnetic force and the strong and weak nuclear forces are not actually fundamental forces but arise due to spontaneous symmetry breaking from a single gauge theory. These theories predict a number of heavy, stable particles that have not been observed in nature. The most notorious is the magnetic monopole, a kind of stable, heavy "charge" of magnetic field. Monopoles are predicted to be copiously produced in Grand Unified Theories at high temperature, and they should have persisted to the present day, to such an extent that they would become the primary constituent of the Universe. Not only is that not the case, but all searches for them have failed, placing stringent limits on the density of relic magnetic monopoles in the Universe. A period of inflation that occurs below the temperature where magnetic monopoles can be produced would offer a possible resolution of this problem: monopoles would be separated from each other as the Universe around them expands, potentially lowering their observed density by many orders of magnitude. However, as cosmologist Martin Rees has written, "Skeptics about exotic physics might not be hugely impressed by a theoretical argument to explain the absence of particles that are themselves only hypothetical. Preventive medicine can readily seem 100 percent effective against a disease that doesn't exist!" In the early days of General Relativity, Albert Einstein introduced the cosmological constant to allow a static solution, which was a three-dimensional sphere with a uniform density of matter. Later, Willem de Sitter found a highly symmetric inflating universe, which described a universe with a cosmological constant that is otherwise empty. It was discovered that Einstein's universe is unstable, and that small fluctuations cause it to collapse or turn into a de Sitter universe. In the early 1970s, Zeldovich noticed the flatness and horizon problems of Big Bang cosmology; before his work, cosmology was presumed to be symmetrical on purely philosophical grounds. In the Soviet Union, this and other considerations led Belinski and Khalatnikov to analyze the chaotic BKL singularity in General Relativity. Misner's Mixmaster universe attempted to use this chaotic behavior to solve the cosmological problems, with limited success. In the late 1970s, Sidney Coleman applied the instanton techniques developed by Alexander Polyakov and collaborators to study the fate of the false vacuum in quantum field theory. Like a metastable phase in statistical mechanics—water below the freezing temperature or above the boiling point—a quantum field would need to nucleate a large enough bubble of the new vacuum, the new phase, in order to make a transition. Coleman found the most likely decay pathway for vacuum decay and calculated the inverse lifetime per unit volume. He eventually noted that gravitational effects would be significant, but he did not calculate these effects and did not apply the results to cosmology. In the Soviet Union, Alexei Starobinsky noted that quantum corrections to general relativity should be important for the early universe. These generically lead to curvature-squared corrections to the Einstein–Hilbert action and a form of f(R) modified gravity. The solution to Einstein's equations in the presence of curvature squared terms, when the curvatures are large, leads to an effective cosmological constant.
Therefore, he proposed that the early universe went through an inflationary de Sitter era. This resolved the cosmology problems and led to specific predictions for the corrections to the microwave background radiation, corrections that were then calculated in detail. Starobinsky used the action S = (1/2κ) ∫ d^4x √(−g) (R + R^2/(6M^2)), which corresponds to the potential V(φ) ∝ (1 − e^(−√(2/3) φ/M_Pl))^2 in the Einstein frame. This results in the observables n_s = 1 − 2/N and r = 12/N^2, where N is the number of e-folds before the end of inflation. In 1978, Zeldovich noted the monopole problem, which was an unambiguous quantitative version of the horizon problem, this time in a subfield of particle physics, which led to several speculative attempts to resolve it. In 1980 Alan Guth realized that false vacuum decay in the early universe would solve the problem, leading him to propose a scalar-driven inflation. Starobinsky's and Guth's scenarios both predicted an initial de Sitter phase, differing only in mechanistic details. Guth proposed inflation in January 1981 to explain the nonexistence of magnetic monopoles; it was Guth who coined the term "inflation". At the same time, Starobinsky argued that quantum corrections to gravity would replace the initial singularity of the Universe with an exponentially expanding de Sitter phase. In October 1980, Demosthenes Kazanas suggested that exponential expansion could eliminate the particle horizon and perhaps solve the horizon problem, while Sato suggested that an exponential expansion could eliminate domain walls (another kind of exotic relic). In 1981 Einhorn and Sato published a model similar to Guth's and showed that it would resolve the puzzle of the magnetic monopole abundance in Grand Unified Theories. Like Guth, they concluded that such a model not only required fine tuning of the cosmological constant, but also would likely lead to a much too granular universe, i.e., to large density variations resulting from bubble wall collisions. Guth proposed that as the early universe cooled, it was trapped in a false vacuum with a high energy density, which is much like a cosmological constant. As the very early universe cooled it was trapped in a metastable state (it was supercooled), which it could only decay out of through the process of bubble nucleation via quantum tunneling. Bubbles of true vacuum spontaneously form in the sea of false vacuum and rapidly begin expanding at the speed of light. Guth recognized that this model was problematic because the model did not reheat properly: when the bubbles nucleated, they did not generate any radiation. Radiation could only be generated in collisions between bubble walls. But if inflation lasted long enough to solve the initial conditions problems, collisions between bubbles became exceedingly rare. In any one causal patch it is likely that only one bubble would nucleate. The bubble collision problem was solved by Linde and independently by Andreas Albrecht and Paul Steinhardt in a model named "new inflation" or "slow-roll inflation" (Guth's model then became known as "old inflation"). In this model, instead of tunneling out of a false vacuum state, inflation occurred by a scalar field rolling down a potential energy hill. When the field rolls very slowly compared to the expansion of the Universe, inflation occurs. However, when the hill becomes steeper, inflation ends and reheating can occur. Eventually, it was shown that new inflation does not produce a perfectly symmetric universe, but that quantum fluctuations in the inflaton are created. These fluctuations form the primordial seeds for all structure created in the later universe.
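Taking the observables reconstructed above at face value, n_s = 1 − 2/N and r = 12/N^2 can be evaluated for typical e-fold numbers; the values of N used below are conventional assumptions for when observable scales left the horizon, not figures from the text.

```python
for N in (50, 55, 60):       # assumed e-folds before the end of inflation
    n_s = 1.0 - 2.0 / N      # spectral index
    r = 12.0 / N**2          # tensor-to-scalar ratio
    print(f"N = {N}: n_s = {n_s:.3f}, r = {r:.4f}")
# N ~ 55 gives n_s ~ 0.964 and r ~ 0.004, compatible with the Planck
# constraints quoted later in this article.
```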
These fluctuations were first calculated by Viatcheslav Mukhanov and G. V. Chibisov in analyzing Starobinsky's similar model. In the context of inflation, they were worked out independently of the work of Mukhanov and Chibisov at the three-week 1982 Nuffield Workshop on the Very Early Universe at Cambridge University. The fluctuations were calculated by four groups working separately over the course of the workshop: Stephen Hawking; Starobinsky; Guth and So-Young Pi; and Bardeen, Steinhardt and Turner. Inflation is a mechanism for realizing the cosmological principle, which is the basis of the standard model of physical cosmology: it accounts for the homogeneity and isotropy of the observable universe. In addition, it accounts for the observed flatness and absence of magnetic monopoles. Since Guth's early work, each of these observations has received further confirmation, most impressively by the detailed observations of the cosmic microwave background made by the Planck spacecraft. This analysis shows that the Universe is flat to within 0.5 percent, and that it is homogeneous and isotropic to one part in 100,000. Inflation predicts that the structures visible in the Universe today formed through the gravitational collapse of perturbations that were formed as quantum mechanical fluctuations in the inflationary epoch. The detailed form of the spectrum of perturbations, a nearly scale-invariant Gaussian random field, is very specific and has only a few free parameters: the amplitude of the spectrum; the "spectral index", which measures the slight deviation from scale invariance predicted by inflation (perfect scale invariance corresponds to the idealized de Sitter universe); and the tensor-to-scalar ratio. The simplest inflation models, those without fine-tuning, predict a tensor-to-scalar ratio near 0.1. Inflation predicts that the observed perturbations should be in thermal equilibrium with each other (these are called "adiabatic" or "isentropic" perturbations). This structure for the perturbations has been confirmed by the Planck spacecraft, WMAP spacecraft and other cosmic microwave background (CMB) experiments, and galaxy surveys, especially the ongoing Sloan Digital Sky Survey. These experiments have shown that the one part in 100,000 inhomogeneities observed have exactly the form predicted by theory. There is evidence for a slight deviation from scale invariance. The spectral index n_s is one for a scale-invariant Harrison–Zel'dovich spectrum. The simplest inflation models predict that n_s is between 0.92 and 0.98. This is the range that is possible without fine-tuning of the parameters related to energy. From Planck data it can be inferred that n_s = 0.968 ± 0.006, and that the tensor-to-scalar ratio is less than 0.11. These are considered an important confirmation of the theory of inflation. Various inflation theories have been proposed that make radically different predictions, but they generally have much more fine tuning than should be necessary. As a physical model, however, inflation is most valuable in that it robustly predicts the initial conditions of the Universe based on only two adjustable parameters: the spectral index (which can only change in a small range) and the amplitude of the perturbations. Except in contrived models, this is true regardless of how inflation is realized in particle physics. Occasionally, effects are observed that appear to contradict the simplest models of inflation.
The first-year WMAP data suggested that the spectrum might not be nearly scale-invariant, but might instead have a slight curvature. However, the third-year data revealed that the effect was a statistical anomaly. Another effect remarked upon since the first cosmic microwave background satellite, the Cosmic Background Explorer, is that the amplitude of the quadrupole moment of the CMB is unexpectedly low and the other low multipoles appear to be preferentially aligned with the ecliptic plane. Some have claimed that this is a signature of non-Gaussianity and thus contradicts the simplest models of inflation. Others have suggested that the effect may be due to other new physics, foreground contamination, or even publication bias. An experimental program is underway to further test inflation with more precise CMB measurements. In particular, high precision measurements of the so-called "B-modes" of the polarization of the background radiation could provide evidence of the gravitational radiation produced by inflation, and could also show whether the energy scale of inflation predicted by the simplest models (10^15–10^16 GeV) is correct. In March 2014, the BICEP2 team announced the detection of B-mode CMB polarization, claiming that inflation had thereby been confirmed. The team announced that the tensor-to-scalar power ratio r was between 0.15 and 0.27 (rejecting the null hypothesis; r is expected to be 0 in the absence of inflation). However, on 19 June 2014, lowered confidence in confirming the findings was reported; on 19 September 2014, a further reduction in confidence was reported and, on 30 January 2015, even less confidence was reported. By 2018, additional data suggested, with 95% confidence, that r is 0.06 or lower: consistent with the null hypothesis, but still also consistent with many remaining models of inflation. Other potentially corroborating measurements are expected from the Planck spacecraft, although it is unclear if the signal will be visible, or if contamination from foreground sources will interfere. Other forthcoming measurements, such as those of 21 centimeter radiation (radiation emitted and absorbed from neutral hydrogen before the first stars formed), may measure the power spectrum with even greater resolution than the CMB and galaxy surveys, although it is not known if these measurements will be possible or if interference with radio sources on Earth and in the galaxy will be too great. In Guth's early proposal, it was thought that the inflaton was the Higgs field, the field that explains the mass of the elementary particles. It is now believed by some that the inflaton cannot be the Higgs field, although the recent discovery of the Higgs boson has increased the number of works considering the Higgs field as inflaton. One problem of this identification is the current tension with experimental data at the electroweak scale, which is currently under study at the Large Hadron Collider (LHC). Other models of inflation relied on the properties of Grand Unified Theories. Since the simplest models of grand unification have failed, it is now thought by many physicists that inflation will be included in a supersymmetric theory such as string theory or a supersymmetric grand unified theory. At present, while inflation is understood principally by its detailed predictions of the initial conditions for the hot early universe, the particle physics is largely "ad hoc" modelling.
As such, although predictions of inflation have been consistent with the results of observational tests, many open questions remain. One of the most severe challenges for inflation arises from the need for fine tuning. In new inflation, the "slow-roll conditions" must be satisfied for inflation to occur. The slow-roll conditions say that the inflaton potential must be flat (compared to the large vacuum energy) and that the inflaton particles must have a small mass. New inflation requires the Universe to have a scalar field with an especially flat potential and special initial conditions. However, explanations for these fine-tunings have been proposed. For example, classically scale invariant field theories, where scale invariance is broken by quantum effects, provide an explanation of the flatness of inflationary potentials, as long as the theory can be studied through perturbation theory. Linde proposed a theory known as "chaotic inflation" in which he suggested that the conditions for inflation were actually satisfied quite generically. Inflation will occur in virtually any universe that begins in a chaotic, high energy state that has a scalar field with unbounded potential energy. However, in his model the inflaton field necessarily takes values larger than one Planck unit: for this reason, these are often called "large field" models and the competing new inflation models are called "small field" models. In this situation, the predictions of effective field theory are thought to be invalid, as renormalization should cause large corrections that could prevent inflation. This problem has not yet been resolved and some cosmologists argue that the small field models, in which inflation can occur at a much lower energy scale, are better models. While inflation depends on quantum field theory (and the semiclassical approximation to quantum gravity) in an important way, it has not been completely reconciled with these theories. Brandenberger commented on fine-tuning in another situation. The amplitude of the primordial inhomogeneities produced in inflation is directly tied to the energy scale of inflation. This scale is suggested to be around 10^16 GeV, or 10^−3 times the Planck energy. The natural scale is naïvely the Planck scale, so this small value could be seen as another form of fine-tuning (called a hierarchy problem): the energy density given by the scalar potential is down by 10^−12 compared to the Planck density. This is not usually considered to be a critical problem, however, because the scale of inflation corresponds naturally to the scale of gauge unification. In many models, the inflationary phase of the Universe's expansion lasts forever in at least some regions of the Universe. This occurs because inflating regions expand very rapidly, reproducing themselves. Unless the rate of decay to the non-inflating phase is sufficiently fast, new inflating regions are produced more rapidly than non-inflating regions. In such models, most of the volume of the Universe is continuously inflating at any given time. All models of eternal inflation produce an infinite, hypothetical multiverse, typically a fractal. The multiverse theory has created significant dissension in the scientific community about the viability of the inflationary model. Paul Steinhardt, one of the original architects of the inflationary model, introduced the first example of eternal inflation in 1983.
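The 10^−12 suppression mentioned above is just the fourth power of the energy-scale ratio, since an energy density scales as the fourth power of an energy. A quick check using the ratio stated in the text:

```python
ratio = 1e-3           # inflation scale over Planck energy, as stated in the text
print(f"density ratio ~ {ratio**4:.0e}")    # 1e-12, the stated hierarchy
```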
He showed that inflation could proceed forever by producing bubbles of non-inflating space filled with hot matter and radiation, surrounded by empty space that continues to inflate. The bubbles could not grow fast enough to keep up with the inflation. Later that same year, Alexander Vilenkin showed that eternal inflation is generic. Although new inflation is classically rolling down the potential, quantum fluctuations can sometimes lift it to previous levels. These regions in which the inflaton fluctuates upwards expand much faster than regions in which the inflaton has a lower potential energy, and tend to dominate in terms of physical volume. It has been shown that any inflationary theory with an unbounded potential is eternal. There are well-known theorems that this steady state cannot continue forever into the past. Inflationary spacetime, which is similar to de Sitter space, is incomplete without a contracting region. However, unlike de Sitter space, fluctuations in a contracting inflationary space collapse to form a gravitational singularity, a point where densities become infinite. Therefore, it is necessary to have a theory for the Universe's initial conditions. In eternal inflation, regions with inflation have an exponentially growing volume, while regions that are not inflating do not. This suggests that the volume of the inflating part of the Universe in the global picture is always unimaginably larger than the part that has stopped inflating, even though inflation eventually ends as seen by any single pre-inflationary observer. Scientists disagree about how to assign a probability distribution to this hypothetical anthropic landscape. If the probability of different regions is weighted by volume, one should expect that inflation will never end; alternatively, applying the boundary condition that a local observer exists to observe it, one expects that inflation will end as late as possible. Some physicists believe this paradox can be resolved by weighting observers by their pre-inflationary volume. Others believe that there is no resolution to the paradox and that the multiverse is a critical flaw in the inflationary paradigm. Paul Steinhardt, who first introduced the eternal inflationary model, later became one of its most vocal critics for this reason. Some physicists have tried to avoid the initial conditions problem by proposing models for an eternally inflating universe with no origin. These models propose that while the Universe, on the largest scales, expands exponentially, it was, is and always will be spatially infinite and has existed, and will exist, forever. Other proposals attempt to describe the ex nihilo creation of the Universe based on quantum cosmology and the following inflation. Vilenkin put forth one such scenario. Hartle and Hawking offered the no-boundary proposal for the initial creation of the Universe in which inflation comes about naturally. Guth described the inflationary universe as the "ultimate free lunch": new universes, similar to our own, are continually produced in a vast inflating background. Gravitational interactions, in this case, circumvent (but do not violate) the first law of thermodynamics (energy conservation) and the second law of thermodynamics (entropy and the arrow of time problem). However, while there is consensus that this solves the initial conditions problem, some have disputed this, as it is much more likely that the Universe came about by a quantum fluctuation. Don Page was an outspoken critic of inflation because of this anomaly.
He stressed that the thermodynamic arrow of time necessitates low entropy initial conditions, which would be highly unlikely. According to this argument, rather than solving this problem, inflation aggravates it – the reheating at the end of the inflation era increases entropy, making it necessary for the initial state of the Universe to be even more orderly than in other Big Bang theories with no inflation phase. Hawking and Page later found ambiguous results when they attempted to compute the probability of inflation in the Hartle-Hawking initial state. Other authors have argued that, since inflation is eternal, the probability does not matter as long as it is not precisely zero: once it starts, inflation perpetuates itself and quickly dominates the Universe. However, Albrecht and Lorenzo Sorbo argued that the probability of an inflationary cosmos, consistent with today's observations, emerging by a random fluctuation from some pre-existent state is much higher than that of a non-inflationary cosmos. This is because the "seed" amount of non-gravitational energy required for the inflationary cosmos is so much less than that for a non-inflationary alternative, which outweighs any entropic considerations. Another problem that has occasionally been mentioned is the trans-Planckian problem, or trans-Planckian effects. Since the energy scale of inflation and the Planck scale are relatively close, some of the quantum fluctuations that have made up the structure in our universe were smaller than the Planck length before inflation. Therefore, there ought to be corrections from Planck-scale physics, in particular the unknown quantum theory of gravity. Some disagreement remains about the magnitude of this effect: about whether it is just on the threshold of detectability or completely undetectable. Another kind of inflation, called "hybrid inflation", is an extension of new inflation. It introduces additional scalar fields, so that while one of the scalar fields is responsible for normal slow roll inflation, another triggers the end of inflation: when inflation has continued for sufficiently long, it becomes favorable for the second field to decay into a much lower energy state. In hybrid inflation, one scalar field is responsible for most of the energy density (thus determining the rate of expansion), while another is responsible for the slow roll (thus determining the period of inflation and its termination). Thus fluctuations in the former inflaton would not affect inflation termination, while fluctuations in the latter would not affect the rate of expansion. Therefore, hybrid inflation is not eternal. When the second (slow-rolling) inflaton reaches the bottom of its potential, it changes the location of the minimum of the first inflaton's potential, which leads to a fast roll of the inflaton down its potential, leading to termination of inflation. Dark energy is broadly similar to inflation and is thought to be causing the expansion of the present-day universe to accelerate. However, the energy scale of dark energy is much lower, 10^−12 GeV, roughly 27 orders of magnitude less than the scale of inflation. The discovery of flux compactifications opened the way for reconciling inflation and string theory. "Brane inflation" suggests that inflation arises from the motion of D-branes in the compactified geometry, usually towards a stack of anti-D-branes. This theory, governed by the "Dirac-Born-Infeld action", is different from ordinary inflation. The dynamics are not completely understood.
It appears that special conditions are necessary, since inflation occurs in tunneling between two vacua in the string landscape. The process of tunneling between two vacua is a form of old inflation, but new inflation must then occur by some other mechanism. When investigating the effects the theory of loop quantum gravity would have on cosmology, a loop quantum cosmology model has evolved that provides a possible mechanism for cosmological inflation. Loop quantum gravity assumes a quantized spacetime. If the energy density is larger than can be held by the quantized spacetime, it is thought to bounce back. Other models explain some of the observations explained by inflation. However, none of these "alternatives" has the same breadth of explanation, and all still require inflation for a more complete fit with observation. They should therefore be regarded as adjuncts to inflation, rather than as alternatives. The big bounce hypothesis attempts to replace the cosmic singularity with a cosmic contraction and bounce, thereby explaining the initial conditions that led to the big bang. The flatness and horizon problems are naturally solved in the Einstein–Cartan–Sciama–Kibble theory of gravity, without needing an exotic form of matter or free parameters. This theory extends general relativity by removing a constraint of the symmetry of the affine connection and regarding its antisymmetric part, the torsion tensor, as a dynamical variable. The minimal coupling between torsion and Dirac spinors generates a spin–spin interaction that is significant in fermionic matter at extremely high densities. Such an interaction averts the unphysical Big Bang singularity, replacing it with a cusp-like bounce at a finite minimum scale factor, before which the Universe was contracting. The rapid expansion immediately after the Big Bounce explains why the present Universe at the largest scales appears spatially flat, homogeneous and isotropic. As the density of the Universe decreases, the effects of torsion weaken and the Universe smoothly enters the radiation-dominated era. The ekpyrotic and cyclic models are also considered adjuncts to inflation. These models solve the horizon problem through an expanding epoch well "before" the Big Bang, and then generate the required spectrum of primordial density perturbations during a contracting phase leading to a Big Crunch. The Universe passes through the Big Crunch and emerges in a hot Big Bang phase. In this sense they are reminiscent of Richard Chace Tolman's oscillatory universe; in Tolman's model, however, the total age of the Universe is necessarily finite, while in these models it is not necessarily so. Whether the correct spectrum of density fluctuations can be produced, and whether the Universe can successfully navigate the Big Bang/Big Crunch transition, remains a topic of controversy and current research. Ekpyrotic models avoid the magnetic monopole problem as long as the temperature at the Big Crunch/Big Bang transition remains below the Grand Unified Scale, as this is the temperature required to produce magnetic monopoles in the first place. As things stand, there is no evidence of any 'slowing down' of the expansion, but this is not surprising as each cycle is expected to last on the order of a trillion years. Another adjunct, the varying speed of light model, was offered by Jean-Pierre Petit in 1988, John Moffat in 1992, and the two-man team of Andreas Albrecht and João Magueijo in 1998.
Instead of superluminal expansion, the speed of light in the early universe was 60 orders of magnitude faster than its current value, solving the horizon and homogeneity problems. String theory requires that, in addition to the three observable spatial dimensions, additional dimensions exist that are curled up or compactified (see also Kaluza–Klein theory). Extra dimensions appear as a frequent component of supergravity models and other approaches to quantum gravity. This raised the contingent question of why four space-time dimensions became large and the rest became unobservably small. An attempt to address this question, called "string gas cosmology", was proposed by Robert Brandenberger and Cumrun Vafa. This model focuses on the dynamics of the early universe considered as a hot gas of strings. Brandenberger and Vafa showed that a dimension of spacetime can only expand if the strings that wind around it can efficiently annihilate each other. Each string is a one-dimensional object, and the largest number of dimensions in which two strings will generically intersect (and, presumably, annihilate) is three. Therefore, the most likely number of non-compact (large) spatial dimensions is three. Current work on this model centers on whether it can succeed in stabilizing the size of the compactified dimensions and produce the correct spectrum of primordial density perturbations. Supporters admit that their model "does not solve the entropy and flatness problems of standard cosmology ... and we can provide no explanation for why the current universe is so close to being spatially flat". Since its introduction by Alan Guth in 1980, the inflationary paradigm has become widely accepted. Nevertheless, many physicists, mathematicians, and philosophers of science have voiced criticisms, claiming untestable predictions and a lack of serious empirical support. In 1999, John Earman and Jesús Mosterín published a thorough critical review of inflationary cosmology, concluding, "we do not think that there are, as yet, good grounds for admitting any of the models of inflation into the standard core of cosmology." In order to work, and as pointed out by Roger Penrose from 1986 on, inflation requires extremely specific initial conditions of its own, so that the problem (or pseudo-problem) of initial conditions is not solved: "There is something fundamentally misconceived about trying to explain the uniformity of the early universe as resulting from a thermalization process. [...] For, if the thermalization is actually doing anything [...] then it represents a definite increasing of the entropy. Thus, the universe would have been even more special before the thermalization than after." The problem of specific or "fine-tuned" initial conditions would not have been solved; it would have gotten worse. At a conference in 2015, Penrose said that "inflation isn't falsifiable, it's falsified. [...] BICEP did a wonderful service by bringing all the Inflation-ists out of their shell, and giving them a black eye." A recurrent criticism of inflation is that the invoked inflaton field does not correspond to any known physical field, and that its potential energy curve seems to be an ad hoc contrivance to accommodate almost any data obtainable. Paul Steinhardt, one of the founding fathers of inflationary cosmology, has recently become one of its sharpest critics.
He calls 'bad inflation' a period of accelerated expansion whose outcome conflicts with observations, and 'good inflation' one compatible with them: "Not only is bad inflation more likely than good inflation, but no inflation is more likely than either [...] Roger Penrose considered all the possible configurations of the inflaton and gravitational fields. Some of these configurations lead to inflation [...] Other configurations lead to a uniform, flat universe directly – without inflation. Obtaining a flat universe is unlikely overall. Penrose's shocking conclusion, though, was that obtaining a flat universe without inflation is much more likely than with inflation – by a factor of 10 to the googol (10 to the 100) power!" Together with Anna Ijjas and Abraham Loeb, he wrote articles claiming that the inflationary paradigm is in trouble in view of the data from the Planck satellite. Counter-arguments were presented by Alan Guth, David Kaiser, and Yasunori Nomura and by Andrei Linde, saying that "cosmic inflation is on a stronger footing than ever before".
https://en.wikipedia.org/wiki?curid=5382
Candela The candela (symbol: cd) is the base unit of luminous intensity in the International System of Units (SI); that is, luminous power per unit solid angle emitted by a point light source in a particular direction. Luminous intensity is analogous to radiant intensity, but instead of simply adding up the contributions of every wavelength of light in the source's spectrum, the contribution of each wavelength is weighted by the standard luminosity function (a model of the sensitivity of the human eye to different wavelengths). A common wax candle emits light with a luminous intensity of roughly one candela. If emission in some directions is blocked by an opaque barrier, the emission would still be approximately one candela in the directions that are not obscured. The word "candela" is Latin for "candle". The old name "candle" is still sometimes used, as in "foot-candle" and the modern definition of "candlepower". The 26th General Conference on Weights and Measures (CGPM) redefined the candela in 2018. The new definition, which took effect on 20 May 2019, is: The candela [...] is defined by taking the fixed numerical value of the luminous efficacy of monochromatic radiation of frequency 540 × 10^12 Hz, K_cd, to be 683 when expressed in the unit lm W^−1, which is equal to cd sr W^−1, or cd sr kg^−1 m^−2 s^3, where the kilogram, metre and second are defined in terms of h, c and Δν_Cs. The frequency chosen is in the visible spectrum near green, corresponding to a wavelength of about 555 nanometres. The human eye, when adapted for bright conditions, is most sensitive near this frequency. Under these conditions, photopic vision dominates the visual perception of our eyes over scotopic vision. At other frequencies, more radiant intensity is required to achieve the same luminous intensity, according to the frequency response of the human eye. The luminous intensity for light of a particular wavelength λ is given by I_v(λ) = 683 lm/W × ȳ(λ) × I_e(λ), where I_v(λ) is the luminous intensity, I_e(λ) is the radiant intensity and ȳ(λ) is the photopic luminosity function. If more than one wavelength is present (as is usually the case), one must integrate over the spectrum of wavelengths to get the total luminous intensity. Prior to 1948, various standards for luminous intensity were in use in a number of countries. These were typically based on the brightness of the flame from a "standard candle" of defined composition, or the brightness of an incandescent filament of specific design. One of the best-known of these was the English standard of candlepower. One candlepower was the light produced by a pure spermaceti candle weighing one sixth of a pound and burning at a rate of 120 grains per hour. Germany, Austria and Scandinavia used the Hefnerkerze, a unit based on the output of a Hefner lamp. It became clear that a better-defined unit was needed. Jules Violle had proposed a standard based on the light emitted by 1 cm^2 of platinum at its melting point (or freezing point), calling this the Violle. The light intensity was due to the Planck radiator (black body) effect, and was thus independent of the construction of the device. This made it easy for anyone to measure the standard, as high-purity platinum was widely available and easily prepared. The "Commission Internationale de l'Éclairage" (International Commission on Illumination) and the CIPM proposed a "new candle" based on this basic concept. However, the value of the new unit was chosen to make it similar to the earlier unit candlepower by dividing the Violle by 60.
The decision was promulgated by the CIPM in 1946: The value of the new candle is such that the brightness of the full radiator at the temperature of solidification of platinum is 60 new candles per square centimetre. It was then ratified in 1948 by the 9th CGPM, which adopted a new name for this unit, the "candela". In 1967 the 13th CGPM removed the term "new candle" and gave an amended version of the candela definition, specifying the atmospheric pressure applied to the freezing platinum: The candela is the luminous intensity, in the perpendicular direction, of a surface of 1/600,000 square metre of a black body at the temperature of freezing platinum under a pressure of 101,325 newtons per square metre. In 1979, because of the difficulties in realizing a Planck radiator at high temperatures and the new possibilities offered by radiometry, the 16th CGPM adopted a new definition of the candela: The candela is the luminous intensity, in a given direction, of a source that emits monochromatic radiation of frequency 540 × 10^12 hertz and that has a radiant intensity in that direction of 1/683 watt per steradian. The definition describes how to produce a light source that (by definition) emits one candela, but does not specify the luminosity function for weighting radiation at other frequencies. Such a source could then be used to calibrate instruments designed to measure luminous intensity with reference to a specified luminosity function. An appendix to the SI Brochure makes it clear that the luminosity function is not uniquely specified, but must be selected to fully define the candela. The arbitrary (1/683) term was chosen so that the new definition would precisely match the old definition. Although the candela is now defined in terms of the second (an SI base unit) and the watt (a derived SI unit), the candela remains a base unit of the SI system, by definition. The 26th CGPM approved the modern definition of the candela in 2018 as part of the 2019 redefinition of SI base units, which redefined the SI base units in terms of fundamental physical constants. If a source emits a known luminous intensity I_v (in candelas) in a well-defined cone, the total luminous flux Φ_v in lumens is given by Φ_v = 2π(1 − cos(A/2)) × I_v, where A is the "radiation angle" of the lamp, i.e. the full vertex angle of the emission cone. For example, a lamp that emits 590 cd with a radiation angle of 40° emits about 224 lumens. See MR16 for emission angles of some common lamps. If the source emits light uniformly in all directions, the flux can be found by multiplying the intensity by 4π: a uniform 1 candela source emits 12.6 lumens. For the purpose of measuring illumination, the candela is not a practical unit, as it applies only to idealized point light sources, each approximated by a source small compared with the distance from which its luminous radiation is measured, and it assumes the absence of other light sources. What is directly measured by a light meter is the incident light on a sensor of finite area, i.e. illuminance in lm/m^2 (lux). However, when designing illumination from many point light sources of known, approximately omnidirectionally uniform intensities, such as light bulbs, the contributions to illuminance from incoherent light are additive, and the total illuminance can be estimated as follows.
If r"i" is the position of the "i"-th source of uniform intensity "Ii", and â is the unit vector normal to the illuminated elemental opaque area "dA" being measured, and provided that all light sources lie in the same half-space divided by the plane of this area, In the case of a single point light source of intensity "Iv", at a distance "r" and normally incident, this reduces to
https://en.wikipedia.org/wiki?curid=5385
Condensed matter physics Condensed matter physics is the field of physics that deals with the macroscopic and microscopic physical properties of matter. In particular, it is concerned with the "condensed" phases that appear whenever the number of constituents in a system is extremely large and the interactions between the constituents are strong. The most familiar examples of condensed phases are solids and liquids, which arise from the electromagnetic forces between atoms. Condensed matter physicists seek to understand the behavior of these phases by using physical laws. In particular, they include the laws of quantum mechanics, electromagnetism and statistical mechanics. More exotic condensed phases include the superconducting phase exhibited by certain materials at low temperature, the ferromagnetic and antiferromagnetic phases of spins on crystal lattices of atoms, and the Bose–Einstein condensate found in ultracold atomic systems. The study of condensed matter physics involves measuring various material properties via experimental probes along with using methods of theoretical physics to develop mathematical models that help in understanding physical behavior. The diversity of systems and phenomena available for study makes condensed matter physics the most active field of contemporary physics: one third of all American physicists self-identify as condensed matter physicists, and the Division of Condensed Matter Physics is the largest division at the American Physical Society. The field overlaps with chemistry, materials science, engineering and nanotechnology, and relates closely to atomic physics and biophysics. The theoretical physics of condensed matter shares important concepts and methods with that of particle physics and nuclear physics. A variety of topics in physics such as crystallography, metallurgy, elasticity, magnetism, etc., were treated as distinct areas until the 1940s, when they were grouped together as "solid state physics". Around the 1960s, the study of physical properties of liquids was added to this list, forming the basis for the new, related specialty of condensed matter physics. According to physicist Philip Warren Anderson, the term was coined by him and Volker Heine, when they changed the name of their group at the Cavendish Laboratories, Cambridge from "Solid state theory" to "Theory of Condensed Matter" in 1967, as they felt it did not exclude their interests in the study of liquids, nuclear matter, and so on. Although Anderson and Heine helped popularize the name "condensed matter", it had been present in Europe for some years, most prominently in the form of a journal published in English, French, and German by Springer-Verlag titled "Physics of Condensed Matter", which was launched in 1963. The funding environment and Cold War politics of the 1960s and 1970s were also factors that led some physicists to prefer the name "condensed matter physics", which emphasized the commonality of scientific problems encountered by physicists working on solids, liquids, plasmas, and other complex matter, over "solid state physics", which was often associated with the industrial applications of metals and semiconductors. The Bell Telephone Laboratories was one of the first institutes to conduct a research program in condensed matter physics. References to the "condensed" state can be traced to earlier sources.
For example, in the introduction to his 1947 book "Kinetic Theory of Liquids", Yakov Frenkel proposed that "The kinetic theory of liquids must accordingly be developed as a generalization and extension of the kinetic theory of solid bodies. As a matter of fact, it would be more correct to unify them under the title of 'condensed bodies'". One of the first studies of condensed states of matter was by English chemist Humphry Davy, in the first decades of the nineteenth century. Davy observed that of the forty chemical elements known at the time, twenty-six had metallic properties such as lustre, ductility and high electrical and thermal conductivity. This indicated that the atoms in John Dalton's atomic theory were not indivisible as Dalton claimed, but had inner structure. Davy further claimed that elements that were then believed to be gases, such as nitrogen and hydrogen, could be liquefied under the right conditions and would then behave as metals. In 1823, Michael Faraday, then an assistant in Davy's lab, successfully liquefied chlorine and went on to liquefy all known gaseous elements except for nitrogen, hydrogen, and oxygen. Later, in 1869, Irish chemist Thomas Andrews studied the phase transition from a liquid to a gas and coined the term critical point to describe the condition where a gas and a liquid were indistinguishable as phases, and Dutch physicist Johannes van der Waals supplied the theoretical framework which allowed the prediction of critical behavior based on measurements at much higher temperatures. By 1908, James Dewar and Heike Kamerlingh Onnes were able to liquefy hydrogen and the then newly discovered helium, respectively. Paul Drude in 1900 proposed the first theoretical model for a classical electron moving through a metallic solid. Drude's model described properties of metals in terms of a gas of free electrons, and was the first microscopic model to explain empirical observations such as the Wiedemann–Franz law. However, despite the success of Drude's free electron model, it had one notable problem: it was unable to correctly explain the electronic contribution to the specific heat and magnetic properties of metals, and the temperature dependence of resistivity at low temperatures. In 1911, three years after helium was first liquefied, Onnes, working at the University of Leiden, discovered superconductivity in mercury, when he observed the electrical resistivity of mercury to vanish at temperatures below a certain value. The phenomenon completely surprised the best theoretical physicists of the time, and it remained unexplained for several decades. Albert Einstein, in 1922, said regarding contemporary theories of superconductivity that "with our far-reaching ignorance of the quantum mechanics of composite systems we are very far from being able to compose a theory out of these vague ideas." Drude's classical model was augmented by Wolfgang Pauli, Arnold Sommerfeld, Felix Bloch and other physicists. Pauli realized that the free electrons in metal must obey Fermi–Dirac statistics. Using this idea, he developed the theory of paramagnetism in 1926. Shortly after, Sommerfeld incorporated the Fermi–Dirac statistics into the free electron model, improving its explanation of the heat capacity. Two years later, Bloch used quantum mechanics to describe the motion of an electron in a periodic lattice.
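For orientation, the standard textbook forms of these results (not written out in the text above) are the Drude dc conductivity, the Wiedemann–Franz ratio, and the Bloch wave; here n, e, m and τ denote the electron density, charge, mass and mean free time, and L is the Lorenz number:

\[
\sigma = \frac{n e^{2} \tau}{m},
\qquad
\frac{\kappa}{\sigma T} = L \approx 2.44 \times 10^{-8}\ \mathrm{W\,\Omega\,K^{-2}},
\qquad
\psi_{\mathbf{k}}(\mathbf{r}) = e^{i \mathbf{k}\cdot\mathbf{r}}\, u_{\mathbf{k}}(\mathbf{r}),
\]

where u_k(r) has the periodicity of the lattice, u_k(r + R) = u_k(r) for every lattice vector R.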
The mathematics of crystal structures developed by Auguste Bravais, Yevgraf Fyodorov and others was used to classify crystals by their symmetry group, and tables of crystal structures were the basis for the series "International Tables of Crystallography", first published in 1935. Band structure calculations were first used in 1930 to predict the properties of new materials, and in 1947 John Bardeen, Walter Brattain and William Shockley developed the first semiconductor-based transistor, heralding a revolution in electronics. In 1879, Edwin Herbert Hall, working at the Johns Hopkins University, discovered a voltage developed across conductors transverse to an electric current in the conductor and a magnetic field perpendicular to the current. This phenomenon, arising due to the nature of charge carriers in the conductor, came to be termed the Hall effect, but it was not properly explained at the time, since the electron was not experimentally discovered until 18 years later. After the advent of quantum mechanics, Lev Landau in 1930 developed the theory of Landau quantization and laid the foundation for the theoretical explanation of the quantum Hall effect discovered half a century later. Magnetism as a property of matter has been known in China since 4000 BC. However, the first modern studies of magnetism only started with the development of electrodynamics by Faraday, Maxwell and others in the nineteenth century, which included classifying materials as ferromagnetic, paramagnetic and diamagnetic based on their response to magnetization. Pierre Curie studied the dependence of magnetization on temperature and discovered the Curie point phase transition in ferromagnetic materials. In 1906, Pierre Weiss introduced the concept of magnetic domains to explain the main properties of ferromagnets. The first attempt at a microscopic description of magnetism was by Wilhelm Lenz and Ernst Ising through the Ising model, which described magnetic materials as consisting of a periodic lattice of spins that collectively acquire magnetization. The Ising model was solved exactly to show that spontaneous magnetization cannot occur in one dimension but is possible in higher-dimensional lattices. Further research, such as Bloch's work on spin waves and Néel's on antiferromagnetism, led to the development of new magnetic materials with applications to magnetic storage devices. The Sommerfeld model and spin models for ferromagnetism illustrated the successful application of quantum mechanics to condensed matter problems in the 1930s. However, there still were several unsolved problems, most notably the description of superconductivity and the Kondo effect. After World War II, several ideas from quantum field theory were applied to condensed matter problems. These included the recognition of collective excitation modes of solids and the important notion of a quasiparticle. Russian physicist Lev Landau used the idea for the Fermi liquid theory, wherein the low energy properties of interacting fermion systems were given in terms of what are now termed Landau quasiparticles. Landau also developed a mean field theory for continuous phase transitions, which described ordered phases as spontaneous breakdown of symmetry. The theory also introduced the notion of an order parameter to distinguish between ordered phases.
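As an illustration of the Ising model and of magnetization as an order parameter (both discussed above), here is a minimal Metropolis Monte Carlo sketch; the lattice size, temperatures and step counts are arbitrary illustrative choices, and such short runs only roughly approach equilibrium:

```python
import numpy as np

rng = np.random.default_rng(0)

def ising_magnetization(L=16, T=1.8, steps=200_000, J=1.0):
    """Metropolis Monte Carlo for the 2D Ising model on an L x L lattice with
    periodic boundaries (units with k_B = 1). Below T_c ~ 2.27 J the
    magnetization per spin settles near +/-1; above T_c it stays near 0."""
    spins = rng.choice([-1, 1], size=(L, L))
    for _ in range(steps):
        i, j = rng.integers(0, L, size=2)
        # Sum of the four nearest neighbours, with periodic boundaries.
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
              spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * J * spins[i, j] * nn   # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1
    return spins.mean()   # magnetization per spin, the order parameter

print(ising_magnetization(T=1.8))  # ordered phase: |m| near 1
print(ising_magnetization(T=3.5))  # disordered phase: m near 0
```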
Eventually, in 1957, John Bardeen, Leon Cooper and John Schrieffer developed the so-called BCS theory of superconductivity, based on the discovery that an arbitrarily small attraction between two electrons of opposite spin, mediated by phonons in the lattice, can give rise to a bound state called a Cooper pair. The study of phase transitions and the critical behavior of observables, termed critical phenomena, was a major field of interest in the 1960s. Leo Kadanoff, Benjamin Widom and Michael Fisher developed the ideas of critical exponents and Widom scaling. These ideas were unified by Kenneth G. Wilson in 1972, under the formalism of the renormalization group in the context of quantum field theory. The quantum Hall effect was discovered by Klaus von Klitzing, Dorda and Pepper in 1980, when they observed the Hall conductance to be integer multiples of the fundamental constant e^2/h. The effect was observed to be independent of parameters such as system size and impurities. In 1981, theorist Robert Laughlin proposed a theory explaining the unanticipated precision of the integer plateaus. It also implied that the Hall conductance can be characterized in terms of a topological invariant called the Chern number, which was formulated by Thouless and collaborators. Shortly after, in 1982, Horst Störmer and Daniel Tsui observed the fractional quantum Hall effect, where the conductance was a rational multiple of e^2/h. Laughlin, in 1983, realized that this was a consequence of quasiparticle interaction in the Hall states and formulated a variational solution, named the Laughlin wavefunction. The study of topological properties of the fractional Hall effect remains an active field of research. Decades later, topological band theory advanced by David J. Thouless and collaborators was further expanded, leading to the discovery of topological insulators. In 1986, Karl Müller and Johannes Bednorz discovered the first high temperature superconductor, a material which was superconducting at temperatures as high as 50 kelvins. It was realized that the high temperature superconductors are examples of strongly correlated materials, where the electron–electron interactions play an important role. A satisfactory theoretical description of high-temperature superconductors is still not known, and the field of strongly correlated materials continues to be an active research topic. In 2009, David Field and researchers at Aarhus University discovered spontaneous electric fields when creating prosaic films of various gases. This has more recently expanded to form the research area of spontelectrics. In 2012, several groups released preprints suggesting that samarium hexaboride has the properties of a topological insulator, in accord with earlier theoretical predictions. Since samarium hexaboride is an established Kondo insulator, i.e. a strongly correlated electron material, it is expected that the existence of a topological Dirac surface state in this material would lead to a topological insulator with strong electronic correlations. Theoretical condensed matter physics involves the use of theoretical models to understand properties of states of matter. These include models to study the electronic properties of solids, such as the Drude model, the band structure and the density functional theory.
Theoretical models have also been developed to study the physics of phase transitions, such as the Ginzburg–Landau theory, critical exponents and the use of mathematical methods of quantum field theory and the renormalization group. Modern theoretical studies involve the use of numerical computation of electronic structure and mathematical tools to understand phenomena such as high-temperature superconductivity, topological phases, and gauge symmetries. Theoretical understanding of condensed matter physics is closely related to the notion of emergence, wherein complex assemblies of particles behave in ways dramatically different from their individual constituents. For example, a range of phenomena related to high temperature superconductivity are understood poorly, although the microscopic physics of individual electrons and lattices is well known. Similarly, models of condensed matter systems have been studied where collective excitations behave like photons and electrons, thereby describing electromagnetism as an emergent phenomenon. Emergent properties can also occur at the interface between materials: one example is the lanthanum aluminate–strontium titanate interface, where two non-magnetic insulators are joined to create conductivity, superconductivity, and ferromagnetism. The metallic state has historically been an important building block for studying properties of solids. The first theoretical description of metals was given by Paul Drude in 1900 with the Drude model, which explained electrical and thermal properties by describing a metal as an ideal gas of the then-newly discovered electrons. He was able to derive the empirical Wiedemann–Franz law and get results in close agreement with the experiments. This classical model was then improved by Arnold Sommerfeld, who incorporated the Fermi–Dirac statistics of electrons and was able to explain the anomalous behavior of the specific heat of metals in the Wiedemann–Franz law. In 1912, the structure of crystalline solids was studied by Max von Laue and Paul Knipping, when they observed the X-ray diffraction pattern of crystals, and concluded that crystals get their structure from periodic lattices of atoms. In 1928, Swiss physicist Felix Bloch provided a wave function solution to the Schrödinger equation with a periodic potential, called the Bloch wave. Calculating the electronic properties of metals by solving the many-body wavefunction is often computationally hard, and hence approximation methods are needed to obtain meaningful predictions. The Thomas–Fermi theory, developed in the 1920s, was used to estimate system energy and electronic density by treating the local electron density as a variational parameter. Later in the 1930s, Douglas Hartree, Vladimir Fock and John Slater developed the so-called Hartree–Fock wavefunction as an improvement over the Thomas–Fermi model. The Hartree–Fock method accounted for exchange statistics of single-particle electron wavefunctions. In general, it is very difficult to solve the Hartree–Fock equation; only the free electron gas case can be solved exactly. Finally, in 1964–65, Walter Kohn, Pierre Hohenberg and Lu Jeu Sham proposed density functional theory (DFT), which gave realistic descriptions for bulk and surface properties of metals. DFT has been widely used since the 1970s for band structure calculations of a variety of solids. Some states of matter exhibit "symmetry breaking", where the relevant laws of physics possess some form of symmetry that is broken.
A common example is crystalline solids, which break continuous translational symmetry. Other examples include magnetized ferromagnets, which break rotational symmetry, and more exotic states such as the ground state of a BCS superconductor, which breaks U(1) phase rotational symmetry. Goldstone's theorem in quantum field theory states that in a system with broken continuous symmetry there exist excitations with arbitrarily low energy, called Goldstone bosons. For example, in crystalline solids these correspond to phonons, which are quantized versions of lattice vibrations. Phase transition refers to the change of phase of a system, brought about by a change in an external parameter such as temperature. A classical phase transition occurs at finite temperature when the order of the system is destroyed. For example, when ice melts and becomes water, the ordered crystal structure is destroyed. In quantum phase transitions, the temperature is set to absolute zero, and a non-thermal control parameter, such as pressure or magnetic field, causes the phase transition when order is destroyed by quantum fluctuations originating from the Heisenberg uncertainty principle. Here, the different quantum phases of the system refer to distinct ground states of the Hamiltonian matrix. Understanding the behavior of quantum phase transitions is important in the difficult task of explaining the properties of rare-earth magnetic insulators, high-temperature superconductors, and other substances. Two classes of phase transitions occur: "first-order transitions" and "second-order" or "continuous transitions". For the latter, the two phases involved do not co-exist at the transition temperature, also called the critical point. Near the critical point, systems undergo critical behavior, wherein several of their properties, such as the correlation length, specific heat, and magnetic susceptibility, diverge according to power laws. These critical phenomena present serious challenges to physicists because normal macroscopic laws are no longer valid in the region, and novel ideas and methods must be invented to find the new laws that can describe the system. The simplest theory that can describe continuous phase transitions is the Ginzburg–Landau theory, which works in the so-called mean field approximation. However, it can only roughly explain continuous phase transitions for ferroelectrics and type I superconductors, which involve long-range microscopic interactions. For other types of systems that involve short-range interactions near the critical point, a better theory is needed. Near the critical point, the fluctuations happen over a broad range of size scales, while the system as a whole is scale invariant. Renormalization group methods successively average out the shortest-wavelength fluctuations in stages, while retaining their effects in the next stage. Thus, the changes of a physical system as viewed at different size scales can be investigated systematically. The methods, together with powerful computer simulation, contribute greatly to the explanation of the critical phenomena associated with continuous phase transitions. Experimental condensed matter physics involves the use of experimental probes to try to discover new properties of materials. Such probes include effects of electric and magnetic fields, measuring response functions, transport properties and thermometry.
Commonly used experimental methods include spectroscopy, with probes such as X-rays, infrared light and inelastic neutron scattering; the study of thermal response, such as specific heat; and the measurement of transport via thermal and heat conduction. Several condensed matter experiments involve scattering of an experimental probe, such as X-rays, optical photons, neutrons, etc., on constituents of a material. The choice of scattering probe depends on the observation energy scale of interest. Visible light has energy on the scale of 1 electron volt (eV) and is used as a scattering probe to measure variations in material properties such as the dielectric constant and refractive index. X-rays have energies of the order of 10 keV and hence are able to probe atomic length scales, and are used to measure variations in electron charge density. Neutrons can also probe atomic length scales and are used to study scattering off nuclei and electron spins and magnetization (as neutrons have spin but no charge). Coulomb and Mott scattering measurements can be made by using electron beams as scattering probes. Similarly, positron annihilation can be used as an indirect measurement of local electron density. Laser spectroscopy is an excellent tool for studying the microscopic properties of a medium, for example, to study forbidden transitions in media with nonlinear optical spectroscopy. In experimental condensed matter physics, external magnetic fields act as thermodynamic variables that control the state, phase transitions and properties of material systems. Nuclear magnetic resonance (NMR) is a method by which external magnetic fields are used to find resonance modes of individual nuclei, thus giving information about the atomic, molecular, and bond structure of their neighborhood. NMR experiments can be made in magnetic fields with strengths up to 60 teslas. Higher magnetic fields can improve the quality of NMR measurement data. The study of quantum oscillations is another experimental method in which high magnetic fields are used to study material properties such as the geometry of the Fermi surface. High magnetic fields will be useful in experimentally testing various theoretical predictions, such as the quantized magnetoelectric effect, the image magnetic monopole, and the half-integer quantum Hall effect. The local structure, i.e. the structure of the nearest-neighbour atoms, of condensed matter can be investigated with methods of nuclear spectroscopy, which are very sensitive to small changes. Using specific radioactive nuclei, the nucleus becomes a probe that interacts with its surrounding electric and magnetic fields (hyperfine interactions). The methods are suitable for studying defects, diffusion, phase changes, and magnetism. Common methods include NMR, Mössbauer spectroscopy, and perturbed angular correlation (PAC). PAC is especially well suited to the study of phase changes at extreme temperatures above 2000 °C, since the method itself has no temperature dependence. Ultracold atom trapping in optical lattices is an experimental tool commonly used in condensed matter physics, and in atomic, molecular, and optical physics. The method involves using optical lasers to form an interference pattern, which acts as a "lattice", in which ions or atoms can be placed at very low temperatures. Cold atoms in optical lattices are used as "quantum simulators", that is, they act as controllable systems that can model the behavior of more complicated systems, such as frustrated magnets.
In particular, they are used to engineer one-, two- and three-dimensional lattices for a Hubbard model with pre-specified parameters, and to study phase transitions for antiferromagnetic and spin liquid ordering. In 1995, a gas of rubidium atoms cooled down to a temperature of 170 nK was used to experimentally realize the Bose–Einstein condensate, a novel state of matter originally predicted by S. N. Bose and Albert Einstein, wherein a large number of atoms occupy one quantum state. Research in condensed matter physics has given rise to several device applications, such as the development of the semiconductor transistor, laser technology, and several phenomena studied in the context of nanotechnology. Methods such as scanning tunneling microscopy can be used to control processes at the nanometer scale, and have given rise to the study of nanofabrication. In quantum computation, information is represented by quantum bits, or qubits. The qubits may decohere quickly, before useful computation is completed; this serious problem must be solved before quantum computing can be realized. To solve it, several promising approaches have been proposed in condensed matter physics, including Josephson junction qubits, spintronic qubits using the spin orientation of magnetic materials, and the topological non-Abelian anyons of fractional quantum Hall effect states. Condensed matter physics also has important uses for biophysics, for example, the experimental method of magnetic resonance imaging, which is widely used in medical diagnosis.
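For context on the Bose–Einstein condensation mentioned above, the standard ideal-gas estimate of the condensation temperature (a textbook result, not derived in the text) relates T_c to the particle mass m and number density n:

\[
T_c = \frac{2\pi\hbar^{2}}{m k_B}\left(\frac{n}{\zeta(3/2)}\right)^{2/3},
\qquad \zeta(3/2) \approx 2.612,
\]

which, for the dilute alkali gases used in such experiments, gives condensation temperatures of order 100 nK, consistent with the 170 nK quoted above.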
https://en.wikipedia.org/wiki?curid=5387
Cultural anthropology Cultural anthropology is a branch of anthropology focused on the study of cultural variation among humans. It is in contrast to social anthropology, which perceives cultural variation as a subset of a posited anthropological constant. The umbrella term sociocultural anthropology includes both the cultural and social anthropology traditions. Anthropologists have pointed out that through culture people can adapt to their environment in non-genetic ways, so people living in different environments will often have different cultures. Much of anthropological theory has originated in an appreciation of and interest in the tension between the local (particular cultures) and the global (a universal human nature, or the web of connections between people in distinct places/circumstances). Cultural anthropology has a rich methodology, including participant observation (often called fieldwork because it requires the anthropologist to spend an extended period of time at the research location), interviews, and surveys. The rubric "cultural" anthropology is generally applied to ethnographic works that are holistic in spirit, oriented to the ways in which culture affects individual experience, or aiming to provide a rounded view of the knowledge, customs, and institutions of a people. "Social" anthropology is a term applied to ethnographic works that attempt to isolate a particular system of social relations, such as those that comprise domestic life, economy, law, politics, or religion; give analytical priority to the organizational bases of social life; and attend to cultural phenomena as somewhat secondary to the main issues of social scientific inquiry. Parallel with the rise of cultural anthropology in the United States, social anthropology developed as an academic discipline in Britain and in France. One of the earliest articulations of the anthropological meaning of the term "culture" came from Sir Edward Tylor, who wrote on the first page of his 1871 book: "Culture, or civilization, taken in its broad, ethnographic sense, is that complex whole which includes knowledge, belief, art, morals, law, custom, and any other capabilities and habits acquired by man as a member of society." The term "civilization" later gave way to definitions given by V. Gordon Childe, with culture forming an umbrella term and civilization becoming a particular kind of culture. The rise of cultural anthropology took place within the context of the late 19th century, when questions regarding which cultures were "primitive" and which were "civilized" occupied the minds of not only Marx and Freud, but many others. Colonialism and its processes increasingly brought European thinkers into direct or indirect contact with "primitive others." The relative status of various humans, some of whom had modern advanced technologies that included engines and telegraphs, while others lacked anything but face-to-face communication techniques and still lived a Paleolithic lifestyle, was of interest to the first generation of cultural anthropologists. Anthropology is concerned with the lives of people in different parts of the world, particularly in relation to the discourse of beliefs and practices. In addressing this question, ethnologists in the 19th century divided into two schools of thought. Some, like Grafton Elliot Smith, argued that different groups must have learned from one another somehow, however indirectly; in other words, they argued that cultural traits spread from one place to another, or "diffused".
Other ethnologists argued that different groups had the capability of creating similar beliefs and practices independently. Some of those who advocated "independent invention", like Lewis Henry Morgan, additionally supposed that similarities meant that different groups had passed through the same stages of cultural evolution (see also classical social evolutionism). Morgan, in particular, acknowledged that certain forms of society and culture could not possibly have arisen before others. For example, industrial farming could not have been invented before simple farming, and metallurgy could not have developed without previous non-smelting processes involving metals (such as simple ground collection or mining). Morgan, like other 19th-century social evolutionists, believed there was a more or less orderly progression from the primitive to the civilized. 20th-century anthropologists largely reject the notion that all human societies must pass through the same stages in the same order, on the grounds that such a notion does not fit the empirical facts. Some 20th-century ethnologists, like Julian Steward, have instead argued that such similarities reflected similar adaptations to similar environments. Although 19th-century ethnologists saw "diffusion" and "independent invention" as mutually exclusive and competing theories, most ethnographers quickly reached a consensus that both processes occur, and that both can plausibly account for cross-cultural similarities. But these ethnographers also pointed out the superficiality of many such similarities. They noted that even traits that spread through diffusion often were given different meanings and functions from one society to another. Analyses of large human concentrations in big cities, in multidisciplinary studies by Ronald Daus, show how new methods may be applied to the understanding of people living in a global world shaped by the actions of extra-European nations, highlighting the role of ethics in modern anthropology. Accordingly, most of these anthropologists showed less interest in comparing cultures, generalizing about human nature, or discovering universal laws of cultural development, than in understanding particular cultures in those cultures' own terms. Such ethnographers and their students promoted the idea of "cultural relativism", the view that one can only understand another person's beliefs and behaviors in the context of the culture in which he or she lived or lives. Others, such as Claude Lévi-Strauss (who was influenced both by American cultural anthropology and by French Durkheimian sociology), have argued that apparently similar patterns of development reflect fundamental similarities in the structure of human thought (see structuralism). By the mid-20th century, the examples of people skipping stages, such as going from hunter-gatherer to post-industrial service occupations in one generation, were so numerous that 19th-century evolutionism was effectively disproved. Cultural relativism is a principle that was established as axiomatic in anthropological research by Franz Boas and later popularized by his students. Boas first articulated the idea in 1887: "...civilization is not something absolute, but ... is relative, and ... our ideas and conceptions are true only so far as our civilization goes." Although Boas did not coin the term, it became common among anthropologists after Boas' death in 1942, to express their synthesis of a number of ideas Boas had developed.
Boas believed that the sweep of cultures, to be found in connection with any sub-species, is so vast and pervasive that there cannot be a relationship between culture and race. Cultural relativism involves specific epistemological and methodological claims. Whether or not these claims require a specific ethical stance is a matter of debate. This principle should not be confused with moral relativism. Cultural relativism was in part a response to Western ethnocentrism. Ethnocentrism may take obvious forms, in which one consciously believes that one's people's arts are the most beautiful, values the most virtuous, and beliefs the most truthful. Boas, originally trained in physics and geography, and heavily influenced by the thought of Kant, Herder, and von Humboldt, argued that one's culture may mediate and thus limit one's perceptions in less obvious ways. This understanding of culture confronts anthropologists with two problems: first, how to escape the unconscious bonds of one's own culture, which inevitably bias our perceptions of and reactions to the world, and second, how to make sense of an unfamiliar culture. The principle of cultural relativism thus forced anthropologists to develop innovative methods and heuristic strategies. Boas and his students realized that if they were to conduct scientific research in other cultures, they would need to employ methods that would help them escape the limits of their own ethnocentrism. One such method is that of ethnography: basically, they advocated living with people of another culture for an extended period of time, so that they could learn the local language and be enculturated, at least partially, into that culture. In this context, cultural relativism is of fundamental methodological importance, because it calls attention to the importance of the local context in understanding the meaning of particular human beliefs and activities. Thus, in 1948 Virginia Heyer wrote, "Cultural relativity, to phrase it in starkest abstraction, states the relativity of the part to the whole. The part gains its cultural significance by its place in the whole, and cannot retain its integrity in a different situation." Lewis Henry Morgan (1818–1881), a lawyer from Rochester, New York, became an advocate for and ethnological scholar of the Iroquois. His comparative analyses of religion, government, material culture, and especially kinship patterns proved to be influential contributions to the field of anthropology. Like other scholars of his day (such as Edward Tylor), Morgan argued that human societies could be classified into categories of cultural evolution on a scale of progression that ranged from "savagery", to "barbarism", to "civilization". Generally, Morgan used technology (such as bowmaking or pottery) as an indicator of position on this scale. Franz Boas (1858–1942) established academic anthropology in the United States in opposition to Morgan's evolutionary perspective. His approach was empirical, skeptical of overgeneralizations, and eschewed attempts to establish universal laws. For example, Boas studied immigrant children to demonstrate that biological race was not immutable, and that human conduct and behavior resulted from nurture, rather than nature. Influenced by the German tradition, Boas argued that the world was full of distinct "cultures," rather than societies whose evolution could be measured by how much or how little "civilization" they had. 
He believed that each culture has to be studied in its particularity, and argued that cross-cultural generalizations, like those made in the natural sciences, were not possible. In doing so, he fought discrimination against immigrants, blacks, and indigenous peoples of the Americas. Many American anthropologists adopted his agenda for social reform, and theories of race continue to be popular subjects for anthropologists today. The so-called "Four Field Approach" has its origins in Boasian anthropology, dividing the discipline into the four crucial and interrelated fields of sociocultural, biological, linguistic, and archaeological anthropology. Anthropology in the United States continues to be deeply influenced by the Boasian tradition, especially its emphasis on culture. Boas used his positions at Columbia University and the American Museum of Natural History to train and develop multiple generations of students. His first generation of students included Alfred Kroeber, Robert Lowie, Edward Sapir and Ruth Benedict, who each produced richly detailed studies of indigenous North American cultures. They provided a wealth of details used to attack the theory of a single evolutionary process. Kroeber and Sapir's focus on Native American languages helped establish linguistics as a truly general science and free it from its historical focus on Indo-European languages. The publication of Alfred Kroeber's textbook "Anthropology" (1923) marked a turning point in American anthropology. After three decades of amassing material, Boasians felt a growing urge to generalize. This was most obvious in the 'Culture and Personality' studies carried out by younger Boasians such as Margaret Mead and Ruth Benedict. Influenced by psychoanalytic psychologists including Sigmund Freud and Carl Jung, these authors sought to understand the way that individual personalities were shaped by the wider cultural and social forces in which they grew up. Though such works as Mead's "Coming of Age in Samoa" (1928) and Benedict's "The Chrysanthemum and the Sword" (1946) remain popular with the American public, Mead and Benedict never had the impact on the discipline of anthropology that some expected. Boas had planned for Ruth Benedict to succeed him as chair of Columbia's anthropology department, but she was sidelined in favor of Ralph Linton, and Mead was limited to her offices at the AMNH. In the 1950s and mid-1960s anthropology tended increasingly to model itself after the natural sciences. Some anthropologists, such as Lloyd Fallers and Clifford Geertz, focused on processes of modernization by which newly independent states could develop. Others, such as Julian Steward and Leslie White, focused on how societies evolve and fit their ecological niche, an approach popularized by Marvin Harris. Economic anthropology, as influenced by Karl Polanyi and practiced by Marshall Sahlins and George Dalton, challenged standard neoclassical economics to take account of cultural and social factors, and employed Marxian analysis in anthropological study. In England, British social anthropology's paradigm began to fragment as Max Gluckman and Peter Worsley experimented with Marxism and authors such as Rodney Needham and Edmund Leach incorporated Lévi-Strauss's structuralism into their work. Structuralism also influenced a number of developments in the 1960s and 1970s, including cognitive anthropology and componential analysis.
In keeping with the times, much of anthropology became politicized through the Algerian War of Independence and opposition to the Vietnam War; Marxism became an increasingly popular theoretical approach in the discipline. By the 1970s the authors of volumes such as "Reinventing Anthropology" worried about anthropology's relevance. Since the 1980s issues of power, such as those examined in Eric Wolf's "Europe and the People Without History", have been central to the discipline. In the 1980s books like "Anthropology and the Colonial Encounter" pondered anthropology's ties to colonial inequality, while the immense popularity of theorists such as Antonio Gramsci and Michel Foucault moved issues of power and hegemony into the spotlight. Gender and sexuality became popular topics, as did the relationship between history and anthropology, influenced by Marshall Sahlins, who drew on Lévi-Strauss and Fernand Braudel to examine the relationship between symbolic meaning, sociocultural structure, and individual agency in the processes of historical transformation. Jean and John Comaroff produced a whole generation of anthropologists at the University of Chicago that focused on these themes. Also influential in these issues were Nietzsche, Heidegger, the critical theory of the Frankfurt School, Derrida and Lacan. Many anthropologists reacted against the renewed emphasis on materialism and scientific modelling derived from Marx by emphasizing the importance of the concept of culture. Authors such as David Schneider, Clifford Geertz, and Marshall Sahlins developed a more fleshed-out concept of culture as a web of meaning or signification, which proved very popular within and beyond the discipline. Geertz's interpretive method involved what he called "thick description." The cultural symbols of rituals, political and economic action, and of kinship are "read" by the anthropologist as if they were a document in a foreign language. The interpretation of those symbols must be re-framed for their anthropological audience, i.e. transformed from the "experience-near" but foreign concepts of the other culture into the "experience-distant" theoretical concepts of the anthropologist. These interpretations must then be reflected back to their originators, and their adequacy as a translation fine-tuned iteratively, a process called the hermeneutic circle. Geertz applied his method in a number of areas, creating programs of study that were very productive. His analysis of "religion as a cultural system" was particularly influential outside of anthropology. David Schneider's cultural analysis of American kinship has proven equally influential. Schneider demonstrated that the American folk-cultural emphasis on "blood connections" had an undue influence on anthropological kinship theories, and that kinship is not a biological characteristic but a cultural relationship established on very different terms in different societies. Prominent British symbolic anthropologists include Victor Turner and Mary Douglas. In the late 1980s and 1990s authors such as James Clifford pondered ethnographic authority, in particular how and why anthropological knowledge was possible and authoritative. They were reflecting trends in research and discourse initiated by feminists in the academy, although they excused themselves from commenting specifically on those pioneering critics.
Nevertheless, key aspects of feminist theory and methods became "de rigueur" as part of the 'post-modern moment' in anthropology: Ethnographies became more interpretative and reflexive, explicitly addressing the author's methodology and cultural, gender, and racial positioning, and their influence on his or her ethnographic analysis. This was part of a more general trend of postmodernism that was popular contemporaneously. Currently anthropologists pay attention to a wide variety of issues pertaining to the contemporary world, including globalization, medicine and biotechnology, indigenous rights, virtual communities, and the anthropology of industrialized societies. Modern cultural anthropology has its origins in, and developed in reaction to, 19th century ethnology, which involves the organized comparison of human societies. Scholars like E.B. Tylor and J.G. Frazer in England worked mostly with materials collected by others – usually missionaries, traders, explorers, or colonial officials – earning them the moniker of "arm-chair anthropologists". Participant observation is one of the principal research methods of cultural anthropology. It relies on the assumption that the best way to understand a group of people is to interact with them closely over a long period of time. The method originated in the field research of social anthropologists, especially Bronislaw Malinowski in Britain, the students of Franz Boas in the United States, and in the later urban research of the Chicago School of Sociology. Historically, the group of people being studied was a small, non-Western society. However, today it may be a specific corporation, a church group, a sports team, or a small town. There are no restrictions as to what the subject of participant observation can be, as long as the group of people is studied intimately by the observing anthropologist over a long period of time. This allows the anthropologist to develop trusting relationships with the subjects of study and receive an inside perspective on the culture, which helps him or her to give a richer description when writing about the culture later. Observable details (like daily time allotment) and more hidden details (like taboo behavior) are more easily observed and interpreted over a longer period of time, and researchers can discover discrepancies between what participants say—and often believe—should happen (the formal system) and what actually does happen, or between different aspects of the formal system; in contrast, a one-time survey of people's answers to a set of questions might be quite consistent, but is less likely to show conflicts between different aspects of the social system or between conscious representations and behavior. Interactions between an ethnographer and a cultural informant must go both ways. Just as an ethnographer may be naive or curious about a culture, the members of that culture may be curious about the ethnographer. To establish connections that will eventually lead to a better understanding of the cultural context of a situation, an anthropologist must be open to becoming part of the group, and willing to develop meaningful relationships with its members. One way to do this is to find a small area of common experience between an anthropologist and his or her subjects, and then to expand from this common ground into the larger area of difference.
Once a single connection has been established, it becomes easier to integrate into the community, and more likely that accurate and complete information is being shared with the anthropologist. Before participant observation can begin, an anthropologist must choose both a location and a focus of study. This focus may change once the anthropologist is actively observing the chosen group of people, but having an idea of what one wants to study before beginning fieldwork allows an anthropologist to spend time researching background information on their topic. It can also be helpful to know what previous research has been conducted in one's chosen location or on similar topics, and if the participant observation takes place in a location where the spoken language is not one the anthropologist is familiar with, he or she will usually also learn that language. This allows the anthropologist to become better established in the community. The lack of need for a translator makes communication more direct, and allows the anthropologist to give a richer, more contextualized representation of what they witness. In addition, participant observation often requires permits from governments and research institutions in the area of study, and always needs some form of funding. The majority of participant observation is based on conversation. This can take the form of casual, friendly dialogue, or can also be a series of more structured interviews. A combination of the two is often used, sometimes along with photography, mapping, artifact collection, and various other methods. In some cases, ethnographers also turn to structured observation, in which an anthropologist's observations are directed by a specific set of questions he or she is trying to answer. In the case of structured observation, an observer might be required to record the order of a series of events, or describe a certain part of the surrounding environment. While the anthropologist still makes an effort to become integrated into the group they are studying, and still participates in the events as they observe, structured observation is more directed and specific than participant observation in general. This helps to standardize the method of study when ethnographic data is being compared across several groups or is needed to fulfill a specific purpose, such as research for a governmental policy decision. One common criticism of participant observation is its lack of objectivity. Because each anthropologist has his or her own background and set of experiences, each individual is likely to interpret the same culture in a different way. Who the ethnographer is has a lot to do with what he or she will eventually write about a culture, because each researcher is influenced by his or her own perspective. This is considered a problem especially when anthropologists write in the ethnographic present, a present tense which makes a culture seem stuck in time, and ignores the fact that it may have interacted with other cultures or gradually evolved since the anthropologist made observations. To avoid this, past ethnographers have advocated for strict training, or for anthropologists working in teams. However, these approaches have not generally been successful, and modern ethnographers often choose to include their personal experiences and possible biases in their writing instead. Participant observation has also raised ethical questions, since an anthropologist is in control of what he or she reports about a culture. 
In terms of representation, an anthropologist has greater power than his or her subjects of study, and this has drawn criticism of participant observation in general. Additionally, anthropologists have struggled with the effect their presence has on a culture. Simply by being present, a researcher causes changes in a culture, and anthropologists continue to question whether or not it is appropriate to influence the cultures they study, or possible to avoid having influence. In the 20th century, most cultural and social anthropologists turned to the crafting of ethnographies. An ethnography is a piece of writing about a people, at a particular place and time. Typically, the anthropologist lives among people in another society for a period of time, simultaneously participating in and observing the social and cultural life of the group. Numerous other ethnographic techniques have resulted in ethnographic writing or details being preserved, as cultural anthropologists also curate materials, spend long hours in libraries, churches and schools poring over records, investigate graveyards, and decipher ancient scripts. A typical ethnography will also include information about physical geography, climate and habitat. It is meant to be a holistic piece of writing about the people in question, and today often includes the longest possible timeline of past events that the ethnographer can obtain through primary and secondary research. Bronisław Malinowski developed the ethnographic method, and Franz Boas taught it in the United States. Boas' students such as Alfred L. Kroeber, Ruth Benedict and Margaret Mead drew on his conception of culture and cultural relativism to develop cultural anthropology in the United States. Simultaneously, Malinowski's and A.R. Radcliffe-Brown's students were developing social anthropology in the United Kingdom. Whereas cultural anthropology focused on symbols and values, social anthropology focused on social groups and institutions. Today socio-cultural anthropologists attend to all these elements. In the early 20th century, socio-cultural anthropology developed in different forms in Europe and in the United States. European "social anthropologists" focused on observed social behaviors and on "social structure", that is, on relationships among social roles (for example, husband and wife, or parent and child) and social institutions (for example, religion, economy, and politics). American "cultural anthropologists" focused on the ways people expressed their view of themselves and their world, especially in symbolic forms, such as art and myths. These two approaches frequently converged and generally complemented one another. For example, kinship and leadership function both as symbolic systems and as social institutions. Today almost all socio-cultural anthropologists refer to the work of both sets of predecessors, and have an equal interest in what people do and in what people say. One means by which anthropologists combat ethnocentrism is to engage in the process of cross-cultural comparison. It is important to test so-called "human universals" against the ethnographic record. Monogamy, for example, is frequently touted as a universal human trait, yet comparative study shows that it is not. The Human Relations Area Files, Inc. (HRAF) is a research agency based at Yale University. Since 1949, its mission has been to encourage and facilitate worldwide comparative studies of human culture, society, and behavior in the past and present.
The name came from the Institute of Human Relations, an interdisciplinary program/building at Yale at the time. The Institute of Human Relations had sponsored HRAF's precursor, the "Cross-Cultural Survey" (see George Peter Murdock), as part of an effort to develop an integrated science of human behavior and culture. The two eHRAF databases on the Web are expanded and updated annually. "eHRAF World Cultures" includes materials on cultures, past and present, and covers nearly 400 cultures. The second database, "eHRAF Archaeology", covers major archaeological traditions and many more sub-traditions and sites around the world. Comparison across cultures includes the industrialized (or de-industrialized) West; the more traditional standard cross-cultural sample, by contrast, consists of small-scale societies. Ethnography dominates socio-cultural anthropology. Nevertheless, many contemporary socio-cultural anthropologists have rejected earlier models of ethnography as treating local cultures as bounded and isolated. These anthropologists continue to concern themselves with the distinct ways people in different locales experience and understand their lives, but they often argue that one cannot understand these particular ways of life solely from a local perspective; they instead combine a focus on the local with an effort to grasp larger political, economic, and cultural frameworks that impact local lived realities. Notable proponents of this approach include Arjun Appadurai, James Clifford, George Marcus, Sidney Mintz, Michael Taussig, Eric Wolf and Ronald Daus. A growing trend in anthropological research and analysis is the use of multi-sited ethnography, discussed in George Marcus' article, "Ethnography In/Of the World System: the Emergence of Multi-Sited Ethnography". Looking at culture as embedded in macro-constructions of a global social order, multi-sited ethnography uses traditional methodology in various locations both spatially and temporally. Through this methodology, greater insight can be gained when examining the impact of world-systems on local and global communities. Also emerging in multi-sited ethnography are greater interdisciplinary approaches to fieldwork, bringing in methods from cultural studies, media studies, science and technology studies, and others. In multi-sited ethnography, research tracks a subject across spatial and temporal boundaries. For example, a multi-sited ethnography may follow a "thing," such as a particular commodity, as it is transported through the networks of global capitalism. Multi-sited ethnography may also follow ethnic groups in diaspora, stories or rumours that appear in multiple locations and in multiple time periods, metaphors that appear in multiple ethnographic locations, or the biographies of individual people or groups as they move through space and time. It may also follow conflicts that transcend boundaries. An example of multi-sited ethnography is Nancy Scheper-Hughes' work on the international black market for the trade of human organs. In this research, she follows organs as they are transferred through various legal and illegal networks of capitalism, as well as the rumours and urban legends that circulate in impoverished communities about child kidnapping and organ theft. Sociocultural anthropologists have increasingly turned their investigative eye onto "Western" culture. For example, Philippe Bourgois won the Margaret Mead Award in 1997 for "In Search of Respect", a study of the entrepreneurs in a Harlem crack den.
Also growing more popular are ethnographies of professional communities, such as laboratory researchers, Wall Street investors, law firms, or information technology (IT) computer employees. Kinship refers to the anthropological study of the ways in which humans form and maintain relationships with one another, and further, how those relationships operate within and define social organization. Research in kinship studies often crosses over into different anthropological subfields including medical, feminist, and public anthropology. This is likely due to its fundamental concepts, as articulated by linguistic anthropologist Patrick McConvell: "Kinship is the bedrock of all human societies that we know. All humans recognize fathers and mothers, sons and daughters, brothers and sisters, uncles and aunts, husbands and wives, grandparents, cousins, and often many more complex types of relationships in the terminologies that they use. That is the matrix into which human children are born in the great majority of cases, and their first words are often kinship terms." Throughout history, kinship studies have primarily focused on the topics of marriage, descent, and procreation. Anthropologists have written extensively on the variations within marriage across cultures and its legitimacy as a human institution. There are stark differences between communities in terms of marital practice and value, leaving much room for anthropological fieldwork. For instance, the Nuer of Sudan and the Brahmans of Nepal practice polygyny, where one man is married to two or more women. The Nayar of India and the Nyimba of Tibet and Nepal practice polyandry, where one woman is often married to two or more men. The marital practice found in most cultures, however, is monogamy, where one woman is married to one man. Anthropologists also study different marital taboos across cultures, most commonly the incest taboo of marriage within sibling and parent-child relationships. It has been found that all cultures have an incest taboo to some degree, but the taboo shifts between cultures when the marriage extends beyond the nuclear family unit. There are similar foundational differences where the act of procreation is concerned. Although anthropologists have found that biology is acknowledged in every cultural relationship to procreation, there are differences in the ways in which cultures assess the constructs of parenthood. For example, in the Nuyoo municipality of Oaxaca, Mexico, it is believed that a child can have partible maternity and partible paternity. In this case, a child would have multiple biological mothers in the case that it is born of one woman and then breastfed by another. A child would have multiple biological fathers in the case that the mother had sex with multiple men, following the commonplace belief in Nuyoo culture that pregnancy must be preceded by sex with multiple men in order to have the necessary accumulation of semen. In the twenty-first century, Western ideas of kinship have evolved beyond the traditional assumptions of the nuclear family, raising anthropological questions of consanguinity, lineage, and normative marital expectation. The shift can be traced back to the 1960s, with the reassessment of kinship's basic principles offered by Edmund Leach, Rodney Needham, David Schneider, and others. Instead of relying on narrow ideas of Western normalcy, kinship studies increasingly catered to "more ethnographic voices, human agency, intersecting power structures, and historical context".
The study of kinship evolved to accommodate the fact that it cannot be separated from its institutional roots and must pay respect to the society in which it lives, including that society's contradictions, hierarchies, and individual experiences of those within it. This shift was furthered by the emergence of second-wave feminism in the early 1970s, which introduced ideas of marital oppression, sexual autonomy, and domestic subordination. Other themes that emerged during this time included the frequent comparisons between Eastern and Western kinship systems and the increasing amount of attention paid to anthropologists' own societies, a swift turn from the focus that had traditionally been paid to largely "foreign", non-Western communities. Kinship studies began to gain mainstream recognition in the late 1990s with the surging popularity of feminist anthropology, particularly with its work related to biological anthropology and the intersectional critique of gender relations. At this time, there was the arrival of "Third World feminism", a movement that argued kinship studies could not examine the gender relations of developing countries in isolation, and must pay respect to racial and economic nuance as well. This critique became relevant, for instance, in the anthropological study of Jamaica: race and class were seen as the primary obstacles to Jamaican liberation from economic imperialism, and gender as an identity was largely ignored. Third World feminism aimed to combat this in the early twenty-first century by promoting these categories as coexisting factors. In Jamaica, marriage as an institution is often substituted for a series of partners, as poor women cannot rely on regular financial contributions in a climate of economic instability. In addition, there is a common practice of Jamaican women artificially lightening their skin tones in order to secure economic survival. According to Third World feminism, such findings cannot be understood by treating gender, racial, or class differences as separate entities; instead, these categories must be acknowledged to interact together to produce unique individual experiences. Kinship studies have also experienced a rise in the interest of reproductive anthropology with the advancement of assisted reproductive technologies (ARTs), including in vitro fertilization (IVF). These advancements have led to new dimensions of anthropological research, as they challenge the Western standard of biogenetically based kinship, relatedness, and parenthood. According to anthropologists Marcia C. Inhorn and Daphna Birenbaum-Carmeli, "ARTs have pluralized notions of relatedness and led to a more dynamic notion of 'kinning', namely kinship as a process, as something under construction, rather than a natural given". With this technology, questions of kinship have emerged over the difference between biological and genetic relatedness, as gestational surrogates can provide a biological environment for the embryo while the genetic ties remain with a third party. If genetic, surrogate, and adoptive maternities are involved, anthropologists have acknowledged that there can be the possibility for three "biological" mothers to a single child. With ARTs, there are also anthropological questions concerning the intersections between wealth and fertility: ARTs are generally only available to those in the highest income bracket, meaning the infertile poor are inherently devalued in the system.
There have also been issues of reproductive tourism and bodily commodification, as individuals seek economic security through hormonal stimulation and egg harvesting, which are potentially harmful procedures. With IVF, specifically, there have been many questions of embryonic value and the status of life, particularly as it relates to the manufacturing of stem cells, testing, and research. Current issues in kinship studies, such as adoption, have revealed and challenged the Western cultural disposition towards the genetic, "blood" tie. Western biases against single-parent homes have also been explored through similar anthropological research, uncovering that a household with a single parent experiences "greater levels of scrutiny and [is] routinely seen as the 'other' of the nuclear, patriarchal family". The power dynamics in reproduction, when explored through a comparative analysis of "conventional" and "unconventional" families, have been used to dissect the Western assumptions of child bearing and child rearing in contemporary kinship studies. Kinship, as an anthropological field of inquiry, has been heavily criticized across the discipline. One critique is that, at its inception, the framework of kinship studies was far too structured and formulaic, relying on dense language and stringent rules. Another critique, explored at length by American anthropologist David Schneider, argues that kinship has been limited by its inherent Western ethnocentrism. Schneider proposes that kinship is not a field that can be applied cross-culturally, as the theory itself relies on European assumptions of normalcy. He states in the widely circulated 1984 book "A critique of the study of kinship" that "[K]inship has been defined by European social scientists, and European social scientists use their own folk culture as the source of many, if not all of their ways of formulating and understanding the world about them". However, this critique has been challenged by the argument that it is linguistics, not cultural divergence, that has allowed for a European bias, and that the bias can be lifted by centering the methodology on fundamental human concepts. Polish anthropologist Anna Wierzbicka argues that "mother" and "father" are examples of such fundamental human concepts, and can only be Westernized when conflated with English concepts such as "parent" and "sibling". A more recent critique of kinship studies is its solipsistic focus on privileged, Western human relations and its promotion of normative ideals of human exceptionalism. In "Critical Kinship Studies", social psychologists Elizabeth Peel and Damien Riggs argue for a move beyond this human-centered framework, opting instead to explore kinship through a "posthumanist" vantage point where anthropologists focus on the intersecting relationships of human animals, non-human animals, technologies and practices. The role of anthropology in institutions has expanded significantly since the end of the 20th century. Much of this development can be attributed to the rise in anthropologists working outside of academia and the increasing importance of globalization in both institutions and the field of anthropology. Anthropologists can be employed by institutions such as for-profit businesses, nonprofit organizations, and governments. For instance, cultural anthropologists are commonly employed by the United States federal government. The two types of institutions defined in the field of anthropology are total institutions and social institutions.
Total institutions are places that comprehensively coordinate the actions of people within them, and examples of total institutions include prisons, convents, and hospitals. Social institutions, on the other hand, are constructs that regulate individuals' day-to-day lives, such as kinship, religion, and economics. Anthropology of institutions may analyze labor unions, businesses ranging from small enterprises to corporations, government, medical organizations, education, prisons, and financial institutions. Nongovernmental organizations have garnered particular interest in the field of institutional anthropology because they are capable of fulfilling roles previously ignored by governments, or previously realized by families or local groups, in an attempt to mitigate social problems. The types and methods of scholarship performed in the anthropology of institutions can take a number of forms. Institutional anthropologists may study the relationship between organizations or between an organization and other parts of society. Institutional anthropology may also focus on the inner workings of an institution, such as the relationships, hierarchies and cultures formed, and the ways that these elements are transmitted and maintained, transformed, or abandoned over time. Additionally, some anthropology of institutions examines the specific design of institutions and their corresponding strength. More specifically, anthropologists may analyze specific events within an institution, perform semiotic investigations, or analyze the mechanisms by which knowledge and culture are organized and dispersed. In all manifestations of institutional anthropology, participant observation is critical to understanding the intricacies of the way an institution works and the consequences of actions taken by individuals within it. Simultaneously, anthropology of institutions extends beyond examination of the commonplace involvement of individuals in institutions to discover how and why the organizational principles evolved in the manner that they did. Common considerations taken by anthropologists in studying institutions include the physical location at which a researcher places themselves, as important interactions often take place in private, and the fact that the members of an institution are often being examined in their workplace and may not have much idle time to discuss the details of their everyday endeavors. The ability of individuals to present the workings of an institution in a particular light or frame must additionally be taken into account when using interviews and document analysis to understand an institution, as the involvement of an anthropologist may be met with distrust when information being released to the public is not directly controlled by the institution and could potentially be damaging.
https://en.wikipedia.org/wiki?curid=5388
Conversion of units Conversion of units is the conversion between different units of measurement for the same quantity, typically through multiplicative conversion factors. The process of conversion depends on the specific situation and the intended purpose. This may be governed by regulation, contract, technical specifications or other published standards, and may also call for engineering judgment. Some conversions from one system of units to another need to be exact, without increasing or decreasing the precision of the first measurement. This is sometimes called "soft conversion". It does not involve changing the physical configuration of the item being measured. By contrast, a "hard conversion" or an "adaptive conversion" may not be exactly equivalent. It changes the measurement to convenient and workable numbers and units in the new system. It sometimes involves a slightly different configuration, or size substitution, of the item. Nominal values are sometimes allowed and used. A conversion factor is used to change the units of a measured quantity without changing its value. The unity bracket method of unit conversion consists of a fraction in which the denominator is equal to the numerator, but they are in different units. Because of the identity property of multiplication, the value of a quantity will not change as long as it is multiplied by one. Also, if the numerator and denominator of a fraction are equal to each other, then the fraction is equal to one. So as long as the numerator and denominator of the fraction are equivalent, they will not affect the value of the measured quantity. The following example demonstrates how the unity bracket method is used to convert the rate 5 kilometers per second to meters per second; the symbols km, m, and s represent kilometer, meter, and second, respectively: 5 km/s × (1000 m / 1 km) = 5000 m/s. The bracketed fraction equals one because 1000 m and 1 km are the same length, so multiplying by it changes only the units. Thus, it is found that 5 kilometers per second is equal to 5000 meters per second (a code sketch of this bookkeeping appears below). There are many conversion tools. They are found in the function libraries of applications such as spreadsheets and databases, in calculators, and in macro packages and plugins for many other applications, such as mathematical, scientific and technical applications. There are many standalone applications that offer conversions among thousands of units. For example, the free software movement offers the command line utility GNU units for Linux and Windows. In cases where non-SI units are used, the numerical calculation of a formula can be done by first working out the pre-factor, and then plugging in the numerical values of the given/known quantities. For example, in the study of Bose–Einstein condensates, atomic mass is usually given in daltons, instead of kilograms, and chemical potential is often given as the Boltzmann constant times a temperature in nanokelvin. The condensate's healing length is given by ξ = ħ/√(2mμ), where ħ is the reduced Planck constant, m the atomic mass, and μ the chemical potential. For a 23Na condensate with a chemical potential of (the Boltzmann constant times) 128 nK, the calculation of the healing length (in microns) can be done in two steps. First, assume m = 1 Da and μ = kB × 1 nK; this gives ξ = 15.574 μm, which is our pre-factor. Now, make use of the fact that ξ is proportional to 1/√(mμ): with m = 23 Da and μ = 128 nK, ξ = 15.574/√(23 × 128) μm ≈ 0.287 μm. This method is especially useful for programming and/or making a worksheet, where input quantities take multiple different values; for example, with the pre-factor calculated above, it is easy to see that the healing length of 174Yb with a chemical potential of 20.3 nK is 15.574/√(174 × 20.3) μm ≈ 0.262 μm.
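The unity-bracket arithmetic above is easy to mechanize. Below is a minimal Python sketch, assuming nothing beyond the standard library; the factor tables and the function name are illustrative for this sketch, not part of any established conversion package:

```python
# Minimal sketch of conversion by multiplicative factors (unity brackets).
# Each entry states how many base units (metres, seconds) one unit equals.
TO_METRES = {"m": 1.0, "km": 1000.0, "cm": 0.01}
TO_SECONDS = {"s": 1.0, "min": 60.0, "h": 3600.0}

def convert_speed(value: float, frm: str, to: str) -> float:
    """Convert a speed between length/time unit pairs, e.g. 'km/s' -> 'm/s'."""
    frm_len, frm_time = frm.split("/")
    to_len, to_time = to.split("/")
    # Multiply by unity brackets: one for the length unit, one for the time unit.
    factor = (TO_METRES[frm_len] / TO_METRES[to_len]) * (
        TO_SECONDS[to_time] / TO_SECONDS[frm_time]
    )
    return value * factor

print(convert_speed(5.0, "km/s", "m/s"))  # 5 km/s = 5000.0 m/s, as above
print(convert_speed(36.0, "km/h", "m/s"))  # 36 km/h = 10.0 m/s
```

Each ratio in the factor is a fraction whose numerator and denominator describe the same physical amount, so multiplying by it changes only the units, exactly as in the worked example.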
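The pre-factor technique is likewise straightforward to program. The following Python sketch reproduces the healing-length example under the stated assumptions (CODATA-style constant values; the function name and layout are this sketch's own, not from any library):

```python
import math

# Evaluate xi = hbar / sqrt(2 * m * mu) once for unit inputs
# (m = 1 Da, mu = kB * 1 nK), then rescale for any atom and potential.
HBAR = 1.054571817e-34    # reduced Planck constant, J*s
KB = 1.380649e-23         # Boltzmann constant, J/K
DALTON = 1.66053907e-27   # one dalton (unified atomic mass unit), kg

# Pre-factor: healing length, in metres, for m = 1 Da and mu = kB * 1 nK.
PREFACTOR_M = HBAR / math.sqrt(2.0 * DALTON * KB * 1e-9)  # ~1.5574e-5 m

def healing_length_um(mass_da: float, mu_nk: float) -> float:
    """Healing length in microns, for mass in daltons and mu in kB times nK."""
    return PREFACTOR_M * 1e6 / math.sqrt(mass_da * mu_nk)

print(healing_length_um(23.0, 128.0))   # 23Na at 128 nK   -> ~0.287
print(healing_length_um(174.0, 20.3))   # 174Yb at 20.3 nK -> ~0.262
```

The expensive unit bookkeeping happens once, in the pre-factor; each subsequent input pair needs only a square root and a division, which is what makes the method convenient for worksheets and programs.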
This article gives lists of conversion factors for each of a number of physical quantities, which are listed in the index. For each physical quantity, a number of different units (some only of historical interest) are shown and expressed in terms of the corresponding SI unit. Conversions between units in the metric system are defined by their prefixes (for example, 1 kilogram = 1000 grams, 1 milligram = 0.001 grams) and are thus not listed in this article. Exceptions are made if the unit is commonly known by another name (for example, 1 micron = 10^−6 metre). Within each table, the units are listed alphabetically, and the SI units (base or derived) are highlighted. Notes: A velocity consists of a speed combined with a direction; the speed part of the velocity takes units of speed. "See also:" Conversion between weight (force) and mass Modern standards (such as ISO 80000) prefer the shannon to the bit as a unit for a quantity of information entropy, whereas the (discrete) storage space of digital devices is measured in bits. Thus, uncompressed redundant data occupy more than one bit of storage per shannon of information entropy. The multiples of a bit listed above are usually used with this meaning. The candela is the preferred nomenclature for the SI unit. Although the becquerel (Bq) and the hertz (Hz) both ultimately refer to the same SI base unit (s^−1), Hz is used only for periodic phenomena (i.e. repetitions at regular intervals), and Bq is used only for stochastic processes (i.e. at random intervals) associated with radioactivity. The roentgen is not an SI unit and NIST strongly discourages its continued use. Although the definitions for the sievert (Sv) and the gray (Gy) would seem to indicate that they measure the same quantities, this is not the case. The effect of receiving a certain dose of radiation (given as Gy) is variable and depends on many factors, thus a new unit was needed to denote the biological effectiveness of that dose on the body; this is known as the equivalent dose and is shown in Sv. The general relationship between absorbed dose and equivalent dose can be represented as H = Q × D, where "H" is the equivalent dose, "D" is the absorbed dose, and "Q" is a dimensionless quality factor. Thus, for any quantity of "D" measured in Gy, the numerical value for "H" measured in Sv may be different.
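As a worked illustration of H = Q × D, here is a small Python sketch; the quality factors listed are illustrative round values for common radiation types, not an authoritative regulatory table:

```python
# Minimal sketch of converting an absorbed dose in grays (Gy) to an
# equivalent dose in sieverts (Sv) via H = Q * D. The Q values below are
# illustrative round numbers, not an authoritative table.
QUALITY_FACTOR = {
    "photon": 1.0,  # X-rays and gamma rays
    "beta": 1.0,    # electrons
    "alpha": 20.0,  # alpha particles
}

def equivalent_dose_sv(absorbed_dose_gy: float, radiation: str) -> float:
    """Equivalent dose H (Sv) from absorbed dose D (Gy) and quality factor Q."""
    return QUALITY_FACTOR[radiation] * absorbed_dose_gy

print(equivalent_dose_sv(0.001, "photon"))  # 1 mGy of gamma -> 0.001 Sv
print(equivalent_dose_sv(0.001, "alpha"))   # 1 mGy of alpha -> 0.02 Sv
```

The same absorbed dose thus yields different equivalent doses for different radiation types, which is exactly why the two units are kept distinct.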
https://en.wikipedia.org/wiki?curid=5390
Democracy Democracy ("dēmokratiā", from "dēmos" 'people' and "kratos" 'rule') is a form of government in which the people have the authority to choose their governing legislation. Who people are and how authority is shared among them are core issues for democratic theory, development and constitution. Some cornerstones of these issues are freedom of assembly and speech, inclusiveness and equality, membership, consent, voting, right to life and minority rights. Generally, there are two types of democracy: direct and representative. In a direct democracy, the people directly deliberate and decide on legislation. In a representative democracy, the people elect representatives to deliberate and decide on legislation, such as in parliamentary or presidential democracy. Liquid democracy combines elements of these two basic types. However, the noun "democracy" has, over time, been modified by more than 3,500 adjectives, which suggests that it may have types that can elude and elide this duality. The most common day-to-day decision making approach of democracies has been majority rule, though other decision making approaches like supermajority and consensus have been equally integral to democracies. They serve the crucial purpose of inclusiveness and broader legitimacy on sensitive issues, counterbalancing majoritarianism, and therefore mostly take precedence at the constitutional level. In the common variant of liberal democracy, the powers of the majority are exercised within the framework of a representative democracy, but the constitution limits the majority and protects the minority, usually through the enjoyment by all of certain individual rights, e.g. freedom of speech, or freedom of association. Besides these general types of democracy, there has been a wealth of further types (see below). Republics, though often associated with democracy because of the shared principle of rule by consent of the governed, are not necessarily democracies, as republicanism does not specify "how" the people are to rule. Democracy is a system of processing conflicts in which outcomes depend on what participants do, but no single force controls what occurs and its outcomes. The uncertainty of outcomes is inherent in democracy. Democracy makes all forces struggle repeatedly to realize their interests and devolves power from groups of people to sets of rules. Western democracy, as distinct from that which existed in pre-modern societies, is generally considered to have originated in city-states such as Classical Athens and the Roman Republic, where various schemes and degrees of enfranchisement of the free male population were observed before the form disappeared in the West at the beginning of late antiquity. The English word dates back to the 16th century, from the older Middle French and Middle Latin equivalents. According to American political scientist Larry Diamond, democracy consists of four key elements: a political system for choosing and replacing the government through free and fair elections; the active participation of the people, as citizens, in politics and civic life; protection of the human rights of all citizens; a rule of law, in which the laws and procedures apply equally to all citizens. Todd Landman, nevertheless, draws our attention to the fact that democracy and human rights are two different concepts and that "there must be greater specificity in the conceptualisation and operationalisation of democracy and human rights".
The term appeared in the 5th century BC to denote the political systems then existing in Greek city-states, notably Athens, to mean "rule of the people", in contrast to aristocracy, meaning "rule of an elite". While theoretically these definitions are in opposition, in practice the distinction has been blurred historically. The political system of Classical Athens, for example, granted democratic citizenship to free men and excluded slaves and women from political participation. In virtually all democratic governments throughout ancient and modern history, democratic citizenship consisted of an elite class, until full enfranchisement was won for all adult citizens in most modern democracies through the suffrage movements of the 19th and 20th centuries. Democracy contrasts with forms of government where power is either held by an individual, as in an absolute monarchy, or where power is held by a small number of individuals, as in an oligarchy. Nevertheless, these oppositions, inherited from Greek philosophy, are now ambiguous because contemporary governments have mixed democratic, oligarchic and monarchic elements. Karl Popper defined democracy in contrast to dictatorship or tyranny, thus focusing on opportunities for the people to control their leaders and to oust them without the need for a revolution. No consensus exists on how to define democracy, but legal equality, political freedom and rule of law have been identified as important characteristics. These principles are reflected in all eligible citizens being equal before the law and having equal access to legislative processes. For example, in a representative democracy, every vote has equal weight, no unreasonable restrictions can apply to anyone seeking to become a representative, and the freedom of its eligible citizens is secured by legitimised rights and liberties which are typically protected by a constitution. Other uses of "democracy" include that of direct democracy. One theory holds that democracy requires three fundamental principles: upward control (sovereignty residing at the lowest levels of authority), political equality, and social norms by which individuals and institutions only consider acceptable acts that reflect the first two principles of upward control and political equality. The term "democracy" is sometimes used as shorthand for liberal democracy, which is a variant of representative democracy that may include elements such as political pluralism; equality before the law; the right to petition elected officials for redress of grievances; due process; civil liberties; human rights; and elements of civil society outside the government. Roger Scruton argues that democracy alone cannot provide personal and political freedom unless the institutions of civil society are also present. In some countries, notably in the United Kingdom which originated the Westminster system, the dominant principle is that of parliamentary sovereignty, while maintaining judicial independence. In the United States, separation of powers is often cited as a central attribute. In India, parliamentary sovereignty is subject to the Constitution of India which includes judicial review. Though the term "democracy" is typically used in the context of a political state, the principles also are applicable to private organisations. There are many decision making methods used in democracies, but majority rule is the dominant form.
Without countervailing measures, such as legal protections of individual or group rights, political minorities can be oppressed by the "tyranny of the majority". Majority rule is a competitive approach, opposed to consensus democracy, creating the need for elections and, more generally, deliberation to be substantively and procedurally "fair," i.e., just and equitable. In some countries, freedom of political expression, freedom of speech, freedom of the press, and internet democracy are considered important to ensure that voters are well informed, enabling them to vote according to their own interests. It has also been suggested that a basic feature of democracy is the capacity of all voters to participate freely and fully in the life of their society. With its emphasis on notions of social contract and the collective will of all the voters, democracy can also be characterised as a form of political collectivism because it is defined as a form of government in which all eligible citizens have an equal say in lawmaking. While representative democracy is sometimes equated with the republican form of government, the term "republic" classically has encompassed both democracies and aristocracies. Many democracies are constitutional monarchies, such as the United Kingdom. Historically, democracies and republics have been rare. Republican theorists linked democracy to small size: as political units grew in size, the likelihood increased that the government would turn despotic. At the same time, small political units were vulnerable to conquest. Montesquieu wrote, "If a republic be small, it is destroyed by a foreign force; if it be large, it is ruined by an internal imperfection." According to Johns Hopkins University political scientist Daniel Deudney, the creation of the United States, with its large size and its system of checks and balances, was a solution to the dual problems of size. Retrospectively, different polities outside of declared democracies have been described as proto-democratic (see History of democracy). The term "democracy" first appeared in ancient Greek political and philosophical thought in the city-state of Athens during classical antiquity. The word comes from "demos", "common people" and "kratos", "strength". Led by Cleisthenes, Athenians established what is generally held as the first democracy in 508–507 BC. Cleisthenes is referred to as "the father of Athenian democracy." Athenian democracy took the form of a direct democracy, and it had two distinguishing features: the random selection of ordinary citizens to fill the few existing government administrative and judicial offices, and a legislative assembly consisting of all Athenian citizens. All eligible citizens were allowed to speak and vote in the assembly, which set the laws of the city state. However, Athenian citizenship excluded women, slaves, foreigners (μέτοικοι / "métoikoi"), and men under 20 years of age. Owning land was not a requirement for citizenship, but it did allow one to purchase land. The exclusion of large parts of the population from the citizen body is closely related to the ancient understanding of citizenship. In most of antiquity the benefit of citizenship was tied to the obligation to fight war campaigns.
Athenian democracy was not only "direct" in the sense that decisions were made by the assembled people, but also the "most direct" in the sense that the people through the assembly, boule and courts of law controlled the entire political process and a large proportion of citizens were involved constantly in the public business. Even though the rights of the individual were not secured by the Athenian constitution in the modern sense (the ancient Greeks had no word for "rights"), the Athenians enjoyed their liberties not in opposition to the government but by living in a city that was not subject to another power and by not being subjects themselves to the rule of another person. Range voting appeared in Sparta as early as 700 BC. The Apella was an assembly of the people, held once a month, in which every male citizen of at least 30 years of age could participate. In the Apella, Spartans elected leaders and cast votes by range voting and shouting. Aristotle called this "childish", as compared with the stone voting ballots used by the Athenians. Sparta adopted it because of its simplicity, and to prevent the biased voting, vote buying, and cheating that were predominant in early democratic elections. Vaishali, capital city of the Vajjian Confederacy (Vrijji mahajanapada) in India, was also considered one of the first examples of a republic, around the 6th century BCE. Even though the Roman Republic contributed significantly to many aspects of democracy, only a minority of Romans were citizens with votes in elections for representatives. The votes of the powerful were given more weight through a system of gerrymandering, so most high officials, including members of the Senate, came from a few wealthy and noble families. In addition, the overthrow of the Roman Kingdom was the first case in the Western world of a polity being formed with the explicit purpose of being a republic, although it did not have much of a democracy. The Roman model of governance inspired many political thinkers over the centuries, and today's modern representative democracies imitate the Roman more than the Greek models because it was a state in which supreme power was held by the people and their elected representatives, and which had an elected or nominated leader. Other cultures, such as the Iroquois Nation in the Americas between around 1450 and 1600 AD, also developed a form of democratic society before they came in contact with the Europeans. This indicates that forms of democracy may have been invented in other societies around the world. While most regions in Europe during the Middle Ages were ruled by clergy or feudal lords, there existed various systems involving elections or assemblies, although often only involving a small part of the population. The Kouroukan Fouga divided the Mali Empire into ruling clans (lineages) that were represented at a great assembly called the "Gbara". However, the charter made Mali more similar to a constitutional monarchy than a democratic republic. The Parliament of England had its roots in the restrictions on the power of kings written into Magna Carta (1215), which explicitly protected certain rights of the King's subjects and implicitly supported what became the English writ of habeas corpus, safeguarding individual freedom against unlawful imprisonment with right to appeal. The first representative national assembly in England was Simon de Montfort's Parliament in 1265.
The emergence of petitioning is some of the earliest evidence of parliament being used as a forum to address the general grievances of ordinary people. However, the power to call parliament remained at the pleasure of the monarch. Studies have linked the emergence of parliamentary institutions in Europe during the medieval period to urban agglomeration and the creation of new classes, such as artisans, as well as the presence of nobility and religious elites. Scholars have also linked the emergence of representative government to Europe's relative political fragmentation. New York University political scientist David Stasavage links the fragmentation of Europe, and its subsequent democratization, to the manner in which the Roman Empire collapsed: Roman territory was conquered by small fragmented groups of Germanic tribes, thus leading to the creation of small political units where rulers were relatively weak and needed the consent of the governed to ward off foreign threats. In 17th century England, there was renewed interest in Magna Carta. The Parliament of England passed the Petition of Right in 1628 which established certain liberties for subjects. The English Civil War (1642–1651) was fought between the King and an oligarchic but elected Parliament, during which the idea of a political party took form with groups debating rights to political representation during the Putney Debates of 1647. Subsequently, the Protectorate (1653–59) and the English Restoration (1660) restored more autocratic rule, although Parliament passed the Habeas Corpus Act in 1679 which strengthened the convention that forbade detention lacking sufficient cause or evidence. After the Glorious Revolution of 1688, the Bill of Rights was enacted in 1689 which codified certain rights and liberties and is still in effect. The Bill set out the requirement for regular elections, rules for freedom of speech in Parliament and limited the power of the monarch, ensuring that, unlike much of Europe at the time, royal absolutism would not prevail. Economic historians Douglass North and Barry Weingast have characterized the institutions implemented in the Glorious Revolution as a resounding success in terms of restraining the government and ensuring protection for property rights. In the Cossack republics of Ukraine in the 16th and 17th centuries, the Cossack Hetmanate and Zaporizhian Sich, the holder of the highest post of Hetman was elected by the representatives from the country's districts. In North America, representative government began in Jamestown, Virginia, with the election of the House of Burgesses (forerunner of the Virginia General Assembly) in 1619. English Puritans who migrated from 1620 established colonies in New England whose local governance was democratic and which contributed to the democratic development of the United States; although these local assemblies had some small amounts of devolved power, the ultimate authority was held by the Crown and the English Parliament. The Puritans (Pilgrim Fathers), Baptists, and Quakers who founded these colonies applied the democratic organisation of their congregations also to the administration of their communities in worldly matters. The first Parliament of Great Britain was established in 1707, after the merger of the Kingdom of England and the Kingdom of Scotland under the Acts of Union. 
Although the monarch increasingly became a figurehead, only a small minority actually had a voice; Parliament was elected by only a few percent of the population (less than 3% as late as 1780). During the Age of Liberty in Sweden (1718–1772), civil rights were expanded and power shifted from the monarch to parliament. The taxed peasantry was represented in parliament, although with little influence, but commoners without taxed property had no suffrage. The creation of the short-lived Corsican Republic in 1755 marked the first nation in modern history to adopt a democratic constitution (all men and women above the age of 25 could vote). This Corsican Constitution was the first based on Enlightenment principles and included female suffrage, something that was not granted in most other democracies until the 20th century. In the American colonial period before 1776, and for some time after, often only adult white male property owners could vote; enslaved Africans, most free black people and most women were not extended the franchise. This changed state by state, beginning with the republican State of New Connecticut, soon after called Vermont, which, on declaring independence of Great Britain in 1777, adopted a constitution modelled on Pennsylvania's with citizenship and democratic suffrage for males with or without property, and went on to abolish slavery. On the American frontier, democracy became a way of life, with more widespread social, economic and political equality. Although not described as a democracy by the founding fathers, they shared a determination to root the American experiment in the principles of natural freedom and equality. The American Revolution led to the adoption of the United States Constitution in 1787, the oldest surviving, still active, governmental codified constitution. The Constitution provided for an elected government and protected civil rights and liberties for some, but did not end slavery nor extend voting rights in the United States, instead leaving the issue of suffrage to the individual states. Generally, suffrage was limited to white male property owners and taxpayers, of whom between 60% and 90% were eligible to vote by the end of the 1780s. The Bill of Rights in 1791 set limits on government power to protect personal freedoms but had little impact on judgements by the courts for the first 130 years after ratification. The Polish-Lithuanian Constitution of 3 May 1791 (Polish: "Konstytucja Trzeciego Maja"), also known as the Government Act (Polish: "Ustawa rządowa"), is called "the first constitution of its kind in Europe" by historian Norman Davies. Short-lived due to Russian, Prussian, and Austrian aggression, it was adopted on that date by the Great Sejm ("Four-Year Sejm", meeting in 1788–92) of the Polish–Lithuanian Commonwealth, a dual monarchy comprising the Crown of the Kingdom of Poland and the Grand Duchy of Lithuania. The Constitution was designed to correct the Commonwealth's political flaws and had been preceded by a period of agitation for, and gradual introduction of, reforms, beginning with the Convocation Sejm of 1764 and the consequent election that year of Stanisław August Poniatowski as the Commonwealth's last king.
The Constitution sought to implement a more effective constitutional monarchy, introduced political equality between townspeople and nobility, and placed the peasants under the protection of the government, mitigating the worst abuses of serfdom. It banned pernicious parliamentary institutions such as the "liberum veto", which had put the Sejm at the mercy of any single deputy, who could veto and thus undo all the legislation that had been adopted by that Sejm. The Commonwealth's neighbours reacted with hostility to the adoption of the Constitution. King Frederick William II broke Prussia's alliance with the Polish-Lithuanian Commonwealth and joined with Catherine the Great's Imperial Russia and the Targowica Confederation of anti-reform Polish magnates to defeat the Commonwealth in the Polish–Russian War of 1792. The 1791 Constitution was in force for less than 19 months. It was declared null and void by the Grodno Sejm that met in 1793, though the Sejm's legal power to do so was questionable. The Second and Third Partitions of Poland (1793, 1795) ultimately ended Poland's sovereign existence until the close of World War I in 1918. Over those 123 years, the 1791 Constitution helped keep alive Polish aspirations for the eventual restoration of the country's sovereignty. In the words of two of its principal authors, Ignacy Potocki and Hugo Kołłątaj, the 1791 Constitution was "the last will and testament of the expiring Homeland." The Constitution of 3 May 1791 combined a monarchic republic with a clear division of executive, legislative, and judiciary powers. It is generally considered Europe's first, and the world's second, modern written national constitution, after the United States Constitution that had come into force in 1789. In 1789, Revolutionary France adopted the Declaration of the Rights of Man and of the Citizen and, although it was short-lived, the National Convention was elected by all men in 1792. However, in the early 19th century, little of democracy—as theory, practice, or even as word—remained in the North Atlantic world. During this period, slavery remained a social and economic institution in places around the world. This was particularly the case in the United States, and especially in the last fifteen slave states that kept slavery legal in the American South until the Civil War. A variety of organisations were established advocating the movement of black people from the United States to locations where they would enjoy greater freedom and equality. The United Kingdom's Slave Trade Act 1807 banned the trade across the British Empire, which was enforced internationally by the Royal Navy under treaties Britain negotiated with other nations. As the voting franchise in the U.K. was increased, it also was made more uniform in a series of reforms beginning with the Reform Act 1832, although the United Kingdom did not manage to become a complete democracy until well into the 20th century. In 1833, the United Kingdom passed the Slavery Abolition Act which took effect across the British Empire. Universal male suffrage was established in France in March 1848 in the wake of the French Revolution of 1848. In 1848, several revolutions broke out in Europe as rulers were confronted with popular demands for liberal constitutions and more democratic government. In the 1860 United States Census, the slave population in the United States had grown to four million, and in Reconstruction after the Civil War (late 1860s), the newly freed slaves became citizens, with men gaining a nominal right to vote.
Full enfranchisement of citizens was not secured until after the Civil Rights Movement gained passage by the United States Congress of the Voting Rights Act of 1965. In 1876 the Ottoman Empire transitioned from an absolute monarchy to a constitutional one, and held two elections the next year to elect members to its newly formed parliament. Provisional Electoral Regulations were issued on 29 October 1876, stating that the elected members of the Provincial Administrative Councils would elect members to the first Parliament. On 24 December a new constitution was promulgated, which provided for a bicameral Parliament with a Senate appointed by the Sultan and a popularly elected Chamber of Deputies. Only men above the age of 30 who were competent in Turkish and had full civil rights were allowed to stand for election. Reasons for disqualification included holding dual citizenship, being employed by a foreign government, being bankrupt, being employed as a servant, or having "notoriety for ill deeds". Full universal suffrage was achieved in 1934. 20th-century transitions to liberal democracy have come in successive "waves of democracy", variously resulting from wars, revolutions, decolonisation, and religious and economic circumstances. Global waves of "democratic regression", reversing democratisation, have also occurred in the 1920s and 1930s, in the 1960s and 1970s, and in the 2010s. World War I and the dissolution of the Ottoman and Austro-Hungarian empires resulted in the creation of new nation-states in Europe, most of them at least nominally democratic. In the 1920s democracy flourished and women's suffrage advanced, but the Great Depression brought disenchantment and most of the countries of Europe, Latin America, and Asia turned to strong-man rule or dictatorships. Fascism and dictatorships flourished in Nazi Germany, Italy, Spain and Portugal, as did non-democratic governments in the Baltics, the Balkans, Brazil, Cuba, China, and Japan, among others. World War II brought a definitive reversal of this trend in western Europe. The democratisation of the American, British, and French sectors of occupied Germany, Austria, Italy, and occupied Japan served as a model for the later theory of government change. However, most of Eastern Europe, including the Soviet sector of Germany, fell into the non-democratic Soviet bloc. The war was followed by decolonisation, and again most of the new independent states had nominally democratic constitutions. India emerged as the world's largest democracy and continues to be so. Countries that were once part of the British Empire often adopted the British Westminster system. By 1960, the vast majority of nation-states were nominally democracies, although most of the world's populations lived in nations that experienced sham elections and other forms of subterfuge (particularly in "Communist" nations and the former colonies). A subsequent wave of democratisation brought substantial gains toward true liberal democracy for many nations. Spain, Portugal (1974), and several of the military dictatorships in South America returned to civilian rule in the late 1970s and early 1980s (Argentina in 1983, Bolivia and Uruguay in 1984, Brazil in 1985, and Chile in the early 1990s). This was followed by nations in East and South Asia by the mid-to-late 1980s.
Economic malaise in the 1980s, along with resentment of Soviet oppression, contributed to the collapse of the Soviet Union, the associated end of the Cold War, and the democratisation and liberalisation of the former Eastern bloc countries. The most successful of the new democracies were those geographically and culturally closest to western Europe, and they are now members or candidate members of the European Union. In 1986, after the toppling of the most prominent Asian dictatorship, the only democratic state of its kind at the time emerged in the Philippines with the rise of Corazon Aquino, who would later be known as the Mother of Asian Democracy. The liberal trend spread to some nations in Africa in the 1990s, most prominently in South Africa. Some recent examples of attempts at liberalisation include the Indonesian Revolution of 1998, the Bulldozer Revolution in Yugoslavia, the Rose Revolution in Georgia, the Orange Revolution in Ukraine, the Cedar Revolution in Lebanon, the Tulip Revolution in Kyrgyzstan, and the Jasmine Revolution in Tunisia. According to Freedom House, in 2007 there were 123 electoral democracies (up from 40 in 1972). According to the "World Forum on Democracy", electoral democracies now represent 120 of the 192 existing countries and constitute 58.2 percent of the world's population. At the same time, liberal democracies (i.e. countries Freedom House regards as free and respectful of basic human rights and the rule of law) number 85 and represent 38 percent of the global population. Most electoral democracies continue to exclude those younger than 18 from voting. The voting age has been lowered to 16 for national elections in a number of countries, including Brazil, Austria, Cuba, and Nicaragua. In California, a 2004 proposal to give 14-year-olds a quarter vote and 16-year-olds a half vote was ultimately defeated. In 2008, the German parliament proposed but shelved a bill that would grant the vote to each citizen at birth, to be used by a parent until the child claims it for themselves. In 2007 the United Nations declared 15 September the International Day of Democracy. According to Freedom House, starting in 2005, there have been eleven consecutive years in which declines in political rights and civil liberties throughout the world have outnumbered improvements, as populist and nationalist political forces have gained ground everywhere from Poland (under the Law and Justice Party) to the Philippines (under Rodrigo Duterte). In a Freedom House report released in 2018, Democracy Scores for most countries declined for the 12th consecutive year. "The Christian Science Monitor" reported that nationalist and populist political ideologies were gaining ground, at the expense of rule of law, in countries like Poland, Turkey and Hungary. For example, in Poland, the President appointed 27 new Supreme Court judges over objections from the European Union; in Turkey, thousands of judges were removed from their positions during a government crackdown following a failed coup attempt. Aristotle contrasted rule by the many (democracy/timocracy) with rule by the few (oligarchy/aristocracy), and with rule by a single person (tyranny, or in modern terms autocracy/absolute monarchy). He also thought that there was a good and a bad variant of each system (he considered democracy to be the degenerate counterpart to timocracy). A common view among early and Renaissance Republican theorists was that democracy could only survive in small political communities.
Heeding the lessons of the Roman Republic's shift to monarchism as it grew larger, these Republican theorists held that the expansion of territory and population inevitably led to tyranny. Democracy was therefore highly fragile and rare historically, as it could only survive in small political units, which due to their size were vulnerable to conquest by larger political units. Montesquieu famously said, "if a republic is small, it is destroyed by an outside force; if it is large, it is destroyed by an internal vice." Rousseau asserted, "It is, therefore, the natural property of small states to be governed as a republic, of middling ones to be subject to a monarch, and of large empires to be swayed by a despotic prince." Among modern political theorists, there are three contending conceptions of democracy: "aggregative democracy", "deliberative democracy", and "radical democracy". The theory of "aggregative democracy" claims that the aim of the democratic processes is to solicit citizens' preferences and aggregate them together to determine what social policies society should adopt. Therefore, proponents of this view hold that democratic participation should primarily focus on voting, where the policy with the most votes gets implemented. Different variants of aggregative democracy exist. Under "minimalism", democracy is a system of government in which citizens have given teams of political leaders the right to rule in periodic elections. According to this minimalist conception, citizens cannot and should not "rule" because, for example, on most issues, most of the time, they have no clear views or their views are not well-founded. Joseph Schumpeter articulated this view most famously in his book "Capitalism, Socialism, and Democracy". Contemporary proponents of minimalism include William H. Riker, Adam Przeworski, and Richard Posner. According to the theory of direct democracy, on the other hand, citizens should vote directly, not through their representatives, on legislative proposals. Proponents of direct democracy offer varied reasons to support this view. Political activity can be valuable in itself, it socialises and educates citizens, and popular participation can check powerful elites. Most importantly, citizens do not rule themselves unless they directly decide laws and policies. Governments will tend to produce laws and policies that are close to the views of the median voter, with half to their left and the other half to their right. This is not a desirable outcome, as it represents the action of self-interested and somewhat unaccountable political elites competing for votes. Anthony Downs suggests that ideological political parties are necessary to act as a mediating broker between individuals and governments. Downs laid out this view in his 1957 book "An Economic Theory of Democracy". Robert A. Dahl argues that the fundamental democratic principle is that, when it comes to binding collective decisions, each person in a political community is entitled to have his or her interests given equal consideration (not necessarily that all people are equally satisfied by the collective decision). He uses the term polyarchy to refer to societies in which there exists a certain set of institutions and procedures which are perceived as leading to such democracy. First and foremost among these institutions is the regular occurrence of free and open elections which are used to select representatives who then manage all or most of the public policy of the society.
However, these polyarchic procedures may not create a full democracy if, for example, poverty prevents political participation. Similarly, Ronald Dworkin argues that "democracy is a substantive, not a merely procedural, ideal." "Deliberative democracy" is based on the notion that democracy is government by deliberation. Unlike aggregative democracy, deliberative democracy holds that, for a democratic decision to be legitimate, it must be preceded by authentic deliberation, not merely the aggregation of preferences that occurs in voting. "Authentic deliberation" is deliberation among decision-makers that is free from distortions of unequal political power, such as power a decision-maker obtained through economic wealth or the support of interest groups. If the decision-makers cannot reach consensus after authentically deliberating on a proposal, then they vote on the proposal using a form of majority rule. "Radical democracy" is based on the idea that there are hierarchical and oppressive power relations that exist in society. Democracy's role is to make visible and challenge those relations by allowing for difference, dissent and antagonisms in decision-making processes. Several freedom indices are published by various organisations, each according to its own definition of the term and each relying on different types of data. Dieter Fuchs and Edeltraud Roller suggest that, in order to truly measure the quality of democracy, objective measurements need to be complemented by "subjective measurements based on the perspective of citizens". Similarly, Quinton Mayne and Brigitte Geißel also argue that the quality of democracy does not depend exclusively on the performance of institutions, but also on the citizens' own dispositions and commitment. Because democracy is an overarching concept that includes the functioning of diverse institutions which are not easy to measure, strong limitations exist in quantifying and econometrically measuring the potential effects of democracy or its relationship with other phenomena, whether inequality, poverty, or education. Given the constraints in acquiring reliable data with within-country variation on aspects of democracy, academics have largely studied cross-country variations. Yet variations between democratic institutions are very large across countries, which constrains meaningful comparisons using statistical approaches. Since democracy is typically measured aggregately as a macro variable using a single observation for each country and each year, studying democracy faces a range of econometric constraints and is limited to basic correlations. Cross-country comparison of a composite, comprehensive and qualitative concept like democracy may thus, for many purposes, not be methodologically rigorous or useful. Democracy has taken a number of forms, both in theory and practice. Some varieties of democracy provide better representation and more freedom for their citizens than others. However, if any democracy is not structured to prohibit the government from excluding the people from the legislative process, or any branch of government from altering the separation of powers in its favour, then a branch of the system can accumulate too much power and destroy the democracy. The following kinds of democracy are not exclusive of one another: many specify details of aspects that are independent of one another and can co-exist in a single system.
Several variants of democracy exist, but there are two basic forms, both of which concern how the whole body of all eligible citizens executes its will. One form of democracy is direct democracy, in which all eligible citizens have active participation in the political decision making, for example voting on policy initiatives directly. In most modern democracies, the whole body of eligible citizens remains the sovereign power but political power is exercised indirectly through elected representatives; this is called a representative democracy. Direct democracy is a political system where the citizens participate in the decision-making personally, rather than relying on intermediaries or representatives. The use of a lot system, a characteristic of Athenian democracy, is unique to direct democracies. In this system, important governmental and administrative tasks are performed by citizens picked by lottery. A direct democracy gives the voting population the power to decide laws and policies directly, for example through popular initiatives, referendums, and the recall of elected officials. Within modern-day representative governments, certain electoral tools like referendums, citizens' initiatives and recall elections are referred to as forms of direct democracy. However, some advocates of direct democracy argue for local assemblies of face-to-face discussion. Direct democracy as a government system currently exists in the Swiss cantons of Appenzell Innerrhoden and Glarus, the Rebel Zapatista Autonomous Municipalities, communities affiliated with the CIPO-RFM, the Bolivian city councils of FEJUVE, and Kurdish cantons of Rojava. Representative democracy involves the election of government officials by the people being represented. If the head of state is also democratically elected then it is called a democratic republic. The most common mechanisms involve election of the candidate with a majority or a plurality of the votes. Most western countries have representative systems. Representatives may be elected by a particular district (or constituency), or represent the entire electorate through proportional systems, with some systems using a combination of the two. Some representative democracies also incorporate elements of direct democracy, such as referendums. A characteristic of representative democracy is that while the representatives are elected by the people to act in the people's interest, they retain the freedom to exercise their own judgement as to how best to do so. Such discretion has drawn criticism of representative democracy, with critics pointing out the contradictions between representation mechanisms and democracy. Parliamentary democracy is a representative democracy in which government is appointed by, or can be dismissed by, representatives, as opposed to "presidential rule", wherein the president is both head of state and head of government and is elected by the voters. Under a parliamentary democracy, government is exercised by delegation to an executive ministry and subject to ongoing review, checks and balances by the legislative parliament elected by the people. A parliament can dismiss a Prime Minister at any point in time that it feels he or she is not doing the job to the expectations of the legislature. This is done through a vote of no confidence, in which the legislature decides by majority whether to remove the Prime Minister from office.
In some countries, the Prime Minister can also call an election whenever he or she so chooses, and typically the Prime Minister will hold an election when in good favour with the public, so as to be re-elected. In other parliamentary democracies, extra elections are virtually never held, a minority government being preferred until the next ordinary elections. An important feature of the parliamentary democracy is the concept of the "loyal opposition". The essence of the concept is that the second largest political party (or coalition) opposes the governing party (or coalition), while still remaining loyal to the state and its democratic principles. Presidential democracy is a system where the public elects the president through free and fair elections. The president serves as both the head of state and head of government, controlling most of the executive powers. The president serves for a specific term and cannot exceed that amount of time. Elections typically have a fixed date and are not easily changed. The president has direct control over the cabinet, specifically appointing the cabinet members. The president cannot be easily removed from office by the legislature, but he or she cannot remove members of the legislative branch any more easily. This provides some measure of separation of powers. In consequence, however, the president and the legislature may end up in the control of separate parties, allowing one to block the other and thereby interfere with the orderly operation of the state. This may be the reason why presidential democracy is not very common outside the Americas, Africa, and Central and Southeast Asia. A semi-presidential system is a system of democracy in which the government includes both a prime minister and a president. The particular powers held by the prime minister and president vary by country. Some modern democracies that are predominantly representative in nature also heavily rely upon forms of political action that are directly democratic. These democracies, which combine elements of representative democracy and direct democracy, are termed "hybrid democracies", "semi-direct democracies" or "participatory democracies". Examples include Switzerland and some U.S. states, where frequent use is made of referendums and initiatives. The Swiss confederation is a semi-direct democracy. At the federal level, citizens can propose changes to the constitution (federal popular initiative) or ask for a referendum to be held on any law voted by the parliament. Between January 1995 and June 2005, Swiss citizens voted 31 times, to answer 103 questions (during the same period, French citizens participated in only two referendums), although in the past 120 years fewer than 250 initiatives have been put to referendum. The populace has been conservative, approving only about 10% of the initiatives put before them; in addition, they have often opted for a version of the initiative rewritten by government. In the United States, no mechanisms of direct democracy exist at the federal level, but over half of the states and many localities provide for citizen-sponsored ballot initiatives (also called "ballot measures", "ballot questions" or "propositions"), and the vast majority of states allow for referendums. Examples include the extensive use of referendums in the US state of California, which has more than 20 million voters. In New England, town meetings are often used, especially in rural areas, to manage local government.
This creates a hybrid form of government, with a local direct democracy and a representative state government. For example, most Vermont towns hold annual town meetings in March in which town officers are elected, budgets for the town and schools are voted on, and citizens have the opportunity to speak and be heard on political matters. Many countries such as the United Kingdom, Spain, the Netherlands, Belgium, Scandinavian countries, Thailand, Japan and Bhutan turned powerful monarchs into constitutional monarchs with limited or, often gradually, merely symbolic roles. For example, in the predecessor states to the United Kingdom, constitutional monarchy began to emerge and has continued uninterrupted since the Glorious Revolution of 1688 and the passage of the Bill of Rights 1689. In other countries, the monarchy was abolished along with the aristocratic system (as in France, China, Russia, Germany, Austria, Hungary, Italy, Greece and Egypt). An elected president, with or without significant powers, became the head of state in these countries. Elite upper houses of legislatures, which often had lifetime or hereditary tenure, were common in many nations. Over time, these either had their powers limited (as with the British House of Lords) or else became elective and remained powerful (as with the Australian Senate). The term "republic" has many different meanings, but today often refers to a representative democracy with an elected head of state, such as a president, serving for a limited term, in contrast to states with a hereditary monarch as a head of state, even if these states also are representative democracies with an elected or appointed head of government such as a prime minister. The Founding Fathers of the United States rarely praised and often criticised democracy, which in their time tended to specifically mean direct democracy, often without the protection of a constitution enshrining basic rights; James Madison argued, especially in "The Federalist" No. 10, that what distinguished a direct "democracy" from a "republic" was that the former became weaker as it got larger and suffered more violently from the effects of faction, whereas a republic could get stronger as it got larger and combat faction by its very structure. What was critical to American values, John Adams insisted, was that the government be "bound by fixed laws, which the people have a voice in making, and a right to defend." As Benjamin Franklin was leaving the Constitutional Convention, where the U.S. Constitution had just been drafted, a woman asked him "Well, Doctor, what have we got—a republic or a monarchy?". He replied "A republic—if you can keep it." A liberal democracy is a representative democracy in which the ability of the elected representatives to exercise decision-making power is subject to the rule of law, and moderated by a constitution or laws that emphasise the protection of the rights and freedoms of individuals, and which places constraints on the leaders and on the extent to which the will of the majority can be exercised against the rights of minorities (see civil liberties). In a liberal democracy, it is possible for some large-scale decisions to emerge from the many individual decisions that citizens are free to make. In other words, citizens can "vote with their feet" or "vote with their dollars", resulting in significant informal government-by-the-masses that exercises many "powers" associated with formal government elsewhere. Socialist thought has several different views on democracy.
Social democracy, democratic socialism, and the dictatorship of the proletariat (usually exercised through Soviet democracy) are some examples. Many democratic socialists and social democrats believe in a form of participatory, industrial, economic and/or workplace democracy combined with a representative democracy. Within Marxist orthodoxy there is a hostility to what is commonly called "liberal democracy", which Marxists refer to simply as parliamentary democracy because of its often centralised nature. Because of orthodox Marxists' desire to eliminate the political elitism they see in capitalism, Marxists, Leninists and Trotskyists believe in direct democracy implemented through a system of communes (which are sometimes called soviets). This system ultimately manifests itself as council democracy and begins with workplace democracy. Anarchists are split in this domain, depending on whether they believe that majority rule is tyrannical or not. To many anarchists, the only form of democracy considered acceptable is direct democracy. Pierre-Joseph Proudhon argued that the only acceptable form of direct democracy is one in which it is recognised that majority decisions are not binding on the minority, even when unanimous. However, anarcho-communist Murray Bookchin criticised individualist anarchists for opposing democracy, and says "majority rule" is consistent with anarchism. Some anarcho-communists oppose the majoritarian nature of direct democracy, feeling that it can impede individual liberty, and opt in favour of a non-majoritarian form of consensus democracy, similar to Proudhon's position on direct democracy. Henry David Thoreau, who did not self-identify as an anarchist but argued for "a better government" and is cited as an inspiration by some anarchists, argued that people should not be in the position of ruling others or being ruled when there is no consent. Sometimes called "democracy without elections", sortition chooses decision makers via a random process. The intention is that those chosen will be representative of the opinions and interests of the people at large, and be more fair and impartial than an elected official. The technique was in widespread use in Athenian democracy and Renaissance Florence and is still used in modern jury selection. A consociational democracy allows for simultaneous majority votes in two or more ethno-religious constituencies, and policies are enacted only if they gain majority support from both or all of them. A consensus democracy, in contrast, would not be dichotomous. Instead, decisions would be based on a multi-option approach, and policies would be enacted if they gained sufficient support, either in a purely verbal agreement or via a consensus vote—a multi-option preference vote. If the threshold of support were at a sufficiently high level, minorities would, in effect, be protected automatically. Furthermore, any voting would be ethno-colour blind. Qualified majority voting is designed by the Treaty of Rome to be the principal method of reaching decisions in the European Council of Ministers. This system allocates votes to member states in part according to their population, but heavily weighted in favour of the smaller states. This might be seen as a form of representative democracy, but representatives to the Council might be appointed rather than directly elected.
Inclusive democracy is a political theory and political project that aims for direct democracy in all fields of social life: political democracy in the form of face-to-face assemblies which are confederated, economic democracy in a stateless, moneyless and marketless economy, democracy in the social realm, i.e. self-management in places of work and education, and ecological democracy which aims to reintegrate society and nature. The theoretical project of inclusive democracy emerged from the work of political philosopher Takis Fotopoulos in "Towards An Inclusive Democracy" and was further developed in the journal "Democracy & Nature" and its successor "The International Journal of Inclusive Democracy". The basic unit of decision making in an inclusive democracy is the demotic assembly, i.e. the assembly of the demos, the citizen body in a given geographical area which may encompass a town and the surrounding villages, or even neighbourhoods of large cities. An inclusive democracy today can only take the form of a confederal democracy that is based on a network of administrative councils whose members or delegates are elected from popular face-to-face democratic assemblies in the various demoi. Thus, their role is purely administrative and practical, not one of policy-making like that of representatives in representative democracy. The citizen body is advised by experts but it is the citizen body which functions as the ultimate decision-taker. Authority can be delegated to a segment of the citizen body to carry out specific duties, for example, to serve as members of popular courts, or of regional and confederal councils. Such delegation is made, in principle, by lot, on a rotation basis, and is always recallable by the citizen body. Delegates to regional and confederal bodies should have specific mandates. A Parpolity or Participatory Polity is a theoretical form of democracy that is ruled by a Nested Council structure. The guiding philosophy is that people should have decision making power in proportion to how much they are affected by the decision. Local councils of 25–50 people are completely autonomous on issues that affect only them, and these councils send delegates to higher level councils who are again autonomous regarding issues that affect only the population affected by that council. A council court of randomly chosen citizens serves as a check on the tyranny of the majority, and rules on which body gets to vote on which issue. Delegates may vote differently from how their sending council might wish but are mandated to communicate the wishes of their sending council. Delegates are recallable at any time. Referendums are possible at any time via votes of most lower-level councils; however, not every issue is put to a referendum, as that would most likely be a waste of time. A parpolity is meant to work in tandem with a participatory economy. Cosmopolitan democracy, also known as "Global democracy" or "World Federalism", is a political system in which democracy is implemented on a global scale, either directly or through representatives. An important justification for this kind of system is that the decisions made in national or regional democracies often affect people outside the constituency who, by definition, cannot vote. By contrast, in a cosmopolitan democracy, the people who are affected by decisions also have a say in them. According to its supporters, any attempt to solve global problems is undemocratic without some form of cosmopolitan democracy.
The general principle of cosmopolitan democracy is to expand some or all of the values and norms of democracy, including the rule of law; the non-violent resolution of conflicts; and equality among citizens, beyond the limits of the state. To be fully implemented, this would require reforming existing international organisations, e.g. the United Nations, as well as the creation of new institutions such as a World Parliament, which ideally would enhance public control over, and accountability in, international politics. Cosmopolitan democracy has been promoted, among others, by physicist Albert Einstein, writer Kurt Vonnegut, columnist George Monbiot, and professors David Held and Daniele Archibugi. The creation of the International Criminal Court in 2003 was seen as a major step forward by many supporters of this type of cosmopolitan democracy. Creative Democracy is advocated by American philosopher John Dewey. The main idea of Creative Democracy is that democracy encourages individual capacity-building and interaction within society. Dewey argues in his work "Creative Democracy: The Task Before Us" that democracy is a way of life and an experience built on faith in human nature, faith in human beings, and faith in working with others. Democracy, in Dewey's view, is a moral ideal requiring actual effort and work by people; it is not an institutional concept that exists outside of ourselves. "The task of democracy", Dewey concludes, "is forever that of creation of a freer and more humane experience in which all share and to which all contribute". Guided democracy is a form of democracy which incorporates regular popular elections, but which often carefully "guides" the choices offered to the electorate in a manner which may reduce the ability of the electorate to truly determine the type of government exercised over them. Such democracies typically have only one central authority which is often not subject to meaningful public review by any other governmental authority. Russian-style democracy has often been referred to as a "guided democracy." Russian politicians have referred to their government as having only one center of power/authority, as opposed to most other forms of democracy which usually attempt to incorporate two or more naturally competing sources of authority within the same government. Aside from the public sphere, similar democratic principles and mechanisms of voting and representation have been used to govern other kinds of groups. Many non-governmental organisations decide policy and leadership by voting. Most trade unions and cooperatives are governed by democratic elections. Corporations are controlled by shareholders on the principle of one share, one vote—sometimes supplemented by workplace democracy. Amitai Etzioni has postulated a system that fuses elements of democracy with sharia law, termed "islamocracy". Several justifications for democracy have been postulated. Social contract theory argues that the legitimacy of government is based on public acceptance of it, i.e. an election. Condorcet's jury theorem is a logical proof that if each decision-maker has more than a 50% probability of making the right decision, then having the largest number of decision-makers, i.e. a democracy, will result in the best decisions. This has also been argued by theories of the wisdom of the crowd. Democratic peace theory claims, with considerable empirical support, that liberal democracies do not go to war against each other.
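To make the jury theorem's arithmetic concrete, the following minimal Python sketch (our own illustration, not drawn from any source cited here; the function name majority_correct is hypothetical) computes the probability that a simple majority of n independent voters is correct when each voter is right with probability p:

from math import comb

def majority_correct(n: int, p: float) -> float:
    # Probability that a strict majority of n independent voters is correct,
    # where each voter is right with probability p (odd n avoids ties).
    k = n // 2 + 1  # smallest number of correct votes forming a majority
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# With p > 0.5, adding voters drives the majority toward certainty:
for n in (1, 11, 101, 1001):
    print(n, round(majority_correct(n, 0.6), 4))
# prints roughly: 1 0.6, 11 0.7535, 101 0.979, 1001 1.0

As the loop suggests, the theorem cuts both ways: if p were below 0.5, the same formula would show the majority becoming almost surely wrong as n grows.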
In "Why Nations Fail", Daron Acemoglu and James A. Robinson argue that democracies are more economically successful because undemocratic political systems tend to limit markets and favor monopolies at the expense of the creative destruction which is necessary for sustained economic growth. Arrow's impossibility theorem suggests that democracy is logically incoherent. This is based on a certain set of criteria for democratic decision-making being inherently conflicting. Some economists have criticized the efficiency of democracy, citing the premise of the irrational voter, or a voter who makes decisions without all of the facts or necessary information in order to make a truly informed decision. Another argument is that democracy slows down processes because of the amount of input and participation needed in order to go forward with a decision. A common example often quoted to substantiate this point is the high economic development achieved by China (a non-democratic country) as compared to India (a democratic country). According to economists, the lack of democratic participation in countries like China allows for unfettered economic growth. On the other hand, Socrates believed that democracy without educated masses (educated in the more broader sense of being knowledgeable and responsible) would only lead to populism being the criteria to become an elected leader and not competence. This would ultimately lead to a demise of the nation. This was quoted by Plato in book 10 of The Republic, in Socrates' conversation with Adimantus. Socrates was of the opinion that the right to vote must not be an indiscriminate right (for example by birth or citizenship), but must be given only to people who thought sufficiently of their choice. The 20th-century Italian thinkers Vilfredo Pareto and Gaetano Mosca (independently) argued that democracy was illusory, and served only to mask the reality of elite rule. Indeed, they argued that elite oligarchy is the unbendable law of human nature, due largely to the apathy and division of the masses (as opposed to the drive, initiative and unity of the elites), and that democratic institutions would do no more than shift the exercise of power from oppression to manipulation. As Louis Brandeis once professed, "We may have democracy, or we may have wealth concentrated in the hands of a few, but we can't have both.". British writer Ivo Mosley, grandson of blackshirt Oswald Mosley describes in "In the Name of the People: Pseudo-Democracy and the Spoiling of Our World", how and why current forms of electoral governance are destined to fall short of their promise. A study led by Princeton professor Martin Gilens of 1,779 U.S. government decisions concluded that "elites and organized groups representing business interests have substantial independent impacts on U.S. government policy, while average citizens and mass-based interest groups have little or no independent influence." Plato's "The Republic" presents a critical view of democracy through the narration of Socrates: "Democracy, which is a charming form of government, full of variety and disorder, and dispensing a sort of equality to equals and unequaled alike." In his work, Plato lists 5 forms of government from best to worst. Assuming that "the Republic" was intended to be a serious critique of the political thought in Athens, Plato argues that only Kallipolis, an aristocracy led by the unwilling philosopher-kings (the wisest men), is a just form of government. 
James Madison critiqued direct democracy (which he referred to simply as "democracy") in Federalist No. 10, arguing that representative democracy—which he described using the term "republic"—is a preferable form of government, saying: "... democracies have ever been spectacles of turbulence and contention; have ever been found incompatible with personal security or the rights of property; and have in general been as short in their lives as they have been violent in their deaths." Madison offered that republics were superior to democracies because republics safeguarded against tyranny of the majority, stating in Federalist No. 10: "the same advantage which a republic has over a democracy, in controlling the effects of faction, is enjoyed by a large over a small republic". More recently, democracy has been criticised for not offering enough political stability. As governments are frequently elected on and off, there tend to be frequent changes in the policies of democratic countries both domestically and internationally. Even if a political party maintains power, vociferous, headline-grabbing protests and harsh criticism from the popular media are often enough to force sudden, unexpected political change. Frequent policy changes with regard to business and immigration are likely to deter investment and so hinder economic growth. For this reason, many people have put forward the idea that democracy is undesirable for a developing country in which economic growth and the reduction of poverty are top priorities. Where governments rest on opportunist coalition alliances, such alliances not only have the handicap of having to cater to too many ideologically opposing factions, but they are usually short-lived, since any perceived or actual imbalance in the treatment of coalition partners, or changes to leadership in the coalition partners themselves, can very easily result in a coalition partner withdrawing its support from the government. Biased media has been accused of causing political instability, resulting in the obstruction of democracy, rather than its promotion. In representative democracies, it may not benefit incumbents to conduct fair elections. A study showed that incumbents who rig elections stay in office 2.5 times as long as those who permit fair elections. Democracies in countries with high per capita income have been found to be less prone to violence, but in countries with low incomes the tendency is the reverse. Election misconduct is more likely in countries that have low per capita incomes and small populations, are rich in natural resources, and lack institutional checks and balances. Sub-Saharan African countries, as well as Afghanistan, tend to fall into that category. Governments that have frequent elections tend to have significantly more stable economic policies than those that have infrequent elections. However, this trend does not apply to governments where fraudulent elections are common. Democracy in modern times has almost always faced opposition from the previously existing government, and many times it has faced opposition from social elites. The implementation of a democratic government within a non-democratic state is typically brought about by democratic revolution. Several philosophers and researchers have outlined historical and social factors seen as supporting the evolution of democracy. Other commentators have mentioned the influence of economic development.
In a related theory, Ronald Inglehart suggests that improved living standards in modern developed countries can convince people that they can take their basic survival for granted, leading to increased emphasis on self-expression values, which correlates closely with democracy. Douglas M. Gibler and Andrew Owsiak in their study argued for the importance of peace and stable borders in the development of democracy. It has often been assumed that democracy causes peace, but this study shows that, historically, peace has almost always predated the establishment of democracy. Carroll Quigley concludes that the characteristics of weapons are the main predictor of democracy: in this scenario, democracy tends to emerge only when the best weapons available are easy for individuals to obtain and use. By the 1800s, guns were the best personal weapons available, and in the United States of America (already nominally democratic), almost everyone could afford to buy a gun and could learn how to use it fairly easily. Governments could do no better: it became the age of mass armies of citizen soldiers with guns. Similarly, Periclean Greece was an age of the citizen soldier and democracy. Other theories stress the relevance of education and of human capital, and within them of cognitive ability, in increasing tolerance, rationality, political literacy and participation; education and cognitive ability are held to have two distinct effects. Evidence consistent with conventional theories of why democracy emerges and is sustained has been hard to come by. Statistical analyses have challenged modernisation theory by demonstrating that there is no reliable evidence for the claim that democracy is more likely to emerge when countries become wealthier, more educated, or less unequal. Neither is there convincing evidence that increased reliance on oil revenues prevents democratisation, despite a vast theoretical literature on "the Resource Curse" that asserts that oil revenues sever the link between citizen taxation and government accountability, seen as the key to representative democracy. The lack of evidence for these conventional theories of democratisation has led researchers to search for the "deep" determinants of contemporary political institutions, be they geographical or demographic. More inclusive institutions lead to democracy because as people gain more power, they are able to demand more from the elites, who in turn have to concede more things to keep their position. This virtuous circle may end in democracy. An example of this is the disease environment. Places with different mortality rates had different populations and productivity levels around the world. For example, in Africa, the tsetse fly, which afflicts humans and livestock, reduced the ability of Africans to plow the land. This made Africa less settled. As a consequence, political power was less concentrated. This also affected the colonial institutions European countries established in Africa. Whether or not colonial settlers could live in a place led them to develop different institutions, which in turn led to different economic and social paths. This also affected the distribution of power and the collective actions people could take. As a result, some African countries ended up having democracies and others autocracies. An example of geographical determinants for democracy is having access to coastal areas and rivers. This natural endowment has a positive relation with economic development thanks to the benefits of trade.
Trade brought economic development, which, in turn, broadened power. Rulers wanting to increase revenues had to protect property rights to create incentives for people to invest. As more people had more power, more concessions had to be made by the ruler, and in many places this process led to democracy. These determinants defined the structure of society, moving the balance of political power. In the 21st century, democracy has become such a popular method of reaching decisions that its application beyond politics to other areas such as entertainment, food and fashion, consumerism, urban planning, education, art, literature, science and theology has been criticised as "the reigning dogma of our time". The argument suggests that applying a populist or market-driven approach to art and literature (for example) means that innovative creative work goes unpublished or unproduced. In education, the argument is that essential but more difficult studies are not undertaken. Science, as a truth-based discipline, is particularly corrupted by the idea that the correct conclusion can be arrived at by popular vote. However, more recently, theorists have also advanced the concept of epistemic democracy to assert that democracy actually does a good job tracking the truth. Robert Michels asserts that although democracy can never be fully realised, democracy may be developed automatically in the act of striving for democracy: The peasant in the fable, when on his death-bed, tells his sons that a treasure is buried in the field. After the old man's death the sons dig everywhere in order to discover the treasure. They do not find it. But their indefatigable labor improves the soil and secures for them a comparative well-being. The treasure in the fable may well symbolise democracy. Dr. Harald Wydra, in his book "Communism and The Emergence of Democracy" (2007), maintains that the development of democracy should not be viewed as a purely procedural or as a static concept but rather as an ongoing "process of meaning formation". Drawing on Claude Lefort's idea of the empty place of power, that "power emanates from the people [...] but is the power of nobody", he remarks that democracy is reverence for a symbolic, mythical authority—as in reality, there is no such thing as the people or "demos". Democratic political figures are not supreme rulers but rather temporary guardians of an empty place. Any claim to substance such as the collective good, the public interest or the will of the nation is subject to the competitive struggle for gaining the authority of office and government. The essence of the democratic system is an empty place, void of real people, which can only be temporarily filled and never be appropriated. The seat of power is there but remains open to constant change. As such, people's definitions of "democracy" or of "democratic" progress throughout history as a continual and potentially never-ending process of social construction.
https://en.wikipedia.org/wiki?curid=7959
Logical disjunction In logic and mathematics, or is the truth-functional operator of (inclusive) disjunction, also known as alternation; the "or" of a set of operands is true if and only if "one or more" of its operands is true. The logical connective that represents this operator is typically written as ∨ or +. "A ∨ B" is true if "A" is true, or if "B" is true, or if both "A" and "B" are true. In logic, "or" by itself means the "inclusive" "or", distinguished from an exclusive or, which is false when both of its arguments are true, whereas an inclusive "or" is true in that case. An operand of a disjunction is called a disjunct. Related concepts appear in other fields, such as the coordinating conjunction "or" in natural language, union in set theory, and the OR gate in electronics. Or is usually expressed with an infix operator: in mathematics and logic, ∨; in electronics, +; and in most programming languages, |, ||, or or. In Jan Łukasiewicz's prefix notation for logic, the operator is A, for Polish "alternatywa" (English: alternative), so that "Apq" denotes "p or q". Logical disjunction is an operation on two logical values, typically the values of two propositions, that has a value of "false" if and only if both of its operands are false. More generally, a disjunction is a logical formula that can have one or more literals separated only by 'or's. A single literal is often considered to be a degenerate disjunction. The disjunctive identity is false, which is to say that the "or" of an expression with false has the same value as the original expression. In keeping with the concept of vacuous truth, when disjunction is defined as an operator or function of arbitrary arity, the empty disjunction (OR-ing over an empty set of operands) is generally defined as false. The truth table of "A ∨ B" is as follows: when A is true and B is true, A ∨ B is true; when A is true and B is false, A ∨ B is true; when A is false and B is true, A ∨ B is true; and when A is false and B is false, A ∨ B is false. Properties of disjunction include associativity, commutativity, distributivity (with conjunction), and idempotence. The mathematical symbol for logical disjunction varies in the literature. In addition to the word "or" and the prefix formula "Apq", the symbol "∨", deriving from the Latin word "vel" ("either", "or"), is commonly used for disjunction. For example: "A ∨ B" is read as "A or B". Such a disjunction is false if both "A" and "B" are false. In all other cases it is true. Formulas such as "A ∨ B" and "¬A ∨ B ∨ ¬C" are disjunctions. The corresponding operation in set theory is the set-theoretic union. Operators corresponding to logical disjunction exist in most programming languages. Disjunction is often used for bitwise operations; for example, 0 OR 0 = 0, 0 OR 1 = 1, 1 OR 1 = 1, and 1010 OR 0110 = 1110. The OR operator can be used to set bits in a bit field to 1, by OR-ing the field with a constant field with the relevant bits set to 1. For example, OR-ing a field with 00000001 will force the final bit to 1 while leaving other bits unchanged. Many languages distinguish between bitwise and logical disjunction by providing two distinct operators; in languages following C, bitwise disjunction is performed with the single pipe operator (|) and logical disjunction with the double pipe (||) operator. Logical disjunction is usually short-circuited; that is, if the first (left) operand evaluates to true, then the second (right) operand is not evaluated. The logical disjunction operator thus usually constitutes a sequence point. In a parallel (concurrent) language, it is possible to short-circuit both sides: they are evaluated in parallel, and if one terminates with value true, the other is interrupted. This operator is thus called the parallel or.
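As a concrete illustration of the bitwise and short-circuit behaviour described above, here is a small Python sketch (our own example, not part of the article's sources):

# Bitwise disjunction: OR-ing a field with a mask sets the masked bits to 1
# while leaving the other bits unchanged.
flags = 0b1010
flags = flags | 0b0001      # now 0b1011: the final bit has been forced to 1

# Logical disjunction short-circuits: if the left operand is true,
# the right operand is never evaluated.
def right_side():
    print("right operand evaluated")
    return False

result = True or right_side()   # prints nothing; right_side() is skipped
print(bin(flags), result)       # 0b1011 True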
Although in most languages the type of a logical disjunction expression is boolean and thus can only have the value true or false, in some (such as Python and JavaScript) the logical disjunction operator returns one of its operands: the first operand if it evaluates to a true value, and the second operand otherwise. The Curry–Howard correspondence relates a constructivist form of disjunction to tagged union types. The membership of an element of a union set in set theory is defined in terms of a logical disjunction: "x" ∈ "A" ∪ "B" if and only if ("x" ∈ "A") ∨ ("x" ∈ "B"). Because of this, logical disjunction satisfies many of the same identities as set-theoretic union, such as associativity, commutativity, distributivity, and de Morgan's laws, identifying logical conjunction with set intersection and logical negation with set complement. As with other notions formalized in mathematical logic, the meaning of the natural-language coordinating conjunction "or" is closely related to but different from the logical "or". For example, "Please ring me or send an email" likely means "do one or the other, but not both". On the other hand, "Her grades are so good that either she's very bright or she studies hard" does not exclude the possibility of both. In other words, in ordinary language "or" (even if used with "either") can mean either the inclusive "or" or the exclusive "or".
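A short Python sketch (again our own illustration, not from the article) of the operand-returning behaviour and of the correspondence with set union:

# In Python, "x or y" evaluates to x if x is truthy, otherwise to y,
# which makes "or" handy for fallback defaults.
name = "" or "anonymous"   # "" is falsy, so name == "anonymous"
port = 8080 or 80          # 8080 is truthy, so port == 8080

# Membership in a union is a disjunction of memberships.
A, B = {1, 2}, {2, 3}
for x in range(5):
    assert (x in A | B) == ((x in A) or (x in B))   # A | B is the union {1, 2, 3}
print(name, port)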
https://en.wikipedia.org/wiki?curid=7962
Disjunctive syllogism In classical logic, disjunctive syllogism (historically known as modus tollendo ponens (MTP), Latin for "mode that affirms by denying") is a valid argument form which is a syllogism having a disjunctive statement for one of its premises. An example in English: "The breach is a safety violation, or it is not subject to fines. The breach is not a safety violation. Therefore, it is not subject to fines." In propositional logic, disjunctive syllogism (also known as disjunction elimination and or elimination, or abbreviated ∨E) is a valid rule of inference. If we are told that at least one of two statements is true, and also told that it is not the former that is true, we can infer that it has to be the latter that is true. If "P" is true or "Q" is true and "P" is false, then "Q" is true. The reason this is called "disjunctive syllogism" is that, first, it is a syllogism, a three-step argument, and second, it contains a logical disjunction, which simply means an "or" statement. "P or Q" is a disjunction; P and Q are called the statement's "disjuncts". The rule makes it possible to eliminate a disjunction from a logical proof. It is the rule that from "P ∨ Q" and "¬P" one may infer "Q": whenever instances of "P ∨ Q" and "¬P" appear on lines of a proof, "Q" can be placed on a subsequent line. Disjunctive syllogism is closely related and similar to hypothetical syllogism, in that it is also a type of syllogism, and also the name of a rule of inference. It is also related to the law of noncontradiction, one of the three traditional laws of thought. The "disjunctive syllogism" rule may be written in sequent notation as P ∨ Q, ¬P ⊢ Q, where ⊢ is a metalogical symbol meaning that Q is a syntactic consequence of P ∨ Q and ¬P in some logical system; and expressed as a truth-functional tautology or theorem of propositional logic: ((P ∨ Q) ∧ ¬P) → Q, where P and Q are propositions expressed in some formal system. Here is an example: "It is red or it is blue. It is not blue. Therefore, it is red." Here is another example: "The cake is either in the fridge or on the counter. It is not in the fridge. Therefore, it is on the counter." Observe that the disjunctive syllogism works whether 'or' is considered 'exclusive' or 'inclusive' disjunction. There are two kinds of logical disjunction: "inclusive", which is true when at least one of the disjuncts is true, and "exclusive", which is true when exactly one of the disjuncts is true. The widely used English language concept of "or" is often ambiguous between these two meanings, but the difference is pivotal in evaluating disjunctive arguments. The argument form "P or Q; not P; therefore Q" is valid and indifferent between both meanings. However, only in the "exclusive" meaning is the form "P or Q; P; therefore not Q" valid. With the "inclusive" meaning you could draw no conclusion from the first two premises of that argument. See affirming a disjunct. Unlike "modus ponens" and "modus ponendo tollens", with which it should not be confused, disjunctive syllogism is often not made an explicit rule or axiom of logical systems, as the above arguments can be proven with a (slightly devious) combination of reductio ad absurdum and disjunction elimination. Other forms of syllogism include the hypothetical syllogism and the categorical syllogism. Disjunctive syllogism holds in classical propositional logic and intuitionistic logic, but not in some paraconsistent logics.
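Because the rule is truth-functional, it can be checked mechanically. The following Python sketch (our own illustration, not from the article) verifies the tautology ((P ∨ Q) ∧ ¬P) → Q over all truth assignments, and shows that "affirming a disjunct" fails for the inclusive reading:

from itertools import product

def implies(a, b):
    # material implication: "a implies b" is false only when a is true and b is false
    return (not a) or b

# Disjunctive syllogism: true in all four rows, hence a tautology.
assert all(implies((p or q) and not p, q)
           for p, q in product([False, True], repeat=2))

# Affirming a disjunct -- inferring "not Q" from "P or Q" and "P" -- is invalid
# for inclusive "or": the row P = Q = True is a counterexample.
assert not all(implies((p or q) and p, not q)
               for p, q in product([False, True], repeat=2))
print("disjunctive syllogism holds; affirming a disjunct does not")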
https://en.wikipedia.org/wiki?curid=7963
Definition A definition is a statement of the meaning of a term (a word, phrase, or other set of symbols). Definitions can be classified into two large categories, intensional definitions (which try to give the sense of a term) and extensional definitions (which try to list the objects that a term describes). Another important category of definitions is the class of ostensive definitions, which convey the meaning of a term by pointing out examples. A term may have many different senses and multiple meanings, and thus require multiple definitions. In mathematics, a definition is used to give a precise meaning to a new term, by describing a condition which unambiguously qualifies what a mathematical term is and is not. Definitions and axioms form the basis on which all of modern mathematics is to be constructed.
https://en.wikipedia.org/wiki?curid=7964
Disco Disco is a genre of dance music and a subculture that emerged in the 1970s from the United States’ urban nightlife scene. Its sound is typified by four-on-the-floor beats, syncopated basslines, string sections, horns, electric piano, synthesizers, and electric rhythm guitars. Well-known disco artists include Donna Summer, Gloria Gaynor, the Bee Gees, Chic, KC and the Sunshine Band, Thelma Houston, Sister Sledge, The Trammps, the Village People and Michael Jackson. While performers and singers garnered public attention, record producers working behind the scenes played an important role in developing the genre. Films such as "Saturday Night Fever" (1977) and "Thank God It's Friday" (1978) contributed to disco's mainstream popularity. Disco started as a mixture of music from venues popular with African Americans, Hispanic and Latino Americans, Italian Americans, and LGBT people in Philadelphia and New York City during the late 1960s and early 1970s. Disco can be seen as a reaction by the 1960s counterculture to both the dominance of rock music and the stigmatization of dance music at the time. Several dance styles were developed during the period of disco's popularity in the United States, including “the Bump” and “the Hustle”. By the late 1970s, most major U.S. cities had thriving disco club scenes, and DJs would mix dance records at clubs such as Studio 54 in New York City, a venue popular among celebrities. Discothèque-goers often wore expensive, extravagant and sexy fashions. There was also a thriving drug subculture in the disco scene, particularly for drugs that would enhance the experience of dancing to the loud music and the flashing lights, such as cocaine and Quaaludes, the latter being so common in disco subculture that they were nicknamed "disco biscuits". Disco clubs were also associated with promiscuity as a reflection of the sexual revolution of this era in popular history. Disco was the last popular music movement driven by baby boomers, peaking in popularity during the mid-late 1970s. It declined as a major trend in popular music during the late 1970s to early 1980s, but remained a key influence in the development of electronic dance music, house music, hip-hop, new wave, and post-disco. While no new disco movement has dominated popular music since its decline, the style has had several revivals since the 1990s, and the influence of disco remains strong across American and European pop music. The term "disco" is shorthand for the word "discothèque", a French word for "library of phonograph records" derived from "bibliothèque". The word "discothèque" had the same meaning in English in the 1950s. "Discothèque" became used in French for a type of nightclub in Paris, France after these had resorted to playing records during the Nazi occupation in the early 1940s. Some clubs used it as their proper name. In 1960 it was also used to describe a Parisian nightclub in an English magazine. In the summer of 1964, a short sleeveless dress called "discotheque dress" was briefly very popular in the United States. The earliest known use for the abbreviated form "disco" described this dress and has been found in "The Salt Lake Tribune" of 12 July 1964, but "Playboy" magazine used it in September of the same year to describe Los Angeles nightclubs. Vince Aletti was one of the first to describe disco as a sound or a music genre. He wrote the feature article "Discotheque Rock Paaaaarty" that appeared in "Rolling Stone" magazine in September 1973. 
The music typically layered soaring, often-reverberated vocals, often doubled by horns, over a background "pad" of electric pianos and "chicken-scratch" rhythm guitars. Lead guitar features less frequently in disco than in rock. "The 'chicken scratch' sound is achieved by lightly pressing the guitar strings against the fretboard and then quickly releasing them just enough to get a slightly muted poker [sic] while constantly strumming very close to the bridge." Other backing keyboard instruments include the piano, electric organ (during early years), string synthesizers, and electromechanical keyboards such as the Fender Rhodes electric piano, Wurlitzer electric piano, and Hohner Clavinet. Synthesizers are also fairly common in disco, especially in the late 1970s. The rhythm is laid down by prominent, syncopated basslines (with heavy use of broken octaves, that is, octaves with the notes sounded one after the other) played on the bass guitar, and by drummers using a drum kit, African/Latin percussion, and electronic drums such as Simmons and Roland drum modules. The sound was enriched with solo lines and harmony parts played by a variety of orchestral instruments, such as harp, violin, viola, cello, trumpet, saxophone, trombone, clarinet, flugelhorn, French horn, tuba, English horn, oboe, flute (especially the alto flute, and occasionally the bass flute), piccolo and timpani, as well as synth strings, a string section, or a full string orchestra.
Most disco songs have a steady four-on-the-floor beat, a quaver or semiquaver hi-hat pattern with an open hi-hat on the off-beat, and a heavy, syncopated bass line. Latin rhythms such as the rhumba, the samba, and the cha-cha-cha are also found in disco recordings, and Latin polyrhythms, such as a rhumba beat layered over a merengue, are commonplace. The quaver pattern is often supported by other instruments such as the rhythm guitar and may be implied rather than explicitly present. Songs often use syncopation, which is the accenting of unexpected beats. In general, the difference between disco, or any dance song, and a rock or popular song is that in dance music the bass drum hits "four to the floor", at least once per beat (which in 4/4 time is 4 beats per measure). Disco is further characterized by a sixteenth-note division of the quarter notes, in contrast to the eighth-note feel of a typical rock drum pattern (see the illustrative sketch below).
The orchestral sound, usually known as the "disco sound", relies heavily on string sections and horns playing linear phrases, in unison with the soaring, often reverberated vocals, or playing instrumental fills, while electric pianos and chicken-scratch guitars create the background "pad" sound defining the harmony progression. Typically, all of the doubling of parts and use of additional instruments creates a rich "wall of sound". There are, however, more minimalist flavors of disco with reduced, transparent instrumentation, pioneered by Chic. Harmonically, disco music typically contains major and minor seventh chords, which are found more often in jazz than in pop music. The "disco sound" was much more costly to produce than many of the other popular music genres of the 1970s.
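The drum-notation figures that originally accompanied this passage are not reproduced here, so the following is a minimal illustrative sketch (in Python; the two grids are simplified, common-practice patterns of my own choosing, not notation taken from the source) contrasting a typical rock backbeat with a disco four-on-the-floor pattern at sixteenth-note resolution:

    # One 4/4 bar as a 16-step grid (sixteenth-note resolution).
    # "x" marks a hit, "o" an open hi-hat, "." a rest.
    rock = {
        "hi-hat": "x.x.x.x.x.x.x.x.",  # straight eighth notes, all closed
        "snare":  "....x.......x...",  # backbeat on beats 2 and 4
        "kick":   "x.......x.x.....",  # syncopated, not on every beat
    }
    disco = {
        "hi-hat": "x.o.x.o.x.o.x.o.",  # eighths, open hi-hat on each off-beat
        "snare":  "....x.......x...",  # backbeat retained on 2 and 4
        "kick":   "x...x...x...x...",  # four-on-the-floor: every quarter note
    }

    def show(title, pattern):
        print(title)
        print("  beat:   1...2...3...4...")
        for instrument, steps in pattern.items():
            print(f"  {instrument:7s} {steps}")

    show("Rock pattern", rock)
    show("Disco pattern", disco)

Read each string left to right as one bar of sixteen sixteenth-note steps: the disco grid places the kick on every quarter note and the open hi-hat ("o") on each off-beat, the two features described above.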
Unlike the simpler, four-piece-band sound of funk, soul music of the late 1960s, or the small jazz organ trios, disco music often included a large band, with several chordal instruments (guitar, keyboards, synthesizer), several drum or percussion instruments (drum kit, Latin percussion, electronic drums), a horn section, a string orchestra, and a variety of "classical" solo instruments (for example, flute, piccolo, and so on). Disco songs were arranged and composed by experienced arrangers and orchestrators, and record producers added their creative touches to the overall sound using multitrack recording techniques and effects units. Recording complex arrangements with such a large number of instruments and sections required a team that included a conductor, copyists, record producers, and mixing engineers. Mixing engineers had an important role in the disco production process, because disco songs used as many as 64 tracks of vocals and instruments. Mixing engineers and record producers, under the direction of arrangers, compiled these tracks into a fluid composition of verses, bridges, and refrains, complete with builds and breaks. Mixing engineers and record producers helped to develop the "disco sound" by creating a distinctive-sounding, sophisticated disco mix.
Early records were the "standard" three-minute version until Tom Moulton came up with a way to make songs longer so that he could take a crowd of dancers at a club to another level and keep them dancing longer. He found that it was impossible to make the 45 rpm vinyl singles of the time longer, as they could usually hold no more than five minutes of good-quality music. With the help of José Rodriguez, his mastering engineer, he pressed a single on a 10" disc instead of a 7". They cut the next single on a 12" disc, the same format as a standard album. Moulton and Rodriguez discovered that these larger records could hold much longer songs and remixes. The 12" single, also known as the "maxi single", quickly became the standard format for all DJs of the disco genre (see the playing-time sketch at the end of this passage).
By the late 1970s most major US cities had thriving disco club scenes. The largest scenes were in San Francisco, Miami, Washington, D.C., and most notably New York City. The scene was centered on discotheques, nightclubs, and private loft parties. In the 1970s, notable discos included "Crisco Disco", "The Sanctuary", "Leviticus", "Studio 54" and "Paradise Garage" in New York, "Artemis" in Philadelphia, "Studio One" in Los Angeles, "Dugan's Bistro" in Chicago, and "The Library" in Atlanta.
In the late '70s, "Studio 54" in New York City was arguably the best-known nightclub in the world. This club played a major formative role in the growth of disco music and nightclub culture in general. It was operated by Steve Rubell and Ian Schrager and was notorious for the hedonism that went on within; the balconies were known for sexual encounters, and drug use was rampant. Its dance floor was decorated with an image of the "Man in the Moon" that included an animated cocaine spoon. The "Copacabana", another New York nightclub dating to the 1940s, had a revival in the late 1970s when it embraced disco; it would become the setting of a Barry Manilow song of the same name. In Washington, D.C., large disco clubs such as "The Pier" ("Pier 9") and "The Other Side", originally regarded exclusively as "gay bars", became particularly popular among the capital area's gay and straight college students in the late '70s.
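As promised above, here is the playing-time arithmetic behind Moulton and Rodriguez's move from the 7" single to the 12" disc: playing time per side is roughly the total number of groove revolutions divided by the turntable speed, and a loud, bass-heavy cut needs a coarse groove pitch. The numbers below are illustrative round figures of my own choosing, not sourced disc specifications:

    # Back-of-the-envelope sketch with assumed, illustrative numbers
    # (not sourced disc specifications).
    def playing_time_minutes(outer_radius_in, inner_radius_in, grooves_per_inch, rpm):
        """Rough playing time per side for a spiral groove of constant pitch."""
        band_width = outer_radius_in - inner_radius_in  # recordable band, inches
        revolutions = band_width * grooves_per_inch     # total turns of the spiral
        return revolutions / rpm

    # Hypothetical pitch of ~150 grooves/inch for a loud, bass-heavy cut:
    print(f'7-inch at 45 rpm:  {playing_time_minutes(3.4, 2.0, 150, 45):.1f} min')    # ~4.7
    print(f'12-inch at 45 rpm: {playing_time_minutes(5.7, 2.4, 150, 45):.1f} min')    # ~11.0
    print(f'12-inch at 33 rpm: {playing_time_minutes(5.7, 2.4, 150, 100/3):.1f} min') # ~14.9

Under these assumptions, a loud 7" cut at 45 rpm runs out of room after roughly five minutes, matching the limit Moulton ran into, while a 12" side holds more than twice as much, and cutting at 33⅓ rpm stretches it further, which is why the larger format could carry extended disco mixes.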
Powerful, bass-heavy, hi-fi sound systems were viewed as a key part of the disco club experience. "Mancuso introduced the technologies of tweeter arrays (clusters of small loudspeakers, which emit high-end frequencies, positioned above the floor) and bass reinforcements (additional sets of subwoofers positioned at ground level) at the start of the 1970s to boost the treble and bass at opportune moments, and by the end of the decade sound engineers such as Richard Long had multiplied the effects of these innovations in venues such as the Garage." Typical lighting designs for disco dance floors could include multi-coloured lights that swirl around or flash to the beat, strobe lights, an illuminated dance floor and a mirror ball.
Disco-era disc jockeys (DJs) would often remix existing songs using reel-to-reel tape machines, adding in percussion breaks, new sections, and new sounds. DJs would select songs and grooves according to what the dancers wanted, transitioning from one song to another with a DJ mixer and using a microphone to introduce songs and speak to the audience. Other equipment was added to the basic DJ setup to provide unique sound manipulations, such as reverb, equalization, and echo effects units. Using this equipment, a DJ could do effects such as cutting out all but the bassline of a song and then slowly mixing in the beginning of another song using the DJ mixer's crossfader. Notable U.S. disco DJs include Francis Grasso of The Sanctuary, David Mancuso of The Loft, Frankie Knuckles of the Chicago Warehouse, Larry Levan of the Paradise Garage, Nicky Siano, Walter Gibbons, Karen Mixon Cook, Jim Burgess, John "Jellybean" Benitez, Richie Kaczor of Studio 54 and Rick Salsalini. Some DJs were also record producers who created and produced disco songs in the recording studio. Larry Levan, for example, was a prolific record producer as well as a DJ. Because record sales were often dependent on dance floor play by DJs in leading nightclubs, DJs were also influential in the development and popularization of certain types of disco music being produced for record labels.
In the early years, dancers in discos danced in a "hang loose" or "freestyle" approach. At first, many dancers improvised their own dance styles and dance steps. Later in the disco era, popular dance styles were developed, including the "Bump", the "Penguin", the "Boogaloo", the "Watergate" and the "Robot". By October 1975 the Hustle reigned. It was highly stylized, sophisticated and overtly sexual. Variations included the Brooklyn Hustle, New York Hustle and Latin Hustle.
During the disco era, many nightclubs would commonly host disco dance competitions or offer free dance lessons. Some cities had disco dance instructors or dance schools, which taught people how to do popular disco dances such as "touch dancing", "the hustle", and "the cha-cha". The pioneer of disco dance instruction was Karen Lustgarten in San Francisco in 1973. Her book "The Complete Guide to Disco Dancing" (Warner Books, 1978) was the first to name, break down and codify popular disco dances as dance forms and to distinguish between disco freestyle, partner and line dances. The book topped the "New York Times" bestseller list for 13 weeks and was translated into Chinese, German and French. In Chicago, the "Step By Step" disco dance TV show was launched with sponsorship from the Coca-Cola Company.
Produced in the same studio that Don Cornelius used for the nationally syndicated dance/music television show "Soul Train", "Step by Step"'s audience grew and the show became a success. The dynamic dance duo of Robin and Reggie led the show. The pair spent the week teaching disco dancing to dancers in the disco clubs. The instructional show, which aired on Saturday mornings, had a following of dancers who would stay up all night on Fridays so they could be on the set the next morning, ready to return to the disco on Saturday night armed with the latest personalized dance steps. The producers of the show, John Reid and Greg Roselli, routinely made appearances at disco functions with Robin and Reggie to scout out new dancing talent and promote upcoming events such as "Disco Night at White Sox Park".
In Sacramento, "Disco King" Paul Dale Roberts danced for the Guinness Book of World Records, lasting 205 hours, the equivalent of 8½ days. Other dance marathons took place soon afterward, so Roberts held the world record for disco dancing only for a short period of time. (Reference: https://www.valcomnews.com/former-pocket-area-resident-was-sacto%e2%80%99s-%e2%80%9cdisco-king%e2%80%9d/)
Some notable professional dance troupes of the 1970s included Pan's People and Hot Gossip. For many dancers, a key source of inspiration for 1970s disco dancing was the film "Saturday Night Fever" (1977). This developed into the music and dance style of such films as "Fame" (1980), "Disco Dancer" (1982), "Flashdance" (1983), and "The Last Days of Disco" (1998). Interest in disco dancing also helped spawn dance competition TV shows such as "Dance Fever" (1979).
Disco fashions were very trendy in the late 1970s. Discothèque-goers often wore glamorous, expensive and extravagant fashions for nights out at their local disco club. Some women would wear sheer, flowing dresses, such as Halston dresses, or loose, flared pants. Other women wore tight, revealing, sexy clothes, such as backless halter tops, disco pants, "hot pants" or body-hugging spandex bodywear or "catsuits". Men would wear shiny polyester Qiana shirts with colorful patterns and pointy, extra-wide collars, preferably open at the chest. Men often wore Pierre Cardin suits or three-piece suits with a vest, as well as double-knit polyester shirt jackets with matching trousers, known as the leisure suit. Men's leisure suits were typically form-fitted in some parts of the body, such as the waist and bottom, but the lower part of the pants was flared in a bell-bottom style to permit freedom of movement.
During the disco era, men engaged in elaborate grooming rituals and spent time choosing fashion clothing, both activities that would have been considered "feminine" according to the gender stereotypes of the era. Women dancers wore glitter makeup, sequins, or gold lamé clothing that would shimmer under the lights. Bold colors were popular for both genders. Platform shoes and boots for both genders and high heels for women were popular footwear. Necklaces and medallions were a common fashion accessory. Less commonly, some disco dancers wore outlandish costumes, dressed in drag, covered their bodies with gold or silver paint, or wore very skimpy outfits leaving them nearly nude; these uncommon get-ups were more likely to be seen at invitation-only New York City loft parties and disco clubs.
In addition to the dance and fashion aspects of the disco club scene, there was also a thriving club drug subculture, particularly for drugs that would enhance the experience of dancing to the loud, bass-heavy music and the flashing colored lights, such as cocaine (nicknamed "blow"), amyl nitrite ("poppers"), and the "... other quintessential 1970s club drug Quaalude, which suspended motor coordination and gave the sensation that one's arms and legs had turned to 'Jell-O.'" Quaaludes were so popular at disco clubs that the drug was nicknamed "disco biscuits". Paul Gootenberg states that "[t]he relationship of cocaine to 1970s disco culture cannot be stressed enough..." During the 1970s, the use of cocaine by well-to-do celebrities led to its "glamorization" and to the widely held view that it was a "soft drug". LSD, marijuana, and "speed" (amphetamines) were also popular in disco clubs, and the use of these drugs "...contributed to the hedonistic quality of the dance floor experience." Since disco dances were typically held in liquor-licensed nightclubs and dance clubs, alcoholic drinks were also consumed by dancers; some users intentionally combined alcohol with other drugs, such as Quaaludes, for a stronger effect.
According to Peter Braunstein, the "massive quantities of drugs ingested in discothèques produced the next cultural phenomenon of the disco era: rampant promiscuity and public sex. While the dance floor was the central arena of seduction, actual sex usually took place in the nether regions of the disco: bathroom stalls, exit stairwells, and so on. In other cases the disco became a kind of 'main course' in a hedonist's menu for a night out." At The Saint nightclub, a high percentage of the gay male dancers and patrons would have sex in the club; they typically had unprotected sex, because in 1980 HIV-AIDS had not yet been identified. At The Saint, "dancers would elope to an un[monitored] upstairs balcony to engage in sex." The promiscuity and public sex at discos were part of a broader trend toward exploring a freer sexual expression in the 1970s, an era that is also associated with "swingers clubs, hot tubs, [and] key parties."
In his paper "In Defense of Disco" (1979), Richard Dyer claims eroticism as one of the three main characteristics of disco. As opposed to rock music, which has a phallic-centered eroticism focused on the sexual pleasure of men, Dyer describes disco as featuring a non-phallic, full-body eroticism. Through a range of percussion instruments, a willingness to play with rhythm, and the endless repeating of phrases without cutting the listener off, disco achieved a full-body eroticism that restored eroticism to the whole body for both sexes. This allowed for the expression of sexualities not defined by the phallus, and for the erotic pleasure of bodies that are not defined by their relationship to a penis. The sexual liberation expressed through the rhythm of disco is further represented in the club spaces that disco grew within.
In "Modulations: A History of Electronic Music: Throbbing Words on Sound", Peter Shapiro discusses eroticism through the technology disco utilizes to create its audacious sound. The music, Shapiro states, is adjunct to "the pleasure-is-politics ethos of post-Stonewall culture." He explains how "mechano-eroticism", which links the technology used to create the unique mechanical sound of disco to eroticism, sets the genre in a new dimension of reality, living outside of naturalism and heterosexuality.
He uses Donna Summer's singles "Love to Love You Baby" (1975) and "I Feel Love" (1977) as examples of the ever-present relationship between the synthesized bass lines and backgrounds and the simulated sounds of orgasms Summer echoes in the tracks, and likens them to the drug-fervent, sexually liberated fans of disco who sought to free themselves through disco's "aesthetic of machine sex." Shapiro sees this as an influence that created sub-genres like hi-NRG and dub-disco, which allowed eroticism and technology to be further explored through intense synth bass lines and alternative rhythmic techniques that tap into the entire body rather than only the obvious erotic parts of the body.
The New York nightclub The Sanctuary, under resident DJ Francis Grasso, is a prime example of this sexual liberty. In their history of the disc jockey and club culture, Bill Brewster and Frank Broughton describe the Sanctuary as "poured full of newly liberated gay men, then shaken (and stirred) by a weighty concoction of dance music and pharmacopoeia of pills and potions, the result is a festival of carnality." The Sanctuary was the "first totally uninhibited gay discotheque in America", and while sex was not allowed on the dance floor, the dark corners, the bathrooms and the hallways of the adjacent buildings were all utilized for orgy-like sexual engagements. By describing the music, drugs and liberated mentality as a trifecta coming together to create the festival of carnality, Brewster and Broughton are citing all three as stimuli for the dancing, sex and other embodied movements that contributed to the corporeal vibrations within the Sanctuary. This supports the argument that disco music played a role in facilitating the sexual liberation experienced in the discotheques. This, coupled with the recent legalization of abortion and the introduction of antibiotics and the pill, facilitated a cultural shift around sex from procreation to pleasure and enjoyment, fostering a sex-positive framework around discotheques.
Given that at this time all instances of oral and anal gay sex were considered deviant and illegal acts in New York state, this sexual freedom can be considered quite liberatory and resistant to dominant oppressive structures. Moreover, until 1973 the American Psychiatric Association classified homosexuality as an illness. The law and the classification together can be understood to have heavily dissuaded the public expression of queerness; as such, the liberatory dynamics of discotheques can be seen as having provided space for self-realization for queer persons. David Mancuso's club/house party, The Loft, was described as having a "pansexual attitude [that] was revolutionary in a country where up until recently it had been illegal for two men to dance together unless there was a woman present; where women were legally obliged to wear at least one recognizable item of female clothing in public; and where men visiting gay bars usually carried bail money with them."
Disco was mostly developed from music that was popular on the dance floor in clubs that started playing records instead of having a live band. The first discotheques mostly played swing music. Later on, uptempo rhythm and blues became popular in American clubs, and northern soul and glam rock records in the UK. In the early 1940s, nightclubs in Paris had resorted to playing (jazz) records during the Nazi occupation.
Régine Zylberberg claimed to have started the first discotheque and to have been the first club DJ in 1953 in the "Whisky à Go-Go" in Paris. She installed a dance floor with coloured lights and two turntables so she could play records without having a gap in the music. In October 1959, the owner of the Scotch Club in Aachen, West Germany chose to install a record player for the opening night instead of hiring a live band. The patrons were unimpressed until a young reporter, who happened to be covering the opening of the club, impulsively took control of the record player and introduced the records that he chose to play. Klaus Quirini later claimed to thus have been the world's first nightclub DJ. Discotheque dancing became a European trend that was enthusiastically picked up by the American press.
The birth of disco is often claimed to be found in the private dance parties held at New York City DJ David Mancuso's home, which became known as The Loft, an invitation-only non-commercial underground club that inspired many others. He organized the first major party in his Manhattan home on Valentine's Day 1970, under the name "Love Saves The Day". After some months the parties became weekly events, and Mancuso continued to give regular parties into the 1990s. Mancuso required that the music played be soulful and rhythmic, and impart words of hope, redemption, or pride.
In the 1970s, the key counterculture of the 1960s, the hippie movement, was fading away. The economic prosperity of the previous decade had declined, and unemployment, inflation and crime rates had soared. Political issues like the backlash from the Civil Rights Movement culminating in the form of race riots, the Vietnam War, the assassinations of Dr. Martin Luther King Jr. and John F. Kennedy, and the Watergate scandal left many feeling disillusioned and hopeless. The start of the '70s was marked by a shift in the consciousness of the American people: the rise of the feminist movement, identity politics, and gangs very much shaped this era. Disco music and disco dancing provided an escape from negative social and economic issues.
In "Beautiful Things in Popular Culture", Simon Frith highlights the sociability of disco and its roots in 1960s counterculture. "The driving force of the New York underground dance scene in which disco was forged was not simply that city's complex ethnic and sexual culture but also a 1960s notion of community, pleasure and generosity that can only be described as hippie", he says. "The best disco music contained within it a remarkably powerful sense of collective euphoria."
When Mancuso threw his first informal house parties, the gay community (which made up much of The Loft's attendee roster) was often harassed in the gay bars and dance clubs, with many gay men carrying bail money with them to gay bars. But at The Loft and many other early, private discotheques, they could dance together without fear of police action thanks to Mancuso's underground, yet legal, policies. Vince Aletti described it as "like going to a party, completely mixed, racially and sexually, where there wasn't any sense of someone being more important than anyone else," and Alex Rosner reiterated this, saying "It was probably about sixty percent black and seventy percent gay...There was a mix of sexual orientation, there was a mix of races, mix of economic groups. A real mix, where the common denominator was music."
Film critic Roger Ebert called the popular embrace of disco's exuberant dance moves an escape from "the general depression and drabness of the political and musical atmosphere of the late seventies." Pauline Kael, writing about the disco-themed film "Saturday Night Fever", said the film and disco itself touched on "something deeply romantic, the need to move, to dance, and the need to be who you'd like to be. Nirvana is the dance; when the music stops, you return to being ordinary."
During the 1960s, when the discotheque culture from Europe became popular in the United States, several music genres with danceable rhythms rose to popularity and evolved into different sub-genres: rhythm and blues (which originated in the 1940s), soul (late 1950s and 1960s), funk (mid-1960s) and go-go (mid-1960s and 1970s; more than "disco", the word "go-go" originally indicated a music club). Those genres, mainly African-American ones, would influence much of early disco music.
During the 1960s, the Motown record label developed its own popular and influential sound, described as having "1) simply structured songs with sophisticated melodies and chord changes, 2) a relentless four-beat drum pattern, 3) a gospel use of background voices, vaguely derived from the style of the Impressions, 4) a regular and sophisticated use of both horns and strings, 5) lead singers who were half way between pop and gospel music, 6) a group of accompanying musicians who were among the most dextrous, knowledgeable, and brilliant in all of popular music (Motown bassists have long been the envy of white rock bassists) and 7) a trebly style of mixing that relied heavily on electronic limiting and equalizing (boosting the high range frequencies) to give the overall product a distinctive sound, particularly effective for broadcast over AM radio." Motown had many hits with early disco elements by acts like the Supremes (for instance "You Keep Me Hangin' On" in 1966), Stevie Wonder (for instance "Superstition" in 1972), The Jackson 5 and Eddie Kendricks ("Keep on Truckin'" in 1973). In the mid-1960s and early 1970s, Philadelphia soul and New York soul developed as sub-genres that also had lavish percussion, lush string orchestra arrangements and expensive record production processes.
At the end of the 1960s, musicians and audiences from the Black, Italian and Latino communities adopted several traits from the hippie and psychedelia subcultures. They included using music venues with a loud, overwhelming sound, free-form dancing, trippy lighting, colorful costumes, and the use of hallucinogenic drugs. In addition, the perceived positivity, lack of irony, and earnestness of the hippies informed proto-disco music like MFSB's album "Love Is the Message".
Partly through the success of Jimi Hendrix, psychedelic elements that were popular in rock music of the late 1960s found their way into soul and early funk music and formed the subgenre psychedelic soul. Examples can be found in the music of the Chambers Brothers, George Clinton with his Parliament-Funkadelic collective, Sly and the Family Stone, and the productions of Norman Whitfield with The Temptations. The long instrumental introductions and detailed orchestration found in psychedelic soul tracks by the Temptations are also considered cinematic soul. In the early 1970s, Curtis Mayfield and Isaac Hayes scored hits with cinematic soul songs that were actually composed for movie soundtracks: "Superfly" (1972) and "Theme from Shaft" (1971). The latter is sometimes regarded as an early disco song.
Psychedelic soul influenced proto-disco acts such as Willie Hutch, as well as Philadelphia soul. In the early 1970s, the Philadelphia soul productions of Gamble and Huff evolved from the simpler arrangements of the late 1960s into a style featuring lush strings, thumping basslines, and sliding hi-hat rhythms. These elements would become typical of disco music and are found in several of the hits they produced in the early 1970s. Other early disco tracks that helped shape the genre became popular on the dance floors of underground discotheque clubs and parties.
Early disco was dominated by record producers and labels such as Salsoul Records (Ken, Stanley, and Joseph Cayre), West End Records (Mel Cheren), Casablanca (Neil Bogart), and Prelude (Marvin Schlachter), to name a few. The genre was also shaped by Tom Moulton, who wanted to extend the enjoyment of dance songs — thus creating the extended mix or "remix", going from a three-minute 45 rpm single to the much longer 12" record. Other influential DJs and remixers who helped to establish what became known as the "disco sound" included David Mancuso, Nicky Siano, Shep Pettibone, Larry Levan, Walter Gibbons, and Chicago-based Frankie Knuckles. Frankie Knuckles was not only an important disco DJ; he also helped to develop house music in the 1980s.
Disco hit the television airwaves as part of the music/dance variety show "Soul Train" in 1971, hosted by Don Cornelius, then Marty Angelo's "Disco Step-by-Step Television Show" in 1975, Steve Marcus' "Disco Magic/Disco 77", Eddie Rivera's "Soap Factory", and Merv Griffin's "Dance Fever", hosted by Deney Terrio, who is credited with teaching actor John Travolta to dance for his role in the film "Saturday Night Fever", as well as DANCE, based out of Columbia, South Carolina. In 1974, New York City's WPIX-FM premiered the first disco radio show.
As a producer and songwriter, Norman Whitfield had helped to develop the Motown sound in the 1960s with many hits for Marvin Gaye, the Velvelettes, the Temptations and Gladys Knight & the Pips. From around the production of the Temptations' album "Cloud Nine" in 1968, he incorporated some psychedelic influences and started to produce longer tracks, with more room for elaborate rhythmic instrumental parts. A clear example of such a long psychedelic soul track is "Papa Was a Rollin' Stone", which appeared as a single edit of almost seven minutes and as an approximately 12-minute-long 12" version. By the early '70s, many of his productions had evolved more and more towards funk and disco, as heard on albums by the Undisputed Truth and a 1973 album by The Jackson 5. After he left Motown in 1975, he produced some more disco hits, including "Car Wash" (1976) by Rose Royce.
In the late 1960s, uptempo soul with heavy beats and some associated dance styles and fashion were picked up in the British mod scene and formed the northern soul movement. Originating at venues such as the Twisted Wheel in Manchester, it quickly spread to other UK dancehalls and nightclubs like the Chateau Impney (Droitwich), Catacombs (Wolverhampton), the Highland Rooms at Blackpool Mecca, Golden Torch (Stoke-on-Trent) and Wigan Casino. As the favoured beat became more uptempo and frantic in the early 1970s, northern soul dancing became more athletic, somewhat resembling the later dance styles of disco and break dancing.
Featuring spins, flips, karate kicks and backdrops, club dancing styles were often inspired by the stage performances of touring American soul acts such as Little Anthony & the Imperials and Jackie Wilson. In 1974 there were an estimated 25,000 mobile discos and 40,000 professional disc jockeys in the United Kingdom. Mobile discos were hired DJs who brought their own equipment to provide music for special events. Glam rock tracks were popular, with, for example, Gary Glitter's 1972 single "Rock and Roll Part 2" becoming popular on UK dance floors despite receiving no radio airplay.
From 1974 to 1977, disco music continued to increase in popularity as many disco songs topped the charts. The Hues Corporation's 1974 "Rock the Boat", a US number-one single and million-seller, was one of the early disco songs to reach number one. The same year saw the release of "Kung Fu Fighting", performed by Carl Douglas and produced by Biddu, which reached number one in both the UK and US, and became the best-selling single of the year and one of the best-selling singles of all time with 11 million records sold worldwide, helping to popularize disco to a great extent. Another notable disco success that year was George McCrae's "Rock Your Baby": it became the United Kingdom's first number-one disco single. In the northwestern sections of the United Kingdom, the northern soul explosion, which started in the late 1960s and peaked in 1974, made the region receptive to disco, which the region's disc jockeys were bringing back from New York City. The shift by some DJs to the newer sounds coming from the U.S. resulted in a split in the scene, whereby some abandoned the 1960s soul and pushed a modern soul sound which tended to be more closely aligned with disco than with soul.
In 1975, Gloria Gaynor released her first side-long vinyl album, which included a remake of the Jackson 5's "Never Can Say Goodbye" (which is also the album title) and two other songs, "Honey Bee" and her disco version of "Reach Out (I'll Be There)"; these songs had first topped the Billboard disco/dance charts in November 1974. Later, in 1978, Gaynor scored her number-one disco song "I Will Survive", which was seen as a symbol of female strength and a gay anthem, as was her later disco hit, a 1983 remake of "I Am What I Am"; in 1979 she released "Let Me Know (I Have a Right)", a single which gained popularity in civil rights movements. Also in 1975, Vincent Montana Jr.'s Salsoul Orchestra contributed their Latin-flavored orchestral dance song "Salsoul Hustle", which reached number four on the Billboard dance chart, and followed with their 1976 hits "Tangerine" (a cover of a 1941 song) and "Nice 'n' Naasty". Songs such as Van McCoy's 1975 "The Hustle" and the humorous 1977 Joe Tex song "Ain't Gonna Bump No More (With No Big Fat Woman)" gave names to the popular disco dances "the Bump" and "the Hustle".
Other notable early successful disco songs include Barry White's "You're the First, the Last, My Everything" (1974), Labelle's "Lady Marmalade" (1974), Disco-Tex and the Sex-O-Lettes' "Get Dancin'" (1974), Silver Convention's "Fly, Robin, Fly" (1975) and "Get Up and Boogie" (1976) and Johnnie Taylor's "Disco Lady" (1976). Formed by Harry Wayne Casey (a.k.a. "KC") and Richard Finch, Miami's KC and the Sunshine Band had a string of disco-definitive top-five singles between 1975 and 1977, including "Get Down Tonight", "That's the Way (I Like It)", "(Shake, Shake, Shake) Shake Your Booty", "I'm Your Boogie Man" and "Keep It Comin' Love".
In this period, rock bands like the English Electric Light Orchestra featured in their songs a violin sound that became a staple of disco music, as in the 1975 hit "Evil Woman", although the band's style is more accurately described as orchestral rock.
In 1970s Munich, West Germany, music producers Giorgio Moroder and Pete Bellotte made a decisive contribution to disco music with a string of hits for Donna Summer, which became known as the "Munich Sound". In 1975, Summer suggested the lyric "Love to Love You Baby" to Moroder and Bellotte, who turned it into a full disco song. The final product, which contained a series of simulated orgasms, was initially not intended for release, but when Moroder played it in the clubs it caused a sensation and he released it. The song became an international hit, reaching the charts in many European countries and the US (No. 2). It has been described as the arrival of the expression of raw female sexual desire in pop music. A 17-minute 12-inch single was released; the 12" single became, and remains, a standard format in discos today. In 1976, Donna Summer's version of "Could It Be Magic" brought disco further into the mainstream. In 1977, Summer, Moroder and Bellotte released "I Feel Love", as the B-side of "Can't We Just Sit Down (And Talk It Over)", which revolutionized dance music with its mostly electronic production and was a massive worldwide success, spawning the Hi-NRG subgenre.
Other disco producers, such as Tom Moulton, took ideas and techniques from dub music (which came with the increased Jamaican migration to New York City in the 1970s) to provide alternatives to the "four on the floor" style that dominated. DJ Larry Levan utilized styles from dub and jazz and remixing techniques to create early versions of house music that sparked the genre.
In December 1977, the film "Saturday Night Fever" was released. It was a huge success and its soundtrack became one of the best-selling albums of all time. The idea for the film was sparked by a 1976 "New York" magazine article titled "Tribal Rites of the New Saturday Night", which supposedly chronicled the disco culture in mid-1970s New York City but was later revealed to have been fabricated. Some critics said the film "mainstreamed" disco, making it more acceptable to heterosexual white males. The Bee Gees used Barry Gibb's falsetto to garner hits such as "You Should Be Dancing", "Stayin' Alive", "Night Fever", "More Than a Woman" and "Love You Inside Out". Andy Gibb, a younger brother of the Bee Gees, followed with similarly styled solo singles such as "I Just Want to Be Your Everything", "(Love Is) Thicker Than Water" and "Shadow Dancing".
In 1978, Donna Summer's multi-million-selling vinyl single disco version of "MacArthur Park" was number one on the "Billboard" Hot 100 chart for three weeks and was nominated for the Grammy Award for Best Female Pop Vocal Performance. The recording, included as part of the "MacArthur Park Suite" on her double live album "Live and More", was eight minutes and 40 seconds long on the album. The shorter seven-inch vinyl single version of "MacArthur Park" was Summer's first single to reach number one on the Hot 100; it does not, however, include the balladic second movement of the song. A 2013 remix of "MacArthur Park" by Summer topped the Billboard dance charts, marking five consecutive decades with a number-one song on those charts.
From mid-1978 to late 1979, Summer continued to release singles such as "Last Dance", "Heaven Knows" (with Brooklyn Dreams), "Hot Stuff", "Bad Girls", "Dim All the Lights" and "On the Radio", all very successful songs, landing in the top five or better on the Billboard pop charts.
The band Chic was formed mainly by guitarist Nile Rodgers—a self-described "street hippie" from late 1960s New York—and bassist Bernard Edwards. "Le Freak" was a popular 1978 single of theirs that is regarded as an iconic song of the genre. Other successful songs by Chic include the often-sampled "Good Times" (1979) and "Everybody Dance" (1979). The group regarded themselves as the disco movement's rock band that made good on the hippie movement's ideals of peace, love, and freedom. Every song they wrote was written with an eye toward giving it "deep hidden meaning", or D.H.M.
Sylvester, a flamboyant and openly gay singer famous for his soaring falsetto voice, scored his biggest disco hit in late 1978 with "You Make Me Feel (Mighty Real)". His singing style was said to have influenced the singer Prince. At that time, disco was one of the forms of music most open to gay performers. The Village People were a singing/dancing group created by Jacques Morali and Henri Belolo to target disco's gay audience. They were known for their onstage costumes of typically male-associated jobs and ethnic minorities, and achieved mainstream success with their 1978 hit song "Macho Man". Other songs include "Y.M.C.A." (1979) and "In the Navy" (1979).
The Jacksons (formerly the Jackson 5) released many disco songs from 1977 to 1981, including "Blame It on the Boogie" (1978), "Shake Your Body (Down to the Ground)" (1979), "Lovely One" (1980) and "Can You Feel It" (1981); all of them were sung by Michael Jackson, whose 1979 solo album, "Off the Wall", also included several disco hits, such as the album's title song, "Rock with You", "Workin' Day and Night" and his second chart-topping solo disco hit, "Don't Stop 'Til You Get Enough". Also noteworthy are The Trammps' "Disco Inferno" (1978, a reissue owing to the popularity gained from the "Saturday Night Fever" soundtrack), Cheryl Lynn's "Got to Be Real" (1978), Evelyn "Champagne" King's "Shame" (1978), Alicia Bridges' "I Love the Nightlife" (1978), Patrick Hernandez' "Born to Be Alive" (1978), Sister Sledge's "We Are Family" (1979), Anita Ward's "Ring My Bell" (1979), Lipps Inc.'s "Funkytown" (1979), and Walter Murphy's various attempts to bring classical music to the mainstream, most notably his disco song "A Fifth of Beethoven" (1976), which was inspired by Beethoven's Fifth Symphony.
At the height of its popularity, many non-disco artists recorded songs with disco elements, such as Rod Stewart with his "Da Ya Think I'm Sexy?" in 1979. Even mainstream rock artists adopted elements of disco. Progressive rock group Pink Floyd used disco-like drums and guitar in their song "Another Brick in the Wall, Part 2" (1979), which became their only number-one single in both the US and UK.
The Eagles referenced disco with "One of These Nights" (1975) and "Disco Strangler" (1979), Paul McCartney & Wings with "Silly Love Songs" (1976) and "Goodnight Tonight" (1979), Queen with "Another One Bites the Dust" (1980), the Rolling Stones with "Miss You" (1978) and "Emotional Rescue" (1980), Stephen Stills with his album "Thoroughfare Gap" (1978), Electric Light Orchestra with "Shine a Little Love" and "Last Train to London" (both 1979), Chicago with "Street Player" (1979), the Kinks with "(Wish I Could Fly Like) Superman" (1979), the Grateful Dead with "Shakedown Street", The Who with "Eminence Front" (1982), and the J. Geils Band with "Come Back" (1980). Even hard rock group KISS jumped in with "I Was Made for Lovin' You" (1979), and Ringo Starr's album "Ringo the 4th" (1977) features a strong disco influence. The disco sound was also adopted by "non-pop" artists, including the 1979 U.S. number-one hit "No More Tears (Enough Is Enough)" by easy-listening singer Barbra Streisand in a duet with Donna Summer. In country music, artists like Connie Smith covered Andy Gibb's "I Just Want to Be Your Everything" in 1977, Bill Anderson recorded "Double S" in 1978, and Ronnie Milsap released "Get It Up" and covered blues singer Tommy Tucker's song "Hi-Heel Sneakers" in 1979.
Pre-existing non-disco songs, standards, and TV themes were frequently "disco-ized" in the 1970s, such as the "I Love Lucy" theme or Mike Post's "Theme from "Magnum P.I."". The rich orchestral accompaniment that became identified with the disco era conjured up memories of the big band era, which led several artists to record and disco-ize some big-band arrangements, including Perry Como, who re-recorded his 1945 song "Temptation" in 1975, and Ethel Merman, who released an album of disco songs entitled "The Ethel Merman Disco Album" in 1979. Myron Floren, second-in-command on "The Lawrence Welk Show", released a recording of the "Clarinet Polka" entitled "Disco Accordion". Similarly, Bobby Vinton adapted "The Pennsylvania Polka" into a song named "Disco Polka". Easy-listening icon Percy Faith, in one of his last recordings, released an album entitled "Disco Party" (1975) and recorded a disco version of his "Theme from "A Summer Place"" in 1976. Classical music was also adapted for disco, notably Walter Murphy's "A Fifth of Beethoven" (1976, based on the first movement of Beethoven's Fifth Symphony) and "Flight 76" (1976, based on Rimsky-Korsakov's "Flight of the Bumblebee"), and Louis Clark's "Hooked On Classics" series of albums and singles.
Many original television theme songs of the era also showed a strong disco influence, such as "Star Wars Theme/Cantina Band" (1977) by Meco and "Twilight Zone/Twilight Tone" (1979) by the Manhattan Transfer. Other examples include "S.W.A.T." (1975), "Wonder Woman" (1975), "Charlie's Angels" (1976), "NBC Saturday Night At The Movies" (1976), "The Love Boat" (1977), "The Donahue Show" (1977), "CHiPs" (1977), "The Professionals" (1977), "Kojak" (1977), "Dallas" (1978), NBC Sports broadcasts (1978), and "The Hollywood Squares" (1979). Disco jingles also made their way into many TV commercials, including Purina's 1979 "Good Mews" cat food commercial and an "IC Light" commercial by Pittsburgh's Iron City Brewing Company. Several parodies of the disco style were created.
Rick Dees, at the time a radio DJ in Memphis, Tennessee, recorded "Disco Duck" (1976) and "Dis-Gorilla" (1977); Frank Zappa parodied the lifestyles of disco dancers in "Disco Boy" on his 1976 "Zoot Allures" album and in "Dancin' Fool" on his 1979 "Sheik Yerbouti" album; "Weird Al" Yankovic's eponymous 1983 debut album includes a disco song called "Gotta Boogie", an extended pun on the similarity of the disco move to the American slang word "booger". Comedian Bill Cosby devoted his entire 1977 album "Disco Bill" to disco parodies. In 1980, "Mad Magazine" released a flexi-disc titled "Mad Disco" featuring six full-length parodies of the genre. Rock and roll songs critical of disco included Bob Seger's "Old Time Rock and Roll" and, especially, The Who's "Sister Disco" (both 1978), although The Who's "Eminence Front", four years later, had a disco feel.
By the end of the 1970s, a strong anti-disco sentiment had developed among rock music fans and musicians, particularly in the United States. Disco was criticized as mindless, consumerist, overproduced and escapist. The slogans "Disco sucks" and "Death to disco" became common. Rock artists such as Rod Stewart and David Bowie who added disco elements to their music were accused of being sell-outs. The punk subculture in the United States and United Kingdom was often hostile to disco, although in the UK many early Sex Pistols fans, such as the Bromley Contingent and Jordan, quite liked disco, often congregating at nightclubs such as Louise's in Soho and the Sombrero in Kensington. The track "Love Hangover" by Diana Ross, the house anthem at the former, was cited as a particular favourite by many early UK punks. Also, the film "The Great Rock 'n' Roll Swindle" and its soundtrack album contained a disco medley of Sex Pistols songs, entitled "Black Arabs" and credited to a group of the same name. Jello Biafra of the Dead Kennedys, in the song "Saturday Night Holocaust", likened disco to the cabaret culture of Weimar-era Germany for its apathy towards government policies and its escapism. Mark Mothersbaugh of Devo said that disco was "like a beautiful woman with a great body and no brains", and a product of the political apathy of that era. New Jersey rock critic Jim Testa wrote "Put a Bullet Through the Jukebox", a vitriolic screed attacking disco that was considered a punk call to arms. Steve Hillage, shortly before his transformation from progressive rock musician into electronic artist at the end of the 1970s under the inspiration of disco, disappointed his rockist fans by admitting his love for the genre, recalling "it's like I'd killed their pet cat."
Anti-disco sentiment was expressed in some television shows and films. A recurring theme on the show "WKRP in Cincinnati" was a hostile attitude towards disco music. In one scene of the 1980 comedy film "Airplane!", a wayward airplane slices a radio tower with its wing, knocking out an all-disco radio station.
July 12, 1979, became known as "the day disco died" because of Disco Demolition Night, an anti-disco demonstration held during a baseball doubleheader at Comiskey Park in Chicago. Rock-station DJs Steve Dahl and Garry Meier, along with Michael Veeck, son of Chicago White Sox owner Bill Veeck, staged the promotional event for disgruntled rock fans between the games of the White Sox doubleheader; it involved exploding disco records in center field.
As the second game was about to begin, the raucous crowd stormed onto the field and proceeded to set fires, tear out seats and pieces of turf, and cause other damage. The Chicago Police Department made numerous arrests, and the extensive damage to the field forced the White Sox to forfeit the second game to the Detroit Tigers, who had won the first game.
Disco's decline in popularity after Disco Demolition Night was rapid. On July 21, 1979, the top six records on the U.S. music charts were disco songs. By September 22, there were no disco songs in the US Top 10 chart, save for Herb Alpert's instrumental "Rise", a smooth jazz composition with some disco overtones. Some in the media, in celebratory tones, declared disco "dead" and rock revived. Karen Mixon Cook, the first female disco DJ, stated that people still pause every July 12 for a moment of silence in honor of disco. Dahl stated in a 2004 interview that disco was "probably on its way out [at the time]. But I think it [Disco Demolition Night] hastened its demise".
The anti-disco movement, combined with other societal and radio-industry factors, changed the face of pop radio in the years following Disco Demolition Night. Starting in the 1980s, country music began a slow rise in the American mainstream pop charts. Emblematic of country music's rise to mainstream popularity was the commercially successful 1980 movie "Urban Cowboy". The continued popularity of power pop and the revival of oldies in the late 1970s were also related to disco's decline; the 1978 film "Grease" was emblematic of this trend. Coincidentally, the star of both films was John Travolta, who in 1977 had starred in "Saturday Night Fever", which remains one of the most iconic disco films of the era.
During this period of decline in disco's popularity, several record companies folded, were reorganized, or were sold. In 1979, MCA Records purchased ABC Records, absorbed some of its artists, and then shut the label down. Midsong International Records ceased operations in 1980. RSO Records founder Robert Stigwood left the label in 1981, and TK Records closed in the same year. Salsoul Records continues to exist in the 2000s, but primarily as a reissue brand. Casablanca Records was releasing fewer records in the 1980s and was shut down in 1986 by parent company PolyGram.
Many groups that were popular during the disco period subsequently struggled to maintain their success, even those that tried to adapt to evolving musical tastes. The Bee Gees, for instance, had only one top-10 entry (1989's "One") and three more top-40 songs in the United States after the 1970s, despite recording and releasing far more than that and completely abandoning disco in their 1980s and 1990s songs, even though numerous songs they wrote for "other" artists to perform were successful. Of the handful of groups "not" taken down by disco's fall from favor, Kool and the Gang, Donna Summer, the Jacksons—and Michael Jackson in particular—stand out: in spite of having helped "define" the disco sound early on, they continued to make popular and danceable, if more refined, songs for yet another generation of music fans in the 1980s and beyond. Earth, Wind & Fire also survived the anti-disco trend and continued to produce successful singles at roughly the same pace for several more years, in addition to an even longer string of R&B chart hits that lasted into the 1990s.
Six months prior to the chaotic event, in December 1978, popular progressive rock radio station WDAI (WLS-FM) had suddenly switched to an all-disco format, disenfranchising thousands of Chicago rock fans and leaving Dahl unemployed. WDAI, which survived the change of public sentiment and still had good ratings at this point, continued to play disco until it flipped to a short-lived hybrid Top 40/rock format in May 1980. Another disco outlet that competed against WDAI at the time, WGCI-FM, would later incorporate R&B and pop songs into the format, eventually evolving into the urban contemporary outlet that it remains today; the station also helped bring the Chicago house genre to the airwaves.
Factors that have been cited as leading to the decline of disco in the United States include economic and political changes at the end of the 1970s, as well as burnout from the hedonistic lifestyles led by participants. In the years since Disco Demolition Night, some social critics have described the "Disco sucks" movement as implicitly macho and bigoted, and an attack on non-white and non-heterosexual cultures. It has also been interpreted as part of a wider cultural shift towards conservatism, which also made its way into US politics with the election of conservative president Ronald Reagan in 1980 and the accompanying Republican control of the United States Senate for the first time since 1954, plus the subsequent rise of the Religious Right around the same time. In January 1979, rock critic Robert Christgau argued that homophobia, and most likely racism, were reasons behind the movement, a conclusion seconded by John Rockwell. Craig Werner wrote: "The Anti-disco movement represented an unholy alliance of funkateers and feminists, progressives and puritans, rockers and reactionaries. Nonetheless, the attacks on disco gave respectable voice to the ugliest kinds of unacknowledged racism, sexism and homophobia." Legs McNeil, founder of the fanzine "Punk", was quoted in an interview as saying, "the hippies always wanted to be black. We were going, 'f**k the blues, f**k the black experience'." He also said that disco was the result of an "unholy" union between homosexuals and blacks. Steve Dahl, who had spearheaded Disco Demolition Night, denied any racist or homophobic undertones to the promotion, saying, "It's really easy to look at it historically, from this perspective, and attach all those things to it. But we weren't thinking like that." It has been noted that British punk rock critics of disco were very supportive of the pro-black/anti-racist reggae genre as well as the more pro-gay New Romantic movement. Christgau and Jim Testa have said that there were legitimate artistic reasons for being critical of disco.
In 1979, the music industry in the United States underwent its worst slump in decades, and disco, despite its mass popularity, was blamed. The producer-oriented sound was having difficulty mixing well with the industry's artist-oriented marketing system. Harold Childs, senior vice president at A&M Records, told the "Los Angeles Times" that "radio is really desperate for rock product" and "they're all looking for some white rock-n-roll". Gloria Gaynor argued that the music industry supported the destruction of disco because rock music producers were losing money and rock musicians were losing the spotlight.
Despite its decline in popularity, disco music remained "relatively" successful in the early 1980s, with songs like Irene Cara's
What a Feeling" (theme to the film "Flashdance") and the theme song to the film "Fame" (later re-sung by Erica Gimpel for the TV show of the same name), Michael Jackson's "Thriller" and "Wanna Be Startin' Somethin'", and Madonna's first album–all which had strong disco influences. Record producer Giorgio Moroder's soundtracks to "American Gigolo", "Flashdance" and "Scarface" (which also had a heavy disco influence) proved that the style was still very much embraced. Queen's 1982 album, "Hot Space" was inspired by the genre as well. To a significant extent, the transition from disco to 1980s dance music was one of relabeling. The "word" "disco" simply became unfashionable to use when describing new music. As late as 1983, K.C. and the Sunshine Band had a major hit single, "Give It Up", which was not considered disco, even though it would have been considered to be in the heart of the genre if it had been released four years earlier. In 1980s house music, and Chicago house in particular, a strong disco influence—mediated by subgenres like post disco and Italo disco—was constantly present, which is why house music, regarding its enormous success in shaping electronic dance music and contemporary club culture, is often described being "disco's revenge". In the 1990s, disco and its legacy became more accepted by pop music artists and listeners alike, as more songs and films were released that referenced disco. This was part of a wave of 1970s nostalgia that was taking place in popular culture at the time. Examples of songs during this time that were influenced by disco included Deee-Lite's "Groove Is in the Heart" (1990), U2's "Lemon" (1993), Blur's "Girls & Boys" (1994) & "Entertain Me" (1995), Pulp's "Disco 2000" (1995), and Jamiroquai's "Canned Heat" (1999), while films such as "Boogie Nights" (1997) and "The Last Days of Disco" (1998) featured primarily disco soundtracks. In the early 2000s, an updated genre of disco called "nu-disco" began breaking into the mainstream. A few examples like Daft Punk's "One More Time" and Kylie Minogue's "Love At First Sight" and "Can't Get You Out of My Head" became club favorites and commercial successes. Several nu-disco songs were crossovers with funky house, such as Spiller's "Groovejet (If This Ain't Love)" and Modjo's "Lady (Hear Me Tonight)", both songs sampling older disco songs and both reaching number one on the UK Singles Chart in 2000. Robbie Williams' disco single "Rock DJ" was the UK's fourth best-selling single the same year. Rock band Manic Street Preachers released a disco song, "Miss Europa Disco Dancer", in 2001. The song's disco influence, which appears on "Know Your Enemy", was described as being "much-discussed". In 2005, Madonna immersed herself in the disco music of the 1970s, and released her album "Confessions on a Dance Floor" to rave reviews. In addition to that, her song "Hung Up" became a major top-10 song and club staple, and sampled ABBA's 1979 song "Gimme! Gimme! Gimme! (A Man After Midnight)". In addition to her disco-influenced attire to award shows and interviews, her Confessions Tour also incorporated various elements of the 1970s, such as disco balls, a mirrored stage design, and the roller derby. The success of the "nu-disco" revival of the early 2000s was described by music critic Tom Ewing as more interpersonal than the pop music of the 1990s: "The revival of disco within pop put a spotlight on something that had gone missing over the 90s: a sense of music not just for dancing, but for dancing with someone. 
Disco was a music of mutual attraction: cruising, flirtation, negotiation. Its dancefloor is a space for immediate pleasure, but also for promises kept and otherwise. It's a place where things start, but their resolution, let alone their meaning, is never clear. All of 2000's great disco number ones explore how to play this hand. Madison Avenue look to impose their will upon it, to set terms and roles. Spiller is less rigid. 'Groovejet' accepts the night's changeability, happily sells out certainty for an amused smile and a few great one-liners." In 2013, several 1970s-style disco and funk songs charted, and the pop charts had more dance songs than at any other point since the late 1970s. The biggest disco song of the year as of June was "Get Lucky" by Daft Punk, featuring Nile Rodgers on guitar. Daft Punk's album "Random Access Memories" went on to win Album of the Year at the 2014 Grammys. Other disco-styled songs that made it into the top 40 were Robin Thicke's "Blurred Lines" (number one), Justin Timberlake's "Take Back the Night" (number 29), Bruno Mars' "Treasure" (number five) and Michael Jackson's posthumous release "Love Never Felt So Good" (number nine). In addition, Arcade Fire's "Reflektor" featured strong disco elements. In 2014, disco music could be found in Lady Gaga's "Artpop" and Katy Perry's "Birthday". Other disco songs from 2014 include "I Want It All" by Karmin, "Wrong Club" by the Ting Tings and "Blow" by Beyoncé. In 2014, Brazil's Globo TV, the fourth biggest television network in the world, aired "Boogie Oogie", a telenovela about the disco era set between 1978 and 1979, from the height of the fever to the decadence. The show's success was responsible for a disco revival across the country, bringing discothèque divas like Lady Zu and As Frenéticas back to the stage and the record charts. Top-10 entries from 2015, such as Mark Ronson's disco groove-infused "Uptown Funk", Maroon 5's "Sugar", the Weeknd's "Can't Feel My Face" and Jason Derulo's "Want To Want Me", also ascended the charts and had a strong disco influence. Disco mogul and producer Giorgio Moroder also reappeared with his new album "Déjà Vu" in 2015, which proved to be a modest success. Other songs from 2015, like "I Don't Like It, I Love It" by Flo Rida, "Adventure of a Lifetime" by Coldplay, "Back Together" by Robin Thicke and "Levels" by Nick Jonas, feature disco elements as well. In 2016, disco songs and disco-styled pop songs showed a strong presence on the music charts, in a possible backlash against the 1980s-styled synthpop, electro house, and dubstep that had been dominating the charts. Justin Timberlake's 2016 song "Can't Stop the Feeling!", which shows strong elements of disco, became the 26th song in the history of the "Billboard" Hot 100 to debut at number one. "The Martian", a 2015 film, uses disco music extensively as its soundtrack, although for the main character, astronaut Mark Watney, there is only one thing worse than being stranded on Mars: being stranded on Mars with nothing but disco music. "Kill the Lights", featured on an episode of the HBO television series "Vinyl" (2016) and featuring Nile Rodgers' guitar licks, hit number one on the US Dance chart in July 2016. In 2020, disco-influenced hits such as Doja Cat's "Say So", Lady Gaga's "Stupid Love", and Dua Lipa's "Don't Start Now" were popular in the US, reaching numbers 1, 5, and 2, respectively, on the Billboard Hot 100 chart. 
An article in "Billboard" declared that Dua Lipa was "Leading the Charge Toward Disco-Influenced Production" the day after her retro, disco-influenced album "Future Nostalgia" was released on March 27, 2020. Diana Ross was one of the first Motown artists to embrace the disco sound, with her successful 1976 outing "Love Hangover" from her self-titled album. Her 1980 dance classics "Upside Down" and "I'm Coming Out" were written and produced by Nile Rodgers and Bernard Edwards of the group Chic. The Supremes, the group that made Ross famous, scored a handful of hits in the disco clubs without her, most notably 1976's "I'm Gonna Let My Heart Do the Walking" and, their last charted single before disbanding, 1977's "You're My Driving Wheel". At Motown's request that he produce songs in the disco genre, Marvin Gaye released "Got to Give It Up" in 1977, despite his dislike of disco; he had vowed not to record in the genre and actually wrote the song as a parody. However, several of Gaye's songs have disco elements, including "I Want You" (1975). Stevie Wonder released the disco single "Sir Duke" in 1977 as a tribute to Duke Ellington, the influential jazz legend who had died in 1974. Smokey Robinson left the Motown group the Miracles for a solo career in 1972 and released his third solo album, "A Quiet Storm", in 1975, which spawned and lent its name to the "Quiet Storm" musical programming format and subgenre of R&B. It contained the disco single "Baby That's Backatcha". Other Motown artists who scored disco hits include Robinson's former group, the Miracles, with "Love Machine" (1975); Eddie Kendricks with "Keep On Truckin'" (1973); the Originals with "Down to Love Town" (1976); and Thelma Houston with her cover of the Harold Melvin and the Blue Notes song "Don't Leave Me This Way" (1976). The label continued to release successful disco songs into the 1980s with Rick James' "Super Freak" (1981) and the Commodores' "Lady (You Bring Me Up)" (1981). Several Motown solo artists who left the label went on to have successful disco songs. Mary Wells, Motown's first female superstar with her signature song "My Guy" (written by Smokey Robinson), abruptly left the label in 1964. She briefly reappeared on the charts with the disco song "Gigolo" in 1980. Jimmy Ruffin, the elder brother of the Temptations lead singer David Ruffin, was also signed to Motown, and released his most successful and well-known song "What Becomes of the Brokenhearted" as a single in 1966. Ruffin eventually left the record label in the mid-1970s, but saw success with the 1980 disco song "Hold On (To My Love)", which was written and produced by Robin Gibb of the Bee Gees, for his album "Sunrise". Edwin Starr, known for his Motown protest song "War" (1970), reentered the charts in 1979 with a pair of disco songs, "Contact" and "H.A.P.P.Y. Radio". Kiki Dee became the first white British singer to sign with Motown in the US, releasing one album, "Great Expectations" (1970), and two singles, "The Day Will Come Between Sunday and Monday" (1970) and "Love Makes the World Go Round" (1971), the latter giving her her first ever chart entry (number 87 on the US chart). She soon left the company and signed with Elton John's The Rocket Record Company, and in 1976 had her biggest and best-known single, "Don't Go Breaking My Heart", a disco duet with John. The song was intended as an affectionate disco-style pastiche of the Motown sound, in particular the various duets recorded by Marvin Gaye with Tammi Terrell and Kim Weston. 
Michael Jackson released many successful solo singles on the Motown label, such as "Got to Be There" (1971), "Ben" (1972) and a cover of Bobby Day's "Rockin' Robin" (1972). He went on to score hits in the disco genre with "Rock with You" (1979), "Don't Stop 'Til You Get Enough" (1979) and "Billie Jean" (1983) for Epic Records. Many Motown groups who had left the record label charted with disco songs. Michael Jackson was the lead singer of the Jackson 5, one of Motown's premier acts in the early 1970s. They left the record company in 1975 (Jermaine Jackson, however, remained with the label) after successful songs like "I Want You Back" (1969) and "ABC" (1970), and even the disco song "Dancing Machine" (1974). Renamed 'the Jacksons' (as Motown owned the name 'the Jackson 5'), they went on to find success with disco songs like "Blame It on the Boogie" (1978), "Shake Your Body (Down to the Ground)" (1979) and "Can You Feel It?" (1981) on the Epic label. The Isley Brothers, whose short tenure at the company had produced the song "This Old Heart of Mine (Is Weak for You)" in 1966, went on to release successful disco songs like "That Lady" (1973) and "It's a Disco Night (Rock Don't Stop)" (1979). Gladys Knight and the Pips, who recorded the most successful version of "I Heard It Through the Grapevine" (1967) before Marvin Gaye's, scored commercially successful singles such as "Baby, Don't Change Your Mind" (1977) and "Bourgie, Bourgie" (1980) in the disco era. The Detroit Spinners were also signed to the Motown label and saw success with the Stevie Wonder-produced song "It's a Shame" in 1970. They left soon after for Atlantic Records, on the advice of fellow Detroit native Aretha Franklin, and there had disco songs like "The Rubberband Man" (1976). In 1979, they released a successful cover of Elton John's "Are You Ready for Love", as well as a medley of the Four Seasons' song "Working My Way Back to You" and Michael Zager's "Forgive Me, Girl". The Four Seasons themselves were briefly signed to Motown's MoWest label, a short-lived subsidiary for R&B and soul artists based on the West Coast, and there the group produced one album, "Chameleon" (1972), to little commercial success in the US. However, one single, "The Night", was released in Britain in 1975 and, thanks to popularity on the Northern Soul circuit, reached number seven on the UK Singles Chart. The Four Seasons left Motown in 1974 and went on to have a disco hit with their song "December, 1963 (Oh, What a Night)" (1975) for Warner Curb Records. Norman Whitfield was a producer at Motown, renowned for creating innovative "psychedelic soul" songs. The genre later developed into funk, and from there into disco. The Undisputed Truth, a Motown recording act assembled by Whitfield to experiment with his psychedelic soul production techniques, found success with their 1971 song "Smiling Faces Sometimes". Their disco single "You + Me = Love" reached number 43 in 1976 and also made number 2 on the US Dance Charts. In 1977, singer, songwriter and producer Willie Hutch, who had been signed to Motown since 1970, signed with Whitfield's new label and scored a successful disco single with his song "In and Out". The group Rose Royce produced the soundtrack to the 1976 film "Car Wash", which contained the commercially successful song of the same name. Singer Stacy Lattisaw signed with Motown "after" achieving success in the disco genre. 
In 1980, she released her album "Let Me Be Your Angel" on the Cotillion label, which spawned the disco singles "Dynamite" and "Jump to the Beat". Lattisaw continued to enjoy success as a contemporary R&B/pop artist throughout the 1980s. She signed with Motown in 1986, and achieved her greatest success when teaming up with Johnny Gill, releasing the 1989 song "Where Do We Go From Here?" from her last album, "What You Need", before retiring. In addition, her debut single, in 1979, was a disco cover of "When You're Young and in Love", which had been recorded by the Motown female group the Marvelettes in 1967. Additionally, the debut single of Shalamar, the group originally created as a disco-driven vehicle by "Soul Train" creator Don Cornelius, was "Uptown Festival" (1977), a medley of 10 classic Motown songs sung over a 1970s disco beat. In the mid to late 1970s, European acts defined the so-called Euro disco sound: Silver Convention (1974–1979), Boney M. (1974–1986), Love and Kisses (1977–1982), the "Munich Sound" of West Germany-based Donna Summer and producer Giorgio Moroder (whom AllMusic described as "one of the principal architects of the disco sound", notably for the Donna Summer song "I Feel Love" (1977)), Moroder's disco music project Munich Machine (1976–1980), as well as Jean-Marc Cerrone and the Village People. The German group Kraftwerk also had an influence on Euro disco. By far the most successful Euro disco act was ABBA. This Swedish quartet, which sang in English, found success with singles such as "Waterloo" (1974), "Fernando" (1976), "Take a Chance on Me" (1978), "Gimme! Gimme! Gimme! (A Man After Midnight)" (1979), and their signature smash hit "Dancing Queen" (1976), and ranks as the fourth best-selling act of all time. In Germany, Boney M. was a Euro disco group of four West Indian singers and dancers masterminded by West German record producer Frank Farian. Boney M. charted worldwide with such songs as "Daddy Cool" (1976), "Ma Baker" (1977) and "Rivers of Babylon" (1978). Another prominent European pop and disco group was Luv' from the Netherlands. In France, Dalida released "J'attendrai" ("I Will Wait") in 1975, which also became successful in Canada, Europe and Japan. Dalida successfully adjusted to the disco era, releasing at least a dozen songs that charted in the top 10 across Europe and beyond. Claude François, who re-invented himself as the king of French disco, released "La plus belle chose du monde", a French version of the Bee Gees song "Massachusetts", which became successful in Canada and Europe; his "Alexandrie Alexandra" was released posthumously on the day of his burial and became a worldwide success. Cerrone's early songs, "Love in C Minor" (1976), "Supernature" (1977) and "Give Me Love" (1978), were successful in the US and Europe. Another Euro disco act was the French diva Amanda Lear, whose Euro disco sound is most evident on "Enigma (Give a Bit of Mmh to Me)" (1978). In Italy, Raffaella Carrà was the most successful disco act. Her greatest international single was "Tanti Auguri" ("Best Wishes"), which has become a popular song with gay audiences. The song is also known under its Spanish title "Para hacer bien el amor hay que venir al sur" (which refers to Southern Europe, since the song was recorded and taped in Spain). The Estonian version of the song, "Jätke võtmed väljapoole", was performed by Anne Veski. 
"A far l'amore comincia tu" ("To make love, your move first") was another success for her internationally, known in Spanish as "En el amor todo es empezar", in German as "Liebelei", in French as "Puisque tu l'aimes dis le lui", and in English as "Do It, Do It Again". It was her only entry to the UK Singles Chart, reaching number 9, where she remains a one-hit wonder. In 1977, she recorded another successful single, "Fiesta" ("The Party" in English) originally in Spanish, but then recorded it in French and Italian after the song hit the charts. "A far l'amore comincia tu" has also been covered in Turkish by a Turkish popstar Ajda Pekkan as "Sakın Ha" in 1977. Recently, Carrà has gained new attention for her appearance as the female dancing soloist in a 1974 TV performance of the experimental gibberish song "Prisencolinensinainciusol" (1973) by Adriano Celentano. A remixed video featuring her dancing went viral on the internet in 2008. In 2008 a video of a performance of her only successful UK single, "Do It, Do It Again", was featured in the "Doctor Who" episode "Midnight". Rafaella Carrà worked with Bob Sinclar on the new single "Far l'Amore" which was released on YouTube on March 17, 2011. The song charted in different European countries. Euro disco continued evolving within the broad mainstream pop music scene, even when disco's popularity sharply declined in the United States, abandoned by major U.S. record labels and producers. The rising popularity of disco came in tandem with developments in the role of the DJ. DJing developed from the use of multiple record turntables and DJ mixers to create a continuous, seamless mix of songs, with one song transitioning to another with no break in the music to interrupt the dancing. The resulting DJ mix differed from previous forms of dance music in the 1960s, which were oriented towards live performances by musicians. This in turn affected the arrangement of dance music, since songs in the disco era typically contained beginnings and endings marked by a simple beat or riff that could be easily used to transition to a new song. The development of DJing was also influenced by new turntablism techniques, such as beatmatching and scratching, a process facilitated by the introduction of new turntable technologies such as the Technics SL-1200 MK 2, first sold in 1978, which had a precise variable pitch control and a direct drive motor. DJs were often avid record collectors, who would hunt through used record stores for obscure soul records and vintage funk recordings. DJs helped to introduce rare records and new artists to club audiences. In the 1970s, individual DJs became more prominent, and some DJs, such as Larry Levan, the resident at Paradise Garage, Jim Burgess, Tee Scott and Francis Grasso became famous in the disco scene. Levan, for example, developed a cult following among club-goers, who referred to his DJ sets as "Saturday Mass". Some DJs would use reel to reel tape recorders to make remixes and tape edits of songs. Some DJs who were making remixes made the transition from the DJ booth to becoming a record producer, notably Burgess. Scott developed several innovations. He was the first disco DJ to use three turntables as sound sources, the first to simultaneously play two beat matched records, the first user of electronic effects units in his mixes and an innovator in mixing dialogue in from well-known movies into his mixes, typically over a percussion break. 
These mixing techniques were also adopted by radio DJs, such as Ted Currier of WKTU and WBLS. Grasso is particularly notable for taking the DJ "profession out of servitude and [making] the DJ the musical head chef". Once he entered the scene, the DJ was no longer responsible for waiting on the crowd hand and foot, meeting their every song request. Instead, with increased agency and visibility, the DJ was now able to use his own technical and creative skills to whip up a nightly special of innovative mixes, refining his personal sound and aesthetic, and building his own reputation. Known as the first DJ to take his audience on a narrative, musical journey, Grasso discovered that music could effectively shift the energy of the crowd, and even more, that he had all this power at his fingertips. The disco sound had a strong influence on early hip hop. Most of the early hip hop songs were created by isolating existing disco bass-guitar lines and dubbing over them with MC rhymes. The Sugarhill Gang used Chic's "Good Times" as the foundation for their 1979 song "Rapper's Delight", generally considered to be the song that first popularized rap music in the United States and around the world. With synthesizer and Krautrock influences replacing the previous disco foundation, a new genre was born when Afrika Bambaataa released the single "Planet Rock", spawning a hip hop electronic dance trend that includes songs such as Planet Patrol's "Play at Your Own Risk" (1982), C-Bank's "One More Shot" (1982), Cerrone's "Club Underworld" (1984), Shannon's "Let the Music Play" (1983), Freeez's "I.O.U." (1983), Midnight Star's "Freak-a-Zoid" (1983), and Chaka Khan's "I Feel for You" (1984). The transition from the late-1970s disco styles to the early-1980s dance styles was marked primarily by the change from complex arrangements performed by large ensembles of studio session musicians (including a horn section and an orchestral string section) to a leaner sound, in which one or two singers would perform to the accompaniment of synthesizer keyboards and drum machines. In addition, dance music during the 1981–83 period borrowed elements from blues and jazz, creating a style different from the disco of the 1970s. This emerging music was still known as disco for a short time, as the word had become associated with any kind of dance music played in discothèques. Performers of the early-1980s dance sound include D. Train, Kashif, and Patrice Rushen. These changes were influenced by some of the notable R&B and jazz musicians of the 1970s, such as Stevie Wonder, Kashif and Herbie Hancock, who had pioneered "one-man-band"-type keyboard techniques. Some of these influences had already begun to emerge during the mid-1970s, at the height of disco's popularity. During the first years of the 1980s, the disco sound began to be phased out, and faster tempos and synthesized effects, accompanied by guitar and simplified backgrounds, moved dance music toward the funk and pop genres. This trend can be seen in singer Billy Ocean's recordings between 1979 and 1981. Whereas Ocean's 1979 song "American Hearts" was backed with an orchestral arrangement played by the Los Angeles Symphony Orchestra, his 1981 song "One of Those Nights (Feel Like Gettin' Down)" had a more bare, stripped-down sound, with no orchestration or symphonic arrangements. This drift from the original disco sound is called post-disco, which also included boogie and Italo disco. 
It had an important influence on early alternative dance and dance-pop, and played a key role in the transition between disco and house music during the early 1980s. The post-punk movement that originated in the late 1970s supported punk rock's rule breaking while rejecting its return to raw rock music. Post-punk's mantra of constantly moving forward lent itself to both openness to and experimentation with elements of disco and other styles. Public Image Ltd is considered the first post-punk group. The group's second album, "Metal Box", fully embraced the "studio as instrument" methodology of disco. The group's founder, John Lydon, the former lead singer of the Sex Pistols, told the press that disco was the only music he cared for at the time. No wave was a subgenre of post-punk centered in New York City. For shock value, James Chance, a notable member of the no wave scene, penned an article in the "East Village Eye" urging his readers to move uptown and get "trancin' with some superadioactive disco voodoo funk". His band James White and the Blacks recorded a disco album titled "Off White". Their performances resembled those of disco performers (horn section, dancers and so on). In 1981, ZE Records led the transition from no wave into the more subtle mutant disco (post-disco/punk) genre. Mutant disco acts such as Kid Creole and the Coconuts, Was (Not Was), ESG and Liquid Liquid influenced several British post-punk acts such as New Order, Orange Juice and A Certain Ratio. In the early 2000s, dance-punk (known as new rave in the United Kingdom) emerged as part of a broader post-punk revival. It fused elements of punk-related rock with different forms of dance music, including disco. Klaxons, LCD Soundsystem, Death From Above 1979, the Rapture and Shitdisco were among the acts associated with the genre. House music is a genre of electronic dance music that originated in Chicago in the early 1980s (see also: Chicago house). It quickly spread to other American cities such as Detroit, where it developed into the harder and more industrial techno, as well as New York City (see also: garage house) and Newark, all of which developed their own regional scenes. In the mid- to late 1980s, house music became popular in Europe as well as major cities in South America and Australia. Early commercial success for house music in Europe came with songs such as "Pump Up the Volume" by MARRS (1987), "House Nation" by House Master Boyz and the Rude Boy of House (1987), "Theme from S'Express" by S'Express (1988) and "Doctorin' the House" by Coldcut (1988) in the pop charts. Since the early to mid-1990s, house music has been infused into mainstream pop and dance music worldwide. Early house music was generally dance-based music characterized by repetitive four-on-the-floor beats, rhythms mainly provided by drum machines, off-beat hi-hat cymbals, and synthesized basslines. While house displayed several characteristics similar to disco music, it was more electronic and minimalist, and the repetitive rhythm of house was more important than the song itself. House also did not use the lush string sections that were a key part of the disco sound. House music in the 2010s, while keeping several of these core elements, notably the prominent kick drum on every beat, varies widely in style and influence, ranging from the soulful and atmospheric deep house to the more aggressive acid house or the minimalist microhouse. 
House music has also fused with several other genres, creating fusion subgenres such as euro house, tech house, electro house and jump house. In the late 1980s and early 1990s, rave culture began to emerge from the house and acid house scene. Like house, it incorporated disco culture's love of dance music played by DJs over powerful sound systems, its exploration of recreational and club drugs, its sexual promiscuity, and its hedonism. Although disco culture started out underground, it eventually thrived in the mainstream by the late 1970s, and major labels commodified and packaged the music for mass consumption. In contrast, rave culture started out underground and stayed (mostly) underground. In part this was to avoid the animosity still surrounding disco and dance music. The rave scene also stayed underground to avoid the law enforcement attention directed at it due to its use of secret, unauthorized warehouses for some dance events and its association with illegal club drugs like Ecstasy. Nu-disco is a 21st-century dance music genre associated with the renewed interest in 1970s and early 1980s disco, mid-1980s Italo disco, and synthesizer-heavy Euro disco aesthetics. The moniker appeared in print as early as 2002, and by mid-2008 it was used by record shops such as the online retailers Juno and Beatport. These vendors often associate it with re-edits of original-era disco music, as well as with music from European producers who make dance music inspired by original-era American disco, electro and other genres popular in the late 1970s and early 1980s. It is also used to describe the music on several American labels that were previously associated with the genres electroclash and French house.
https://en.wikipedia.org/wiki?curid=7966
Donegal fiddle tradition The Donegal fiddle tradition is the way of playing the fiddle that is traditional in County Donegal, Ireland. It is one of the distinct fiddle traditions within Irish traditional music. The distinctness of the Donegal tradition developed due to the close relations between Donegal and Scotland, and the Donegal repertoire and style show influences from Scottish fiddle music. For example, in addition to standard tune types such as jigs and reels, the Donegal tradition also has Highlands (influenced by the Scottish strathspey). The distinctiveness of the Donegal tradition led to some conflict between Donegal players and representatives of the mainstream tradition when Irish traditional music was organised in the 1960s. The tradition has several distinguishing traits compared to other fiddle traditions, such as the Sliabh Luachra style of southern Ireland, most of which involve bowing style, ornamentation, and rhythm. Due to the frequency of double stops and the strong bowing, it is often compared to the Cape Breton tradition. Another characteristic of the style is the rapid pace at which it tends to proceed. Modern players, such as the members of the fiddle group Altan, have kept the tradition popular. Among the most famous Donegal-style players are John Doherty from the early twentieth century and James Byrne, Paddy Glackin, Tommy Peoples and Mairéad Ní Mhaonaigh in recent decades. The fiddle has ancient roots in Ireland, the first report of bowed instruments similar to the violin appearing in the Book of Leinster (ca. 1160). The modern violin was ubiquitous in Ireland by the early 1700s. However, the first mention of the fiddle being in use in Donegal is from the blind harper Arthur O'Neill, who in his 1760 memoirs described a wedding in Ardara as having "plenty of pipers and fiddlers". Donegal fiddlers participated in the development of the Irish music tradition in the 18th century, during which jigs and slip jigs, and later reels and hornpipes, became the dominant musical forms. However, Donegal musicians, many of them fishermen, also frequently travelled to Scotland, where they acquired tune types from the Scottish repertoire such as the strathspey, which was integrated into the Donegal tradition as "Highland" tunes. The Donegal tradition derives much of its unique character from this synthesis of Irish and Scottish stylistic features and repertoires. Mac Aoidh notes, however, that while different types of art music were commonly played among the upper classes of Scottish society in the 18th century, the Donegal tradition drew exclusively from the popular types of Scottish music. Like some Scottish fiddlers (who, like Donegal fiddlers, tend to use a short bow and play in a straight-ahead fashion), some Donegal fiddlers worked at imitating the sound of the bagpipes. Workers from Donegal would bring their music to Scotland and also bring back Scottish tunes with them, such as the music of J. Scott Skinner and Mackenzie Murdoch. Lilting, the unaccompanied singing of wordless tunes, was also an important part of the Donegal musical tradition, often performed by women in social settings. Describing the musical life of Arranmore Island in the late 19th century, singer Róise Rua Nic Gríanna listed the most popular dances: "The Sets, the Lancers, the Maggie Pickie [i.e., Maggie Pickins] the Donkey, the Mazurka and the Barn dances". 
Among the travelling fiddlers of the late 19th century, players such as John Mhosaí McGinley, Anthony Hilferty, the McConnells and the Dohertys are best known. As skill levels increased through apprenticeships, several fiddle masters appeared, such as the Cassidys, Connie Haughey, Jimmy Lyons and Miock McShane of Teelin, and Francie Dearg and Mickey Bán Byrne of Kilcar. These virtuosos played unaccompanied listening pieces in addition to the more common dance music. The influences between Scotland and Donegal went both ways and were furthered by a wave of emigration from Donegal to Scotland in the 19th century (the regions share common names of dances), as can be heard in the volume of strathspeys, schottisches and marches in the Donegal repertoire. Donegal's own strong piping tradition has influenced, and been influenced by, the fiddle music, as have the sounds, ornaments, and repertoire of the Píob Mhór, the traditional bagpipes of Ireland and Scotland. There are other differences between the Donegal style and the rest of Ireland. Instruments such as the tin whistle, flute, concertina and accordion were very rare in Donegal until modern times. Traditionally, the píob mór and the fiddle were the only instruments used, and pipe or fiddle music was common in old wedding customs. Migrant workers carried their music to Scotland and also brought back a number of tunes of Scottish origin. The Donegal fiddlers may well have been the route by which Scottish tunes such as Lucy Campbell, Tarbolton Lodge (Tarbolton) and The Flagon (The Flogging Reel) entered the Irish repertoire. These players prided themselves on their technical abilities, which included playing in higher positions (fairly uncommon among traditional Irish fiddlers), and sought out material which would demonstrate their skills. The consolidation and organisation of Irish music under the Comhaltas Ceoltóirí Éireann movement in the 1960s both strengthened interest in traditional music and sometimes conflicted with the Donegal tradition and its social conventions. The rigidly organised sessions of the Comhaltas reflected the traditions of southern Ireland, and Donegal fiddlers like John Doherty considered the national repertoire, with its strong focus on reels, to be less diverse than that of Donegal with its varied rhythms. Other old fiddlers disliked the way Comhaltas sessions were organised, with a committee member, often not himself a musician, in charge. Sometimes Comhaltas representatives would even disparage the Donegal tradition, with its Scottish flavour, as being un-Irish, and prohibit Donegal players from playing local tunes with Scottish genealogies, such as the Highlands, at Comhaltas sessions. This sometimes caused antagonism between Donegal players and the main organisation of traditional music in Ireland. Outside of the Comhaltas movement, however, Donegal fiddling stood strong, with Paddy Glackin of Ceoltóirí Laighean and the Bothy Band, later Tommy Peoples, also of the Bothy Band, and Mairéad Ní Mhaonaigh of Altan all drawing attention and prestige to the Donegal tradition within folk music circles throughout Ireland. The Donegal style of fiddling is a label often applied to music from this area, though one also might plausibly identify several different, but related, styles within the county. 
To the extent that there is one common style in the county, it is characterised by a rapid pace; a tendency to play the fast dance tune types (reels and jigs) with less swing; short (non-slurred), aggressive bowing; sparse ornamentation, with bowed triplets used more often than trills; the use of double stops and droning; and the occurrence of "playing the octave", with one player playing the melody and the other playing the melody an octave lower. None of these characteristics is universal, and there is some disagreement as to the extent to which there is a common style at all. In general, however, the style is rather aggressive. Another feature of Donegal fiddling that makes it distinctive among Irish musical traditions is the variety of rare tune types that are played. Highlands, a type of tune in 4/4 time with some similarities to Scottish strathspeys (which are also played in Donegal), are one of the most commonly played types of tune in the county. Other tune types common solely in the county include barndances, also called "Germans", and mazurkas. There are a number of different strands to the history of fiddle playing in County Donegal. Perhaps the best-known and, in the last half of the twentieth century, the most influential has been that of the Doherty family. Hugh Doherty is the first known musician of this family. Born in 1790, he headed an unbroken tradition of fiddlers and pipers in the Doherty family until the death, in 1980, of perhaps the best-known Donegal fiddler, John Doherty. John, a travelling tinsmith, was known for his extremely precise and fast finger- and bow-work and vast repertoire, and is considered to be one of the greatest Irish fiddlers ever recorded. John's older brother, Mickey, was also recorded and, though Mickey was another of the great Irish fiddlers, his reputation has been overshadowed by John's. There is no single Donegal style but several distinctive styles. These styles traditionally come from the geographically isolated regions of Donegal, including Inishowen, eastern Donegal, the Rosses and Gweedore, the Croaghs, Teelin, Kilcar, Glencolmcille, Ballyshannon and Bundoran. Even with improved communications and transport, these regions still have recognisably different ways of fiddle playing. Notable deceased players of the older Donegal styles include Neillidh ("Neilly") Boyle, Francie Byrne, Con Cassidy, Frank Cassidy, James Byrne (1946–2008), P.V. O'Donnell (died 2011), and Tommy Peoples (1948–2018). Currently living Donegal fiddlers include Vincent Campbell, John Gallagher, Paddy Glackin, and Danny O'Donnell. Fiddle playing continues to be popular in Donegal. The three fiddlers of the Donegal "supergroup" Altan, Mairéad Ní Mhaonaigh, Paul O'Shaughnessy, and Ciarán Tourish, are generally admired within Donegal. Another fiddle player from Donegal is Liz Doherty. Also well regarded is Aidan O'Donnell, TG4 Young Musician of the Year 2010, who has been described as one of the finest young Irish musicians at present. He began his music making at the age of 12, and has since performed with some of traditional music's finest artists, including Donal Lunny, Mícheál Ó Súilleabháin and the Chieftains. In 2007, he won the prestigious Oireachtas na Gaeilge fiddle title, and he has been a regular tutor at the Irish World Academy of Music and Dance at the University of Limerick for a number of years. 
The fiddle, and traditional music in general, has remained popular in Donegal not only because of the international coverage of certain artists but because of local pride in the music. Traditional music sessions ("seisiúns") are still commonplace, both in pubs and in houses. Donegal fiddle music has been influenced by recorded music, but this is claimed to have had a positive impact on the tradition. Modern Donegal fiddle music is often played in concerts and recorded on albums.
https://en.wikipedia.org/wiki?curid=7973
Double-barreled shotgun A double-barreled shotgun is a shotgun with two parallel barrels, allowing two shots to be fired in quick succession. Modern double-barreled shotguns, often known as "doubles", are almost universally break-open actions, with the barrels tilting up at the rear to expose the breech ends of the barrels for unloading and reloading. Since there is no reciprocating action needed to eject and reload the shells, doubles are more compact than repeating designs such as pump-action or lever-action shotguns. Double-barreled shotguns come in two basic configurations: the side-by-side shotgun (SxS) and the over/under shotgun ("over and under", O/U, etc.), indicating the arrangement of barrels. The original double-barreled guns were nearly all SxS designs, which was the more practical design for muzzle-loading firearms. Early cartridge shotguns also used the SxS action, because they kept the exposed hammers of the earlier muzzle-loading shotguns from which they evolved. When hammerless designs started to become common, the O/U design was introduced, and most modern sporting doubles are O/U designs. One significant advantage that doubles have over single-barrel repeating shotguns is the ability to provide access to more than one choke at a time. Some shotgun sports, such as skeet, use crossing targets presented in a narrow range of distance, and only require one level of choke. Others, like sporting clays, give the shooter targets at differing ranges, and targets that might approach or recede from the shooter, and so must be engaged at differing ranges. Having two barrels lets the shooter use a more open choke for near targets and a tighter choke for distant targets, providing the optimal shot pattern for each distance. Their disadvantage lies in the fact that the barrels of a double-barreled shotgun, whether O/U or SxS, are not parallel but slightly angled, so that shots from the barrels converge, usually at about 40 yards out. For the SxS configuration, the shot string continues on its path to the opposite side of the rib after the convergence point; for example, the left barrel's discharge travels on the left of the rib until it hits dead center at 40 yards out, after which it continues on to the right. In the O/U configuration with a parallel rib, both barrels' discharges will keep to the dead center, but the discharge from the "under" barrel will shoot higher than the discharge from the "over" barrel beyond 40 yards. Thus, double-barreled shotguns are accurate only at practical shotgun ranges, though the range of their ammunition easily exceeds four to six times that distance. SxS shotguns are often more expensive and may take more practice to aim effectively than an O/U. The off-center nature of the recoil in a SxS gun may make shooting the body-side barrel slightly more painful by comparison to an O/U, single-shot, or pump/lever-action shotgun. Gas-operated, and to a lesser extent recoil-operated, designs will recoil less than either. More SxS than O/U guns have traditional "cast-off" stocks, where the end of the buttstock veers slightly to the right, allowing a right-handed user to point the gun more easily. Double-barreled shotguns are also inherently safer, because anyone present can ascertain whether the gun is loaded or able to fire when the action is broken open, for instance on a skeet, trap or sporting clays course when another shooter is firing; if the action is open, the gun cannot fire. 
Similarly, doubles are more easily examined to see if they are loaded than pump or semi-automatic shotguns, whose bolt must be opened and chamber closely examined or felt to make sure the gun is unloaded; with a double gun (or a break-action single gun), whether the gun is loaded, i.e., has cartridges in any chamber, is easily and immediately seen with a glance (and the gun is just as easily unloaded). The early doubles used two triggers, one for each barrel. These were located front to back inside the trigger guard, the index finger being used to pull either trigger, as having two fingers inside the trigger guard can cause a very undesirable recoil-induced double discharge. Double-trigger designs are typically set up for right-handed users. In double-trigger designs, it is often possible to pull both triggers at once, firing both barrels simultaneously, though this is generally not recommended as it doubles the recoil, battering both shotgun and shooter, particularly if it was unanticipated or unintended. Discharging both barrels at the same time has long been a hunting trick employed by hunters using 8-gauge "elephant" shotguns, firing the two two-ounce slugs for sheer stopping power at close range. Later models use a single trigger that alternately fires both barrels, called a "single selective trigger" or SST. The SST does not allow firing both barrels at once, since the single trigger must be pulled twice in order to fire both barrels. The change from one barrel to the other may be done by a clockwork-type system, where a cam alternates between barrels, or by an inertial system where the recoil of firing the first barrel toggles the trigger to the next barrel. A double-barreled shotgun with an inertial trigger works best with full-power shotshells; shooting low-recoil shotshells often will not reliably toggle the inertial trigger, occasionally causing an apparent failure to fire when attempting to depress the trigger a second time to fire the second barrel (this can also happen if the first shell fails to fire). Generally there is a method of selecting the order in which the barrels of an SST shotgun fire; commonly this is done through manipulation of the safety, pushing it to one side to select the top barrel first and to the other side to select the bottom barrel first. In the event that an inertial trigger does not toggle to the second barrel when firing low-recoil shotshells, manually selecting the second barrel will enable it to fire when the trigger is depressed again. One of the advantages of the double, with double triggers or an SST, is that a second shot can be taken almost immediately after the first, without removing the gun from the firing position on the shoulder and without any other action than a second trigger pull, utilizing a different choke for the second shot (assuming, of course, that full-power shotshells are fired, at least for a double-barreled shotgun with an inertial-type SST, as needed to toggle the inertial trigger). This can be noticeably faster than a pump shotgun, which requires pumping to eject and reload for the second shot, and may be as fast as or faster than a semi-automatic action. Note, however, that with neither the pump nor the semi-automatic will the second shot have a different choke pattern from the first, whereas for a double, the two shots are usually fired through different chokes. Thus, depending on the nature of the hunt, the appropriate choke for the shot is always at hand. 
For example, while field hunting flushing birds, the first shot is usually closer than the second because the bird flies away from the shooter; so the more open choke (and barrel) is better for the first shot, and if a second shot is needed as the bird flies away, the tighter choke (and barrel), with its longer effective pattern distance, is then appropriate. Conversely, on a driven hunt, where the birds are driven towards the shooter, the tighter (longer effective distance) choke should be fired first, saving the open (shorter effective distance) choke for the now-closer incoming bird. None of this is possible with single-barrel shotguns, only with a double, whether SxS or O/U. "Regulation" is a term used for multi-barreled firearms that indicates how close to the same point of aim the barrels will shoot. Regulation is very important, because a poorly regulated gun may hit consistently with one barrel but miss consistently with the other, making the gun nearly useless for anything requiring two shots. However, the short ranges and spread of shot provide a significant overlap, so a small error in regulation in a double is often too small to be noticed. Generally the shotguns are regulated to hit the point of aim at a given distance, usually the maximum expected range, since that is the range at which a full choke is used and where precise regulation matters most. The double-barreled shotgun is seen as a weapon of prestige and authority in rural parts of India and Pakistan, where it is known as "Dunali" (literally "two pipes"). It is especially common in Bihar, Purvanchal, Uttar Pradesh, Haryana and Punjab.
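As a rough illustration of the convergence and regulation geometry described above, the following minimal Python sketch models the left barrel of a side-by-side gun under assumed dimensions (a 40 mm centre-to-centre muzzle spacing and the commonly cited 40-yard convergence distance); both numbers and the simple linear model are illustrative, not taken from any particular gun.

# Illustrative geometry of barrel convergence for a side-by-side shotgun.
# Assumed numbers: barrels 40 mm apart at the muzzle, regulated to
# converge ("cross") at 40 yards. Simple linear model of the shot centre.

YARD_M = 0.9144
SEPARATION_M = 0.040        # assumed centre-to-centre muzzle spacing
ZERO_M = 40 * YARD_M        # assumed convergence (regulation) distance

def left_barrel_offset_m(distance_m):
    """Lateral offset of the left barrel's shot centre from the rib line.

    Positive values are left of the rib, negative values are right of it.
    The charge starts half the muzzle spacing to the left, crosses the
    centreline at the regulation distance, and drifts right beyond it.
    """
    return (SEPARATION_M / 2) * (1 - distance_m / ZERO_M)

for yards in (10, 20, 40, 60, 80):
    offset_mm = left_barrel_offset_m(yards * YARD_M) * 1000
    print(f"{yards:>2} yd: {offset_mm:+6.1f} mm")

Under these assumptions the drift is at most about 2 cm even at twice the regulation distance, far smaller than the spread of the pattern itself, which is consistent with the observation that a small regulation error in a double is often too small to be noticed.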
https://en.wikipedia.org/wiki?curid=7975
Dessert Dessert is a course that concludes a meal. The course usually consists of sweet foods, such as confections, and possibly a beverage such as dessert wine or liqueur; however, in the United States it may include coffee, cheeses, nuts, or other savory items regarded as a separate course elsewhere. In some parts of the world, such as much of central and western Africa, and most parts of China, there is no tradition of a dessert course to conclude a meal. The term "dessert" can apply to many confections, such as biscuits, cakes, cookies, custards, gelatins, ice creams, pastries, pies, puddings, sweet soups, and tarts. Fruit is also commonly found in dessert courses because of its naturally occurring sweetness. Some cultures sweeten foods that are more commonly savory to create desserts. The word "dessert" originated from the French word "desservir", meaning "to clear the table". Its first known use was in 1600, in a health education manual entitled "Naturall and artificial Directions for Health", written by William Vaughan. In his "A History of Dessert" (2013), Michael Krondl explains that it refers to the fact that dessert was served after the table had been cleared of other dishes. The term dates from the 14th century but attained its current meaning around the beginning of the 20th century, when "service à la française" (setting a variety of dishes on the table at the same time) was replaced with "service à la russe" (presenting a meal in courses). The word "dessert" is most commonly used for this course in Australia, Canada, Ireland, New Zealand, and the United States, while "pudding", "sweet", or, more colloquially, "afters" are also used in the United Kingdom and some other Commonwealth countries, including Hong Kong and India. Sweets were fed to the gods in ancient Mesopotamia, ancient India and other ancient civilizations. Dried fruit and honey were probably the first sweeteners used in most of the world, but the spread of sugarcane around the world was essential to the development of dessert. Sugarcane was grown and refined in India before 500 BC and was crystallized, making it easy to transport, by AD 500. Sugar and sugarcane were traded, making sugar available to Macedonia by 300 BC and China by AD 600. In the Indian subcontinent, the Middle East, and China, sugar has been a staple of cooking and desserts for over a thousand years. Sugarcane and sugar were little known and rare in Europe until the twelfth century or later, when the Crusades and then colonization spread their use. Herodotus mentions that, unlike the Greeks, the Persians kept their main meal simple but would eat many desserts afterwards. Europeans began to manufacture sugar in the Middle Ages, and more sweet desserts became available. Even then, sugar was so expensive that usually only the wealthy could indulge on special occasions. The first apple pie recipe was published in 1381. The earliest documented use of the term "cupcake" was in Eliza Leslie's 1828 cookbook "Seventy-five Receipts for Pastry, Cakes, and Sweetmeats". The Industrial Revolution in Europe, and later America, caused desserts (and food in general) to be mass-produced, processed, preserved, canned, and packaged. Frozen foods, including desserts, became very popular starting in the 1920s, when freezing technology emerged. These processed foods became a large part of diets in many industrialized nations. Many countries have desserts and foods distinctive to their nations or regions. 
Sweet desserts usually contain cane sugar, palm sugar, honey, or some type of syrup such as molasses, maple syrup, treacle, or corn syrup. Other common ingredients in Western-style desserts are flour or other starches, cooking fats such as butter or lard, dairy, eggs, salt, acidic ingredients such as lemon juice, and spices and other flavoring agents such as chocolate, peanut butter, fruits, and nuts. The proportions of these ingredients, along with the preparation methods, play a major part in the consistency, texture, and flavor of the end product. Sugars contribute moisture and tenderness to baked goods. Flour or starch components provide protein and give the dessert structure. Fats contribute moisture and can enable the development of flaky layers in pastries and pie crusts. The dairy products in baked goods keep the desserts moist. Many desserts also contain eggs, in order to form custard or to aid in the rising and thickening of a cake-like substance. Egg yolks specifically contribute to the richness of desserts, while egg whites can act as a leavening agent or provide structure. Further innovation in the healthy eating movement has led to more information being available about vegan and gluten-free substitutes for the standard ingredients, as well as replacements for refined sugar. Desserts can contain many spices and extracts to add a variety of flavors, and salt and acids are added to balance sweet flavors and create a contrast in flavors. Some desserts are coffee-flavored, for example an iced coffee soufflé or coffee biscuits. Alcohol can also be used as an ingredient, to make alcoholic desserts. Desserts vary in flavor, texture, and appearance. A dessert can be defined as a usually sweeter course that concludes a meal, a definition that includes courses ranging from fruits or dried nuts to multi-ingredient cakes and pies. Many cultures have their own variations of dessert; in modern times, these variations have usually been passed down through generations or are associated with particular geographical regions, which is one cause of the wide variety of desserts. These are some major categories in which desserts can be placed. Biscuits, from the Old French word "bescuit" (ultimately from a Latin word meaning "twice-baked"), also known as "cookies" in North America, are flattish bite-sized or larger short pastries generally intended to be eaten out of the hand. Biscuits can have a texture that is crispy, chewy, or soft. Examples include layered bars, crispy meringues, and soft chocolate chip cookies. Cakes are sweet, tender breads made with sugar and delicate flour. Cakes can vary from light, airy sponge cakes to dense cakes with less flour. Common flavorings include dried, candied or fresh fruit, nuts, cocoa, or extracts. They may be filled with fruit preserves or dessert sauces (like pastry cream), iced with buttercream or other icings, and decorated with marzipan, piped borders, or candied fruit. Cake is often served as a celebratory dish on ceremonial occasions such as weddings, anniversaries, and birthdays. Small-sized cakes have become popular in the form of cupcakes and petits fours. Chocolate is a typically sweet, usually brown food preparation made from "Theobroma cacao" seeds that have been roasted, ground, and often flavored. Pure, unsweetened chocolate contains primarily cocoa solids and cocoa butter in varying proportions. Much of the chocolate currently consumed is in the form of sweet chocolate, which combines chocolate with sugar. 
Milk chocolate is sweet chocolate that additionally contains milk powder or condensed milk. White chocolate contains cocoa butter, sugar, and milk, but no cocoa solids. Dark chocolate is produced by adding fat and sugar to the cacao mixture, with no milk, or much less than in milk chocolate. Candy, also called sweets or lollies, is a confection that features sugar as a principal ingredient. Many candies involve the crystallization of sugar, and the size of the sugar crystals varies the texture. Candy comes in many forms, including caramel, marshmallows, and taffy. Custards and puddings usually have a thickened dairy base. Custards are cooked and thickened with eggs; baked custards include crème brûlée and flan. Puddings are thickened with starches such as corn starch or tapioca. Custards and puddings are often used as ingredients in other desserts, for instance as a filling for pastries or pies. Many cuisines include a dessert made of deep-fried starch-based batter or dough. In many countries, a doughnut is a flour-based batter that has been deep-fried; it is sometimes filled with custard or jelly. Fritters are fruit pieces in a thick batter that have been deep-fried. Gulab jamun is an Indian dessert made of milk solids kneaded into a dough, deep-fried, and soaked in honey. Churros are a deep-fried and sugared dough eaten as a dessert or snack in many countries. Doughnuts are most famous for being a trademark favorite of the fictional character Homer Simpson from the animated television series "The Simpsons". Ice cream, gelato, sorbet and shaved-ice desserts fit into the frozen category. Ice cream is a cream base that is churned as it is frozen to create a creamy consistency. Gelato uses a milk base and has less air whipped in than ice cream, making it denser. Sorbet is made from churned fruit and is not dairy-based. Shaved-ice desserts are made by shaving a block of ice and adding flavored syrup or juice to the ice shavings. Jellied desserts are made with a sweetened liquid thickened with gelatin or another thickening agent. They are traditional in many cultures: grass jelly and annin tofu are Chinese jellied desserts, and yōkan is a Japanese one. In English-speaking countries, many dessert recipes are based on gelatin with fruit or whipped cream added. Pastries are sweet baked products. They can take the form of light and flaky bread with an airy texture, such as a croissant, or of unleavened dough with a high fat content and a crispy texture, such as shortbread. Pastries are often flavored or filled with fruits, chocolate, nuts, and spices, and are sometimes eaten with tea or coffee as a breakfast food. Pies and cobblers consist of a crust with a filling. The crust can be made from either pastry or crumbs. Pie fillings range from fruits to puddings; cobbler fillings are generally fruit-based. A clafoutis is made by pouring batter over a fruit filling before baking. Tong sui, literally translated as "sugar water" and also known as tim tong, is a collective term for any sweet, warm soup or custard served as a dessert at the end of a meal in Cantonese cuisine. "Tong sui" are a Cantonese specialty and are rarely found in other regional cuisines of China. Outside of Cantonese-speaking communities, soupy desserts generally are not recognized as a distinct category, and the term "tong sui" is not used. Dessert wines are sweet wines typically served with dessert. There is no simple definition of a dessert wine. 
In the UK, a dessert wine is considered to be any sweet wine drunk with a meal, as opposed to the white fortified wines (fino and amontillado sherry) drunk before the meal and the red fortified wines (port and madeira) drunk after it. Thus, most fortified wines are regarded as distinct from dessert wines, but some of the less strong fortified white wines, such as Pedro Ximénez sherry and Muscat de Beaumes-de-Venise, are regarded as honorary dessert wines. In the United States, by contrast, a dessert wine is legally defined as any wine over 14% alcohol by volume, which includes all fortified wines, and is taxed at higher rates as a result. Examples include Sauternes and Tokaji Aszú. Throughout much of central and western Africa, there is no tradition of a dessert course following a meal. Fruit or fruit salad is eaten instead, which may be spiced or sweetened with a sauce. In some former colonies in the region, the colonial power has influenced desserts; for example, the Angolan "cocada amarela" (yellow coconut) resembles baked desserts in Portugal. In Asia, desserts are often eaten between meals as snacks rather than as a concluding course. There is widespread use of rice flour in East Asian desserts, which often include local ingredients such as coconut milk, palm sugar, and tropical fruit. In India, where sugarcane has been grown and refined since before 500 BC, desserts have been an important part of the diet for thousands of years; types of desserts include burfis, halvahs, jalebis, and laddus. Desserts are nowadays also made into drinks, such as bubble tea, which originated in Taiwan, in East Asia; made with flavored tea or milk and tapioca pearls, it is now well known across the world. In Ukraine and Russia, breakfast foods such as nalysnyky (blintzes), oladi (pancakes), and syrniki are served with honey and jam as desserts. European colonization of the Americas yielded the introduction of a number of ingredients and cooking styles, and the various styles continued expanding well into the 19th and 20th centuries, proportional to the influx of immigrants. Dulce de leche is a very common confection in Argentina. In Bolivia, sugarcane, honey and coconut are traditionally used in desserts. "Tawa tawa" is a Bolivian sweet fritter prepared using sugar cane, and "helado de canela" is a dessert similar to sherbet, prepared with cane sugar and cinnamon. Coconut tarts, puddings, cookies and candies are also consumed in Bolivia. Brazil has a variety of candies such as brigadeiros (chocolate fudge balls), cocada (a coconut sweet), beijinhos (coconut truffles topped with a clove) and romeu e julieta (cheese with a guava jam known as goiabada). Peanuts are used to make paçoca, rapadura and pé-de-moleque. Common local fruits are turned into juices and used to make chocolates, ice pops and ice cream. In Chile, "kuchen" has been described as a "trademark dessert." Several desserts in Chile are prepared with "manjar" (caramelized milk), including "alfajor", "flan", "cuchufli" and "arroz con leche". Desserts consumed in Colombia include dulce de leche, waffle cookies, puddings, nougat, coconut with syrup, and thickened milk with sugarcane syrup. Desserts in Ecuador tend to be simple and are a moderate part of the cuisine; they include tres leches cake, flan, candies and various sweets. Desserts are typically eaten in Australia, and most daily meals "end with simple desserts," which can include various fruits. 
More complex desserts include cakes, pies and cookies, which are sometimes served during special occasions. The market for desserts has grown over the last few decades, driven by the commercialization of baked desserts and the rise of industrial food production. Desserts are present in most restaurants, as their popularity has increased, and many businesses have been established solely as dessert stores. Ice cream parlors have existed since before 1800. Many businesses have run advertising campaigns focusing solely on desserts. The tactics used to market desserts vary with the audience; for example, desserts can be advertised with popular movie characters to target children. Television networks such as Food Network carry many shows featuring desserts and their creation; such shows have displayed extreme desserts and created a game-show atmosphere that made dessert-making a more competitive field. Desserts are a standard staple on restaurant menus, with varying degrees of variety. Pie and cheesecake were among the most popular dessert courses ordered in U.S. restaurants in 2012. Dessert foods often contain relatively high amounts of sugar and fats and, as a result, higher calorie counts per gram than other foods. Fresh or cooked fruit with minimal added sugar or fat is an exception.
https://en.wikipedia.org/wiki?curid=7976
Data Encryption Standard The Data Encryption Standard (DES) is a symmetric-key algorithm for the encryption of digital data. Although its short key length of 56 bits makes it too insecure for modern applications, it has been highly influential in the advancement of cryptography. Developed in the early 1970s at IBM and based on an earlier design by Horst Feistel, the algorithm was submitted to the National Bureau of Standards (NBS) following the agency's invitation to propose a candidate for the protection of sensitive, unclassified electronic government data. In 1976, after consultation with the National Security Agency (NSA), the NBS selected a slightly modified version (strengthened against differential cryptanalysis, but weakened against brute-force attacks), which was published as an official Federal Information Processing Standard (FIPS) for the United States in 1977. The publication of an NSA-approved encryption standard led to its quick international adoption and widespread academic scrutiny. Controversies arose from the classified design elements, the relatively short key length of the symmetric-key block cipher design, and the involvement of the NSA, raising suspicions about a backdoor. The S-boxes that had prompted those suspicions were in fact designed by the NSA to strengthen the cipher against an attack technique it secretly knew of (differential cryptanalysis). However, the NSA also ensured that the key size was drastically reduced so that it could break the cipher by brute-force attack. The intense academic scrutiny the algorithm received over time led to the modern understanding of block ciphers and their cryptanalysis. DES is insecure due to the relatively short 56-bit key size. In January 1999, distributed.net and the Electronic Frontier Foundation collaborated to publicly break a DES key in 22 hours and 15 minutes (see the chronology below). There are also some analytical results which demonstrate theoretical weaknesses in the cipher, although they are infeasible to mount in practice. The algorithm is believed to be practically secure in the form of Triple DES, although there are theoretical attacks. DES has been superseded by the Advanced Encryption Standard (AES) and has been withdrawn as a standard by the National Institute of Standards and Technology. Some documents distinguish between the DES standard and its algorithm, referring to the algorithm as the DEA (Data Encryption Algorithm).

The origins of DES date to 1972, when a National Bureau of Standards study of US government computer security identified a need for a government-wide standard for encrypting unclassified, sensitive information. Around the same time, in 1972, engineer Mohamed Atalla founded Atalla Corporation and developed the first hardware security module (HSM), the so-called "Atalla Box", which was commercialized in 1973. It protected offline devices with a secure PIN-generating key and was a commercial success. Banks and credit card companies were fearful that Atalla would dominate the market, which spurred the development of an international encryption standard. Atalla was an early competitor to IBM in the banking market and was cited as an influence by IBM employees who worked on the DES standard. The IBM 3624 later adopted a similar PIN verification system to the earlier Atalla system. On 15 May 1973, after consulting with the NSA, NBS solicited proposals for a cipher that would meet rigorous design criteria. None of the submissions was suitable. A second request was issued on 27 August 1974.
This time, IBM submitted a candidate which was deemed acceptable: a cipher developed during 1973–1974 and based on an earlier algorithm, Horst Feistel's Lucifer cipher. The team at IBM involved in cipher design and analysis included Feistel, Walter Tuchman, Don Coppersmith, Alan Konheim, Carl Meyer, Mike Matyas, Roy Adler, Edna Grossman, Bill Notz, Lynn Smith, and Bryant Tuckerman.

On 17 March 1975, the proposed DES was published in the "Federal Register". Public comments were requested, and in the following year two open workshops were held to discuss the proposed standard. Criticism came from public-key cryptography pioneers Martin Hellman and Whitfield Diffie, who cited the shortened key length and the mysterious "S-boxes" as evidence of improper interference from the NSA. The suspicion was that the algorithm had been covertly weakened by the intelligence agency so that the agency, but no one else, could easily read encrypted messages. Alan Konheim (one of the designers of DES) commented, "We sent the S-boxes off to Washington. They came back and were all different." The United States Senate Select Committee on Intelligence reviewed the NSA's actions to determine whether there had been any improper involvement. In the unclassified summary of its findings, published in 1978, the Committee wrote that the NSA had convinced IBM that a reduced key size was sufficient, had indirectly assisted in the development of the S-box structures, and had certified that the final DES algorithm was, to the best of its knowledge, free from any statistical or mathematical weakness. However, it also found that the NSA did not tamper with the design of the algorithm in any way: IBM invented and designed the algorithm, made all pertinent decisions regarding it, and concurred that the agreed-upon key size was more than adequate for all the commercial applications for which DES was intended. Another member of the DES team, Walter Tuchman, stated "We developed the DES algorithm entirely within IBM using IBMers. The NSA did not dictate a single wire!" In contrast, a declassified NSA book on cryptologic history states that the NSA worked closely with IBM to strengthen the algorithm against all attacks except brute force and to strengthen the substitution tables, called S-boxes, while also trying to convince IBM to reduce the key length from 64 to 48 bits; ultimately, they compromised on a 56-bit key.

Some of the suspicions about hidden weaknesses in the S-boxes were allayed in 1990, with the independent discovery and open publication by Eli Biham and Adi Shamir of differential cryptanalysis, a general method for breaking block ciphers. The S-boxes of DES were much more resistant to the attack than if they had been chosen at random, strongly suggesting that IBM knew about the technique in the 1970s. This was indeed the case; in 1994, Don Coppersmith published some of the original design criteria for the S-boxes. According to Steven Levy, IBM Watson researchers discovered differential cryptanalytic attacks in 1974 and were asked by the NSA to keep the technique secret. Coppersmith explains IBM's secrecy decision by saying, "that was because [differential cryptanalysis] can be a very powerful tool, used against many schemes, and there was concern that such information in the public domain could adversely affect national security." Levy quotes Walter Tuchman: "[t]hey asked us to stamp all our documents confidential... We actually put a number on each one and locked them up in safes, because they were considered U.S. government classified. They said do it. So I did it". Bruce Schneier observed that "It took the academic community two decades to figure out that the NSA 'tweaks' actually improved the security of DES."

Despite the criticisms, DES was approved as a federal standard in November 1976, and published on 15 January 1977 as FIPS PUB 46, authorized for use on all unclassified data. It was subsequently reaffirmed as the standard in 1983, 1988 (revised as FIPS-46-1), 1993 (FIPS-46-2), and again in 1999 (FIPS-46-3), the latter prescribing "Triple DES" (see below). On 26 May 2002, DES was finally superseded by the Advanced Encryption Standard (AES), following a public competition. On 19 May 2005, FIPS 46-3 was officially withdrawn, but NIST has approved Triple DES through the year 2030 for sensitive government information.
The algorithm is also specified in ANSI X3.92 (today X3 is known as INCITS and ANSI X3.92 as ANSI INCITS 92), NIST SP 800-67, and ISO/IEC 18033-3 (as a component of TDEA). Another theoretical attack, linear cryptanalysis, was published in 1994, but it was the Electronic Frontier Foundation's DES cracker in 1998 that demonstrated that DES could be attacked very practically, and highlighted the need for a replacement algorithm. These and other methods of cryptanalysis are discussed in more detail later in this article. The introduction of DES is considered to have been a catalyst for the academic study of cryptography, particularly of methods to crack block ciphers; a NIST retrospective credits DES with jump-starting the nonmilitary study and development of encryption algorithms.

[Figure 1: The overall Feistel structure of DES, showing the initial permutation, sixteen rounds built around the Feistel function and XOR operations, and the final permutation.]

DES is the archetypal block cipher: an algorithm that takes a fixed-length string of plaintext bits and transforms it through a series of complicated operations into another ciphertext bitstring of the same length. In the case of DES, the block size is 64 bits. DES also uses a key to customize the transformation, so that decryption can supposedly only be performed by those who know the particular key used to encrypt. The key ostensibly consists of 64 bits; however, only 56 of these are actually used by the algorithm. Eight bits are used solely for checking parity and are thereafter discarded, so the effective key length is 56 bits. The key is nominally stored or transmitted as 8 bytes, each with odd parity; according to ANSI X3.92-1981 (now known as ANSI INCITS 92-1981), section 3.5, one bit in each 8-bit byte of the key may be used for error detection in key generation, distribution, and storage, with bits 8, 16, ..., 64 ensuring that each byte is of odd parity. Like other block ciphers, DES by itself is not a secure means of encryption, but must instead be used in a mode of operation. FIPS-81 specifies several modes for use with DES, and further comments on the usage of DES are contained in FIPS-74. Decryption uses the same structure as encryption, but with the keys used in reverse order. (This has the advantage that the same hardware or software can be used in both directions.)

The algorithm's overall structure is shown in Figure 1: there are 16 identical stages of processing, termed "rounds". There is also an initial and final permutation, termed IP and FP, which are inverses (IP "undoes" the action of FP, and vice versa). IP and FP have no cryptographic significance, but were included in order to facilitate loading blocks in and out of mid-1970s 8-bit based hardware. Before the main rounds, the block is divided into two 32-bit halves which are processed alternately; this criss-crossing is known as the Feistel scheme. The Feistel structure ensures that decryption and encryption are very similar processes; the only difference is that the subkeys are applied in the reverse order when decrypting. The rest of the algorithm is identical. This greatly simplifies implementation, particularly in hardware, as there is no need for separate encryption and decryption algorithms. The ⊕ symbol denotes the exclusive-OR (XOR) operation. The F-function scrambles half a block together with some of the key. The output from the F-function is then combined with the other half of the block, and the halves are swapped before the next round.
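To make the round structure concrete, here is a minimal Python sketch of the 16-round Feistel skeleton just described, together with the odd-parity check on the 8 key bytes. It is an illustration rather than a real DES implementation: "ip", "fp", and "f" stand in for the standard initial/final permutations and the F-function, and all names are invented for this sketch.

    # Minimal sketch of the DES Feistel skeleton (not a full implementation).
    # ip/fp and f are placeholders for the standard DES permutations and
    # round function; subkeys is the list of 16 48-bit round keys.

    def feistel_encrypt(block64, subkeys, f, ip, fp):
        block = ip(block64)                        # initial permutation
        left, right = block >> 32, block & 0xFFFFFFFF
        for k in subkeys:                          # 16 identical rounds
            left, right = right, left ^ f(right, k)
        # The halves are swapped after the final round, then FP is applied.
        return fp((right << 32) | left)

    def feistel_decrypt(block64, subkeys, f, ip, fp):
        # Same structure; only the subkey order is reversed.
        return feistel_encrypt(block64, list(reversed(subkeys)), f, ip, fp)

    def key_has_odd_parity(key: bytes) -> bool:
        # Each of the 8 key bytes carries 7 key bits plus one odd-parity bit.
        return len(key) == 8 and all(bin(b).count("1") % 2 == 1 for b in key)

Note how decryption reuses the encryption routine unchanged; this is exactly the property that let mid-1970s hardware share a single circuit for both directions.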
After the final round, the halves are swapped; this is a feature of the Feistel structure which makes encryption and decryption similar processes. The F-function, depicted in Figure 2, operates on half a block (32 bits) at a time and consists of four stages: expansion, in which the 32-bit half-block is expanded to 48 bits using the expansion permutation E; key mixing, in which the result is combined with a 48-bit subkey by XOR; substitution, in which the result is divided into eight 6-bit pieces that are processed by the eight S-boxes, each replacing its six input bits with four output bits; and permutation, in which the 32 S-box output bits are rearranged according to the fixed permutation P.

[Figure 2: The Feistel function (F-function) of DES, showing the expansion function, the XOR with the subkey, substitution boxes 1 through 8, and the final permutation.]

The alternation of substitution from the S-boxes, and permutation of bits from the P-box and E-expansion, provides so-called "confusion and diffusion" respectively, a concept identified by Claude Shannon in the 1940s as a necessary condition for a secure yet practical cipher.

[Figure 3: The key schedule of DES, showing Permuted Choice 1, the per-round left shifts of the two halves by 1 or 2 positions, and Permuted Choice 2 producing each subkey.]

Figure 3 illustrates the "key schedule" for encryption: the algorithm which generates the subkeys. Initially, 56 bits of the key are selected from the initial 64 by "Permuted Choice 1" ("PC-1"); the remaining eight bits are either discarded or used as parity check bits. The 56 bits are then divided into two 28-bit halves; each half is thereafter treated separately. In successive rounds, both halves are rotated left by one or two bits (specified for each round), and then 48 subkey bits are selected by "Permuted Choice 2" ("PC-2"): 24 bits from the left half, and 24 from the right. The rotations (denoted by "«" in the diagram) mean that a different set of bits is used in each subkey; each bit is used in approximately 14 of the 16 subkeys (Stallings, W., "Cryptography and Network Security: Principles and Practice", Prentice Hall, 2006, p. 73).

In academia, various proposals for a DES-cracking machine were advanced. In 1977, Diffie and Hellman proposed a machine costing an estimated US$20 million which could find a DES key in a single day. By 1993, Wiener had proposed a key-search machine costing US$1 million which would find a key within 7 hours. However, none of these early proposals were ever implemented, or at least no implementations were publicly acknowledged. The vulnerability of DES was practically demonstrated in the late 1990s. In 1997, RSA Security sponsored a series of contests, offering a $10,000 prize to the first team that broke a message encrypted with DES for the contest. That contest was won by the DESCHALL Project, led by Rocke Verser, Matt Curtin, and Justin Dolske, using idle cycles of thousands of computers across the Internet. The feasibility of cracking DES quickly was demonstrated in 1998 when a custom DES-cracker was built by the Electronic Frontier Foundation (EFF), a cyberspace civil rights group, at the cost of approximately US$250,000 (see EFF DES cracker). Their motivation was to show that DES was breakable in practice as well as in theory: "There are many people who will not believe a truth until they can see it with their own eyes.
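The key-schedule prose above maps to only a few lines of code. Below is a hedged Python sketch: "pc1" and "pc2" are placeholders for the standard Permuted Choice tables, while the rotation amounts listed are the standard per-round DES values (1 or 2 positions per round, totalling 28, so each half returns to its starting alignment after the sixteenth round).

    # Sketch of the DES key schedule. pc1/pc2 are placeholder callables
    # for the Permuted Choice tables; ROTATIONS holds the standard
    # per-round left-rotation amounts.

    ROTATIONS = [1, 1, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 1]

    def rotate_left_28(half, n):
        # Rotate a 28-bit integer left by n positions.
        return ((half << n) | (half >> (28 - n))) & 0x0FFFFFFF

    def key_schedule(key64, pc1, pc2):
        c, d = pc1(key64)        # drop parity bits, split into 28-bit halves
        subkeys = []
        for n in ROTATIONS:
            c, d = rotate_left_28(c, n), rotate_left_28(d, n)
            subkeys.append(pc2((c << 28) | d))   # select 48 subkey bits
        return subkeys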
Showing them a physical machine that can crack DES in a few days is the only way to convince some people that they really cannot trust their security to DES." The machine brute-forced a key in a little more than 2 days of searching. The next confirmed DES cracker was the COPACOBANA machine, built in 2006 by teams from the Universities of Bochum and Kiel, both in Germany. Unlike the EFF machine, COPACOBANA consists of commercially available, reconfigurable integrated circuits: 120 field-programmable gate arrays (FPGAs) of type XILINX Spartan-3 1000 run in parallel, grouped in 20 DIMM modules, each containing 6 FPGAs. The use of reconfigurable hardware makes the machine applicable to other code-breaking tasks as well. One of the more interesting aspects of COPACOBANA is its cost factor: one machine can be built for approximately $10,000. The cost decrease by roughly a factor of 25 over the EFF machine is an example of the continuous improvement of digital hardware (see Moore's law); adjusting for inflation over the intervening 8 years yields an even higher improvement of about 30x. Since 2007, SciEngines GmbH, a spin-off company of the two project partners of COPACOBANA, has enhanced and developed successors of COPACOBANA. In 2008 their COPACOBANA RIVYERA reduced the time to break DES to less than one day, using 128 Spartan-3 5000s. SciEngines RIVYERA held the record in brute-force breaking of DES, having utilized 128 Spartan-3 5000 FPGAs; their 256 Spartan-6 LX150 model has further lowered this time. In 2012, David Hulton and Moxie Marlinspike announced a system with 48 Xilinx Virtex-6 LX240T FPGAs, each FPGA containing 40 fully pipelined DES cores running at 400 MHz, for a total capacity of 768 gigakeys/sec. The system can exhaustively search the entire 56-bit DES key space in about 26 hours, and this service is offered for a fee online.

There are three known attacks that can break the full 16 rounds of DES with less complexity than a brute-force search: differential cryptanalysis (DC), linear cryptanalysis (LC), and Davies' attack. However, the attacks are theoretical and are generally considered infeasible to mount in practice; these types of attack are sometimes termed certificational weaknesses. There have also been attacks proposed against reduced-round versions of the cipher, that is, versions of DES with fewer than 16 rounds. Such analysis gives an insight into how many rounds are needed for safety, and how much of a "security margin" the full version retains. Differential-linear cryptanalysis was proposed by Langford and Hellman in 1994, and combines differential and linear cryptanalysis into a single attack. An enhanced version of the attack can break 9-round DES with 2^15.8 chosen plaintexts and has a 2^29.2 time complexity (Biham and others, 2002). DES exhibits the complementation property, namely that if y = E_K(x), then ¬y = E_¬K(¬x), where ¬x denotes the bitwise complement of x, E_K denotes encryption with key K, and x and y denote plaintext and ciphertext blocks respectively. The complementation property means that the work for a brute-force attack could be reduced by a factor of 2 (or a single bit) under a chosen-plaintext assumption. By definition, this property also applies to the TDES cipher. DES also has four so-called "weak keys". Encryption (E) and decryption (D) under a weak key have the same effect (see involution), that is, E_K = D_K, so encrypting twice with a weak key returns the original plaintext. There are also six pairs of "semi-weak keys".
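The 26-hour figure quoted above is easy to sanity-check: the 56-bit key space holds 2^56 keys, and at 768 gigakeys per second an exhaustive search finishes in roughly a day. A back-of-envelope calculation in Python (plain arithmetic, no DES code involved):

    # Exhaustive-search time for the full 56-bit key space at 768 Gkeys/s.
    keys = 2 ** 56               # 72,057,594,037,927,936 candidate keys
    hours = keys / 768e9 / 3600
    print(round(hours, 1))       # -> 26.1 hours; on average half that suffices

The complementation property described above would halve the expected chosen-plaintext search again, which is why it is said to save one bit of brute-force work.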
Encryption with one of a pair of semi-weak keys, K₁, operates identically to decryption with the other, K₂; that is, E_K₁ = D_K₂ (and equivalently E_K₂ = D_K₁). It is easy enough to avoid the weak and semi-weak keys in an implementation, either by testing for them explicitly (see the key-check sketch below) or simply by choosing keys randomly; the odds of picking a weak or semi-weak key by chance are negligible. The keys are not really any weaker than any other keys, as they do not give an attack any advantage. DES has also been proved not to be a group; more precisely, the set {E_K} (for all possible keys K) under functional composition is not a group, nor even "close" to being a group. This was an open question for some time, and if it had been the case, it would have been possible to break DES, and multiple encryption modes such as Triple DES would not increase security, because repeated encryption (and decryption) under different keys would be equivalent to encryption under another, single key.

Simplified DES (SDES) was designed for educational purposes only, to help students learn about modern cryptanalytic techniques. SDES has similar properties and structure to DES, but has been simplified to make it much easier to perform encryption and decryption by hand with pencil and paper. Some people feel that learning SDES gives insight into DES and other block ciphers, and into various cryptanalytic attacks against them. Concerns about security and the relatively slow operation of DES in software motivated researchers to propose a variety of alternative block cipher designs, which started to appear in the late 1980s and early 1990s; examples include RC5, Blowfish, IDEA, NewDES, SAFER, CAST5 and FEAL. Most of these designs kept the 64-bit block size of DES and could act as a "drop-in" replacement, although they typically used a 64-bit or 128-bit key. In the Soviet Union, the GOST 28147-89 algorithm was introduced, with a 64-bit block size and a 256-bit key, and it was also used in Russia later. DES itself can be adapted and reused in a more secure scheme. Many former DES users now use Triple DES (TDES), which was described and analysed by one of DES's patentees (see FIPS Pub 46-3); it involves applying DES three times with two (2TDES) or three (3TDES) different keys. TDES is regarded as adequately secure, although it is quite slow. A less computationally expensive alternative is DES-X, which increases the key size by XORing extra key material before and after DES. GDES was a DES variant proposed as a way to speed up encryption, but it was shown to be susceptible to differential cryptanalysis. On January 2, 1997, NIST announced that it wished to choose a successor to DES. In 2001, after an international competition, NIST selected a new cipher, the Advanced Encryption Standard (AES), as a replacement. The algorithm which was selected as the AES was submitted by its designers under the name Rijndael. Other finalists in the NIST AES competition included RC6, Serpent, MARS, and Twofish.
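As an illustration of the explicit weak-key test mentioned earlier in this section, the sketch below rejects the four published DES weak keys (shown with their standard odd-parity byte values) at key-setup time; the six semi-weak pairs can be filtered the same way. The function name is invented for this example.

    # Reject the four DES weak keys, under which encryption is an involution
    # (encrypting twice returns the plaintext).
    DES_WEAK_KEYS = {
        bytes.fromhex("0101010101010101"),
        bytes.fromhex("FEFEFEFEFEFEFEFE"),
        bytes.fromhex("E0E0E0E0F1F1F1F1"),
        bytes.fromhex("1F1F1F1F0E0E0E0E"),
    }

    def check_des_key(key: bytes) -> None:
        if len(key) != 8:
            raise ValueError("DES keys are 8 bytes (56 bits plus parity)")
        if key in DES_WEAK_KEYS:
            raise ValueError("weak DES key rejected")

In practice, as the text notes, random key generation makes such keys vanishingly unlikely, so the check is defence in depth rather than a necessity.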
https://en.wikipedia.org/wiki?curid=7978
Double-hulled tanker A double-hulled tanker is an oil tanker with a double hull. Double hulls reduce the likelihood of leaks compared to single hulls, and their ability to prevent or reduce oil spills led to their being standardized for oil tankers and other types of ships by the International Convention for the Prevention of Pollution from Ships (the MARPOL Convention). After the Exxon Valdez oil spill disaster in Alaska in 1989, the US government required all new oil tankers built for use between US ports to be equipped with a full double hull. A number of manufacturers have embraced double hulls because they strengthen the ship's hull, reducing the likelihood of oil disasters in low-impact collisions and groundings compared with single-hull ships, and in particular the likelihood of leaks from low-speed impacts in port areas while the ship is under pilotage. Research on impact damage to ships has shown that double-hulled tankers are unlikely to have both hulls perforated in a collision, preventing oil from seeping out. However, for smaller tankers, U-shaped tanks may be susceptible to "free flooding" across the double bottom and up to the outside water level on each side of the cargo tank. Salvors prefer to salvage double-hulled tankers because the design permits the use of air pressure to remove the flood water. In the 1960s, collision-proof double hulls for nuclear ships were extensively investigated, owing to escalating concerns over nuclear accidents.

In 1992, MARPOL was amended, making it "mandatory for tankers of 5,000 dwt and more ordered after 6 July 1993 to be fitted with double hulls, or an alternative design approved by IMO". In the aftermath of the Erika incident off the coast of France in December 1999, members of the IMO adopted a revised schedule for the phase-out of single-hull tankers, which came into effect on 1 September 2003, with further amendments validated on 5 April 2005. The US requirement for full double hulls followed the grounding of the Exxon Valdez on Bligh Reef outside the port of Valdez, Alaska in 1989; notably, the damage to the Exxon Valdez penetrated sections of the hull (the slop oil tanks) that were protected by a double bottom, or partial double hull. Although double hulls reduce the likelihood of a ship grazing rocks and holing its hull, they do not protect against the major, high-energy collisions or groundings which cause the majority of oil pollution, despite this being the reason the double hull was mandated by United States legislation. Double-hulled tankers that are poorly designed, constructed, maintained, or operated can be as problematic as their single-hulled counterparts, if not more so: they have a more complex design and structure, and therefore require more maintenance and care in operation, which, if not subject to responsible monitoring and policing, may cause problems.
Double hulls often increase the weight of the hull by at least 20%, and because the steel weight of a double-hulled tanker should not be greater than that of a single-hulled ship, the individual hull walls are typically thinner and theoretically less resistant to wear. Double hulls by no means eliminate the possibility of the hull breaking apart. Because of the air space between the hulls, there is also a potential problem with volatile gases seeping through worn areas of the internal hull, increasing the risk of an explosion. Although several international conventions against pollution are in place, as of 2003 there was still no formal body setting international mandatory standards, although the International Safety Guide for Oil Tankers and Terminals (ISGOTT) provides guidelines on optimum use and safety, such as recommending that ballast tanks not be entered while loaded with cargo and that weekly samples be taken of the atmosphere inside to test for hydrocarbon gas. Because of the difficulties of maintenance, shipbuilders have competed to produce double-hulled ships that are easier to inspect, for example with ballast and cargo tanks that are easily accessible and in which hull corrosion is easier to spot. The Tanker Structure Cooperative Forum (TSCF) published the "Guide to Inspection and Maintenance of Double-Hull Tanker Structures" in 1995, giving advice based on experience of operating double-hulled tankers.
https://en.wikipedia.org/wiki?curid=7983
Drink A drink (or beverage) is a liquid intended for human consumption. In addition to their basic function of satisfying thirst, drinks play important roles in human culture. Common types of drinks include plain drinking water, milk, coffee, tea, hot chocolate, juice and soft drinks. In addition, alcoholic drinks such as wine, beer, and liquor, which contain the drug ethanol, have been part of human culture for more than 8,000 years. Non-alcoholic drinks often signify drinks that would normally contain alcohol, such as beer and wine, but are made with a sufficiently low concentration of alcohol by volume. The category includes drinks that have undergone an alcohol removal process, such as non-alcoholic beers and de-alcoholized wines.

When the human body becomes dehydrated, it experiences thirst. This craving for fluids results in an instinctive need to drink. Thirst is regulated by the hypothalamus in response to subtle changes in the body's electrolyte levels, and also as a result of changes in the volume of blood circulating. Complete deprivation of water will cause death faster than the removal of any other substance from the diet. Water and milk have been basic drinks throughout history. As water is essential for life, it has also been the carrier of many diseases. As society developed, techniques were discovered to create alcoholic drinks from the plants that were available in different areas. The earliest archaeological evidence of wine production yet found has been at sites in Georgia and Iran. Beer may have been known in Neolithic Europe as far back as 3000 BCE, and was mainly brewed on a domestic scale. The invention of beer (and bread) has been argued to be responsible for humanity's ability to develop technology and build civilization. Tea likely originated in Yunnan, China during the Shang Dynasty (1500 BCE–1046 BCE) as a medicinal drink.

Drinking has been a large part of socialising throughout the centuries. In Ancient Greece, a social gathering for the purpose of drinking was known as a symposium, where watered-down wine would be drunk. The purpose of these gatherings could be anything from serious discussion to direct indulgence. In Ancient Rome, a similar institution, the "convivium", took place regularly. Many early societies considered alcohol a gift from the gods, leading to the creation of gods such as Dionysus. Other religions forbid, discourage, or restrict the drinking of alcoholic drinks for various reasons. In some regions with a dominant religion, the production, sale, and consumption of alcoholic drinks is forbidden to everybody, regardless of religion. Toasting is a method of honouring a person or wishing good will by taking a drink. Another tradition is that of the loving cup: at weddings or other celebrations, such as sports victories, a group shares a drink in a large receptacle, passed around until empty. In East Africa and Yemen, coffee was used in native religious ceremonies. As these ceremonies conflicted with the beliefs of the Christian church, the Ethiopian Church banned the secular consumption of coffee until the reign of Emperor Menelik II. The drink was also banned in Ottoman Turkey during the 17th century for political reasons, and was associated with rebellious political activities in Europe.

A drink is a form of liquid which has been prepared for human consumption. The preparation can include a number of different steps, some prior to transport, others immediately prior to consumption.
Water is the chief constituent of all drinks and the primary ingredient in most. Water is purified prior to drinking; methods of purification include filtration and the addition of chemicals, as in chlorination. The importance of purified water is highlighted by the World Health Organization, which points out that 94% of deaths from diarrhea – the third biggest cause of infectious death worldwide at 1.8 million annually – could be prevented by improving the quality of the victim's environment, particularly safe water.

Pasteurisation is the process of heating a liquid for a period of time at a specified temperature, then immediately cooling it. The process reduces the growth of microorganisms within the liquid, thereby increasing the time before spoilage. It is primarily used on milk, which prior to pasteurisation is commonly infected with pathogenic bacteria and is therefore more likely than any other part of the common diet in the developed world to cause illness.

The process of extracting juice from fruits and vegetables can take a number of forms. Simple crushing of most fruits will provide a significant amount of liquid, though more intense pressure can be applied to get the maximum amount of juice from the fruit. Both crushing and pressing are processes used in the production of wine.

Infusion is the process of extracting flavours from plant material by allowing the material to remain suspended within water. This process is used in the production of teas and herbal teas, and can be used to prepare coffee (when using a coffee press). Percolation is another extraction method; the name is derived from the word "percolate", which means "to cause (a solvent) to pass through a permeable substance especially for extracting a soluble constituent". In the case of coffee-brewing, the solvent is water, the permeable substance is the coffee grounds, and the soluble constituents are the chemical compounds that give coffee its color, taste, aroma, and stimulating properties.

Carbonation is the process of dissolving carbon dioxide into a liquid, such as water. Fermentation is a metabolic process that converts sugar to ethanol; it has been used by humans for the production of drinks since the Neolithic age. In winemaking, grape juice is combined with yeast in an anaerobic environment to allow the fermentation. The amount of sugar in the wine and the length of time given for fermentation determine the alcohol level and the sweetness of the wine. When brewing beer, there are four primary ingredients: water, grain, yeast and hops. The grain is encouraged to germinate by soaking and drying in heat, a process known as malting. It is then milled before soaking again to create the sugars needed for fermentation; this process is known as mashing. Hops are added for flavouring, then the yeast is added to the mixture (now called wort) to start the fermentation process.

Distillation is a method of separating mixtures based on differences in the volatility of components in a boiling liquid mixture. It is one of the methods used in the purification of water, and is also the method used to produce spirits from milder alcoholic drinks. An alcoholic mixed drink that contains two or more ingredients is referred to as a cocktail. Cocktails were originally a mixture of spirits, sugar, water, and bitters. The term is now often used for almost any mixed drink that contains alcohol, including mixers, mixed shots, etc. A cocktail today usually contains one or more kinds of spirit and one or more mixers, such as soda or fruit juice.
Additional ingredients may be sugar, honey, milk, cream, and various herbs.

A non-alcoholic drink is one that contains little or no alcohol. This category includes low-alcohol beer, non-alcoholic wine, and apple cider if they contain a sufficiently low concentration of alcohol by volume (ABV). The exact definition of what is "non-alcoholic" and what is not depends on local laws: in the United Kingdom, "alcohol-free beer" is under 0.05% ABV, "de-alcoholised beer" is under 0.5%, while "low-alcohol beer" can contain no more than 1.2% ABV. The term "soft drink" specifies the absence of alcohol, in contrast to "hard drink" and "drink". The term "drink" is theoretically neutral, but is often used in a way that suggests alcoholic content. Drinks such as soda pop, sparkling water, iced tea, lemonade, root beer, fruit punch, milk, hot chocolate, tea, coffee, milkshakes, tap water and energy drinks are all soft drinks.

Water is the world's most consumed drink; however, 97% of the water on Earth is non-drinkable salt water. Fresh water is found in rivers, lakes, wetlands, groundwater, and frozen glaciers. Less than 1% of the Earth's fresh water supplies are accessible through surface water and underground sources which are cost-effective to retrieve. In Western cultures, water is often drunk cold; in Chinese culture, it is typically drunk hot.

Regarded as one of the "original" drinks, milk is the primary source of nutrition for babies. In many cultures of the world, especially the Western world, humans continue to consume dairy milk beyond infancy, using the milk of other animals (especially cattle, goats and sheep) as a drink. Plant milk, a general term for any milk-like product derived from a plant source, also has a long history of consumption in various countries and cultures. The most popular varieties internationally are soy milk, almond milk, rice milk and coconut milk.

Carbonated drinks are drinks that have carbon dioxide dissolved into them. This can happen naturally, through fermentation and in natural water spas, or artificially, by the dissolution of carbon dioxide under pressure. The first commercially available artificially carbonated drink is believed to have been produced by Thomas Henry in the late 1770s. Cola, orange, various roots, ginger, and lemon/lime are commonly used to create non-alcoholic carbonated drinks; sugars and preservatives may be added later. The most consumed carbonated soft drinks are produced by three major global brands: Coca-Cola, PepsiCo and the Dr Pepper Snapple Group.

Fruit juice is a natural product that contains few or no additives. Citrus products such as orange juice and tangerine juice are familiar breakfast drinks, while grapefruit, pineapple, apple, grape, lime, and lemon juice are also common. Coconut water is a highly nutritious and refreshing juice. Many kinds of berries are crushed and their juices mixed with water and sometimes sweetened. Raspberry, blackberry, and currant juices are popular drinks, but the percentage of water also determines their nutritive value. Grape juice allowed to ferment produces wine. Fruits are highly perishable, so the ability to extract juices and store them was of significant value. Some fruits are highly acidic, and mixing them with water and sugars or honey was often necessary to make them palatable. Early storage of fruit juices was labor-intensive, requiring the crushing of the fruits and the mixing of the resulting pure juices with sugars before bottling.
Vegetable juices are usually served warm or cold. Different types of vegetables can be used to make vegetable juice, such as carrots, tomatoes, cucumbers, celery and many more. Some vegetable juices are mixed with fruit juice to improve their taste. Many popular vegetable juices, particularly ones with high tomato content, are high in sodium, so their consumption for health must be carefully considered. Some vegetable juices provide the same health benefits as whole vegetables in terms of reducing the risks of cardiovascular disease and cancer.

A nightcap is a drink taken shortly before bedtime to induce sleep; for example, a small alcoholic drink or a cup of warm milk can supposedly promote a good night's sleep. Today, most nightcaps and relaxation drinks are non-alcoholic beverages containing calming ingredients; they are considered beverages which serve to relax a person. Unlike other calming beverages, such as tea, warm milk or milk with honey, relaxation drinks almost universally contain more than one active ingredient. Relaxation drinks have been known to contain other natural ingredients and are usually free of caffeine and alcohol, although some have been claimed to contain cannabis.

A drink is considered "alcoholic" if it contains ethanol, commonly known as alcohol (although in chemistry the definition of "alcohol" includes many other compounds). Beer has been a part of human culture for 8,000 years, and in many countries, imbibing alcoholic drinks in a local bar or pub is a cultural tradition.

Beer is an alcoholic drink produced by the saccharification of starch and fermentation of the resulting sugar. The starch and saccharification enzymes are often derived from malted cereal grains, most commonly malted barley and malted wheat. Most beer is also flavoured with hops, which add bitterness and act as a natural preservative, though other flavourings such as herbs or fruit may occasionally be included. The preparation of beer is called brewing. Beer is the world's most widely consumed alcoholic drink, and is the third-most popular drink overall, after water and tea. According to Sumerian legend, beer was discovered by the goddess Ninkasi around 5300 BCE, when she accidentally discovered yeast after leaving grain in jars that were later rained upon and left for several days. Women were the chief brewers of beer throughout much of history, owing to brewing's association with domesticity and the fact that beer was long brewed in the home for family consumption; only in recent history have men begun to dabble in the field. Beer is thought by some to be the oldest fermented drink. Some of humanity's earliest known writings refer to the production and distribution of beer: the Code of Hammurabi included laws regulating beer and beer parlours, and "The Hymn to Ninkasi", a prayer to the Mesopotamian goddess of beer, served both as a prayer and as a method of remembering the recipe for beer in a culture with few literate people. Today, the brewing industry is a global business, consisting of several dominant multinational companies and many thousands of smaller producers ranging from brewpubs to regional breweries.

Cider is a fermented alcoholic drink made from fruit juice, most commonly and traditionally apple juice, but also the juice of peaches, pears ("perry") or other fruit. Cider may be made from any variety of apple, but certain cultivars grown solely for use in cider are known as cider apples.
The United Kingdom has the highest per capita consumption of cider, as well as the largest cider-producing companies in the world; the U.K. produces 600 million litres (130 million imperial gallons) of cider each year.

Wine is an alcoholic drink made from fermented grapes or other fruits. The natural chemical balance of grapes lets them ferment without the addition of sugars, acids, enzymes, water, or other nutrients. Yeast consumes the sugars in the grapes and converts them into alcohol and carbon dioxide. Different varieties of grapes and strains of yeasts produce different styles of wine. The well-known variations result from the very complex interactions between the biochemical development of the fruit, reactions involved in fermentation, terroir and subsequent appellation, along with human intervention in the overall process. The final product may contain tens of thousands of chemical compounds in amounts varying from a few percent to a few parts per billion. Wines made from produce other than grapes are usually named after the product from which they are produced (for example, rice wine, pomegranate wine, apple wine and elderberry wine) and are generically called fruit wine. The term "wine" can also refer to starch-fermented or fortified drinks having higher alcohol content, such as barley wine, huangjiu, or sake. Wine has a rich history dating back thousands of years, with the earliest production so far discovered having occurred in Georgia. It had reached the Balkans in antiquity and was consumed and celebrated in ancient Greece and Rome. From its earliest appearance in written records, wine has also played an important role in religion. Red wine was closely associated with blood by the ancient Egyptians, who, according to Plutarch, avoided its free consumption as late as the 7th-century BC Saite dynasty, "thinking it to be the blood of those who had once battled against the gods". The Greek cult and mysteries of Dionysus, carried on by the Romans in their Bacchanalia, were the origins of western theater. Judaism incorporates wine in the Kiddush and Christianity in its Eucharist, while alcohol consumption is forbidden in Islam.

Spirits are distilled beverages that contain no added sugar and have at least 20% alcohol by volume (ABV). Popular spirits include borovička, brandy, gin, rum, slivovitz, tequila, vodka, and whisky. Brandy is a spirit created by distilling wine, whilst vodka may be distilled from any starch- or sugar-rich plant matter; most vodka today is produced from grains such as sorghum, corn, rye or wheat.

Coffee is a brewed drink prepared from the roasted seeds of several species of an evergreen shrub of the genus "Coffea". The two most common sources of coffee beans are the highly regarded "Coffea arabica" and the "robusta" form of the hardier "Coffea canephora". Coffee plants are cultivated in more than 70 countries. Once ripe, coffee "berries" are picked, processed, and dried to yield the seeds inside. The seeds are then roasted to varying degrees, depending on the desired flavor, before being ground and brewed to create coffee. Coffee is slightly acidic (pH 5.0–5.1) and can have a stimulating effect on humans because of its caffeine content. It is one of the most popular drinks in the world and can be prepared and presented in a variety of ways. The effect of coffee on human health has been a subject of many studies; however, results have varied in terms of coffee's relative benefit.
Coffee cultivation first took place in southern Arabia; the earliest credible evidence of coffee-drinking appears in the middle of the 15th century in the Sufi shrines of Yemen.

Hot chocolate, also known as drinking chocolate or cocoa, is a heated drink consisting of shaved chocolate, melted chocolate or cocoa powder, heated milk or water, and usually a sweetener. Hot chocolate may be topped with whipped cream. Hot chocolate made with melted chocolate is sometimes called drinking chocolate, characterized by less sweetness and a thicker consistency. The first chocolate drink is believed to have been created by the Mayans around 2,500–3,000 years ago, and a cocoa drink was an essential part of Aztec culture by 1400 AD; the Aztecs referred to it as xocōlātl. The drink became popular in Europe after being introduced from Mexico in the New World and has undergone multiple changes since then. Until the 19th century, hot chocolate was even used medicinally to treat ailments such as liver and stomach diseases. Hot chocolate is consumed throughout the world and comes in multiple variations, including the spiced "chocolate para mesa" of Latin America, the very thick "cioccolata calda" served in Italy and "chocolate a la taza" served in Spain, and the thinner hot cocoa consumed in the United States. Prepared hot chocolate can be purchased from a range of establishments, including cafeterias, fast food restaurants, coffeehouses and teahouses. Powdered hot chocolate mixes, which can be added to boiling water or hot milk to make the drink at home, are sold at grocery stores and online.

Tea, the second most consumed drink in the world, is produced by infusing the dried leaves of the "Camellia sinensis" shrub in boiling water. There are many ways in which tea is prepared for consumption: lemon or milk and sugar are among the most common additives worldwide. Other additions include butter and salt in Bhutan, Nepal, and Tibet; bubble tea in Taiwan; fresh ginger in Indonesia, Malaysia and Singapore; mint in North Africa and Senegal; cardamom in Central Asia; rum to make Jagertee in Central Europe; and coffee to make yuanyang in Hong Kong. Tea is also served differently from country to country: in China and Japan, tiny cups are used to serve tea; in Thailand and the United States, tea is often served cold (as "iced tea") or with a lot of sweetener; Indians boil tea with milk and a blend of spices as masala chai; tea is brewed with a samovar in Iran, Kashmir, Russia and Turkey; and in the Australian Outback it is traditionally brewed in a billycan. Tea leaves can be processed in different ways, resulting in drinks that appear and taste different: Chinese yellow and green teas are steamed, roasted and dried; oolong tea is semi-oxidised and appears green-black; and black teas are fully oxidised. Around the world, people also refer to other herbal infusions as "teas"; it is argued that these were popular long before the "Camellia sinensis" shrub was used for tea making. Leaves, flowers, roots or bark can be used to make a herbal infusion and can be bought fresh, dried or powdered.

Throughout history, people have come together in establishments to socialise whilst drinking. These include cafés and coffeehouses, which focus on providing hot drinks as well as light snacks. Many coffee houses in the Middle East, and in West Asian immigrant districts in the Western world, offer "shisha" ("nargile" in Turkish and Greek), flavored tobacco smoked through a hookah.
Espresso bars are a type of coffeehouse that specializes in serving espresso and espresso-based drinks. In China and Japan, the equivalent establishment is the tea house, where people socialise whilst drinking tea; Chinese scholars have long used the teahouse as a place for sharing ideas. Alcoholic drinks are served in drinking establishments, which carry different cultural connotations. For example, pubs are fundamental to the culture of Britain, Ireland, Australia, Canada, New England, Metro Detroit, South Africa and New Zealand. In many places, especially in villages, a pub can be the focal point of the community; the writings of Samuel Pepys describe the pub as the heart of England. Many pubs are controlled by breweries, so cask ale or keg beer may be a better value than wines and spirits. In contrast, types of bars range from seedy bars or nightclubs, sometimes termed "dive bars", to elegant places of entertainment for the elite. Bars provide stools or chairs placed at tables or counters for their patrons. The term "bar" is derived from the specialized counter on which drinks are served. Some bars have entertainment on a stage, such as a live band, comedians, go-go dancers, or strippers. Patrons may sit or stand at the bar and be served by the bartender, or they may sit at tables and be served by cocktail servers.

Food and drink are often paired together to enhance the taste experience. This primarily happens with wine, and a culture has grown up around the process. Weight, flavors and textures can be either contrasted or complemented. In recent years, food magazines have begun to suggest particular wines with recipes, and restaurants offer multi-course dinners matched with a specific wine for each course.

Different drinks have unique receptacles for their consumption. This is sometimes purely for presentation purposes, as with cocktails. In other situations, the drinkware has practical application, such as coffee cups, which are designed for insulation, or brandy snifters, which are designed to encourage evaporation while trapping the aroma within the glass. Many glasses include a stem, which allows the drinker to hold the glass without affecting the temperature of the drink. In champagne glasses, the bowl is designed to retain champagne's signature carbonation by reducing the surface area at the opening of the bowl. Historically, champagne has been served in a champagne coupe, the shape of which allowed carbonation to dissipate even more rapidly than from a standard wine glass.

An important export commodity, coffee was the top agricultural export for twelve countries in 2004, and it was the world's seventh-largest legal agricultural export by value in 2005. Green (unroasted) coffee is one of the most traded agricultural commodities in the world. Some drinks, such as wine, can be used as an alternative investment, either by purchasing and reselling individual bottles or cases of particular wines, or by purchasing shares in an investment wine fund that pools investors' capital.
https://en.wikipedia.org/wiki?curid=7984
Dill Dill ("Anethum graveolens") is an annual herb in the celery family Apiaceae. It is the only species in the genus "Anethum". Dill is grown widely in Eurasia where its leaves and seeds are used as a herb or spice for flavouring food. Dill grows up to , with slender hollow stems and alternate, finely divided, softly delicate leaves long. The ultimate leaf divisions are broad, slightly broader than the similar leaves of fennel, which are threadlike, less than broad, but harder in texture. The flowers are white to yellow, in small umbels diameter. The seeds are long and thick, and straight to slightly curved with a longitudinally ridged surface. The word "dill" and its close relatives are found in most of the Germanic languages; its ultimate origin is unknown. The generic name "Anethum" is the Latin form of the Greek ἄνῑσον / ἄνησον / ἄνηθον / ἄνητον, which meant both "dill" and "anise". The form "anīsum" came to be used for anise, "anēthum" for dill. The Latin word is the origin of dill's names in the Western Romance languages ("anet", "aneldo", etc.), and also of the obsolete English "anet". Most Slavic language names come from Proto-Slavic "*koprъ". Fresh and dried dill leaves (sometimes called "dill weed" to distinguish it from dill seed) are widely used as herbs in Europe and central Asia. Like caraway, the fernlike leaves of dill are aromatic and are used to flavor many foods such as gravlax (cured salmon) and other fish dishes, borscht, and other soups, as well as pickles (where the dill flower is sometimes used). Dill is best when used fresh, as it loses its flavor rapidly if dried, however, freeze-dried dill leaves retain their flavor relatively well for a few months. Dill oil is extracted from the leaves, stems, and seeds of the plant. The oil from the seeds is distilled and used in the manufacturing of soaps. Dill is the eponymous ingredient in dill pickles. In central and eastern Europe, Scandinavia, Baltic states, Ukraine,and Russia. dill is a popular culinary herb used in the kitchen along with chives or parsley. Fresh, finely cut dill leaves are used as a topping in soups, especially the hot red borsht and the cold borsht mixed with curds, kefir, yogurt, or sour cream, which is served during hot summer weather and is ` okroshka. It also is popular in summer to drink fermented milk (curds, kefir, yogurt, or buttermilk) mixed with dill (and sometimes other herbs). In the same way, prepared dill is used as a topping for boiled potatoes covered with fresh butter – especially in summer when there are so-called "new", or young, potatoes. The dill leaves may be mixed with butter, making a dill butter, to serve the same purpose. Dill leaves mixed with tvorog, form one of the traditional cheese spreads used for sandwiches. Fresh dill leaves are used throughout the year as an ingredient in salads, "e.g.", one made of lettuce, fresh cucumbers, and tomatoes, as basil leaves are used in Italy and Greece. Russian cuisine is noted for liberal use of dill, where it is known as . Its supposed antiflatulent activity caused some Russian cosmonauts to recommend its use in manned spaceflight due to the confined quarters and closed air supply. In Polish cuisine, fresh dill leaves mixed with sour cream are the basis for dressings. It is especially popular to use this kind of sauce with freshly cut cucumbers, which practically are wholly immersed in the sauce, making a salad called mizeria. 
The dill leaves serve as a basis for cooking dill sauce, used hot for baked freshwater fish and for chicken or turkey breast, or used hot or cold for hard-boiled eggs. A dill-based soup (zupa koperkowa), served with potatoes and hard-boiled eggs, is popular in Poland. Whole stems, including roots and flower buds, are used traditionally to prepare Polish-style pickled cucumbers (ogórki kiszone), especially the so-called low-salt cucumbers ("ogórki małosolne"). Whole stems of dill (often including the roots) are also cooked with potatoes, especially the potatoes of autumn and winter, so that they resemble the flavor of the newer potatoes found in summer. Some kinds of fish, especially trout and salmon, are traditionally baked with the stems and leaves of dill. In the Czech Republic, white dill sauce made of cream (or milk), butter, flour, vinegar, and dill is called "koprová omáčka" (also "koprovka" or "kopračka") and is served either with boiled eggs and potatoes, or with dumplings and boiled beef. Another Czech dish with dill is a soup called "kulajda" that contains mushrooms (traditionally wild ones). In Germany, dill is popular as a seasoning for fish and many other dishes, chopped as a garnish on potatoes, and as a flavoring in pickles. In the UK, dill may be used in fish pie. In Bulgaria, dill is widely used in traditional vegetable salads, and most notably in the yogurt-based cold soup tarator. It is also used in the preparation of sour pickles, cabbage, and other dishes. In Romania, dill ("mărar") is widely used as an ingredient in soups such as "borş" (pronounced "borsh"), pickles, and other dishes, especially those based on peas, beans, and cabbage. It is popular in dishes based on potatoes and mushrooms and may be found in many summer salads (especially cucumber salad, cabbage salad and lettuce salad). During springtime, it is used in omelets with spring onions. It often complements sauces based on sour cream or yogurt, and is mixed with salted cheese and used as a filling. Another popular dish with dill as a main ingredient is dill sauce, which is served with eggs and fried sausages. In Hungary, dill is very widely used. It is popular as a sauce or filling, and mixed with a type of cottage cheese. Dill is also used for pickling and in salads. The Hungarian name for dill is "kapor". In Serbia, dill is known as "mirodjija" and is used as an addition to soups, potato and cucumber salads, and French fries. It features in the Serbian proverb "бити мирођија у свакој чорби" ("to be a dill in every soup"), which corresponds to the English idiom "to have a finger in every pie". In Greece, dill is known as "άνηθος" (anithos). In antiquity it was used as an ingredient in wines that were called "anithites oinos" (wine with anithos-dill). In modern days, dill is used in salads, soups, sauces, and fish and vegetable dishes. In Santa Maria, Azores, dill ("endro") is the most important ingredient of the traditional Holy Ghost soup ("sopa do Espírito Santo"). Dill is found ubiquitously in Santa Maria, yet curiously is rare in the other Azorean islands. In Sweden, dill is a common spice or herb. The top of fully grown dill is called "krondill" (crown dill); this is used when cooking crayfish. The "krondill" is put into the water after the crayfish are boiled, while they are still in the hot, salted water. The entire dish is then refrigerated for at least 24 hours before being served (with toasted bread and butter). "Krondill" is also used for cucumber pickles.
Small cucumbers, sliced or not, are put into a solution of hot water, mild acetic white vinegar (made from vodka, not wine), sugar, and "krondill". After a month or two of fermentation, the cucumber pickles are ready to eat, for instance with pork, brown sauce, and potatoes, as a "sweetener". The thinner parts of dill and young plants may be used with boiled fresh potatoes (especially the first potatoes of the year, "new potatoes", which usually are small and have a very thin skin). It is used together with, or instead of, other green herbs, such as parsley, chives, and basil, in salads, and it often is paired with chives. Dill often is used to flavour fish and seafood in Sweden, for example gravlax and various herring pickles, among them the traditional "sill i dill" (literally "herring in dill"). In contrast to the various fish dishes flavoured with dill, there is also a traditional Swedish dish called "dillkött", a meaty stew flavoured with dill. The dish commonly contains pieces of veal or lamb that are boiled until tender and then served together with a vinegary dill sauce. Dill seeds may be used in breads or akvavit. A newer, non-traditional use of dill is to pair it with chives as a flavouring for potato chips; these "dillchips" are quite popular in Sweden. In Iran, dill is known as "shevid" and sometimes is used with rice in a dish called "shevid-polo". It also is used in Iranian "aash" recipes, and is called "sheved" in Persian. In India, dill is known as "Sholpa" in Bengali, "shepu" (शेपू) in Marathi and Konkani, "savaa" in Hindi, and "soa" in Punjabi. In Telugu, it is called "Soa-kura" (for the herb greens). It also is called "sabbasige soppu" (ಸಬ್ಬಸಿಗೆ ಸೊಪ್ಪು) in Kannada. In Tamil it is known as "sada kuppi" (சதகுப்பி). In Malayalam, it is ചതകുപ്പ ("chathakuppa") or ശതകുപ്പ ("sathakuppa"). In Sanskrit, this herb is called "shatapushpa". In Gujarati, it is known as "suva" (સૂવા). In India, dill is prepared in the manner of yellow "moong dal", as a main-course dish. It is considered to have very good antiflatulent properties, so it is used as "mukhwas", an after-meal digestive. Traditionally, it is given to mothers immediately after childbirth. In the state of Uttar Pradesh in India, a small amount of fresh dill is cooked along with cut potatoes and fresh fenugreek leaves (Hindi आलू-मेथी-सोया). In Manipur, dill, locally known as "pakhon", is an essential ingredient of "chagem pomba" – a traditional Manipuri dish made with fermented soybean and rice. In Laos and parts of northern Thailand, dill is known in English as Lao coriander, and served as a side with salad yum or papaya salad. In the Lao language, it is called "phak see", and in Thai, it is known as "phak chee Lao". In Lao cuisine, Lao coriander is used extensively in traditional Lao dishes such as "mok pa" (steamed fish in banana leaf) and several coconut milk curries that contain fish or prawns. In China, dill is colloquially called "huíxiāng" (perfume of the Hui people), or more properly "shíluó". It is a common filling in baozi and xianbing and may be used in vegetarian fillings with rice vermicelli, or combined with either meat or eggs. Vegetarian dill baozi are a common part of a Beijing breakfast. In baozi and xianbing, dill often is interchangeable with non-bulbing fennel, and the term also may refer to fennel; similarly to caraway and coriander leaf, the two share a name in Chinese.
Dill also may be stir-fried as a potherb, often with egg, in the same manner as Chinese chives, and it commonly is used this way in Taiwan as well. In northern China, Beijing, Inner Mongolia, Ningxia, Gansu, and Xinjiang, dill seeds commonly are called "zīrán", but also "kūmíng", "kūmíngzi", "shíluózi", or "xiǎohuíxiāngzi", and are used with pepper for lamb meat. Throughout China, "yángchuàn" or "yángròu chuàn" (lamb brochette), a speciality of the Uyghurs, uses cumin and pepper. In Vietnam, the use of dill in cooking is regional; it is used mainly in northern Vietnamese cuisine. In Arab countries, dill seed, called "ain jaradeh" (grasshopper's eye), is used as a spice in cold dishes such as "fattoush" and pickles. In Arab countries of the Persian Gulf, dill is called "shibint" and is used mostly in fish dishes. In Egypt, dill weed is commonly used to flavor cabbage dishes, including "mahshi koronb" (stuffed cabbage leaves). In Israel, dill weed is used in salads and also to flavor omelettes, often alongside parsley. It is known in Hebrew as "shammir" (שמיר). Successful cultivation requires warm to hot summers with high sunshine levels; even partial shade will reduce the yield substantially. Dill also prefers rich, well-drained soil. The seeds are viable for three to ten years. The plants are somewhat monocarpic and quickly die after "bolting" (producing seeds); hot weather hastens bolting. The seed is harvested by cutting the flower heads off the stalks when the seed is beginning to ripen. The seed heads are placed upside down in a paper bag and left in a warm, dry place for a week. The seeds then separate from the stems easily for storage in an airtight container. These plants, like their fennel and parsley relatives, often are eaten by black swallowtail caterpillars in areas where that species occurs, and for this reason they may be included in some butterfly gardens. When used as a companion plant, dill attracts many beneficial insects as the umbrella flower heads go to seed. It makes a good companion plant for cucumbers and broccoli, and a poor one for carrots and tomatoes.
https://en.wikipedia.org/wiki?curid=7985
Dual space In mathematics, any vector space "V" has a corresponding dual vector space (or just dual space for short) consisting of all linear functionals on "V", together with the vector space structure of pointwise addition and scalar multiplication by constants. The dual space as defined above is defined for all vector spaces, and to avoid ambiguity may also be called the "algebraic dual space". When defined for a topological vector space, there is a subspace of the dual space, corresponding to continuous linear functionals, called the "continuous dual space". Dual vector spaces find application in many branches of mathematics that use vector spaces, such as in tensor analysis with finite-dimensional vector spaces. When applied to vector spaces of functions (which are typically infinite-dimensional), dual spaces are used to describe measures, distributions, and Hilbert spaces. Consequently, the dual space is an important concept in functional analysis. Early terms for "dual" include "polare Raum" [Hahn 1927], "espace conjugué", "adjoint space" [Alaoglu 1940], and "transponierte Raum" [Schauder 1930] and [Banach 1932]. The term "dual" is due to Bourbaki 1938. Given any vector space "V" over a field "F", the (algebraic) dual space "V"∗ (alternatively denoted by "V"∨ or "V"′) is defined as the set of all linear maps "φ" : "V" → "F" (linear functionals). Since linear maps are vector space homomorphisms, the dual space is also sometimes denoted by Hom("V", "F"). The dual space "V"∗ itself becomes a vector space over "F" when equipped with the addition and scalar multiplication satisfying ("φ" + "ψ")("x") = "φ"("x") + "ψ"("x") and ("aφ")("x") = "a"("φ"("x")), for all "φ", "ψ" ∈ "V"∗, all "x" ∈ "V", and all "a" ∈ "F". Elements of the algebraic dual space "V"∗ are sometimes called covectors or one-forms. The pairing of a functional "φ" in the dual space "V"∗ and an element "x" of "V" is sometimes denoted by a bracket: "φ"("x") = ["x", "φ"] or "φ"("x") = ⟨"x", "φ"⟩. This pairing defines a nondegenerate bilinear mapping ⟨·,·⟩ : "V" × "V"∗ → "F" called the natural pairing. If "V" is finite-dimensional, then "V"∗ has the same dimension as "V". Given a basis {e1, ..., e"n"} in "V", it is possible to construct a specific basis in "V"∗, called the dual basis. This dual basis is a set {e1, ..., e"n"} of linear functionals on "V", defined by the relation e"i"("c"1e1 + ... + "c""n"e"n") = "c""i", "i" = 1, ..., "n", for any choice of coefficients "c""i" ∈ "F". In particular, letting in turn each one of those coefficients be equal to one and the other coefficients zero gives the system of equations e"i"(e"j") = "δ""ij", where "δ""ij" is the Kronecker delta symbol. This property is referred to as the "biorthogonality property". For example, if "V" is R2, let its basis be chosen as e1 = (1/2, 1/2) and e2 = (0, 1). Note that the basis vectors are not orthogonal to each other. Then, e1 and e2 are one-forms (functions that map a vector to a scalar) such that e1(e1) = 1, e1(e2) = 0, e2(e1) = 0, and e2(e2) = 1. (Note: the superscript here is the index, not an exponent.) We can express this system of equations using matrix notation as ÊT·E = "I"2, where E is the matrix whose columns are the basis vectors e1, e2 and Ê is the matrix whose columns are the unknown dual basis vectors. Solving this equation, we find the dual basis to be e1 = (2, 0) and e2 = (−1, 1), regarded as row vectors. Recalling that e1 and e2 are functionals, we can rewrite them as e1("x", "y") = 2"x" and e2("x", "y") = −"x" + "y". In general, when "V" is R"n", if E = (e1, ..., e"n") is the matrix whose columns are the basis vectors and Ê = (e1, ..., e"n") is the matrix whose columns are the dual basis vectors, then ÊT·E = "I""n", where "I""n" is the identity matrix of order "n". The biorthogonality property of these two basis sets allows any point x in "V" to be written as x = Σ"i" ⟨x, e"i"⟩ e"i", where the e"i" inside the pairing are the dual basis functionals and the e"i" outside are the basis vectors, even when the basis vectors are not orthogonal to each other. Strictly speaking, this statement only makes sense once the inner product ⟨·,·⟩ and the corresponding duality pairing are introduced, as described below.
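Written out in full, the computation in the R2 example reads as follows (a worked restatement in LaTeX of the example just given; here the dual functionals are collected as the rows of Ê, so that the transpose in the column convention above becomes a plain matrix inverse):

% Basis e_1 = (1/2, 1/2), e_2 = (0, 1); biorthogonality e^i(e_j) = delta_ij.
% With the unknown dual functionals as the rows of \hat{E}, the system
% becomes a matrix inversion:
\[
  \hat{E}\,E = I_2, \qquad
  E = \begin{pmatrix} \tfrac12 & 0 \\[2pt] \tfrac12 & 1 \end{pmatrix},
  \qquad
  \hat{E} = E^{-1} = \begin{pmatrix} 2 & 0 \\[2pt] -1 & 1 \end{pmatrix},
\]
\[
  e^{1}(x, y) = 2x, \qquad e^{2}(x, y) = -x + y .
\]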
In particular, if we interpret R"n" as the space of columns of "n" real numbers, its dual space is typically written as the space of "rows" of "n" real numbers. Such a row acts on R"n" as a linear functional by ordinary matrix multiplication. One way to see this is that a functional maps every "n"-vector "x" into a real number "y". Then, seeing this functional as a matrix "M", and "x" as an "n" × 1 matrix and "y" as a 1 × 1 matrix (trivially, a real number), if "M"x = "y" then, by dimension reasons, "M" must be a 1 × "n" matrix, i.e., "M" must be a row vector. If "V" consists of the space of geometrical vectors in the plane, then the level curves of an element of "V"∗ form a family of parallel lines in "V", because the range is 1-dimensional, so that every point in the range is a multiple of any one nonzero element. So an element of "V"∗ can be intuitively thought of as a particular family of parallel lines covering the plane. To compute the value of a functional on a given vector, one needs only to determine which of the lines the vector lies on. Or, informally, one "counts" how many lines the vector crosses. More generally, if "V" is a vector space of any dimension, then the level sets of a linear functional in "V"∗ are parallel hyperplanes in "V", and the action of a linear functional on a vector can be visualized in terms of these hyperplanes. If "V" is not finite-dimensional but has a basis e"α" indexed by an infinite set "A", then the same construction as in the finite-dimensional case yields linearly independent elements e"α" ("α" ∈ "A") of the dual space, but they will not form a basis. Consider, for instance, the space R∞, whose elements are those sequences of real numbers that contain only finitely many non-zero entries, which has a basis indexed by the natural numbers N: for "i" ∈ N, e"i" is the sequence consisting of all zeroes except in the "i"-th position, which is 1. The dual space of R∞ is (isomorphic to) RN, the space of "all" sequences of real numbers: such a sequence ("a""n") is applied to an element ("x""n") of R∞ to give the number Σ"n" "a""n""x""n", which is a finite sum because there are only finitely many nonzero "x""n". The dimension of R∞ is countably infinite, whereas RN does not have a countable basis. This observation generalizes to any infinite-dimensional vector space "V" over any field "F": a choice of basis {e"α" : "α" ∈ "A"} identifies "V" with the space ("F""A")0 of functions "f" : "A" → "F" such that "f"("α") is nonzero for only finitely many "α" ∈ "A", where such a function "f" is identified with the vector Σ"α" "f"("α")e"α" in "V" (the sum is finite by the assumption on "f", and any "v" ∈ "V" may be written in this way by the definition of the basis). The dual space of "V" may then be identified with the space "F""A" of "all" functions from "A" to "F": a linear functional "T" on "V" is uniquely determined by the values "θ""α" = "T"(e"α") it takes on the basis of "V", and any function "θ" : "A" → "F" (with "θ"("α") = "θ""α") defines a linear functional "T" on "V" by "T"(Σ"α" "f"("α")e"α") = Σ"α" "f"("α")"T"(e"α") = Σ"α" "f"("α")"θ""α". Again, the sum is finite because "f"("α") is nonzero for only finitely many "α". Note that ("F""A")0 may be identified (essentially by definition) with the direct sum of infinitely many copies of "F" (viewed as a 1-dimensional vector space over itself) indexed by "A", i.e., there are linear isomorphisms "V" ≅ ("F""A")0 ≅ ⊕"α"∈"A" "F". On the other hand, "F""A" is (again by definition) the direct product of infinitely many copies of "F" indexed by "A", and so the identification "V"∗ ≅ (⊕"α"∈"A" "F")∗ ≅ Π"α"∈"A" "F" ≅ "F""A" is a special case of a general result relating direct sums (of modules) to direct products. Thus if the basis is infinite, then the algebraic dual space is "always" of larger dimension (as a cardinal number) than the original vector space.
This is in contrast to the case of the continuous dual space, discussed below, which may be isomorphic to the original vector space even if the latter is infinite-dimensional. If "V" is finite-dimensional, then "V" is isomorphic to "V"∗, but there is in general no natural isomorphism between these two spaces. Any bilinear form ⟨·,·⟩ on "V" gives a mapping of "V" into its dual space via "v" ↦ ⟨"v", ·⟩, where the right hand side is defined as the functional on "V" taking each "w" ∈ "V" to ⟨"v", "w"⟩. In other words, the bilinear form determines a linear mapping Φ : "V" → "V"∗ defined by Φ("v")("w") = ⟨"v", "w"⟩. If the bilinear form is nondegenerate, then this is an isomorphism onto a subspace of "V"∗. If "V" is finite-dimensional, then this is an isomorphism onto all of "V"∗. Conversely, any isomorphism Φ from "V" to a subspace of "V"∗ (resp., all of "V"∗ if "V" is finite-dimensional) defines a unique nondegenerate bilinear form on "V" by ⟨"v", "w"⟩Φ = (Φ("v"))("w"). Thus there is a one-to-one correspondence between isomorphisms of "V" to a subspace of (resp., all of) "V"∗ and nondegenerate bilinear forms on "V". If the vector space "V" is over the complex field, then sometimes it is more natural to consider sesquilinear forms instead of bilinear forms. In that case, a given sesquilinear form determines an isomorphism of "V" with the complex conjugate of the dual space. The conjugate space "V"∗ can be identified with the set of all additive complex-valued functionals "f" : "V" → C such that "f"("αv") = ᾱ"f"("v"), where ᾱ denotes the complex conjugate of "α". There is a natural homomorphism Ψ from "V" into the double dual "V"∗∗, defined by (Ψ("v"))("φ") = "φ"("v") for all "v" ∈ "V" and "φ" ∈ "V"∗. In other words, if ev"v" : "V"∗ → "F" is the evaluation map defined by "φ" ↦ "φ"("v"), then we define Ψ : "V" → "V"∗∗ as the map "v" ↦ ev"v". This map Ψ is always injective; it is an isomorphism if and only if "V" is finite-dimensional. Indeed, the isomorphism of a finite-dimensional vector space with its double dual is an archetypal example of a natural isomorphism. Note that infinite-dimensional Hilbert spaces are not a counterexample to this, as they are isomorphic to their continuous duals, not to their algebraic duals. If "f" : "V" → "W" is a linear map, then the "transpose" (or "dual") "f"∗ : "W"∗ → "V"∗ is defined by "f"∗("φ") = "φ" ∘ "f" for every "φ" ∈ "W"∗. The resulting functional "f"∗("φ") in "V"∗ is called the "pullback" of "φ" along "f". The following identity holds for all "φ" ∈ "W"∗ and "v" ∈ "V": ["f"∗("φ"), "v"] = ["φ", "f"("v")], where the bracket [·,·] on the left is the natural pairing of "V" with its dual space, and that on the right is the natural pairing of "W" with its dual. This identity characterizes the transpose, and is formally similar to the definition of the adjoint. The assignment "f" ↦ "f"∗ produces an injective linear map between the space of linear operators from "V" to "W" and the space of linear operators from "W"∗ to "V"∗; this homomorphism is an isomorphism if and only if "W" is finite-dimensional. If "V" = "W", then the space of linear maps is actually an algebra under composition of maps, and the assignment is then an antihomomorphism of algebras, meaning that ("fg")∗ = "g"∗"f"∗. In the language of category theory, taking the dual of vector spaces and the transpose of linear maps is therefore a contravariant functor from the category of vector spaces over "F" to itself. Note that one can identify ("f"∗)∗ with "f" using the natural injection into the double dual. If the linear map "f" is represented by the matrix "A" with respect to two bases of "V" and "W", then "f"∗ is represented by the transpose matrix "A"T with respect to the dual bases of "W"∗ and "V"∗, hence the name.
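The characterizing identity for the transpose is just the definition unwound, and its matrix incarnation makes the "hence the name" remark concrete (a short verification using only the definitions above; the matrix line assumes coordinates taken with respect to the chosen bases and their duals):

% [f*(phi), v] = [phi, f(v)] follows directly from f*(phi) = phi o f;
% in coordinates, the row vector phi^T acting after A equals the row
% vector (A^T phi)^T acting directly.
\[
  [f^{*}(\varphi), v]
  = \bigl(\varphi \circ f\bigr)(v)
  = \varphi\bigl(f(v)\bigr)
  = [\varphi, f(v)],
  \qquad
  (A^{\mathsf T}\varphi)^{\mathsf T} v = \varphi^{\mathsf T}(A v).
\]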
Alternatively, as "f" is represented by "A" acting on the left on column vectors, "f" is represented by the same matrix acting on the right on row vectors. These points of view are related by the canonical inner product on R"n", which identifies the space of column vectors with the dual space of row vectors. Let "S" be a subset of "V". The annihilator of "S" in "V"∗, denoted here "S", is the collection of linear functionals such that for all . That is, "S" consists of all linear functionals such that the restriction to "S" vanishes: . Within finite dimensional vector spaces, the annihilator is dual to (isomorphic to) the orthogonal complement. The annihilator of a subset is itself a vector space. In particular, the annihilator of the zero vector is the whole dual space: formula_34, and the annihilator of the whole space is just the zero covector: formula_35. Furthermore, the assignment of an annihilator to a subset of "V" reverses inclusions, so that if , then Moreover, if "A" and "B" are two subsets of "V", then and equality holds provided "V" is finite-dimensional. If "Ai" is any family of subsets of "V" indexed by "i" belonging to some index set "I", then In particular if "A" and "B" are subspaces of "V", it follows that If "V" is finite-dimensional, and "W" is a vector subspace, then after identifying "W" with its image in the second dual space under the double duality isomorphism . Thus, in particular, forming the annihilator is a Galois connection on the lattice of subsets of a finite-dimensional vector space. If "W" is a subspace of "V" then the quotient space "V"/"W" is a vector space in its own right, and so has a dual. By the first isomorphism theorem, a functional factors through "V"/"W" if and only if "W" is in the kernel of "f". There is thus an isomorphism As a particular consequence, if "V" is a direct sum of two subspaces "A" and "B", then "V"∗ is a direct sum of "A" and "B". When dealing with topological vector spaces, one is typically only interested in the continuous linear functionals from the space into the base field formula_42 (or formula_43). This gives rise to the notion of the "continuous dual space" or "topological dual" which is a linear subspace of the algebraic dual space formula_44, denoted by formula_45. For any "finite-dimensional" normed vector space or topological vector space, such as Euclidean "n-"space, the continuous dual and the algebraic dual coincide. This is however false for any infinite-dimensional normed space, as shown by the example of discontinuous linear maps. Nevertheless, in the theory of topological vector spaces the terms "continuous dual space" and "topological dual space" are often replaced by "dual space", since there is no serious need to consider discontinuous maps in this field. For a topological vector space formula_22 its "continuous dual space", or "topological dual space", or just "dual space" (in the sense of the theory of topological vector spaces) formula_45 is defined as the space of all continuous linear functionals formula_48. There is a standard construction for introducing a topology on the continuous dual formula_45 of a topological vector space formula_22. Fix a collection formula_51 of bounded subsets of formula_22. Then one has the topology on formula_22 of uniform convergence on sets from formula_54 or what is the same thing, the topology generated by seminorms of the form where formula_56 is a continuous linear functional on formula_22, and formula_58 runs over the class formula_51. 
This means that a net of functionals "φ""i" tends to a functional "φ" in "V"′ if and only if ‖"φ""i" − "φ"‖"A" = sup"x"∈"A" |"φ""i"("x") − "φ"("x")| → 0 for each "A" ∈ 𝒜. Usually (but not necessarily) the class 𝒜 is supposed to satisfy the following conditions: each point of "V" belongs to some set "A" ∈ 𝒜; each two sets "A", "B" ∈ 𝒜 are contained in some set "C" ∈ 𝒜; and 𝒜 is closed under the operation of multiplication by scalars. If these requirements are fulfilled, then the corresponding topology on "V"′ is Hausdorff, and the sets "U""A",ε = {"φ" ∈ "V"′ : ‖"φ"‖"A" < ε} form its local base. The three most important special cases arise by taking 𝒜 to be the class of all bounded subsets of "V" (giving the strong topology), the class of all totally bounded subsets of "V", or the class of all finite subsets of "V" (giving the weak-∗ topology). If "V" is a normed vector space (for example, a Banach space or a Hilbert space), then the strong topology on "V"′ is normed (in fact a Banach space if the field of scalars is complete), with the norm ‖"φ"‖ = sup‖"x"‖≤1 |"φ"("x")|. Each of these three choices of topology on "V"′ leads to a variant of the reflexivity property for topological vector spaces. Let 1 < "p" < ∞ be a real number and consider the Banach space ℓ"p" of all sequences a = ("a""n") for which ‖a‖"p" = (Σ"n" |"a""n"|"p")1/"p" is finite. Define the number "q" by 1/"p" + 1/"q" = 1. Then the continuous dual of ℓ"p" is naturally identified with ℓ"q": given an element "φ" ∈ (ℓ"p")′, the corresponding element of ℓ"q" is the sequence ("φ"(e"n")), where e"n" denotes the sequence whose "n"-th term is 1 and all others are zero. Conversely, given an element a = ("a""n") ∈ ℓ"q", the corresponding continuous linear functional "φ" on ℓ"p" is defined by "φ"(b) = Σ"n" "a""n""b""n" for all b = ("b""n") ∈ ℓ"p" (see Hölder's inequality). In a similar manner, the continuous dual of ℓ1 is naturally identified with ℓ∞ (the space of bounded sequences). Furthermore, the continuous duals of the Banach spaces "c" (consisting of all convergent sequences, with the supremum norm) and "c"0 (the sequences converging to zero) are both naturally identified with ℓ1. By the Riesz representation theorem, the continuous dual of a Hilbert space is again a Hilbert space which is anti-isomorphic to the original space. This gives rise to the bra–ket notation used by physicists in the mathematical formulation of quantum mechanics. By the Riesz–Markov–Kakutani representation theorem, the continuous dual of certain spaces of continuous functions can be described using measures. If "T" : "V" → "W" is a continuous linear map between two topological vector spaces, then the (continuous) transpose "T"′ : "W"′ → "V"′ is defined by the same formula as before: "T"′("φ") = "φ" ∘ "T" for every "φ" ∈ "W"′. The resulting functional "T"′("φ") is in "V"′. The assignment "T" ↦ "T"′ produces a linear map between the space of continuous linear maps from "V" to "W" and the space of linear maps from "W"′ to "V"′. When "T" and "U" are composable continuous linear maps, then ("U" ∘ "T")′ = "T"′ ∘ "U"′. When "V" and "W" are normed spaces, the norm of the transpose in "L"("W"′, "V"′) is equal to that of "T" in "L"("V", "W"). Several properties of transposition depend upon the Hahn–Banach theorem. For example, the bounded linear map "T" has dense range if and only if the transpose "T"′ is injective. When "T" is a compact linear map between two Banach spaces "V" and "W", then the transpose "T"′ is compact. This can be proved using the Arzelà–Ascoli theorem. When "V" is a Hilbert space, there is an antilinear isomorphism "i""V" from "V" onto its continuous dual "V"′. For every bounded linear map "T" on "V", the transpose and the adjoint operators are linked by "i""V" ∘ "T"∗ = "T"′ ∘ "i""V". When "T" is a continuous linear map between two topological vector spaces "V" and "W", then the transpose "T"′ is continuous when "W"′ and "V"′ are equipped with "compatible" topologies: for example, when, for "X" = "V" and "X" = "W", both duals "X"′ have the strong topology "β"("X"′, "X") of uniform convergence on bounded sets of "X", or both have the weak-∗ topology "σ"("X"′, "X") of pointwise convergence on "X". The transpose "T"′ is then continuous from "β"("W"′, "W") to "β"("V"′, "V"), or from "σ"("W"′, "W") to "σ"("V"′, "V"). Assume that "W" is a closed linear subspace of a normed space "V", and consider the annihilator of "W" in "V"′, "W"⊥ = {"φ" ∈ "V"′ : "φ"("w") = 0 for all "w" ∈ "W"}. Then, the dual of the quotient "V"/"W" can be identified with "W"⊥, and the dual of "W" can be identified with the quotient "V"′/"W"⊥.
Indeed, let "P" denote the canonical surjection from "V" onto the quotient ; then, the transpose is an isometric isomorphism from into, with range equal to "W"⊥. If "j" denotes the injection map from "W" into "V", then the kernel of the transpose is the annihilator of "W": and it follows from the Hahn–Banach theorem that induces an isometric isomorphism If the dual of a normed space "V" is separable, then so is the space "V" itself. The converse is not true: for example, the space is separable, but its dual is not. In analogy with the case of the algebraic double dual, there is always a naturally defined continuous linear operator from a normed space "V" into its continuous double dual , defined by As a consequence of the Hahn–Banach theorem, this map is in fact an isometry, meaning for all "x" in "V". Normed spaces for which the map Ψ is a bijection are called reflexive. When "V" is a topological vector space, one can still define Ψ("x") by the same formula, for every , however several difficulties arise. First, when "V" is not locally convex, the continuous dual may be equal to {0} and the map Ψ trivial. However, if "V" is Hausdorff and locally convex, the map Ψ is injective from "V" to the algebraic dual of the continuous dual, again as a consequence of the Hahn–Banach theorem. Second, even in the locally convex setting, several natural vector space topologies can be defined on the continuous dual , so that the continuous double dual is not uniquely defined as a set. Saying that Ψ maps from "V" to , or in other words, that Ψ("x") is continuous on for every , is a reasonable minimal requirement on the topology of , namely that the evaluation mappings be continuous for the chosen topology on . Further, there is still a choice of a topology on , and continuity of Ψ depends upon this choice. As a consequence, defining reflexivity in this framework is more involved than in the normed case.
https://en.wikipedia.org/wiki?curid=7988
Dianetics Dianetics (from Greek "dia", meaning "through", and "nous", meaning "mind") is a set of ideas and practices regarding the metaphysical relationship between the mind and body created by science fiction writer L. Ron Hubbard. Dianetics is practiced by followers of Scientology and the Nation of Islam (as of 2010). Dianetics divides the mind into three parts: the conscious "analytical mind", the subconscious "reactive mind", and the somatic mind. The goal of Dianetics is to erase the content of the "reactive mind", which practitioners believe interferes with a person's ethics, awareness, happiness, and sanity. The Dianetics procedure to achieve this erasure is called "auditing". In auditing, the Dianetic auditor asks a series of questions (or commands) which are intended to help a person locate and deal with painful past experiences. Practitioners of Dianetics believe that "the basic principle of existence is to survive" and that the basic personality of humans is sincere, intelligent, and good. The drive for goodness and survival is distorted and inhibited by aberrations. Hubbard proposed this model, and then developed Dianetics with the claim that it could eradicate these aberrations. When Hubbard formulated Dianetics, he described it as "a mix of Western technology and Oriental philosophy". Hubbard claimed that Dianetics could increase intelligence, eliminate unwanted emotions, and alleviate a wide range of illnesses he believed to be psychosomatic. Among the conditions purportedly treated were arthritis, allergies, asthma, some coronary difficulties, eye trouble, ulcers, migraine headaches, "sexual deviation" (which for Hubbard included homosexuality), and even death. Hubbard initially described Dianetics as a branch of psychology. Jon Atack writes that the original Dianetic techniques can be derived almost entirely from Sigmund Freud's lectures. Hubbard created the "Freudian Foundation of America" and offered graduate auditors certificates which included that of "Freudian Psychoanalyst." Hubbard also drew on the work of many psychologists and psychiatrists in creating Dianetics, including William Sargant's work on abreaction therapy, Carl Jung, Roy Grinker and John Spiegel's writings on hypnosis and hypnoanalysis, Nandor Fodor, Otto Rank, and others. Alfred Korzybski's general semantics was also cited by Hubbard as an influence. Hubbard differentiated Dianetics from Scientology, saying that Dianetics was a mental therapy science and Scientology was a religion; Dianetics predates Hubbard's classification of Scientology as an "applied religious philosophy". Early in 1951, he expanded his writings to include teachings related to the soul, or "thetan". According to Hubbard, when he was sedated for a dental operation in 1938, he had a near-death experience which inspired him to write the manuscript "Excalibur", which was never published. This work would eventually become the basis for Dianetics, and later also for Scientology. The first publication on Dianetics was "Dianetics: The Evolution of a Science", an article by Hubbard in "Astounding Science Fiction" (cover date May 1950). This was followed by the book "Dianetics: The Modern Science of Mental Health", published May 9, 1950. In these works Hubbard claimed that the source of all psychological pain, and therefore the cause of mental and physical health problems, was a form of memory known as "engrams". According to Hubbard, individuals could reach a state he named "Clear", in which a person was freed of these engrams, by talking with an "auditor".
While not accepted by the medical and scientific establishment, the book sold over 100,000 copies in the first two years of its publication. Many enthusiasts emerged to form groups to study and practice Dianetics. The atmosphere in which Dianetics was discussed in this period was one of "excited experimentation", and sociologist Roy Wallis writes that Hubbard's work was regarded as an "initial exploration" for further development. Hubbard wrote an additional six books in 1951, drawing the attention of a significant fan base. Publication of "Dianetics: The Modern Science of Mental Health" brought in a flood of revenue, which Hubbard used to establish Dianetics foundations in six major American cities. Dianetics shared The New York Times best-seller list with other self-help writings, including Norman Vincent Peale's "The Art of Happiness" and Henry Overstreet's "The Mature Mind". Scholar Hugh B. Urban attributed the initial success of Dianetics to Hubbard's "entrepreneurial skills". Posthumously, "Publishers Weekly" awarded Hubbard a plaque to acknowledge Dianetics appearing on its bestseller list for one hundred consecutive weeks. Two of the strongest initial supporters of Dianetics in the 1950s were John W. Campbell, editor of "Astounding Science Fiction", and Joseph Augustus Winter, a writer and medical physician. Campbell published some of Hubbard's short stories, and Winter hoped that his colleagues would likewise be attracted to Hubbard's Dianetics system. Per Wallis, it was Dianetics' popularity as a lay psychotherapy that contributed to the Foundation's downfall. It was the craze of 1950–51, but the fad was dead by 1952. Most people read the book, tried it out, then put it down. The remaining practitioners had no ties to the Foundation and resisted its control, and because there were no trained Dianetics professionals, factions formed. The followers challenged Hubbard's movement and his authority. Wallis suggests Hubbard learned an implicit lesson from this experience: he would not make the same mistake when creating Scientology. Hubbard left the Foundation, which shut down as creditors began to demand settlement of its outstanding debts. Don Purcell, an oil millionaire Dianeticist from Wichita, Kansas, offered a brief respite from bankruptcy, but the Wichita Foundation's finances failed again in 1952, when Hubbard left for Phoenix with all his Dianetics materials to avoid the court bailiffs sent by Purcell, who had purchased the copyrights to Dianetics from Hubbard in an effort to keep him from bankruptcy. In 1954, Hubbard defined Scientology as a religion focused on the spirit, differentiating it from Dianetics, and subsequently Dianetics Auditing Therapy, which he defined as a counseling-based science that addressed the physical being. When Hubbard morphed Dianetics therapy into the religion of Scientology, Jesper Aagaard Petersen of Oxford University surmises that the change could have been motivated by the benefits of establishing it as a religion as much as by Hubbard's "discovery of past life experiences and his exploration of the thetan." A further reason was to avoid copyright infringement issues with use of the name Dianetics, then held by Purcell. Purcell later donated the copyright ownership back to Hubbard as charitable debt relief. With the temporary sale of assets resulting from the HDRF's bankruptcy, Hubbard no longer owned the rights to the name "Dianetics".
Scientologists refer to the book "Dianetics: The Modern Science of Mental Health" as "Book One." In 1952, Hubbard published a new set of teachings as "Scientology, a religious philosophy." Scientology did not replace Dianetics but attempted to extend it to cover new areas: where the goal of Dianetics is to rid the individual of his "reactive mind" engrams, the stated goal of Scientology is to rehabilitate the individual's spiritual nature so that adherents may reach their full potential. In 1963 and again in May 1969, Hubbard reorganized the material in Dianetics, the auditing commands, and the use of the E-meter (originally invented by Volney Mathison), naming the package "Standard Dianetics." A 1969 bulletin states: "This bulletin combines HCOB 27 April 1969 'R-3-R Restated' with those parts of HCOB 24 June 1963 'Routine 3-R' used in the new Standard Dianetic Course and its application. This gives the complete steps of Routine 3-R Revised." In 1978, Hubbard released "New Era Dianetics" (NED), a revised version supposed to produce better results in a shorter period of time. The course consists of 11 "rundowns" and requires a specifically trained auditor. In the Church of Scientology, OTs study several levels of New Era Dianetics for OTs before reaching the highest level. In "Dianetics: The Modern Science of Mental Health", Hubbard describes techniques that he suggests can rid individuals of fears and psychosomatic illnesses. A basic idea in Dianetics is that the mind consists of two parts: the "analytical mind" and the "reactive mind." The "reactive mind", the mind which operates when a person is physically unconscious, acts as a record of shock, trauma, pain, and otherwise harmful memories. Experiences such as these, stored in the "reactive mind", are dubbed "engrams". Dianetics is proposed as a method to erase these engrams in the reactive mind to achieve a state of Clear. Hubbard described Dianetics as "an organized science of thought built on definite axioms: statements of natural laws on the order of those of the physical sciences". In April 1950, before the public release of Dianetics, he wrote: "To date, over two hundred patients have been treated; of those two hundred, two hundred cures have been obtained." In Dianetics, the unconscious or reactive mind is described as a collection of "mental image pictures," which contain the recorded experience of past moments of unconsciousness, including all sensory perceptions and feelings involved, ranging from pre-natal experiences, infancy, and childhood to even the traumatic feelings associated with events from past lives and extraterrestrial cultures. The type of mental image picture created during a period of unconsciousness involves the exact recording of a painful experience. Hubbard called this phenomenon an engram, and defined it as "a complete recording of a moment of unconsciousness containing physical pain or painful emotion and all perceptions." Hubbard proposed that painful physical or emotional traumas caused "aberrations" (deviations from rational thinking) in the mind, which produced lasting adverse physical and emotional effects, similar to conversion disorders. When the analytical (conscious) mind shut down during these moments, events and perceptions of this period were stored as engrams in the unconscious or reactive mind. (In Hubbard's earliest publications on the subject, engrams were variously referred to as "Norns", "Impediments", and "comanomes" before "engram" was adapted from its existing usage at the suggestion of Joseph Augustus Winter, MD.)
Some commentators at the time noted Dianetics's blend of science fiction and occult orientations. Hubbard claimed that these engrams are the cause of almost all psychological and physical problems. In addition to physical pain, engrams could include words or phrases spoken in the vicinity while the patient was unconscious. For instance, Winter cites the example of a patient with a persistent headache who supposedly traced the problem to a doctor saying, "Take him now," during the patient's birth. Hubbard similarly claimed that leukemia is traceable to "an engram containing the phrase 'It turns my blood to water.'" While it is sometimes claimed that the Church of Scientology no longer stands by Hubbard's claims that Dianetics can treat physical conditions, it still publishes them: "... when the knee injuries of the past are located and discharged, the arthritis ceases, no other injury takes its place and the person is finished with arthritis of the knee." "[The reactive mind] can give a man arthritis, bursitis, asthma, allergies, sinusitis, coronary trouble, high blood pressure ... And it is the only thing in the human being which can produce these effects ... Discharge the content of [the reactive mind] and the arthritis vanishes, myopia gets better, heart illness decreases, asthma disappears, stomachs function properly and the whole catalog of ills goes away and stays away." Some of the psychometric ideas in Dianetics, in particular the E-meter, can be traced to Carl Jung. Basic concepts, including conversion disorder, are derived from Sigmund Freud, whom Hubbard credited as an inspiration and source. Freud had speculated 40 years previously that traumas with similar content join together in "chains," embedded in the unconscious mind, to cause irrational responses in the individual. Such a chain would be relieved by inducing the patient to remember the earliest trauma, "with an accompanying expression of emotion." According to Bent Corydon, Hubbard created the illusion that Dianetics was the first psychotherapy to address traumatic experiences in their own time, but others had done so as standard procedure. One treatment method Hubbard drew from in developing Dianetics was abreaction therapy. Abreaction is a psychoanalytical term that means bringing to consciousness, and thus adequate expression, material that has been unconscious. "It includes not only the recollection of forgotten memories and experience, but also their reliving with appropriate emotional display and discharge of affect. This process is usually facilitated by the patient's gaining awareness of the causal relationship between the previously undischarged emotion and his symptoms." According to Hubbard, before Dianetics psychotherapists had dealt with very light and superficial incidents (e.g., an incident that reminds the patient of a moment of loss), but with Dianetic therapy the patient could actually erase moments of pain and unconsciousness. He emphasized: "The discovery of the engram is entirely the property of Dianetics. Methods of its erasure are also owned entirely by Dianetics..." While 1950-style Dianetics was in some respects similar to older therapies, with the development of New Era Dianetics in 1978 the similarity vanished: New Era Dianetics uses an E-meter and a rote procedure for running "chains" of related traumatic incidents. Dianetics frames psychosomatic illness in terms of "predisposition", "precipitation", and "prolongation".
With the use of Dianetics techniques, Hubbard claimed, the reactive mind could be processed and all stored engrams could be refiled as experience. The central technique was "auditing," a two-person question-and-answer therapy designed to isolate and dissipate engrams (or "mental masses"). An auditor addresses questions to a subject, observes and records the subject's responses, and returns repeatedly to experiences or areas under discussion that appear painful, until the troubling experience has been identified and confronted. Through repeated applications of this method, the reactive mind could be "cleared" of its content, which has outlived its usefulness in the process of evolution; a person who has completed this process would be "Clear". The benefits of going Clear, according to Hubbard, were dramatic. A Clear would have no compulsions, repressions, psychoses, or neuroses, and would enjoy a near-perfect memory as well as a rise in IQ of as much as 50 points. He also claimed that "the atheist is activated by engrams as thoroughly as the zealot". He further claimed that widespread application of Dianetics would result in "A world without insanity, without criminals and without war." One of the key ideas of Dianetics, according to Hubbard, is the fundamental existential command to survive; according to Hugh B. Urban, this would serve as the foundation of a large part of later Scientology. According to the Scientology journal "The Auditor", the total number of "Clears" as of May 2006 stands at 50,311. The procedure of Dianetics therapy (known as "auditing") is a two-person activity. One person, the "auditor", guides the other person, the "pre-Clear". The pre-Clear's job is to look at the mind and talk to the auditor; the auditor acknowledges what the pre-Clear says and controls the process so the pre-Clear may put his full attention on his work. The auditor and pre-Clear sit down in chairs facing each other, and the process then follows in eleven distinct steps. Auditing sessions are supposedly kept confidential, though a few transcripts of auditing sessions with confidential information removed have been published as demonstration examples. Some extracts can be found in J.A. Winter's book "A Doctor's Report on Dianetics". Other, more comprehensive transcripts of auditing sessions carried out by Hubbard himself can be found in volume 1 of the "Research & Discovery Series" (Bridge Publications, 1980). Examples of public group processing sessions can be found throughout the "Congresses" lecture series. According to Hubbard, auditing enables the pre-Clear to "contact" and "release" engrams stored in the reactive mind, relieving him of the physical and mental aberrations connected with them. The pre-Clear is asked to inspect and familiarize himself with the exact details of his own experience; the auditor may not tell him anything about his case or evaluate any of the information the pre-Clear finds. In August 1950, amidst the success of "Dianetics: The Modern Science of Mental Health", Hubbard held a demonstration in Los Angeles' Shrine Auditorium where he presented a young woman called Sonya Bianca (a pseudonym) to a large audience, including many reporters and photographers, as "the world's first Clear." Despite Hubbard's claim that she had "full and perfect recall of every moment of her life", Bianca proved unable to answer questions from the audience testing her memory and analytical abilities, including the question of the color of Hubbard's tie.
Hubbard explained Bianca's failure to display her promised powers of recall by saying that he had used the word "now" in calling her to the stage, and thus inadvertently froze her in "present time," which blocked her abilities. Later, in the late 1950s, Hubbard would claim that several people had reached the state of Clear by the time he presented Bianca as the world's first; these others, Hubbard said, he had successfully cleared in the late 1940s while working "incognito" in Hollywood posing as a swami. In 1966, Hubbard declared South African Scientologist John McMaster to be the first true Clear. In an interview with "The New York Times" in November 1950, Hubbard claimed that "he had already submitted proof of claims made in the book to a number of scientists and associations," adding that the public as well as proper organizations were entitled to such proof and that he was ready and willing to give it in detail. In January 1951, the Hubbard Dianetic Research Foundation of Elizabeth, NJ published "Dianetic Processing: A Brief Survey of Research Projects and Preliminary Results", a booklet providing the results of psychometric tests conducted on 88 people undergoing Dianetics therapy. It presents case histories and a number of X-ray plates to support claims that Dianetics had cured "aberrations" including manic depression, asthma, arthritis, colitis, and "overt homosexuality," and that after Dianetic processing, test subjects experienced significantly increased scores on a standardized IQ test. The report's subjects are not identified by name, but one of them is clearly Hubbard himself ("Case 1080A, R. L."). The authors provide no qualifications, although they are described in Hubbard's book "Science of Survival" (where some results of the same study were reprinted) as psychotherapists. Critics of Dianetics are skeptical of this study, both because of the bias of the source and because the researchers appear to ascribe all physical benefits to Dianetics without considering possible outside factors; in other words, the report lacks any scientific controls. J.A. Winter, M.D., originally an associate of Hubbard and an early adopter of Dianetics, had by the end of 1950 cut his ties with Hubbard and written an account of his personal experiences with Dianetics. He described Hubbard as "absolutistic and authoritarian", and criticized the Hubbard Dianetic Research Foundation for failing to undertake "precise scientific research into the functioning of the mind". He also recommended that auditing be done by experts only, warning that it was dangerous for laymen to audit each other. Hubbard, by contrast, wrote: "Again, Dianetics is not being released to a profession, for no profession could encompass it." Hubbard's original book on Dianetics attracted highly critical reviews from science and medical writers and organizations. The American Psychological Association passed a resolution in 1950 calling "attention to the fact that these claims are not supported by empirical evidence of the sort required for the establishment of scientific generalizations." Subsequently, Dianetics has achieved no acceptance as a scientific theory, and scientists cite Dianetics as an example of a pseudoscience. Few scientific investigations into the effectiveness of Dianetics have been published; Professor John A. Lee made the same point in his 1970 evaluation of Dianetics. The MEDLINE database records two independent scientific studies on Dianetics, both conducted in the 1950s under the auspices of New York University.
Harvey Jay Fischer tested Dianetic therapy against three claims made by proponents and found it does not effect any significant changes in intellectual functioning, mathematical ability, or the degree of personality conflicts; Jack Fox tested Hubbard's thesis regarding recall of engrams, with the assistance of the Dianetic Research Foundation, and could not substantiate it. Commentators from a variety of backgrounds have described Dianetics as an example of pseudoscience; for example, philosophy professor Robert Carroll points to Dianetics' lack of empirical evidence. The validity and practice of auditing have been questioned by a variety of non-Scientologist commentators. Commenting on the example cited by Winter, the science writer Martin Gardner asserts that "nothing could be clearer from the above dialogue than the fact that the dianetic explanation for the headache existed only in the mind of the therapist, and that it was with considerable difficulty that the patient was maneuvered into accepting it." Other critics and medical experts have suggested that Dianetic auditing is a form of hypnosis. Hubbard, who had previously used hypnosis for entertainment purposes, strongly denied this connection and cautioned against hypnosis in Dianetics auditing. Professor Richard J. Ofshe, a leading expert on false memories, suggests that the feeling of well-being reported by pre-Clears at the end of an auditing session may be induced by post-hypnotic suggestion. Other researchers have identified quotations in Hubbard's work suggesting that false memories were created in Dianetics, specifically in the form of birth and pre-birth memories. According to an article by physician Martin Gumpert, "Hubbard's concept of psychosomatic disease is definitely wrong. Psychosomatic ailments are not simply caused by emotional disturbances: they are diseases in which the emotional and the organic factor are closely involved and interdependent."
https://en.wikipedia.org/wiki?curid=7989
Data warehouse In computing, a data warehouse (DW or DWH), also known as an enterprise data warehouse (EDW), is a system used for reporting and data analysis, and is considered a core component of business intelligence. DWs are central repositories of integrated data from one or more disparate sources. They store current and historical data in a single place and are used for creating analytical reports for workers throughout the enterprise. The data stored in the warehouse is uploaded from the operational systems (such as marketing or sales). The data may pass through an operational data store and may require data cleansing to ensure data quality before it is used in the DW for reporting. Extract, transform, load (ETL) and extract, load, transform (ELT) are the two main approaches used to build a data warehouse system. The typical ETL-based data warehouse uses staging, data integration, and access layers to house its key functions. The staging layer or staging database stores raw data extracted from each of the disparate source data systems. The integration layer integrates the disparate data sets by transforming the data from the staging layer, often storing this transformed data in an operational data store (ODS) database. The integrated data are then moved to yet another database, often called the data warehouse database, where the data is arranged into hierarchical groups, often called dimensions, and into facts and aggregate facts. The combination of facts and dimensions is sometimes called a star schema. The access layer helps users retrieve data. The main source of the data is cleansed, transformed, catalogued, and made available for use by managers and other business professionals for data mining, online analytical processing, market research, and decision support. However, the means to retrieve and analyze data, to extract, transform, and load data, and to manage the data dictionary are also considered essential components of a data warehousing system. Many references to data warehousing use this broader context. Thus, an expanded definition for data warehousing includes business intelligence tools, tools to extract, transform, and load data into the repository, and tools to manage and retrieve metadata. IBM InfoSphere DataStage, Ab Initio Software, and Informatica PowerCenter are some of the tools widely used to implement ETL-based data warehouses. ELT-based data warehousing dispenses with a separate ETL tool for data transformation. Instead, it maintains a staging area inside the data warehouse itself. In this approach, data is extracted from heterogeneous source systems and then loaded directly into the data warehouse, before any transformation occurs. All necessary transformations are then handled inside the data warehouse itself, and the manipulated data is finally loaded into target tables in the same data warehouse. A data warehouse maintains a copy of information from the source transaction systems. This architectural complexity provides several opportunities, such as maintaining a history of the data even when the source systems do not, and integrating data from multiple sources into a single, consistent view. The environment for data warehouses and marts spans the source systems that provide the data, the integration processes that prepare it, and the tools that deliver it to business users. Regarding source systems, R. Kelly Rainer states, "A common source for the data in data warehouses is the company's operational databases, which can be relational databases". Regarding data integration, Rainer states, "It is necessary to extract data from source systems, transform them, and load them into a data mart or warehouse".
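Rainer's extract-transform-load sequence can be made concrete with a minimal sketch in Python. This is an illustrative toy, not a production pipeline: all table and column names are hypothetical, and sqlite3 stands in for both the operational source system and the warehouse database.

import sqlite3

def extract(source_conn):
    # Staging layer: pull raw rows out of the operational system.
    return source_conn.execute(
        "SELECT order_id, customer, amount, order_date FROM orders"
    ).fetchall()

def transform(rows):
    # Integration layer: cleanse bad rows and conform values.
    cleaned = []
    for order_id, customer, amount, order_date in rows:
        if customer is None or amount is None or amount < 0:
            continue  # data cleansing rule: drop unusable rows
        cleaned.append(
            (order_id, customer.strip().upper(), round(amount, 2), order_date)
        )
    return cleaned

def load(dw_conn, rows):
    # Load the conformed rows into the warehouse fact table.
    dw_conn.execute(
        "CREATE TABLE IF NOT EXISTS fact_orders ("
        "order_id INTEGER, customer TEXT, amount REAL, order_date TEXT)"
    )
    dw_conn.executemany("INSERT INTO fact_orders VALUES (?, ?, ?, ?)", rows)
    dw_conn.commit()

if __name__ == "__main__":
    source = sqlite3.connect("operational.db")   # operational source system
    warehouse = sqlite3.connect("warehouse.db")  # the data warehouse
    load(warehouse, transform(extract(source)))

An ELT variant of the same sketch would simply load the raw rows into a staging table inside warehouse.db first and express the cleansing rules as SQL run inside the warehouse.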
Rainer discusses storing data in an organization's data warehouse or data marts. Metadata is data about data: "IT personnel need information about data sources; database, table, and column names; refresh schedules; and data usage measures". Today, the most successful companies are those that can respond quickly and flexibly to market changes and opportunities, and a key to this response is the effective and efficient use of data and information by analysts and managers. A "data warehouse" is a repository of historical data that is organized by subject to support decision makers in the organization. Once data is stored in a data mart or warehouse, it can be accessed. A data mart is a simple form of a data warehouse that is focused on a single subject (or functional area); hence, data marts draw data from a limited number of sources such as sales, finance, or marketing. Data marts are often built and controlled by a single department within an organization. The sources could be internal operational systems, a central data warehouse, or external data. Denormalization is the norm for data modeling techniques in this system. Given that data marts generally cover only a subset of the data contained in a data warehouse, they are often easier and faster to implement. Types of data marts include dependent, independent, and hybrid data marts. Online analytical processing (OLAP) is characterized by a relatively low volume of transactions. Queries are often very complex and involve aggregations. For OLAP systems, response time is an effectiveness measure. OLAP applications are widely used in data mining. OLAP databases store aggregated, historical data in multi-dimensional schemas (usually star schemas). OLAP systems typically have data latency of a few hours, as opposed to data marts, where latency is expected to be closer to one day. The OLAP approach is used to analyze multidimensional data from multiple sources and perspectives. The three basic operations in OLAP are roll-up (consolidation), drill-down, and slicing and dicing. Online transaction processing (OLTP) is characterized by a large number of short on-line transactions (INSERT, UPDATE, DELETE). OLTP systems emphasize very fast query processing and maintaining data integrity in multi-access environments. For OLTP systems, effectiveness is measured by the number of transactions per second. OLTP databases contain detailed and current data. The schema used to store transactional databases is the entity model (usually 3NF). Normalization is the norm for data modeling techniques in this system. Predictive analytics is about finding and quantifying hidden patterns in the data using complex mathematical models that can be used to predict future outcomes. Predictive analysis is different from OLAP in that OLAP focuses on historical data analysis and is reactive in nature, while predictive analysis focuses on the future. These systems are also used for customer relationship management (CRM). The concept of data warehousing dates back to the late 1980s, when IBM researchers Barry Devlin and Paul Murphy developed the "business data warehouse". In essence, the data warehousing concept was intended to provide an architectural model for the flow of data from operational systems to decision support environments. The concept attempted to address the various problems associated with this flow, mainly the high costs associated with it.
In the absence of a data warehousing architecture, an enormous amount of redundancy was required to support multiple decision support environments. In larger corporations, it was typical for multiple decision support environments to operate independently. Though each environment served different users, they often required much of the same stored data. The process of gathering, cleaning, and integrating data from various sources, usually from long-term existing operational systems (usually referred to as legacy systems), was typically in part replicated for each environment. Moreover, the operational systems were frequently reexamined as new decision support requirements emerged. Often new requirements necessitated gathering, cleaning, and integrating new data from "data marts" that was tailored for ready access by users. Key developments in the early years of data warehousing laid the groundwork for the concepts that follow. A fact is a value, or measurement, which represents a fact about the managed entity or system. Facts, as reported by the reporting entity, are said to be at the raw level; e.g., in a mobile telephone system, if a BTS (base transceiver station) receives 1,000 requests for traffic channel allocation, allocates 820, and rejects the remaining, it would report three facts or measurements to a management system: the number of requests received (1,000), the number of channels allocated (820), and the number of requests rejected (180). Facts at the raw level are further aggregated to higher levels in various dimensions to extract more service- or business-relevant information. These are called aggregates or summaries or aggregated facts. For instance, if there are three BTSs in a city, then the facts above can be aggregated from the BTS level to the city level in the network dimension; for example, the request, allocation, and rejection counts of the three BTSs can be summed to give city-level totals. There are three or more leading approaches to storing data in a data warehouse — the most important approaches are the dimensional approach and the normalized approach. The dimensional approach refers to Ralph Kimball's approach, in which it is stated that the data warehouse should be modeled using a dimensional model/star schema. The normalized approach, also called the 3NF model (third normal form), refers to Bill Inmon's approach, in which it is stated that the data warehouse should be modeled using an E-R model/normalized model. In a dimensional approach, transaction data are partitioned into "facts", which are generally numeric transaction data, and "dimensions", which are the reference information that gives context to the facts. For example, a sales transaction can be broken up into facts such as the number of products ordered and the total price paid for the products, and into dimensions such as order date, customer name, product number, order ship-to and bill-to locations, and salesperson responsible for receiving the order (a minimal star schema along these lines is sketched after the following paragraph). A key advantage of a dimensional approach is that the data warehouse is easier for the user to understand and to use. Also, the retrieval of data from the data warehouse tends to operate very quickly. Dimensional structures are easy to understand for business users, because the structure is divided into measurements/facts and context/dimensions. Facts are related to the organization's business processes and operational system, whereas the dimensions surrounding them contain context about the measurement (Kimball, Ralph 2008). Another advantage of the dimensional model is that it does not require a relational database every time; thus, this type of modeling technique is very useful for end-user queries in the data warehouse. The model of facts and dimensions can also be understood as a data cube.
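The data-cube picture connects directly to the three OLAP operations named earlier (roll-up, drill-down, and slicing and dicing). The following small sketch illustrates them, with pandas standing in for an OLAP engine; the data and column names are illustrative only.

import pandas as pd

# Illustrative sales cube: dimensions (state, city, product), one fact (units).
sales = pd.DataFrame({
    "state":   ["TX", "TX", "TX", "CA"],
    "city":    ["Austin", "Austin", "Dallas", "Fresno"],
    "product": ["A", "B", "A", "A"],
    "units":   [120, 80, 200, 150],
})

# Roll-up (consolidation): aggregate city-level facts up to the state level.
rollup = sales.groupby(["state", "product"])["units"].sum()

# Drill-down: go back from state totals to the finer city grain.
drilldown = sales.groupby(["state", "city", "product"])["units"].sum()

# Slice: fix one dimension value (product = "A").
slice_a = sales[sales["product"] == "A"]

# Dice: fix values on several dimensions at once.
dice = sales[(sales["product"] == "A") & (sales["state"] == "TX")]

print(rollup, drilldown, slice_a, dice, sep="\n\n")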
The dimensional approach has disadvantages of its own, however. In the normalized approach, the data in the data warehouse are stored following, to a degree, database normalization rules. Tables are grouped together by "subject areas" that reflect general data categories (e.g., data on customers, products, finance, etc.). The normalized structure divides data into entities, which creates several tables in a relational database. When applied in large enterprises, the result is dozens of tables that are linked together by a web of joins. Furthermore, each of the created entities is converted into a separate physical table when the database is implemented (Kimball, Ralph 2008). The main advantage of this approach is that it is straightforward to add information into the database. Some disadvantages of this approach are that, because of the number of tables involved, it can be difficult for users to join data from different sources into meaningful information and to access the information without a precise understanding of the sources of data and of the data structure of the data warehouse. Both normalized and dimensional models can be represented in entity-relationship diagrams, as both contain joined relational tables. The difference between the two models is the degree of normalization (also known as normal forms). These approaches are not mutually exclusive, and there are other approaches. Dimensional approaches can involve normalizing data to a degree (Kimball, Ralph 2008). In "Information-Driven Business", Robert Hillard proposes a way to compare the two approaches based on the information needs of the business problem. The technique shows that normalized models hold far more information than their dimensional equivalents (even when the same fields are used in both models), but this extra information comes at the cost of usability. The technique measures information quantity in terms of information entropy and usability in terms of the Small Worlds data transformation measure. In the "bottom-up" approach, data marts are first created to provide reporting and analytical capabilities for specific business processes. These data marts can then be integrated to create a comprehensive data warehouse. The data warehouse bus architecture is primarily an implementation of "the bus", a collection of conformed dimensions and conformed facts, which are dimensions that are shared (in a specific way) between facts in two or more data marts. The "top-down" approach is designed using a normalized enterprise data model. "Atomic" data, that is, data at the greatest level of detail, are stored in the data warehouse. Dimensional data marts containing data needed for specific business processes or specific departments are created from the data warehouse. Data warehouses (DW) often resemble a hub-and-spoke architecture. Legacy systems feeding the warehouse often include customer relationship management and enterprise resource planning systems, generating large amounts of data. To consolidate these various data models, and facilitate the extract-transform-load (ETL) process, data warehouses often make use of an operational data store, the information from which is parsed into the actual DW. To reduce data redundancy, larger systems often store the data in a normalized way. Data marts for specific reports can then be built on top of the data warehouse.
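Summary tables of the kind a data mart holds are produced by rolling raw facts up a dimension, as in the BTS example given earlier. A minimal sketch of that roll-up in plain Python follows; the station identifiers, the city name, and the figures for the second and third stations are hypothetical, while the first row mirrors the example in the text.

```python
from collections import defaultdict

# Raw facts, one record per BTS (base transceiver station).
raw_facts = [
    {"bts": "BTS-1", "city": "CityA", "requests": 1000, "allocated": 820, "rejected": 180},
    {"bts": "BTS-2", "city": "CityA", "requests": 600, "allocated": 540, "rejected": 60},
    {"bts": "BTS-3", "city": "CityA", "requests": 400, "allocated": 380, "rejected": 20},
]

# Aggregate the raw facts from the BTS level up to the city level
# in the network dimension.
city_totals = defaultdict(lambda: {"requests": 0, "allocated": 0, "rejected": 0})
for fact in raw_facts:
    for measure in ("requests", "allocated", "rejected"):
        city_totals[fact["city"]][measure] += fact[measure]

print(dict(city_totals))
# {'CityA': {'requests': 2000, 'allocated': 1740, 'rejected': 260}}
```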
A hybrid DW database is kept in third normal form to eliminate data redundancy. A normal relational database, however, is not efficient for business intelligence reports, where dimensional modelling is prevalent. Small data marts can draw data from the consolidated warehouse and use the filtered, specific data for the fact tables and dimensions required. The DW provides a single source of information from which the data marts can read, providing a wide range of business information. The hybrid architecture allows a DW to be replaced with a master data management repository where operational (not static) information can reside. The data vault modeling components follow a hub-and-spoke architecture. This modeling style is a hybrid design, consisting of best practices from both third normal form and the star schema. The data vault model is not a true third normal form, and breaks some of its rules, but it is a top-down architecture with a bottom-up design. The data vault model is geared to be strictly a data warehouse; it is not geared to be end-user accessible, and when built it still requires the use of a data mart or star-schema-based release area for business purposes. There are basic features that define the data in the data warehouse: subject orientation, data integration, time variance, nonvolatility, and data granularity. Unlike in operational systems, the data in the data warehouse revolves around the subjects of the enterprise. Subject orientation can be really useful for decision making; gathering the required data by subject is what makes the warehouse subject-oriented. The data found within the data warehouse is integrated. Since it comes from several operational systems, all inconsistencies must be removed; consistency must be enforced in naming conventions, measurement of variables, encoding structures, physical attributes of data, and so forth. While operational systems reflect current values as they support day-to-day operations, data warehouse data represents data over a long time horizon (up to 10 years), which means it stores historical data. It is mainly meant for data mining and forecasting; if a user is searching for the buying pattern of a specific customer, the user needs to look at data on current and past purchases. The data in the data warehouse is read-only, which means it cannot be updated or deleted once committed. In the data warehouse, data is summarized at different levels. The user may start by looking at the total sale units of a product in an entire region, then look at the states in that region, and finally examine the individual stores in a certain state. Therefore, the analysis typically starts at a higher level and drills down to lower levels of detail (a short sketch of such a drill-down follows below). The methods that organizations use to construct and organize a data warehouse are numerous. The hardware utilized, the software created, and the data resources specifically required for the correct functionality of a data warehouse are the main components of the data warehouse architecture. All data warehouses have multiple phases in which the requirements of the organization are modified and fine-tuned. Operational systems are optimized for the preservation of data integrity and speed of recording of business transactions through the use of database normalization and an entity-relationship model. Operational system designers generally follow Codd's normalization rules to ensure data integrity.
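A minimal sketch of that drill-down, using pandas with hypothetical sales figures; each successive grouping key adds one more level of detail.

```python
import pandas as pd

# Hypothetical summarized sales, one row per store.
sales = pd.DataFrame({
    "region": ["West", "West", "West", "East"],
    "state":  ["CA", "CA", "WA", "NY"],
    "store":  ["S1", "S2", "S3", "S4"],
    "units":  [100, 150, 80, 200],
})

print(sales.groupby("region")["units"].sum())                      # highest level
print(sales.groupby(["region", "state"])["units"].sum())           # drill down to states
print(sales.groupby(["region", "state", "store"])["units"].sum())  # individual stores
```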
Fully normalized database designs (that is, those satisfying all of Codd's normalization rules) often result in information from a business transaction being stored in dozens to hundreds of tables. Relational databases are efficient at managing the relationships between these tables. The databases have very fast insert/update performance because only a small amount of data in those tables is affected each time a transaction is processed. To improve performance, older data are usually periodically purged from operational systems. Data warehouses are optimized for analytic access patterns. Analytic access patterns generally involve selecting specific fields rather than "SELECT *" (selecting all fields), which is more common in operational databases. Because of these differences in access patterns, operational databases (loosely, OLTP) benefit from the use of a row-oriented DBMS, whereas analytics databases (loosely, OLAP) benefit from the use of a column-oriented DBMS. Unlike operational systems, which maintain a snapshot of the business, data warehouses generally maintain an unbounded history, implemented through ETL processes that periodically migrate data from the operational systems over to the data warehouse.
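The row-versus-column contrast above can be illustrated with plain Python data structures. This is only a schematic of the two physical layouts under stated assumptions, not a model of any particular DBMS.

```python
# Row-oriented layout: each record is stored together. This suits OLTP-style
# work, where a short transaction inserts or updates one small record.
rows = [
    (1, "East", 120.0),
    (2, "West", 95.0),
]
rows.append((3, "East", 60.0))  # a new transaction touches one small tuple

# Column-oriented layout: each field is stored contiguously. This suits
# analytic scans that read a few columns and skip the rest (no "SELECT *").
columns = {
    "id":     [1, 2, 3],
    "region": ["East", "West", "East"],
    "amount": [120.0, 95.0, 60.0],
}
print(sum(columns["amount"]))  # the aggregate touches only the column it needs
```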
https://en.wikipedia.org/wiki?curid=7990
Disperser A disperser is a one-sided extractor. Where an extractor requires that every event gets the same probability under the uniform distribution and the extracted distribution, only the latter is required for a disperser. So for a disperser, every event A ⊆ {0,1}^m with Pr_U[A] > ε satisfies Pr_Dis(X, U_d)[A] > 0. Definition (disperser): A (k, ε)-disperser is a function Dis: {0,1}^n × {0,1}^d → {0,1}^m such that for every distribution X on {0,1}^n with min-entropy H∞(X) ≥ k, the support of the distribution Dis(X, U_d) is of size at least (1 − ε)2^m. An (N, M, D, K, e)-disperser is a bipartite graph with N vertices on the left side, each with degree D, and M vertices on the right side, such that every subset of K vertices on the left side is connected to more than (1 − e)M vertices on the right. An extractor is a related type of graph that guarantees an even stronger property; every (N, M, D, K, e)-extractor is also an (N, M, D, K, e)-disperser. In an unrelated sense, a disperser is also a high-speed mixing device used to disperse or dissolve pigments and other solids into a liquid.
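The bipartite-graph formulation lends itself to a direct, brute-force check. The sketch below, in Python, is purely illustrative: the function name and the toy graph are invented for this example, and the exhaustive search over K-subsets is exponential, so it is only usable at toy sizes.

```python
from itertools import combinations

def is_disperser(edges, M, K, e):
    """Check the (N, M, D, K, e)-disperser property by brute force: every
    K-subset of left vertices must reach more than (1 - e) * M right vertices."""
    for subset in combinations(sorted(edges), K):
        neighbours = set().union(*(edges[v] for v in subset))
        if len(neighbours) <= (1 - e) * M:
            return False
    return True

# Toy bipartite graph: N = 4 left vertices, M = 3 right vertices, degree D = 2.
edges = {0: {0, 1}, 1: {1, 2}, 2: {0, 2}, 3: {1, 2}}
print(is_disperser(edges, M=3, K=2, e=0.5))  # True: every pair covers > 1.5 right vertices
```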
https://en.wikipedia.org/wiki?curid=7991
Devonian The Devonian is a geologic period and system of the Paleozoic, spanning 60 million years from the end of the Silurian, 419.2 million years ago (Mya), to the beginning of the Carboniferous, 358.9 Mya. It is named after Devon, England, where rocks from this period were first studied. The first significant adaptive radiation of life on dry land occurred during the Devonian. Free-sporing vascular plants began to spread across dry land, forming extensive forests which covered the continents. By the middle of the Devonian, several groups of plants had evolved leaves and true roots, and by the end of the period the first seed-bearing plants appeared. Various terrestrial arthropods also became well-established. Fish reached substantial diversity during this time, leading the Devonian to often be dubbed the Age of Fishes. The placoderms began dominating almost every known aquatic environment. The ancestors of all four-limbed vertebrates (tetrapods) began adapting to walking on land, as their strong pectoral and pelvic fins gradually evolved into legs. In the oceans, primitive sharks became more numerous than in the Silurian and Late Ordovician. The first ammonites, a group of molluscs, appeared. Trilobites, the mollusc-like brachiopods, and the great coral reefs were still common. The Late Devonian extinction, which started about 375 million years ago, severely affected marine life, killing off all of the placoderms and all of the trilobites save for a few species of the order Proetida. The palaeogeography was dominated by the supercontinent of Gondwana to the south, the continent of Siberia to the north, and the early formation of the small continent of Euramerica in between. The period is named after Devon, a county in southwestern England, where a controversial argument in the 1830s over the age and structure of the rocks found distributed throughout the county was eventually resolved by the definition of the Devonian period in the geological timescale. The Great Devonian Controversy was a long period of vigorous argument and counter-argument, pitting Roderick Murchison and Adam Sedgwick against Henry De la Beche, who was supported by George Bellas Greenough. Murchison and Sedgwick won the debate and named the period they proposed the Devonian System. While the rock beds that define the start and end of the Devonian period are well identified, the exact dates are uncertain. According to the International Commission on Stratigraphy, the Devonian extends from the end of the Silurian, 419.2 Mya, to the beginning of the Carboniferous, 358.9 Mya – in North America, at the beginning of the Mississippian subperiod of the Carboniferous. In nineteenth-century texts the Devonian was called the "Old Red Age", after the red and brown terrestrial deposits known in the United Kingdom as the Old Red Sandstone, in which early fossil discoveries were found. Another common term is "Age of the Fishes", referring to the evolution of several major groups of fish that took place during the period. Older literature on the Anglo-Welsh basin divides it into the Downtonian, Dittonian, Breconian, and Farlovian stages, the latter three of which are placed in the Devonian.
The Devonian has also erroneously been characterised as a "greenhouse age", due to sampling bias: most of the early Devonian-age discoveries came from the strata of western Europe and eastern North America, which at the time straddled the Equator as part of the supercontinent of Euramerica, and fossil signatures of widespread reefs there indicate tropical climates that were warm and moderately humid. In fact, the climate in the Devonian differed greatly between its epochs and between geographic regions. For example, during the Early Devonian, arid conditions were prevalent through much of the world, including Siberia, Australia, North America, and China, but Africa and South America had a warm temperate climate. In the Late Devonian, by contrast, arid conditions were less prevalent across the world and temperate climates were more common. The Devonian Period is formally broken into Early, Middle and Late subdivisions. The rocks corresponding to those epochs are referred to as belonging to the Lower, Middle and Upper parts of the Devonian System. The Early Devonian lasted from 419.2 to 393.3 Mya and began with the Lochkovian stage (419.2 to 410.8 Mya), which was followed by the Pragian (410.8 to 407.6 Mya) and then by the Emsian, which lasted until the Middle Devonian began, 393.3 Mya. During this time, the first ammonoids appeared, descending from bactritoid nautiloids. Ammonoids during this time period were simple and differed little from their nautiloid counterparts. These ammonoids belong to the order Agoniatitida, which in later epochs evolved into new ammonoid orders, for example Goniatitida and Clymeniida. This class of cephalopod molluscs would dominate the marine fauna until the beginning of the Mesozoic era. The Middle Devonian comprised two subdivisions: first the Eifelian, which then gave way to the Givetian, 387.7 Mya. During this time the jawless agnathan fishes began to decline in diversity in freshwater and marine environments, partly due to drastic environmental changes and partly due to the increasing competition, predation, and diversity of jawed fishes. The shallow, warm, oxygen-depleted waters of Devonian inland lakes, surrounded by primitive plants, provided the environment necessary for certain early fish to develop such essential characteristics as well-developed lungs and the ability to crawl out of the water and onto the land for short periods of time. Finally, the Late Devonian started with the Frasnian, 382.7 to 372.2 Mya, during which the first forests took shape on land. The first tetrapods appeared in the fossil record in the ensuing Famennian subdivision, the beginning and end of which are marked with extinction events. This lasted until the end of the Devonian, 358.9 Mya. The Devonian was a relatively warm period, and probably lacked any glaciers. The temperature gradient from the equator to the poles was not as large as it is today. The weather was also very arid, mostly along the equator, where it was driest. Reconstruction of tropical sea surface temperature from conodont apatite implies an average value of 30 °C in the Early Devonian. Carbon dioxide (CO2) levels dropped steeply throughout the Devonian period as the burial of the newly evolved forests drew carbon out of the atmosphere into sediments; this may be reflected by a Mid-Devonian cooling of around 5 °C. The Late Devonian warmed to levels equivalent to the Early Devonian; while there is no corresponding increase in CO2 concentrations, continental weathering increased (as predicted by warmer temperatures); further, a range of evidence, such as plant distribution, points to a Late Devonian warming.
The climate would have affected the dominant organisms in reefs; microbes would have been the main reef-forming organisms in warm periods, with corals and stromatoporoid sponges taking the dominant role in cooler times. The warming at the end of the Devonian may even have contributed to the extinction of the stromatoporoids. The Devonian period was a time of great tectonic activity, as Euramerica and Gondwana drew closer together. The continent Euramerica (or Laurussia) was created in the early Devonian by the collision of Laurentia and Baltica, which rotated into the natural dry zone along the Tropic of Capricorn, a zone formed as much in Paleozoic times as nowadays by the convergence of two great air masses, the Hadley cell and the Ferrel cell. In these near-deserts, the Old Red Sandstone sedimentary beds formed, made red by the oxidised iron (hematite) characteristic of drought conditions. Near the equator, the plates of Euramerica and Gondwana were starting to meet, beginning the early stages of the assembly of Pangaea. This activity further raised the northern Appalachian Mountains and formed the Caledonian Mountains in Great Britain and Scandinavia. The west coast of Devonian North America, by contrast, was a passive margin with deep silty embayments, river deltas and estuaries, found today in Idaho and Nevada; an approaching volcanic island arc reached the steep slope of the continental shelf in Late Devonian times and began to uplift deep water deposits, a collision that was the prelude to the mountain-building episode at the beginning of the Carboniferous called the Antler orogeny. Sea levels were high worldwide, and much of the land lay under shallow seas, where tropical reef organisms lived. The deep, enormous Panthalassa (the "universal ocean") covered the rest of the planet. Other minor oceans were the Paleo-Tethys Ocean, Proto-Tethys Ocean, Rheic Ocean, and Ural Ocean (which was closed during the collision with Siberia and Baltica). During the Devonian, Chaitenia, an island arc, accreted to Patagonia. Sea levels in the Devonian were generally high. Marine faunas continued to be dominated by bryozoans, diverse and abundant brachiopods, the enigmatic hederellids, microconchids and corals. Lily-like crinoids (animals, their resemblance to flowers notwithstanding) were abundant, and trilobites were still fairly common. Among vertebrates, jawless armored fish (ostracoderms) declined in diversity, while the jawed fish (gnathostomes) simultaneously increased in both the sea and fresh water. Armored placoderms were numerous during the lower stages of the Devonian Period and became extinct in the Late Devonian, perhaps because of competition for food with the other fish species. Early cartilaginous (Chondrichthyes) and bony fishes (Osteichthyes) also became diverse and played a large role within the Devonian seas. The first abundant genus of shark, "Cladoselache", appeared in the oceans during the Devonian Period. The great diversity of fish around at the time has led to the Devonian being given the name "The Age of Fish" in popular culture. The first ammonites also appeared during or slightly before the early Devonian Period, around 400 Mya. A now-dry barrier reef, located in the present-day Kimberley Basin of northwest Australia, once extended a thousand kilometres, fringing a Devonian continent. Reefs in general are built by various carbonate-secreting organisms that have the ability to erect wave-resistant structures close to sea level.
Although modern reefs are constructed mainly by corals and calcareous algae, the main contributors to the Devonian reefs were different: they were composed of calcareous algae, coral-like stromatoporoids, and tabulate and rugose corals, in that order of importance. By the Devonian Period, life was well underway in its colonisation of the land. The moss forests and bacterial and algal mats of the Silurian were joined early in the period by primitive rooted plants that created the first stable soils and harbored arthropods like mites, scorpions, trigonotarbids and myriapods (although arthropods appeared on land much earlier than the Early Devonian, and the existence of fossils such as "Protichnites" suggests that amphibious arthropods may have appeared as early as the Cambrian). By far the largest land organism at the beginning of this period was the enigmatic "Prototaxites", which was possibly the fruiting body of an enormous fungus, a rolled liverwort mat, or another organism of uncertain affinities; it stood more than 8 metres tall and towered over the low, carpet-like vegetation during the early part of the Devonian. The first possible fossils of insects also appeared around 416 Mya, in the Early Devonian. Evidence for the earliest tetrapods takes the form of trace fossils in shallow lagoon environments within a marine carbonate platform/shelf during the Middle Devonian, although these traces have been questioned and an interpretation as fish feeding traces ("Piscichnus") has been advanced. Many Early Devonian plants did not have true roots or leaves like extant plants, although vascular tissue is observed in many of them. Some of the early land plants, such as "Drepanophycus", likely spread by vegetative growth and spores. The earliest land plants, such as "Cooksonia", consisted of leafless, dichotomous axes with terminal sporangia and were generally very short-statured, growing hardly more than a few centimetres tall. By the Middle Devonian, shrub-like forests of primitive plants existed: lycophytes, horsetails, ferns, and progymnosperms had evolved. Most of these plants had true roots and leaves, and many were quite tall. The earliest-known trees appeared in the Middle Devonian; these included a lineage of lycopods and another arborescent, woody vascular plant, the cladoxylopsids. (See also: lignin.) These are the oldest-known trees of the world's first forests. By the end of the Devonian, the first seed-forming plants had appeared. This rapid appearance of so many plant groups and growth forms has been called the "Devonian Explosion". The "greening" of the continents acted as a carbon sink, and atmospheric concentrations of carbon dioxide may have dropped. This may have cooled the climate and led to a massive extinction event. See Late Devonian extinction. Primitive arthropods co-evolved with this diversified terrestrial vegetation structure. The evolving co-dependence of insects and seed plants that characterised a recognisably modern world had its genesis in the Late Devonian period. The development of soils and plant root systems probably led to changes in the speed and pattern of erosion and sediment deposition. The rapid evolution of a terrestrial ecosystem that contained copious animals opened the way for the first vertebrates to seek out a terrestrial living. By the end of the Devonian, arthropods were solidly established on the land.
A major extinction occurred at the beginning of the last phase of the Devonian period, the Famennian faunal stage (at the Frasnian-Famennian boundary, about 372 Mya), when all of the fossil agnathan fishes, save for the psammosteid heterostracans, suddenly disappeared. A second strong pulse closed the Devonian period. The Late Devonian extinction was one of five major extinction events in the history of the Earth's biota, and was more drastic than the familiar extinction event that closed the Cretaceous. The Devonian extinction crisis primarily affected the marine community, and selectively affected shallow warm-water organisms rather than cool-water organisms. The most important group to be affected by this extinction event was the reef-builders of the great Devonian reef systems. Amongst the severely affected marine groups were the brachiopods, trilobites, ammonites, conodonts, and acritarchs, as well as the jawless fish and all of the placoderms. Land plants, as well as freshwater species such as our tetrapod ancestors, were relatively unaffected by the Late Devonian extinction event (though there is a counterargument that the Devonian extinctions nearly wiped out the tetrapods). The reasons for the Late Devonian extinctions are still unknown, and all explanations remain speculative. The Canadian paleontologist Digby McLaren suggested in 1969 that the Devonian extinction events were caused by an asteroid impact. However, while there were Late Devonian collision events (see the Alamo bolide impact), little evidence supports the existence of a large enough Devonian crater.
https://en.wikipedia.org/wiki?curid=7992
David Thompson (explorer) David Thompson (30 April 1770 – 10 February 1857) was a British-Canadian fur trader, surveyor, and cartographer, known to some native peoples as "Koo-Koo-Sint" or "the Stargazer". Over his career, Thompson traveled some 90,000 kilometres across North America, mapping vast stretches of the continent along the way. For this historic feat, Thompson has been described as the "greatest land geographer who ever lived." David Thompson was born in Westminster, Middlesex, to recent Welsh migrants David and Ann Thompson. When Thompson was two, his father died, leaving the family in financial hardship and his mother without resources. On 29 April 1777, the day before his seventh birthday, Thompson and his older brother were placed in the Grey Coat Hospital, a school for the disadvantaged of Westminster. Thompson graduated to the Grey Coat mathematical school, where he received an education oriented toward the Royal Navy: the mathematics of trigonometry and geometry; practical navigation, including the use of nautical instruments, the finding of latitudes and longitudes, and navigational calculations from observations of the sun, moon and tides; and the drawing of maps and charts, the taking of land measurements, and the sketching of landscapes. He later built on these skills to make his career. In 1784, at the age of 14, the Grey Coat treasurer paid the Hudson's Bay Company the sum of five pounds, upon which Thompson became the company's indentured servant for a period of seven years, to be trained as a clerk. He set sail on 28 May of that year, leaving England for North America. On 2 September 1784, Thompson arrived in Churchill (now in Manitoba) and was put to work as a clerk/secretary, copying the personal papers of the governor of Fort Churchill, Samuel Hearne. The next year he was transferred to nearby York Factory, and over the next few years spent time as a secretary at Cumberland House, Saskatchewan, and South Branch House before arriving at Manchester House in 1787. During those years he learned to keep accounts and other records, calculate the values of furs (it was noted that he owned several expensive beaver pelts at the time, even though a secretary's job would not have paid terribly well), track supplies, and perform other duties. On 23 December 1788, Thompson seriously fractured his tibia, forcing him to spend the next two winters at Cumberland House convalescing. It was during this time that he greatly refined and expanded his mathematical, astronomical, and surveying skills under the tutelage of Hudson's Bay Company surveyor Philip Turnor. It was also during this time that he lost sight in his right eye. In 1790, with his apprenticeship nearing its end, Thompson requested a set of surveying tools in place of the typical parting gift of fine clothes offered by the company to those completing their indenture. He received both. He entered the employ of the Hudson's Bay Company as a fur trader. In 1792 he completed his first significant survey, mapping a route to Lake Athabasca (where today's Alberta/Saskatchewan border is located). In recognition of his map-making skills, the company promoted Thompson to surveyor in 1794. He continued working for the Hudson's Bay Company until 23 May 1797 when, frustrated with the company's policy of promoting the use of alcohol with indigenous people in the fur trade, he left, walking through the snow to enter the employ of the competition, the North West Company. There he continued to work as a fur trader and surveyor.
Thompson's decision to defect to the North West Company (NWC) in 1797 without providing the customary one-year notice was not well received by his former employers. But the North West Company was more supportive of Thompson's interest in surveying and his work on mapping the interior of what was to become Canada, judging it to be in the company's long-term interest. In 1797, Thompson was sent south by his employers to survey part of the Canada–US boundary along the water routes from Lake Superior to Lake of the Woods, to settle unresolved questions of territory arising from the Jay Treaty between Great Britain and the United States after the American Revolutionary War. By 1798 Thompson had completed a survey of the route from Grand Portage, through Lake Winnipeg, to the headwaters of the Assiniboine and Mississippi rivers, as well as two sides of Lake Superior. In 1798, the company sent him to Red Deer Lake (Lac la Biche in present-day Alberta) to establish a trading post. (The English translation of Lac la Biche, Red Deer Lake, was first recorded on the Mackenzie map of 1793.) Thompson spent the next few seasons trading based at Fort George (now in Alberta), and during this time led several expeditions into the Rocky Mountains. On 10 July 1804, at the annual meeting of the North West Company in Kaministiquia, Thompson was made a full partner of the company. He became a "wintering partner", based in the field rather than Montreal, holding two of the NWC's 92 shares, worth more than £4,000. He spent the next few seasons there managing the fur trading operations, still finding time to expand his surveys of the waterways around Lake Superior. At the 1806 company meeting, officers decided to send Thompson back out into the interior. Concern over the American-backed expedition of Lewis and Clark prompted the North West Company to charge Thompson with the task of finding a route to the Pacific, to open up the lucrative trading territories of the Pacific Northwest. After the general meeting in 1806, Thompson travelled to Rocky Mountain House and prepared for an expedition to follow the Columbia River to the Pacific. In June 1807 Thompson crossed the Rocky Mountains and spent the summer surveying the Columbia basin; he continued to survey the area over the next few seasons. Thompson mapped and established trading posts in northwestern Montana, Idaho, Washington, and western Canada. Trading posts he founded included Kootenae House, Kullyspell House and Saleesh House; the latter two were the first trading posts west of the Rockies in Idaho and Montana, respectively. These posts extended North West Company fur trading territory into the Columbia Basin drainage area. The maps he made of the Columbia River basin east of the Cascade Mountains were of such high quality and detail that they continued to be regarded as authoritative well into the mid-20th century. In early 1810, Thompson was returning eastward toward Montreal but, while en route at Rainy Lake, received orders to return to the Rocky Mountains and establish a route to the mouth of the Columbia. The North West Company was responding to the plans of the American John Jacob Astor to send a ship around the Americas to establish a fur trading post of the Pacific Fur Company on the Pacific Coast. During his return, Thompson was delayed by an angry group of Peigan natives at Howse Pass. He was ultimately forced to seek a new route across the Rocky Mountains, and found one through the Athabasca Pass.
David Thompson was the first European to navigate the full length of the Columbia River. During Thompson's 1811 voyage down the Columbia River, he camped at the junction with the Snake River on 9 July 1811. There he erected a pole and a notice claiming the country for Great Britain and stating the intention of the North West Company to build a trading post at the site. This notice was found later that year by Astor company workers looking to establish an inland fur post, contributing to their selection of a more northerly site at Fort Okanogan. The North West Company established its post of Fort Nez Percés near the Snake River confluence several years later. Continuing down the Columbia, Thompson passed the barrier of The Dalles with much less difficulty than that experienced by Lewis and Clark, as high water carried his boat over Celilo Falls and many of the rapids. On 14 July 1811, Thompson reached the partially constructed Fort Astoria at the mouth of the Columbia, arriving two months after the Pacific Fur Company's ship, the "Tonquin". Before returning upriver and across the mountains, Thompson hired Naukane, a Native Hawaiian labourer brought to Fort Astoria by the "Tonquin". Naukane, known to Thompson as Coxe, accompanied him across the continent to Lake Superior before journeying on to England. Thompson wintered at Saleesh House before beginning his final journey back to Montreal in 1812, where the North West Company was based. In his published journals, Thompson recorded seeing large footprints near what is now Jasper, Alberta, in 1811. It has been suggested that these prints were similar to what has since been called the sasquatch. However, Thompson noted that these tracks showed "a small Nail at the end of each [toe]", and stated that the tracks "very much resembles a large Bear's Track". The years 1807–1812, in which he developed the commercial routes across the Rockies and mapped the lands they traverse, are the most carefully scrutinized of his career and constitute his most enduring historical legacy. In 1820, the English geologist John Jeremiah Bigsby attended a dinner party given by The Hon. William McGillivray at his home, Chateau St. Antoine, one of the early estates in Montreal's Golden Square Mile. He describes the party and some of the guests, including an excellent description of David Thompson, in his entertaining book "The Shoe and Canoe". On 10 June 1799 at Île-à-la-Crosse, Thompson married Charlotte Small, a thirteen-year-old Métis daughter of the Scottish fur trader Patrick Small and a Cree mother. Their marriage was formalised thirteen years later, at the Scotch Presbyterian Church in Montreal on 30 October 1812. He and Charlotte had 13 children together; five of them were born before he left the fur trade. The family did not adjust easily to life in Eastern Canada; they lived in Montreal while he was traveling. Two of the children, John (aged 5) and Emma (aged 7), died of roundworms, a common parasite. By the time of Thompson's death, the couple had been married 57 years, the longest marriage known in pre-Confederation Canada. Upon his arrival back in Montreal, Thompson retired with a generous pension from the North West Company. He settled in nearby Terrebonne and worked on completing his great map, a summary of his lifetime of exploring and surveying the interior of North America. The map covered the wide area stretching from Lake Superior to the Pacific, and was given by Thompson to the North West Company.
Thompson's 1814 map, his greatest achievement, was so accurate that 100 years later it was still the basis for many of the maps issued by the Canadian government. It now resides in the Archives of Ontario. In 1815, Thompson moved his family to Williamstown, Upper Canada, and a few years later was employed to survey the newly established borders with the United States, from Lake of the Woods to the Eastern Townships of Quebec, established by the Treaty of Ghent after the War of 1812. In 1843 Thompson completed his atlas of the region from Hudson Bay to the Pacific Ocean. Afterwards, Thompson returned to life as a landowner, but financial misfortune soon ruined him. By 1831 he was so deeply in debt that he was forced to take up a position as a surveyor for the British American Land Company to provide for his family. His luck continued to worsen, and he was forced to move in with his daughter and son-in-law in 1845. He began work on a manuscript chronicling his life exploring the continent, but this project was left unfinished when his sight failed him completely in 1851. The land mass mapped by Thompson amounted to one-fifth of the continent. His contemporary, the great explorer Alexander Mackenzie, remarked that Thompson did more in ten months than he would have thought possible in two years. Despite these significant achievements, Thompson died in Montreal in near obscurity on 10 February 1857, his accomplishments almost unrecognised. He never finished the book of his 28 years in the fur trade, based on his 77 field notebooks, before he died. In the 1890s the geologist J.B. Tyrrell resurrected Thompson's notes and in 1916 published them as "David Thompson's Narrative", as part of the General Series of the Champlain Society. Further editions and re-examinations of Thompson's life and works were published in 1962 by Richard Glover, in 1971 by Victor Hopwood, and in 2015 by William Moreau. Thompson's body was interred in Montreal's Mount Royal Cemetery in an unmarked grave. It was not until 1926 that efforts by J.B. Tyrrell and the Canadian Historical Society resulted in the placing of a tombstone to mark his grave. The next year, Thompson was named a National Historic Person by the federal government, one of the earliest such designations. A federal plaque reflecting that status is located at Jasper National Park, Alberta. Thompson's achievements are also central to several other national historic designations. In 1957, one hundred years after his death, Canada's post office department honoured him with his image on a postage stamp. The David Thompson Highway in Alberta was named in his honour, along with David Thompson High School, situated alongside the highway near Leslieville, Alberta. His prowess as a geographer is now well recognized; he has been called "the greatest land geographer that the world has produced." There is a monument dedicated to David Thompson (maintained by the state of North Dakota) near the former town site of the ghost town of Verendrye, North Dakota, located north and west of Karlsruhe, North Dakota. Thompson Falls, Montana, and British Columbia's Thompson River are also named after the explorer. The year 2007 marked the 150th anniversary of Thompson's death and the 200th anniversary of his first crossing of the Rocky Mountains. Commemorative events and exhibits were planned across Canada and the United States from 2007 to 2011 as a celebration of his accomplishments.
In 2007, a commemorative plaque was placed by the English author and TV presenter Ray Mears on a wall at the Grey Coat Hospital, the Westminster school for the disadvantaged that David Thompson attended as a boy. Thompson was the subject of a 1964 National Film Board of Canada short film, "David Thompson: The Great Mapmaker", as well as the BBC2 programme "Ray Mears' Northern Wilderness" (Episode 5), broadcast in November 2009. He is referenced in the 1981 folk song "Northwest Passage" by Stan Rogers. The national park service, Parks Canada, announced in 2018 that it had named its new research vessel the "David Thompson", to be used for underwater archaeology, including sea floor mapping, and for marine science in the Pacific, Atlantic, and Arctic Oceans and the Great Lakes. It will be the main platform for research on the Wrecks of HMS "Erebus" and HMS "Terror" National Historic Site. The David Thompson Astronomical Observatory at Fort William Historical Park was likewise named to commemorate David Thompson and his discoveries.
https://en.wikipedia.org/wiki?curid=7994
Dioscoreales The Dioscoreales are an order of monocotyledonous flowering plants in modern classification systems, such as the Angiosperm Phylogeny Group and the Angiosperm Phylogeny Web. Within the monocots the Dioscoreales are grouped in the lilioid monocots, where they are in a sister group relationship with the Pandanales. By definition the Dioscoreales contain the family Dioscoreaceae, which includes the yam ("Dioscorea"), used as an important food source in many regions around the globe. Older systems tended to place all lilioid monocots with reticulate-veined leaves (such as Smilacaceae and Stemonaceae, together with Dioscoreaceae) in Dioscoreales. As currently circumscribed by phylogenetic analysis using combined morphological and molecular methods, Dioscoreales contains many reticulate-veined vines in Dioscoreaceae; it also includes the myco-heterotrophic Burmanniaceae and the autotrophic Nartheciaceae. The order consists of three families, 22 genera and about 850 species. Dioscoreales are vines or herbaceous forest floor plants. They may be achlorophyllous or saprophytic. Synapomorphies include tuberous roots, glandular hairs, seed coat characteristics and the presence of calcium oxalate crystals. Other characteristics of the order include the presence of steroidal saponins and of annular vascular bundles, which are found in both the stem and leaf. The leaves are often unsheathed at the base, with a distinctive petiole and a reticulate-veined lamina; alternatively they may be small and scale-like with a sheathed base. The flowers are actinomorphic, and may be bisexual or dioecious, while the flowers or inflorescence bear glandular hairs. The perianth may be conspicuous or reduced, and the style is short with well-developed style branches. The tepals persist in the development of the fruit, which is a dry capsule or berry. In the seed, the endotegmen is tanniferous and the embryo short. All of the species except the genera placed in Nartheciaceae express simultaneous microsporogenesis. Plants in Nartheciaceae show successive microsporogenesis, one of the traits indicating that the family is sister to all the other members of the order. For the early history from Lindley (1853) onwards, see Caddick "et al." (2000) Table 1, Caddick "et al." (2002a) Table 1 and Table 2 in Bouman (1995). The taxonomic classification of Dioscoreales has been complicated by the presence of a number of morphological features reminiscent of the dicotyledons, leading some authors to place the order as intermediate between the monocotyledons and the dicotyledons. While Lindley did not use the term "Dioscoreales", he placed the family Dioscoreaceae together with four other families in what he referred to as an Alliance (the equivalent of the modern order) called Dictyogens. He reflected the uncertainty as to the place of this Alliance by placing it as a class of its own between Endogens (monocots) and Exogens (dicots). The botanical authority is given to von Martius (1835) by the APG for his description of the Dioscoreae family or "Ordo", while other sources cite Hooker (Dioscoreales Hook.f.) for his use of the term "Dioscoreales" in 1873 with a single family, Dioscoreae. However, in his more definitive work, the "Genera plantarum" (1883), he simply placed Dioscoreaceae in the Epigynae "Series". Although Charles Darwin's Origin of Species (1859) preceded Bentham and Hooker's publication, the latter project was commenced much earlier, and George Bentham was initially sceptical of Darwinism.
The new phyletic approach changed the way that taxonomists considered plant classification, incorporating evolutionary information into their schemata, but it did little to further define the circumscription of Dioscoreaceae. The major works in the late nineteenth and early twentieth century employing this approach were in the German literature. Authors such as Eichler, Engler and Wettstein placed this family in the Liliiflorae, a major subdivision of the monocotyledons. It remained to Hutchinson (1926) to resurrect the Dioscoreales to group Dioscoreaceae and related families together. Hutchinson's circumscription of Dioscoreales included three other families in addition to Dioscoreaceae: Stenomeridaceae, Trichopodaceae and Roxburghiaceae. Of these, only Trichopodaceae was included in the Angiosperm Phylogeny Group (APG) classification (see below), and it was subsumed into Dioscoreaceae. Stenomeridaceae, as "Stenomeris", was also included in Dioscoreaceae, as subfamily Stenomeridoideae, the remaining genera being grouped in subfamily Dioscoreoideae. Roxburghiaceae, on the other hand, was segregated into the sister order Pandanales as Stemonaceae. Most taxonomists in the twentieth century (the exception being the 1981 Cronquist system, which placed most such plants in the order Liliales, subclass Liliidae, class Liliopsida = monocotyledons, division Magnoliophyta = angiosperms) recognised Dioscoreales as a distinct order, but demonstrated wide variations in its composition. Dahlgren, in the second version of his taxonomic classification (1982), raised the Liliiflorae to a superorder and placed Dioscoreales as an order within it. In his system, Dioscoreales contained only three families: Dioscoreaceae, Stemonaceae ("i.e." Hutchinson's Roxburghiaceae) and Trilliaceae. The latter two families had been treated as a separate order (Stemonales, or Roxburghiales) by other authors, such as Huber (1969). The APG would later assign these to Pandanales and Liliales respectively. Dahlgren's construction of Dioscoreaceae included Stenomeridaceae and Trichopodaceae, as he doubted that these were distinct, and he placed Croomiaceae in Stemonaceae. Furthermore, he expressed doubts about the order's homogeneity, especially regarding Trilliaceae. The Dioscoreales at that time were only marginally distinguishable from the Asparagales. In his examination of Huber's Stemonales, he found that the two constituent families had as close an affinity to Dioscoreaceae as to each other, and hence included them. He also considered closely related families and their relationship to Dioscoreales, such as the monogeneric Taccaceae, then in its own order, Taccales. Similar considerations were discussed with respect to two Asparagales families, Smilacaceae and Petermanniaceae. In Dahlgren's third and final version (1985), that broader circumscription of Dioscoreales was created within the superorder Lilianae, subclass Liliidae (monocotyledons), class Magnoliopsida (angiosperms), and comprised the seven families Dioscoreaceae, Petermanniaceae, Smilacaceae, Stemonaceae, Taccaceae, Trichopodaceae and Trilliaceae. Thismiaceae has either been treated as a separate family closely related to Burmanniaceae or as a tribe (Thismieae) within a more broadly defined Burmanniaceae, forming a separate order, Burmanniales, in the Dahlgren system. The related Nartheciaceae were treated by Dahlgren as tribe Narthecieae within the Melanthiaceae, in a third order, the Melanthiales.
Dahlgren considered the Dioscoreales to most strongly resemble the ancestral monocotyledons, and hence to share "dicotyledonous" characteristics, making it the most central monocotyledon order. Of these seven families, Bouman considered Dioscoreaceae, Trichopodaceae, Stemonaceae and Taccaceae to represent the "core" families of the order. However, that study also indicated both a clear delineation of the order from other orders, particularly Asparagales, and a lack of homogeneity within the order. The increasing availability of molecular phylogenetic methods, in addition to morphological characteristics, in the 1990s led to major reconsiderations of the relationships within the monocotyledons. In a large multi-institutional examination of the seed plants using the plastid gene "rbc"L, the authors used Dahlgren's system as their basis, but followed Thorne (1992) in altering the suffixes of the superorders from ""-iflorae"" to ""-anae"". This demonstrated that the Lilianae comprised three lineages corresponding to Dahlgren's Dioscoreales, Liliales, and Asparagales orders. Under the Angiosperm Phylogeny Group system of 1998, which took Dahlgren's system as a basis, the order was placed in the monocot clade and comprised the five families Burmanniaceae, Dioscoreaceae, Taccaceae, Thismiaceae and Trichopodaceae. In APG II (2003), a number of changes were made to Dioscoreales as a result of an extensive study by Caddick and colleagues (2002), using an analysis of three genes, "rbc"L, "atp"B and 18S rDNA, in addition to morphology. These studies resulted in a re-examination of the relationships between most of the genera within the order. Thismiaceae was shown to be a sister group to Burmanniaceae, and so was included in it. The monotypic families Taccaceae and Trichopodaceae were included in Dioscoreaceae, while Nartheciaceae could also be grouped within Dioscoreales. APG III (2009) did not change this, so the order now comprises three families: Burmanniaceae, Dioscoreaceae and Nartheciaceae. Although further research on the deeper relationships within Dioscoreales continues, the APG IV (2016) authors felt it was still premature to propose a restructuring of the order. Specifically, the issues involve conflicting information as to the relationship between "Thismia" and Burmanniaceae, and hence whether Thismiaceae should be subsumed in the latter or reinstated. Molecular phylogenetics in Dioscoreales poses special problems due to the absence of plastid genes in mycoheterotrophs. Dioscoreales is monophyletic and is placed as the sister order to Pandanales. Data on the evolution of the order come from molecular analyses, since no relevant fossils have been found. It is estimated that Dioscoreales and its sister clade Pandanales split about 121 million years ago, during the Early Cretaceous, when the stem group was formed; it then took 3 to 6 million years for the crown group to differentiate, in the Mid Cretaceous. The three families of Dioscoreales constitute about 22 genera and about 849 species, making it one of the smaller monocot orders. Of these, the largest group is "Dioscorea" (yams), with about 450 species. By contrast, the second largest genus is "Burmannia", with about 60 species, and most genera have only one or two. Some authors, preferring the original APG (1998) families, continue to treat Thismiaceae separately from Burmanniaceae, and Taccaceae from Dioscoreaceae.
But in the 2015 study of Hertweck and colleagues, seven genera representing all three families were examined with an eight-gene dataset. Dioscoreales was monophyletic, and three subclades were represented, corresponding to the APG families; Dioscoreaceae and Burmanniaceae were in a sister group relationship. The order is named after the type genus "Dioscorea", which in turn was named by Linnaeus in 1753 to honour the Greek physician and botanist Dioscorides. Species from this order are distributed across all of the continents except Antarctica. They are mainly tropical or subtropical, but there are members of the Dioscoreaceae and Nartheciaceae found in the cooler regions of Europe and North America. The order Dioscoreales contains plants that are able to form an underground organ for the storage of nutrients, as do many other monocots. An exception is the family Burmanniaceae, which is entirely myco-heterotrophic and contains species that lack photosynthetic abilities. The three families included in the order also represent three different ecological groups of plants: Dioscoreaceae contains mainly vines ("Dioscorea") and other crawling species ("Epipetrum"); Nartheciaceae is composed of herbaceous plants with a rather lily-like appearance ("Aletris"); and Burmanniaceae is an entirely myco-heterotrophic group. Many members of Dioscoreaceae produce tuberous, starchy roots (yams), which form staple foods in tropical regions. They have also been a source of steroids for the pharmaceutical industry, including for the production of oral contraceptives.
https://en.wikipedia.org/wiki?curid=7995
Dentistry Dentistry, also known as dental medicine and oral medicine, is a branch of medicine that consists of the study, diagnosis, prevention, and treatment of diseases, disorders, and conditions of the oral cavity, commonly in the dentition but also the oral mucosa, and of adjacent and related structures and tissues, particularly in the maxillofacial (jaw and facial) area. Although primarily associated with teeth among the general public, the field of dentistry or dental medicine is not limited to teeth but includes other aspects of the craniofacial complex, including the temporomandibular joint and other supporting muscular, lymphatic, nervous, vascular, and anatomical structures. Dentistry is often also understood to subsume the now largely defunct medical specialty of stomatology (the study of the mouth and its disorders and diseases), for which reason the two terms are used interchangeably in certain regions. Dental treatments are carried out by a dental team, which often consists of a dentist and dental auxiliaries (dental assistants, dental hygienists, dental technicians, as well as dental therapists). Most dentists work either in private practice (primary care) or in dental hospitals and secondary care institutions (prisons, armed forces bases, etc.). The history of dentistry is almost as ancient as the history of humanity and civilization, with the earliest evidence dating from 7000 BC. Remains from the early Harappan periods of the Indus Valley Civilization show evidence of teeth having been drilled as long as 9,000 years ago. It is thought that dental surgery was the first specialization to emerge from medicine. The modern movement of evidence-based dentistry calls for the use of high-quality scientific evidence to guide decision-making. The term dentistry comes from "dentist", which comes from the French "dentiste", which comes from the French and Latin words for tooth. The term for the associated scientific study of teeth is odontology (from Ancient Greek ὀδούς (odoús, "tooth")) – the study of the structure, development, and abnormalities of the teeth. Dentistry usually encompasses practices related to the oral cavity. According to the World Health Organization, oral diseases are major public health problems due to their high incidence and prevalence across the globe, with the disadvantaged affected more than other socio-economic groups. The majority of dental treatments are carried out to prevent or treat the two most common oral diseases, which are dental caries (tooth decay) and periodontal disease (gum disease or pyorrhea). Common treatments involve the restoration of teeth, extraction or surgical removal of teeth, scaling and root planing, endodontic root canal treatment, and cosmetic dentistry. All dentists in the United States undergo at least three years of undergraduate studies, but nearly all complete a bachelor's degree. This schooling is followed by four years of dental school to qualify as a "Doctor of Dental Surgery" (DDS) or "Doctor of Dental Medicine" (DMD). Specialization in dentistry is available in the fields of Dental Public Health, Endodontics, Oral Radiology, Oral and Maxillofacial Surgery, Oral Medicine and Pathology, Orthodontics, Pediatric Dentistry, Periodontics, and Prosthodontics.
By nature of their general training, dentists can carry out the majority of dental treatments, such as restorative (fillings, crowns, bridges), prosthetic (dentures), endodontic (root canal) therapy, periodontal (gum) therapy, and extraction of teeth, as well as performing examinations, radiographs (x-rays), and diagnosis. Dentists can also prescribe medications such as antibiotics, sedatives, and any other drugs used in patient management. Depending on their licensing boards, general dentists may be required to complete additional training to perform sedation, dental implants, etc. Dentists also encourage the prevention of oral diseases through proper hygiene and regular checkups (twice yearly or more) for professional cleaning and evaluation. Oral infections and inflammations may affect overall health, and conditions in the oral cavity may be indicative of systemic diseases such as osteoporosis, diabetes, celiac disease or cancer. Many studies have also shown that gum disease is associated with an increased risk of diabetes, heart disease, and preterm birth. The concept that oral health can affect systemic health and disease is referred to as "oral-systemic health". Dr. John M. Harris started the world's first dental school in Bainbridge, Ohio, and helped to establish dentistry as a health profession. It opened on 21 February 1828, and today is a dental museum. The first dental college, the Baltimore College of Dental Surgery, opened in Baltimore, Maryland, US, in 1840. The second in the United States was the Ohio College of Dental Surgery, established in Cincinnati, Ohio, in 1845. The Philadelphia College of Dental Surgery followed in 1852. In 1907, Temple University accepted a bid to incorporate the school. Studies show that dentists who graduated from different countries, or even from different dental schools in one country, may make different clinical decisions for the same clinical condition. For example, dentists who graduated from Israeli dental schools may recommend the removal of asymptomatic impacted third molars (wisdom teeth) more often than dentists who graduated from Latin American or Eastern European dental schools. In the United Kingdom, the 1878 British Dentists Act and the 1879 Dentists Register limited the titles of "dentist" and "dental surgeon" to qualified and registered practitioners. However, others could legally describe themselves as "dental experts" or "dental consultants". The practice of dentistry in the United Kingdom became fully regulated with the 1921 Dentists Act, which required the registration of anyone practising dentistry. The British Dental Association, formed in 1880 with Sir John Tomes as president, played a major role in prosecuting dentists practising illegally. Dentists in the United Kingdom are now regulated by the General Dental Council. In Korea, Taiwan, Japan, Finland, Sweden, Brazil, Chile, the United States, and Canada, a dentist is a healthcare professional qualified to practice dentistry after graduating with a degree of either Doctor of Dental Surgery (DDS) or Doctor of Dental Medicine (DMD). This is equivalent to the Bachelor of Dental Surgery/Baccalaureus Dentalis Chirurgiae (BDS, BDent, BChD, BDSc) awarded in the UK and British Commonwealth countries. In most western countries, to become a qualified dentist one must usually complete at least four years of postgraduate study; within the European Union the education must last at least five years.
Dentists usually complete between five and eight years of post-secondary education before practising. Though not mandatory, many dentists choose to complete an internship or residency focusing on specific aspects of dental care after they have received their dental degree. Some dentists undertake further training after their initial degree in order to specialize; exactly which subjects are recognized by dental registration bodies varies according to location. Tooth decay was low in pre-agricultural societies, but the advent of farming society about 10,000 years ago correlated with an increase in tooth decay (cavities). An infected tooth from Italy partially cleaned with flint tools, between 13,820 and 14,160 years old, represents the oldest known dentistry, although a 2017 study suggests that Neanderthals were already using rudimentary dental tools 130,000 years ago. The Indus Valley Civilization (IVC) has yielded evidence of dentistry being practised as far back as 7000 BC. An IVC site in Mehrgarh indicates that this form of dentistry involved curing tooth-related disorders with bow drills operated, perhaps, by skilled bead crafters. The reconstruction of this ancient form of dentistry showed that the methods used were reliable and effective. The earliest dental filling, made of beeswax, was discovered in Slovenia and dates from 6,500 years ago. Dentistry was practiced in prehistoric Malta, as evidenced by a skull which had an abscess lanced from the root of a tooth dating back to around 2500 BC. An ancient Sumerian text describes a "tooth worm" as the cause of dental caries. Evidence of this belief has also been found in ancient India, Egypt, Japan, and China. The legend of the worm is also found in the "Homeric Hymns", and as late as the 14th century AD the surgeon Guy de Chauliac still promoted the belief that worms cause tooth decay. Recipes for the treatment of toothache, infections and loose teeth are spread throughout the Ebers Papyrus, Kahun Papyri, Brugsch Papyrus, and Hearst Papyrus of Ancient Egypt. The Edwin Smith Papyrus, written in the 17th century BC but which may reflect previous manuscripts from as early as 3000 BC, discusses the treatment of dislocated or fractured jaws. In the 18th century BC, the Code of Hammurabi referenced dental extraction twice as it related to punishment. Examination of the remains of some ancient Egyptians and Greco-Romans reveals early attempts at dental prosthetics. However, it is possible the prosthetics were prepared after death for aesthetic reasons. Ancient Greek scholars Hippocrates and Aristotle wrote about dentistry, including the eruption pattern of teeth, treating decayed teeth and gum disease, extracting teeth with forceps, and using wires to stabilize loose teeth and fractured jaws. Some say the first use of dental appliances or bridges comes from the Etruscans from as early as 700 BC. In ancient Egypt, Hesy-Ra is the first named "dentist" (greatest of the teeth). The Egyptians bound replacement teeth together with gold wire. Roman medical writer Cornelius Celsus wrote extensively of oral diseases as well as dental treatments such as narcotic-containing emollients and astringents. The earliest dental amalgams were first documented in a Tang Dynasty medical text written by the Chinese physician Su Kung in 659, and appeared in Germany in 1528. 
During the Islamic Golden Age, dentistry was discussed in several famous books of medicine, such as The Canon of Medicine written by Avicenna and Al-Tasrif by Al-Zahrawi, who is considered the greatest surgeon of the Middle Ages. Avicenna said that jaw fractures should be reduced according to the occlusal guidance of the teeth, a principle that is still valid in modern times, while Al-Zahrawi made many surgical tools that resemble modern instruments. Historically, dental extractions have been used to treat a variety of illnesses. During the Middle Ages and throughout the 19th century, dentistry was not a profession in itself, and often dental procedures were performed by barbers or general physicians. Barbers usually limited their practice to extracting teeth, which alleviated pain and associated chronic tooth infection. Instruments used for dental extractions date back several centuries. In the 14th century, Guy de Chauliac most probably invented the dental pelican (resembling a pelican's beak), which was used to perform dental extractions up until the late 18th century. The pelican was replaced by the dental key which, in turn, was replaced by modern forceps in the 19th century. The first book focused solely on dentistry was the "Artzney Buchlein" in 1530, and the first dental textbook written in English was called "Operator for the Teeth" by Charles Allen in 1685. In the United Kingdom there was no formal qualification for the providers of dental treatment until 1859, and it was only in 1921 that the practice of dentistry was limited to those who were professionally qualified. The Royal Commission on the National Health Service in 1979 reported that there were then more than twice as many registered dentists per 10,000 population in the UK than there were in 1921. It was between 1650 and 1800 that the science of modern dentistry developed. The English physician Thomas Browne in his "A Letter to a Friend" (pub. 1690) made an early dental observation with characteristic humour. The French surgeon Pierre Fauchard became known as the "father of modern dentistry". Despite the limitations of the primitive surgical instruments of the late 17th and early 18th century, Fauchard was a highly skilled surgeon who made remarkable improvisations of dental instruments, often adapting tools from watchmakers, jewelers and even barbers that he thought could be used in dentistry. He introduced dental fillings as treatment for dental cavities. He asserted that sugar-derived acids such as tartaric acid were responsible for dental decay, and also suggested that tumors surrounding the teeth and in the gums could appear in the later stages of tooth decay. Fauchard was the pioneer of dental prosthesis, and he discovered many methods to replace lost teeth. He suggested that substitutes could be made from carved blocks of ivory or bone. He also introduced dental braces; although they were initially made of gold, he discovered that the position of the teeth could be corrected as the teeth would follow the pattern of the wires. Waxed linen or silk threads were usually employed to fasten the braces. His contributions to the world of dental science consist primarily of his 1728 publication Le chirurgien dentiste or The Surgeon Dentist. The French text included "basic oral anatomy and function, dental construction, and various operative and restorative techniques, and effectively separated dentistry from the wider category of surgery". After Fauchard, the study of dentistry rapidly expanded. 
Two important books, "Natural History of Human Teeth" (1771) and "Practical Treatise on the Diseases of the Teeth" (1778), were published by British surgeon John Hunter. In 1763 he entered into a period of collaboration with the London-based dentist James Spence. He began to theorise about the possibility of tooth transplants from one person to another. He realised that the chances of an (initially, at least) successful tooth transplant would be improved if the donor tooth was as fresh as possible and was matched for size with the recipient. These principles are still used in the transplantation of internal organs. Hunter conducted a series of pioneering operations, in which he attempted a tooth transplant. Although the donated teeth never properly bonded with the recipients' gums, one of Hunter's patients stated that he had three which lasted for six years, a remarkable achievement for the period. Major advances were made in the 19th century, and dentistry evolved from a trade to a profession. The profession came under government regulation by the end of the 19th century. In the UK the Dentist Act was passed in 1878 and the British Dental Association formed in 1879. In the same year, Francis Brodie Imlach was the first ever dentist to be elected President of the Royal College of Surgeons (Edinburgh), raising dentistry onto a par with clinical surgery for the first time. Long term occupational noise exposure can contribute to permanent hearing loss, which is referred to as noise-induced hearing loss (NIHL) and tinnitus. Noise exposure can cause excessive stimulation of the hearing mechanism, which damages the delicate structures of the inner ear. NIHL can occur when an individual is exposed to sound levels above 90 dBA according to the Occupational Safety and Health Administration (OSHA). Regulations state that the permissible noise exposure levels for individuals is 90 dBA. For the National Institute for Occupational Safety and Health (NIOSH), exposure limits are set to 85 dBA. Exposures below 85 dBA are not considered to be hazardous. Time limits are placed on how long an individual can stay in an environment above 85 dBA before it causes hearing loss. OSHA places that limitation at 8 hours for 85 dBA. The exposure time becomes shorter as the dBA level increases. Within the field of dentistry, a variety of cleaning tools are used including piezoelectric and sonic scalers, and ultrasonic scalers and cleaners. While a majority of the tools do not exceed 75 dBA, prolonged exposure over many years can lead to hearing loss or complaints of tinnitus. Few dentists have reported using personal hearing protective devices, which could offset any potential hearing loss or tinnitus. There is a movement in modern dentistry to place a greater emphasis on high-quality scientific evidence in decision-making. Evidence-based dentistry (EBD) uses current scientific evidence to guide decisions. It is an approach to oral health that requires the application and examination of relevant scientific data related to the patient's oral and medical health. Along with the dentist's professional skill and expertise, EBD allows dentists to stay up to date on the latest procedures and patients to receive improved treatment. A new paradigm for medical education designed to incorporate current research into education and practice was developed to help practitioners provide the best care for their patients. 
It was first introduced by Gordon Guyatt and the Evidence-Based Medicine Working Group at McMaster University in Ontario, Canada in the 1990s. It is part of the larger movement toward evidence-based medicine and other evidence-based practices. Dentistry is unique in that it requires dental students to have competence-based clinical skills that can only be acquired through specialized laboratory training and direct patient care. This necessitates a scientific and professional basis of care with a foundation of research-based education. The accreditation of dental schools plays a role in enhancing the quality of dental education. At the same time, controversial articles that are not evidence-based and rely on opinion continue to be published and attract attention.
https://en.wikipedia.org/wiki?curid=8005
Diameter In geometry, a diameter of a circle is any straight line segment that passes through the center of the circle and whose endpoints lie on the circle. It can also be defined as the longest chord of the circle. Both definitions are also valid for the diameter of a sphere. In more modern usage, the length of a diameter is also called the diameter. In this sense one speaks of "the" diameter rather than "a" diameter (which refers to the line segment itself), because all diameters of a circle or sphere have the same length, this being twice the radius r. For a convex shape in the plane, the diameter is defined to be the largest distance that can be formed between two opposite parallel lines tangent to its boundary, and the "width" is often defined to be the smallest such distance. Both quantities can be calculated efficiently using rotating calipers. For a curve of constant width such as the Reuleaux triangle, the width and diameter are the same because all such pairs of parallel tangent lines have the same distance. For an ellipse, the standard terminology is different. A diameter of an ellipse is any chord passing through the center of the ellipse. For example, conjugate diameters have the property that a tangent line to the ellipse at the endpoint of one of them is parallel to the other one. The longest diameter is called the major axis. The word "diameter" is derived from Greek διάμετρος ("diametros"), "diameter of a circle", from διά ("dia"), "across, through" and μέτρον ("metron"), "measure". It is often abbreviated DIA, dia, d, or ⌀. The definitions given above are only valid for circles, spheres and convex shapes. However, they are special cases of a more general definition that is valid for any kind of "n"-dimensional convex or non-convex object, such as a hypercube or a set of scattered points. The diameter of a subset of a metric space is the least upper bound of the set of all distances between pairs of points in the subset. So, if "A" is the subset, the diameter is diam(A) = sup { d(x, y) : x, y ∈ A }. If the distance function d is viewed here as having codomain R (the set of all real numbers), this implies that the diameter of the empty set (the case A = ∅) equals −∞ (negative infinity). Some authors prefer to treat the empty set as a special case, assigning it a diameter equal to 0, which corresponds to taking the codomain of d to be the set of nonnegative reals. For any solid object or set of scattered points in n-dimensional Euclidean space, the diameter of the object or set is the same as the diameter of its convex hull. In medical parlance concerning a lesion or in geology concerning a rock, the diameter of an object is the supremum of the set of all distances between pairs of points in the object. In differential geometry, the diameter is an important global Riemannian invariant. In plane geometry, a diameter of a conic section is typically defined as any chord which passes through the conic's centre; such diameters are not necessarily of uniform length, except in the case of the circle, which has eccentricity "e" = 0. The symbol or variable for diameter, ⌀, is sometimes used in technical drawings or specifications as a prefix or suffix for a number (e.g. "⌀ 55 mm"), indicating that it represents the diameter. For example, photographic filter thread sizes are often denoted in this way. In German, the diameter symbol is also used as an average symbol ("Durchschnittszeichen"). It is similar in size and design to ø, the Latin small letter o with stroke. 
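For a finite set of scattered points, the metric-space definition above reduces to the largest pairwise distance, which is easy to compute directly. A minimal Python sketch (brute force; for many points in the plane one would first take the convex hull and apply rotating calipers, as noted above):

    from itertools import combinations
    from math import dist  # Euclidean distance between two points

    def set_diameter(points):
        """Diameter of a finite point set: the largest distance
        between any pair of its points (an O(n^2) scan)."""
        return max(dist(p, q) for p, q in combinations(points, 2))

    # The four corners of a unit square: the diameter is the
    # diagonal, sqrt(2), about 1.4142.
    print(set_diameter([(0, 0), (1, 0), (0, 1), (1, 1)]))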
The diameter symbol ⌀ is distinct from the empty set symbol ∅, from an (italic) uppercase phi Φ, and from the Nordic vowel Ø (Latin capital letter O with stroke). See also slashed zero. The symbol has a Unicode code point at U+2300, in the Miscellaneous Technical set. On an Apple Macintosh, the diameter symbol can be entered via the character palette, where it can be found in the Technical Symbols category. On UNIX-like operating systems (including Linux and ChromeOS) it can be generated using a Compose key sequence. The character will sometimes not display correctly, however, since many fonts do not include it. In many situations the letter ø (the Latin small letter o with stroke, U+00F8) is an acceptable substitute; on a Macintosh it can be typed with Option+O (the letter o, not the number 0). AutoCAD makes the symbol available as the shortcut string %%c. In Microsoft Word the diameter symbol can be acquired by typing 2300 and then pressing Alt+X. In LaTeX the diameter symbol can be obtained with the command \diameter from the wasysym package.
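Since the relevant code points are fixed by Unicode, the characters can also be produced programmatically; a trivial Python illustration:

    # U+2300 is the diameter sign; U+00F8 is the common substitute.
    print("\u2300")  # ⌀ (Miscellaneous Technical block)
    print("\u00f8")  # ø (Latin small letter o with stroke)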
https://en.wikipedia.org/wiki?curid=8007
Alcohol intoxication Alcohol intoxication, also known as drunkenness or alcohol poisoning, is the negative behavior and physical effects due to the recent drinking of alcohol. Symptoms at lower doses may include mild sedation and poor coordination. At higher doses, there may be slurred speech, trouble walking, and vomiting. Extreme doses may result in respiratory depression, coma, or death. Complications may include seizures, aspiration pneumonia, injuries including suicide, and low blood sugar. Alcohol intoxication can lead to alcohol-related crime, with perpetrators more likely to be intoxicated than victims. Alcohol intoxication typically begins after two or more alcoholic drinks. Risk factors include a social situation where heavy drinking is common and a person having an impulsive personality. Diagnosis is usually based on the history of events and physical examination. Verification of events by witnesses may be useful. Legally, alcohol intoxication is often defined as a blood alcohol concentration (BAC) of greater than 5.4–17.4 mmol/L (25–80 mg/dL or 0.025–0.080%). This can be measured by blood or breath testing. Alcohol is broken down in the human body at a rate of about 3.3 mmol/L (15 mg/dL) per hour. Management of alcohol intoxication involves supportive care. Typically this includes putting the person in the recovery position, keeping the person warm, and making sure breathing is sufficient. Gastric lavage and activated charcoal have not been found to be useful. Repeated assessments may be required to rule out other potential causes of a person's symptoms. Alcohol intoxication is very common, especially in the Western world. Most people who drink alcohol have at some time been intoxicated. In the United States, acute intoxication directly results in about 2,200 deaths per year, and indirectly more than 30,000 deaths per year. Acute intoxication has been documented throughout history, and alcohol remains one of the world's most widespread recreational drugs. Some religions consider alcohol intoxication to be a sin. Alcohol intoxication is the negative health effects due to the recent drinking of ethanol (alcohol). When severe it may become a medical emergency. Some effects of alcohol intoxication, such as euphoria and lowered social inhibition, are central to alcohol's desirability. Alcohol is metabolized by a normal liver at the rate of about 8 grams of pure ethanol per hour; 8 grams (about 10 millilitres) is one British standard unit. An "abnormal" liver with conditions such as hepatitis, cirrhosis, gall bladder disease, and cancer is likely to result in a slower rate of metabolism. Ethanol is metabolised to acetaldehyde by alcohol dehydrogenase (ADH), which is found in many tissues, including the gastric mucosa. Acetaldehyde is metabolised to acetate by acetaldehyde dehydrogenase (ALDH), which is found predominantly in liver mitochondria. Acetate is used by the muscle cells to produce acetyl-CoA using the enzyme acetyl-CoA synthetase, and the acetyl-CoA is then used in the citric acid cycle. As drinking increases, people become sleepy or fall into a stupor. After a very high level of consumption, the respiratory system becomes depressed and the person will stop breathing. Comatose patients may aspirate their vomit (resulting in vomitus in the lungs, which may cause "drowning" and, if the person survives, pneumonia). CNS depression and impaired motor co-ordination along with poor judgment increase the likelihood of accidental injury occurring. 
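The roughly constant elimination rate quoted above (about 15 mg/dL per hour) implies a simple back-of-the-envelope estimate of how long an elevated blood alcohol level takes to fall. A hedged Python sketch; the function name and default threshold are illustrative only, and real elimination rates vary from person to person:

    def hours_until_bac(current_mg_dl, target_mg_dl=80.0, rate_mg_dl_per_h=15.0):
        """Rough time for blood alcohol to fall to a target level,
        assuming a constant (zero-order) elimination rate."""
        return max(0.0, (current_mg_dl - target_mg_dl) / rate_mg_dl_per_h)

    # From 150 mg/dL down to 80 mg/dL, the top of the commonly
    # used legal range: about 4.7 hours.
    print(f"{hours_until_bac(150):.1f} h")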
It is estimated that about one-third of alcohol-related deaths are due to accidents and another 14% are from intentional injury. In addition to respiratory failure and accidents caused by effects on the central nervous system, alcohol causes significant metabolic derangements. Hypoglycaemia occurs due to ethanol's inhibition of gluconeogenesis, especially in children, and may cause lactic acidosis, ketoacidosis, and acute kidney injury. Metabolic acidosis is compounded by respiratory failure. Patients may also present with hypothermia. In the past, alcohol was believed to be a non-specific pharmacological agent affecting many neurotransmitter systems in the brain. However, molecular pharmacology studies have shown that alcohol has only a few primary targets. In some systems, these effects are facilitatory and in others inhibitory. Among the neurotransmitter systems with enhanced functions are GABAA receptors (where ethanol acts as a positive allosteric modulator), glycinergic and cholinergic (nicotinic acetylcholine) receptors, and 5-HT3 receptors. Among those that are inhibited are NMDA receptors, dihydropyridine-sensitive L-type Ca2+ channels and G-protein-activated inwardly rectifying K+ channels. The result of these direct effects is a wave of further indirect effects involving a variety of other neurotransmitter and neuropeptide systems, leading finally to the behavioural or symptomatic effects of alcohol intoxication. The order in which different types of alcohol are consumed ("Grape or grain but never the twain" and "Beer before wine and you'll feel fine; wine before beer and you'll feel queer") does not have any effect. Activating GABAA receptors produces many of the same effects as ethanol consumption. Some of these effects include anxiolytic, anticonvulsant, sedative, and hypnotic effects, cognitive impairment, and motor incoordination. This correlation between activating GABAA receptors and the effects of ethanol consumption has led to the study of ethanol and its effects on GABAA receptors. It has been shown that ethanol does in fact exhibit positive allosteric binding properties to GABAA receptors. However, its effects are limited to pentamers containing the δ-subunit rather than the γ-subunit. GABAA receptors containing the δ-subunit have been shown to be located exterior to the synapse and are involved with tonic inhibition rather than their γ-subunit counterparts, which are involved in phasic inhibition. The δ-subunit has been shown to be able to form the allosteric binding site which makes GABAA receptors containing the δ-subunit more sensitive to ethanol concentrations, even at moderate social ethanol consumption levels (30 mM). While it has been shown by Santhakumar et al. that GABAA receptors containing the δ-subunit are sensitive to ethanol modulation, receptors can be more or less sensitive to ethanol depending on their subunit combination. It has been shown that GABAA receptors that contain both δ and β3-subunits display increased sensitivity to ethanol. One such receptor that exhibits ethanol insensitivity is α3-β6-δ GABAA. It has also been shown that subunit combination is not the only thing that contributes to ethanol sensitivity. Location of GABAA receptors within the synapse may also contribute to ethanol sensitivity. Definitive diagnosis relies on a blood test for alcohol, usually performed as part of a toxicology screen. 
Law enforcement officers in the United States and other countries often use breathalyzer units and field sobriety tests as more convenient and rapid alternatives to blood tests. There are also various models of breathalyzer units available for consumer use. Because these may have varying reliability and may produce different results than the tests used for law-enforcement purposes, the results from such devices should be conservatively interpreted. Many informal intoxication tests exist, which, in general, are unreliable and not recommended as deterrents to excessive intoxication or as indicators of the safety of activities such as motor vehicle driving, heavy equipment operation, machine tool use, etc. For determining whether someone is intoxicated by alcohol by some means other than a blood-alcohol test, it is necessary to rule out other conditions such as hypoglycemia, stroke, usage of other intoxicants, mental health issues, and so on. It is best if the subject's behavior has been observed while sober, to establish a baseline. Several well-known criteria can be used to establish a probable diagnosis. For a physician in the acute-treatment setting, acute alcohol intoxication can mimic other acute neurological disorders or be combined with other recreational drugs, complicating diagnosis and treatment. Acute alcohol poisoning is a medical emergency due to the risk of death from respiratory depression or aspiration of vomit if vomiting occurs while the person is unresponsive. Emergency treatment strives to stabilize and maintain an open airway and sufficient breathing, while waiting for the alcohol to metabolize. This can be done by removal of any vomitus or, if the person is unconscious or has an impaired gag reflex, intubation of the trachea. Other supportive measures may be required, and additional medication may be indicated for treatment of nausea, tremor, and anxiety. A normal liver detoxifies the blood of alcohol over a period of time that depends on the initial level and the patient's overall physical condition. An abnormal liver will take longer but still succeeds, provided the alcohol does not cause liver failure. People who have drunk heavily for several days or weeks may have withdrawal symptoms after the acute intoxication has subsided. A person consuming a dangerous amount of alcohol persistently can develop memory blackouts and idiosyncratic intoxication or pathological drunkenness symptoms. Long-term persistent consumption of excessive amounts of alcohol can cause liver damage and have other deleterious health effects. Alcohol intoxication is a risk factor in some cases of catastrophic injury, in particular for unsupervised recreational activity. A study in the province of Ontario based on epidemiological data from 1986, 1989, 1992, and 1995 states that 79.2% of the 2,154 catastrophic injuries recorded for the study were preventable, of which 346 (17%) involved alcohol consumption. The activities most commonly associated with alcohol-related catastrophic injury were snowmobiling (124), fishing (41), diving (40), boating (31), swimming (31), riding an all-terrain vehicle (24), cycling (23), and canoeing (7). These events are often associated with unsupervised young males, often inexperienced in the activity, and many result in drowning. Alcohol use is also associated with unsafe sex. Laws on drunkenness vary. 
In the United States, it is a criminal offense for a person to be drunk while driving a motorized vehicle, except in Wisconsin, where it is only a fine for the first offense. It is also a criminal offense to fly an aircraft or (in some American states) to assemble or operate an amusement park ride while drunk. Similar laws also exist in the United Kingdom and most other countries. In some countries, it is also an offense to serve alcohol to an already-intoxicated person, and, often, alcohol can be sold only by persons qualified to serve responsibly through alcohol server training. The blood alcohol concentration (BAC) limit for legal operation of a vehicle is typically measured as a percentage of a unit volume of blood. This percentage ranges from 0.00% in Romania and the United Arab Emirates; to 0.05% in Australia, South Africa, Germany, Scotland and New Zealand (0.00% for underage individuals); to 0.08% in England and Wales, the United States (0.00% for underage individuals) and Canada. The United States Federal Aviation Administration prohibits crew members from performing their duties within eight hours of consuming an alcoholic beverage, while under the influence of alcohol, or with a BAC greater than 0.04%. In the United States, the United Kingdom, and Australia, public intoxication is a crime (also known as "being drunk and disorderly" or "being drunk and incapable"). In some countries, there are special facilities, sometimes known as "drunk tanks", for the temporary detention of persons found to be drunk. Some religious groups permit the consumption of alcohol. Some permit consumption but prohibit intoxication, while others prohibit alcohol consumption altogether. Many Christian denominations such as Catholic, Orthodox, and Lutheran use wine as a part of the Eucharist and permit the drinking of alcohol but consider it sinful to become intoxicated. In the Bible, the Book of Proverbs contains several chapters dealing with the bad effects of drunkenness and warnings to stay away from intoxicating beverages. The book of Leviticus tells of Nadab and Abihu, Aaron the Priest's eldest sons, who were killed for serving in the temple after drinking wine, presumably while intoxicated. The Bible also discusses the nazirite vow, under which drinking wine is prohibited. The story of Samson in the Book of Judges tells of a nazirite from the tribe of Dan who is prohibited from cutting his hair and drinking wine. Romans 13:13–14, 1 Corinthians 6:9–11, Galatians 5:19–21, and Ephesians 5:18 are among a number of other Bible passages that speak against drunkenness. While Proverbs 31:4 warns against kings and rulers drinking wine and strong drink, Proverbs 31:6–7 promotes giving strong drink to the perishing and wine to those whose lives are bitter, to forget their poverty and troubles. Some Protestant Christian denominations prohibit the drinking of alcohol based upon Biblical passages that condemn drunkenness, but others allow moderate use of alcohol. In some Christian groups, a small amount of wine is part of the rite of communion. In the Church of Jesus Christ of Latter-day Saints, alcohol consumption is forbidden, and teetotalism has become a distinguishing feature of its members. Jehovah's Witnesses allow moderate alcohol consumption among their members. In the Qur'an, there is a prohibition on the consumption of grape-based alcoholic beverages, and intoxication is considered an abomination in the Hadith. 
Islamic schools of law (Madh'hab) have interpreted this as a strict prohibition of the consumption of all types of alcohol and declared it to be haraam ("forbidden"), although other uses may be permitted. In Buddhism, in general, the consumption of intoxicants is discouraged for both monastics and lay followers. Many followers of Buddhism observe a code of conduct known as the five precepts, of which the fifth precept is an undertaking to refrain from the consumption of intoxicating substances (except for medical reasons). In the "bodhisattva" vows of the "Brahma Net Sūtra", observed by Mahāyāna Buddhist communities, distribution of intoxicants is likewise discouraged, as well as consumption. In the branch of Hinduism known as Gaudiya Vaishnavism, one of the four regulative principles forbids the taking of intoxicants, including alcohol. In Judaism, in accordance with the biblical stance against drinking, wine drinking was not permitted for priests and monks. The biblical command to sanctify the Sabbath day and other holidays has been interpreted as having three ceremonial meals which include the drinking of wine, the Kiddush. The Jewish marriage ceremony ends with the bride and groom drinking a shared cup of wine after reciting seven blessings, and, according to western "Ashkenazi" traditions, after a fast day. But it has been customary, and in many cases even mandated, to drink moderately so as to stay sober, and only after the prayers are over. During the Seder night on Passover (Pesach) there is an obligation to drink four ceremonial cups of wine while reciting the Haggadah. This has been assumed to be the source of the wine-drinking ritual at communion in some Christian groups. During Purim there is an obligation to become intoxicated, although, as with many other decrees, in many communities this has been avoided by allowing sleep during the day to replace it. In the 1920s, due to the new Prohibition laws, a rabbi from the Reform Judaism movement proposed using grape juice for the ritual instead of wine. Although rejected at first, the practice became widely accepted by orthodox Jews as well. At the Cave of the Patriarchs in Hebron (the Ibrahimi Mosque, as it is called by Muslims), the Jewish wine-drinking rituals during weddings, the Sabbath day and holidays are a cause for tension with the Muslims who unwillingly share the site under Israeli authority. In the movie "Animals are Beautiful People", an entire section was dedicated to showing many different animals, including monkeys, elephants, hogs, giraffes, and ostriches, eating over-ripe marula tree fruit, causing them to sway and lose their footing in a manner similar to human drunkenness. Birds may become intoxicated with fermented berries, and some die colliding with hard objects when flying under the influence. In elephant warfare, practiced by the Greeks during the Maccabean revolt and by Hannibal during the Punic wars, it has been recorded that the elephants would be given wine before the attack, and only then would they charge forward after being agitated by their driver. It is a regular practice to give small amounts of beer to race horses in Ireland. Ruminant farm animals have natural fermentation occurring in their stomach, and adding alcoholic beverages in small amounts to their drink will generally do them no harm and will not cause them to become drunk.
https://en.wikipedia.org/wiki?curid=8011
Data compression In signal processing, data compression, source coding, or bit-rate reduction is the process of encoding information using fewer bits than the original representation. Any particular compression is either lossy or lossless. Lossless compression reduces bits by identifying and eliminating statistical redundancy. No information is lost in lossless compression. Lossy compression reduces bits by removing unnecessary or less important information. Typically, a device that performs data compression is referred to as an encoder, and one that performs the reversal of the process (decompression) as a decoder. The process of reducing the size of a data file is often referred to as data compression. In the context of data transmission, it is called source coding: encoding done at the source of the data before it is stored or transmitted. Source coding should not be confused with channel coding, for error detection and correction, or line coding, the means for mapping data onto a signal. Compression is useful because it reduces the resources required to store and transmit data. Computational resources are consumed in the compression and decompression processes. Data compression is subject to a space–time complexity trade-off. For instance, a compression scheme for video may require expensive hardware for the video to be decompressed fast enough to be viewed as it is being decompressed, and the option to decompress the video in full before watching it may be inconvenient or require additional storage. The design of data compression schemes involves trade-offs among various factors, including the degree of compression, the amount of distortion introduced (when using lossy data compression), and the computational resources required to compress and decompress the data. Lossless data compression algorithms usually exploit statistical redundancy to represent data without losing any information, so that the process is reversible. Lossless compression is possible because most real-world data exhibits statistical redundancy. For example, an image may have areas of color that do not change over several pixels; instead of coding "red pixel, red pixel, ..." the data may be encoded as "279 red pixels". This is a basic example of run-length encoding; there are many schemes to reduce file size by eliminating redundancy. The Lempel–Ziv (LZ) compression methods are among the most popular algorithms for lossless storage. DEFLATE is a variation on LZ optimized for decompression speed and compression ratio, but compression can be slow. In the mid-1980s, following work by Terry Welch, the Lempel–Ziv–Welch (LZW) algorithm rapidly became the method of choice for most general-purpose compression systems. LZW is used in GIF images, programs such as PKZIP, and hardware devices such as modems. LZ methods use a table-based compression model where table entries are substituted for repeated strings of data. For most LZ methods, this table is generated dynamically from earlier data in the input. The table itself is often Huffman encoded. Grammar-based codes can compress highly repetitive input extremely effectively, for instance, a biological data collection of the same or closely related species, a huge versioned document collection, internet archives, etc. The basic task of grammar-based codes is constructing a context-free grammar deriving a single string. Other practical grammar compression algorithms include Sequitur and Re-Pair. 
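The "279 red pixels" example above is exactly what run-length encoding does, and it fits in a few lines of code. A minimal Python sketch (illustrative only, not any particular file format):

    def rle_encode(values):
        """Collapse runs of repeated values into (count, value) pairs,
        e.g. 279 consecutive "red" entries become (279, "red")."""
        runs = []
        for v in values:
            if runs and runs[-1][1] == v:
                runs[-1][0] += 1
            else:
                runs.append([1, v])
        return [(n, v) for n, v in runs]

    def rle_decode(runs):
        return [v for n, v in runs for _ in range(n)]

    data = ["red"] * 279 + ["blue"] * 3
    print(rle_encode(data))                      # [(279, 'red'), (3, 'blue')]
    assert rle_decode(rle_encode(data)) == data  # lossless: fully reversible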
The strongest modern lossless compressors use probabilistic models, such as prediction by partial matching. The Burrows–Wheeler transform can also be viewed as an indirect form of statistical modelling. In a further refinement of the direct use of probabilistic modelling, statistical estimates can be coupled to an algorithm called arithmetic coding. Arithmetic coding is a more modern coding technique that uses the mathematical calculations of a finite-state machine to produce a string of encoded bits from a series of input data symbols. It can achieve superior compression compared to other techniques such as the better-known Huffman algorithm. It uses an internal memory state to avoid the need to perform a one-to-one mapping of individual input symbols to distinct representations that use an integer number of bits, and it clears out the internal memory only after encoding the entire string of data symbols. Arithmetic coding applies especially well to adaptive data compression tasks where the statistics vary and are context-dependent, as it can be easily coupled with an adaptive model of the probability distribution of the input data. An early example of the use of arithmetic coding was in an optional (but not widely used) feature of the JPEG image coding standard. It has since been applied in various other designs including H.263, H.264/MPEG-4 AVC and HEVC for video coding. In the late 1980s, digital images became more common, and standards for lossless image compression emerged. In the early 1990s, lossy compression methods began to be widely used. In these schemes, some loss of information is accepted, as dropping nonessential detail can save storage space. There is a corresponding trade-off between preserving information and reducing size. Lossy data compression schemes are designed by research on how people perceive the data in question. For example, the human eye is more sensitive to subtle variations in luminance than it is to variations in color. JPEG image compression works in part by rounding off nonessential bits of information. A number of popular compression formats exploit these perceptual differences, including psychoacoustics for sound, and psychovisuals for images and video. Most forms of lossy compression are based on transform coding, especially the discrete cosine transform (DCT). It was first proposed in 1972 by Nasir Ahmed, who then developed a working algorithm with T. Natarajan and K. R. Rao in 1973, before introducing it in January 1974. The DCT is the most widely used lossy compression method, and is used in multimedia formats for images (such as JPEG and HEIF), video (such as MPEG, AVC and HEVC) and audio (such as MP3, AAC and Vorbis). Lossy image compression is used in digital cameras to increase storage capacities. Similarly, DVDs, Blu-ray and streaming video use lossy video coding formats. In lossy audio compression, methods of psychoacoustics are used to remove non-audible (or less audible) components of the audio signal. Compression of human speech is often performed with even more specialized techniques; speech coding is distinguished as a separate discipline from general-purpose audio compression. Speech coding is used in internet telephony, for example, while audio compression is used for CD ripping and is decoded by audio players. The theoretical basis for compression is provided by information theory and, more specifically, algorithmic information theory for lossless compression and rate–distortion theory for lossy compression. 
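The core of DCT-based lossy coding can be shown on a single 8x8 block: transform, quantize coarsely (the irreversible step), then transform back. A minimal Python sketch using SciPy; the uniform step size is a stand-in for the perceptually tuned quantization matrices that real codecs such as JPEG use:

    import numpy as np
    from scipy.fft import dctn, idctn  # type-II DCT and its inverse

    x = np.arange(8, dtype=float)
    block = np.add.outer(x, x) * 16.0  # a smooth gradient, like real image data

    coeffs = dctn(block, norm="ortho")
    step = 40.0                                  # larger step = more loss
    quantized = np.round(coeffs / step) * step   # most coefficients become 0
    restored = idctn(quantized, norm="ortho")

    print("nonzero coefficients:", np.count_nonzero(quantized), "of 64")
    print("max absolute error:", np.abs(block - restored).max())

Because the DCT concentrates a smooth block's energy into a few low-frequency coefficients, nearly everything else quantizes to zero, which is what makes the subsequent entropy coding so effective.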
These areas of study were essentially created by Claude Shannon, who published fundamental papers on the topic in the late 1940s and early 1950s. Other topics associated with compression include coding theory and statistical inference. There is a close connection between machine learning and compression. A system that predicts the posterior probabilities of a sequence given its entire history can be used for optimal data compression (by using arithmetic coding on the output distribution). An optimal compressor can be used for prediction (by finding the symbol that compresses best, given the previous history). This equivalence has been used as a justification for using data compression as a benchmark for "general intelligence". An alternative view holds that compression algorithms implicitly map strings into implicit feature space vectors, and that compression-based similarity measures compute similarity within these feature spaces. For each compressor C(.) we define an associated vector space ℵ, such that C(.) maps an input string x to the vector norm ||x||. An exhaustive examination of the feature spaces underlying all compression algorithms is precluded by space; instead, such analyses typically examine three representative lossless compression methods: LZW, LZ77, and PPM. According to AIXI theory, a connection more directly explained in the Hutter Prize, the best possible compression of x is the smallest possible software which generates x. For example, in that model, a zip file's compressed size includes both the zip file and the unzipping software, since you cannot unzip it without both, but there may be an even smaller combined form. Data compression can be viewed as a special case of data differencing. Data differencing consists of producing a "difference" given a "source" and a "target," with patching reproducing the "target" given a "source" and a "difference." Since there is no separate source and target in data compression, one can consider data compression as data differencing with empty source data, the compressed file corresponding to a difference from nothing. This is the same as considering absolute entropy (corresponding to data compression) as a special case of relative entropy (corresponding to data differencing) with no initial data. The term "differential compression" is used to emphasize the data differencing connection. Entropy coding originated in the 1940s with the introduction of Shannon–Fano coding, the basis for Huffman coding, which was developed in the early 1950s. Transform coding dates back to the late 1960s, with the introduction of fast Fourier transform (FFT) coding in 1968 and the Hadamard transform in 1969. An important image compression technique is the discrete cosine transform (DCT), a technique developed in the early 1970s. DCT is the basis for JPEG, a lossy compression format which was introduced by the Joint Photographic Experts Group (JPEG) in 1992. JPEG greatly reduces the amount of data required to represent an image at the cost of a relatively small reduction in image quality and has become the most widely used image file format. Its highly efficient DCT-based compression algorithm was largely responsible for the wide proliferation of digital images and digital photos. Lempel–Ziv–Welch (LZW) is a lossless compression algorithm developed in 1984. It is used in the GIF format, introduced in 1987. DEFLATE, a lossless compression algorithm specified in 1996, is used in the Portable Network Graphics (PNG) format. 
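One widely cited instance of a compression-based similarity measure (a named technique, the normalized compression distance, which this article does not mention explicitly) judges two strings similar when they compress well together. A hedged Python sketch, with zlib standing in for the generic compressor C(.):

    import zlib

    def ncd(x: bytes, y: bytes) -> float:
        """Normalized compression distance:
        (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
        cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
        cxy = len(zlib.compress(x + y))
        return (cxy - min(cx, cy)) / max(cx, cy)

    a = b"the quick brown fox jumps over the lazy dog" * 20
    b = b"the quick brown fox leaps over the lazy dog" * 20
    c = bytes(range(256)) * 4
    print(ncd(a, b))  # small: near-duplicates share most of their structure
    print(ncd(a, c))  # larger: unrelated data shares little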
Wavelet compression, the use of wavelets in image compression, began after the development of DCT coding. The JPEG 2000 standard was introduced in 2000. In contrast to the DCT algorithm used by the original JPEG format, JPEG 2000 instead uses discrete wavelet transform (DWT) algorithms. JPEG 2000 technology, which includes the Motion JPEG 2000 extension, was selected as the video coding standard for digital cinema in 2004. Audio data compression, not to be confused with dynamic range compression, has the potential to reduce the transmission bandwidth and storage requirements of audio data. Audio compression algorithms are implemented in software as audio codecs. Lossy audio compression algorithms provide higher compression at the cost of fidelity and are used in numerous audio applications. These algorithms almost all rely on psychoacoustics to eliminate or reduce the fidelity of less audible sounds, thereby reducing the space required to store or transmit them. In both lossy and lossless compression, information redundancy is reduced, using methods such as coding, pattern recognition, and linear prediction to reduce the amount of information used to represent the uncompressed data. The acceptable trade-off between loss of audio quality and transmission or storage size depends upon the application. For example, one 640 MB compact disc (CD) holds approximately one hour of uncompressed high fidelity music, less than 2 hours of music compressed losslessly, or 7 hours of music compressed in the MP3 format at a medium bit rate. A digital sound recorder can typically store around 200 hours of clearly intelligible speech in 640 MB. Lossless audio compression produces a representation of digital data that decompresses to an exact digital duplicate of the original audio stream, unlike playback from lossy compression techniques such as Vorbis and MP3. Compression ratios are around 50–60% of original size, which is similar to those for generic lossless data compression. Lossless compression is unable to attain high compression ratios due to the complexity of waveforms and the rapid changes in sound forms. Codecs like FLAC, Shorten, and TTA use linear prediction to estimate the spectrum of the signal. Many of these algorithms use convolution with the filter [-1 1] to slightly whiten or flatten the spectrum, thereby allowing traditional lossless compression to work more efficiently. The process is reversed upon decompression. When audio files are to be processed, either by further compression or for editing, it is desirable to work from an unchanged original (uncompressed or losslessly compressed). Processing of a lossily compressed file for some purpose usually produces a final result inferior to the creation of the same compressed file from an uncompressed original. In addition to sound editing or mixing, lossless audio compression is often used for archival storage, or as master copies. A number of lossless audio compression formats exist. Shorten was an early lossless format. Newer ones include Free Lossless Audio Codec (FLAC), Apple's Apple Lossless (ALAC), MPEG-4 ALS, Microsoft's Windows Media Audio 9 Lossless (WMA Lossless), Monkey's Audio, TTA, and WavPack. See list of lossless codecs for a complete listing. Some audio formats feature a combination of a lossy format and a lossless correction; this allows stripping the correction to easily obtain a lossy file. Such formats include MPEG-4 SLS (Scalable to Lossless), WavPack, and OptimFROG DualStream. 
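The [-1 1] whitening filter mentioned above is simply a first-order difference: each sample is replaced by its difference from the previous one, which for smooth waveforms leaves small residuals that entropy-code well. A minimal Python sketch, assuming integer samples (real codecs such as FLAC use higher-order adaptive predictors):

    import numpy as np

    def whiten(samples):
        """Convolve with [-1, 1]: keep the first sample, then store
        each sample's difference from its predecessor."""
        residual = np.empty_like(samples)
        residual[0] = samples[0]
        residual[1:] = samples[1:] - samples[:-1]
        return residual

    def unwhiten(residual):
        return np.cumsum(residual)  # exact inverse, so still lossless

    t = np.arange(1000)
    audio = (1000 * np.sin(t / 30)).astype(np.int64)  # a smooth test tone
    assert np.array_equal(unwhiten(whiten(audio)), audio)
    print("raw spread:", np.ptp(audio), "residual spread:", np.ptp(whiten(audio)))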
Other formats are associated with a distinct system. Lossy audio compression is used in a wide range of applications. In addition to the direct applications (MP3 players or computers), digitally compressed audio streams are used in most video DVDs, digital television, streaming media on the Internet, satellite and cable radio, and increasingly in terrestrial radio broadcasts. Lossy compression typically achieves far greater compression than lossless compression (5–20% of the original size, rather than 50–60%), by discarding less-critical data. The innovation of lossy audio compression was to use psychoacoustics to recognize that not all data in an audio stream can be perceived by the human auditory system. Most lossy compression reduces perceptual redundancy by first identifying perceptually irrelevant sounds, that is, sounds that are very hard to hear. Typical examples include high frequencies or sounds that occur at the same time as louder sounds. Those sounds are coded with decreased accuracy or not at all. Due to the nature of lossy algorithms, audio quality suffers when a file is decompressed and recompressed (digital generation loss). This makes lossy compression unsuitable for storing the intermediate results in professional audio engineering applications, such as sound editing and multitrack recording. However, lossy formats are very popular with end users (particularly MP3), as a megabyte can store about a minute's worth of music at adequate quality. To determine what information in an audio signal is perceptually irrelevant, most lossy compression algorithms use transforms such as the modified discrete cosine transform (MDCT) to convert time domain sampled waveforms into a transform domain. Once transformed, typically into the frequency domain, component frequencies can be allocated bits according to how audible they are. Audibility of spectral components is assessed using the absolute threshold of hearing and the principles of simultaneous masking—the phenomenon wherein a signal is masked by another signal separated by frequency—and, in some cases, temporal masking—where a signal is masked by another signal separated by time. Equal-loudness contours may also be used to weight the perceptual importance of components. Models of the human ear-brain combination incorporating such effects are often called psychoacoustic models. Other types of lossy compressors, such as the linear predictive coding (LPC) used with speech, are source-based coders. These coders use a model of the sound's generator (such as the human vocal tract with LPC) to whiten the audio signal (i.e., flatten its spectrum) before quantization. LPC may be thought of as a basic perceptual coding technique: reconstruction of an audio signal using a linear predictor shapes the coder's quantization noise into the spectrum of the target signal, partially masking it. Lossy formats are often used for the distribution of streaming audio or interactive applications (such as the coding of speech for digital transmission in cell phone networks). In such applications, the data must be decompressed as the data flows, rather than after the entire data stream has been transmitted. Not all audio codecs can be used for streaming applications, and for such applications a codec designed to stream data effectively will usually be chosen. Latency results from the methods used to encode and decode the data. 
Some codecs will analyze a longer segment of the data to optimize efficiency, and then code it in a manner that requires a larger segment of data at one time to decode. (Often codecs create discrete segments called "frames" for encoding and decoding.) The inherent latency of the coding algorithm can be critical; for example, when there is a two-way transmission of data, such as with a telephone conversation, significant delays may seriously degrade the perceived quality. In contrast to the speed of compression, which is proportional to the number of operations required by the algorithm, here latency refers to the number of samples that must be analysed before a block of audio is processed. In the minimum case, latency is zero samples (e.g., if the coder/decoder simply reduces the number of bits used to quantize the signal). Time domain algorithms such as LPC also often have low latencies, hence their popularity in speech coding for telephony. In algorithms such as MP3, however, a large number of samples have to be analyzed to implement a psychoacoustic model in the frequency domain, and latency is on the order of 23 ms (46 ms for two-way communication). Speech encoding is an important category of audio data compression. The perceptual models used to estimate what a human ear can hear are generally somewhat different from those used for music. The range of frequencies needed to convey the sounds of a human voice is normally far narrower than that needed for music, and the sound is normally less complex. As a result, speech can be encoded at high quality using a relatively low bit rate. If the data to be compressed is analog (such as a voltage that varies with time), quantization is employed to digitize it into numbers (normally integers). This is referred to as analog-to-digital (A/D) conversion. If the integers generated by quantization are 8 bits each, then the entire range of the analog signal is divided into 256 intervals and all the signal values within an interval are quantized to the same number. If 16-bit integers are generated, then the range of the analog signal is divided into 65,536 intervals. This relation illustrates the compromise between high resolution (a large number of analog intervals) and high compression (small integers generated). This application of quantization is used by several speech compression methods, generally through some combination of two approaches: encoding only sounds that could be made by a single human voice, and throwing away more of the data in the signal, keeping just enough to reconstruct an "intelligible" voice rather than the full frequency range of human hearing. Perhaps the earliest algorithms used in speech encoding (and audio data compression in general) were the A-law algorithm and the μ-law algorithm. In 1950, Bell Labs filed the patent on differential pulse-code modulation (DPCM). Adaptive DPCM (ADPCM) was introduced by P. Cummiskey, Nikil S. Jayant and James L. Flanagan at Bell Labs in 1973. Perceptual coding was first used for speech coding compression, with linear predictive coding (LPC). Initial concepts for LPC date back to the work of Fumitada Itakura (Nagoya University) and Shuzo Saito (Nippon Telegraph and Telephone) in 1966. During the 1970s, Bishnu S. Atal and Manfred R. Schroeder at Bell Labs developed a form of LPC called adaptive predictive coding (APC), a perceptual coding algorithm that exploited the masking properties of the human ear, followed in the early 1980s with the code-excited linear prediction (CELP) algorithm which achieved a significant compression ratio for its time. Perceptual coding is used by modern audio compression formats such as MP3 and AAC. 
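The 8-bit versus 16-bit interval counts described above are easy to see in code. A minimal Python sketch of uniform quantization (illustrative; real A/D converters work in hardware, and real codecs typically quantize transform coefficients rather than raw voltages):

    import numpy as np

    def quantize(signal, bits):
        """Map each sample to one of 2**bits uniform intervals
        spanning the signal's range."""
        levels = 2 ** bits
        lo, hi = signal.min(), signal.max()
        return np.round((signal - lo) / (hi - lo) * (levels - 1)).astype(int)

    t = np.linspace(0.0, 1.0, 48_000)
    voltage = np.sin(2 * np.pi * 440 * t)   # a 440 Hz "analog" tone
    for bits in (8, 16):
        q = quantize(voltage, bits)
        print(bits, "bits ->", q.max() + 1, "intervals")  # 256, then 65,536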
Discrete cosine transform (DCT), developed by Nasir Ahmed, T. Natarajan and K. R. Rao in 1974, provided the basis for the modified discrete cosine transform (MDCT) used by modern audio compression formats such as MP3 and AAC. MDCT was proposed by J. P. Princen, A. W. Johnson and A. B. Bradley in 1987, following earlier work by Princen and Bradley in 1986. The MDCT is used by modern audio compression formats such as Dolby Digital, MP3, and Advanced Audio Coding (AAC). The world's first commercial broadcast automation audio compression system was developed by Oscar Bonello, an engineering professor at the University of Buenos Aires. In 1983, using the psychoacoustic principle of the masking of critical bands first published in 1967, he started developing a practical application based on the recently developed IBM PC computer, and the broadcast automation system was launched in 1987 under the name Audicom. Twenty years later, almost all the radio stations in the world were using similar technology manufactured by a number of companies. A literature compendium for a large variety of audio coding systems was published in the IEEE's "Journal on Selected Areas in Communications" ("JSAC"), in February 1988. While there were some papers from before that time, this collection documented an entire variety of finished, working audio coders, nearly all of them using perceptual (i.e. masking) techniques and some kind of frequency analysis and back-end noiseless coding. Several of these papers remarked on the difficulty of obtaining good, clean digital audio for research purposes. Most, if not all, of the authors in the "JSAC" edition were also active in the MPEG-1 Audio committee, which created the MP3 format. Video compression is a practical implementation of source coding in information theory. In practice, most video codecs are used alongside audio compression techniques to store the separate but complementary data streams as one combined package using so-called "container formats". Uncompressed video requires a very high data rate. Although lossless video compression codecs perform at a compression factor of 5 to 12, a typical H.264 lossy compression video has a compression factor between 20 and 200. The two key video compression techniques used in video coding standards are the discrete cosine transform (DCT) and motion compensation (MC). Most video coding standards, such as the H.26x and MPEG formats, typically use motion-compensated DCT video coding (block motion compensation). Video data may be represented as a series of still image frames. Such data usually contains abundant amounts of spatial and temporal redundancy. Video compression algorithms attempt to reduce redundancy and store information more compactly. Most video compression formats and codecs exploit both spatial and temporal redundancy (e.g. through difference coding with motion compensation). Similarities can be encoded by only storing differences between e.g. temporally adjacent frames (inter-frame coding) or spatially adjacent pixels (intra-frame coding). Inter-frame compression (a temporal delta encoding) is one of the most powerful compression techniques. It (re)uses data from one or more earlier or later frames in a sequence to describe the current frame. Intra-frame coding, on the other hand, uses only data from within the current frame, effectively being still-image compression. 
A class of specialized formats used in camcorders and video editing uses less complex compression schemes that restrict their prediction techniques to intra-frame prediction.

Video compression usually also employs lossy techniques, such as quantization, that reduce aspects of the source data that are (more or less) irrelevant to human visual perception, exploiting perceptual features of human vision. For example, small differences in color are more difficult to perceive than changes in brightness. Compression algorithms can average the color across these similar areas to reduce space, in a manner similar to that used in JPEG image compression. As in all lossy compression, there is a trade-off between video quality and bit rate, the cost of processing the compression and decompression, and system requirements. Highly compressed video may present visible or distracting artifacts.

Methods other than the prevalent DCT-based transform formats, such as fractal compression, matching pursuit and the use of a discrete wavelet transform (DWT), have been the subject of some research, but are typically not used in practical products (except for the use of wavelet coding as still-image coders without motion compensation). Interest in fractal compression seems to be waning, due to recent theoretical analysis showing a comparative lack of effectiveness of such methods.

Inter-frame coding works by comparing each frame in the video with the previous one. Individual frames of a video sequence are compared from one frame to the next, and the video compression codec records only the differences relative to the reference frame. If the frame contains areas where nothing has moved, the system can simply issue a short command that copies that part of the previous frame into the next one. If sections of the frame move in a simple manner, the compressor can emit a (slightly longer) command that tells the decompressor to shift, rotate, lighten, or darken the copy. This longer command still remains much shorter than the data generated by intra-frame compression. Usually the encoder also transmits a residue signal which describes the remaining, more subtle differences from the reference imagery. Using entropy coding, these residue signals have a more compact representation than the full signal. In areas of video with more motion, the compression must encode more data to keep up with the larger number of pixels that are changing. Commonly during explosions, flames, flocks of animals, and in some panning shots, the high-frequency detail leads to quality decreases or to increases in the variable bitrate.

Today, nearly all commonly used video compression methods (e.g., those in standards approved by the ITU-T or ISO) share the same basic architecture, which dates back to H.261, standardized in 1988 by the ITU-T. They mostly rely on the DCT, applied to rectangular blocks of neighboring pixels, and temporal prediction using motion vectors, as well as, nowadays, an in-loop filtering step. In the prediction stage, various deduplication and difference-coding techniques are applied that help decorrelate data and describe new data based on already transmitted data. Then rectangular blocks of (residue) pixel data are transformed into the frequency domain to ease the targeting of irrelevant information in quantization and for some spatial redundancy reduction. The transform widely used in this regard is the discrete cosine transform (DCT), introduced in 1974.
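The prediction-and-residue stage described above can be sketched in a few lines of Python. This toy delta coder is an illustrative assumption of mine (names, frame sizes and the change threshold are invented), and real codecs add motion compensation and entropy coding on top, but it shows the core idea: unchanged areas are copied from the reference frame for free, and only a sparse residue is stored.

import numpy as np

def encode_delta(prev, curr, threshold=4):
    # Keep only pixels that differ noticeably from the reference frame.
    diff = curr.astype(np.int16) - prev.astype(np.int16)
    ys, xs = np.nonzero(np.abs(diff) > threshold)
    return ys, xs, curr[ys, xs]          # sparse "residue"

def decode_delta(prev, ys, xs, values):
    frame = prev.copy()                  # unchanged areas copied for free
    frame[ys, xs] = values
    return frame

prev = np.full((8, 8), 128, dtype=np.uint8)          # flat reference frame
curr = prev.copy()
curr[2, 3], curr[2, 4], curr[5, 5] = 200, 210, 90    # three changed pixels
ys, xs, vals = encode_delta(prev, curr)
assert np.array_equal(decode_delta(prev, ys, xs, vals), curr)
print(f"stored {vals.size} of {curr.size} pixels")

In a real encoder this residue would next be transformed, quantized and entropy coded, which is the pipeline the following paragraphs describe.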
In the main lossy processing stage, the data is quantized in order to reduce information that is irrelevant to human visual perception. In the last stage, statistical redundancy is largely eliminated by an entropy coder, which often applies some form of arithmetic coding. In an additional in-loop filtering stage, various filters can be applied to the reconstructed image signal. Because these filters are computed inside the encoding loop, they can help compression: they can be applied to reference material before it is used in the prediction process, and they can be guided using the original signal. The most popular example is the deblocking filter, which blurs out blocking artifacts caused by quantization discontinuities at transform block boundaries.

In 1967, A. H. Robinson and C. Cherry proposed a run-length encoding bandwidth compression scheme for the transmission of analog television signals. The discrete cosine transform (DCT), which is fundamental to modern video compression, was introduced by Nasir Ahmed, T. Natarajan and K. R. Rao in 1974. H.261, which debuted in 1988, commercially introduced the prevalent basic architecture of video compression technology. It was the first video coding format based on DCT compression, which would subsequently become the standard for all of the major video coding formats that followed. H.261 was developed by a number of companies, including Hitachi, PictureTel, NTT, BT and Toshiba.

The most popular video coding standards used for codecs have been the MPEG standards. MPEG-1 was developed by the Moving Picture Experts Group (MPEG) in 1991, and it was designed to compress VHS-quality video. It was succeeded in 1994 by MPEG-2/H.262, which was developed by a number of companies, primarily Sony, Thomson and Mitsubishi Electric. MPEG-2 became the standard video format for DVD and SD digital television. In 1999, it was followed by MPEG-4/H.263, which was a major leap forward for video compression technology. It was developed by a number of companies, primarily Mitsubishi Electric, Hitachi and Panasonic. The most widely used video coding format is H.264/MPEG-4 AVC. It was developed in 2003 by a number of organizations, primarily Panasonic, Godo Kaisha IP Bridge and LG Electronics. AVC commercially introduced the modern context-adaptive binary arithmetic coding (CABAC) and context-adaptive variable-length coding (CAVLC) algorithms. AVC is the main video encoding standard for Blu-ray Discs, and is widely used by streaming internet services such as YouTube, Netflix, Vimeo, and the iTunes Store, web software such as Adobe Flash Player and Microsoft Silverlight, and various HDTV broadcasts over terrestrial and satellite television.

Genetics compression algorithms are the latest generation of lossless algorithms that compress data (typically sequences of nucleotides) using both conventional compression algorithms and algorithms adapted to the specific datatype. In 2012, a team of scientists from Johns Hopkins University published a genetic compression algorithm that does not use a reference genome. HAPZIPPER was tailored for HapMap data and achieves over 20-fold compression (a 95% reduction in file size), providing 2- to 4-fold better compression, much faster, than the leading general-purpose compression utilities. For this, Chanda, Elhaik, and Bader introduced MAF-based encoding (MAFE), which reduces the heterogeneity of the dataset by sorting SNPs by their minor allele frequency, thus homogenizing the dataset.
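As a baseline for the nucleotide compression discussed above and below, here is a minimal Python sketch of 2-bit base packing. This is not how HAPZIPPER or MAFE works; it is only the trivial starting point (4 bases per byte, a 4:1 reduction over one-byte-per-base text) that specialized genomic compressors then improve on, and the helper names are my own.

CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
BASES = "ACGT"

def pack_2bit(seq):
    # Store four bases per byte, two bits each.
    vals = [CODE[b] for b in seq]
    out = bytearray()
    for i in range(0, len(vals), 4):
        byte = 0
        for j, v in enumerate(vals[i:i + 4]):
            byte |= v << (2 * j)
        out.append(byte)
    return bytes(out), len(seq)

def unpack_2bit(packed, n):
    # Recover the original sequence; n is needed because the last byte
    # may be only partially filled.
    return "".join(BASES[(packed[i // 4] >> (2 * (i % 4))) & 3] for i in range(n))

packed, n = pack_2bit("ACGTACGTAC")
assert unpack_2bit(packed, n) == "ACGTACGTAC"
print(f"{n} bases stored in {len(packed)} bytes")

Reference-based schemes such as those described next go far beyond this by storing only the differences between a genome and a reference, much like inter-frame coding in video.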
Other algorithms, published in 2009 and 2013 (DNAZip and GenomeZip), have compression ratios of up to 1200-fold, allowing 6 billion basepair diploid human genomes to be stored in 2.5 megabytes (relative to a reference genome or averaged over many genomes). Benchmarks of genetics/genomics data compressors have since been published.

It is estimated that the total amount of data stored on the world's storage devices could be further compressed with existing compression algorithms by a remaining average factor of 4.5:1. It is estimated that the combined technological capacity of the world to store information provided 1,300 exabytes of hardware digits in 2007, but when the corresponding content is optimally compressed, this represents only 295 exabytes of Shannon information.
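The 4.5:1 figure above is an average; the ratio achieved on any particular file depends entirely on its redundancy. A quick way to see this, sketched here with Python's standard zlib module (the sample string is an arbitrary assumption), is to compare original and DEFLATE-compressed sizes:

import zlib

def compression_ratio(data: bytes, level: int = 9) -> float:
    # Ratio of original size to DEFLATE-compressed size.
    return len(data) / len(zlib.compress(data, level))

repetitive = b"the quick brown fox jumps over the lazy dog " * 200
print(f"repetitive text: {compression_ratio(repetitive):.0f}:1")

Highly repetitive input compresses far beyond the average, while random or already-compressed data barely compresses at all, which is why the aggregate estimate matters more than any single example.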
https://en.wikipedia.org/wiki?curid=8013
History of the Democratic Republic of the Congo
Discovered in the 1990s, the earliest human remains in the Democratic Republic of the Congo have been dated to approximately 90,000 years ago. The first real states, such as the Kongo, the Lunda, the Luba and the Kuba, appeared south of the equatorial forest on the savannah from the 14th century onwards. The Kingdom of Kongo controlled much of western and central Africa, including what is now the western portion of the DR Congo, between the 14th and the early 19th centuries. At its peak it had as many as 500,000 people, and its capital was known as Mbanza-Kongo (south of Matadi, in modern-day Angola). In the late 15th century, Portuguese sailors arrived in the Kingdom of Kongo, and this led to a period of great prosperity and consolidation, with the king's power being founded on Portuguese trade. King Afonso I (1506–1543) had raids carried out on neighboring districts in response to Portuguese requests for slaves. After his death, the kingdom underwent a deep crisis.

The Atlantic slave trade occurred from approximately 1500 to 1850, with the entire west coast of Africa targeted, but the region around the mouth of the Congo suffered the most intensive enslavement. Along this strip of coastline, about 4 million people were enslaved and sent across the Atlantic to sugar plantations in Brazil, the US and the Caribbean. From 1780 onwards, there was a higher demand for slaves in the US, which led to more people being enslaved. By 1780, more than 15,000 people were shipped annually from the Loango Coast, north of the Congo. In the 1870s, the explorer Henry Morton Stanley arrived in and explored what is now the DR Congo.

Belgian colonization of the DR Congo began in 1885 when King Leopold II founded and ruled the Congo Free State. However, de facto control of such a huge area took decades to achieve. Many outposts were built to extend the power of the state over the vast territory. In 1885, the Force Publique was set up, a colonial army with white officers and black soldiers. In 1886, Leopold made Camille Jansen the first Belgian governor-general of the Congo. Over the late 19th century, various Christian (including Catholic and Protestant) missionaries arrived intending to convert the local population. A railway between Matadi and Stanley Pool was built in the 1890s. Reports of widespread murder, torture, and other abuses in the rubber plantations led to international and Belgian outrage, and the Belgian government transferred control of the region from Leopold II and established the Belgian Congo in 1908.

Growing Congolese demands for independence, punctuated by unrest, led Belgium to grant the Congo its independence in 1960. However, the Congo remained unstable because regional leaders had more power than the central government, with Katanga attempting to gain independence with Belgian support. Prime Minister Patrice Lumumba tried to restore order with the aid of the Soviet Union as part of the Cold War, causing the United States to support a coup led by Colonel Joseph Mobutu in 1965. Mobutu quickly seized complete power of the Congo and renamed the country Zaire. He sought to Africanize the country, changing his own name to Mobutu Sese Seko, and demanded that African citizens change their Western names to traditional African names. Mobutu sought to repress any opposition to his rule, which he did successfully throughout the 1980s. However, with his regime weakened in the 1990s, Mobutu was forced to agree to a power-sharing government with the opposition party.
Mobutu remained the head of state and promised elections within the next two years that never took place. During the First Congo War (1996–1997), Rwanda invaded Zaire, and Mobutu lost power in the process. In 1997, Laurent-Désiré Kabila took power and renamed the country the Democratic Republic of the Congo. Afterward, the Second Congo War broke out, resulting in a regional war in which many different African nations took part and in which millions of people were killed or displaced. Kabila was assassinated by his bodyguard in 2001, and his son, Joseph, succeeded him and was later elected president in 2006. Joseph Kabila quickly sought peace. Foreign soldiers remained in the Congo for a few years and a power-sharing government between Joseph Kabila and the opposition party was set up. Joseph Kabila later resumed complete control over the Congo and was re-elected in a disputed election in 2011. In 2018, Félix Tshisekedi was elected president, in the first peaceful transfer of power since independence.

The area now known as the Democratic Republic of the Congo was populated as early as 90,000 years ago, as shown by the 1988 discovery of the Semliki harpoon at Katanda, one of the oldest barbed harpoons ever found, which is believed to have been used to catch giant river catfish. During its recorded history, the area has also been known as "Congo", "Congo Free State", "Belgian Congo", and "Zaire".

The Kingdom of Kongo existed from the 14th to the early 19th century. Until the arrival of the Portuguese, it was the dominant force in the region, along with the Kingdom of Luba, the Kingdom of Lunda, the Mongo people and the Anziku Kingdom.

The Congo Free State was a corporate state privately controlled by Leopold II of Belgium through the "Association Internationale Africaine", a non-governmental organization. Leopold was the sole shareholder and chairman. The state included the entire area of the present-day Democratic Republic of the Congo. Under Leopold II, the Congo Free State became one of the most infamous international scandals of the turn of the twentieth century. The report of the British consul Roger Casement led to the arrest and punishment of white officials who had been responsible for cold-blooded killings during a rubber-collecting expedition in 1900, including a Belgian national who caused the shooting of at least 122 Congolese natives.

Estimates of the total death toll vary considerably. The first census was only conducted in 1924, which makes it even more difficult to quantify the population loss of the period. Roger Casement's famous 1904 report estimated ten million deaths. According to Casement's report, indiscriminate "war", starvation, reduction of births and tropical diseases caused the country's depopulation. European and U.S. press agencies exposed the conditions in the Congo Free State to the public in 1900. By 1908, public and diplomatic pressure had led Leopold II to annex the Congo as the Belgian Congo colony. On 15 November 1908, King Leopold II of Belgium formally relinquished personal control of the Congo Free State. The renamed Belgian Congo was put under the direct administration of the Belgian government and its Ministry of Colonies.

Belgian rule in the Congo was based on the "colonial trinity" ("trinité coloniale") of state, missionary and private-company interests. The privileging of Belgian commercial interests meant that large amounts of capital flowed into the Congo and that individual regions became specialized.
The interests of the government and private enterprise became closely tied; the state helped companies break strikes and remove other barriers imposed by the indigenous population. The country was split into nesting, hierarchically organized administrative subdivisions and run uniformly according to a set "native policy" ("politique indigène"), in contrast to the British and the French, who generally favored the system of indirect rule, whereby traditional leaders were retained in positions of authority under colonial oversight. There was also a high degree of racial segregation. The large numbers of white immigrants who moved to the Congo after the end of World War II came from across the social spectrum, but were nonetheless always treated as superior to blacks.

During the 1940s and 1950s, the Congo experienced an unprecedented level of urbanization, and the colonial administration began various development programs aimed at making the territory into a "model colony". Notable advances were made in treating diseases such as African trypanosomiasis. One of the results of these measures was the development of a new middle class of Europeanised African "évolués" in the cities. By the 1950s the Congo had a wage labor force twice as large as that in any other African colony. The Congo's rich natural resources, including uranium (much of the uranium used by the U.S. nuclear programme during World War II was Congolese), led to substantial interest in the region from both the Soviet Union and the United States as the Cold War developed.

During the latter stages of World War II a new social stratum emerged in the Congo, known as the "évolués". Forming an African middle class in the colony, they held skilled positions (such as clerks and nurses) made available by the economic boom. While there were no universal criteria for determining "évolué" status, it was generally accepted that one would have "a good knowledge of French, adhere to Christianity, and have some form of post-primary education." Early in their history, most "évolués" sought to use their unique status to earn special privileges in the Congo. Since opportunities for upward mobility through the colonial structure were limited, the "évolué" class institutionally manifested itself in elite clubs through which its members could enjoy trivial privileges that made them feel distinct from the Congolese "masses". Additional groups, such as labor unions, alumni associations, and ethnic syndicates, provided other Congolese the means of organization. Among the most important of these was the Alliance des Bakongo (ABAKO), representing the Kongo people of the Lower Congo. However, these groups were restricted in their actions by the administration. While white settlers were consulted in the appointment of certain officials, the Congolese had no means of expressing their beliefs through the governing structures. Though native chiefs held legal authority in some jurisdictions, in practice they were used by the administration to further its own policies.

Up until the 1950s, most "évolués" were concerned only with social inequalities and their treatment by the Belgians. Questions of self-government were not considered until 1954, when ABAKO requested that the administration consider a list of suggested candidates for a Léopoldville municipal post. That year the association was taken over by Joseph Kasa-Vubu, and under his leadership it became increasingly hostile to the colonial authority and sought autonomy for the Kongo regions in the Lower Congo.
In 1956 a group of Congolese intellectuals under the tutelage of several European academics issued a manifesto calling for a transition to independence over the course of 30 years. ABAKO quickly responded with a demand for "immediate independence". The Belgian government was not prepared to grant the Congo independence, and even when it started realizing the necessity of a plan for decolonization in 1957, it was assumed that such a process would be solidly controlled by Belgium.

In December 1957 the colonial administration instituted reforms that permitted municipal elections and the formation of political parties. Some Belgian parties attempted to establish branches in the colony, but these were largely ignored by the population in favour of Congolese-initiated groups. Nationalism fermented in 1958 as more "évolués" began interacting with others outside of their own locales and started discussing the future structures of a post-colonial Congolese state. Nevertheless, most political mobilisation occurred along tribal and regional divisions. In Katanga, various tribal groups came together to form the Confédération des associations tribales du Katanga (CONAKAT) under the leadership of Godefroid Munongo and Moïse Tshombe. Hostile to immigrant peoples, it advocated provincial autonomy and close ties with Belgium. Most of its support was rooted in individual chiefs, businessmen, and European settlers of southern Katanga. It was opposed by Jason Sendwe's Association Générale des Baluba du Katanga (BALUBAKAT).

In October 1958 a group of Léopoldville "évolués" including Patrice Lumumba, Cyrille Adoula and Joseph Iléo established the Mouvement National Congolais (MNC). Diverse in membership, the party sought to peacefully achieve Congolese independence, promote the political education of the populace, and eliminate regionalism. The MNC drew most of its membership from the residents of the eastern city of Stanleyville, where Lumumba was well known, and from the population of the Kasai Province, where efforts were directed by a Muluba businessman, Albert Kalonji. Belgian officials appreciated its moderate and anti-separatist stance and allowed Lumumba to attend the All-African Peoples' Conference in Accra, Ghana, in December 1958 (Kasa-Vubu was informed that the documents necessary for his travel to the event were not in order and was not permitted to go). Lumumba was deeply impressed by the Pan-Africanist ideals of Ghanaian President Kwame Nkrumah and returned to the Congo with a more radical party programme. He reported on his trip during a widely attended rally in Léopoldville and demanded the country's "genuine" independence.

Fearing that they were being overshadowed by Lumumba and the MNC, Kasa-Vubu and the ABAKO leadership announced that they would be hosting their own rally in the capital on 4 January 1959. The municipal government (under Belgian domination) was given short notice, and communicated that only a "private meeting" would be authorised. On the scheduled day of the rally the ABAKO leadership told the crowd that had gathered that the event was postponed and that they should disperse. The crowd was infuriated and instead began hurling stones at the police and pillaging European property, initiating three days of violent and destructive riots. The Force Publique, the colonial army, was called into service and suppressed the revolt with considerable brutality. In the wake of the riots, Kasa-Vubu and his lieutenants were arrested.
Unlike earlier expressions of discontent, the grievances were conveyed primarily by uneducated urban residents, not "évolués". Popular opinion in Belgium was one of extreme shock and surprise. An investigative commission found the riots to be the culmination of racial discrimination, overcrowding, unemployment, and wishes for more political self-determination. On 13 January the administration announced several reforms, and the Belgian king, Baudouin, declared that independence would be granted to the Congo in the future.

Meanwhile, discontent surfaced among the MNC leadership, who were bothered by Lumumba's domination of the party's politics. Relations between Lumumba and Kalonji also grew tense, as the former was upset with how the latter was transforming the Kasai branch into an exclusively Luba group and antagonising other tribes. This culminated in the split of the party into the MNC-Lumumba/MNC-L under Lumumba and the MNC-Kalonji/MNC-K under Kalonji and Iléo. The latter began advocating federalism. Adoula left the organisation. Now leading his own faction and facing competition from ABAKO, Lumumba became increasingly strident in his demands for independence. Following an October riot in Stanleyville he was arrested. Nevertheless, his influence and that of the MNC-L continued to grow rapidly. The party advocated a strong unitary state, nationalism, and the termination of Belgian rule, and began forming alliances with regional groups such as the Kivu-based Centre du Regroupement Africain (CEREA).

Though the Belgians supported a unitary system over the federal models suggested by ABAKO and CONAKAT, they and more moderate Congolese were unnerved by Lumumba's increasingly extremist attitudes. With the implicit support of the colonial administration, the moderates formed the Parti National du Progrès (PNP) under the leadership of Paul Bolya and Albert Delvaux. It advocated centralisation, respect for traditional elements, and close ties with Belgium. In southern Léopoldville Province, a socialist-federalist party, the Parti Solidaire Africain (PSA), was founded. Antoine Gizenga served as its president, and Cléophas Kamitatu was in charge of the Léopoldville Province chapter.

Following the riots in Léopoldville of 4–7 January 1959, and in Stanleyville on 31 October 1959, the Belgians realised they could not maintain control of such a vast country in the face of rising demands for independence. Belgian and Congolese political leaders held a Round Table Conference in Brussels beginning on 18 January 1960. At the end of the conference, on 27 January 1960, it was announced that elections would be held in the Congo on 22 May 1960, and full independence granted on 30 June 1960. The elections produced the nationalist Patrice Lumumba as prime minister and Joseph Kasa-Vubu as president. On independence the country adopted the name "Republic of Congo" (République du Congo). The French colony of Middle Congo (Moyen Congo) also chose the name Republic of Congo upon its independence, so the two countries are more commonly known as Congo-Léopoldville and Congo-Brazzaville, after their capital cities.

In 1960, the country was very unstable; regional tribal leaders held far more power than the central government, and with the departure of the Belgian administrators, almost no skilled bureaucrats remained in the country. The first Congolese had graduated from university only in 1956, and very few in the new nation had any idea how to manage a country of such size.
On 5 July 1960, a military mutiny by Congolese soldiers against their European officers broke out in the capital, and rampant looting began. On 11 July 1960 the richest province of the country, Katanga, seceded under Moïse Tshombe. The United Nations sent 20,000 peacekeepers to protect Europeans in the country and try to restore order. Western paramilitaries and mercenaries, often hired by mining companies to protect their interests, also began to pour into the country. In this period Congo's second richest province, Kasai, also announced its independence, on 8 August 1960.

After trying to get help from the United States and the United Nations, Prime Minister Lumumba turned to the USSR for assistance. Nikita Khrushchev agreed to help, offering advanced weaponry and technical advisors. The United States viewed the Soviet presence as an attempt to take advantage of the situation and gain a proxy state in sub-Saharan Africa. UN forces were ordered to block any shipments of arms into the country. The United States also looked for a way to replace Lumumba as leader. President Kasa-Vubu had clashed with Prime Minister Lumumba and advocated an alliance with the West rather than the Soviets. The U.S. sent weapons and CIA personnel to aid forces allied with Kasa-Vubu and combat the Soviet presence. On 14 September 1960, with U.S. and CIA support, Colonel Joseph Mobutu overthrew the government and arrested Lumumba. A technocratic government, the College of Commissioners-General, was established.

On 17 January 1961 Mobutu sent Lumumba to Élisabethville (now Lubumbashi), the capital of Katanga. In full view of the press he was beaten and forced to eat copies of his own speeches. For three weeks afterward, he was not seen or heard from. Then Katangan radio announced, implausibly, that he had escaped and been killed by villagers. It was soon clear that in fact he had been tortured and killed, along with two others, shortly after his arrival. In 2001, a Belgian inquiry established that he had been shot by Katangan gendarmes in the presence of Belgian officers, under Katangan command. Lumumba had been beaten and then shot by a firing squad together with two allies; his body was buried, later exhumed, dismembered, and dissolved in acid.

In Stanleyville, those loyal to the deposed Lumumba set up a rival government under Antoine Gizenga which lasted from 31 March 1961 until it was reintegrated on 5 August 1961. After some reverses, UN and Congolese government forces succeeded in recapturing the breakaway provinces of South Kasai on 30 December 1961 and Katanga on 15 January 1963. A new crisis erupted with the Simba Rebellion of 1964–1965, which saw half the country taken by the rebels. European mercenaries and US and Belgian troops were called in by the Congolese government to defeat the rebellion. Unrest and rebellion plagued the government until November 1965, when Lieutenant General Joseph-Désiré Mobutu, by then commander-in-chief of the national army, seized control of the country and declared himself president for the next five years. Mobutu quickly consolidated his power, despite the Stanleyville mutinies of 1966 and 1967, and was elected unopposed as president in 1970 for a seven-year term.

Embarking on a campaign of cultural awareness, President Mobutu renamed the country the "Republic of Zaire" in 1971 and required citizens to adopt African names and drop their French-language ones. The name comes from Portuguese, adapted from the Kongo word "nzere" or "nzadi" ("river that swallows all rivers").
Among other changes, Léopoldville became Kinshasa and Katanga became Shaba. Relative peace and stability prevailed until 1977 and 1978, when Katangan rebels of the Front for Congolese National Liberation, based in the People's Republic of Angola, launched the Shaba I and Shaba II invasions into the southeastern Shaba region. These rebels were driven out with the aid of French and Belgian paratroopers plus Moroccan troops. An Inter-African Force remained in the region for some time afterwards.

Zaire remained a one-party state in the 1980s. Although Mobutu successfully maintained control during this period, opposition parties, most notably the Union pour la Démocratie et le Progrès Social (UDPS), were active. Mobutu's attempts to quell these groups drew significant international criticism. As the Cold War came to a close, internal and external pressures on Mobutu increased. In late 1989 and early 1990, Mobutu was weakened by a series of domestic protests, by heightened international criticism of his regime's human rights practices, by a faltering economy, and by government corruption, most notably his own massive embezzlement of government funds for personal use. In April 1990, Mobutu declared the Third Republic, agreeing to a limited multi-party system with free elections and a constitution. As details of the reforms were delayed, soldiers in September 1991 began looting Kinshasa to protest their unpaid wages. Two thousand French and Belgian troops, some of whom were flown in on U.S. Air Force planes, arrived to evacuate the 20,000 endangered foreign nationals in Kinshasa.

In 1992, after previous similar attempts, the long-promised Sovereign National Conference was staged, encompassing over 2,000 representatives from various political parties. The conference gave itself a legislative mandate and elected Archbishop Laurent Monsengwo Pasinya as its chairman, along with Étienne Tshisekedi wa Mulumba, leader of the UDPS, as prime minister. By the end of the year Mobutu had created a rival government with its own prime minister. The ensuing stalemate produced a compromise merger of the two governments into the High Council of Republic-Parliament of Transition (HCR-PT) in 1994, with Mobutu as head of state and Kengo wa Dondo as prime minister. Although presidential and legislative elections were scheduled repeatedly over the next two years, they never took place.

By 1996, tensions from the war and genocide in neighboring Rwanda had spilled over into Zaire. Rwandan Hutu militia forces (Interahamwe), who had fled Rwanda following the ascension of a Tutsi-led government, had been using Hutu refugee camps in eastern Zaire as bases for incursions into Rwanda. In October 1996 Rwandan forces attacked refugee camps in the Rusizi River plain, near where the Congolese, Rwandan and Burundian borders meet, scattering the refugees. They took Uvira, then Bukavu, Goma and Mugunga. Hutu militia forces soon allied with the Zairian armed forces (FAZ) to launch a campaign against Congolese ethnic Tutsis in eastern Zaire. In turn, these Tutsis formed a militia to defend themselves against attacks. When the Zairian government began to escalate the massacres in November 1996, Tutsi militias erupted in rebellion against Mobutu. The Tutsi militia was soon joined by various opposition groups and supported by several countries, including Rwanda and Uganda. This coalition, led by Laurent-Désiré Kabila, became known as the Alliance des Forces Démocratiques pour la Libération du Congo-Zaïre (AFDL).
The AFDL, now seeking the broader goal of ousting Mobutu, made significant military gains in early 1997. Various Zairean politicians who had unsuccessfully opposed Mobutu's dictatorship for many years now saw an opportunity in the invasion of Zaire by two of the region's strongest military forces. Following failed peace talks between Mobutu and Kabila in May 1997, Mobutu left the country on 16 May. The AFDL entered Kinshasa unopposed a day later, and Kabila named himself president, reverting the name of the country to the Democratic Republic of the Congo. Kabila himself arrived in Kinshasa on 20 May and consolidated power around himself and the AFDL.

Kabila demonstrated little ability to manage the problems of his country, and lost his allies. To counterbalance the power and influence of Rwanda in the DRC, Ugandan troops created another rebel movement, the Movement for the Liberation of Congo (MLC), led by the Congolese warlord Jean-Pierre Bemba. The rebels attacked in August 1998, backed by Rwandan and Ugandan troops. Soon afterwards, Angola, Namibia, and Zimbabwe became involved militarily in the Congo, with Angola and Zimbabwe supporting the government. While the six African governments involved in the war signed a ceasefire accord in Lusaka in July 1999, the Congolese rebels did not, and the ceasefire broke down within months.

Kabila was assassinated in 2001 by a bodyguard, Rashidi Kasereka, 18, who was then shot dead, according to Justice Minister Mwenze Kongolo. Another account of the assassination says that the real killer escaped. Kabila was succeeded by his son, Joseph. Upon taking office, Joseph Kabila called for multilateral peace talks to end the war. He partly succeeded when a further peace deal was brokered between him, Uganda, and Rwanda, leading to the apparent withdrawal of foreign troops. For a time, the Ugandans and the MLC still held a wide section of the north of the country; Rwandan forces and their front, the Rassemblement Congolais pour la Démocratie (RCD), controlled a large section of the east; and government forces or their allies held the west and south of the country. There were reports that the conflict was being prolonged as a cover for extensive looting of the country's substantial natural resources, including diamonds, copper, zinc, and coltan.

The conflict was reignited in January 2002 by ethnic clashes in the northeast, and both Uganda and Rwanda then halted their withdrawal and sent in more troops. Talks between Kabila and the rebel leaders, held in Sun City, lasted a full six weeks, beginning in April 2002. In June, they signed a peace accord under which Kabila would share power with former rebels. By June 2003, all foreign armies except those of Rwanda had pulled out of Congo. Few people in the Congo have been unaffected by the conflict. A survey conducted in 2009 by the ICRC and Ipsos showed that three-quarters (76%) of the people interviewed had been affected in some way, either personally or through the wider consequences of armed conflict.

The response of the international community has been incommensurate with the scale of the disaster resulting from the war in the Congo. Its support for political and diplomatic efforts to end the war has been relatively consistent, but it has taken no effective steps to abide by repeated pledges to demand accountability for the war crimes and crimes against humanity that were routinely committed in the Congo. The United Nations Security Council and the U.N.
Secretary-General have frequently denounced human rights abuses and the humanitarian disaster that the war unleashed on the local population, but have shown little will to tackle the responsibility of occupying powers for the atrocities taking place in areas under their control, areas where the worst violence in the country took place. In particular, Rwanda and Uganda have escaped any significant sanction for their role.

The DR Congo was run by a transitional government from July 2003 until elections were completed. A constitution was approved by voters, and on 30 July 2006 the Congo held its first multi-party elections since independence in 1960. Joseph Kabila took 45% of the votes and his opponent Jean-Pierre Bemba 20%. The disputed outcome set off fighting between supporters of the two parties from 20 to 22 August 2006 in the streets of the capital, Kinshasa. Sixteen people died before police and MONUC took control of the city. A run-off election was held on 29 October 2006, which Kabila won with 58% of the vote. Bemba decried election "irregularities". On 6 December 2006 Joseph Kabila was sworn in as president.

In December 2011, Joseph Kabila was re-elected for a second term as president. After the results were announced on 9 December, there was violent unrest in Kinshasa and Mbuji-Mayi, where official tallies showed that a strong majority had voted for the opposition candidate Étienne Tshisekedi. Official observers from the Carter Center reported that returns from almost 2,000 polling stations in areas where support for Tshisekedi was strong had been lost and not included in the official results. They described the election as lacking credibility. On 20 December, Kabila was sworn in for a second term, promising to invest in infrastructure and public services. However, Tshisekedi maintained that the result of the election was illegitimate and said that he intended to "swear himself in" as president.

On 19 January 2015, protests led by students at the University of Kinshasa broke out. The protests began following the announcement of a proposed law that would allow Kabila to remain in power until a national census could be conducted (elections had been planned for 2016). By Wednesday 21 January, clashes between police and protesters had claimed at least 42 lives (although the government claimed only 15 people had been killed). Similarly, in September 2016, violent protests were met with brutal force by the police and Republican Guard soldiers. Opposition groups claimed 80 dead, including a students' union leader. From Monday 19 September, Kinshasa residents, as well as residents elsewhere in the Congo, were mostly confined to their homes. Police arrested anyone remotely connected to the opposition, as well as innocent onlookers. Government propaganda on television, and the actions of covert government groups in the streets, targeted the opposition as well as foreigners. The president's mandate was due to end on 19 December 2016, but no plans were made to elect a replacement at that time, and this caused further protests.

On 30 December 2018, the presidential election to determine Kabila's successor was held. On 10 January 2019, the electoral commission announced opposition candidate Félix Tshisekedi as the winner of the vote, and he was officially sworn in as president on 24 January 2019. At the inauguration ceremony, Félix Tshisekedi appointed Vital Kamerhe as his chief of staff.
The inability of the state and of the world's largest United Nations peacekeeping force to provide security throughout the vast country led to the emergence of up to 70 armed groups by around 2016, perhaps the largest number in the world. By 2018, the number of armed groups had increased to about 120. Armed groups are often accused of being proxies of, or being supported by, regional governments interested in Eastern Congo's vast mineral wealth. Some argue that much of the national army's failure to provide security is strategic on the part of the government, which lets the army profit from illegal logging and mining operations in return for loyalty. Different rebel groups often target civilians by ethnicity, and militias frequently coalesce around local ethnic self-defense groups known as "Mai-Mai".

Laurent Nkunda, with other soldiers from RCD-Goma who had been integrated into the army, defected and formed the National Congress for the Defence of the People (CNDP). Starting in 2004, the CNDP, believed to be backed by Rwanda as a way to tackle the Hutu group the Democratic Forces for the Liberation of Rwanda (FDLR), rebelled against the government, claiming to protect the Banyamulenge (Congolese Tutsis). In 2009, after a deal between the DRC and Rwanda, Rwandan troops entered the DRC, arrested Nkunda, and were allowed to pursue FDLR militants. The CNDP signed a peace treaty with the government under which its soldiers would be integrated into the national army. In April 2012, the leader of the CNDP, Bosco Ntaganda, and troops loyal to him mutinied, claiming a violation of the peace treaty, and formed a rebel group, the March 23 Movement (M23), which was believed to be backed by Rwanda. On 20 November 2012, M23 took control of Goma, a provincial capital with a population of one million people. The UN authorized the Force Intervention Brigade (FIB), the first UN peacekeeping force with a mandate to neutralize opposition rather than a purely defensive mandate, and the FIB quickly defeated M23. The FIB was then to fight the FDLR, but was hampered by the efforts of the Congolese government, which some believe tolerates the FDLR as a counterweight to Rwandan interests. Since 2017, fighters from M23, most of whom had fled into Uganda and Rwanda (both of which were believed to have supported them), have been crossing back into the DRC amid the rising crisis over Kabila's extension of his term limit, and the DRC has reported clashes with M23.

The Allied Democratic Forces (ADF) has been waging an insurgency in the Democratic Republic of the Congo and is blamed for the Beni massacre in 2016. While the Congolese army maintains that the ADF is an Islamist insurgency, most observers feel that it is only a criminal group interested in gold mining and logging. There are claims that the ADF has aligned itself with the Islamic State of Iraq and the Levant, though there is no firm proof of actual cooperation.

Ethnic conflict in Kivu has often involved the Congolese Tutsis known as the Banyamulenge, a cattle-herding group of Rwandan origin derided as outsiders, and other ethnic groups who consider themselves indigenous. Additionally, neighboring Burundi and Rwanda, which have a thorny relationship, are accused of being involved: Rwanda is accused of training Burundian rebels who have joined with Mai-Mai against the Banyamulenge, and the Banyamulenge are accused of harboring the RNC, a Rwandan opposition group supported by Burundi.
In June 2017, a group mostly based in South Kivu, the National People's Coalition for the Sovereignty of Congo (CNPSC), led by William Yakutumba, was formed and became the strongest rebel group in the east, even briefly capturing a few strategic towns. The rebel group is one of three alliances of various Mai-Mai militias and has been referred to as the Alliance of Article 64, a reference to Article 64 of the constitution, which says the people have an obligation to fight the efforts of those who seek to take power by force, here aimed at President Kabila. The Bembe warlord Yakutumba's Mai-Mai Yakutumba is the largest component of the CNPSC and has had friction with the Congolese Tutsis, who often serve as commanders in army units. In May 2019, Banyamulenge fighters killed a Banyindu traditional chief, Kawaza Nyakwana. Later in 2019, a coalition of militias from the Bembe, Bafuliru and Banyindu is estimated to have burnt more than 100 mostly Banyamulenge villages and stolen tens of thousands of cattle from the largely cattle-herding Banyamulenge. About 200,000 people fled their homes.

Clashes between Hutu militias and militias of other ethnic groups have also been prominent. In 2012, the Congolese army, in its attempt to crush the Rwandan-backed and Tutsi-dominated CNDP and M23 rebels, empowered and used Hutu groups such as the FDLR and a Hutu-dominated Mai-Mai group called Nyatura as proxies in its fight. The Nyatura and FDLR arbitrarily executed up to 264 mostly Tembo civilians in 2012. In 2015, the army then launched an offensive against the FDLR militia. The FDLR are accused of killing at least 14 Nande people in January 2016 and of killing 10 Nande and burning houses in July 2016, while an FDLR-allied group, Maï-Maï Nyatura, is also accused of killing Nande. The Nande-dominated UPDI militia, a Nande militia called Mai-Mai Mazembe, and a militia dominated by the Nyanga people, the Nduma Defense of Congo (NDC), also called Maï-Maï Sheka after its leader Ntabo Ntaberi Sheka, are accused of attacking Hutus. In North Kivu in 2017, an alliance of Mai-Mai groups called the National Movement of Revolutionaries (MNR), which includes Nande Mai-Mai leaders from groups such as Corps du Christ and Mai-Mai Mazembe, began attacks in June 2017. Another alliance of Mai-Mai groups is the CMC, which brings together Hutu Nyatura militias and is active along the border between North Kivu and South Kivu.

In northern Katanga Province starting in 2013, the Pygmy Batwa people, whom the Luba people often exploit and allegedly enslave, formed militias, such as the "Perci" militia, and attacked Luba villages. A Luba militia known as "Elements" or "Elema" attacked back, notably killing at least 30 people in the "Vumilia 1" displaced-people camp in April 2015. Since the start of the conflict, hundreds have been killed and tens of thousands have been displaced from their homes. The weapons used in the conflict are often arrows and axes, rather than guns. The Elema also began fighting the government, mainly with machetes, bows and arrows, in Congo's Haut-Katanga and Tanganyika provinces. Government forces fought alongside a tribe known as the Abatembo and targeted civilians of the Luba and Tabwa tribes, who were believed to be sympathetic to the Elema.

In the Kasaï-Central province, starting in 2016, the largely Luba Kamwina Nsapu militia, led by Kamwina Nsapu, attacked state institutions. The leader was killed by the authorities in August 2016, and the militia reportedly took revenge by attacking civilians.
By June 2017, more than 3,300 people had been killed and 20 villages had been completely destroyed, half of them by government troops. The militia expanded into the neighboring Kasaï-Oriental, Kasaï and Lomami provinces. A traditional chief critical of Kabila had been killed by security forces, precipitating a conflict that has killed more than 3,000 people since. The UN discovered dozens of mass graves. Rebels and government forces are accused of human rights abuses, as is a state-linked militia called the Bana Mura, which shares a name with the hill in the east where presidential guards train.

The Ituri conflict in the Ituri region of the north-eastern DRC involved fighting between the agriculturalist Lendu and pastoralist Hema ethnic groups, who together made up around 40% of Ituri's population, along with other groups including the Ndo-Okebo and the Nyali. During Belgian rule, the Hema were given privileged positions over the Lendu, and long-time leader Mobutu Sese Seko also favored the Hema. While "Ituri conflict" often refers to the major fighting from 1999 to 2003, fighting existed before and has continued since that time. During the Second Congo War, Ituri was considered the most violent region. An agricultural and religious group from the Lendu people known as the Cooperative for the Development of Congo (CODECO) allegedly re-emerged as a militia in 2017 and began attacking the Hema as well as the Alur people to control the resources in the region, with the Ndo-Okebo and the Nyali also involved in the violence. After disagreements over negotiating with the government and the killing of CODECO's leader, Ngudjolo Duduko Justin, in March 2020, the group splintered and violence spread into new areas. In 2018, more than 100 people were killed and 200,000 people were forced to flee, while in June 2019 attacks by CODECO left 240 people dead and more than 300,000 fleeing; at least 531 civilians were killed by armed groups in Ituri between October 2019 and June 2020.

In October 2009 a new conflict started in Dongo, Sud-Ubangi District, where clashes had broken out over access to fishing ponds. Nearly 900 people were killed on 16–17 December 2018 around Yumbi, a few weeks before the presidential election, when mostly those of the Batende tribe massacred mostly those of the Banunu tribe. About 16,000 people fled to the neighboring Republic of the Congo. It was alleged that it was a carefully planned massacre, involving elements of the national military.
https://en.wikipedia.org/wiki?curid=8022
Geography of the Democratic Republic of the Congo
The Democratic Republic of the Congo forms part of the vast Congo River Basin. The country's only outlet to the Atlantic Ocean is a narrow strip of land on the north bank of the Congo River. The vast, low-lying central area is a plateau-shaped basin sloping toward the west, covered by tropical rainforest and criss-crossed by rivers; a large part of this area has been categorized by the World Wildlife Fund as the Central Congolian lowland forests ecoregion. The forested center is surrounded by mountainous terraces in the west and plateaus merging into savannas in the south and southwest. Dense grasslands extend beyond the Congo River in the north. The high mountains of the Ruwenzori Range are found on the eastern borders with Rwanda and Uganda (see Albertine Rift montane forests for a description of this area).

The Democratic Republic of the Congo lies on the equator, with one-third of the country to the north and two-thirds to the south. The climate is hot and humid in the river basin and cool and dry in the southern highlands, with a cold, alpine climate in the Rwenzori Mountains. South of the equator, the rainy season lasts from October to May; north of the equator, from April to November. Along the equator, rainfall is fairly regular throughout the year. During the wet season, thunderstorms are often violent but seldom last more than a few hours. Because the country lies in the equatorial region, it receives intense direct sunlight, and the resulting high evaporation drives its humidity and heavy rainfall; this hot and wet climate is known as the equatorial climate.

Location: Central Africa, north of Zambia, south of the Central African Republic
Geographic coordinates:
Continent: Africa
Area: "total:" 2,344,858 km2; "land:" 2,267,048 km2; "water:" 77,810 km2
Area - comparative: the 11th-largest country in the world (and 2nd in Africa); it is smaller than Algeria but larger than Greenland and Saudi Arabia. It is slightly larger than the U.S. state of Alaska, three times the size of the state of Texas and about a quarter the size of the United States as a whole.
Land boundaries: "total:" 10,481 km; "border countries:" Angola 2,646 km, Burundi 236 km, Central African Republic 1,747 km, Republic of the Congo 1,229 km, Rwanda 221 km, South Sudan 714 km, Tanzania 479 km, Uganda 877 km, Zambia 2,332 km
Coastline:
Maritime claims: "territorial sea:" "exclusive economic zone:" boundaries with neighbors
Climate: tropical; hot and humid in the equatorial river basin; cooler and drier in the southern highlands; cooler-cold and wetter in the eastern highlands and the Ruwenzori Range; north of the equator, wet season April to October, dry season December to February; south of the equator, wet season November to March, dry season April to October
Terrain: vast central plateau covered by tropical rainforest, surrounded by mountains in the west, plains and savanna in the south/southwest, and grasslands in the north; the high mountains of the Ruwenzori Range lie on the eastern borders.
Elevation extremes: "lowest point:" Atlantic Ocean 0 m; "highest point:" Pic Marguerite on Mont Ngaliema (Mount Stanley) 5,110 m
Natural resources: cobalt, copper, niobium, petroleum, industrial and gem diamonds, gold, silver, zinc, manganese, tin, uranium, coal, hydropower, timber
Land use: "arable land:" 3.09%; "permanent crops:" 0.36%; "other:" 96.55% (2012 est.)
Irrigated land: 105 km2 (2003)
Total renewable water resources: 1,283 km3 (2011)
Freshwater withdrawal: "total:" 0.68 km3/yr (68%/21%/11%); "per capita:" 11.25 m3/yr (2005)
Natural hazards: periodic droughts in the south; Congo River floods (seasonal); active volcanoes in the east, in the Albertine Rift
Environment - current issues: poaching threatens wildlife populations (for example, the painted hunting dog, "Lycaon pictus", is now considered extirpated from the Congo due to human overpopulation and poaching); water pollution; deforestation (chiefly due to land conversion to agriculture by indigenous farmers); refugees responsible for significant deforestation, soil erosion, and wildlife poaching; mining of minerals (coltan, a mineral used in creating capacitors, as well as diamonds and gold) causing environmental damage
Environment - international agreements: "party to:" Biodiversity, Desertification, Endangered Species, Hazardous Wastes, Law of the Sea, Marine Dumping, Nuclear Test Ban, Ozone Layer Protection, Tropical Timber 83, Tropical Timber 94, Wetlands; "signed, but not ratified:" Environmental Modification
Geography - note: the D.R. Congo is one of six African states that straddle the equator, and the largest African state through which the equator passes; a very narrow strip of land controls the lower Congo River and is the country's only outlet to the South Atlantic Ocean; dense tropical rainforest covers the central river basin and eastern highlands.
The extreme points of the Democratic Republic of the Congo are the points that are farther north, south, east or west than any other location in the country.
https://en.wikipedia.org/wiki?curid=8023
Demographics of the Democratic Republic of the Congo This article is about the demographic features of the population of the Democratic Republic of the Congo, including ethnicity, education level, health of the populace, economic status, religious affiliations and other aspects of the population. As many as 250 ethnic groups have been distinguished and named. The most numerous people are the Luba, Mongo, and Bakongo. Although 700 local languages and dialects are spoken, the linguistic variety is bridged both by the use of French and the intermediary languages Kongo, Luba-Kasai, Swahili, and Lingala. According to the total population was in , compared to only 12,184,000 in 1950. The proportion of children below the age of 15 in 2010 was 46.3%, 51.1% was between 15 and 65 years of age, while 2.7% was 65 years or older . Structure of the population (DHS 2013-2014) (Males 45 548, Females 49 134 = 94 682) : Registration of vital events in the Democratic Republic of the Congo is incomplete. The Population Departement of the United Nations prepared the following estimates. Total Fertility Rate (TFR) (Wanted Fertility Rate) and Crude Birth Rate (CBR): Fertility data as of 2013-2014 (DHS Program): More than 250 ethnic groups have been identified and named of which the majority are Bantu. The four largest groups - Mongo, Luba, Kongo (all Bantu), and the Mangbetu-Azande make up about 45% of the population. 5,000 people from Belgium and 5,000 people from Greece currently live in DR Congo. Bantu peoples (80%): Central Sudanic/Ubangian : Nilotic peoples : Pygmy peoples : More than 600,000 pygmies (around 1% of the total population) are believed to live in the DR Congo's huge forests, where they survive by hunting wild animals and gathering fruits. The four major languages in the DRC are French (official), Lingala (a lingua franca trade language), Kingwana (a dialect of Swahili), Kikongo, and Tshiluba. There are over 200 ethnic languages. French is generally the medium of instruction in schools. English is taught as a compulsory foreign language in Secondary and High School around the country. It is a required subject in the Faculty of Economics at major universities around the country and there are numerous language schools in the country that teach it. In the town of Beni, for instance, there is a Bilingual University that offer courses in both French and English. President Kabila himself is fluent in both English and French, as was his father. A survey conducted by the Demographic and Health Surveys program in 2013-2014 indicated that Christians constituted 93.7% of the population (Catholics 29.7%, Protestants 26.8%, and other Christians 37.2%). An indigenous religion, Kimbanguism, has the adherence of 2.8%, while Muslims make up 1.2%. Another estimate found Christianity was followed by 95.8% of the population, according to the Pew Research Center in 2010. The CIA The World Factbook states: Roman Catholic 50%, Protestant 20%, Kimbanguist 10%, Islam 10%, Other (includes Syncretic Sects and Indigenous beliefs) 10%. Joshua Project figures: Roman Catholic 43.9%, Protestant 24.8%, Other Christian 23.7%, Muslim 1.6%, Non-religious 0.6%, Hindu 0.1% other syncretic sects and indigenous beliefs 5.3%. Demographic statistics according to the World Population Review in 2019. The following demographic statistics are from the CIA World Factbook. 
"note": fighting between the Congolese Government and Uganda- and Rwanda-backed Congolese rebels spawned a regional war in DRC in August 1998, which left 2.33 million Congolese internally displaced and caused 412,000 Congolese refugees to flee to surrounding countries (2011 est.) Given the situation in the country and the condition of state structures, it is extremely difficult to obtain reliable data however evidence suggests that DRC continues to be a destination country for immigrants in spite of recent declines. Immigration is seen to be very diverse in nature, with refugees and asylum-seekers - products of the numerous and violent conflicts in the Great Lakes Region - constituting an important subset of the population in the country. Additionally, the country’s large mine operations attract migrant workers from Africa and beyond and there is considerable migration for commercial activities from other African countries and the rest of the world, but these movements are not well studied. Transit migration towards South Africa and Europe also plays a role. Immigration in the DRC has decreased steadily over the past two decades, most likely as a result of the armed violence that the country has experienced. According to the International Organization for Migration, the number of immigrants in the DRC has declined from just over 1 million in 1960, to 754,000 in 1990, to 480,000 in 2005, to an estimated 445,000 in 2010. Valid figures are not available on migrant workers in particular, partly due to the predominance of the informal economy in the DRC. Data are also lacking on irregular immigrants, however given neighbouring country ethnic links to nationals of the DRC, irregular migration is assumed to be a significant phenomenon in the country. Figures on the number of Congolese nationals abroad vary greatly depending on the source, from 3 to 6 million. This discrepancy is due to a lack of official, reliable data. Emigrants from the DRC are above all long-term emigrants, the majority of which live within Africa and to a lesser extent in Europe; 79.7% and 15.3% respectively, according to estimates on 2000 data. Most Congolese emigrants however, remain in Africa, with new destination countries including South Africa and various points en route to Europe. In addition to being a host country, the DRC has also produced a considerable number of refugees and asylum-seekers located in the region and beyond. These numbers peaked in 2004 when, according to UNHCR, there were more than 460,000 refugees from the DRC; in 2008, Congolese refugees numbered 367,995 in total, 68% of which were living in other African countries. The table below shows DRC born people who have emigrated abroad in selected Western countries (although it excludes their descendants). These are only estimates and do not account for Congolese migrants residing illegally in these and other countries.
https://en.wikipedia.org/wiki?curid=8024
Economy of the Democratic Republic of the Congo The economy of the Democratic Republic of the Congo has declined drastically since the mid-1980s, despite the country's vast potential in natural resources and mineral wealth. At the time of its independence in 1960, the Democratic Republic of the Congo was the second most industrialized country in Africa after South Africa. It boasted a thriving mining sector, and its agriculture sector was relatively productive. Since then, decades of corruption, war and political instability have been a severe detriment to further growth, today leaving the DRC with a GDP per capita and an HDI rating that rank among the world's lowest and make the DRC one of the most fragile and least developed countries in the world. Despite this, the DRC is quickly modernizing; it tied with Malaysia for the largest positive change in HDI development in 2016. Government projects include strengthening the health system for maternal and child health, expansion of electricity access, water supply reconstruction, and urban and social rehabilitation programs. The two recent conflicts (the First and Second Congo Wars), which began in 1996, dramatically reduced national output and government revenue, increased external debt, and resulted in the deaths of more than five million people from war and associated famine and disease. Malnutrition affects approximately two thirds of the country's population. Agriculture is the mainstay of the economy, accounting for 57.9% of GDP in 1997. In 1996, agriculture employed 66% of the work force. Rich in minerals, the Democratic Republic of the Congo has a difficult history of predatory mineral extraction, which has been at the heart of many struggles within the country for many decades, but particularly in the 1990s. The economy of the second largest country in Africa relies heavily on mining. However, much economic activity occurs in the informal sector and is not reflected in GDP data. In 2006 Transparency International ranked the Democratic Republic of the Congo 156 out of 163 countries in the Corruption Perception Index, tying Bangladesh, Chad, and Sudan with a 2.0 rating. President Joseph Kabila established the Commission of Repression of Economic Crimes upon his ascension to power in 2001. The conflicts in the DRC have been fought over water, minerals, and other resources. Political agendas have worsened the economy: in times of crisis the elite benefit while the general populace suffers, a situation aggravated by corrupt national and international corporations, which instigate and tolerate the fighting over resources because they profit from it. A large proportion of fatalities in the country are attributed to a lack of basic services. The influx of refugees since the war in 1998 has only worsened the problem of poverty. Taxpayer money in the DRC is often misappropriated by the country's corrupt leaders, who use it to benefit themselves instead of the citizens. The DRC is consistently rated among the lowest countries on the UN Human Development Index. Forced labor was important for the rural sector. The corporations that dominated the economy were mostly owned by Belgium, but British capital also played an important role. The 1950s were a period of rising income and expectations. The Congo was said to have the best public health system in Africa, but there was also a huge wealth disparity.
Belgian companies favored workers from certain areas and sent them to work in other regions, restricting opportunities for everyone else. Favored groups also received better education and were able to secure jobs for members of their own ethnic group, which increased tensions. In 1960 there were only 16 university graduates out of a population of 20 million. Belgium still held economic power, and independence gave little opportunity for improvement. Common refrains included "no elite, no trouble" and "before independence = after independence". When the Belgians left, most of the government officials and educated residents left with them. Before independence, just 3 of 5,000 government jobs were held by Congolese people. The resulting loss of institutional knowledge and human capital crippled the government. After the Congo Crisis, Mobutu arose as the country's sole ruler and stabilized the country politically. Economically, however, the situation continued to decline, and by 1979 purchasing power was only 4% of its 1960 level. Starting in 1976, the IMF provided stabilizing loans to the dictatorship. Much of the money was embezzled by Mobutu and his circle. This was no secret, as the 1982 report by the IMF's envoy Erwin Blumenthal documented. He stated that it is "alarmingly clear that the corruptive system in Zaire with all its wicked and ugly manifestations, its mismanagement and fraud will destroy all endeavors of international institutions, of friendly governments, and of the commercial banks towards recovery and rehabilitation of Zaire's economy". Blumenthal indicated that there was "no chance" that creditors would ever recover their loans. Yet the IMF and the World Bank continued to lend money that was either embezzled, stolen, or "wasted on elephant projects". "Structural adjustment programmes" implemented as a condition of IMF loans cut support for health care, education, and infrastructure. Poor infrastructure, an uncertain legal framework, corruption, and a lack of openness in government economic policy and financial operations remain a brake on investment and growth. A number of International Monetary Fund (IMF) and World Bank missions have met with the new government to help it develop a coherent economic plan, but the associated reforms are on hold. Faced with continued currency depreciation, the government resorted to more drastic measures and in January 1999 banned the widespread use of American dollars for all domestic commercial transactions, a position it later adjusted. The government has been unable to provide foreign exchange for economic transactions, and it has resorted to printing money to finance its expenditure. Growth was negative in 2000 because of the difficulty of meeting the conditions of international donors, continued low prices of key exports, and post-coup instability. Although depreciated, the Congolese franc has been stable for a few years (Ndonda, 2014). Conditions improved in late 2002 with the withdrawal of a large portion of the invading foreign troops. A number of IMF and World Bank missions have met with the government to help it develop a coherent economic plan, and President Kabila has begun implementing reforms. The DRC is embarking on the establishment of special economic zones (SEZ) to encourage the revival of its industry. The first SEZ was planned to come into being in 2012 in N'Sele, a commune of Kinshasa, and was to focus on agro-industries.
The Congolese authorities also planned to open another zone dedicated to mining (in Katanga) and a third dedicated to cement (in the Bas-Congo). The program has three phases, each with its own objectives. Phase I was the precursor to actual investment in the Special Economic Zone: policymakers agreed on a framework, the framework was studied ahead of the zone's establishment, and potential market demand for the land was forecast. Stage one of Phase II involved submitting laws for the Special Economic Zone and finding good sites for businesses, and there is currently an effort to help the government attract foreign investment. Stage two of Phase II has not yet begun; it involves assisting the government in creating a regulatory framework, drawing up an overall plan for the site, assessing the environmental impact of the project, and estimating its cost and the likely return on the investment. Phase III involves the World Bank creating a transaction phase that will keep everything competitive. Options are being explored for handing the program over to the World Bank, which could be very beneficial for the western part of the country. The following table shows the main economic indicators in 1980–2017. Ongoing conflicts dramatically reduced government revenue and increased external debt. As Reyntjens wrote, "Entrepreneurs of insecurity are engaged in extractive activities that would be impossible in a stable state environment. The criminalization context in which these activities occur offers avenues for considerable factional and personal enrichment through the trafficking of arms, illegal drugs, toxic products, mineral resources and dirty money." Ethnic rivalries were aggravated by economic interests, and looting and coltan smuggling took place. Illegal monopolies formed in the country, using forced child labor in the mines and as soldiers. National parks were overrun with people looking to exploit minerals and resources. Poverty and hunger, increased by the war, intensified the hunting of rare wildlife. Education was denied while the country was under foreign control, and very few people make money from the country's minerals. Natural resources are not the root cause of the continued fighting in the region; however, the competition over them has become an incentive to keep fighting.[1] The DRC's level of economic freedom is one of the lowest in the world, putting it in the repressed category. Armed militias fight the government in the eastern section of the country over the mining sector and government corruption, and weak policies lead to the instability of the economy. Human rights abuses also damage economic activity: the DRC has a 7% unemployment rate, yet one of the lowest GDPs per capita in the world. A major problem for people trying to start their own companies is that the minimum capital needed to launch a company is five times the average annual income, and prices are regulated by the government; this all but forces people to work for the larger, more corrupt businesses, or else have no work at all. It is hard for the DRC to encourage foreign trade because of these regulatory barriers.
In 2003, 125 companies contributed to the conflict in the DRC, an indication of the extent of the corruption. With the help of the International Development Association, the DRC has worked toward the re-establishment of social services. This has been done by giving 15 million people access to basic health services and by distributing bed nets to prevent the spread of malaria. Under the Emergency Demobilization and Reintegration Program, more than 107,000 adult and 34,000 child soldiers were demobilized. The travel time from Lubumbashi to Kasomeno in Katanga fell from seven days to two hours because of improved roads, which led to a 60% decrease in the prices of main goods. With the help of the IFC, KfW, and the EU, the DRC improved its business environment by reducing the time it takes to create a business by 51%, reducing the time it takes to get construction permits by 54%, and reducing the number of taxes from 118 to 30. Improvements in health have been noticeable; in particular, the share of deliveries attended by trained staff jumped from 47% to 80%. In education, 14 million textbooks were provided to children, school completion rates have increased, and higher education was made available to students who chose to pursue it. The Democratic Republic of Congo ranks 183rd, at the low end of the ease of doing business scale as ranked by the World Bank. This measures the difficulties of starting a business, enforcing contracts, paying taxes, resolving insolvency, protecting investors, trading across borders, getting credit, getting electricity, dealing with construction permits and registering property (World Bank 2014:8). The IMF plans to give the DRC a $1 billion loan, following a two-year suspension imposed after the country failed to give details about a mining deal between one of its state-owned mines and an Israeli billionaire, Dan Gertler. The loan may be necessary for the country because elections for the next president are due in December 2016, and the cost of funding them is estimated at around $1.1 billion. The biggest problem with the vote is getting a country of 68 million people, the size of Western Europe, to polling stations with less than 1,860 miles of paved roads. Agriculture is the mainstay of the economy, accounting for 57.9% of GDP in 1997. Main cash crops include coffee, palm oil, rubber, cotton, sugar, tea, and cocoa. Food crops include cassava, plantains, maize, groundnuts, and rice. In 1996, agriculture employed 66% of the work force. The Democratic Republic of Congo also possesses 50 percent of Africa's forests and a river system that could provide hydro-electric power to the entire continent, according to a United Nations report on the country's strategic significance and its potential role as an economic power in central Africa.
Fish are the single most important source of animal protein in the DRC. Total production of marine, river, and lake fisheries in 2003 was estimated at 222,965 tons, all but 5,000 tons of it from inland waters. PEMARZA, a state agency, carries out marine fishing. Forests cover 60 percent of the total land area. There are vast timber resources, and commercial development of the country's 61 million hectares (150 million acres) of exploitable wooded area is only beginning. The Mayumbe area of Bas-Congo was once the major center of timber exploitation, but the forests in this area were nearly depleted. The more extensive forest regions of the central cuvette and of the Ubangi River valley have increasingly been tapped. Roundwood removals were estimated at 72,170,000 m3 in 2003, about 95 percent of it for fuel. Some 14 species are presently being harvested. Exports of forest products in 2003 totalled $25.7 million. Foreign capital is necessary for forestry to expand, and the government recognizes that changes in the tax structure and export procedures will be needed to facilitate economic growth. Rich in minerals, the DRC has a difficult history of predatory mineral extraction, which has been at the heart of many struggles within the country for many decades, but particularly in the 1990s. The economy of the Democratic Republic of the Congo, the second largest country in Africa, has historically relied heavily on mining, but much of that activity is no longer reflected in GDP data because the mining industry has long suffered from an "uncertain legal framework, corruption, and a lack of transparency in government policy." In her book "The Real Economy of Zaire", MacGaffey described a second, often illegal economy, "system D", which operates outside the official economy and is therefore not reflected in GDP (MacGaffey 1991:27). Companies such as MIBA, EMAXON and De Beers are involved in the exploitation of mineral substances. The economy of the second largest country in Africa relies heavily on mining. The Congo is the world's largest producer of cobalt ore, and a major producer of copper and industrial diamonds. The Congo has 70% of the world's coltan and more than 30% of the world's diamond reserves, mostly in the form of small, industrial diamonds. Coltan is a major source of tantalum, which is used in the fabrication of electronic components in computers and mobile phones. In 2002, tin was discovered in the east of the country, but, to date, mining has been on a small scale. Smuggling of the conflict minerals coltan and cassiterite (ores of tantalum and tin, respectively) has helped fuel the war in the Eastern Congo. Katanga Mining Limited, a London-based company, owns the Luilu Metallurgical Plant, which has a capacity of 175,000 tonnes of copper and 8,000 tonnes of cobalt per year, making it the largest cobalt refinery in the world. After a major rehabilitation program, the company restarted copper production in December 2007 and cobalt production in May 2008. Much economic activity occurs in the informal sector and is not reflected in GDP data. Ground transport in the Democratic Republic of Congo has always been difficult. The terrain and climate of the Congo Basin present serious barriers to road and rail construction, and the distances are enormous across this vast country. Furthermore, chronic economic mismanagement and internal conflict have led to serious under-investment over many years.
On the other hand, the Democratic Republic of Congo has thousands of kilometres of navigable waterways, and traditionally water transport has been the dominant means of moving around approximately two-thirds of the country.
https://en.wikipedia.org/wiki?curid=8025
Politics of the Democratic Republic of the Congo Politics of the Democratic Republic of Congo take place in a framework of a republic in transition from a civil war to a semi-presidential republic. On 18 and 19 December 2005, a successful nationwide referendum was carried out on a draft constitution, which set the stage for elections in 2006. The voting process, though technically difficult due to the lack of infrastructure, was facilitated and organized by the Congolese Independent Electoral Commission with support from the UN mission to the Congo (MONUC). Early UN reports indicate that the voting was for the most part peaceful, but it spurred violence in many parts of the war-torn east and in the Kasais. In 2006, many Congolese complained that the constitution was a rather ambiguous document and were unaware of its contents. This was due in part to the high rates of illiteracy in the country. However, interim President Kabila urged the Congolese to vote 'Yes', saying the constitution was the country's best hope for peace in the future. 25 million Congolese turned out for the two-day balloting. According to results released in January 2006, the constitution was approved by 84% of voters. The new constitution also aims to decentralize authority, dividing the vast nation into 25 semi-autonomous provinces drawn along ethnic and cultural lines. The country's first democratic elections in four decades were held on 30 July 2006. From the day of the arguably ill-prepared independence of the Democratic Republic of the Congo, tensions between the powerful leaders of the political elite, such as Joseph Kasa Vubu, Patrice Lumumba, Moise Tshombe, Joseph Mobutu and others, jeopardized the political stability of the new state. From Tshombe's secession of Katanga, to the assassination of Lumumba, to the two coups d'état of Mobutu, the country has known periods of true nationwide peace, but virtually no period of genuine democratic rule. The regime of President Mobutu Sese Seko lasted 32 years (1965–1997), during all but the first seven years of which the country was named Zaire. His dictatorship operated as a one-party state, with most power concentrated in President Mobutu, who was simultaneously the head of both the party and the state through the Popular Movement of the Revolution (MPR), alongside a series of essentially rubber-stamp institutions. One particularity of the regime was its claim to be striving for an "authentic" system, different from Western or Soviet influences. This lasted roughly from the establishment of Zaire in 1971 until the official beginning of the transition towards democracy on 24 April 1990. This was as true at the level of ordinary people as everywhere else: people were ordered by law to drop their Western Christian names; the titles Mr. and Mrs. were abandoned for the male and female versions of the French word for "citizen"; men were forbidden to wear suits, and women to wear pants. At the institutional level, many institutions also changed names, but the end result was a system that borrowed from both models: every corporation, whether financial or union, as well as every division of the administration, was set up as a branch of the party. CEOs, union leaders, and division directors were each sworn in as section presidents of the party. Every aspect of life was regulated to some degree by the party and by the will of its founding president, Mobutu Sese Seko.
Most of the petty aspects of the regime disappeared after 1990 with the beginning of the democratic transition. Democratization proved fairly short-lived, as Mobutu's power plays dragged the process out until 1997, when forces led by Laurent Kabila toppled the regime after a nine-month military campaign. The government of former president Mobutu Sese Seko was toppled by this rebellion, led by Laurent Kabila in May 1997 with the support of Rwanda and Uganda. The two backers later turned against Kabila and supported a rebellion against him in August 1998. Troops from Zimbabwe, Angola, Namibia, Chad, and Sudan intervened to support the Kinshasa regime. A cease-fire was signed on 10 July 1999 by the DROC, Zimbabwe, Angola, Uganda, Namibia, Rwanda, and Congolese armed rebel groups, but fighting continued. Under Laurent Kabila's regime, all executive, legislative, and military powers were first vested in the President, Laurent-Désiré Kabila. The judiciary was independent, though the president had the power to appoint and dismiss judges. The president headed a 26-member cabinet dominated by the Alliance of Democratic Forces for the Liberation of Congo (ADFL). Towards the end of the 1990s, Laurent Kabila created and appointed a Transitional Parliament, seated in the buildings of the former Katanga Parliament in the southern city of Lubumbashi, in a move to unite the country and to legitimize his regime. Kabila was assassinated on 16 January 2001, and his son Joseph Kabila was named head of state ten days later. The younger Kabila continued with his father's Transitional Parliament but overhauled the entire cabinet, replacing it with a group of technocrats, with the stated aim of putting the country back on the track of development and bringing the Second Congo War to a decisive end. In October 2002, the new president was successful in getting the occupying Rwandan forces to withdraw from eastern Congo; two months later, an agreement was signed by all remaining warring parties to end the fighting and set up a Transitional Government, the make-up of which would allow representation for all negotiating parties. Two founding documents emerged from this: the Transitional Constitution and the Global and Inclusive Agreement, both of which described and determined the make-up and organization of the Congolese institutions until the planned elections of July 2006, at which time the provisions of the new constitution, democratically approved by referendum in December 2005, took full effect, as indeed happened. Under the Global and All-Inclusive Agreement, signed on 17 December 2002 in Pretoria, there was to be one President and four Vice-Presidents: one from the government, one from the Rally for Congolese Democracy, one from the MLC, and one from civil society. The position of Vice-President expired after the 2006 elections. After three years (2003–06) in the interregnum between two constitutions, the Democratic Republic of the Congo is now under the regime of the Constitution of the Third Republic. The constitution, adopted by referendum in 2005 and promulgated by President Joseph Kabila in February 2006, establishes a decentralized semi-presidential republic, with a separation of powers between the three branches of government (executive, legislative and judiciary) and a distribution of prerogatives between the central government and the provinces. As of 8 August 2017 there are 54 political parties legally operating in the Congo.
On 15 December 2018 the US State Department announced it had decided to evacuate its employees' family members from the Democratic Republic of Congo just before the Congolese elections to choose a successor to President Joseph Kabila. Since the July 2006 elections, the country has been led by a semi-presidential, strongly decentralized state. The executive at the central level is divided between the President and a Prime Minister appointed by the President from the party holding the majority of seats in Parliament. Should there be no clear majority, the President can appoint a "government former", who then has the task of winning the confidence of the National Assembly. The President appoints the government members (ministers) on the proposal of the Prime Minister. In coordination, the President and the government have charge of the executive. The Prime Minister and the government are responsible to the lower house of Parliament, the National Assembly. At the provincial level, the provincial legislature (Provincial Assembly) elects a governor, and the governor, with his government of up to 10 ministers, is in charge of the provincial executive. Some domains of government power are the exclusive preserve of the province, and some are held concurrently with the central government. This is not a federal state, however, but simply a decentralized one, as the majority of the domains of power are still vested in the central government. The governor is responsible to the Provincial Assembly. The semi-presidential system has been described by some as "conflictogenic" and "dictatogenic", as it invites friction and slows the pace of government should the President and the Prime Minister come from different sides of the political arena. This has been seen several times in France, a country that shares the semi-presidential model. It was also, arguably, the underlying cause of the crisis between Prime Minister Patrice Lumumba and President Joseph Kasa Vubu during the Congo's first steps into independence, which culminated in the two dismissing each other in 1960. In January 2015 the 2015 Congolese protests broke out in the country's capital following the release of a draft law that would extend the presidential term limits and allow Joseph Kabila to run again for office. The Inter-Congolese Dialogue, which set up the transitional institutions, created a bicameral parliament, with a National Assembly and a Senate made up of appointed representatives of the parties to the dialogue. These parties included the preceding government; the rebel groups that had been fighting against it, with heavy Rwandan and Ugandan support; the internal opposition parties; and civil society. From the beginning of the transition until recently, the National Assembly was headed by the MLC, with Speaker Hon. Olivier Kamitatu, while the Senate was headed by a representative of civil society, namely the head of the Church of Christ in Congo, Mgr. Pierre Marini Bodho. Hon. Kamitatu has since left both the MLC and the Parliament to create his own party and ally himself with current President Joseph Kabila. Since then, the position of Speaker has been held by Hon. Thomas Luhaka of the MLC. Aside from its regular legislative duties, the Senate was charged with drafting a new constitution for the country. That constitution was adopted by referendum in December 2005 and decreed into law on 18 February 2006. The Parliament of the Third Republic is also bicameral, with a National Assembly and a Senate.
Members of the National Assembly, the lower (but more powerful) house, are elected by direct suffrage. Senators are elected by the legislatures of the 26 provinces. The Congolese judicial branch consists of a Supreme Court, which handles federal crimes. Former administrative divisions: 10 provinces (singular: province) and one city* (ville): Bandundu, Bas-Congo, Équateur, Kasai-Occidental, Kasai-Oriental, Katanga, Kinshasa*, Maniema, North Kivu, Orientale. Each province is divided into districts. Current administrative divisions: 25 provinces (singular: province) and one city* (ville): Bas-Uele | Équateur | Haut-Lomami | Haut-Katanga | Haut-Uele | Ituri | Kasaï | Kasaï oriental | Kongo central | Kwango | Kwilu | Lomami | Lualaba | Lulua | Mai-Ndombe | Maniema | Mongala | North Kivu | Nord-Ubangi | Sankuru | South Kivu | Sud-Ubangi | Tanganyika | Tshopo | Tshuapa | Kinshasa*. International organization participation: ACCT, ACP, AfDB, AU, CEEAC, CEPGL, ECA, FAO, G-19, G-24, G-77, IAEA, IBRD, ICAO, ICC, ICRM, IDA, IFAD, IFC, IFRCS, IHO, ILO, IMF, UN, UNCTAD, UNESCO, UNHCR, UNIDO, UPU, WCO, WFTU, WHO, WIPO, WMO, WToO, WTrO
https://en.wikipedia.org/wiki?curid=8026
Telecommunications in the Democratic Republic of the Congo Telecommunications in the Democratic Republic of the Congo include radio, television, fixed and mobile telephones, and the Internet. Radio is the dominant medium; a handful of stations, including the state-run Radio-Télévision Nationale Congolaise (RTNC), broadcast across the country. The United Nations Mission (MONUSCO) and a Swiss-based NGO, Fondation Hirondelle, operate one of the country's leading stations, Radio Okapi. The network employs mostly Congolese staff and aims to bridge political divisions. Radio France Internationale (RFI), which is widely available on FM, is the most popular news station. The BBC broadcasts on FM in Kinshasa (92.7), Lubumbashi (92.0), Kisangani (92.0), Goma (93.3) and Bukavu (102.2).
https://en.wikipedia.org/wiki?curid=8027
Transport in the Democratic Republic of the Congo Ground transport in the Democratic Republic of the Congo (DRC) has always been difficult. The terrain and climate of the Congo Basin present serious barriers to road and rail construction, and the distances are enormous across this vast country. Furthermore, chronic economic mismanagement and internal conflict have led to serious under-investment over many years. On the other hand, the DRC has thousands of kilometres of navigable waterways, and traditionally water transport has been the dominant means of moving around approximately two-thirds of the country. As an illustration of transport difficulties in the DRC, even before wars damaged the infrastructure, the so-called "national" route, used to get supplies to Bukavu from the seaport of Matadi, consisted of the following: In other words, goods had to be loaded and unloaded eight times, and the total journey would take many months. Many of the routes listed below are in poor condition and may be operating at only a fraction of their original capacity (if at all), despite recent attempts to make improvements. Up to 2006 the United Nations Joint Logistics Centre (UNJLC) had an operation in the Congo to support humanitarian relief agencies working there, and its bulletins and maps about the transport situation are archived on ReliefWeb. The First and Second Congo Wars saw great destruction of transport infrastructure from which the country has not yet recovered. Many vehicles were destroyed or commandeered by militias, especially in the north and east of the country, and the fuel supply system was also badly affected. Consequently, outside Kinshasa, Matadi and Lubumbashi, private and commercial road transport is almost non-existent and traffic is scarce even where roads are in good condition. The few vehicles in use outside these cities are run by the United Nations, aid agencies, the DRC government, and a few larger companies such as those in the mining and energy sectors. High-resolution satellite photos on the Internet show large cities such as Bukavu, Butembo and Kikwit virtually devoid of traffic, compared to similar photos of towns in neighbouring countries. Air transport is the only effective means of moving between many places within the country. The Congolese government, the United Nations, aid organisations and large companies use air rather than ground transport to move personnel and freight. The UN operates a large fleet of aircraft and helicopters, and compared to other African countries the DRC has a large number of small domestic airlines and air charter companies. The transport (and smuggling) of minerals with a high value for weight is also carried out by air, and in the east, some stretches of paved road isolated by destroyed bridges or impassable sections have been turned into airstrips. For the ordinary citizen though, especially in rural areas, often the only options are to cycle, walk or go by dugout canoe. Some parts of the DRC are more accessible from neighbouring countries than from Kinshasa. For example, Bukavu, Goma and other north-eastern towns are linked by paved road, from the DRC border onwards, to the Kenyan port of Mombasa, and most goods for these cities have been brought via this route in recent years. Similarly, Lubumbashi and the rest of Katanga Province are linked to Zambia, through which the paved highway and rail networks of Southern Africa can be accessed.
Such links through neighbouring countries are generally more important for the east and south-east of the country, and more heavily used, than surface links to the capital. In 2007 China agreed to lend the DRC US$5bn for two major transport infrastructure projects: to link mineral-rich Katanga, specifically Lubumbashi, by rail to an ocean port (Matadi) and by road to the Kisangani river port, and to improve its links to the transport network of Southern Africa in Zambia. The two projects would also link the major parts of the country not served by water transport, and the main centres of the economy. Loan repayments will come from concessions for raw materials that China needs (copper, cobalt, gold and nickel) as well as from toll revenues on the road and railway. In the face of reluctance by the international business community to invest in the DRC, this represents a much-needed revitalisation of the DRC's infrastructure. The China Railway Seventh Group Co. Ltd will be in charge of the contract, under an agreement signed by the China Railway Engineering Corporation, with construction to start from June 2008. The Democratic Republic of the Congo has fewer all-weather paved highways than any country of its population and size in Africa: a total of 2250 km, of which only 1226 km is in good condition (see below). To put this in perspective, the road distance across the country in any direction is more than 2500 km (e.g. Matadi to Lubumbashi, 2700 km by road). The figure of 2250 km converts to 35 km of paved road per 1,000,000 of population (a short worked check of this conversion appears at the end of this article). Comparative figures for Zambia and Botswana are 721 km and 3427 km respectively. The road network is theoretically divided into four categories (national roads, priority regional roads, secondary regional roads and local roads); however, the United Nations Joint Logistics Centre (UNJLC) reports that this classification is of little practical use because some roads simply do not exist. For example, National Road 9 is not operational and cannot be detected by remote sensing methods. The two principal highways are: The total road network in 2005, according to the UNJLC, consisted of: The UNJLC also points out that the pre-Second Congo War network no longer exists, and that the network depends upon 20,000 bridges and 325 ferries, most of which are in need of repair or replacement. In contrast, a Democratic Republic of the Congo government document shows that, also in 2005, the network of main highways in good condition was as follows: The 2000 Michelin "Motoring and Tourist Map 955 of Southern and Central Africa", which categorizes roads as "surfaced", "improved" (generally unsurfaced but with gravel added and graded), "partially improved", "earth roads" and "tracks", shows that there were 2694 km of paved highway in 2000. These figures indicate that, compared to the more recent figures above, there has been a deterioration this decade rather than an improvement. Three routes in the Trans-African Highway network pass through DR Congo: The DRC has more navigable rivers and moves more passengers and goods by boat and ferry than any other country in Africa. Kinshasa, with 7 km of river frontage occupied by wharfs and jetties, is the largest inland waterways port on the continent. However, much of the infrastructure (vessels and port handling facilities) has, like the railways, suffered from poor maintenance and internal conflict. The total length of waterways is estimated at 16,238 km, including the Congo River, its tributaries, and unconnected lakes.
The 1000-kilometre Kinshasa-Kisangani route on the Congo River is the longest and best-known. It is operated by river tugs pushing several barges lashed together, and for the hundreds of passengers and traders these function like small floating towns. Rather than mooring at riverside communities along the route, traders come out by canoe and small boat alongside the river barges and transfer goods on the move. Most waterway routes do not operate to regular schedules. It is common for an operator to moor a barge at a riverside town and collect freight and passengers over a period of weeks before hiring a river tug to tow or push the barge to its destination. The middle Congo River and its tributaries from the east are the principal domestic waterways in the DRC. The two principal river routes are: See the diagrammatic transport map above for other river waterways. The most-used domestic lake waterways are: Most large Congo river ferry boats were destroyed during the civil war. Only smaller boats are running, and they are irregular. Pipelines: petroleum products, 390 km. Merchant marine: 1 petroleum tanker. Due to the lack of roads, operating railroads and ferry transport, many people travelling around the country fly. As of 2016 the country does not have an international passenger airline and relies on foreign-based airlines for international connections. Congo Airways provides domestic flights and is based at Kinshasa's N'djili Airport, which serves as the country's main international airport. Lubumbashi International Airport in the country's south-east is also serviced by several international airlines. Airports with paved runways: "total:" 24; "over 3,047 m:" 4; "2,438 to 3,047 m:" 2; "1,524 to 2,437 m:" 16; "914 to 1,523 m:" 2 (2002 est.). Airports with unpaved runways: "total:" 205; "1,524 to 2,437 m:" 19; "914 to 1,523 m:" 95; "under 914 m:" 91 (2002 est.). All air carriers certified by the Democratic Republic of the Congo have been banned from operating at airports in the European Community by the European Commission because of inadequate safety standards. The Democratic Republic of the Congo has a rocketry program called Troposphere.
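To make the paved-road density comparison earlier in this article concrete, here is a minimal worked check in Python. Note the assumptions: the DRC population of roughly 64 million is inferred from the article's own figures (2250 km of paved road at 35 km per million people implies 2250 / 35, or about 64.3 million) rather than stated in the source, and the Zambia and Botswana values are simply the per-million figures quoted above.

# Worked check of the paved-road density figures quoted in this article.
# Assumption: the DRC population is inferred from the article's own ratio
# (2250 km of paved road at 35 km per million people), not stated directly.

def paved_km_per_million(paved_km: float, population_millions: float) -> float:
    """Return kilometres of paved road per one million inhabitants."""
    return paved_km / population_millions

drc_paved_km = 2250             # total all-weather paved highway (from the article)
drc_population_m = 2250 / 35    # about 64.3 million, inferred from the stated ratio

print(f"DRC: {paved_km_per_million(drc_paved_km, drc_population_m):.0f} km per million")

# Per-million comparison figures quoted in the article:
for country, density in {"DRC": 35, "Zambia": 721, "Botswana": 3427}.items():
    print(f"{country}: {density} km of paved road per million inhabitants")

The check simply reproduces the article's stated ratio; the point of the comparison is that, per head of population, the DRC's paved-road figure is roughly one twentieth of Zambia's and one hundredth of Botswana's.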
https://en.wikipedia.org/wiki?curid=8028
Armed Forces of the Democratic Republic of the Congo The Armed Forces of the Democratic Republic of the Congo (French: "Forces armées de la République démocratique du Congo" [FARDC]) is the state organisation responsible for defending the Democratic Republic of the Congo. The FARDC was rebuilt patchily as part of the peace process which followed the end of the Second Congo War in July 2003. The majority of FARDC members are land forces, but it also has a small air force and an even smaller navy. In 2010–11 the three services may have numbered between 144,000 and 159,000 personnel. In addition, there is a presidential force called the Republican Guard, but it and the Congolese National Police (PNC) are not part of the Armed Forces. The government in the capital city Kinshasa, the United Nations, the European Union, and bilateral partners including Angola, South Africa, and Belgium are attempting to create a viable force with the ability to provide the Democratic Republic of Congo with stability and security. However, this process is being hampered by corruption, inadequate donor coordination, and competition between donors. The various military units now grouped under the FARDC banner are some of the most unstable in Africa after years of war and underfunding. To assist the new government, since February 2000 the United Nations has had the United Nations Mission in the Democratic Republic of Congo (now called MONUSCO) in place, which currently has a strength of over 16,000 peacekeepers in the country. Its principal tasks are to provide security in key areas, such as South Kivu and North Kivu in the east, and to assist the government in reconstruction. Foreign rebel groups are also in the Congo, as they have been for most of the last half-century. The most important is the Democratic Forces for the Liberation of Rwanda (FDLR), against which Laurent Nkunda's troops were fighting, but other smaller groups such as the anti-Ugandan Lord's Resistance Army are also present. The legal standing of the FARDC was laid down in the Transitional Constitution, articles 118 and 188. This was then superseded by provisions in the 2006 Constitution, articles 187 to 192. Law 04/023 of 12 November 2004 establishes the General Organisation of Defence and the Armed Forces. In mid-2010, the Congolese Parliament was debating a new defence law, provisionally designated Organic Law 130. The first organised Congolese troops, known as the "Force Publique", were created in 1888 when King Leopold II of Belgium, who held the Congo Free State as his private property, ordered his Secretary of the Interior to create military and police forces for the state. In 1908, under international pressure, Leopold ceded administration of the colony to the government of Belgium as the Belgian Congo. The force remained under the command of a Belgian officer corps through to the independence of the colony in 1960. The "Force Publique" saw combat in Cameroun, and successfully invaded and conquered areas of German East Africa, notably present-day Rwanda, during World War I. Elements of the "Force Publique" were also used to form Belgian colonial units that fought in the East African Campaign during World War II. At independence on 30 June 1960, the army suffered from a dramatic deficit of trained leaders, particularly in the officer corps, because the "Force Publique" had always been officered exclusively by Belgian or other expatriate whites.
The Belgian Government made no effort to train Congolese commissioned officers until the very end of the colonial period, and in 1958 only 23 African cadets had been admitted even to the military secondary school. The highest rank available to the Congolese was adjutant, which only four soldiers achieved before independence. Though 14 Congolese cadets were enrolled in the Royal Military Academy in Brussels in May 1960, they were not scheduled to graduate as second lieutenants until 1963. Ill-advised actions by Belgian officers led to an enlisted ranks' rebellion on 5 July 1960, which helped spark the Congo Crisis. Lieutenant General Émile Janssens, the "Force Publique" commander, wrote during a meeting of soldiers that 'Before independence = After independence', pouring cold water on the soldiers' desires for an immediate raise in their status. Vanderstraeten says that on the morning of 8 July 1960, following a night during which all control had been lost over the soldiers, numerous ministers arrived at Camp Leopold with the aim of calming the situation. Both Prime Minister Patrice Lumumba and President Joseph Kasa-Vubu eventually arrived, and the soldiers listened to Kasa-Vubu "religiously." After his speech, Kasa-Vubu and the ministers present retired into the camp canteen to hear a delegation from the soldiers. Vanderstraeten says that, according to Joseph Ileo, their demands ("revendications") included the following: The "laborious" discussions which then followed were later retrospectively given the label of an "extraordinary ministerial council." Gérald-Libois writes that '...the special meeting of the council of ministers took steps for the immediate Africanisation of the officer corps and named Victor Lundula, who was born in Kasai and was burgomaster of Jadotville, as Commander-in-Chief of the ANC; Colonel Joseph-Désiré Mobutu as chief of staff; and the Belgian, Colonel Henniquiau, as chief advisor to the ANC.' Thus General Janssens was dismissed. Both Lundula and Mobutu were former sergeants of the "Force Publique". On 8–9 July 1960, the soldiers were invited to appoint black officers, and 'command of the army passed securely into the hands of former sergeants,' as the soldiers in general chose the most-educated and highest-ranked Congolese army soldiers as their new officers. Most of the Belgian officers were retained as advisors to the new Congolese hierarchy, and calm returned to the two main garrisons at Leopoldville and Thysville. The "Force Publique" was renamed the "Armée nationale congolaise" (ANC), or Congolese National Army. However, in Katanga Belgian officers resisted the Africanisation of the army. On 9 July 1960, there was a "Force Publique" mutiny at Camp Massart in Elizabethville; five or seven Europeans were killed. The army revolt and the resulting rumours caused severe panic across the country, and Belgium despatched troops and the naval Task Group 218.2 to protect its citizens. Belgian troops intervened in Elisabethville and Luluabourg (10 July), Matadi (11 July), Leopoldville (13 July) and elsewhere. There were immediate suspicions that Belgium planned to re-seize the country while doing so. Large numbers of Belgian colonists fled the country. At the same time, on 11 July, Moise Tshombe declared the independence of Katanga Province in the south-east, closely backed by the remaining Belgian administrators and soldiers. On 14 July 1960, in response to requests by Prime Minister Lumumba, the UN Security Council adopted United Nations Security Council Resolution 143.
This called upon Belgium to remove its troops and for the UN to provide 'military assistance' to the Congolese forces to allow them 'to meet fully their tasks'. Lumumba demanded that Belgium remove its troops immediately, threatening to seek help from the Soviet Union if they did not leave within two days. The UN reacted quickly and established the United Nations Operation in the Congo (ONUC). The first UN troops arrived the next day, but there was instant disagreement between Lumumba and the UN over the new force's mandate. Because the Congolese army had been in disarray since the mutiny, Lumumba wanted to use the UN troops to subdue Katanga by force. Lumumba became extremely frustrated with the UN's unwillingness to use force against Tshombe and his secession. He cancelled a scheduled meeting with Secretary General Hammarskjöld on 14 August and wrote a series of angry letters instead. To Hammarskjöld, the secession of Katanga was an internal Congolese matter, and the UN was forbidden to intervene by Article 2 of the United Nations Charter. Disagreements over what the UN force could and could not do continued throughout its deployment. By 20 July 1960, 3,500 troops for ONUC had arrived in the Congo. The first contingent of Belgian forces had left Leopoldville on 16 July upon the arrival of the United Nations troops. Following assurances that contingents of the Force would arrive in sufficient numbers, the Belgian authorities agreed to withdraw all their forces from the Leopoldville area by 23 July. The last Belgian troops left the country by 23 July, as United Nations forces continued to deploy throughout the Congo. The build-up of ONUC continued, its strength increasing to over 8,000 by 25 July and to over 11,000 by 31 July 1960. A basic agreement between the United Nations and the Congolese Government on the operation of the Force was reached by 27 July. On 9 August, Albert Kalonji proclaimed the independence of South Kasai. During the crucial period of July–August 1960, Mobutu built up "his" national army by channeling foreign aid to units loyal to him, by exiling unreliable units to remote areas, and by absorbing or dispersing rival armies. He tied individual officers to him by controlling their promotion and the flow of money for payrolls. Researchers working from the 1990s onward have concluded that money was directly funnelled to the army by the U.S. Central Intelligence Agency, the UN, and Belgium. Despite this, by September 1960, following the four-way division of the country, there were four separate armed forces: Mobutu's ANC itself, numbering about 12,000; the South Kasai Constabulary loyal to Albert Kalonji (3,000 or fewer); the Katanga Gendarmerie, part of Moise Tshombe's regime (totalling about 10,000); and the Stanleyville dissident ANC loyal to Antoine Gizenga (numbering about 8,000). In August 1960, after the rejection of its requests to the UN for aid to suppress the South Kasai and Katanga revolts, Lumumba's government decided to request Soviet help. De Witte writes that 'Leopoldville asked the Soviet Union for planes, lorries, arms, and equipment. ... Shortly afterwards, on 22 or 23 August, about 1,000 soldiers left for Kasai.' De Witte goes on to write that on 26–27 August, the ANC seized Bakwanga, Albert Kalonji's capital in South Kasai, without serious resistance. "In the next two days it temporarily put an end to the secession of Kasai."
The Library of Congress Country Study for the Congo says at this point that: "[On 5 September 1960] Kasavubu also appointed Mobutu as head of the ANC. Joseph Ileo was chosen as the new prime minister and began trying to form a new government. Lumumba and his cabinet responded by accusing Kasa-Vubu of high treason and voted to dismiss him. Parliament refused to confirm the dismissal of either Lumumba or Kasavubu and sought to bring about a reconciliation between them. After a week's deadlock, Mobutu announced on 14 September that he was assuming power until 31 December 1960, in order to "neutralize" both Kasavubu and Lumumba." Mobutu formed the College of Commissioners-General, a technocratic government of university graduates. In early January 1961, ANC units loyal to Lumumba invaded northern Katanga to support a revolt of Baluba tribesmen against Tshombe's secessionist regime. On 23 January 1961, Kasa-Vubu promoted Mobutu to major-general; De Witte argues that this was a political move "aimed to strengthen the army, the president's sole support, and Mobutu's position within the army." United Nations Security Council Resolution 161 of 21 February 1961 called for the withdrawal of Belgian officers from command positions in the ANC, and for the training of new Congolese officers with UN help. ONUC made a number of attempts to retrain the ANC from August 1960 to June 1963, but was often set back by political changes. By March 1963, however, after the visit of Colonel Michael Greene of the United States Army and the resulting "Greene Plan", a pattern of bilaterally agreed military assistance to the various Congolese military components, rather than a single unified effort, was already taking shape. In early 1964, a new crisis broke out as Congolese rebels calling themselves "Simba" (Swahili for "Lion") rebelled against the government. They were led by Pierre Mulele, Gaston Soumialot and Christophe Gbenye, who were former members of Gizenga's Parti Solidaire Africain (PSA). The rebellion affected Kivu and Eastern (Orientale) provinces. By August the rebels had captured Stanleyville and set up a rebel government there. As the rebel movement spread, discipline became more difficult to maintain, and acts of violence and terror increased. Thousands of Congolese were executed, including government officials, political leaders of opposition parties, provincial and local police, school teachers, and others believed to have been Westernised. Many of the executions were carried out with extreme cruelty, in front of a monument to Lumumba in Stanleyville. Tshombe decided to use foreign mercenaries as well as the ANC to suppress the rebellion. Mike Hoare was employed to create the English-speaking 5 Commando ANC at Kamina, with the assistance of a Belgian officer, Colonel Frederic Vanderwalle, while 6 Commando ANC was French-speaking and originally under the command of a Belgian Army colonel, Lamouline. By August 1964, the mercenaries, with the assistance of other ANC troops, were making headway against the Simba rebellion. Fearing defeat, the rebels started taking hostages from the local white population in areas under their control. These hostages were rescued in Belgian parachute drops on Stanleyville and Paulis (Operations Dragon Rouge and Dragon Noir), airlifted by U.S. aircraft. The operation coincided with the arrival of mercenary units (seemingly including the hurriedly formed 5th Mechanised Brigade) at Stanleyville, which was quickly captured.
It took until the end of the year to completely put down the remaining areas of rebellion. After five years of turbulence, in 1965 Mobutu used his position as ANC Chief of Staff to seize power in the 1965 Democratic Republic of the Congo coup d'état. Although Mobutu succeeded in taking power, his position was soon threatened by the Stanleyville mutinies, also known as the Mercenaries' Mutinies, which were eventually suppressed. As a general rule, since that time the armed forces have not intervened in politics as a body; rather, they have been tossed and turned as ambitious men have shaken the country. In reality, the larger problem has been the misuse and sometimes abuse of the military and police by political and ethnic leaders. On 16 May 1968 a parachute brigade of two regiments (each of three battalions) was formed, which was eventually to grow into a full division. The country was renamed Zaire in 1971 and the army was consequently designated the "Forces Armées Zaïroises" (FAZ). In 1971 the army consisted of the 1st Groupement at Kananga, with one guard battalion, two infantry battalions, and a gendarmerie battalion attached, and the 2nd Groupement (Kinshasa), the 3rd Groupement (Kisangani), the 4th Groupement (Lubumbashi), the 5th Groupement (Bukavu), the 6th Groupement (Mbandaka), and the 7th Groupement (Boma). Each was about the size of a brigade and was commanded by an aging general who had had no military training, and often not much positive experience, since his days as an NCO in the Belgian Force Publique. By the late 1970s the number of groupements had reached nine, one per administrative region. The parachute division (Division des Troupes Aéroportées Renforcées de Choc, DITRAC) operated semi-independently from the rest of the army. In July 1972 a number of the aging generals commanding the "groupements" were retired. Général d'armée Louis Bobozo; Generaux de Corps d'Armée Nyamaseko Mata Bokongo, Nzoigba Yeu Ngoli, Muke Massaku, Ingila Grima, Itambo Kambala Wa Mukina, and Tshinyama Mpemba; and General de Division Yossa Yi Ayira, the last having been commander of the Kamina base, were all retired on 25 July 1972. Taking over as military commander-in-chief, now titled Captain General, was the newly promoted General de Division Bumba Moaso, former commander of the parachute division. A large number of countries supported the FAZ in the early 1970s. Three hundred Belgian personnel were serving as staff officers and advisors throughout the Ministry of Defence, Italians were supporting the Air Force, Americans were assisting with transport and communications, Israelis were providing airborne forces training, and there were British advisors with the engineers. In 1972 the state-sponsored political organization, the Mouvement Populaire de la Révolution (MPR), resolved at a party congress to form activist cells in each military unit. The decision caused consternation among the officer corps, as the army had been apolitical (and even anti-political) since before independence. On 11 June 1975 several military officers were arrested in what became known as the "coup monté et manqué." Amongst those arrested were Générals Daniel Katsuva wa Katsuvira, Land Forces Chief of Staff; Utshudi Wembolenga, Commandant of the 2nd Military Region at Kalemie; and Fallu Sumbu, Military Attaché of Zaïre in Washington; as well as Colonel Mudiayi wa Mudiayi, the military attaché of Zaïre in Paris; the military attaché in Brussels; a paracommando battalion commander; and several others.
The regime alleged that these officers and others (including Mobutu's "secrétaire particulier") had plotted the assassination of Mobutu, high treason, and the disclosure of military secrets, among other offences. The alleged coup was investigated by a revolutionary commission headed by Boyenge Mosambay Singa, at that time head of the Gendarmerie. Writing in 1988, Michael Schatzberg said the full details of the coup had yet to emerge. Meitho, writing many years later, says the officers were accused of trying to raise Mobutu's "secrétaire particulier", Colonel Omba Pene Djunga, from Kasai, to power. In late 1975, Mobutu, in a bid to install a pro-Kinshasa government in Angola and thwart the Marxist Popular Movement for the Liberation of Angola (MPLA)'s drive for power, deployed FAZ armoured cars, paratroopers, and three infantry battalions to Angola in support of the National Liberation Front of Angola (FNLA). At dawn on 10 November 1975, an anti-Communist force made up of 1,500 FNLA fighters, 100 Portuguese Angolan soldiers, and two FAZ battalions passed near the city of Quifangondo, north of Luanda. The force, supported by South African aircraft and three 140 mm artillery pieces, marched in a single line along the Bengo River to face an 800-strong Cuban force across the river. Thus the Battle of Quifangondo began. The Cubans and MPLA fighters bombarded the FNLA with mortars and 122 mm rockets, destroying most of the FNLA's armoured cars and six Jeeps carrying antitank rockets in the first hour of fighting. Mobutu's policy of support for the FNLA backfired when the MPLA won in Angola. The MPLA, acting ostensibly at least as the Front for Congolese National Liberation (FLNC), then occupied Zaire's southeastern Katanga Province, then known as Shaba, in March 1977, facing little resistance from the FAZ. This invasion is sometimes known as Shaba I. Mobutu had to request assistance, which was provided by Morocco in the form of regular troops who routed the FLNC and their Cuban advisors out of Katanga. Also important were Egyptian pilots who flew Zaire's Mirage 5 combat aircraft. The humiliation of this episode led to civil unrest in Zaire in early 1978, which the FAZ had to put down. The poor performance of Zaire's military during Shaba I gave evidence of chronic weaknesses. One problem was that some of the Zairian soldiers in the area had not received pay for extended periods. Senior officers often kept the money intended for the soldiers, typifying a generally disreputable and inept senior leadership in the FAZ. As a result, many soldiers simply deserted rather than fight. Others stayed with their units but were ineffective. During the months following the Shaba invasion, Mobutu sought solutions to the military problems that had contributed to the army's dismal performance. He implemented sweeping reforms of the command structure, including wholesale firings of high-ranking officers. He merged the military general staff with his own presidential staff and appointed himself chief of staff again, in addition to the positions of minister of defence and supreme commander that he already held. He also redeployed his forces throughout the country instead of keeping them close to Kinshasa, as had previously been the case. The Kamanyola Division, at the time considered the army's best formation and regarded as the president's own, was assigned permanently to Shaba. In addition to these changes, the army's strength was reduced by 25 percent.
Also, Zaire's allies provided a large influx of military equipment, and Belgian, French, and American advisers assisted in rebuilding and retraining the force. Despite these improvements, a second invasion by the former Katangan gendarmerie, known as Shaba II, in May–June 1978 was dispersed only with the despatch of the French 2nd Foreign Parachute Regiment and a battalion of the Belgian Paracommando Regiment. Kamanyola Division units collapsed almost immediately. French units fought the Battle of Kolwezi to recapture the town from the FLNC. The U.S. provided logistical assistance. In July 1975, according to the IISS Military Balance, the FAZ was made up of 14 infantry battalions, seven "Guard" battalions, and seven other infantry battalions variously designated as "parachute" (or possibly "commando"; probably the units of the new parachute brigade originally formed in 1968). There were also an armoured car regiment and a mechanised infantry battalion. Organisationally, the army was made up of seven brigade groups and one parachute division. In addition to these units, a tank battalion was reported to have been formed by 1979. In January 1979 "Général de Division" Mosambaye Singa Boyenge was named as both military region commander and Region Commissioner for Shaba. In 1984 a militarised police force, the Civil Guard, was formed. It was eventually commanded by Général d'armée Kpama Baramoto Kata. Thomas Turner wrote in the late 1990s that "[m]ajor acts of violence, such as the killings that followed the "Kasongo uprising" in Bandundu Region in 1978, the killings of diamond miners in Kasai-Oriental Region in 1979, and, more recently, the massacre of students in Lubumbashi in 1990, continued to intimidate the population." The authors of the Library of Congress Country Study on Zaire commented in 1992–93 that: "The maintenance status of equipment in the inventory has traditionally varied, depending on a unit's priority and the presence or absence of foreign advisers and technicians. A considerable portion of military equipment is not operational, primarily as a result of shortages of spare parts, poor maintenance, and theft. For example, the tanks of the 1st Armoured Brigade often have a nonoperational rate approaching 70 to 80 percent. After a visit by a Chinese technical team in 1985, most of the tanks operated, but such an improved status generally has not lasted long beyond the departure of the visiting team. Several factors complicate maintenance in Zairian units. Maintenance personnel often lack the training necessary to maintain modern military equipment. Moreover, the wide variety of military equipment and the staggering array of spare parts necessary to maintain it not only clog the logistic network but also are expensive. The most important factor that negatively affects maintenance is the low and irregular pay that soldiers receive, resulting in the theft and sale of spare parts and even basic equipment to supplement their meager salaries. When not stealing spare parts and equipment, maintenance personnel often spend the better part of their duty day looking for other ways to profit. American maintenance teams working in Zaire found that providing a free lunch to the work force was a good, sometimes the only, technique to motivate personnel to work at least half of the duty day. The army's logistics corps [was tasked] to provide logistic support and conduct direct, indirect, and depot-level maintenance for the FAZ.
But because of Zaire's lack of emphasis on maintenance and logistics, a lack of funding, and inadequate training, the corps is understaffed, underequipped, and generally unable to accomplish its mission. It is organised into three battalions assigned to Mbandaka, Kisangani, and Kamina, but only the battalion at Kamina is adequately staffed; the others are little more than skeleton units." The poor state of discipline of the Congolese forces became apparent again in 1990. Foreign military assistance to Zaire ceased following the end of the Cold War, and Mobutu deliberately allowed the military's condition to deteriorate so that it did not threaten his hold on power. Protesting low wages and lack of pay, paratroopers began looting Kinshasa in September 1991 and were only stopped after intervention by French ("Operation Baumier") and Belgian ("Operation Blue Beam") forces. In 1993, according to the Library of Congress Country Studies, the 25,000-member FAZ ground forces consisted of one infantry division (with three infantry brigades); one airborne brigade (with three parachute battalions and one support battalion); one special forces (commando/counterinsurgency) brigade; the Special Presidential Division; one independent armoured brigade; and two independent infantry brigades (each with three infantry battalions and one support battalion). These units were deployed throughout the country, with the main concentrations in Shaba Region (approximately half the force). The Kamanyola Division, consisting of three infantry brigades, operated generally in western Shaba Region; the 21st Infantry Brigade was located in Lubumbashi; the 13th Infantry Brigade was deployed throughout eastern Shaba; and at least one battalion of the 31st Airborne Brigade stayed at Kamina. The other main concentration of forces was in and around Kinshasa: the 31st Airborne Brigade was deployed at N'djili Airport on the outskirts of the capital; the Special Presidential Division (DSP) resided adjacent to the presidential compound; and the 1st Armoured Brigade was at Mbanza-Ngungu (in Bas-Congo, southwest of Kinshasa). Finally, the 41st Commando Brigade was at Kisangani. This superficially impressive list of units overstates the actual capability of the armed forces at the time. Apart from privileged formations such as the Presidential Division and the 31st Airborne Brigade, most units were poorly trained, divided, and so badly paid that they regularly resorted to looting. What operational ability the armed forces had was gradually destroyed by politicisation, tribalisation, and division of the forces, including purges of groups suspected of disloyalty, intended to allow Mobutu to divide and rule. All this occurred against the background of the increasing deterioration of state structures under the kleptocratic Mobutu regime. Much of the origin of the recent conflict in what is now the Democratic Republic of the Congo stems from the turmoil following the Rwandan genocide of 1994, which led to the Great Lakes refugee crisis. Within the largest refugee camps, beginning in Goma in Nord-Kivu, were Rwandan Hutu fighters, eventually organised into the Rassemblement Démocratique pour le Rwanda, which launched repeated attacks into Rwanda. Rwanda eventually backed Laurent-Désiré Kabila and his quickly organised Alliance of Democratic Forces for the Liberation of Congo (AFDL) in invading Zaire, aiming to stop the attacks on Rwanda in the process of toppling Mobutu's government.
When the militias rebelled, backed by Rwanda, the FAZ, weakened as noted above, proved incapable of mastering the situation and preventing the overthrow of Mobutu in 1997. Elements of the Mobutu-loyal FAZ managed to retreat into northern Congo, and from there into Sudan, while attempting to escape the AFDL. Allying themselves with the Sudanese government, which was fighting its own civil war at the time, these FAZ troops were destroyed by the Sudan People's Liberation Army during Operation Thunderbolt near Yei in March 1997. When Kabila took power in 1997, the country was renamed the Democratic Republic of the Congo and the name of the national army changed once again, to the "Forces armées congolaises" (FAC). Tanzania sent six hundred military advisors to train Kabila's new army in May 1997. (Prunier says that the instructors were still at the Kitona base when the Second Congo War broke out and had to be quickly returned to Tanzania: "South African aircraft carried out the evacuation after a personal conversation between President Mkapa and not-yet-president Thabo Mbeki.") Command over the armed forces in the first few months of Kabila's rule was vague. Gérard Prunier writes that "there was no minister of defence, no known chief of staff, and no ranks; all officers were Cuban-style 'commanders' called 'Ignace', 'Bosco', 'Jonathan', or 'James', who occupied connecting suites at the Intercontinental Hotel and had presidential list cell-phone numbers. None spoke French or Lingala, but all spoke Kinyarwanda, Swahili, and, quite often, English." On being asked by the Belgian journalist Colette Braeckman what the actual army command structure was apart from himself, Kabila answered: 'We are not going to expose ourselves and risk being destroyed by showing ourselves openly... . We are careful so that the true masters of the army are not known. It is strategic. Please, let us drop the matter.' Kabila's new "Forces armées congolaises" were riven with internal tensions. The new FAC had Banyamulenge fighters from South Kivu; "kadogo" child soldiers from various eastern tribes, such as Thierry Nindaga and Safari Rwekoze; the mostly Lunda Katangese Tigers of the former FLNC; and former FAZ personnel. Mixing these disparate and formerly warring elements together led to mutiny. On 23 February 1998, a mostly Banyamulenge unit mutinied at Bukavu after its officers tried to disperse the soldiers into different units spread all around the Congo. By mid-1998, formations on the outbreak of the Second Congo War included the Tanzanian-supported 50th Brigade, headquartered at Camp Kokolo in Kinshasa, and the 10th Brigade – one of the best and largest units in the army – stationed in Goma, as well as the 12th Brigade in Bukavu. The declaration by the 10th Brigade's commander, former DSP officer Jean-Pierre Ondekane, on 2 August 1998 that he no longer recognised Kabila as the state's president was one of the factors in the beginning of the Second Congo War. The FAC performed poorly throughout the Second Congo War and "demonstrated little skill or recognisable military doctrine". At the outbreak of the war in 1998 the Army was ineffective, and the DRC Government was forced to rely on assistance from Angola, Chad, Namibia and Zimbabwe. As well as providing expeditionary forces, these countries unsuccessfully attempted to retrain the DRC Army. North Korea and Tanzania also provided assistance with training.
During the first year of the war the allied forces defeated the Rwandan force which had landed in Bas-Congo and the rebel forces south-west of Kinshasa, and eventually halted the rebel and Rwandan offensive in the east of the DRC. These successes contributed to the Lusaka Ceasefire Agreement, which was signed in July 1999. Following the Lusaka Agreement, in mid-August 1999 President Kabila issued a decree dividing the country into eight military regions. The first military region, Congolese state television reported, would consist of the two Kivu provinces; Orientale Province would form the second region, and Maniema and Kasai-Oriental provinces the third. Katanga and Équateur would fall under the fourth and fifth regions, respectively, while Kasai-Occidental and Bandundu would form the sixth region. Kinshasa and Bas-Congo would form the seventh and eighth regions, respectively. In November 1999 the Government attempted to form a 20,000-strong paramilitary force designated the People's Defence Forces. This force was intended to support the FAC and national police but never became effective. The Lusaka Ceasefire Agreement was not successful in ending the war, and fighting resumed in September 1999. The FAC's performance continued to be poor, and both of the major offensives the Government launched in 2000 ended in costly defeats. President Kabila's mismanagement was an important factor behind the FAC's poor performance, with soldiers frequently going unpaid and unfed while the Government purchased advanced weaponry which could not be operated or maintained. The defeats in 2000 are believed to have been the cause of President Kabila's assassination in January 2001. Following the assassination, Joseph Kabila assumed the presidency and was eventually successful in negotiating an end to the war in 2002–2003. The December 2002 Global and All-Inclusive Agreement devoted Chapter VII to the armed forces. It stipulated that the armed forces chief of staff and the chiefs of the army, air force, and navy were not to come from the same warring faction. The new "national, restructured and integrated" army would be made up from Kabila's government forces (the FAC), the RCD, and the MLC. Also stipulated in VII(b) was that the RCD-N, the RCD-ML, and the Mai-Mai would become part of the new armed forces. An intermediate mechanism for the physical identification of soldiers, recording their origin, date of enrolment, and unit, was also called for (VII(c)). The agreement also provided for the creation of a Conseil Superieur de la Defense (Superior Defence Council), which would declare states of siege or war and give advice on security sector reform, disarmament/demobilization, and national defence policy. A decision on which factions were to name the chiefs of staff and military regional commanders was announced on 19 August 2003 as the first move in military reform, superimposed on top of the various groups of government and former rebel fighters. Kabila was able to name the armed forces chief of staff, Lieutenant General Liwanga Mata, who had previously served as navy chief of staff under Laurent Kabila. Kabila also named the air force commander (John Numbi), while the RCD-Goma received the Land Forces commander's position (Sylvain Buki) and the MLC the navy (Dieudonne Amuli Bahigwa). Three military regional commanders were nominated by the former Kinshasa government, two commanders each by the RCD-Goma and the MLC, and one region commander each by the RCD-K/ML and the RCD-N.
However, these appointments were announced for Kabila's "Forces armées congolaises" (FAC), not the later FARDC. Another report, however, says that the military region commanders were only nominated in January 2004, and that troop deployment on the ground did not change substantially until the following year. On 24 January 2004, a decree created the "Structure Militaire d'Intégration" (SMI, Military Integration Structure). Together with the SMI, the National Commission for Demobilisation and Reinsertion (CONADER) was designated to manage the combined "tronc commun" DDR element and the military reform programme. The first post-Sun City military law appears to have been passed on 12 November 2004; it formally created the new national Forces Armées de la République Démocratique du Congo (FARDC). Included in this law was Article 45, which recognised the incorporation of a number of armed groups into the FARDC, including the former government army, the Forces Armées Congolaises (FAC); ex-FAZ personnel, also known as former President Mobutu's 'les tigres'; the RCD-Goma; the RCD-ML; the RCD-N; the MLC; the Mai-Mai; and other government-determined military and paramilitary groups. Turner writes that the two most prominent opponents of military integration ("brassage") were Colonel Jules Mutebusi, a Munyamulenge from South Kivu, and Laurent Nkunda, a Rwandaphone Tutsi who, Turner says, was allegedly from Rutshuru in North Kivu. In May–June 2004 Mutebusi led a revolt against his superiors from Kinshasa in South Kivu. Nkunda began his long series of revolts against central authority by helping Mutebusi in May–June 2004. In November 2004 a Rwandan government force entered North Kivu to attack the FDLR and, it seems, reinforced and resupplied the RCD-Goma (ANC) at the same time. Kabila despatched 10,000 government troops to the east in response, launching an attack that was called "Operation Bima". In the midst of this tension, Nkunda's men launched attacks in North Kivu in December 2004. There was another major personnel reshuffle on 12 June 2007. FARDC chief General Kisempia Sungilanga Lombe was replaced with General Dieudonne Kayembe Mbandankulu. General Gabriel Amisi Kumba retained his post as Land Forces commander. John Numbi, a trusted member of Kabila's inner circle, was shifted from air force commander to Police Inspector General. U.S. diplomats reported that the former Naval Forces Commander Maj. General Amuli Bahigua (ex-MLC) became the FARDC's Chief of Operations; former FARDC Intelligence Chief General Didier Etumba (ex-FAC) was promoted to Vice Admiral and appointed Commander of Naval Forces; Maj. General Rigobert Massamba (ex-FAC), a former commander of the Kitona air base, was appointed as Air Forces Commander; and Brig. General Jean-Claude Kifwa, commander of the Republican Guard, was appointed as a regional military commander. Due to significant delays in the DDR and integration process, only seventeen of the eighteen brigades had been declared operational over two and a half years after the initial target date. Responding to the situation, the Congolese Minister of Defence presented a new defence reform master plan to the international community in February 2008. Essentially, the three force tiers all had their readiness dates pushed back: the first, the territorial forces, to 2008–12; the mobile force to 2008–10; and the main defence force to 2015. Much of the east of the country remains insecure, however. In the far northeast this is due primarily to the Ituri conflict.
In the area around Lake Kivu, primarily in North Kivu, fighting continues involving the Democratic Forces for the Liberation of Rwanda and between the government FARDC and Laurent Nkunda's troops, with all groups greatly exacerbating the problems of internal refugees in the area of Goma, the consequent food shortages, and the loss of infrastructure from the years of conflict. In 2009, several United Nations officials stated that the army is a major problem, largely due to corruption that results in food and pay meant for soldiers being diverted, and a military structure top-heavy with colonels, many of whom are former warlords. In a 2009 report itemizing FARDC abuses, Human Rights Watch urged the UN to stop supporting government offensives against eastern rebels until the abuses ceased. Caty Clement wrote in 2009: "One of the most notable [FARDC corruption] schemes was known as 'Opération Retour' (Operation Return). Senior officers ordered the soldiers' pay to be sent from Kinshasa to the commanders in the field, who took their cut and returned the remainder to their commander in Kinshasa instead of paying the soldiers. To ensure that foot soldiers would be paid their due, in late 2005, EUSEC suggested separating the chain of command from the chain of payment. The former remained within Congolese hands, while the EU mission delivered salaries directly to the newly 'integrated' brigades. Although efficient in the short term, this solution raises the question of sustainability and ownership in the long term. Once soldiers' pay could no longer be siphoned off via 'Opération Retour', however, two other budgetary lines, the 'fonds de ménage' and logistical support to the brigades, were soon diverted." In 2010, thirty FARDC officers were given scholarships to study in Russian military academies, part of a greater effort by Russia to help improve the FARDC. A new military attaché and other advisers from Russia visited the DRC. On 22 November 2012, Gabriel Amisi Kumba was suspended from his position in the Forces Terrestres by President Joseph Kabila due to an inquiry into his alleged role in the sale of arms to various rebel groups in the eastern part of the country, possibly including the rebel group M23. In December 2012 it was reported that members of Army units in the north-east of the country were often not paid due to corruption, and that these units rarely took action against raids on villages by the Lord's Resistance Army. The FARDC deployed 850 soldiers and 150 PNC police officers as part of an international force in the Central African Republic, which the DRC borders to the north. That country had been in a state of civil war since 2012, when the president was ousted by rebel groups. The DRC was urged by French President François Hollande to keep its troops in the CAR. In July 2014, the Congolese army carried out a joint operation with UN troops in the Masisi and Walikale territories of North Kivu province. In the process, they liberated over 20 villages and a mine from the control of two rebel groups, the Mai Mai Cheka and the Alliance for the Sovereign and Patriotic Congo. In October 2017 the UN published a report announcing that the FARDC no longer employed child soldiers, but it was still listed among militaries that committed sexual violations against children. Troops operating with MONUSCO in North Kivu were attacked, probably by rebels from the Allied Democratic Forces, on 8 December 2017. After a protracted firefight, the FARDC troops suffered five dead, along with 14 dead among the UN force.
President Félix Tshisekedi is the Commander-in-Chief of the Armed Forces. The Minister of Defence, formally the Minister of Defence and Veterans (Anciens Combattants), is Crispin Atama Tabe. The Colonel Tshatshi Military Camp in the Kinshasa suburb of Ngaliema hosts the defence department and the Chiefs of Staff central command headquarters of the FARDC. Jane's data from 2002 appears inaccurate; there is at least one ammunition plant in Katanga. Below the Chief of Staff, the current organisation of the FARDC is not fully clear. There is known to be a Military Intelligence branch, the Service du Renseignement Militaire (SRM), the former DEMIAP. The FARDC is known to be broken up into the Land Forces ("Forces Terrestres"), the Navy and the Air Force. The Land Forces are distributed around ten military regions, up from the previous eight, following the ten provinces of the country. There is also a training command, the Groupement des Écoles Supérieures Militaires (GESM) or Group of Higher Military Schools, which, in January 2010, was under the command of Major General Marcellin Lukama. The Navy and Air Force are composed of various "groupements" (see below). There is also a central logistics base. Joseph Kabila reportedly does not trust the military; the Republican Guard is the only component he trusts. Major General John Numbi, former Air Force chief, now inspector general of police, ran a parallel chain of command in the east to direct the 2009 Eastern Congo offensive, Operation Umoja Wetu; the regular chain of command was by-passed. Numbi had previously negotiated the agreement to carry out the "mixage" process with Laurent Nkunda. Commenting on a proposed vote of no confidence in the Minister of Defence in September 2012, Baoudin Amba Wetshi of "lecongolais.cd" described Ntolo as a "scapegoat". Wetshi said that all key military and security questions were handled in total secrecy by the President and other civil and military personalities trusted by him, such as John Numbi, Gabriel Amisi Kumba ('Tango Four'), Delphin Kahimbi, and others such as Kalev Mutond and Pierre Lumbi Okongo. The General Secretariat for Defense is headed by a general officer (the Secretary General for Defense), who oversees several departments. Military Justice is an independent institution under the judiciary, responsible for upholding the law and strengthening order and discipline within the Armed Forces. There is also a General Inspectorate. The available information on the armed forces' Chiefs of Staff is incomplete and sometimes contradictory. In addition to armed forces chiefs of staff, in 1966 Lieutenant Colonel Ferdinand Malila was listed as Army Chief of Staff. Virtually all officers have now changed positions, but this list gives an outline of the structure in January 2005. Despite the planned subdivision of the country into more numerous provinces, the actual splitting of the former provinces has not taken place. In September 2014, President Kabila reshuffled the command structure and, in addition to the military regions, created three new 'defense zones' subordinated directly to the general staff. The defense zones essentially created a new layer between the general staff and the provincial commanders. The military regions themselves were reorganised and do not correspond with the ones that existed prior to the reshuffle.
New commanders of branches were also appointed. A Congolese military analyst based in Brussels, Jean-Jacques Wondo, provided an outline of the updated command structure of the FARDC, including the regional commanders, following the shake-up of the high command. Further changes were announced in July 2018. The land forces are made up of about 14 integrated brigades of fighters from all the former warring factions, who have gone through a "brassage" integration process (see next paragraph), and a number, not publicly known, of non-integrated brigades that remain solely made up of single factions (the Congolese Rally for Democracy (RCD)'s "Armée nationale congolaise", the ex-government former Congolese Armed Forces (FAC), the ex-RCD-K/ML, the ex-Movement for the Liberation of Congo, the armed groups of the Ituri conflict (the Mouvement des Révolutionnaires Congolais (MRC), the Forces de Résistance Patriotique d'Ituri (FRPI), and the Front Nationaliste Intégrationniste (FNI)), and the Mai-Mai). It appears that at about the same time that Presidential Decree 03/042 of 18 December 2003 established the National Commission for Demobilisation and Reinsertion (CONADER), "... all ex-combatants were officially declared as FARDC soldiers and the then FARDC brigades [were to remain] deployed until the order to leave for 'brassage'." The reform plan adopted in 2005 envisaged the formation of eighteen integrated brigades through the "brassage" process as the first of its three stages. The process consists firstly of regroupment, where fighters are disarmed. They are then sent to orientation centres, run by CONADER, where fighters make the choice of either returning to civilian society or remaining in the armed forces. Combatants who choose demobilisation receive an initial cash payment of US$110. Those who choose to stay within the FARDC are then transferred to one of six integration centres for a 45-day training course, which aims to build integrated formations out of factional fighters previously heavily divided along ethnic, political and regional lines. The centres are spread out around the country at Kitona, Kamina, Kisangani, Rumangabo and Nyaleke (within the Virunga National Park) in Nord-Kivu, and Luberizi (on the border with Burundi) in South Kivu. The process has suffered severe difficulties due to construction delays, administration errors, and the amount of travel former combatants have to do, as the three stages' centres are widely separated. Following the first 18 integrated brigades, the second goal is the formation of a ready reaction force of two to three brigades, and finally, by 2010, when MONUC is anticipated to have withdrawn, the creation of a Main Defence Force of three divisions. In February 2008, then Defence Minister Chikez Diemu described the reform plan as follows: "The short term, 2008–2010, will see the setting in place of a Rapid Reaction Force; the medium term, 2008–2015, with a Covering Force; and finally the long term, 2015–2020, with a Principal Defence Force." Diemu added that the reform plan rests on a programme of synergy based on the four pillars of dissuasion, production, reconstruction and excellence. "The Rapid Reaction Force is expected to focus on dissuasion, through a Rapid Reaction Force of 12 battalions, capable of aiding MONUC to secure the east of the country and to realise constitutional missions."
Amid the other difficulties in building new armed forces for the DRC, in early 2007 the integration and training process was distorted as the DRC government under Kabila attempted to use it to gain more control over the dissident general Laurent Nkunda. A hastily negotiated verbal agreement in Rwanda saw three government FAC brigades integrated with Nkunda's former ANC 81st and 83rd Brigades in what was called "mixage". "Mixage" brought multiple factions into composite brigades, but without the 45-day retraining provided by "brassage", and it seems that, in practice, the process was limited to exchanging battalions between the FAC and Nkunda brigades in North Kivu, without further integration. Because Nkunda's troops had greater cohesion, he effectively gained control of all five brigades, which was not the intention of the DRC central government. However, after Nkunda used the "mixage" brigades to fight the FDLR, strains arose between the FARDC and the Nkunda-loyalist troops within the brigades, and they fell apart in the last days of August 2007. The International Crisis Group says that "by 30 August [2007] Nkunda's troops had left the mixed brigades and controlled a large part of the Masisi and Rutshuru territories" (of North Kivu). Both the formally integrated brigades and the non-integrated units continue to conduct arbitrary arrests, rapes, robberies, and other crimes, and these human rights violations are "regularly" committed by both officers and members of the rank and file. Members of the Army also often strike deals with the militias they are meant to be fighting in order to gain access to resources. The various brigades and other formations and units number at least 100,000 troops. The status of these brigades has been described as "pretty chaotic." A 2007 disarmament and repatriation study said "army units that have not yet gone through the process of brassage are usually much smaller than what they ought to be. Some non-integrated brigades have only 500 men (and are thus nothing more than a small battalion) whereas some battalions may not even have the size of a normal company (over 100 men)." A number of outside donor countries are also carrying out separate training programmes for various parts of the Forces Terrestres (Land Forces). The People's Republic of China trained Congolese troops at Kamina in Katanga from at least 2004 to 2009, and the Belgian government is training at least one "rapid reaction" battalion. When Kabila visited U.S. President George W. Bush in Washington, D.C., he also asked the U.S. Government to train a battalion, and as a result a private contractor, Protection Strategies Incorporated, started training a FARDC battalion at Camp Base, Kisangani, in February 2010. The company was supervised by United States Special Operations Command Africa. Three years later, the battalion broke and ran in the face of M23, raping women and young girls, looting, and carrying out arbitrary executions. The various international training programmes are not well integrated. Attempting to list the equipment available to the DRC's land forces is difficult; most figures are unreliable estimates based on known items delivered in the past. The figures below are from the IISS Military Balance 2014. Much of the Army's equipment is non-operational due to insufficient maintenance; in 2002 only 20 percent of the Army's armoured vehicles were estimated to be serviceable.
In addition to these 2014 figures, in March 2010 it was reported that the DRC's land forces had ordered US$80 million worth of military equipment from Ukraine, including 20 T-72 main battle tanks, 100 trucks, and various small arms. Tanks were used in the Kivus in the 2005–09 period. In February 2014, Ukraine revealed that it had secured the first export order for the T-64 tank, the DRC Land Forces having ordered 50 T-64BV-1s. In June 2015 it was reported that Georgia had sold 12 of its Didgori-2 vehicles to the DRC for $4 million. The vehicles were specifically designed for reconnaissance and special operations; two of them are a recently developed conversion for medical field evacuation. The United Nations confirmed in 2011, both from sources in the Congolese military and from officials of the Commission nationale de contrôle des armes légères et de petit calibre et de réduction de la violence armée, that the ammunition plant called Afridex in Likasi, Katanga Province, manufactures ammunition for small arms and light weapons. In addition to the other land forces, President Joseph Kabila also has a Republican Guard presidential force ("Garde Républicaine" or GR), formerly known as the Special Presidential Security Group (GSSP). FARDC military officials state that the Garde Républicaine is not the responsibility of the FARDC but of the Head of State. Apart from Article 140 of the Law on the Army and Defence, no legal stipulation on the DRC's Armed Forces makes provision for the GR as a distinct unit within the national army. In February 2005 President Joseph Kabila passed a decree which appointed the GR's commanding officer and "repealed any previous provisions contrary" to that decree. The GR, more than 10,000 strong (the ICG said 10,000 to 15,000 in January 2007), has better working conditions and is paid regularly, but still commits rapes and robberies in the vicinity of its bases. In an effort to extend his personal control across the country, Joseph Kabila has deployed the GR at key airports, ostensibly in preparation for an impending presidential visit. Guards have been deployed in the central prison of Kinshasa and at N'djili Airport, Bukavu, Kisangani, Kindu, Lubumbashi, Matadi, and Moanda, where they appear to answer to no local commander and have caused trouble with MONUC troops. The GR is also supposed to undergo the integration process, but in January 2007 only one battalion had been announced as having been integrated. Formed at a brassage centre in the Kinshasa suburb of Kibomango, the battalion included 800 men, half from the former GSSP and half from the MLC and RCD-Goma. Up until June 2016, the GR comprised three brigades: the 10th Brigade at Camp Tshatshi and the 11th at Camp Kimbembe, both in Kinshasa, and the 13th Brigade at Camp Simi Simi in Kisangani. From that time it was reorganised on the basis of eight fighting regiments, the 14th Security and Honor Regiment, an artillery regiment, and a command brigade/regiment. There are currently large numbers of United Nations troops stationed in the DRC. The United Nations Organization Stabilization Mission in the Democratic Republic of the Congo (MONUSCO) had a strength of over 19,000 peacekeepers (including 16,998 military personnel) and has a mission of assisting the Congolese authorities in maintaining security.
The UN and foreign military aid missions, the most prominent being EUSEC RD Congo, are attempting to assist the Congolese in rebuilding the armed forces, with major efforts being made to assure regular payment of salaries to armed forces personnel and also in military justice. Retired Canadian Lieutenant General Marc Caron also served for a time as Security Sector Reform advisor to the head of MONUC. Groups of rebels opposed to the Rwandan government, like the FDLR, and other foreign fighters remain inside the DRC. The FDLR, the greatest concern, was some 6,000 strong in July 2007; by late 2010, however, its strength was estimated at 2,500. The other groups are smaller: the Ugandan Lord's Resistance Army; the Ugandan rebel group the Allied Democratic Forces, in the remote area of Mt Rwenzori; and the Burundian Parti pour la Libération du Peuple Hutu–Forces Nationales de Libération (PALIPEHUTU-FNL). Finally, there is a government paramilitary force, created in 1997 under President Laurent Kabila. The National Service is tasked with providing the army with food and with training the youth in a range of reconstruction and developmental activities. There is not much further information available, and no internet-accessible source details the relationship of the National Service to other armed forces bodies; it is not listed in the constitution. President Kabila, in one of the few comments available, said the National Service would provide gainful activity for street children. Obligatory civil service administered through the armed forces was also proposed under the Mobutu regime during the "radicalisation" programme of December 1974 – January 1975; the FAZ was opposed to the measure and the plan "took several months to die." All military aircraft in the DRC are operated by the Air Force. Jane's World Air Forces states that the Air Force has an estimated strength of 1,800 personnel and is organised into two air groups. These groups command five wings and nine squadrons, not all of which are operational. 1 Air Group is located at Kinshasa and consists of a Liaison Wing, a Training Wing and a Logistical Wing, with a strength of five squadrons. 2 Tactical Air Group is located at Kamina and consists of a Pursuit and Attack Wing and a Tactical Transport Wing, with a strength of four squadrons. Foreign private military companies have reportedly been contracted to provide the DRC's aerial reconnaissance capability, using small propeller aircraft fitted with sophisticated equipment. Jane's states that National Air Force of Angola fighter aircraft would be made available to defend Kinshasa if it came under attack. Like the other services, the Congolese Air Force is not capable of carrying out its responsibilities. Few of the Air Force's aircraft are currently flyable or capable of being restored to service, and it is unclear whether the Air Force is capable of maintaining even unsophisticated aircraft. Moreover, Jane's states that the Air Force's Ecole de Pilotage is 'in near total disarray', though Belgium has offered to restart the Air Force's pilot training program. Before the downfall of Mobutu, a small navy operated on the Congo River. One of its installations was at the village of N'dangi near the presidential residence in Gbadolite. The port at N'dangi was the base for several patrol boats, helicopters and the presidential yacht.
The 2002 edition of "Jane's Sentinel" described the Navy as being "in a state of near total disarray" and stated that it did not conduct any training or have operating procedures. The Navy shares the same discipline problems as the other services. It was initially placed under the command of the MLC when the transition began, so the current situation is uncertain. The 2007 edition of "Jane's Fighting Ships" states that the Navy is organised into four commands, based at Matadi, near the coast; the capital Kinshasa, further up the Congo River; Kalemie, on Lake Tanganyika; and Goma, on Lake Kivu. The International Institute for Strategic Studies, in its 2007 edition of the "Military Balance", confirms the bases listed in "Jane's" and adds a fifth base at Boma, a coastal city near Matadi. Various sources also refer to numbered Naval Regions: operations of the 1st Naval Region have been reported in Kalemie, the 4th near the northern city of Mbandaka, and the 5th at Goma. The IISS lists the Navy at 1,000 personnel with a total of eight patrol craft, of which only one is operational, a Shanghai II Type 062 class gunboat designated "102". There are five other 062s as well as two Swiftships which are not currently operational, though some may be restored to service in the future. According to "Jane's", the Navy also operates barges and small craft armed with machine guns. As of 2012, the Navy on paper consisted of about 6,700 personnel and up to 23 patrol craft. In reality there were probably around 1,000 service members, and only 8 of the boats were 50 ft in length or larger, the sole operational vessel being a Shanghai II Type 062 class gunboat. The service maintains bases in Kinshasa, Boma and Matadi, and on Lake Tanganyika.
https://en.wikipedia.org/wiki?curid=8029
Geography of Denmark Denmark is a Nordic country located in Northern Europe. It consists of the Jutland peninsula and several islands in the Baltic Sea, referred to as the Danish Archipelago. Denmark is located southwest of Sweden and due south of Norway and is bordered by the German state (and former possession) of Schleswig-Holstein to the south, on Denmark's only land border, 68 kilometres (42 miles) long. Denmark borders both the Baltic and the North Sea along its tidal shoreline. Denmark's general coastline is much shorter, as it does not include most of the 1,419 offshore islands (each defined as exceeding 100 square metres in area) or the 180 km long Limfjorden, which separates Denmark's second-largest island, the North Jutlandic Island, 4,686 km² in size, from the rest of Jutland. No location in Denmark is further than about 50 kilometres from the coast. The land area of Denmark cannot be stated exactly, since the ocean constantly erodes and adds material to the coastline and there are human land reclamation projects. On the southwest coast of Jutland the tidal range is significant, and the tideline moves outward and inward over a long stretch. When Greenland and the Faroe Islands are included, Denmark's Exclusive Economic Zone is the 15th largest in the world. A circle enclosing the same total area as Denmark would have a diameter of 234 km (146 miles). Denmark has 443 named islands (1,419 islands above 100 m²), of which 72 are inhabited (Statistics Denmark). The largest islands are Zealand ("Sjælland") and Funen ("Fyn"). The island of Bornholm is located east of the rest of the country, in the Baltic Sea. Many of the larger islands are connected by bridges: the Øresund Bridge connects Zealand with Sweden, the Great Belt Bridge connects Funen with Zealand, and the Little Belt Bridge connects Jutland with Funen. Ferries or small aircraft connect to the smaller islands. The main cities are the capital Copenhagen on Zealand; Århus, Aalborg and Esbjerg in Jutland; and Odense on Funen. Denmark experiences a temperate climate, meaning that the winters are mild and windy and the summers are cool. The local terrain is generally flat with a few gently rolling plains. The territory of Denmark includes the island of Bornholm in the Baltic Sea and the rest of metropolitan Denmark, but excludes the Faroe Islands and Greenland. Its position gives Denmark complete control of the Danish Straits (Skagerrak and Kattegat) linking the Baltic and North Seas. The country's natural resources include petroleum, natural gas, fish, salt, limestone, chalk, stone, gravel and sand.
Irrigated land: 4,354 km² (2007)
Total renewable water resources: 6 km³ (2011)
Freshwater withdrawal (domestic/industrial/agricultural): total 0.66 km³/yr (58%/5%/36%); per capita 118.4 m³/yr (2009)
Continental shelf: 200-m depth or to the depth of exploitation
Exclusive economic zone: (excludes Greenland and the Faroe Islands)
Territorial sea:
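The circle comparison above can be checked with a short calculation (a sketch only, taking Denmark's land area as roughly 43,000 km², a commonly cited approximate figure). A circle of area A has diameter d = 2 × √(A/π), so d = 2 × √(43,000/3.14159) ≈ 2 × 117 ≈ 234 km, matching the stated diameter.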
https://en.wikipedia.org/wiki?curid=8032
Demographics of Denmark This article is about the demographic features of the population of Denmark, including population density, ethnicity, education level, health of the populace, economic status, religious affiliations and other aspects of the population. Since 1980, the number of Danes has remained constant at around 5 million, and nearly all of the population growth from 5.1 million up to the 2018 total of 5.8 million was due to immigration. According to 2017 figures from Statistics Denmark, 86.9% of Denmark's population of over 5,760,694 was of Danish descent, defined as having at least one parent who was born in Denmark and has Danish citizenship. The remaining 13.1% were of a foreign background, defined as immigrants or descendants of recent immigrants. With the same definition, the most common countries of origin were Poland, Turkey, Germany, Iraq, Romania, Syria, Somalia, Iran, Afghanistan, and Yugoslavia and its successor states. Some 752,618 individuals (13.1%) are migrants or their descendants, of whom 146,798 are second-generation migrants born in Denmark. The total fertility rate is the number of children born per woman; the series is based on fairly good data for the entire period. Historical fertility and population figures are drawn from Our World In Data, the Gapminder Foundation, Statistics Denmark (which collects the official statistics for Denmark), and the UN "World Population Prospects" (covering the periods 1775–1950 and 1950–2015). The Church of Denmark is state-supported and, according to statistics from January 2019, counts about 74.7% of Denmark's population as members. Denmark has had religious freedom guaranteed since 1849 by the Constitution, and numerous other religions are officially recognised, including several Christian denominations, Muslim, Jewish, Buddhist, Hindu and other congregations, as well as Forn Siðr, a revival of Scandinavian pagan tradition. The Department of Ecclesiastical Affairs recognises roughly a hundred religious congregations for tax and legal purposes such as conducting wedding ceremonies. Islam is the second largest religion in Denmark. For historical reasons, there is a formal distinction between 'approved' ("godkendte") and 'recognised' ("anerkendte") congregations of faith. The latter include 11 traditional denominations, such as Roman Catholics, the Reformed Church, the Mosaic Congregation, Methodists and Baptists, some of whose privileges in the country date hundreds of years back. These have the additional rights of having priests appointed by royal resolution and of christening/naming children with legal effect. Further demographic statistics, covering Denmark's population from 1769 to 2007, follow the World Population Review (2019) and the CIA World Factbook unless otherwise indicated; note that the latter's data represent population by ancestry.
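As a sketch of how the total fertility rate mentioned above is computed (the standard demographic definition, not a method specific to the sources cited): the TFR for a given year is the sum of that year's age-specific fertility rates across the reproductive ages, i.e. the number of children a woman would bear if she experienced that year's rates at every age of her life. With five-year age groups, TFR = 5 × (ASFR(15–19) + ASFR(20–24) + ... + ASFR(45–49)), where each ASFR is the number of births per woman in that age group during the year.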
https://en.wikipedia.org/wiki?curid=8033
Economy of Denmark The economy of Denmark is a modern mixed economy with comfortable living standards, a high level of government services and transfers, and a high dependence on foreign trade. The economy is dominated by the service sector, with 80% of all jobs, whereas about 11% of all employees work in manufacturing and 2% in agriculture. Nominal gross national income per capita was the tenth-highest in the world at $55,220 in 2017. Correcting for purchasing power, per capita income was Int$52,390, or 16th-highest globally. Income distribution is relatively equal, but inequality has increased somewhat during the last decades, due both to a larger spread in gross incomes and to various economic policy measures. In 2017, Denmark had the seventh-lowest Gini coefficient (a measure of economic inequality) of the 28 European Union countries. With 5,789,957 inhabitants (1 July 2018), Denmark has the 39th largest national economy in the world measured by nominal gross domestic product (GDP) and the 60th largest measured by purchasing power parity (PPP). As a small open economy, Denmark generally advocates a liberal trade policy, and its exports as well as imports make up circa 50% of GDP. Since 1990 Denmark has consistently had a current account surplus, with the sole exception of 1998. As a consequence, the country has become a considerable creditor nation, having acquired a net international investment position amounting to 65% of GDP in 2018. A decisive reason for this is the widespread compulsory funded labour-market pension schemes, which have caused a considerable increase in private savings rates and today play an important role in the economy. Denmark has a very long tradition of adhering to a fixed exchange-rate system and still does so today. It is unique among OECD countries in doing so while maintaining an independent currency: the Danish krone, which is pegged to the euro. Though eligible to join the EMU, Danish voters rejected exchanging the krone for the euro in a referendum in 2000. Whereas Denmark's neighbours such as Norway, Sweden, Poland and the United Kingdom generally follow inflation targeting in their monetary policy, the priority of Denmark's central bank is to maintain exchange-rate stability. Consequently, the central bank has no role in domestic stabilization policy. Since February 2015, the central bank has maintained a negative interest rate to contain upward exchange-rate pressure. In an international context, a relatively large proportion of the population is part of the labour force, in particular because the female participation rate is very high. In 2017, 78.8% of all 15–64-year-olds were active on the labour market, the sixth-highest rate among all OECD countries. Unemployment is relatively low among European countries; in October 2018, 4.8% of the Danish labour force was unemployed, compared to an average of 6.7% for all EU countries. There is no legal minimum wage in Denmark. The labour market is traditionally characterized by high union membership rates and collective agreement coverage. Denmark invests heavily in active labour market policies, and the concept of flexicurity has been important historically. Denmark is an example of the Nordic model, characterized by an internationally high tax level and a correspondingly high level of government-provided services (e.g. health care, child care and education services) and income transfers to various groups such as retired or disabled people, unemployed persons, and students.
Altogether, tax revenue in 2017 amounted to 46.1% of GDP. Danish fiscal policy is generally considered healthy. Net government debt is very close to zero, amounting to 1.3% of GDP in 2017. Danish fiscal policy is characterized by a long-term outlook, taking into account likely future fiscal demands. During the 2000s, demographic development, in particular higher longevity, was perceived as a challenge to government expenditure in future decades and hence ultimately to fiscal sustainability. Responding to this, the age eligibility rules for receiving public age-related transfers were changed. From 2012, calculations of future fiscal challenges from the government as well as from independent analysts have generally found Danish fiscal policy to be sustainable – indeed in recent years overly sustainable. Denmark's long-term economic development has largely followed the same pattern as other Northwestern European countries. In most of recorded history Denmark has been an agricultural country with most of the population living at a subsistence level. Since the 19th century Denmark has gone through an intense technological and institutional development. The material standard of living has experienced formerly unknown rates of growth, and the country has been industrialized and later turned into a modern service society. Almost all of the land area of Denmark is arable. Unlike most of its neighbours, Denmark has not had extractable deposits of minerals or fossil fuels, except for the deposits of oil and natural gas in the North Sea, which started playing an economic role only during the 1980s. On the other hand, Denmark has had a logistic advantage through its long coastline and the fact that no point on Danish land is more than 50 kilometers from the sea – an important fact for the whole period before the industrial revolution, when sea transport was cheaper than land transport. Consequently, foreign trade has always been very important for the economic development of Denmark. Already during the Stone Age there was some foreign trade, and even though trade made up only a very modest share of total Danish value added until the 19th century, it has been decisive for economic development, both in terms of procuring vital import goods (like metals) and because new knowledge and technological skills have often come to Denmark as a byproduct of goods exchange with other countries. The emerging trade implied specialization, which created demand for means of payment, and the earliest known Danish coins date from the time of Svend Tveskæg around 995. According to economic historian Angus Maddison, Denmark was the sixth-most prosperous country in the world around 1600. The population size relative to arable agricultural land was small, so the farmers were relatively affluent, and Denmark was geographically close to the most dynamic and economically leading European areas from the 16th century onwards: the Netherlands, the northern parts of Germany, and Britain. Still, 80 to 85% of the population lived in small villages at a subsistence level. Mercantilism was the leading economic doctrine during the 17th and 18th centuries in Denmark, leading to the establishment of monopolies like Asiatisk Kompagni, the development of physical and financial infrastructure like the first Danish bank, Kurantbanken, in 1736 and the first "kreditforening" (a kind of building society) in 1797, and the acquisition of some minor Danish colonies like Tranquebar.
At the end of the 18th century major agricultural reforms took place that entailed decisive structural changes. Politically, mercantilism was gradually replaced by liberal thought among the ruling elite. Following a monetary reform after the Napoleonic Wars, the present Danish central bank, Danmarks Nationalbank, was founded in 1818. National accounting data for Denmark exist from 1820 onwards thanks to the pioneering work of the Danish economic historian Svend Aage Hansen. These data show that there has been substantial and permanent, though fluctuating, economic growth ever since 1820. The period 1822–94 saw on average an annual growth in factor incomes of 2% (0.9% per capita). From around 1830 the agricultural sector experienced a major boom lasting several decades, producing and exporting grains, not least to Britain after 1846, when British grain import duties were abolished. When grain production became less profitable in the second half of the century, the Danish farmers made an impressive and uniquely successful change from vegetable to animal production, leading to a new boom period. In parallel, industrialization took off in Denmark from the 1870s. At the turn of the century industry (including artisanry) fed almost 30% of the population. During the 20th century agriculture slowly dwindled in importance relative to industry, but agricultural employment was surpassed by industrial employment only during the 1950s. The first half of the century was marked by the two world wars and the Great Depression of the 1930s. After World War II Denmark took part in increasingly close international cooperation, joining the OEEC/OECD, the IMF, GATT/WTO, and from 1972 the European Economic Community, later the European Union. Foreign trade increased heavily relative to GDP. The economic role of the public sector increased considerably, and the country was increasingly transformed from an industrial country into a country dominated by the production of services. The years 1958–73 were an unprecedented high-growth period. The 1960s are the decade with the highest registered real per capita growth in GDP ever, i.e. 4.5% annually. During the 1970s Denmark was plunged into a crisis, initiated by the 1973 oil crisis and leading to the hitherto unknown phenomenon of stagflation. For the next decades the Danish economy struggled with several major so-called "balance problems": high unemployment, current account deficits, inflation, and government debt. From the 1980s economic policies have increasingly been oriented towards a long-term perspective, and gradually a series of structural reforms have solved these problems. In 1994 active labour market policies were introduced that, via a series of labour market reforms, have helped reduce structural unemployment considerably. A series of tax reforms from 1987 onwards, reducing tax deductions on interest payments, and the increasing importance of compulsory labour-market-based funded pensions from the 1990s, have increased private savings rates considerably, consequently transforming secular current account deficits into secular surpluses. The announcement of a consistent, and hence more credible, fixed exchange-rate policy in 1982 helped reduce the inflation rate. In the first decade of the 21st century new economic policy issues have emerged.
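To put the growth rates above in perspective (a standard rule-of-thumb calculation, not taken from the sources cited): an income growing at rate r% per year doubles in roughly 70/r years. At the 1960s rate of 4.5% per year, real per capita GDP doubles in about 70/4.5 ≈ 16 years, roughly the length of the 1958–73 high-growth period, whereas at the 19th-century rate of 0.9% per capita, doubling took about 70/0.9 ≈ 78 years.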
A growing awareness that future demographic changes, in particular increasing longevity, could threaten fiscal sustainability, implying very large fiscal deficits in future decades, led to major political agreements in 2006 and 2011, both increasing the future eligibility age for receiving public age-related pensions. Mainly because of these changes, from 2012 onwards the Danish fiscal sustainability problem has generally been considered solved. Instead, issues like decreasing productivity growth rates and increasing inequality in income distribution and consumption possibilities are prevalent in the public debate.

The global Great Recession of the late 2000s, the accompanying euro area debt crisis and their repercussions marked the Danish economy for several years. Until 2017, unemployment rates were generally considered to be above their structural level, implying a relatively stagnating economy from a business-cycle point of view. From 2017/18 this is no longer considered to be the case, and attention has been redirected to the need to avoid a potential overheating situation.

Average per capita income is high in an international context. According to the World Bank, gross national income per capita was the tenth-highest in the world at $55,220 in 2017. Correcting for purchasing power, income was Int$52,390, or 16th-highest among the 187 countries.

During the last three decades household saving rates in Denmark have increased considerably, to a large extent because of two major institutional changes. First, a series of tax reforms from 1987 to 2009 considerably reduced the effective subsidization of private debt implicit in the rules for tax deduction of household interest payments. Secondly, compulsory funded pension schemes became normal for most employees from the 1990s. Over the years, the wealth of the Danish pension funds has accumulated so that in 2016 it constituted twice the size of Denmark's GDP. The pension wealth is consequently very important both for the life cycle of a typical individual Danish household and for the national economy. A large part of the pension wealth is invested abroad, thus giving rise to a fair amount of foreign capital income. In 2015, average household assets were more than 600% of disposable income, among OECD countries second only to the Netherlands. At the same time, average household gross debt was almost 300% of disposable income, also the highest level in the OECD. Household balance sheets are consequently very large in Denmark compared to most other countries. Danmarks Nationalbank, the Danish central bank, has attributed this to a well-developed financial system.

Income inequality has traditionally been low in Denmark. According to OECD figures, in 2000 Denmark had the lowest Gini coefficient of all countries. However, inequality has increased during the last decades. According to data from Statistics Denmark, the Gini coefficient for disposable income increased from 22.1 in 1987 to 29.3 in 2017. The Danish Economic Council found in an analysis from 2016 that the increasing inequality in Denmark is due to several components: pre-tax labour income is more unequally distributed today than before; capital income, which is generally less equally distributed than labour income, has increased as a share of total income; and economic policy is less redistributive today, both because public income transfers play a smaller role and because the tax system has become less progressive.
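The Gini coefficient cited above can be computed directly from household income data. The following is a minimal illustrative sketch in Python; the income figures are invented for demonstration and this is not Statistics Denmark's actual procedure, which uses equivalised disposable incomes for the full population and publishes the coefficient scaled by 100 (hence figures like 22.1 and 29.3):

```python
def gini(incomes):
    """Gini coefficient of a non-empty list of non-negative incomes.

    Uses the identity G = (2 * sum(i * x_i) / (n * sum(x))) - (n + 1) / n,
    where x_1 <= ... <= x_n are the sorted incomes and i is the 1-based rank.
    Returns 0 for full equality and approaches 1 for full inequality.
    Assumes the incomes sum to a positive total.
    """
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    rank_weighted = sum(rank * x for rank, x in enumerate(xs, start=1))
    return 2 * rank_weighted / (n * total) - (n + 1) / n

# Hypothetical annual disposable incomes (thousand DKK) for a tiny sample.
sample = [180, 210, 240, 260, 300, 340, 410, 520, 700, 1100]
print(f"Gini: {100 * gini(sample):.1f}")  # scaled by 100, as in the official Danish figures
```

On this invented ten-household sample the statistic comes out at roughly 32, coincidentally close to the 2017 Danish figure, but that reflects the made-up numbers rather than any official calculation.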
In international comparisons, Denmark has a relatively equal income distribution. According to the CIA World Factbook, Denmark had the twentieth-lowest Gini coefficient (29.0) of 158 countries in 2016. According to data from Eurostat, Denmark was the EU country with the seventh-lowest Gini coefficient in 2017; Slovakia, Slovenia, Czechia, Finland, Belgium and the Netherlands had a lower Gini coefficient for disposable income than Denmark.

The Danish labour market is characterized by high union membership rates and collective agreement coverage, dating back to "Septemberforliget" (the September Settlement) of 1899, in which the Danish Confederation of Trade Unions and the Confederation of Danish Employers recognized each other's right to organise and negotiate. The labour market is also traditionally characterized by a high degree of flexicurity, i.e. a combination of labour market flexibility and economic security for workers. The degree of flexibility is in part maintained through active labour market policies.

Denmark first introduced active labour market policies (ALMPs) in the 1990s after an economic recession that resulted in high unemployment rates. Its labour market policies are decided through tripartite cooperation between employers, employees and the government. Denmark has one of the highest levels of expenditure on ALMPs and in 2005 spent about 1.7% of its GDP on labour market policies – the highest among the OECD countries. Similarly, in 2010 Denmark was ranked number one among the Nordic countries for expenditure on ALMPs.

Denmark's active labour market policies particularly focus on tackling youth unemployment. A "youth initiative", the Danish Youth Unemployment Programme, has been in place since 1996. It includes mandatory activation for unemployed people under the age of 30. While unemployment benefits are provided, the policies are designed to motivate job-seeking; for example, unemployment benefits decrease by 50% after 6 months. This is combined with education, skill development and work training programs. For instance, the Building Bridge to Education program was started in 2014 to provide mentors and skill development classes to youth at risk of unemployment. Such active labour market policies have been successful for Denmark in both the short term and the long term. For example, 80% of participants in the Building Bridge to Education program felt that "the initiative has helped them to move towards completing an education". On a more macro scale, a study of the impact of ALMPs in Denmark between 1995 and 2005 showed that such policies had a positive impact not just on employment but also on earnings. The effective compensation rate for unemployed workers has been declining over the last decades, however. Unlike most Western countries, Denmark has no legal minimum wage.

A relatively large proportion of the population is active on the labour market, not least because of a very high female participation rate. The total participation rate for people aged 15 to 64 years was 78.8% in 2017. This was the 6th-highest among OECD countries, surpassed only by Iceland, Switzerland, Sweden, New Zealand and the Netherlands. The average for all OECD countries together was 72.1%. According to Eurostat, the unemployment rate was 5.7% in 2017. This places unemployment in Denmark somewhat below the EU average, which was 7.6%; 10 EU member countries had a lower unemployment rate than Denmark in 2017.
Altogether, total employment in 2017 amounted to 2,919,000 people according to Statistics Denmark. The share of employees leaving jobs every year (for a new job, retirement or unemployment) in the private sector is around 30% – a level also observed in the U.K. and the U.S. – but much higher than in continental Europe, where the corresponding figure is around 10%, and in Sweden. This attrition can be very costly, with new and old employees requiring half a year to return to old productivity levels, but it also reduces the number of people who have to be fired.

As a small open economy, Denmark is very dependent on its foreign trade. In 2017, the value of total exports of goods and services made up 55% of GDP, whereas the value of total imports amounted to 47% of GDP. Trade in goods made up slightly more than 60% of both exports and imports, and trade in services the remaining close to 40%. Machinery, chemicals and related products like medicine, and agricultural products were the largest groups of export goods in 2017. Service exports were dominated by freight sea transport services from the Danish merchant navy. Most of Denmark's most important trading partners are neighbouring countries. The five main importers of Danish goods and services in 2017 were Germany, Sweden, the United Kingdom, the United States and Norway. The five countries from which Denmark imported most goods and services in 2017 were Germany, Sweden, the Netherlands, China and the United Kingdom.

After running an external balance of payments current account deficit almost consistently from the beginning of the 1960s, Denmark has maintained a surplus on its current account for every year since 1990, with the single exception of 1998. In 2017, the current account surplus amounted to approximately 8% of GDP. Consequently, Denmark has changed from a net debtor to a net creditor country. By 1 July 2018, the net foreign wealth or net international investment position of Denmark was equal to 64.6% of GDP, the largest net foreign wealth relative to GDP of any EU country. As the annual current account equals the value of domestic saving minus total domestic investment, the change from a structural deficit to a structural surplus is due to changes in these two national account components; as a stylized numerical illustration, saving of 28% of GDP combined with investment of 20% of GDP yields a current account surplus of 8% of GDP. In particular, the Danish national saving rate in financial assets increased by 11 per cent of GDP from 1980 to 2015. Two main reasons for this large change in domestic saving behaviour were the growing importance of large-scale compulsory pension schemes and the several Danish fiscal policy reforms of the period, which considerably decreased tax deductions of household interest expense, thus reducing the tax subsidy to private debt.

The Danish currency is the Danish krone, subdivided into 100 øre. The krone and øre were introduced in 1875, replacing the former rigsdaler and skilling. Denmark has a very long tradition of maintaining a fixed exchange-rate system, dating back to the period of the gold standard during the time of the Scandinavian Monetary Union from 1873 to 1914. After the breakdown of the international Bretton Woods system in 1971, Denmark devalued the krone repeatedly during the 1970s and the start of the 1980s, effectively maintaining a policy of "fixed, but adjustable" exchange rates. Rising inflation led to Denmark declaring a more consistent fixed exchange-rate policy in 1982. At first the krone was pegged to the European Currency Unit or ECU, from 1987 to the Deutschmark, and from 1999 to the euro.
Although eligible, Denmark chose not to join the European Monetary Union when it was founded. In 2000, the Danish government advocated Danish EMU membership and called a referendum to settle the issue. With a turn-out of 87.6%, 53% of the voters rejected membership. The question of calling another referendum has occasionally been discussed, but since the financial crisis of 2007–2008 opinion polls have shown a clear majority against Denmark joining the EMU, and the question is currently not high on the political agenda.

Maintenance of the fixed exchange rate is the responsibility of Danmarks Nationalbank, the Danish central bank. As a consequence of the exchange-rate policy, the bank must always adjust its interest rates to ensure a stable exchange rate, and consequently it cannot at the same time conduct monetary policy to stabilize e.g. domestic inflation or unemployment rates. This makes the conduct of stabilization policy fundamentally different from the situation in Denmark's neighbouring countries like Norway, Sweden, Poland and the United Kingdom, in which the central banks have a central stabilizing role. Denmark is presently the only OECD member country maintaining an independent currency with a fixed exchange rate; consequently, the Danish krone is the only currency in the European Exchange Rate Mechanism II (ERM II).

In the first months of 2015, Denmark experienced the largest pressure against the fixed exchange rate in many years, caused by very large capital inflows that created a tendency for the Danish krone to appreciate. Danmarks Nationalbank reacted in various ways, chiefly by lowering its interest rates to record lows. On 6 February 2015 the certificates of deposit rate, one of the four official Danish central bank rates, was lowered to −0.75%. In January 2016 the rate was raised to −0.65%, at which level it has been maintained since.

Inflation in Denmark, as measured by the official consumer price index of Statistics Denmark, was 1.1% in 2017. Inflation has generally been low and stable over the last decades. Whereas in 1980 annual inflation was more than 12%, in the period 2000–2017 the average inflation rate was 1.8%.

Since a local-government reform in 2007, general government in Denmark is organized on three administrative levels: central government, regions, and municipalities. Regions administer mainly health care services, whereas municipalities administer primary education and social services. Municipalities in principle levy income and property taxes independently, but the scope for total municipal taxation and expenditure is closely regulated by annual negotiations between the municipalities and the Finance Minister of Denmark. At the central government level, the Ministry of Finance carries out the coordinating role of conducting economic policy. In 2012, the Danish parliament passed a Budget Law (effective from January 2014) which governs the overall fiscal framework, stating among other things that the structural deficit must never exceed 0.5% of GDP and that Danish fiscal policy is required to be sustainable, i.e. to have a non-negative fiscal sustainability indicator. The Budget Law also assigned the role of independent fiscal institution (IFI, informally known as "fiscal watchdog") to the already-existing independent advisory body of the Danish Economic Councils. Danish fiscal policy is generally considered healthy: government net debt was close to zero at the end of 2017, amounting to DKK 27.3 billion, or 1.3% of GDP.
As the government sector has a fair amount of financial assets as well as liabilities, government gross debt amounted to 36.1% of GDP at the same date. The gross EMU debt as a percentage of GDP was the sixth-lowest among all 28 EU member countries, with only Estonia, Luxembourg, Bulgaria, the Czech Republic and Romania having a lower gross debt. Denmark had a government budget surplus of 1.1% of GDP in 2017.

Long-run annual fiscal projections from the Danish government as well as the independent Danish Economic Council, taking into account likely future fiscal developments caused by demographic trends (e.g. a likely ageing of the population caused by a considerable expansion of life expectancy), consider Danish fiscal policy to be overly sustainable in the long run. In spring 2018, the so-called fiscal sustainability indicator was calculated to be 1.2% of GDP by the Danish government and 0.9% of GDP by the Danish Economic Council. This implies that under the assumptions employed in the projections, fiscal policy could be permanently loosened (via more generous public expenditures and/or lower taxes) by roughly 1% of GDP while still maintaining a stable government debt-to-GDP ratio in the long run.

The tax level as well as the government expenditure level in Denmark ranks among the highest in the world, which is traditionally ascribed to the Nordic model of which Denmark is an example, including the welfare state principles that evolved historically during the 20th century. In 2017, the official Danish tax level amounted to 46.1% of GDP. The highest Danish tax level ever recorded was 49.8% of GDP, reached in 2014 because of extraordinary one-time tax revenues caused by a reorganization of the Danish funded pension system. The Danish tax-to-GDP ratio of 46% was the second-highest among all OECD countries, surpassed only by France. The OECD average was 34.2%. The tax structure of Denmark (the relative weight of different taxes) also differs from the OECD average: in 2015 the Danish tax system was characterized by substantially higher revenues from taxes on personal income, whereas no revenues at all derive from social security contributions. A lower proportion of revenues in Denmark derives from taxes on corporate income and gains and from property taxes than in the OECD generally, whereas the proportion deriving from payroll taxes, VAT, and other taxes on goods and services corresponds to the OECD average.

In 2016, the average marginal tax rate on labour income for all Danish tax-payers was 38.9%; the average marginal tax on personal capital income was 30.7%. Professor of economics at Princeton University Henrik Kleven has suggested that three distinct policies in Denmark and its Scandinavian neighbours imply that the high tax rates cause only relatively small distortions to the economy: widespread use of third-party information reporting for tax collection purposes (ensuring a low level of tax evasion), broad tax bases (ensuring a low level of tax avoidance), and strong subsidization of goods that are complementary to working (ensuring a high level of labour force participation).

Parallel to the high tax level, government expenditures make up a large part of GDP, and the government sector carries out many different tasks. By September 2018, 831,000 people worked in the general government sector, corresponding to 29.9% of all employees. In 2017, total government expenditure amounted to 50.9% of GDP. Government consumption took up precisely 25% of GDP (e.g.
education and health care expenditure), and government investment expenditure (infrastructure etc.) another 3.4% of GDP. Personal income transfers (for e.g. elderly or unemployed people) amounted to 16.8% of GDP.

Denmark has an unemployment insurance system called the A-kasse ("arbejdsløshedskasse"). The system requires paying membership of a state-recognized unemployment fund. Most of these funds are managed by trade unions, and part of their expenses are financed through the tax system. Members of an A-kasse are not obliged to be members of a trade union. Not every Danish citizen or employee qualifies for membership of an unemployment fund, and membership benefits are terminated after 2 years of unemployment. A person who is not a member of an A-kasse cannot receive unemployment benefits. Unemployment funds do not pay benefits to sick members, who are transferred to a municipal social support system instead. Denmark has a countrywide, but municipally administered, social support system against poverty, ensuring that qualified citizens have a minimum living income. All Danish citizens above 18 years of age can apply for some financial support if they cannot support themselves or their family. Approval is not automatic, and the extent of this system has generally been diminished since the 1980s. Sick people can receive some financial support for the duration of their illness; their ability to work is re-evaluated by the municipality after 5 months of illness.

The welfare system related to the labour market has experienced several reforms and financial cuts since the late 1990s due to political agendas for increasing the labour supply. Several reforms of the rights of the unemployed have followed, partially inspired by the Danish Economic Council. For example, in 2010 the period for which unemployment benefits can be received was halved from four to two years, and regaining this right was made twice as hard. Disabled people can apply for permanent social pensions. The extent of the support depends on the ability to work, and people below 40 cannot receive a social pension unless they are deemed incapable of any kind of work.

Agriculture was once the most important industry in Denmark, but nowadays it is of minor economic importance. In 2016, 62,000 people, or 2.5% of all employed people, worked in agriculture and horticulture; another 2,000 people worked in fishing. As value added per person is relatively low, the share of national value added is somewhat lower: total gross value added in agriculture, forestry and fishing amounted to 1.6% of total output in Denmark (in 2017). Despite this, Denmark is still home to various types of agricultural production. Within animal husbandry, these include dairy and beef cattle, pigs, poultry and fur animals (primarily mink) – all sectors that produce mainly for export. Within vegetable production, Denmark is a leading producer of grass, clover and horticultural seeds. The agriculture and food sector as a whole represented 25% of total Danish commodity exports in 2015. 63% of the land area of Denmark is used for agricultural production – the highest share in the world, according to a 2017 report from the University of Copenhagen. The Danish agricultural industry is historically characterized by freehold and family ownership, but due to structural development farms have become fewer and larger. In 2017 the number of farms was approximately 35,000, of which approximately 10,000 were owned by full-time farmers.
The tendency toward fewer and larger farms has been accompanied by an increase in animal production, using fewer resources per produced unit. The number of dairy farmers has fallen to about 3,800, with an average herd size of 150 cows and an average milk quota of 1,142 tonnes. Danish dairy farmers are among the largest and most modern producers in Europe. More than half of the cows live in new loose-housing systems. Exports of dairy products account for more than 20 percent of total Danish agricultural exports. The total number of cattle in 2011 was approximately 1.5 million; of these, 565,000 were dairy cows and 99,000 were suckler cows. Around 550,000 beef cattle are slaughtered yearly.

For more than 100 years the production of pigs and pig meat has been a major source of income in Denmark. The Danish pig industry is among the world's leaders in areas such as breeding, quality, food safety, animal welfare and traceability, creating the basis for Denmark being among the world's largest exporters of pig meat. Approximately 90 percent of the production is exported. This accounts for almost half of all agricultural exports and for more than 5 percent of Denmark's total exports. About 4,200 farmers produce 28 million pigs annually, of which 20.9 million are slaughtered in Denmark.

Fur animal production on an industrial scale started in Denmark in the 1930s. Denmark is now the world's largest producer of mink furs, with 1,400 mink farmers rearing 17.2 million mink and producing around 14 million furs of the highest quality every year. Approximately 98 percent of the skins sold at Kopenhagen Fur Auction are exported. Fur ranks as Danish agriculture's third largest export article, at more than DKK 7 billion annually. The number of mink farms peaked in the late 1980s at more than 5,000, but the number has declined steadily since, as individual farms have grown in size. Danish mink farmers claim their business to be sustainable, feeding the mink on food-industry waste and using all parts of the dead animal for meat and bone meal and biofuel. Special attention is given to the welfare of the mink, and regular "Open Farm" arrangements are made for the general public. Mink thrive in Denmark, but the animal is not native to the country and is considered an invasive species; American mink are now widespread in Denmark and continue to cause problems for native wildlife, in particular waterfowl. Denmark also has a small production of fox, chinchilla and rabbit furs.

Two hundred professional producers are responsible for the Danish egg production, which was 66 million kg in 2011. Chickens for slaughter are often produced in units of 40,000 broilers. In 2012, 100 million chickens were slaughtered. In the smaller poultry productions, 13 million ducks, 1.4 million geese and 5.0 million turkeys were slaughtered in 2012.

Organic farming and production has increased considerably and continuously in Denmark since 1987, when the first official regulations of this particular agricultural method came into effect. In 2017, exports of organic products reached DKK 2.95 billion, a 153% increase from 2012, five years earlier, and a 21% increase from 2016. Imports of organic products have always been higher than exports, though, and reached DKK 3.86 billion in 2017. After some years of stagnation, close to 10% of the cultivated land is now categorized as organically farmed, and 13.6% for the dairy industry, as of 2017. Denmark has the highest retail consumption share for organic products in the world.
In 2017, the share was 13.3%, accounting for a total of DKK 12.1 billion.

Denmark has some sources of oil and natural gas in the North Sea, with Esbjerg being the main city for the oil and gas industry. Production has decreased in recent years, though. Whereas in 2006 output (measured as gross value added or GVA) in mining and quarrying industries made up more than 4% of Denmark's total GVA, in 2017 it amounted to 1.2%. The sector is very capital-intensive, so the share of employment is much lower: about 2,000 persons worked in the oil and gas extraction sector in 2016, and another 1,000 persons in the extraction of gravel and stone, in total about 0.1% of total employment in Denmark.

Denmark houses a number of significant engineering and high-technology firms within the sectors of industrial equipment, aerospace, robotics, pharmaceuticals and electronics. Danfoss, headquartered in Sønderborg, designs and manufactures industrial electronics, heating and cooling equipment, as well as drivetrains and power solutions. Denmark is also a large exporter of pumps, with the company Grundfos holding 50% of the market share for circulation pumps. In 2017 total output (gross value added) in manufacturing industries amounted to 14.4% of total output in Denmark. 325,000 people, or a little less than 12% of all employed persons, worked in manufacturing (including utilities, mining and quarrying) in 2016. The main sub-industries are the manufacture of pharmaceuticals, machinery, and food products.

In 2017 total output (gross value added) in service industries amounted to 75.2% of total output in Denmark, and 79.9% of all employed people worked there (in 2016). Apart from public administration, education and health services, the main service sub-industries were trade and transport services, and business services.

Significant investment has been made in building road and rail links between Copenhagen and Malmö, Sweden (the Øresund Bridge), and between Zealand and Funen (the Great Belt Fixed Link). The Copenhagen Malmö Port was also formed as the common port of the two cities. The main railway operator is Danske Statsbaner (Danish State Railways) for passenger services and DB Schenker Rail for freight trains. The railway tracks are maintained by Banedanmark. Copenhagen has a small metro system, the Copenhagen Metro, and the greater Copenhagen area has an extensive electrified suburban railway network, the S-train. Private vehicles are increasingly used as a means of transport. New cars are taxed by means of a registration tax (85% to 150%) and VAT (25%). The motorway network now covers 1,300 km.

Denmark is in a strong position in terms of integrating fluctuating and unpredictable energy sources such as wind power into the grid. It is this knowledge that Denmark now aims to exploit in the transport sector by focusing on intelligent battery systems (V2G) and plug-in vehicles. Denmark has changed its energy consumption from 99% fossil fuels (92% oil, all imported, and 7% coal) and 1% biofuels in 1972 to 73% fossil fuels (37% oil, all domestic, 18% coal and 18% natural gas, all domestic) and 27% renewables (largely biofuels) in 2015. The goal is full independence from fossil fuels by 2050. This drastic change was initially inspired largely by the discovery of Danish oil and gas reserves in the North Sea in 1972 and the 1973 oil crisis.
The course took a giant leap forward in 1984, when the Danish North Sea oil and gas fields, developed by native industry in close cooperation with the state, started major production. In 1997, Denmark became self-sufficient in energy, and overall CO2 emissions from the energy sector had already begun to fall by 1996. The contribution of wind energy to total energy consumption rose from 1% in 1997 to 5% in 2015. Since 2000, Denmark has increased gross domestic product (GDP) while at the same time decreasing energy consumption. Since 1972, overall energy consumption has dropped by 6%, even though GDP has doubled in the same period, implying that the energy intensity of the economy (energy consumed per unit of GDP) has more than halved. Denmark had the 6th best energy security in the world in 2014.

Denmark has had relatively high energy taxation to encourage careful use of energy since the oil crises of the 1970s, and Danish industry has adapted to this and gained a competitive edge. The so-called "green taxes" have been broadly criticised, partly for being higher than in other countries, but also for being more of a tool for gathering government revenue than a method of promoting "greener" behaviour. Denmark has low electricity costs (including costs for cleaner energy) within the EU, but general taxes (DKK 11.7 billion in 2015) make the electricity price for households the highest in Europe. Denmark has no environment tax on electricity.

Denmark is a long-time leader in wind energy and a prominent exporter of Vestas and Siemens wind turbines, and Denmark derives 3.1% of its gross domestic product from renewable (clean) energy technology and energy efficiency, or around €6.5 billion ($9.4 billion). It has integrated fluctuating and less predictable energy sources such as wind power into the grid. Wind produced the equivalent of 43% of Denmark's total electricity consumption in 2017. The share of total energy production is smaller: in 2015, wind accounted for 5% of total Danish energy production. Energinet.dk is the Danish national transmission system operator for electricity and natural gas. The electricity grids of western Denmark and eastern Denmark were not connected until 2010, when the 600 MW Great Belt Power Link went into operation.

Cogeneration plants are the norm in Denmark, usually with district heating, which serves 1.7 million households. Waste-to-energy incinerators produce mostly heating and hot water. A company in Glostrup Municipality operates Denmark's largest incinerator, a cogeneration plant which supplies electricity to 80,000 households and heating equivalent to the consumption of 63,000 households (2016). Amager Bakke is an example of a new incinerator being built.

In addition to Denmark proper, the Kingdom of Denmark comprises two autonomous constituent countries in the North Atlantic Ocean: Greenland and the Faroe Islands. Both use the Danish krone as their currency but form separate economies, having separate national accounts etc. Both countries receive an annual fiscal subsidy from Denmark, which amounts to about 25% of Greenland's GDP and 11% of Faroese GDP. For both countries, the fishing industry is a major economic activity. Neither Greenland nor the Faroe Islands is a member of the European Union: Greenland left the European Economic Community in 1986, and the Faroe Islands declined membership in 1973, when Denmark joined.

Denmark has fostered and is home to many multi-national companies.
Many of the largest are interdisciplinary, with business – and sometimes research activities – in several fields. Many of the largest food producers are also engaged in biotechnology and research.

Denmark has a long tradition of cooperative production and trade on a large scale. The most notable cooperative societies today include the agricultural cooperative Dansk Landbrugs Grovvareselskab (DLG), the dairy producer Arla Foods and the retail cooperative Coop Danmark. Coop Danmark started out as "Fællesforeningen for Danmarks Brugsforeninger" (FDB) in 1896 and, as of 2017, has around 1.4 million members in Denmark. It is part of the larger multi-sector cooperative Coop amba, which had 1.7 million members in the same year. The cooperative structure also extends to the housing and banking sectors. Arbejdernes Landsbank, founded in 1919, is the largest cooperative bank and, as of 2018, the 6th largest bank in the country. The municipality of Copenhagen alone holds a total of 153 housing cooperatives, and "Arbejdernes Andelsboligforening Århus" (AAB Århus) is the largest individual housing cooperative in Denmark, with 23,000 homes in Aarhus.
https://en.wikipedia.org/wiki?curid=8035
Transport in Denmark Transport in Denmark is developed and modern. The motorway network covers 1,111 km, while the railway network totals 2,667 km of operational track. The Great Belt Fixed Link (opened in 1997) connecting the islands of Zealand and Funen and the New Little Belt Bridge (opened in 1970) connecting Funen and Jutland greatly improved the traffic flow across the country on both motorways and rail. The two largest airports, Copenhagen and Billund, provide a variety of domestic and international connections, while ferries provide services to the Faroe Islands, Greenland, Iceland, Germany, Sweden, and Norway, as well as domestic routes servicing most Danish islands.

In 2011, a total of approximately 28 million passengers used Danish airports. Copenhagen Airport is the largest airport in Scandinavia, handling approximately 29 million passengers per year (2016). It is located at Kastrup, 8 km south-east of central Copenhagen, and is connected by train to Copenhagen Central Station and beyond, as well as to Malmö and other towns in Sweden. For the west of the country, the major airport is Billund (3 million passengers in 2016), although both Aalborg (1.4 million passengers in 2011) and Aarhus (591,000 passengers in 2011) have smaller airports with regular connections to Copenhagen.

As an island state with a long coastline, always close to the sea, Denmark has always relied on maritime transport, from the primitive dugouts of the Stone Age to the complex designs of the Viking ships of the Viking Age, often built specifically to facilitate large-scale cargo and passenger transportation. Denmark also engaged in the large-scale cargo freight and slave transport of the European colonization endeavours and operated several smaller colonies of its own across the globe by means of seafaring. Today Denmark's ports handle some 48 million passengers and 109 million tonnes of cargo per year. Passenger traffic is made up partly of ferry crossings within Denmark, partly of international ferry crossings and partly of cruise ship passengers. Some short ferry routes are being electrified, and several more may be eligible, as in Norway. In 2007, 288 cruise ships visited Copenhagen, rising to 376 in 2011 before returning to around 300 in the following years. Around 800,000 cruise passengers and 200,000 crew visit Copenhagen each year.

Waterways have historically and traditionally been crucial to local transportation in Denmark proper. Especially the Gudenå river system in central Jutland has played an important role. The waterways were navigated by wooden barges and later on by steamboats. A few historical steamboats are still in operation, like the SS Hjejlen from 1861 at Silkeborg. There is a 160 km natural canal through the shallow Limfjorden in northern Jutland, linking the North Sea to the Kattegat. Many waterways were formerly redirected and led through manmade canals in the 1900s, but mainly for agricultural purposes and not to facilitate transportation on any major scale. Several cities have manmade canals used for transportation and traffic purposes; of special mention is the Odense Canal, ferrying large numbers of both tourists and local citizens. Denmark has a large merchant fleet relative to its size.
In 2018, the fleet surpassed 20 million gt as the government sought to repatriate Danish-owned tonnage registered abroad, with measures including removal of the registration fee. Denmark has created its own international register, called the Danish International Ship Register (DIS), open to commercial vessels only. DIS ships do not have to meet Danish manning regulations.

The largest railway operator in Denmark is Danske Statsbaner (DSB) – Danish State Railways. Arriva operates some routes in Jutland, and several other smaller operators provide local services. The total length of operational track is 2,667 km, of which 640 km is electrified at 25 kV AC and 946 km is double track (2008); 508 km is privately owned and operated. Track is standard gauge. The railway system is connected to Sweden by bridge in Copenhagen and by ferry in Helsingør and Frederikshavn, to Germany by land in Padborg and by ferry in Rødby, and to Norway by ferry in Hirtshals.

The road network in 2008 totalled 73,197 km of paved road, including 1,111 km of motorway. Motorways are toll-free except for the Great Belt Bridge joining Zealand and Funen and the Øresund Bridge linking Copenhagen to Malmö in Sweden.

Bicycling in Denmark is a common and popular utilitarian and recreational activity. Bicycling infrastructure is a dominant feature of both cities and the countryside, with bicycle paths and bicycle ways in many places and an extensive network of bicycle routes extending nationwide. As a unique feature, Denmark has a VIN system for bicycles, which is mandatory by law. Bicycling and bicycle culture in Denmark are often compared to those of the Netherlands, another bicycle nation.
https://en.wikipedia.org/wiki?curid=8037
Danish Defence Danish Defence is the unified armed forces of the Kingdom of Denmark, charged with the defence of Denmark and its constituent, self-governing nations Greenland and the Faroe Islands. The Defence also promotes Denmark's wider interests, supports international peacekeeping efforts and provides humanitarian aid. Since the creation of a standing military in 1510, the armed forces have seen action in many wars, most involving Sweden, but also involving the world's great powers, including the Thirty Years' War, the Great Northern War, and the Napoleonic Wars.

Today, Danish Defence consists of: the Royal Danish Army, Denmark's principal land warfare branch; the Royal Danish Navy, a blue-water navy with a fleet of 20 commissioned ships; and the Royal Danish Air Force, an air force with an operational fleet consisting of both fixed-wing and rotary aircraft. The Defence also includes the Home Guard. Under the Danish Defence Law the Minister of Defence serves as the commander of Danish Defence (through the Chief of Defence and the Defence Command) and the Danish Home Guard (through the Home Guard Command). De facto, the Danish Cabinet is the commanding authority of the Defence, though it cannot mobilize the armed forces for purposes that are not strictly defence-oriented without the consent of parliament.

The modern Danish military can be traced back to 1510, with the creation of the Royal Danish Navy. During this time, the Danish Kingdom held considerable territories, including Schleswig-Holstein, Norway, and colonies in Africa and the Americas. Following the defeat in the Second Schleswig War, the military became a political hot-button issue, with many wanting to disarm the military. Denmark managed to maintain its neutrality during the First World War, with a relatively strong military force. During the interwar period, however, a more pacifist government came to power, decreasing the size of the military. As a result, Denmark had only a limited military when it was invaded in 1940. After World War II, the different branches were reorganized and gathered under "Danish Defence". This was to ensure a unified command when conducting joint operations, a lesson learned from the war.

With the defeat in 1864, Denmark had adopted a policy of neutrality. This was abandoned after World War II, however, when Denmark decided to support the UN peacekeeping forces and become a member of NATO. During the Cold War, Denmark began to rebuild its military and to prepare for possible attacks by the Soviet Union and its Warsaw Pact allies. During this time Denmark participated in a number of UN peacekeeping missions, including UNEF and UNFICYP.

Following the end of the Cold War, Denmark adopted a more active foreign policy, deciding to participate in international operations. This began with participation in the Bosnian War, where the Royal Danish Army served as part of the United Nations Protection Force and fought in two skirmishes. This was the first time the Danish Army took part in a combat operation since World War II. On April 29, 1994, while on an operation to relieve an observation post as part of the United Nations Protection Force, the Jutland Dragoon Regiment came under artillery fire from the town of Kalesija. The United Nations Protection Force quickly returned fire and eliminated the artillery positions.
On October 24, 1994, Royal Danish Army forces on an operation to reinforce an observation post in the town of Gradačac were fired upon by a Bosnian Serb T-55 tank. One of the three Danish Leopard 1 tanks suffered slight damage, but all returned fire and put the T-55 out of action. After the September 11 attacks, Denmark joined US forces in the War on Terror, participating in both the War in Afghanistan and the Iraq War. In Afghanistan, 37 soldiers have been killed in hostile engagements or as a result of friendly fire, and 6 have been killed in non-combat related incidents, bringing the number of Danish fatalities to 43 – the highest per-capita loss among the coalition forces. Denmark has since participated in Operation Ocean Shield, the 2011 military intervention in Libya and the American-led intervention in the Syrian Civil War.

The purpose and tasks of the armed forces of Denmark are defined in Law no. 122 of February 27, 2001, in force since March 1, 2001. It defines three purposes and six tasks. The primary purpose is to prevent conflicts and war, preserve the sovereignty of Denmark, secure the continuing existence and integrity of the independent Kingdom of Denmark and further a peaceful development in the world with respect for human rights. The primary tasks are: NATO participation in accordance with the strategy of the alliance; detecting and repelling any violation of the sovereignty of Danish territory (including Greenland and the Faroe Islands); defence cooperation with non-NATO members, especially Central and East European countries; international missions in the areas of conflict prevention, crisis control, humanitarian assistance, peacemaking and peacekeeping; participation in "Total Defence" in cooperation with civilian resources; and finally maintenance of a sizable force to execute these tasks at all times.

Total Defence is "the use of all resources in order to maintain an organized and functional society, and to protect the population and values of society". This is achieved by combining the military, the Home Guard, the Danish Emergency Management Agency and elements of the police. The concept of total defence was created following World War II, when it became clear that the defence of the country could not rely on the military alone; other measures were also needed to ensure the continuation of society. As a part of the Total Defence, all former conscripts can be recalled to duty in order to serve in cases of emergency.

Since 1988, Danish defence budgets and security policy have been set by multi-year white paper agreements supported by a wide parliamentary majority including government and opposition parties. However, public opposition to increases in defence spending – during periods of economic constraint that require reduced spending on social welfare – has created differences among the political parties regarding a broadly acceptable level of new defence expenditure. The latest defence agreement ("Defence Agreement 2018–23") was signed on 28 January 2018 and calls for increases in spending, cyber security and capabilities to act in international operations and international stabilization efforts. The reaction speed is increased, with an entire brigade on standby readiness; the military retains the capability to continually deploy 2,000 soldiers in international service, or 5,000 over a short time span. The standard mandatory conscription is expanded by 500 conscripts, some of them having a longer service time, with more focus on national challenges.
In 2006 the Danish military budget was the fifth-largest single portion of the Danish Government's total budget, significantly less than that of the Ministry of Social Affairs (≈110 billion DKK), the Ministry of Employment (≈67 billion DKK), the Ministry of the Interior and Health (≈66 billion DKK) and the Ministry of Education (≈30 billion DKK), and only slightly larger than that of the Ministry of Science, Technology and Innovation (≈14 billion DKK). The Danish Defence Force, counting all branches and all departments, itself has an income equal to about 1–5% of its expenditures, depending on the year; this income is not deducted from the expenditure figures. Approximately 95% of the budget goes directly to running the Danish military, including the Home Guard. Depending on the year, 50–53% goes to personnel pay, roughly 14–21% to acquiring new materiel, 2–8% to larger ships, building projects or infrastructure, and about 24–27% to other items, including purchasing of goods, renting, maintenance, services and taxes. The remaining 5% covers special expenditures to NATO, branch-shared expenditures, special services and civil structures, including running the Danish Maritime Safety Administration, the Danish national rescue preparedness and the Administration of Conscientious Objectors (Militærnægteradministrationen). Because Denmark has a small and highly specialized military industry, the vast majority of Danish Defence's equipment is imported from NATO and the Nordic countries.

The Royal Danish Army consists of 2 brigades, organised into 3 regiments, and a number of support centres, all commanded through the Army Staff. The army is a mixture of mechanized infantry and armoured cavalry, with limited capabilities in armoured warfare. The army also provides protection for the Danish royal family, in the form of the Royal Guard Company and the Guard Hussar Regiment Mounted Squadron. The Royal Danish Navy consists of frigates, patrol vessels, mine-countermeasure vessels, and other miscellaneous vessels, many of which are fitted with the modular mission payload system StanFlex. The navy's chief responsibility is maritime defence and maintaining the sovereignty of Danish, Greenlandic and Faroese territorial waters. A submarine service existed within the Royal Danish Navy for 95 years. The Royal Danish Air Force consists of both fixed-wing and rotary aircraft. The Home Guard is a voluntary service responsible for the defence of the country, but since 2008 it has also supported the army in Afghanistan and Kosovo.

Women in the military can be traced back to 1946, with the creation of "Lottekorpset". This corps allowed women to serve, but without entering the normal armed forces, and they were not allowed to carry weapons. In 1962, women were allowed to join the military. Currently 1,122, or 7.3% of all personnel in the armed forces, are women. Women do not have to serve conscription in Denmark; since 1998, however, it has been possible for them to serve under conscription-like circumstances, and 17% of those serving conscription or conscription-like service are women. Between 1991 and 31 December 2017, 1,965 women were deployed to different international missions. Of those, 3 women have lost their lives. In 1998, Police Constable Gitte Larsen was killed in Hebron on the West Bank.
In 2003, "Overkonstabel" Susanne Lauritzen was killed in a traffic accident in Kosovo. In 2010, the first woman was killed in a combat situation, when "Konstabel" Sophia Bruun was killed by an IED in Afghanistan. In 2005, Line Bonde, became the first fighter pilot in Denmark. In 2016, Lone Træholt became the first female general. She was the only female general in the Danish armed forces until the army promoted Jette Albinus to the rank of brigadier general on 11 September 2017. In May 2018, the Royal Life Guards was forced to lower the height requirements for women, as the Danish Institute of Human Rights decided it was discrimination. Technically all Danish 18-year-old males are conscripts (37,897 in 2010, of whom 53% were considered suitable for duty). Due to the large number of volunteers, 96-99% of the number required in the past three years, the number of men actually called up is relatively low (4200 in 2012). There were additionally 567 female volunteers in 2010, who pass training on "conscript-like" conditions. Conscripts to Danish Defence (army, navy and air force) generally serve four months, except: There has been a right of conscientious objection since 1917.
https://en.wikipedia.org/wiki?curid=8038
Foreign relations of Denmark The foreign policy of Denmark is based on its identity as a sovereign state in Europe, the Arctic and the North Atlantic. As such, its primary foreign policy focus is on its relations with other nations as a sovereign state comprising the three constituent countries: Denmark, Greenland and the Faroe Islands. Denmark has long had good relations with other nations. It has been involved in coordinating Western assistance to the Baltic states (Estonia, Latvia, and Lithuania). The country is a strong supporter of international peacekeeping: Danish forces were heavily engaged in the former Yugoslavia in the UN Protection Force (UNPROFOR), with IFOR, and now SFOR. Denmark also strongly supported American operations in Afghanistan and has contributed both monetarily and materially to the ISAF. These initiatives are part of the "active foreign policy" of Denmark: instead of the traditional adaptive foreign policy of the unity of the Realm, the Kingdom of Denmark today pursues an active foreign policy in which human rights, democracy and other crucial values are to be defended actively. In recent years, Greenland and the Faroe Islands have been guaranteed a say in foreign policy issues such as fishing, whaling and geopolitical concerns.

Following World War II, Denmark ended its two-hundred-year-long policy of neutrality. Denmark has been a member of NATO since its founding in 1949, and membership in NATO remains highly popular. There were several serious confrontations between the U.S. and Denmark on security policy in the so-called "footnote era" (1982–88), when an alternative parliamentary majority forced the government to adopt specific national positions on nuclear and arms control issues. The alternative majority existed in these issues because the Social Liberal Party (Radikale Venstre) supported the governing majority in economic policy issues but was against certain NATO policies and voted with the left in these issues. The Conservative-led centre-right government accepted this variety of "minority parliamentarism", that is, without making it a question of the government's parliamentary survival. With the end of the Cold War, however, Denmark has been supportive of U.S. policy objectives in the Alliance.

Danes have a reputation as "reluctant" Europeans. When they rejected ratification of the Maastricht Treaty on 2 June 1992, they put the EC's plans for the European Union on hold. In December 1992, the rest of the EC agreed to exempt Denmark from certain aspects of the European Union, including a common security and defence policy, a common currency, EU citizenship, and certain aspects of legal cooperation. The Amsterdam Treaty was approved in the referendum of 28 May 1998. In the autumn of 2000, Danish citizens rejected membership of the euro currency group in a referendum. The Lisbon Treaty was ratified by the Danish parliament alone; it was not considered a surrender of national sovereignty, which would have implied the holding of a referendum according to article 20 of the constitution.

In 1807 Denmark was neutral, but after Britain bombarded Copenhagen and seized the Danish navy, Denmark became an ally of Napoleon. After Napoleon was decisively defeated in Russia in 1812, the Allies repeatedly offered King Frederick VI a proposal to change sides and break with Napoleon. The king refused. Therefore, at the Peace of Kiel in 1814, Denmark was forced to cede Norway to Sweden. Denmark thus became one of the chief losers of the Napoleonic Wars.
Danish historiography has portrayed King Frederick VI as stubborn and incompetent, motivated by a blind loyalty to Napoleon. However, a more recent Danish historiographical approach emphasizes that the Danish state was multi-territorial and included the semi-separate Kingdom of Norway. It was dependent for food on grain imports controlled by Napoleon, and worried about Swedish ambitions. From the king's perspective, these factors called for an alliance with Napoleon. Furthermore, the king expected the war to end in a negotiated international conference, with Napoleon playing a powerful role that included saving Norway for Denmark. The Danish government responded to the First World War by declaring neutrality in 1914–1918. It maintained that status until 1945 and accordingly adjusted its trade, humanitarian efforts, diplomacy and attitudes. The war thus reshaped economic relations and shifted domestic power balances.
https://en.wikipedia.org/wiki?curid=8039
History of Djibouti Djibouti is a country in the Horn of Africa. It is bordered by Somalia to the southeast, Eritrea and the Red Sea to the north and northeast, Ethiopia to the west and south, and the Gulf of Aden to the east. In antiquity, the territory was part of the Land of Punt. The Djibouti area, along with other localities in the Horn region, was later the seat of the medieval Adal and Ifat Sultanates. In the late 19th century, the colony of French Somaliland was established following treaties signed by the ruling Somali and Afar sultans with the French. It was subsequently renamed the French Territory of the Afars and the Issas in 1967. A decade later, the Djiboutian people voted for independence, officially marking the establishment of the Republic of Djibouti.

The Djibouti area has been inhabited since at least the Neolithic, 12,000 years ago. Pottery predating the mid-2nd millennium BC has been found at Asa Koma, an inland lake area on the Gobaad Plain. The site's ware is characterized by punctate and incised geometric designs, which bear a similarity to the phase 1 ceramics of the Sabir culture from Ma'layba in Southern Arabia. Long-horned humpless cattle bones have also been discovered at Asa Koma, suggesting that domesticated cattle were present by around 3,500 years ago. Rock art of what appear to be antelopes and a giraffe is likewise found at Dorra and Balho. A team of archaeologists has discovered the foundations of stone houses, including the walls of a rectangular edifice with a recess oriented towards Mecca, and has also unearthed shards of ceramics, chipped stone tools and a glass bead. The oldest engravings discovered to date are from the fourth or third millennium BC, in the pre-Islamic period; the most famous site is Handoga, where the ruins of a village of subcircular dry-stone dwellings have yielded various objects. An old settlement, Handoga is the site of numerous ancient ruins and buildings of obscure origins, with finds including ceramic shards, vases used as braziers, containers for holding water, and several choppers, microliths, blades, drills and tranchets of basalt, rhyolite or obsidian. A team of archaeologists has also discovered the remains of an elephant dating to about 1.6 million years BC near the area, along with an orange carnelian bead and three glass-paste beads, but no metal objects.

Together with northern Somalia, Eritrea and the Red Sea coast of Sudan, Djibouti is considered the most likely location of the land known to the ancient Egyptians as "Punt" (or "Ta Netjeru", meaning "God's Land"). The old territory's first mention dates to the 25th century BC. The Puntites were a nation of people that had close relations with Ancient Egypt during the times of Pharaoh Sahure of the fifth dynasty and Queen Hatshepsut of the eighteenth dynasty. They "traded not only in their own produce of incense, ebony and short-horned cattle, but also in goods from other neighbouring regions, including gold, ivory and animal skins." According to the temple reliefs at Deir el-Bahari, the Land of Punt at the time of Hatshepsut was ruled by King Parahu and Queen Ati.

The Macrobians (Μακροβίοι) were a legendary people and kingdom positioned in the Horn of Africa, mentioned by Herodotus. Later authors (such as Pliny, on the authority of Ctesias' "Indika") place them in India instead. They are one of the legendary peoples postulated at the extremity of the known world (from the perspective of the Greeks), in this case in the extreme south, contrasting with the Hyperboreans in the extreme north.
Their name is due to their legendary longevity: an average person supposedly lived to the age of 120. They were said to be the "tallest and handsomest of all men". According to Herodotus' account, the Persian Emperor Cambyses II, upon his conquest of Egypt (525 BC), sent ambassadors to Macrobia, bringing luxury gifts for the Macrobian king to entice his submission. The Macrobian ruler, who was elected based at least in part on stature, replied instead with a challenge for his Persian counterpart in the form of an unstrung bow: if the Persians could manage to string it, they would have the right to invade his country; but until then, they should thank the gods that the Macrobians never decided to invade their empire. Islam was introduced to the area early on from the Arabian peninsula, shortly after the hijra. Zeila's two-mihrab Masjid al-Qiblatayn dates to the 7th century and is the oldest mosque in the city. In the late 9th century, Al-Yaqubi wrote that Muslims were living along the northern Horn seaboard. He also mentioned that the Adal kingdom had its capital in Zeila, a port city in the northwestern Awdal region abutting Djibouti. This suggests that the Adal Sultanate, with Zeila as its headquarters, dates back to at least the 9th or 10th century. According to I.M. Lewis, the polity was governed by local dynasties consisting of Somalized Arabs or Arabized Somalis, who also ruled over the similarly-established Sultanate of Mogadishu in the Benadir region to the south. Adal's history from this founding period forth would be characterized by a succession of battles with neighbouring Abyssinia. At its height, the Adal kingdom controlled large parts of modern-day Djibouti, Somalia, Eritrea and Ethiopia. Between Djibouti City and Loyada are a number of anthropomorphic and phallic stelae. The structures are associated with graves of rectangular shape flanked by vertical slabs, as also found in Tiya, central Ethiopia. The Djibouti-Loyada stelae are of uncertain age, and some of them are adorned with a T-shaped symbol. Additionally, archaeological excavations at Tiya have yielded tombs. As of 1997, 118 stelae were reported in the area. Along with the stelae in the Hadiya Zone, the structures are identified by local residents as "Yegragn Dingay" or "Gran's stone", in reference to Imam Ahmad ibn Ibrahim al-Ghazi (Ahmad "Gurey" or "Gran"), ruler of the Adal Sultanate. The Ifat Sultanate was a medieval kingdom in the Horn of Africa. Founded in 1285 by the Walashma dynasty, it was centered in Zeila. Ifat established bases in Djibouti and northern Somalia, and from there expanded southward to the Ahmar Mountains. Its Sultan Umar Walashma (or his son Ali, according to another source) is recorded as having conquered the Sultanate of Shewa in 1285. Taddesse Tamrat explains Sultan Umar's military expedition as an effort to consolidate the Muslim territories in the Horn, in much the same way as Emperor Yekuno Amlak was attempting to unite the Christian territories in the highlands during the same period. These two states inevitably came into conflict over Shewa and territories further south. A lengthy war ensued, but the Muslim sultanates of the time were not strongly unified. Ifat was finally defeated by Emperor Amda Seyon I of Ethiopia in 1332, and withdrew from Shewa. Centuries later, as Egyptian forces withdrew from the region in 1884, Governor Abou Baker ordered the Egyptian garrison at Sagallo to retire to Zeila. The cruiser Seignelay reached Sagallo shortly after the Egyptians had departed. 
French troops occupied the fort despite protests from the British Agent in Aden, Major Frederick Mercer Hunter, who dispatched troops to safeguard British and Egyptian interests in Zeila and prevent further extension of French influence in that direction. On 14 April 1884, the Commander of the patrol sloop L'Inferent reported on the Egyptian occupation in the Gulf of Tadjoura. The Commander of the patrol sloop Le Vaudreuil reported that the Egyptians were occupying the interior between Obock and Tadjoura. Emperor Johannes IV of Ethiopia signed an accord with the United Kingdom to cease fighting the Egyptians and to allow the evacuation of Egyptian forces from Ethiopia and the Somali Coast ports. The Egyptian garrison was withdrawn from Tadjoura. Léonce Lagarde deployed a patrol sloop to Tadjoura the following night. The boundaries of the present-day Djibouti nation state were established during the Scramble for Africa. It was Rochet d'Héricourt's exploration into Shoa (1839–42) that marked the beginning of French interest in the Djiboutian coast of the Red Sea. Rochet d'Héricourt acquired the town of Tadjoura from the King of Shewa in 1842; the problem was that this king was not the owner of Tadjoura, which belonged to a local sultan who did not recognize the purchase contract. Further exploration by Henri Lambert, French Consular Agent at Aden, and Captain Fleuriot de Langle led to a treaty of friendship and assistance between France and the sultans of Raheita, Tadjoura, and Gobaad, from whom the French purchased the anchorage of Obock in 1862. Growing French interest in the area took place against a backdrop of British activity in Egypt and the opening of the Suez Canal in 1869. Between 1883 and 1887, France signed various treaties with the then ruling Somali and Afar Sultans, which allowed it to expand the protectorate to include the Gulf of Tadjoura. Léonce Lagarde was subsequently installed as the protectorate's governor. In 1894, he established a permanent French administration in the city of Djibouti and named the region "Côte française des Somalis" (French Somaliland), a name which continued until 1967. The territory's border with Ethiopia, marked out in 1897 by France and Emperor Menelik II of Ethiopia, was later reaffirmed by agreements with Emperor Haile Selassie I of Ethiopia in 1945 and 1954. In 1889, a Russian by the name of Nikolay Ivanovitch Achinov (b. 1856) arrived at Sagallo on the Gulf of Tadjoura with settlers, infantry and an Orthodox priest. The French considered the presence of the Russians a violation of their territorial rights and dispatched two gunboats. The Russians were bombarded and, after some loss of life, surrendered. The colonists were deported to Odessa, and the dream of Russian expansion in East Africa came to an end in less than one year. The administrative capital was moved from Obock in 1896. The city of Djibouti, which had a harbor with good access that attracted trade caravans crossing East Africa, became the new administrative capital. Construction of the Franco-Ethiopian railway, linking Djibouti to the heart of Ethiopia, began in 1897; the line reached Addis Ababa in June 1917, increasing the volume of trade passing through the port. After the Italian invasion and occupation of Ethiopia in the mid-1930s, constant border skirmishes occurred between French forces in French Somaliland and Italian forces in Italian East Africa. In June 1940, during the early stages of World War II, France fell, and the colony was then ruled by the pro-Axis Vichy government. 
British and Commonwealth forces fought the neighboring Italians during the East African Campaign. In 1941, the Italians were defeated and the Vichy forces in French Somaliland were isolated. The Vichy French administration continued to hold out in the colony for over a year after the Italian collapse. In response, the British blockaded the port of Djibouti City, but the blockade could not prevent the local French from providing information on passing ship convoys. In 1942, about 4,000 British troops occupied the city. A local battalion from French Somaliland participated in the Liberation of Paris in 1944. In 1958, on the eve of neighboring Somalia's independence in 1960, a referendum was held in Djibouti to decide whether to join the Somali Republic or to remain with France. The referendum turned out in favour of a continued association with France, partly due to a combined yes vote by the sizable Afar ethnic group and resident Europeans. There were also reports of widespread vote rigging, with the French expelling thousands of Somalis before the referendum was held. The majority of those who voted no were Somalis who were strongly in favour of joining a united Somalia, as had been proposed by Mahmoud Harbi, Vice President of the Government Council. Harbi died in a plane crash two years later under mysterious circumstances. In 1960, with the fall of the ruling Dini administration, Ali Aref Bourhan, a Harbist politician, assumed the seat of Vice President of the Government Council of French Somaliland, representing the UNI party. He would hold that position until 1966. That same year, France rejected the United Nations' recommendation that it should grant French Somaliland independence. In August, an official visit to the territory by the French President, General Charles de Gaulle, was also met with demonstrations and rioting. In response to the protests, de Gaulle ordered another referendum. On 19 March 1967, a second plebiscite was held to determine the fate of the territory. Initial results supported a continued but looser relationship with France. Voting was also divided along ethnic lines, with the resident Somalis generally voting for independence, with the goal of eventual reunion with Somalia, and the Afars largely opting to remain associated with France. However, the referendum was again marred by reports of vote rigging on the part of the French authorities, with some 10,000 Somalis deported under the pretext that they did not have valid identity cards. According to official figures, although the territory was at the time inhabited by 58,240 Somali and 48,270 Afar, only 14,689 Somali were allowed to register to vote, versus 22,004 Afar. Somali representatives also charged that the French had simultaneously imported thousands of Afar nomads from neighboring Ethiopia to further tip the odds in their favor; the French authorities denied this, suggesting that Afars already greatly outnumbered Somalis on the voting lists. Announcement of the plebiscite results sparked civil unrest, including several deaths. France also increased its military force along the frontier. In 1967, shortly after the second referendum was held, the former "Côte française des Somalis" (French Somaliland) was renamed "Territoire français des Afars et des Issas". This was both in acknowledgement of the large Afar constituency and to downplay the significance of the Somali composition (the Issa being a Somali sub-clan). 
The French Territory of the Afars and the Issas also differed from French Somaliland in terms of government structure, as the position of governor changed to that of high commissioner. A nine-member council of government was also implemented. With a steadily enlarging Somali population, the likelihood that a third referendum would again favour France had grown even more remote. The prohibitive cost of maintaining the colony, and the fact that after 1975 France found itself the last remaining colonial power in Africa, also led observers to doubt that the French would attempt to hold on to the territory. In 1976, the French garrison, centered on the 13th Demi-Brigade of the Foreign Legion (13 DBLE), had to be reinforced to contain Somali irredentists revolting against the French-engineered Afar domination of the emerging government. The French forces were also involved in the response to the Loyada hostage-taking of 3 February 1976. On June 27, 1977, a third vote took place. A landslide 98.8% of the electorate supported disengagement from France, officially marking Djibouti's independence. Hassan Gouled Aptidon, a Somali politician who had campaigned for a yes vote in the referendum of 1958, eventually became the nation's first president (1977–1999). After independence, the new government signed an agreement calling for a strong French garrison, though the 13 DBLE was envisaged to be withdrawn. While the unit was reduced in size, a full withdrawal never actually took place. In 1981, Aptidon turned the country into a one-party state by declaring that his party, the Rassemblement Populaire pour le Progrès (RPP) (People's Rally for Progress), was the sole legal one. Clayton writes that the French garrison played the major role in suppressing further minor unrest about this time, during which Djibouti became a one-party state on a much broader ethnic and political basis. A civil war broke out in 1991 between the government and a predominantly Afar rebel group, the Front for the Restoration of Unity and Democracy (FRUD). The FRUD signed a peace accord with the government in December 1994, ending the conflict. Two FRUD members were made cabinet members, and in the presidential elections of 1999 the FRUD campaigned in support of the RPP. Aptidon resigned as president in 1999, at the age of 83, after being elected to a fifth term in 1997. His successor was his nephew, Ismail Omar Guelleh. On May 12, 2001, President Ismail Omar Guelleh presided over the signing of what is termed the final peace accord, officially ending the decade-long civil war between the government and the armed faction of the FRUD, led by Ahmed Dini Ahmed, an Afar nationalist and former Gouled political ally. The peace accord successfully completed the peace process begun on February 7, 2000 in Paris, with Ahmed Dini Ahmed representing the FRUD. In the presidential election held April 8, 2005, Ismail Omar Guelleh was re-elected to a second 6-year term at the head of a multi-party coalition that included the FRUD and other major parties. A loose coalition of opposition parties again boycotted the election. Currently, political power is shared by a Somali president and an Afar prime minister, with an Afar career diplomat as Foreign Minister and other cabinet posts roughly divided. However, Issas predominate in the government, civil service, and the ruling party. That, together with a shortage of non-government employment, has bred resentment and continued political competition between the Issa Somalis and the Afars. 
In March 2006, Djibouti held its first regional elections and began implementing a decentralization plan. The broad pro-government coalition, including FRUD candidates, again ran unopposed when the government refused to meet opposition preconditions for participation. In the 2008 elections, the opposition coalition boycotted the vote, leaving all 65 seats to the ruling RPP. Voter turnout figures were disputed. Guelleh was re-elected in the 2011 presidential election. Due to its strategic location at the mouth of the Bab el Mandeb gateway to the Red Sea and the Suez Canal, Djibouti also hosts various foreign military bases. Camp Lemonnier is a United States naval expeditionary base, situated at Djibouti-Ambouli International Airport and home to the Combined Joint Task Force – Horn of Africa (CJTF-HOA) of the U.S. Africa Command (USAFRICOM). In 2011, Japan also opened a local naval base staffed by 180 personnel to assist in maritime defense. This initiative is expected to generate $30 million in revenue for the Djiboutian government.
https://en.wikipedia.org/wiki?curid=8041
Geography of Djibouti Djibouti is a country in the Horn of Africa. It is bordered by Eritrea in the north, Ethiopia in the west and south, and Somalia in the southeast. To the east is its coastline on the Red Sea and the Gulf of Aden. Rainfall is sparse, and most of the territory has a semi-arid to arid environment. Lake Assal is a saline lake which lies 155 m (509 ft) below sea level, making it the lowest point on land in Africa and the third-lowest point on Earth after the Dead Sea and the Sea of Galilee. Djibouti has the fifth-smallest population in Africa. Djibouti's major settlements include the capital Djibouti City, the port towns of Tadjoura and Obock, and the southern cities of Ali Sabieh and Dikhil. It is the forty-sixth largest country by area in Africa and the 147th largest country in the world, covering a total area of about 23,200 km² (8,960 sq mi), nearly all of it land. Djibouti shares land borders with Eritrea, Ethiopia, and Somalia. It has a strategic location on the Horn of Africa and the Bab el Mandeb, along a route through the Red Sea and Suez Canal. Djibouti's coastline serves as a commercial gateway between the Arabian Peninsula and the Horn region's interior. The country is also the terminus of rail traffic into Ethiopia. Djibouti can be divided into three physiographic regions. A great arc of mountains, consisting of the Mousa Ali, Goda Mountains, and Arrei Mountains, surrounds the country. Djibouti has eight mountain ranges with peaks of over 1,000 m (3,281 ft). The Grand Bara Desert covers parts of southern Djibouti in the Arta Region, Ali Sabieh Region and Dikhil Region. The majority of the Grand Bara Desert lies at a relatively low elevation, below about 1,700 feet (520 m). It is home to the popular Grand Bara footrace. Most of Djibouti has been described as part of the Ethiopian xeric grasslands and shrublands ecoregion. The exception is a strip along the Red Sea coast, which is part of the Eritrean coastal desert; it is noted as an important migration route for birds of prey. There is not much seasonal variation in Djibouti's climate. Hot conditions prevail year-round along with winter rainfall. Mean daily maximum temperatures range from 32 to 41 °C (90 to 106 °F), except at high elevations. In Djibouti City, for instance, afternoon highs typically range from 28 °C (82 °F) to 34 °C (93 °F) in April. Nationally, mean daily minima generally vary between sites from about 15 to 30 °C (59 to 86 °F). The greatest range in climate occurs in eastern Djibouti, where temperatures sometimes surpass 41 °C (106 °F) in July on the littoral plains and fall below freezing point during December in the highlands. In this region, relative humidity ranges from about 40% in the mid-afternoon to 85% at night, changing somewhat according to the season. Djibouti has a population of about 988,000. Djibouti has either a hot semi-arid climate ("BSh") or a hot desert climate ("BWh"), although temperatures are much moderated at the high elevations. On the coastal seaboard, annual rainfall is less than about 131 mm (5 in); in the highlands, it is about 200 to 400 mm (8 to 16 in). Although the coastal regions are hot and humid throughout the year, the hinterland is typically hot and dry. The climate conditions are highly variable within the country and vary locally by altitude. Summers are very humid along the coast but dry in the highlands. Heat waves are frequent. Annual precipitation amounts vary greatly from one year to another. 
In general, rain falls more frequently and extensively in the mountains. Sudden and violent storms are also known to occur, during which wadis turn for a few hours into raging torrents that tear away everything in their path. Rainwater serves as an additional water supply for livestock and plants alongside seasonal watercourses. The highlands have a temperate climate throughout the year, while the climate of most lowland zones is arid or semi-arid. The climate of the interior shows notable differences from that of the coastline. Especially in the mornings, the temperature is pleasant; this is so in Arta, Randa and Day, where temperatures of 10 °C have been recorded. Land use: arable land 0.1%; permanent pasture 73.3%; forest 0.2%; other 26.4% (2011). Irrigated land: (2012). Water is becoming a scarce resource in Djibouti due to climate change, which leads to different rainfall patterns, as well as to inefficient methods of distribution within the country. Most of Djibouti's rainfall falls within four months of the year, but over the last 25 years, Djibouti's Ministry of Environment estimates, rainfall has decreased overall by between 5 and 20 percent. It is predicted that in future years there will be higher temperatures, lower rainfall, and longer droughts, leading to even less access to water. Moreover, seawater intrusion and fossil saltwater contamination of the limited freshwater aquifers, due to groundwater overexploitation, affect those who live close to the coastline. In recent years, the population has grown rapidly with the arrival of many refugees. Unlike much of the Middle East, which is rich in lucrative crude oil, Djibouti has limited natural resources. These include potential geothermal power, gold, clay, granite, limestone, marble, salt, diatomite, gypsum, pumice, and petroleum. Natural hazards include earthquakes, drought, and occasional cyclonic disturbances from the Indian Ocean, which bring heavy rains and flash floods. Inadequate supplies of potable water, limited arable land, and desertification are current issues. Djibouti is a party to international agreements on biodiversity, climate change, desertification, endangered species, the Law of the Sea, ozone layer protection, ship pollution, and wetlands. Djibouti has a coastline which measures about 314 kilometres (195 mi). Much of the coastline is accessible and quite varied in geography and habitats. As of 2015, the population of Djibouti was 846 thousand. For statistical purposes, the country has three areas: Djibouti City (population 529,000), Ali Sabieh (population 55,000), and Dikhil (population 54,000). Djibouti's population is demographically diverse: 60% Somali, 35% Afar, and 3% Arab. In terms of religion, the population is 94% Muslim and 6% Christian.
https://en.wikipedia.org/wiki?curid=8042
Demographics of Djibouti This article is about the demographics of Djibouti, including population density, ethnicity, education level, health, economic status, religious affiliations and other aspects of the population. Djibouti is a multiethnic country. As of 2018, it has a population of around 884,017 inhabitants. Djibouti's population grew rapidly during the latter half of the 20th century, increasing from about 69,589 in 1955 to around 869,099 by 2015. The two largest ethnic groups are the Somali (60%) and the Afar (35%). The Somali clan component is mainly composed of the Issas, a sub-clan of the larger Dir. The remaining 5% of Djibouti's population primarily consists of Arabs, Ethiopians and Europeans (French and Italians). Approximately 76% of local residents are urban dwellers; the remainder are pastoralists. Some 40,000 people from Yemen live in Djibouti, accounting for 4.2% of its total population, and about 4,000 soldiers from the United States are stationed there, representing 0.4% of the total population. Djibouti is a multilingual nation. The majority of local residents speak Somali (524,000 speakers) and Afar (306,000 speakers) as a first language. These languages are the mother tongues of the Somali and Afar ethnic groups, respectively. Both belong to the larger Afroasiatic family. There are three official languages in Djibouti: Somali, Arabic and French. Arabic is of religious importance. In formal settings, it consists of Modern Standard Arabic. Colloquially, about 59,000 local residents speak the Ta'izzi-Adeni Arabic dialect, also known as "Djibouti Arabic". French serves as a statutory national language. It was inherited from the colonial period and is the primary language of instruction. Around 17,000 Djiboutians speak it as a first language. Immigrant languages include Omani Arabic (38,900 speakers), Amharic (1,400 speakers), Greek (1,000 speakers) and Hindi (600 speakers). According to , the total population was in , compared to 62,000 in 1950. The proportion of children below the age of 15 in 2010 was 35.8%; 60.9% of the population was between 15 and 65 years of age, while 3.3% was 65 years or older. Registration of vital events in Djibouti is incomplete; the Population Department of the United Nations has prepared estimates of births and deaths, and further demographic statistics are available from the World Population Review (2019) and the CIA World Factbook. The Factbook notes: "highly pathogenic H5N1 avian influenza has been identified in this country; it poses a negligible risk with extremely rare cases possible among US citizens who have close contact with birds (2013)".
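As a quick sanity check on the growth figures quoted above, the compound annual growth rate implied by the 1955 and 2015 estimates can be computed directly. The sketch below is purely illustrative and is not part of the source statistics.

```python
# Illustrative CAGR check for the population figures quoted above
# (about 69,589 in 1955 and about 869,099 in 2015).

def cagr(initial: float, final: float, years: int) -> float:
    """Compound annual growth rate: (final/initial)**(1/years) - 1."""
    return (final / initial) ** (1 / years) - 1

rate = cagr(69_589, 869_099, 2015 - 1955)
print(f"Implied average annual growth, 1955-2015: {rate:.2%}")  # ~4.3% per year
```

A sustained rate above 4 percent per year is what the text means by "grew rapidly": the population roughly doubled every 16 to 17 years over that period.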
https://en.wikipedia.org/wiki?curid=8043
Politics of Djibouti Politics of Djibouti takes place in the framework of a presidential representative democratic republic, whereby executive power is exercised by the President and the Government. Legislative power is vested in both the Government and the National Assembly. The party system and legislature are dominated by the socialist People's Rally for Progress. In April 2010, a new constitutional amendment was approved. The President serves as both head of state and head of government, and is directly elected for a single six-year term. The government is headed by the President, who appoints the Prime Minister and, on the latter's proposal, the Council of Ministers. There is also a 65-member chamber of deputies, whose representatives are popularly elected for terms of five years. Administratively, the country is divided into five regions and one city, with eleven additional district subdivisions. Djibouti is also part of various international organisations, including the United Nations and the Arab League. In 1958, on the eve of neighboring Somalia's independence in 1960, a referendum was held in Djibouti to decide whether to join the Somali Republic or to remain with France. The referendum turned out in favour of a continued association with France, partly due to a combined "yes" vote by the sizeable Afar ethnic group and resident Europeans. There was also widespread vote rigging, with the French expelling thousands of Somalis before the referendum was held. The majority of those who had voted "no" were Somalis who were strongly in favour of joining a united Somalia, as had been proposed by Mahmoud Harbi, Vice President of the Government Council. Harbi was killed in a plane crash two years later. In 1967, a second plebiscite was held to determine the fate of the territory. Initial results supported a continued but looser relationship with France. Voting was also divided along ethnic lines, with the resident Somalis generally voting for independence, with the goal of eventual union with Somalia, and the Afars largely opting to remain associated with France. However, the referendum was again marred by reports of vote rigging on the part of the French authorities. Shortly after the referendum was held, the former "Côte française des Somalis" (French Somaliland) was renamed "Territoire français des Afars et des Issas". In 1977, a third referendum took place. A landslide 98.8% of the electorate supported disengagement from France, officially marking Djibouti's independence. Hassan Gouled Aptidon, a Somali politician who had campaigned for a "yes" vote in the referendum of 1958, eventually became the nation's first president (1977–1999). He was re-elected, unopposed, to a second 6-year term in April 1987 and to a third 6-year term in the multiparty elections of May 1993. The electorate approved the current constitution in September 1992. Many laws and decrees from before independence remain in effect. In early 1992, the government decided to permit multiparty politics and agreed to the registration of four political parties. By the time of the national assembly elections in December 1992, only three had qualified: the "Rassemblement Populaire Pour le Progres" (People's Rally for Progress) (RPP), which had been the only legal party from 1981 until 1992, the "Parti du Renouveau Démocratique" (The Party for Democratic Renewal) (PRD), and the "Parti National Démocratique" (National Democratic Party) (PND). 
Only the RPP and the PRD contested the national assembly elections; the PND withdrew, claiming that there were too many unanswered questions on the conduct of the elections and too many opportunities for government fraud. The RPP won all 65 seats in the national assembly, with a turnout of less than 50% of the electorate. In 1999, President Aptidon's chief of staff, head of security, and key adviser for over 20 years, Ismail Omar Guelleh, was elected to the presidency as the RPP candidate. He received 74% of the vote, the other 26% going to opposition candidate Moussa Ahmed Idriss of the Unified Djiboutian Opposition (ODU). For the first time since independence, no group boycotted the election. Moussa Ahmed Idriss and the ODU later challenged the results based on election "irregularities" and the assertion that "foreigners" had voted in various districts of the capital; however, international and locally based observers considered the election to be generally fair, and cited only minor technical difficulties. Guelleh took the oath of office as the second President of the Republic of Djibouti on May 8, 1999, with the support of an alliance between the RPP and the government-recognised section of the Afar-led FRUD. Currently, political power is shared by a Somali Issa president and an Afar prime minister, with cabinet posts roughly divided. However, it is the Issas who dominate the government, civil service, and the ruling party, a situation that has bred resentment and political competition between the Somali Issas and the Afars. The government is dominated by the Somali Issa Mamasen, who enjoy the support of the Somali clans, especially the Isaaq (the clan of the current president's wife) and the Gadabuursi Dir (the second most prominent Somali clan in Djibouti politics). In early November 1991, civil war erupted in Djibouti between the government and a predominantly Afar rebel group, the Front for the Restoration of Unity and Democracy (FRUD). The FRUD signed a peace accord with the government in December 1994, ending the conflict. Two FRUD members were subsequently made cabinet members, and in the presidential elections of 1999 the FRUD campaigned in support of the RPP. In February 2000, another branch of the FRUD signed a peace accord with the government. On 12 May 2001, President Ismail Omar Guelleh presided over the signing of what is termed the final peace accord, officially ending the decade-long civil war between the government and the armed faction of the FRUD. The treaty successfully completed the peace process begun on 7 February 2000 in Paris, with Ahmed Dini Ahmed representing the FRUD. On 8 April 2005, President Guelleh was sworn in for his second six-year term after a one-man election; he took 100% of the votes on a 78.9% turnout. In early 2011, the Djiboutian citizenry took part in a series of protests against the long-serving government, which were associated with the larger Arab Spring demonstrations. Guelleh was re-elected to a third term later that year, with 80.63% of the vote on a 75% turnout. Although opposition groups boycotted the ballot over changes to the constitution permitting Guelleh to run again for office, international observers generally described the election as free and fair. On 31 March 2013, Guelleh replaced long-serving Prime Minister Dilleita Mohamed Dilleita with the former president of the Union for a Presidential Majority (UMP), Abdoulkader Kamil Mohamed. The President is directly elected by popular vote for a five-year term. 
The Prime Minister is appointed by the President, and the Council of Ministers is responsible solely to the President. Djibouti is divided into five administrative regions and one city: the Ali Sabieh, Arta, Dikhil, Obock and Tadjourah Regions, plus the city of Djibouti. The country is further sub-divided into eleven districts. Djibouti is a member of the following international organisations: ACCT, ACP, AfDB, AFESD, AL, AMF, ECA, FAO, G-77, IBRD, ICAO, ICC, ICRM, IDA, IDB, IFAD, IFC, IFRCS, IGAD, ILO, IMF, IMO, Intelsat (nonsignatory user), Interpol, IOC, ITU, ITUC, NAM, OAU, OIC, OPCW, UN, UNCTAD, UNESCO, UNIDO, UPU, WFTU, WHO, WMO, WToO, WTrO
https://en.wikipedia.org/wiki?curid=8044
Economy of Djibouti The economy of Djibouti is derived in large part from its strategic location on the Red Sea. Djibouti is mostly barren, with little development in the agricultural and industrial sectors. The country has a harsh climate, a largely unskilled labour force, and limited natural resources. The country's most important economic asset is its strategic location connecting the Red Sea and the Gulf of Aden. As such, Djibouti's economy is dominated by the services sector, which provides services both as a transit port for the region and as an international transshipment and refueling centre. From 1991 to 1994, Djibouti experienced a civil war which had devastating effects on the economy. Since then, the country has benefited from political stability. In recent years, Djibouti has seen significant improvement in macroeconomic stability, with its annual gross domestic product growing at an average of over 3 percent per year since 2003. This comes after a decade of negative or low growth. The improvement is attributed to fiscal adjustment measures aimed at improving public financing, as well as to reforms in port management. Despite its recent modest and stable growth, Djibouti is faced with many economic challenges, particularly job creation and poverty reduction. With an average annual population growth rate of 2.5 percent, overall economic growth translates into very little growth in income per capita. Unemployment is extremely high, at over 43 percent, and is a major contributor to widespread poverty. Efforts are needed to create conditions that will enhance private sector development and accumulate human capital. These conditions can be achieved through improvements in the macroeconomic and fiscal framework, in public administration, and in labour market flexibility. Djibouti was ranked the 177th safest investment destination in the world in the March 2011 Euromoney Country Risk rankings. Djibouti has experienced stable economic growth in recent years as a result of achievements in macroeconomic adjustment efforts. Fiscal adjustment measures included downsizing the civil service, implementing a pension reform that placed the system on a much stronger financial footing, and strengthening public expenditure institutions. From 2003 to 2005, annual real GDP growth averaged 3.1 percent, driven by good performance in the services sector and strong consumption. Inflation has been kept low (only 1 percent in 2004, compared with 2.2 percent in 2003) due to the fixed peg of the Djibouti franc to the US dollar. However, as mentioned above, unemployment has remained high, at over 40 percent, in recent years. Djibouti's gross domestic product expanded from US$341 million in 1985 to US$1.5 billion in 2015, an average of roughly 5 percent per year in nominal dollar terms. The government fiscal balance is in deficit because the government has not been able to raise sufficient tax revenues to cover expenses. In 2004, a substantial increase in expenditure resulted in a deterioration of the fiscal position. As a result, the government deficit increased to US$17 million in 2004 from US$7 million in 2003. But improvement in expenditure management brought the fiscal deficit down to US$11 million in 2005. Djibouti's merchandise trade balance has shown a large deficit. This is due to the country's enormous need for imports and its narrow base of exports. Although Djibouti runs a substantial surplus in its services balance, the surplus has been smaller than the deficit in the merchandise trade balance. 
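To make the per-capita arithmetic concrete: subtracting population growth from GDP growth gives a rough approximation of per-capita income growth. The sketch below is illustrative only, using the approximate rates quoted above.

```python
# Rough illustration of why ~3% GDP growth yields little per-capita gain
# when population grows at ~2.5% per year (figures quoted above).

gdp_growth = 0.03          # average annual GDP growth since 2003
population_growth = 0.025  # average annual population growth

# Exact ratio form: (1 + g) / (1 + n) - 1; the common approximation is g - n.
per_capita_growth = (1 + gdp_growth) / (1 + population_growth) - 1
print(f"Approximate per-capita income growth: {per_capita_growth:.2%}")  # ~0.49%
```

With under half a percent of per-capita growth per year, income per person would take well over a century to double, which is the point of the passage above.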
As a result, Djibouti's current account balance has been in deficit. There is very limited information on Djibouti's current account; the country's merchandise trade deficit was estimated at US$737 million in 2004. Positioned on a primary shipping lane between the Gulf of Aden and the Red Sea, Djibouti holds considerable strategic value in the international trade and shipping industries. The facilities of the Port of Djibouti are important to sea transportation companies for fuel bunkering and refuelling. Its transport facilities are used by several landlocked African countries for the re-export of their goods. Djibouti earns transit taxes and harbour fees from this trade; these form the bulk of government revenue. The threat of pirates patrolling the Gulf of Aden, off the coast of Somalia, intent on capturing large cargo ships and oil and chemical tankers, has created the need for larger nations such as the United States, France, and Japan to embed logistics bases or military camps from which they can defend their freight from piracy. The port of Djibouti functions as a small French naval facility, and the United States has also stationed hundreds of troops in Camp Lemonnier, Djibouti, its only African base, in an effort to counter terrorism in the region. Recently, China has stated it is in talks to build “logistics facilities” in Obock to support peacekeeping and anti-piracy missions near Somalia and the Gulf of Aden. Additional international presence will increase both Djibouti's economic value and its strategic importance in the region. The trend of Djibouti's gross domestic product at market prices is estimated by the International Monetary Fund in millions of Djiboutian francs; for purchasing power parity comparisons, the US dollar is exchanged at 76.03 Djiboutian francs. Mean wages were $1.30 per person-hour in 2009. Djibouti's economy is based on service activities connected with the country's strategic location and its status as a free trade zone in the Horn of Africa. Two-thirds of inhabitants live in the capital; the remainder of the populace is mostly nomadic herders. Low rainfall limits crop production to fruits and vegetables, and most food must be imported. The government provides services as both a transit port for the region and an international transshipment and refueling centre. Djibouti has few natural resources and little industry. All of these factors contribute to its heavy dependence on foreign assistance to help support its balance of payments and to finance development projects. An unemployment rate of 50 percent continues to be a major problem. Inflation is not a concern, however, because of the fixed tie of the franc to the US dollar. Per capita consumption dropped an estimated 35 percent over the last seven years because of recession, civil war, and a high population growth rate. Faced with a multitude of economic difficulties, the government has fallen into arrears on long-term external debt and has been struggling to meet the stipulations of foreign aid donors. The government of Djibouti welcomes all foreign direct investment. Djibouti's assets include a strategic geographic location, an open trade regime, a stable currency, substantial tax breaks and other incentives. Potential areas of investment include Djibouti's port and the telecommunications sector. 
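Since the GDP series mentioned above is denominated in millions of Djiboutian francs while cross-country comparisons are made in US dollars, a small conversion helper may clarify how the quoted 76.03 DJF/USD rate is applied. The sample input below is a placeholder, not a source figure.

```python
# Converting a GDP figure quoted in millions of Djiboutian francs (DJF)
# to millions of US dollars at the 76.03 DJF/USD rate quoted above.
# The example input value is hypothetical, not source data.

PPP_DJF_PER_USD = 76.03  # exchange rate given in the text

def djf_millions_to_usd_millions(djf_millions: float) -> float:
    """Convert millions of DJF to millions of USD at the quoted rate."""
    return djf_millions / PPP_DJF_PER_USD

# Example: a hypothetical GDP of 100,000 million DJF
print(f"{djf_millions_to_usd_millions(100_000):,.0f} million USD")  # ~1,315
```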
President Ismail Omar Guelleh, first elected in 1999, has named privatization, economic reform, and increased foreign investment as top priorities for his government. The president has pledged to seek the help of the international private sector to develop the country's infrastructure. Djibouti has no major laws that would discourage incoming foreign investment. In principle, there is no screening of investment or other discriminatory mechanisms. That said, certain sectors, most notably public utilities, are state owned, and some parts are not currently open to investors. The conditions of the structural adjustment agreement recently signed by Djibouti and the International Monetary Fund stipulate increased privatization of parastatal and government-owned monopolies. There are no patent laws in Djibouti. As in most African nations, access to licenses and approvals is complicated not so much by law as by administrative procedures. In Djibouti, the administrative process has been characterized as a form of 'circular dependency': for example, the finance ministry will issue a license only if an investor possesses an approved investor visa, while the interior ministry will only issue an investor visa to a licensed business. The Djiboutian government is increasingly realizing the importance of establishing a one-stop shop to facilitate the investment process. In May 2015, Choukri Djibah, Director of Gender in the Department of Women and Family, launched the project SIHA (Strategic Initiative for the Horn of Africa), designed to support and reinforce the economic capacity of women in Djibouti and funded with a European Union grant of 28 million Djiboutian francs. The principal exports from the region transiting Djibouti are coffee, salt, hides, dried beans, cereals, other agricultural products, chalk, and wax. Djibouti itself has few exports, and the majority of its imports come from France. Most imports are consumed in Djibouti, and the remainder goes to Ethiopia and Somalia. Djibouti's unfavourable balance of trade is offset partially by invisible earnings such as transit taxes and harbour dues. In 1999, U.S. exports to Djibouti totalled $26.7 million while U.S. imports from Djibouti were less than $1 million. The city of Djibouti has the only paved airport in the republic. In 2013, 63,000 foreign tourists visited Djibouti, with Djibouti City the principal destination; revenues from tourism were just US$43 million that year.
https://en.wikipedia.org/wiki?curid=8045
Transport in Djibouti Transport in Djibouti is overseen by the Ministry of Infrastructure & Transport. In recent years, the Government of Djibouti has significantly increased funding for rail and road construction in order to build out the country's infrastructure, which includes highways, airports and seaports, in addition to various forms of public and private road, maritime and air transportation. The country's first railway, the Ethio-Djibouti Railway, was a metre-gauge railway that connected Ethiopia to Djibouti. It was built between 1894 and 1917 by the French, who ruled the country at the time as French Somaliland. The railway is no longer operational. Currently (2018), Djibouti has 93 km of railways. The new Addis Ababa–Djibouti Railway, an electrified standard-gauge railway built by two Chinese government firms, began regular operations in January 2018. Its main purpose is to facilitate freight services between the Ethiopian hinterland and the Djiboutian Port of Doraleh. Railway services are provided by the "Ethio-Djibouti Standard Gauge Rail Transport Share Company", a bi-national company between Ethiopia and Djibouti, which operates all commuter and freight railway services in the country. Djibouti has a total of four railway stations, of which three (Nagad, Holhol and Ali Sabieh) can handle passenger traffic. The Djiboutian highway system is named according to the road classification. One route in the Trans-African Highway network originates in Djibouti City. Djibouti also has multiple highway links with Ethiopia. Roads that are considered primary roads are those that are fully asphalted throughout their entire length; in general they connect all the major towns in Djibouti. According to a 2000 estimate, the road network comprises both paved and unpaved roads. Djibouti has an improved natural harbor that consists of a roadstead, outer harbor, and inner harbor, known as the Port of Djibouti. The roadstead is well protected by reefs and by the configuration of the land. 95% of Ethiopia’s imports and exports move through Djiboutian ports. Car ferries cross the Gulf of Tadjoura from Djibouti City to Tadjoura. For decades, the Port of Djibouti was Djibouti's only freight port. It is now in the process of being replaced by the Port of Doraleh, west of Djibouti City. In addition to the Port of Doraleh, which handles general cargo and oil imports, Djibouti currently (2018) has three other major ports for the import and export of bulk goods and livestock: the Port of Tadjourah (potash), the Damerjog Port (livestock) and the Port of Goubet (salt). Djibouti had one merchant ship of over 1,000 GT (1,369 GT), according to a 1999 estimate. In 2004, there were an estimated 13 airports, only 3 of which had paved runways as of 2005. Djibouti–Ambouli International Airport, which is situated about 6 km from the city of Djibouti, is the country's international air terminal. There are also local airports at Tadjoura and Obock. Beginning in 1963, the state-owned Air Djibouti provided domestic service to various centers and flew to many overseas destinations; the national carrier discontinued operations in 2002. Daallo Airlines, a Somali-owned private carrier, has also offered air transportation since its foundation in 1991. With its hub at the Djibouti–Ambouli International Airport, the airline provides flights to a number of domestic and overseas destinations. Airports with paved runways: total: 3; over 3,047 m: 1; 1,524 to 3,047 m: 2 (2013 est.). Airports with unpaved runways: total: 10; 1,524 to 2,437 m: 1; 914 to 1,523 m: 7; under 914 m: 2 (2013 est.).
https://en.wikipedia.org/wiki?curid=8047
Djibouti Armed Forces The Djibouti Armed Forces (DJAF) are the military forces of Djibouti. They consist of the Djiboutian National Army and its sub-branches, the Djiboutian Air Force and the Djiboutian Navy. As of 2018, the Djibouti Armed Forces consist of about 20,470 ground troops (2018 est.), divided into several regiments and battalions garrisoned in various areas throughout the country. The Djibouti Armed Forces are an important player in the Bab-el-Mandeb and Red Sea region. In 2015, General Zakaria Chiek Imbrahim was "chef d'état-major général" (chief of staff) of the "Forces Armées Djiboutiennes"; he assumed command in November 2013. Djibouti has always been a very active member of the African Union and the Arab League. Historically, Somali society accorded prestige to the warrior ("waranle") and rewarded military prowess. Except for men of religion ("wadaad"), who were few in number, all Somali males were considered potential warriors. Djibouti's many Sultanates each maintained regular troops. In the early Middle Ages, the conquest of Shewa by the Ifat Sultanate ignited a rivalry for supremacy with the Solomonic Dynasty. Many similar battles were fought between the succeeding Sultanate of Adal and the Solomonids, with both sides achieving victory and suffering defeat. During the protracted Ethiopian-Adal War (1529–1559), Imam Ahmad ibn Ibrahim al-Ghazi defeated several Ethiopian Emperors and embarked on a conquest referred to as the "Futuh Al-Habash" ("Conquest of Abyssinia"), which brought three-quarters of Christian Abyssinia under the power of the Muslim Adal Sultanate. Al-Ghazi's forces and their Ottoman allies came close to extinguishing the ancient Ethiopian kingdom, but the Abyssinians managed to secure the assistance of Cristóvão da Gama's Portuguese troops and maintain their domain's autonomy. However, both polities exhausted their resources and manpower in the process, which resulted in the contraction of both powers and changed regional dynamics for centuries to come. The 1st Battalion of Somali Skirmishers, formed in 1915 from recruits from the French Somali Coast, was a unit of the French Colonial Army. It distinguished itself during the First World War, notably during the recapture of Fort Douaumont at the Battle of Verdun in October 1916, alongside the Régiment d'infanterie-chars de marine, and at the Second Battle of the Aisne in October 1917. In May and June 1918, it took part in the Third Battle of the Aisne, and in July in the Second Battle of the Marne. In August and September 1918, the Somali battalion fought on the Oise front, and in October 1918 it obtained its second citation in army orders, as well as the right to wear a fourragère in the colors of the ribbon of the Croix de guerre 1914–1918. Between 1915 and 1918, over 2,088 Djiboutians served in combat in the First World War. Their losses are estimated at 517 killed and 1,000 to 1,200 injured. During the Second World War, a battalion of Somali skirmishers took part in the battles for the liberation of France, participating in particular in the fighting at Pointe de Grave in April 1945. On April 22, 1945, General de Gaulle awarded the Somali battalion a citation in army orders and decorated the battalion's pennant at Soulac-sur-Mer. The Somali battalion was dissolved on June 25, 1946. The Ogaden War (13 July 1977 – 15 March 1978) was a conflict fought between the Ethiopian and Somali governments. The Djiboutian government supported Somalia with military intelligence. 
In a notable illustration of the nature of Cold War alliances, the Soviet Union switched from supplying aid to Somalia to supporting Ethiopia, which had previously been backed by the United States. This in turn prompted the U.S. to later start supporting Somalia. The war ended when Somali forces retreated back across the border and a truce was declared. The first war which involved the Djiboutian armed forces was the Djiboutian Civil War, fought between the Djiboutian government, supported by France, and the Front for the Restoration of Unity and Democracy ("FRUD"). The war lasted from 1991 to 2001, although most of the hostilities ended when the moderate factions of FRUD signed a peace treaty with the government after suffering an extensive military setback when government forces captured most of the rebel-held territory. A radical group continued to fight the government, but signed its own peace treaty in 2001. The war ended in a government victory, and FRUD became a political party. Djibouti has fought in clashes against Eritrea over the Ras Doumeira peninsula, which both countries claim as their sovereign territory. The first clash occurred in 1996 after a nearly two-month stand-off. In 1999, a political crisis occurred when each side accused the other of supporting its enemies. In 2008, the countries clashed again when Djibouti refused to return Eritrean deserters and Eritrea responded by firing on the Djiboutian forces. In the ensuing battles, some 44 Djiboutian troops and an estimated 100 Eritreans were killed. In 2011, Djiboutian troops also joined the African Union Mission to Somalia. As of 2013, the Djibouti Armed Forces (DJAF) are composed of three branches: the Djibouti National Army, which consists of the Coastal Navy, the Djiboutian Air Force (Force Aerienne Djiboutienne, FAD), and the National Gendarmerie (GN). The Army is by far the largest, followed by the Air Force and the Navy. The Commander-in-Chief of the DJAF is the President of Djibouti, and the Minister of Defence oversees the DJAF on a day-to-day basis (refer to decree No. 2003-0166/PR/MDN on the organization of the Djibouti Armed Forces). The Djiboutian National Army is the largest branch of the Djibouti Armed Forces. Djibouti maintains a modest military force of approximately 20,470 troops, of whom 18,600 make up the army (IISS 2018). The latter are divided into several regiments and battalions garrisoned in various areas throughout the country. The Army has four military districts (the Tadjourah, Dikhil, Ali-Sabieh and Obock districts). Clashes with the Eritrean military in 2008 demonstrated the superior training and skills of the Djiboutian forces, but also highlighted the fact that the small military would be unable to counter the larger, if less well-equipped, forces of its neighbours. The army has concentrated on mobility in its equipment purchases, suitable for patrol duties and counterattacks but ill-suited for armoured warfare. The 2008 border clashes at least temporarily swelled the ranks of the Djiboutian army, with retired personnel being recalled, but the military’s size and capabilities are much reduced since the 1990s. The army has sought to address more effectively its major defense disadvantage: lack of strategic depth. Thus, in the early 2000s, it looked outward for a model of army organization that would best advance defensive capabilities by restructuring forces into smaller, more mobile units instead of traditional divisions. 
The official tasks of the armed forces include strengthening the country against external attack and maintaining border security. Djiboutian troops continue to monitor the border with Eritrea in case of an attack. The Djiboutian Army is one of the smaller professional armies in East Africa. Italy delivered 10 M-109L howitzers (in 2013), dozens of IVECO trucks (ACM90s, cranes, tankers, etc.), some IVECO Puma 4x4 armoured cars and IVECO VM90 utility vehicles. In reforming the Djiboutian Army, most of the available financial resources have been directed to the development of the land forces. Over the years, the Djiboutian Army has established partnerships with the militaries of France, Egypt, Saudi Arabia, Morocco and the United States. Currently, the amount allocated to defense represents the largest single entry in the country’s budget. The Djiboutian Navy is the naval service branch of the Djibouti Armed Forces. The Djiboutian Navy had about 1,000 regular personnel as of 2013; its missions are to protect national maritime rights and to support the nation's foreign policies. It is responsible for securing Djibouti's territorial waters and 314 km of seaboard. The force was launched two years after Djibouti gained its independence in 1977. Initially, it comprised the remnants of the Gendarmerie and was focused on port safety and traffic monitoring. This is an area known to have considerable fish stocks, sustaining an active fisheries industry. The acquisition of several boats from the US in 2006 considerably increased the navy's ability to patrol over longer distances and to remain at sea for several days at a time. Cooperation with the US and Yemeni navies is also increasing, in an effort to protect and maintain the safety and security of the Sea Lanes of Communication (SLOC). In 2004, Italy delivered two former Italian Coast Guard patrol boats (ex CP 230 and CP 234) and two new "CP 500"-type motorboats. The Djiboutian Air Force (DAF) (French: "Force Aérienne du Djibouti", FADD) was established as part of the Djibouti Armed Forces after the country obtained its independence on June 27, 1977. Its first aircraft included three Nord N.2501 Noratlas transport aircraft and an Alouette II helicopter presented to it by the French. In 1982, the Djibouti Air Force was augmented by two Aérospatiale AS.355F Ecureuil 2 helicopters and a Cessna U206G Stationair; this was followed in 1985 by a Cessna 402C Utiliner. In 1985, the Alouette II was withdrawn from use and put on display at Ambouli Air Base at Djibouti's airport. In 1987, the three N.2501 Noratlas were also retired and subsequently returned to France. New equipment came in 1991 in the form of a Cessna 208 Caravan, followed by Russian types in the early nineties. These included four Mil Mi-2, six Mil Mi-8 and two Mil Mi-17 helicopters and a single Antonov An-28 light transport aircraft. Pilot training for the 360 personnel of the DAF is conducted in France when necessary, with continued on-type flight training at home. The DAF has no units of its own and forms in whole a part of the Army; its sole base is Ambouli. The size and structure of the Djibouti Armed Forces is continually evolving. As of 2018, the Djibouti Armed Forces were reported to have 18,000–20,000 active personnel and 10,500–11,000 reserve personnel. 
Djibouti has committed to strengthening international action through the African Union to achieve collective security and uphold the goals enshrined in the Purposes and Principles of the UN Charter and the Constitutive Act of the African Union. Djiboutian peacekeepers are deployed in two countries, Somalia and Sudan; Djibouti's first contribution to UN peacekeeping was in 2010, in Darfur, Sudan. The Chinese naval support base in Djibouti began construction in 2016 and was officially opened in 2017. France's 5e RIAOM is currently stationed in Djibouti. The Italian "Base Militare Nazionale di Supporto" (National Support Military Base) is capable of hosting 300 troops and some UAVs. The Japan Self-Defense Force Base Djibouti was established in 2011. The "Deployment Airforce for Counter-Piracy Enforcement" (DAPE) was established in 2011, with approximately 600 personnel from the Japan Maritime Self-Defense Force deployed on a rotational basis, operating naval vessels and maritime patrol aircraft. Japan reportedly pays $30 million per year for the military facilities, from which it conducts anti-piracy operations in the region. The base also acts as a hub for operations throughout the East African coastline. There is also the Combined Joint Task Force – Horn of Africa, a U.S. force of more than 3,500 personnel, currently deployed in the country at Camp Lemonnier.
https://en.wikipedia.org/wiki?curid=8048
Demographics of Dominica This article is about the demographic features of the population of Dominica, including population density, ethnicity, religious affiliations and other aspects of the population. According to the preliminary 2011 census results, Dominica has a population of 71,293. The population growth rate is very low, due primarily to emigration to more prosperous Caribbean islands, the United Kingdom, the United States, Canada, and Australia.

The vast majority of Dominicans are of African descent (75% at the 2014 census). There is a significant mixed population (19% at the 2014 census) due to intermarriage, along with a small European-origin minority (0.8%; descendants of French, British, and Irish colonists), East Indians (0.1%), and small numbers of Lebanese/Syrians (0.1%) and Asians. Dominica is the only Eastern Caribbean island that still has a population of pre-Columbian native Caribs (also known as Kalinago); elsewhere the Caribs were exterminated, driven from neighbouring islands, or absorbed through mixing with Africans and/or Europeans. According to the 2001 census there were only 2,001 Caribs remaining (2.9% of the total population), though this represents considerable growth since the 1991 census, when 1,634 Caribs were counted (2.4% of the total population). The Caribs live in eight villages on the east coast of Dominica; this special Carib Territory was granted by the British Crown in 1903. The present number of Kalinago is estimated at more than 3,000 (roughly 4% of the population).

English is the official language and universally understood; however, because of historic French domination, Antillean Creole, a French-lexified creole language, is also widely spoken.

According to the 2001 census, 91.2% of the population of Dominica is considered Christian, 1.6% has a non-Christian religion, 6.1% has no religion, and 1.1% did not state a religion. Roughly two thirds of Christians are Roman Catholics (61.4% of the total population), a reflection of early French influence on the island, and one third are Protestant. The Evangelicals constitute the largest Protestant group, with 6.7% of the population. Seventh-day Adventists are the second largest group (6.1%). The next largest group are Pentecostals (5.6% of the population), followed by Baptists (4.1%). Other Christians include Methodists (3.7%), Church of God (1.2%), Jehovah's Witnesses (1.2%), Anglicans (0.6%) and Brethren Christians (0.3%). During the past decades the number of Roman Catholics and Anglicans has decreased, while the number of other Protestants, especially Evangelicals, Seventh-day Adventists, Pentecostals and Baptists, has increased. The number of non-Christians is small; these religious groups include the Rastafarian movement (1.3% of the population), Hindus (0.1%) and Muslims (0.2%).
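Read this way, the 2001 census categories partition the whole population. A quick arithmetic check (an added illustration using only the figures quoted above, not census source code) confirms the shares sum to 100%:

```python
# 2001 census religion shares for Dominica, as quoted in the text above.
shares = {
    "Christian": 91.2,
    "non-Christian religion": 1.6,
    "no religion": 6.1,
    "not stated": 1.1,
}

total = sum(shares.values())
print(f"total = {total:.1f}%")  # total = 100.0%
assert abs(total - 100.0) < 0.05  # the categories cover the population
```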
https://en.wikipedia.org/wiki?curid=8053
Telecommunications in Dominica Telecommunications in Dominica comprises telephone, radio, television and internet services. The primary regulatory authority is the National Telecommunication Regulatory Commission, which regulates all related industries in order to comply with the Telecommunications Act 8 of 2000.

Calls from Dominica to the US, Canada, and other NANP Caribbean nations are dialed as 1 + NANP area code + 7-digit number. Calls from Dominica to non-NANP countries are dialed as 011 + country code + phone number with local area code (see the sketch below).

Radio broadcast stations: AM 0, FM 15, shortwave 0 (2007). Radios: 46,000 (1997). Television broadcast stations: 0 (however, there are three cable television companies: Dominica Broadcast, Marpin Telecoms and Digicel Play, a merger of Digicel and SAT Telecommunications Ltd.). Internet users: 11,000 (2007).
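The two dialing patterns above amount to a simple prefix rule. The following minimal Python sketch (an added illustration with placeholder numbers; the function name and examples are invented for this note, not an official numbering-plan utility) builds the dialed digit string for each case:

```python
def dial_from_dominica(destination_code: str, subscriber_number: str, nanp: bool) -> str:
    """Build the digit string a caller in Dominica dials.

    nanp=True:  destination_code is a NANP area code and the call is
                dialed as 1 + area code + 7-digit number.
    nanp=False: destination_code is a country code and the call is
                dialed as 011 + country code + number (including any
                local area code in subscriber_number).
    """
    prefix = "1" if nanp else "011"
    return prefix + destination_code + subscriber_number

# Placeholder examples, not real subscribers:
print(dial_from_dominica("212", "5550123", nanp=True))     # 12125550123
print(dial_from_dominica("44", "2079460958", nanp=False))  # 011442079460958
```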
https://en.wikipedia.org/wiki?curid=8056
Military of Dominica There has been no standing army in Dominica since 1981. Defense is the responsibility of the Regional Security System (RSS). The civil Commonwealth of Dominica Police Force includes a Special Service Unit and a Coast Guard. In the event of war or other emergency, if proclaimed by the authorities, the Police Force becomes a military force which may be employed for State defence ("Police Act", Chapter 14:01).
https://en.wikipedia.org/wiki?curid=8058
Foreign relations of Dominica Like its Eastern Caribbean neighbours, the main priority of Dominica's foreign relations is economic development. The country maintains missions in Washington, New York, London, and Brussels, and is represented jointly with other Organisation of Eastern Caribbean States (OECS) members in Canada. Dominica is also a member of the Caribbean Development Bank (CDB) and the Commonwealth of Nations. It became a member of the United Nations and the International Monetary Fund (IMF) in 1978, and of the World Bank and Organization of American States (OAS) in 1979.

As a member of CARICOM, Dominica in July 1994 strongly backed efforts by the United States to implement United Nations Security Council Resolution 940, designed to facilitate the departure of Haiti's de facto authorities from power. The country agreed to contribute personnel to the multinational force, which restored the democratically elected government of Haiti in October 1994. In May 1997, Prime Minister James joined 14 other Caribbean leaders and U.S. President Clinton at the first-ever U.S.-regional summit in Bridgetown, Barbados. The summit strengthened the basis for regional cooperation on justice and counternarcotics issues, finance and development, and trade.

Dominica previously maintained official relations with the Republic of China (commonly known as "Taiwan") rather than the People's Republic of China, but on 23 March 2004 a joint communiqué was signed in Beijing, paving the way for diplomatic recognition of the People's Republic. Beijing responded to Dominica's severing of relations with the Republic of China by giving Dominica a $12 million aid package, which included $6 million in budget support for 2004 and $1 million annually for six years. Dominica is also a member of the International Criminal Court, with a Bilateral Immunity Agreement protecting the U.S. military (as covered under Article 98). Dominica claims Venezuelan-controlled Isla Aves (known in Dominica as Bird Rock), located roughly 90 km west of Dominica.

The Commonwealth of Dominica has been a member of the Commonwealth of Nations since 1978, when it became an independent Commonwealth republic upon independence from the United Kingdom. Dominica's highest court of appeal is the Caribbean Court of Justice, in effect from 6 March 2015; previously, the nation's ultimate court of appeal was the Judicial Committee of the Privy Council in London. The Commonwealth of Dominica has also been a member of the Organisation internationale de la Francophonie since 1979. A former French and later British colony, Dominica retains Antillean Creole, a French-based creole language, spoken by 90% of the population. The Commonwealth of Dominica is nestled between two insular overseas regions of France, Guadeloupe (situated to the north) and Martinique (situated to the south), and a number of links exist between Dominica and its French neighbours.
https://en.wikipedia.org/wiki?curid=8059
Dominican Republic The Dominican Republic is a country located on the island of Hispaniola in the Greater Antilles archipelago of the Caribbean region. It occupies the eastern five-eighths of the island, which it shares with Haiti, making Hispaniola one of only two Caribbean islands, along with Saint Martin, that are shared by two sovereign states. The Dominican Republic is the second-largest nation in the Antilles by area (after Cuba) and the third by population, with approximately 10.5 million people (2020 est.), of whom approximately 3.3 million live in the metropolitan area of Santo Domingo, the capital city. The official language of the country is Spanish.

The native Taíno people had inhabited Hispaniola before the arrival of the Europeans, dividing it into five chiefdoms. The Taíno had gradually moved north over many years, settling the Caribbean islands, and were developing an increasingly organized society. Christopher Columbus explored and claimed the island, landing there on his first voyage in 1492. The colony of Santo Domingo became the site of the first permanent European settlement in the Americas, the oldest continuously inhabited city, and the first seat of Spanish colonial rule in the New World. Meanwhile, France occupied the western third of Hispaniola, naming its colony Saint-Domingue, which became the independent state of Haiti in 1804.

After more than three hundred years of Spanish rule, the Dominican people declared independence in November 1821. The leader of the independence movement, José Núñez de Cáceres, intended the Dominican nation to unite with Gran Colombia, but the newly independent Dominicans were forcibly annexed by Haiti in February 1822. Independence came 22 years later, in 1844, after victory in the Dominican War of Independence. Over the next 72 years the Dominican Republic experienced mostly internal conflicts and a brief return to Spanish colonial status before permanently ousting the Spanish during the Dominican War of Restoration of 1863–1865. The United States occupied the country between 1916 and 1924; a subsequent calm and prosperous six-year period under Horacio Vásquez followed. From 1930 the dictatorship of Rafael Leónidas Trujillo ruled until 1961. A civil war in 1965, the country's last, was ended by U.S. military occupation and was followed by the authoritarian rule of Joaquín Balaguer (1966–1978 and 1986–1996). Since 1978 the Dominican Republic has moved toward representative democracy. Danilo Medina, the Dominican Republic's current president, succeeded Leonel Fernández in 2012, winning 51% of the electoral vote over his opponent, ex-president Hipólito Mejía.

The Dominican Republic has the largest economy in the Caribbean and Central American region and is the eighth-largest economy in Latin America. Over the last 25 years the Dominican Republic has had the fastest-growing economy in the Western Hemisphere, with an average real GDP growth rate of 5.3% between 1992 and 2018. GDP growth in 2014 and 2015 reached 7.3% and 7.0%, respectively, the highest in the Western Hemisphere. In the first half of 2016 the Dominican economy grew 7.4%, continuing its trend of rapid economic growth. Recent growth has been driven by construction, manufacturing, tourism, and mining. The country is the site of the second largest gold mine in the world, the Pueblo Viejo mine.
Private consumption has been strong, as a result of low inflation (under 1% on average in 2015), job creation, and a high level of remittances. The Dominican Republic is the most visited destination in the Caribbean, and its year-round golf courses are major attractions. A geographically diverse nation, the Dominican Republic is home to both the Caribbean's tallest mountain peak, Pico Duarte, and the Caribbean's largest lake and point of lowest elevation, Lake Enriquillo. The island has great climatic and biological diversity. The country is also the site of the first cathedral, castle, monastery, and fortress built in the Americas, located in Santo Domingo's Colonial Zone, a World Heritage Site. Music and sport are of great importance in Dominican culture, with merengue and bachata as the national dance and music, and baseball as the most popular sport.

The word "Dominican" comes from the Latin "Dominicus", meaning Sunday; however, the island owes its name to Santo Domingo de Guzmán (Saint Dominic), founder of the Order of the Dominicans. The Dominicans established a house of high studies on the island of Santo Domingo, today known as the Universidad Autónoma de Santo Domingo, and dedicated themselves to the protection of the native Taíno people, who were subjected to slavery, and to the education of the inhabitants of the island. For most of its history, up until independence, the country was known as Santo Domingo – the name of its present capital and patron saint, Saint Dominic – and continued to be commonly known as such in English until the early 20th century. The residents were called "Dominicans", the adjectival form of "Domingo", and the revolutionaries named their newly independent country the "Dominican Republic". In the national anthem of the Dominican Republic, the term "Dominicans" does not appear; the author of its lyrics, Emilio Prud'Homme, consistently uses the poetic term "Quisqueyans". The word "Quisqueya" derives from a native tongue of the Taíno Indians and means "mother of the lands"; it is often used in songs as another name for the country. The name of the country is often shortened to "the D.R."

The Arawakan-speaking Taíno moved into Hispaniola from the northeast region of what is now South America, displacing earlier inhabitants, c. 650 C.E. They engaged in farming, fishing, hunting and gathering. The fierce Caribs drove the Taíno to the northeastern Caribbean during much of the 15th century. Estimates of Hispaniola's population in 1492 vary widely, from one hundred thousand to three hundred thousand, four hundred thousand, or even two million; determining precisely how many people lived on the island in pre-Columbian times is next to impossible, as no accurate records exist. By 1492 the island was divided into five Taíno chiefdoms. The Taíno name for the entire island was either "Ayiti" or "Quisqueya".

The Spaniards arrived in 1492. After initially friendly relations, the Taínos resisted the conquest, led by the female chief Anacaona of Xaragua and her ex-husband chief Caonabo of Maguana, as well as chiefs Guacanagaríx, Guamá, Hatuey, and Enriquillo. The latter's successes gained his people an autonomous enclave on the island for a time. Within a few years after 1492, the population of Taínos had declined drastically, owing to smallpox, measles, and other diseases that arrived with the Europeans, and to other causes discussed below.
The first recorded smallpox outbreak in the Americas occurred on Hispaniola in 1507. The last record of pure Taínos in the country was from 1864. Still, Taíno biological heritage survived to an important extent, due to intermixing. Census records from 1514 reveal that 40% of Spanish men in Santo Domingo were married to Taíno women, and some present-day Dominicans have Taíno ancestry. Remnants of the Taíno culture include cave paintings (such as those in the Pomier Caves), as well as pottery designs which are still used in the small artisan village of Higüerito, Moca.

Christopher Columbus arrived on the island on December 5, 1492, during the first of his four voyages to the Americas. He claimed the land for Spain and named it "La Española", due to its diverse climate and terrain, which reminded him of the Spanish landscape. Traveling further east, Columbus came across the Yaque del Norte River in the Cibao region, which he named Río de Oro after discovering gold deposits nearby. On his return during his second voyage, Columbus established the settlement of La Isabela, in what is now Puerto Plata, in January 1494, while he sent Alonso de Ojeda to search for gold in the region. In 1496 Bartholomew Columbus, Christopher's brother, built the city of Santo Domingo, the first permanent European settlement in the "New World". The colony thus became the springboard for the further Spanish conquest of the Americas and, for decades, the headquarters of Spanish colonial power in the hemisphere.

Soon afterwards the island's largest gold deposits were discovered in the central cordillera region, which led to a mining boom. By 1501 Columbus's cousin Giovanni Columbus had also discovered gold near Buenaventura; the deposits were later known as Minas Nuevas. Two major mining areas resulted, one along San Cristóbal-Buenaventura and another in the Cibao within the La Vega-Cotuy-Bonao triangle, while Santiago de los Caballeros, Concepción, and Bonao became mining towns. The gold rush of 1500–1508 ensued. Ferdinand II of Aragon "ordered gold from the richest mines reserved for the Crown." Thus, Ovando expropriated the gold mines of Miguel Díaz and Francisco de Garay in 1504, as pit mines became royal mines, though placers were open to private prospectors. Furthermore, Ferdinand wanted the "best Indians" working his royal mines, and kept 967 of them in the San Cristóbal mining area, supervised by salaried miners.

Under Nicolás de Ovando y Cáceres' governorship, the Indians were made to work in the gold mines, "where they were grossly overworked, mistreated, and underfed," according to Pons. By 1503 the Spanish Crown had legalized the distribution of Indians to work the mines as part of the encomienda system. According to Pons, "Once the Indians entered the mines, hunger and disease literally wiped them out." By 1508 the Indian population of about 400,000 had been reduced to 60,000, and by 1514 only 26,334 remained. About half were located in the mining towns of Concepción, Santiago, Santo Domingo, and Buenaventura. The repartimiento of 1514, coupled with the exhaustion of the mines, accelerated the emigration of the Spanish colonists. In 1516 a smallpox epidemic killed 8,000 of the remaining 11,000 Indians in one month. By 1519, according to Pons, "Both the gold economy and the Indian population became extinct at the same time." The southern city of Santo Domingo served as the administrative heart of the expanding Spanish empire.
Conquistadors like Hernán Cortés and Francisco Pizarro lived and worked in Santo Domingo before they embarked on their conquests on the American mainland. Sugar cane was introduced to Hispaniola from the Canary Islands, and the first sugar mill in the New World was established on Hispaniola in 1516. The need for a labor force to meet the growing demands of sugar cane cultivation led to an exponential increase in the importation of slaves over the following two decades. The sugar mill owners soon formed a new colonial elite and convinced the Spanish king to allow them to elect the members of the Real Audiencia from their ranks. Poorer colonists subsisted by hunting the herds of wild cattle that roamed throughout the island and selling their leather.

In the 1560s English pirates joined the French in regularly raiding Spanish shipping in the Americas. With the conquest of the American mainland, Hispaniola's sugar plantation economy quickly declined. Most Spanish colonists left for the silver mines of Mexico and Peru, while new immigrants from Spain bypassed the island. Agriculture dwindled, new imports of slaves ceased, and white colonists, free people of color, and slaves lived in similar conditions, weakening the racial hierarchy and aiding intermixing, resulting in a population of predominantly mixed Spaniard, Taíno, and African descent. Except for the city of Santo Domingo, which managed to maintain some legal exports, Dominican ports were forced to rely on contraband trade, which, along with livestock, became one of the main sources of livelihood for the island's inhabitants.

By the mid-17th century the French sent colonists and privateers to settle the northwestern coast of Hispaniola, due to its strategic position in the region. In order to entice the pirates, the French supplied them with women who had been taken from prisons, accused of prostitution and thieving. After decades of armed struggle with the French, Spain ceded the western coast of the island to France with the 1697 Treaty of Ryswick, whilst the Central Plateau remained under Spanish domain. France created a wealthy colony on the island, while the Spanish colony continued to suffer an economic decline.

On April 17, 1655, the English landed on Hispaniola and marched 30 miles overland to Santo Domingo, the main Spanish stronghold on the island. The sweltering heat soon felled many of the northern European invaders. The Spanish defenders, having had time to prepare an ambush for the aimlessly thrashing, mosquito-swatting newcomers, sprang on them with mounted lancers, sending them careening back toward the beach in utter confusion. Their commander, Venables, hid behind a tree where, in the words of one disgusted observer, he was "so much possessed with terror that he could hardly speak." The elite defenders of Santo Domingo were amply rewarded with titles from the Spanish Crown.

The French attacked Santiago in 1667, and this was followed by a devastating hurricane the next year and a smallpox epidemic that killed about 1,500 people in 1669. In 1687 the Spaniards captured the fort at Petit-Goave, but the French fought back and hanged their leaders. Two years later Louis XIV was at war and ordered the French to attack the Spaniards, and Tarin de Cussy sacked Santiago. In 1691 the Spaniards attacked the north and sacked Cap-François.
Island tensions subsided once peace was restored and Spain's last Habsburg monarch, the deformed invalid Charles II, died on 1 November 1700, being succeeded by the sixteen-year-old French Bourbon princeling Philip of Anjou. The House of Bourbon replaced the House of Habsburg in Spain in 1700 and introduced economic reforms that gradually began to revive trade in Santo Domingo. The crown progressively relaxed the rigid controls and restrictions on commerce between Spain and the colonies and among the colonies. The last "flotas" sailed in 1737, and the monopoly port system was abolished shortly thereafter. By the middle of the century, the population was bolstered by emigration from the Canary Islands, resettling the northern part of the colony and planting tobacco in the Cibao Valley, and the importation of slaves was renewed.

The colony of Santo Domingo saw a population increase during the 18th century, rising to about 91,272 in 1750. Of this number, approximately 38,272 were white landowners, 38,000 were free mixed people of color, and some 15,000 were slaves. This contrasted sharply with the population of the French colony of Saint-Domingue (present-day Haiti), the wealthiest colony in the Caribbean, whose population of half a million was 90% enslaved and, overall, seven times as numerous as that of the Spanish colony of Santo Domingo. The 'Spanish' settlers, whose blood by now was mixed with that of Taínos, Africans and Canary Guanches, proclaimed: 'It does not matter if the French are richer than us, we are still the true inheritors of this island. In our veins runs the blood of the heroic "conquistadores" who won this island of ours with sword and blood.'

When the War of Jenkins' Ear broke out between Spain and Britain in 1739, Spanish privateers, particularly from Santo Domingo, began to troll the Caribbean Sea, a development that lasted until the end of the eighteenth century. During this period, Spanish privateers from Santo Domingo sailed into enemy ports looking for ships to plunder, thus harming commerce with Britain and New York. As a result, the Spanish obtained stolen merchandise (foodstuffs, ships, enslaved persons) that was sold in Hispaniola's ports, with profits accruing to individual sea raiders. The revenue acquired in these acts of piracy was invested in the economic expansion of the colony and led to repopulation from Europe. As restrictions on colonial trade were relaxed, the colonial elites of Saint-Domingue offered the principal market for Santo Domingo's exports of beef, hides, mahogany, and tobacco.

With the outbreak of the Haitian Revolution in 1791, the rich urban families linked to the colonial bureaucracy fled the island, while most of the rural "hateros" (cattle ranchers) remained, even though they lost their principal market. Although the population of Spanish Santo Domingo was perhaps one-fourth that of French Saint-Domingue, this did not prevent the Spanish king from launching an invasion of the French side of the island in 1793, attempting to take advantage of the chaos sparked by the French Revolution. French forces checked Spanish progress toward Port-au-Prince in the south, but the Spanish pushed rapidly through the north, most of which they occupied by 1794. Although the Spanish military effort went well on Hispaniola, it went less well in Europe (see War of the Pyrenees). As a consequence, Spain was forced to cede Santo Domingo to the French under the terms of the Treaty of Basel (July 22, 1795) in order to get the French to withdraw from Spain.
From 1795 to 1822, Santo Domingo (the city) changed hands several times, along with the colony it headed. It was ceded to France in 1795 after years of struggle. However, the French failed to consolidate this cession, mainly because of the continued presence of British troops in Saint-Domingue (they remained there until 1798). As news of Santo Domingo's cession became known on the island, many Dominicans sided with Britain against France, welcoming British ships into their ports, pledging allegiance to the British and enlisting in the military forces of France's longtime opponent. The city was briefly captured by Haitian rebels in 1801, recovered by France in 1802, and once again reclaimed by Spain in 1809.

Toussaint Louverture, who at least in theory represented imperial France, marched into Santo Domingo from Saint-Domingue to enforce the terms of the treaty. Toussaint's army committed numerous atrocities; as a consequence, the Spanish population fled Santo Domingo in an exodus. French control of the former Spanish colony passed from Toussaint Louverture to General Charles Leclerc when he seized the city of Santo Domingo in early 1802. Following the defeat of the French under General Donatien de Rochambeau at Le Cap in November 1803 by the Haitians, their new leader, Dessalines, attempted to drive the French out of Santo Domingo. He invaded the Spanish side of the island, defeated the French-led Spanish colonials at the River Yaque del Sur, and besieged the capital on March 5, 1805. At the same time, the Haitian general Christophe marched north through the Cibao, capturing Santiago, where he massacred prominent individuals who had sought refuge in a church. The arrival of small French squadrons off the Haitian coast at Gonaïves and at Santo Domingo forced the Haitians to withdraw. As Christophe retreated across the island, he slaughtered and burned.

In October 1808 the landowner Juan Sánchez Ramírez began a rebellion against the French colonial government in Santo Domingo, and the insurgents were aided from Puerto Rico and Jamaica. A combined Anglo-Spanish force recaptured the territory in 1809. Upon re-establishing control, the Spaniards not only tried to re-establish slavery in Santo Domingo, but many of them also mounted raiding expeditions into Haiti to capture blacks and enslave them as well.

After a dozen years of discontent and failed independence plots by various opposing groups, Santo Domingo's former lieutenant-governor (top administrator), José Núñez de Cáceres, declared the colony's independence from the Spanish crown as Spanish Haiti on November 30, 1821. This period is also known as the Ephemeral Independence. The newly independent republic ended two months later under the Haitian government led by Jean-Pierre Boyer. As Toussaint Louverture had done two decades earlier, the Haitians abolished slavery. In order to raise funds for the huge indemnity of 150 million francs that Haiti agreed to pay the former French colonists (subsequently lowered to 60 million francs), the Haitian government imposed heavy taxes on the Dominicans. Since Haiti was unable to adequately provision its army, the occupying forces largely survived by commandeering or confiscating food and supplies at gunpoint. Attempts to redistribute land conflicted with the system of communal land tenure ("terrenos comuneros"), which had arisen with the ranching economy, and some people resented being forced to grow cash crops under Boyer and Joseph Balthazar Inginac's "Code Rural".
In the rural and rugged mountainous areas, the Haitian administration was usually too inefficient to enforce its own laws. It was in the city of Santo Domingo that the effects of the occupation were most acutely felt, and it was there that the movement for independence originated.

The Haitians associated the Roman Catholic Church with the French slave-masters who had exploited them before independence, and they confiscated all church property, deported all foreign clergy, and severed the ties of the remaining clergy to the Vatican. All levels of education collapsed; the university was shut down, as it was starved both of resources and of students, with young Dominican men from 16 to 25 years old being drafted into the Haitian army. Boyer's occupation troops, who were largely Dominicans, were unpaid and had to "forage and sack" from Dominican civilians. Haiti imposed a "heavy tribute" on the Dominican people.

Haiti's constitution forbade white elites from owning land, and the major Dominican landowning families were forcibly deprived of their properties. During this time, many white elites in Santo Domingo did not own slaves, owing to the economic crisis that Santo Domingo faced during the España Boba period. The few landowners who wanted slavery re-established in Santo Domingo had to emigrate to other colonies such as Cuba, Puerto Rico, or Gran Colombia. Many landowning families stayed on the island, with a heavy concentration of landowners settling in the Cibao region. After independence, and after the country eventually came under Spanish rule once again in 1861, many families returned to Santo Domingo, along with new waves of immigration from Spain.

In 1838 Juan Pablo Duarte founded a secret society called La Trinitaria, which sought the complete independence of Santo Domingo without any foreign intervention. Francisco del Rosario Sánchez and Ramón Matías Mella, despite not being among the founding members of La Trinitaria, were also decisive in the fight for independence; Duarte, Mella, and Sánchez are considered the three Founding Fathers of the Dominican Republic.

The Trinitarios took advantage of a Haitian rebellion against the dictator Jean-Pierre Boyer. They rose up on January 27, 1843, ostensibly in support of the Haitian Charles Hérard, who was challenging Boyer for control of Haiti. However, the movement soon discarded its pretext of support for Hérard and championed Dominican independence instead. After overthrowing Boyer, Hérard executed some Dominicans and threw many others into prison; Duarte escaped. After subduing the Dominicans, Hérard, a mulatto, faced a rebellion by blacks in Port-au-Prince. Haiti had formed two regiments composed of Dominicans from the city of Santo Domingo; these were used by Hérard to suppress the uprising.

On February 27, 1844, the surviving members of La Trinitaria declared independence from Haiti. They were backed by Pedro Santana, a wealthy cattle rancher from El Seibo, who became general of the army of the nascent republic. The Dominican Republic's first constitution was adopted on November 6, 1844, and was modeled after the United States Constitution. The decades that followed were filled with tyranny, factionalism, economic difficulties, rapid changes of government, and exile for political opponents. Archrivals Santana and Buenaventura Báez held power most of the time, both ruling arbitrarily. They promoted competing plans to annex the new nation to another power: Santana favored Spain, and Báez the United States.
Threatening the nation's independence were renewed Haitian invasions. On 19 March 1844 the Haitian army, under the personal command of President Hérard, invaded the eastern province from the north and advanced as far as Santiago, but was soon forced to withdraw after suffering disproportionate losses. According to the April 5, 1844 report to Santo Domingo by José María Imbert, the general defending Santiago, "in Santiago, the enemy did not leave behind in the battlefield less than six hundred dead and…the number of wounded was very superior…[while on] our part we suffered not one casualty." The Dominicans repelled the Haitian forces, on both land and sea, by December 1845.

The Haitians invaded again in 1849, after France recognized the Dominican Republic as an independent nation. In an overwhelming onslaught, the Haitians seized one frontier town after another. Santana, called upon to assume command of the troops, met the enemy at Ocoa on April 21, 1849, with only 400 men, and succeeded in utterly defeating the Haitian army. In November 1849 Báez launched a naval offensive against Haiti to forestall the threat of another invasion. His seamen, under the French adventurer Fagalde, raided the Haitian coasts, plundered seaside villages as far as Cape Dame Marie, and butchered the crews of captured enemy ships. In 1855 Haiti invaded again, but its forces were repulsed in the bloodiest clashes in the history of the Dominican–Haitian wars, the Battle of Santomé in December 1855 and the Battle of Sabana Larga in January 1856.

The first constitution featured a presidential form of government with many liberal tendencies, but it was marred by Article 210, imposed on the constitutional assembly by force by Pedro Santana, giving him the privileges of a dictatorship until the war of independence was over. These privileges not only served him to win the war but also allowed him to persecute, execute and drive into exile his political opponents, among whom Duarte was the most important. In Haiti, after the fall of Boyer, black leaders had ascended to the power once enjoyed exclusively by the mulatto elite.

Without adequate roads, the regions of the Dominican Republic developed in isolation from one another. In the south, also known at the time as Ozama, the economy was dominated by cattle-ranching (particularly in the southeastern savannah) and the cutting of mahogany and other hardwoods for export. This region retained a semi-feudal character, with little commercial agriculture, the hacienda as the dominant social unit, and the majority of the population living at a subsistence level. In the north (better known as the Cibao), the nation's richest farmland, peasants supplemented their subsistence crops by growing tobacco for export, mainly to Germany. Tobacco required less land than cattle ranching and was mainly grown by smallholders, who relied on itinerant traders to transport their crops to Puerto Plata and Monte Cristi.

Santana antagonized the Cibao farmers, enriching himself and his supporters at their expense by resorting to multiple peso printings that allowed him to buy their crops for a fraction of their value. In 1848 he was forced to resign and was succeeded by his vice-president, Manuel Jimenes. After defeating a new Haitian invasion in 1849, Santana marched on Santo Domingo and deposed Jimenes in a coup d'état.
At his behest, Congress elected Buenaventura Báez as president, but Báez was unwilling to serve as Santana's puppet, challenging his role as the country's acknowledged military leader. In 1853 Santana was elected president for his second term, forcing Báez into exile. Three years later, after repulsing another Haitian invasion, he negotiated a treaty leasing a portion of the Samaná Peninsula to a U.S. company; popular opposition forced him to abdicate, enabling Báez to return and seize power. With the treasury depleted, Báez printed eighteen million uninsured pesos, purchasing the 1857 tobacco crop with this currency and exporting it for hard cash at immense profit to himself and his followers. Cibao tobacco planters, who were ruined when hyperinflation ensued, revolted and formed a new government, headed by José Desiderio Valverde and headquartered in Santiago de los Caballeros. In July 1857 General Juan Luis Franco Bidó besieged Santo Domingo. The Cibao-based government declared an amnesty for exiles, and Santana returned and managed to replace Franco Bidó in September 1857. After a year of civil war, Santana captured Santo Domingo in June 1858, overthrew both Báez and Valverde and installed himself as president.

In 1861, after imprisoning, silencing, exiling, and executing many of his opponents, and for political and economic reasons, Santana signed a pact with the Spanish Crown and reverted the Dominican nation to colonial status. This action was supported by the cattlemen of the south, while the northern elites opposed it. Spanish rule finally came to an end with the War of Restoration in 1865, after four years of conflict between Dominican nationalists and Spanish sympathizers. The war claimed more than 50,000 lives.

Political strife again prevailed in the following years: warlords ruled, military revolts were extremely common, and the nation amassed debt. In 1869 President Ulysses S. Grant ordered U.S. Marines to the island for the first time. Pirates operating from Haiti had been raiding U.S. commercial shipping in the Caribbean, and Grant directed the Marines to stop them at their source. Following the virtual takeover of the island, Báez offered to sell the country to the United States. Grant desired a naval base at Samaná and also a place for resettling newly freed Blacks. The treaty, which included U.S. payment of $1.5 million for Dominican debt repayment, was defeated in the United States Senate in 1870 on a vote of 28–28, two-thirds being required for ratification.

Báez was toppled in 1874, returned, and was toppled for good in 1878. A new generation was thenceforth in charge, with the passing of Santana (he died in 1864) and Báez from the scene. Relative peace came to the country in the 1880s, which saw the coming to power of General Ulises Heureaux. "Lilís", as the new president was nicknamed, enjoyed a period of popularity. He was, however, "a consummate dissembler", who put the nation deep into debt while using much of the proceeds for his personal use and to maintain his police state. Heureaux became rampantly despotic and unpopular, and in 1899 he was assassinated. However, the relative calm over which he presided had allowed improvement in the Dominican economy: the sugar industry was modernized, and the country attracted foreign workers and immigrants. From 1902 on, short-lived governments were again the norm, with their power usurped by caudillos in parts of the country.
Furthermore, the national government was bankrupt and, unable to pay Heureaux's debts, faced the threat of military intervention by France and other European creditor powers. United States President Theodore Roosevelt sought to prevent European intervention, largely to protect the routes to the future Panama Canal, which was already under construction. He made a small military intervention to ward off the European powers, proclaimed his famous Roosevelt Corollary to the Monroe Doctrine, and in 1905 obtained a Dominican agreement for U.S. administration of Dominican customs, which was the chief source of income for the Dominican government. A 1906 agreement provided for the arrangement to last 50 years. The United States agreed to use part of the customs proceeds to reduce the immense foreign debt of the Dominican Republic, and assumed responsibility for that debt.

After six years in power, President Ramón Cáceres (who had himself assassinated Heureaux) was assassinated in 1911. The result was several years of great political instability and civil war. U.S. mediation by the William Howard Taft and Woodrow Wilson administrations achieved only a short respite each time. A political deadlock in 1914 was broken after an ultimatum by Wilson telling the Dominicans to choose a president or see the U.S. impose one. A provisional president was chosen, and later the same year relatively free elections put former president (1899–1902) Juan Isidro Jimenes Pereyra back in power. To achieve a more broadly supported government, Jimenes named opposition individuals to his cabinet. But this brought no peace and, with his former Secretary of War Desiderio Arias maneuvering to depose him, and despite a U.S. offer of military aid against Arias, Jimenes resigned on May 7, 1916.

Wilson thus ordered the U.S. occupation of the Dominican Republic. U.S. Marines landed on May 16, 1916, and had control of the country two months later. The military government established by the U.S., led by Vice Admiral Harry Shepard Knapp, was widely repudiated by Dominicans, with many factions within the country leading guerrilla campaigns against U.S. forces. The occupation regime kept most Dominican laws and institutions and largely pacified the general population. The occupying government also revived the Dominican economy, reduced the nation's debt, built a road network that at last interconnected all regions of the country, and created a professional National Guard to replace the warring partisan units.

Vigorous opposition to the occupation continued nevertheless, and after World War I it increased in the U.S. as well. There, President Warren G. Harding (1921–23), Wilson's successor, worked to put an end to the occupation, as he had promised to do during his campaign. The U.S. government's rule ended in October 1922, and elections were held in March 1924. The victor was former president (1902–03) Horacio Vásquez, who had cooperated with the U.S. He was inaugurated on July 13, and the last U.S. forces left in September. In six years, the Marines had been involved in at least 467 engagements, with 950 insurgents killed or wounded in action.

Vásquez gave the country six years of stable governance, in which political and civil rights were respected and the economy grew strongly, in a relatively peaceful atmosphere. During the government of Horacio Vásquez, Rafael Trujillo held the rank of lieutenant colonel and was chief of police, a position that helped him launch his plans to overthrow the government of Vásquez.
Trujillo had the support of Carlos Rosario Peña, who formed the Civic Movement, whose main objective was to overthrow the government of Vásquez. In February 1930, when Vásquez attempted to win another term, his opponents rebelled in secret alliance with the commander of the National Army (the former National Guard), General Rafael Leonidas Trujillo Molina. Trujillo secretly cut a deal with rebel leader Rafael Estrella Ureña: in return for letting Ureña take power, Trujillo would be allowed to run for president in new elections. As the rebels marched toward Santo Domingo, Vásquez ordered Trujillo to suppress them. However, feigning "neutrality", Trujillo kept his men in barracks, allowing Ureña's rebels to take the capital virtually uncontested. On March 3, Ureña was proclaimed acting president, with Trujillo confirmed as head of the police and the army. As per their agreement, Trujillo became the presidential nominee of the newly formed Patriotic Coalition of Citizens (Spanish: "Coalición patriótica de los ciudadanos"), with Ureña as his running mate. During the election campaign, Trujillo used the army to unleash his repression, forcing his opponents to withdraw from the race. In May he was elected president virtually unopposed after the violent campaign against his opponents, ascending to power on August 16, 1930.

There was considerable economic growth during Rafael Trujillo's long and iron-fisted regime, although a great deal of the wealth was taken by the dictator and other regime elements. There was progress in healthcare, education, and transportation, with the building of hospitals, clinics, schools, roads and harbors. Trujillo also carried out an important housing construction program and instituted a pension plan. He finally negotiated an undisputed border with Haiti in 1935 and achieved the end of the 50-year customs agreement in 1941, instead of 1956. He made the country debt-free in 1947. All this was accompanied by absolute repression and the copious use of murder, torture, and terrorist methods against the opposition. Trujillo renamed Santo Domingo "Ciudad Trujillo" (Trujillo City), renamed the nation's (and the Caribbean's) highest mountain, La Pelona Grande (Spanish for "The Great Bald"), Pico Trujillo (Trujillo Peak), and renamed many towns and a province as well. Some other places he renamed after members of his family. By the end of his first term in 1934 he was the country's wealthiest person, and one of the wealthiest in the world by the early 1950s; near the end of his regime his fortune was an estimated $800 million.

He used the secret police extensively to eliminate political opposition and to prevent several coup attempts during and after World War II. The secret police allegedly murdered more than 500,000 people during the Trujillo era. Although one-quarter Haitian himself, Trujillo promoted propaganda against Haitians. In 1937 he ordered what became known as the Parsley Massacre or, in the Dominican Republic, as "El Corte" (The Cutting), directing the army to kill Haitians living on the Dominican side of the border. The army killed an estimated 17,000 to 35,000 Haitian men, women, and children over six days, from the night of October 2, 1937, through October 8, 1937. To avoid leaving evidence of the army's involvement, the soldiers used edged weapons rather than guns.
The soldiers were said to have interrogated anyone with dark skin, using the shibboleth "perejil" (parsley) to distinguish Haitians from Afro-Dominicans when necessary; the 'r' of "perejil" was difficult for Haitians to pronounce. As a result of the massacre, the Dominican Republic agreed to pay Haiti US$750,000, later reduced to US$525,000. In 1938, reports from the Dominican Republic revealed that hundreds more Haitians had been killed and thousands deported.

On November 25, 1960, Trujillo had three of the four Mirabal sisters killed; they were nicknamed "Las Mariposas" (The Butterflies). The victims were Patria Mercedes Mirabal (born February 27, 1924), Argentina Minerva Mirabal (born March 12, 1926), and Antonia María Teresa Mirabal (born October 15, 1935). Along with their husbands, the sisters had been conspiring to overthrow Trujillo in a violent revolt; the Mirabals had communist ideological leanings, as did their husbands. The sisters have received many honors posthumously and have many memorials in various cities in the Dominican Republic. Salcedo, their home province, changed its name to Provincia Hermanas Mirabal (Mirabal Sisters Province). The International Day for the Elimination of Violence against Women is observed on the anniversary of their deaths.

For a long time, the U.S. and the Dominican elite supported the Trujillo government. This support persisted despite the assassinations of political opponents, the massacre of Haitians, and Trujillo's plots against other countries. The U.S. believed Trujillo was the lesser of two or more evils. The U.S. finally broke with Trujillo in 1960, after Trujillo's agents attempted to assassinate the Venezuelan president, Rómulo Betancourt, a fierce critic of Trujillo. Trujillo had become expendable.

Dissidents inside the Dominican Republic argued that assassination was the only certain way to remove Trujillo. According to Chester Bowles, the U.S. Undersecretary of State, internal Department of State discussions on the topic in 1961 were vigorous. Richard N. Goodwin, Assistant Special Counsel to the President, who had direct contacts with the rebel alliance, argued for intervention against Trujillo. Quoting Bowles directly: "The next morning I learned that in spite of the clear decision against having the dissident group request our assistance Dick Goodwin following the meeting sent a cable to CIA people in the Dominican Republic without checking with State or CIA; indeed, with the protest of the Department of State. The cable directed the CIA people in the Dominican Republic to get this request at any cost. When Allen Dulles found this out the next morning, he withdrew the order. We later discovered it had already been carried out." Trujillo was assassinated on May 30, 1961, with weapons supplied by the United States Central Intelligence Agency (CIA).

In February 1963, a democratically elected government under leftist Juan Bosch took office, but it was overthrown in September. On April 24, 1965, after 19 months of military rule, a pro-Bosch revolt broke out. Days later, U.S. President Lyndon Johnson, concerned that Communists might take over the revolt and create a "second Cuba", sent in the Marines, followed immediately by the U.S. Army's 82nd Airborne Division and other elements of the XVIII Airborne Corps, in Operation Powerpack. "We don't propose to sit here in a rocking chair with our hands folded and let the Communist set up any government in the western hemisphere," Johnson said.
The forces were soon joined by comparatively small contingents from the Organization of American States. All of these remained in the country for over a year, leaving after supervising the 1966 elections, which were won by Joaquín Balaguer, who had been Trujillo's last puppet-president. The Dominican death toll for the entire period of civil war and occupation totaled more than three thousand, many of them black civilians killed when the US-backed military junta engaged in a campaign of ethnic cleansing in the northern (and industrial) part of Santo Domingo.

Balaguer remained in power as president for 12 years. His tenure was a period of repression of human rights and civil liberties, ostensibly to keep pro-Castro or pro-communist parties out of power; 11,000 persons were killed. His rule was criticized for a growing disparity between rich and poor. It was, however, praised for an ambitious infrastructure program, which included the construction of large housing projects, sports complexes, theaters, museums, aqueducts, roads, highways, and the massive Columbus Lighthouse, completed in 1992 during a later tenure.

In 1978 Balaguer was succeeded in the presidency by opposition candidate Antonio Guzmán Fernández, of the Dominican Revolutionary Party (PRD). Another PRD win followed in 1982, under Salvador Jorge Blanco. Under the PRD presidents, the Dominican Republic enjoyed a period of relative freedom and basic human rights. Balaguer regained the presidency in 1986 and was re-elected in 1990 and 1994, the last time just defeating PRD candidate José Francisco Peña Gómez, a former mayor of Santo Domingo. The 1994 elections were flawed, bringing on international pressure, to which Balaguer responded by scheduling another presidential contest in 1996. Balaguer was not a candidate; the PRSC candidate was his vice president, Jacinto Peynado Garrigosa.

In the 1996 presidential election, Leonel Fernández achieved the first-ever win for the Dominican Liberation Party (PLD), which Bosch had founded in 1973 after leaving the PRD (which he had also founded). Fernández oversaw a fast-growing economy: growth averaged 7.7% per year, unemployment fell, and there were stable exchange and inflation rates. In 2000 the PRD's Hipólito Mejía won the election, during a time of economic troubles. Mejía was defeated in his re-election effort in 2004 by Leonel Fernández of the PLD. In 2008, Fernández was elected for a third term. Fernández and the PLD are credited with initiatives that have moved the country forward technologically, such as the construction of the Metro Railway ("El Metro"); on the other hand, his administrations have been accused of corruption. Danilo Medina of the PLD was elected president in 2012 and re-elected in 2016, though a significant increase in crime, government corruption and a weak justice system threaten to overshadow his administrative period.

The Dominican Republic has the ninth-largest economy in Latin America and the largest economy in the Caribbean and Central American region. Over the last two decades the Dominican Republic has had one of the fastest-growing economies in the Americas, with an average real GDP growth rate of 5.4% between 1992 and 2014. GDP growth in 2014 and 2015 reached 7.3% and 7.0%, respectively, the highest in the Western Hemisphere. In the first half of 2016 the Dominican economy grew 7.4%, continuing its trend of rapid economic growth. Recent growth has been driven by construction, manufacturing, tourism, and mining.
Private consumption has been strong, as a result of low inflation (under 1% on average in 2015), job creation, and a high level of remittances.

The 20th century brought many prominent Dominican writers and saw a general rise in the standing of Dominican literature. Writers such as Juan Bosch (one of the greatest storytellers in Latin America), Pedro Mir (national poet of the Dominican Republic), Aida Cartagena Portalatín (poetess par excellence, who spoke out in the era of Rafael Trujillo), Emilio Rodríguez Demorizi (the most important Dominican historian, with more than 1,000 written works), Manuel del Cabral (the main Dominican poet of black poetry), Héctor Incháustegui Cabral (considered one of the most prominent voices of twentieth-century Caribbean social poetry), Miguel Alfonseca (a poet of the Generation of '60), René del Risco (an acclaimed poet who was a participant in the June 14 Movement), and Mateo Morrison (a distinguished poet and writer with numerous awards), among many more prolific authors, placed the island among the most important literary centers of the twentieth century. Newer 21st-century Dominican writers have not yet achieved the renown of their 20th-century counterparts. However, writers such as Frank Báez (who won the 2006 Santo Domingo Book Fair First Prize), Junot Díaz (who won the 2008 Pulitzer Prize for Fiction for his novel "The Brief Wondrous Life of Oscar Wao") and Emil Cerda (who won the Premio Joven Destacado Award 2019 for his novel "Más allá de lo espiritual Vol. 1") lead Dominican literature in the 21st century.

The Dominican Republic comprises the eastern five-eighths of Hispaniola, the second largest island in the Greater Antilles, with the Atlantic Ocean to the north and the Caribbean Sea to the south. It shares the island at roughly a 2:1 ratio with Haiti; the border between the two countries runs north to south, though somewhat irregularly. To the north and north-west lie The Bahamas and the Turks and Caicos Islands, and to the east, across the Mona Passage, the US Commonwealth of Puerto Rico. The country's area is reported variously by different sources, among them its embassy in the United States; it is the second largest country in the Antilles, after Cuba. The Dominican Republic's capital and largest city, Santo Domingo, is on the southern coast.

The Dominican Republic has four important mountain ranges. The most northerly is the "Cordillera Septentrional" ("Northern Mountain Range"), which extends from the northwestern coastal town of Monte Cristi, near the Haitian border, to the Samaná Peninsula in the east, running parallel to the Atlantic coast. The highest range in the Dominican Republic (indeed, in the whole of the West Indies) is the "Cordillera Central" ("Central Mountain Range"). It gradually bends southwards and finishes near the town of Azua, on the Caribbean coast. In the Cordillera Central are the four highest peaks in the Caribbean: Pico Duarte, La Pelona, La Rucilla, and Pico Yaque. In the southwest corner of the country, south of the Cordillera Central, there are two other ranges: the more northerly of the two is the "Sierra de Neiba", while in the south the "Sierra de Bahoruco" is a continuation of the Massif de la Selle in Haiti. There are other, minor mountain ranges, such as the "Cordillera Oriental" ("Eastern Mountain Range"), "Sierra Martín García", "Sierra de Yamasá", and "Sierra de Samaná". Between the Central and Northern mountain ranges lies the rich and fertile Cibao valley.
This major valley is home to the cities of Santiago and La Vega and most of the farming areas of the nation. Rather less productive are the semi-arid San Juan Valley, south of the Central Cordillera, and the Neiba Valley, tucked between the Sierra de Neiba and the Sierra de Bahoruco. Much of the land around the Enriquillo Basin is below sea level, with a hot, arid, desert-like environment. There are other smaller valleys in the mountains, such as the Constanza, Jarabacoa, Villa Altagracia, and Bonao valleys.

The "Llano Costero del Caribe" ("Caribbean Coastal Plain") is the largest of the plains in the Dominican Republic. Stretching north and east of Santo Domingo, it contains many sugar plantations in the savannahs that are common there. West of Santo Domingo its width narrows as it hugs the coast, finishing at the mouth of the Ocoa River. Another large plain is the "Plena de Azua" ("Azua Plain"), a very arid region in Azua Province. A few other small coastal plains are on the northern coast and in the Pedernales Peninsula.

Four major rivers drain the numerous mountains of the Dominican Republic. The Yaque del Norte is the longest and most important Dominican river; it carries excess water down from the Cibao Valley and empties into Monte Cristi Bay, in the northwest. Likewise, the Yuna River serves the Vega Real and empties into Samaná Bay, in the northeast. Drainage of the San Juan Valley is provided by the San Juan River, a tributary of the Yaque del Sur, which empties into the Caribbean in the south. The Artibonito is the longest river of Hispaniola and flows westward into Haiti.

There are many lakes and coastal lagoons. The largest lake is Enriquillo, a salt lake lying below sea level at the lowest elevation in the Caribbean. Other important lakes are Laguna de Rincón (or Cabral), with fresh water, and Laguna de Oviedo, a lagoon with brackish water.

There are many small offshore islands and cays that form part of the Dominican territory. The two largest islands near shore are Saona, in the southeast, and Beata, in the southwest. Smaller islands include the Cayos Siete Hermanos, Isla Cabra, Cayo Jackson, Cayo Limón, Cayo Levantado, Cayo la Bocaina, Catalanita, Cayo Pisaje and Isla Alto Velo. To the north lie three extensive, largely submerged banks, which geographically are a southeast continuation of the Bahamas: Navidad Bank, Silver Bank, and Mouchoir Bank. Navidad Bank and Silver Bank have been officially claimed by the Dominican Republic. Isla Cabritos lies within Lago Enriquillo.

The Dominican Republic is located near fault action in the Caribbean. In 1946 it suffered a magnitude 8.1 earthquake off the northeast coast, triggering a tsunami that killed about 1,800 people, mostly in coastal communities. Caribbean countries and the United States have collaborated to create tsunami warning systems and are mapping high-risk low-lying areas.

The Dominican Republic has a tropical rainforest climate in the coastal and lowland areas. Owing to its diverse topography, the Dominican Republic's climate shows considerable variation over short distances and is the most varied of all the Antilles. Average temperatures are cooler at higher elevations and warmer near sea level; notably low temperatures are possible in the mountains, while high temperatures occur in protected valleys. January and February are the coolest months of the year, while August is the hottest month.
Snowfall can be seen on rare occasions on the summit of Pico Duarte. The wet season along the northern coast lasts from November through January. Elsewhere the wet season stretches from May through November, with May being the wettest month. Average annual rainfall is countrywide, with individual locations in the Valle de Neiba seeing averages as low as while the Cordillera Oriental averages . The driest part of the country lies in the west. Tropical cyclones strike the Dominican Republic every couple of years, with 65% of the impacts along the southern coast. Hurricanes are most likely between June and October. The last major hurricane that struck the country was Hurricane Georges in 1998. The Dominican Republic is a representative democracy or democratic republic, with three branches of power: executive, legislative, and judicial. The president of the Dominican Republic heads the executive branch and executes laws passed by the congress, appoints the cabinet, and is commander in chief of the armed forces. The president and vice-president run for office on the same ticket and are elected by direct vote for 4-year terms. The national legislature is bicameral, composed of a senate, which has 32 members, and the Chamber of Deputies, with 178 members. Judicial authority rests with the Supreme Court of Justice's 16 members. They are appointed by a council composed of the president, the leaders of both houses of Congress, the President of the Supreme Court, and an opposition or non–governing-party member. The court "alone hears actions against the president, designated members of his Cabinet, and members of Congress when the legislature is in session." The Dominican Republic has a multi-party political system. Elections are held every two years, alternating between the presidential elections, which are held in years evenly divisible by four, and the congressional and municipal elections, which are held in even-numbered years not divisible by four (see the short sketch after this passage). "International observers have found that presidential and congressional elections since 1996 have been generally free and fair." The Central Elections Board (JCE) of nine members supervises elections, and its decisions are unappealable. Starting in 2016, after a constitutional reform, elections are held jointly. The three major parties are the conservative Social Christian Reformist Party (), in power 1966–78 and 1986–96; the social democratic Dominican Revolutionary Party (), in power in 1963, 1978–86, and 2000–04; and the Dominican Liberation Party (), in power 1996–2000 and since 2004. The presidential elections of 2008 were held on May 16, 2008, with incumbent Leonel Fernández winning 53% of the vote. He defeated Miguel Vargas Maldonado, of the PRD, who achieved a 40.48% share of the vote. Amable Aristy, of the PRSC, achieved 4.59% of the vote. Other minority candidates, including former Attorney General Guillermo Moreno of the Movement for Independence, Unity and Change (), and former PRSC presidential candidate and defector Eduardo Estrella, obtained less than 1% of the vote. In the 2012 presidential elections, the incumbent president Leonel Fernández (PLD) declined to run, and the PLD instead nominated Danilo Medina as its candidate. This time the PRD presented ex-president Hipólito Mejía as its choice. The contest was won by Medina with 51.21% of the vote, against 46.95% in favor of Mejía. Candidate Guillermo Moreno obtained 1.37% of the votes. 
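The alternating election calendar described above is mechanical enough to capture in a few lines of code. A minimal sketch, assuming the pre-2016 alternation and the post-reform unification exactly as stated in the text (the function name and return strings are illustrative, not an official encoding):

```python
def dominican_election_type(year: int) -> str:
    """Classify a year under the election calendar described above.

    Before the 2016 reform: presidential elections fell in years evenly
    divisible by four (2000, 2004, ...); congressional and municipal
    elections fell in the remaining even years (2002, 2006, ...).
    From 2016 onward, elections are held jointly.
    """
    if year % 2 != 0:
        return "no general election"
    if year >= 2016:
        return "joint general election (post-reform)"
    return "presidential" if year % 4 == 0 else "congressional and municipal"

for y in (2008, 2010, 2012, 2016):
    print(y, dominican_election_type(y))
# 2008 presidential, 2010 congressional and municipal, 2012 presidential, 2016 joint
```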
In 2014, the Modern Revolutionary Party () was created by a faction of leaders from the PRD and has since become the predominant opposition party, polling in second place for the upcoming May 2016 general elections. The Dominican Republic has a close relationship with the United States, especially with the Commonwealth of Puerto Rico, and with the other states of the Inter-American system. The Dominican Republic's relationship with neighbouring Haiti is strained over mass Haitian migration to the Dominican Republic, with citizens of the Dominican Republic blaming the Haitians for increased crime and other social problems. The Dominican Republic is a regular member of the Organisation Internationale de la Francophonie. The Dominican Republic has a Free Trade Agreement with the United States, Costa Rica, El Salvador, Guatemala, Honduras and Nicaragua via the Dominican Republic–Central America Free Trade Agreement, and an Economic Partnership Agreement with the European Union and the Caribbean Community via the Caribbean Forum. Congress authorizes a combined military force of 44,000 active duty personnel. Actual active duty strength is approximately 32,000. Approximately 50% of those are used for non-military activities, such as providing security at government-owned non-military facilities, highway toll stations, prisons, forestry sites, state enterprises, and private businesses. The commander in chief of the military is the president. The army is larger than the other services combined, with approximately 56,780 active duty personnel, consisting of six infantry brigades, a combat support brigade, and a combat service support brigade. The air force operates two main bases, one in the southern region near Santo Domingo and one in the northern region near Puerto Plata. The navy operates two major naval bases, one in Santo Domingo and one in Las Calderas on the southwestern coast, and maintains 12 operational vessels. The Dominican Republic has the largest military in the Caribbean region, surpassing Cuba. The armed forces have organized a Specialized Airport Security Corps (CESA) and a Specialized Port Security Corps (CESEP) to meet international security needs in these areas. The secretary of the armed forces has also announced plans to form a specialized border corps (CESEF). The armed forces provide 75% of the personnel of the National Investigations Directorate (DNI) and the Counter-Drug Directorate (DNCD). The Dominican National Police force contains 32,000 agents. The police are not part of the Dominican armed forces but share some overlapping security functions. Sixty-three percent of the force serve in areas outside traditional police functions, similar to the situation of their military counterparts. In 2018, the Dominican Republic signed the UN Treaty on the Prohibition of Nuclear Weapons. The Dominican Republic is divided into 31 provinces. Santo Domingo, the capital, is designated Distrito Nacional (National District). The provinces are divided into municipalities ("municipios"; singular "municipio"), the second-level political and administrative subdivisions of the country. The president appoints the governors of the 31 provinces. Mayors and municipal councils administer the 124 municipal districts and the National District (Santo Domingo); they are elected at the same time as congressional representatives. The Dominican Republic is the largest economy (according to the U.S. State Department and the World Bank) in the Caribbean and Central American region. 
It is an upper middle-income developing country, with a 2015 GDP per capita of US$14,770 in PPP terms. Over the last 25 years, the Dominican Republic has had the fastest-growing economy in the Americas, with an average real GDP growth rate of 5.53% between 1992 and 2018. GDP growth in 2014 and 2015 reached 7.3% and 7.0%, respectively, the highest in the Western Hemisphere. In the first half of 2016 the Dominican economy grew 7.4%. , the average wage in nominal terms is US$392 per month (RD$17,829). The country is the site of the second largest gold mine in the world, the Pueblo Viejo mine. During the last three decades, the Dominican economy, formerly dependent on the export of agricultural commodities (mainly sugar, cocoa and coffee), has transitioned to a diversified mix of services, manufacturing, agriculture, mining, and trade. The service sector accounts for almost 60% of GDP and manufacturing for 22%; tourism, telecommunications and finance are the main components of the service sector, though none of them accounts for more than 10% of the whole. The Dominican Republic has a stock market, the Bolsa de Valores de la Republica Dominicana (BVRD), and an advanced telecommunications system and transportation infrastructure. High unemployment and income inequality are long-term challenges. International migration affects the Dominican Republic greatly, as it receives and sends large flows of migrants. Mass illegal Haitian immigration and the integration of Dominicans of Haitian descent are major issues. A large Dominican diaspora, mostly in the United States, contributes to development by sending billions of dollars to Dominican families in remittances. Remittances to the Dominican Republic increased to US$4,571.30 million in 2014 from US$3,333 million in 2013 (according to data reported by the Inter-American Development Bank). Economic growth takes place in spite of a chronic energy shortage, which causes frequent blackouts and very high prices. Despite a widening merchandise trade deficit, tourism earnings and remittances have helped build foreign exchange reserves. Following economic turmoil in the late 1980s and 1990, during which the gross domestic product (GDP) fell by up to 5% and consumer price inflation reached an unprecedented 100%, the Dominican Republic entered a period of growth and declining inflation until 2002, after which the economy entered a recession. This recession followed the collapse of the second-largest commercial bank in the country, Baninter, linked to a major incident of fraud valued at US$3.5 billion. The Baninter fraud had a devastating effect on the Dominican economy, with GDP dropping by 1% in 2003 as inflation ballooned by over 27%. All defendants, including the star of the trial, Ramón Báez Figueroa (the great-grandson of President Buenaventura Báez), were convicted. According to the 2005 Annual Report of the United Nations Subcommittee on Human Development in the Dominican Republic, the country is ranked No. 71 in the world for resource availability, No. 79 for human development, and No. 14 in the world for resource mismanagement. These statistics emphasize national government corruption, foreign economic interference in the country, and the rift between the rich and poor. The Dominican Republic has a noted problem of child labor in its coffee, rice, sugarcane, and tomato industries. The labor injustices in the sugarcane industry extend to forced labor, according to the U.S. Department of Labor. 
Three large groups own 75% of the land: the State Sugar Council (Consejo Estatal del Azúcar, CEA), Grupo Vicini, and Central Romana Corporation. According to the 2016 Global Slavery Index, an estimated 104,800 people are enslaved in the modern-day Dominican Republic, or 1.00% of the population. Some slaves in the Dominican Republic are held on sugar plantations, guarded by men on horseback with rifles, and forced to work. The Dominican peso (abbreviated $ or RD$; ISO 4217 code "DOP") is the national currency, with the United States dollar, the euro, the Canadian dollar and the Swiss franc also accepted at most tourist sites. The exchange rate to the U.S. dollar, liberalized by 1985, stood at 2.70 pesos per dollar in August 1986, 14.00 pesos in 1993, and 16.00 pesos in 2000; the rate was 50.08 pesos per dollar. The Dominican Republic is the most visited destination in the Caribbean. The year-round golf courses are major attractions. A geographically diverse nation, the Dominican Republic is home to both the Caribbean's tallest mountain peak, Pico Duarte, and the Caribbean's largest lake and point of lowest elevation, Lake Enriquillo. The island has an average temperature of and great climatic and biological diversity. The country is also the site of the first cathedral, castle, monastery, and fortress built in the Americas, located in Santo Domingo's Colonial Zone, a World Heritage Site. Tourism is one of the fueling factors in the Dominican Republic's economic growth. The Dominican Republic is the most popular tourist destination in the Caribbean. With the construction of projects like Cap Cana, San Souci Port in Santo Domingo, Casa de Campo and the Hard Rock Hotel & Casino (formerly the Moon Palace Resort) in Punta Cana, the Dominican Republic expects increased tourism activity in the upcoming years. Ecotourism has also become an increasingly important topic in the nation, with towns like Jarabacoa and neighboring Constanza, and locations like Pico Duarte, Bahia de las Aguilas, and others becoming more significant in efforts to increase direct benefits from tourism. Most visitors from other countries are required to obtain a tourist card, depending on the country they live in. In the last 10 years the Dominican Republic has become one of the world's notably progressive states in terms of recycling and waste disposal. A UN report cited a 221.3% efficiency increase in the previous 10 years due, in part, to the opening of the largest open-air landfill site, located in the north 10 km from the Haitian border. The country has three national trunk highways, which connect every major town. These are DR-1, DR-2, and DR-3, which depart from Santo Domingo toward the northern (Cibao), southwestern (Sur), and eastern (El Este) parts of the country respectively. These highways have been consistently improved with the expansion and reconstruction of many sections. Two other national highways serve as spur (DR-5) or alternative routes (DR-4). In addition to the national highways, the government has embarked on an expansive reconstruction of spur secondary routes, which connect smaller towns to the trunk routes. In the last few years the government constructed a 106-kilometer toll road that connects Santo Domingo with the country's northeastern peninsula. Travelers may now arrive in the Samaná Peninsula in less than two hours. Other additions are the reconstruction of the DR-28 (Jarabacoa – Constanza) and DR-12 (Constanza – Bonao). 
Despite these efforts, many secondary routes still remain either unpaved or in need of maintenance. There is currently a nationwide program to pave these and other commonly used routes. Also, the Santiago light rail system is in planning stages but currently on hold. There are two main bus transportation services in the Dominican Republic: one controlled by the government, through the Oficina Técnica de Transito Terrestre (OTTT) and the Oficina Metropolitana de Servicios de Autobuses (OMSA), and the other controlled by private businesses, among them the Federación Nacional de Transporte La Nueva Opción (FENATRANO) and the Confederacion Nacional de Transporte (CONATRA). The government transportation system covers large routes in metropolitan areas such as Santo Domingo and Santiago. There are many privately owned bus companies, such as Metro Servicios Turísticos and Caribe Tours, that run daily routes. The Dominican Republic has a rapid transit system in Santo Domingo, the country's capital. It is the most extensive metro system in the insular Caribbean and Central American region by length and number of stations. The Santo Domingo Metro is part of a major "National Master Plan" to improve transportation in Santo Domingo as well as the rest of the nation. The first line was planned to relieve traffic congestion on Máximo Gómez and Hermanas Mirabal Avenues. The second line, which opened in April 2013, is meant to relieve the congestion along the Duarte–Kennedy–Centenario Corridor in the city from west to east. The current length of the Metro, with the sections of the two lines open , is . Before the opening of the second line, 30,856,515 passengers rode the Santo Domingo Metro in 2012. With both lines opened, ridership increased to 61,270,054 passengers in 2014. The Dominican Republic has a well-developed telecommunications infrastructure, with extensive mobile phone and landline services. Cable Internet and DSL are available in most parts of the country, and many Internet service providers offer 3G wireless internet service. The Dominican Republic became the second country in Latin America to have 4G LTE wireless service. The reported speeds are from 1 Mbit/s up to 100 Mbit/s for residential services. For commercial service there are speeds from 256 kbit/s up to 154 Mbit/s. (Each set of numbers denotes downstream/upstream speed; that is, to the user/from the user.) Projects to extend Wi-Fi hot spots have been made in Santo Domingo. The country's commercial radio stations and television stations are in the process of transferring to the digital spectrum, via HD Radio and HDTV, after officially adopting ATSC as the digital medium in the country, with a switch-off of analog transmission by September 2015. The telecommunications regulator in the country is INDOTEL ("Instituto Dominicano de Telecomunicaciones"). The largest telecommunications company is Claro – part of Carlos Slim's América Móvil – which provides wireless, landline, broadband, and IPTV services. In June 2009 there were more than 8 million phone line subscribers (land and cell users) in the D.R., representing 81% of the country's population and a fivefold increase since the year 2000, when there were 1.6 million. The communications sector generates about 3.0% of the GDP. There were 2,439,997 Internet users in March 2009. 
In November 2009, the Dominican Republic became the first Latin American country to pledge to include a "gender perspective" in every information and communications technology (ICT) initiative and policy developed by the government. This is part of the regional eLAC2010 plan. The tool the Dominicans have chosen to design and evaluate all the public policies is the APC Gender Evaluation Methodology (GEM). Electric power service has been unreliable since the Trujillo era, and as much as 75% of the equipment is that old. The country's antiquated power grid causes transmission losses that account for a large share of billed electricity from generators. The privatization of the sector started under a previous administration of Leonel Fernández. The recent investment in a 345-kilovolt "Santo Domingo–Santiago Electrical Highway", with reduced transmission losses, is being heralded as the most significant capital improvement to the national grid since the mid-1960s. During the Trujillo regime electrical service was introduced to many cities; almost 95% of usage was not billed at all. Around half of the Dominican Republic's 2.1 million houses have no meters, and most do not pay or pay a fixed monthly rate for their electric service. Household and general electrical service is delivered at 110 volts alternating at 60 Hz, so electrically powered items from the United States work with no modifications. The majority of the Dominican Republic has access to electricity. Tourist areas tend to have more reliable power, as do business, travel, healthcare, and vital infrastructure. Concentrated efforts were announced to increase the efficiency of delivery to places where the collection rate reached 70%. The electricity sector is highly politicized. Some generating companies are undercapitalized and at times unable to purchase adequate fuel supplies. The Dominican Republic's population was in . In 2010, 31.2% of the population was under 15 years of age, with 6% of the population over 65 years of age. There were an estimated 102.3 males for every 100 females in 2020. The annual population growth rate for 2006–2007 was 1.5%, with the projected population for the year 2015 being 10,121,000. The population density in 2007 was 192 per km² (498 per sq mi), and 63% of the population lived in urban areas. The southern coastal plains and the Cibao Valley are the most densely populated areas of the country. The capital city Santo Domingo had a population of 2,907,100 in 2010. Other important cities are Santiago de los Caballeros (pop. 745,293), La Romana (pop. 214,109), San Pedro de Macorís (pop. 185,255), Higüey (pop. 153,174), San Francisco de Macorís (pop. 132,725), Puerto Plata (pop. 118,282), and La Vega (pop. 104,536). Per the United Nations, the urban population growth rate for 2000–2005 was 2.3%. In a 2014 population survey, 70.4% self-identified as mixed (mestizo/indio 58%, mulatto 12.4%), 15.8% as black, 13.5% as white, and 0.3% as "other". Ethnic immigrant groups in the country include West Asians—mostly Lebanese, Syrians, and Palestinians. East Asians, primarily ethnic Chinese and Japanese, can also be found. Europeans are represented mostly by Spanish whites, but also by smaller populations of German Jews, Italians, Portuguese, British, Dutch, Danes, and Hungarians. Some converted Sephardic Jews from Spain were part of early expeditions; only Catholics were allowed to come to the New World. Later there were Jewish migrants coming from the Iberian peninsula and other parts of Europe in the 1700s. 
Some managed to reach the Caribbean as refugees during and after the Second World War. Some Sephardic Jews reside in Sosúa, while others are dispersed throughout the country. Self-identified Jews number about 3,000; other Dominicans may have some Jewish ancestry because of marriages among converted Jewish Catholics and other Dominicans since the colonial years. Some Dominicans born in the United States now reside in the Dominican Republic, creating a kind of expatriate community. The population of the Dominican Republic is mostly Spanish-speaking. The local variant of Spanish is called Dominican Spanish, which closely resembles other Spanish vernaculars in the Caribbean and has similarities to Canarian Spanish. In addition, it has influences from African languages and borrowed words from indigenous Caribbean languages particular to the island of Hispaniola. Schools are based on a Spanish educational model; English and French are mandatory foreign languages in both private and public schools, although the quality of foreign-language teaching is poor. Some private educational institutes provide teaching in other languages, notably Italian, Japanese and Mandarin. Haitian Creole is the largest minority language in the Dominican Republic and is spoken by Haitian immigrants and their descendants. There is a community of a few thousand people in the Samaná Peninsula whose ancestors spoke Samaná English. They are the descendants of formerly enslaved African Americans who arrived in the nineteenth century, but only a few elders speak the language today. Tourism, American pop culture, the influence of Dominican Americans, and the country's economic ties with the United States motivate other Dominicans to learn English. The Dominican Republic is ranked 2nd in Latin America and 23rd in the world in English proficiency. Religious affiliation is approximately 95.0% Christian, 2.6% no religion, and 2.2% other religions. 57% of the population (5.7 million) identified themselves as Roman Catholics and 23% (2.3 million) as Protestants (in Latin American countries, Protestants are often called "Evangelicos" because they emphasize personal and public evangelising, and many are Evangelical Protestant or of a Pentecostal group). From 1896 to 1907, missionaries from the Episcopal, Free Methodist, Seventh-day Adventist and Moravian churches began work in the Dominican Republic. Three percent of the 10.63 million Dominican Republic population are Seventh-day Adventists. Recent immigration as well as proselytizing efforts have brought in other religious groups, with the following shares of the population: Spiritist: 2.2%, The Church of Jesus Christ of Latter-day Saints: 1.3%, Buddhist: 0.1%, Bahá'í: 0.1%, Chinese Folk Religion: 0.1%, Islam: 0.02%, Judaism: 0.01%. The Catholic Church began to lose its strong dominance in the late 19th century. This was due to a lack of funding, priests, and support programs. During the same time, Protestant Evangelicalism began to gain wider support "with their emphasis on personal responsibility and family rejuvenation, economic entrepreneurship, and biblical fundamentalism". The Dominican Republic has two Catholic patroness saints: "Nuestra Señora de la Altagracia" (Our Lady of High Grace) and "Nuestra Señora de las Mercedes" (Our Lady of Mercy). The Dominican Republic has historically granted extensive religious freedom. According to the United States Department of State, "The constitution specifies that there is no state church and provides for freedom of religion and belief. 
A concordat with the Vatican designates Catholicism as the official religion and extends special privileges to the Catholic Church not granted to other religious groups. These include the legal recognition of church law, use of public funds to underwrite some church expenses, and complete exoneration from customs duties." In the 1950s, restrictions were placed upon churches by the government of Trujillo. Letters of protest were sent against the mass arrests of government adversaries. Trujillo began a campaign against the Catholic Church and planned to arrest priests and bishops who preached against the government. The campaign ended with his assassination, before it was put into place. During World War II a group of Jews escaping Nazi Germany fled to the Dominican Republic and founded the city of Sosúa. It has remained the center of the Jewish population since. In the 20th century, many Arabs (from Lebanon, Syria, and Palestine), Japanese, and, to a lesser degree, Koreans settled in the country as agricultural laborers and merchants. Chinese companies found business in telecommunications, mining, and railroads. The Arab community is growing at an increasing rate and is estimated at 80,000. In addition, there are descendants of immigrants who came from other Caribbean islands, including St. Kitts and Nevis, Antigua, St. Vincent, Montserrat, Tortola, St. Croix, St. Thomas, and Guadeloupe. They worked on sugarcane plantations and docks and settled mainly in the cities of San Pedro de Macorís and Puerto Plata. Puerto Rican and, to a lesser extent, Cuban immigrants fled to the Dominican Republic from the mid-1800s until about 1940 due to a poor economy and social unrest in their respective home countries. Many Puerto Rican immigrants settled in Higüey, among other cities, and quickly assimilated due to the similar culture. Before and during World War II, 800 Jewish refugees moved to the Dominican Republic. Numerous immigrants have come from other Caribbean countries, as the country has offered economic opportunities. There are about 32,000 Jamaicans living in the Dominican Republic. There is an increasing number of Puerto Rican immigrants, especially in and around Santo Domingo; they are believed to number around 10,000. There are over 700,000 people of Haitian descent, including a generation born in the Dominican Republic. Haiti, the Dominican Republic's neighboring nation, is considerably poorer and less developed, and is the least developed country in the Western Hemisphere. In 2003, 80% of all Haitians were poor (54% living in abject poverty) and 47.1% were illiterate. The country of nine million people also has a fast-growing population, but over two-thirds of the labor force lack formal jobs. Haiti's per capita GDP (PPP) was $1,800 in 2017, or just over one-tenth of the Dominican figure. As a result, hundreds of thousands of Haitians have migrated to the Dominican Republic, with some estimates of 800,000 Haitians in the country, while others put the Haitian-born population as high as one million. They usually work at low-paying and unskilled jobs in building construction, house cleaning, and sugar plantations. There have been accusations that some Haitian immigrants work in slavery-like conditions and are severely exploited. Due to the lack of basic amenities and medical facilities in Haiti, a large number of Haitian women, often arriving with several health problems, cross the border to Dominican soil. 
They deliberately come during their last weeks of pregnancy to obtain medical attention for childbirth, since Dominican public hospitals do not refuse medical services based on nationality or legal status. Statistics from a hospital in Santo Domingo report that over 22% of childbirths are by Haitian mothers. Haiti also suffers from severe environmental degradation. Deforestation is rampant in Haiti; today less than 4 percent of Haiti's forests remain, and in many places the soil has eroded right down to the bedrock. Haitians burn wood charcoal for 60% of their domestic energy production. Because Haiti is running out of plant material to burn, some Haitian bootleggers have created an illegal market for charcoal on the Dominican side. Conservative estimates calculate the illegal movement of 115 tons of charcoal per week from the Dominican Republic to Haiti. Dominican officials estimate that at least 10 trucks per week are crossing the border loaded with charcoal. In 2005, Dominican President Leonel Fernández criticized collective expulsions of Haitians as having taken place "in an abusive and inhuman way." After a UN delegation issued a preliminary report stating that it found a profound problem of racism and discrimination against people of Haitian origin, Dominican Foreign Minister Carlos Morales Troncoso issued a formal statement denouncing it, asserting that "our border with Haiti has its problems[;] this is our reality and it must be understood. It is important not to confuse national sovereignty with indifference, and not to confuse security with xenophobia." The children of Haitian immigrants are eligible for Haitian nationality but are denied it by Haiti because of a lack of proper documents or witnesses. The first of three late-20th-century emigration waves began in 1961, after the assassination of dictator Trujillo, due to fear of retaliation by Trujillo's allies and political uncertainty in general. In 1965, the United States began a military occupation of the Dominican Republic to end a civil war. Following this, the U.S. eased travel restrictions, making it easier for Dominicans to obtain U.S. visas. From 1966 to 1978, the exodus continued, fueled by high unemployment and political repression. Communities established by the first wave of immigrants to the U.S. created a network that assisted subsequent arrivals. In the early 1980s, underemployment, inflation, and the rise in value of the dollar all contributed to a third wave of emigration from the Dominican Republic. Today, emigration from the Dominican Republic remains high. In 2012, there were approximately 1.7 million people of Dominican descent in the U.S., counting both native- and foreign-born. There was also a growing Dominican immigration to Puerto Rico, with nearly 70,000 Dominicans living there, although that number is slowly decreasing and immigration trends have reversed because of Puerto Rico's economic crisis. There is a significant Dominican population in Spain. In 2020, the Dominican Republic had an estimated birth rate of 18.5 per 1,000 and a death rate of 6.3 per 1,000. Primary education is regulated by the Ministry of Education, with education being a right of all citizens and youth in the Dominican Republic. Preschool education is organized in different cycles and serves the 2–4 age group and the 4–6 age group. Preschool education is not mandatory except for the last year. Basic education is compulsory and serves the population of the 6–14 age group. 
Secondary education is not compulsory, although it is the duty of the state to offer it for free. It caters to the 14–18 age group and is organized into a common core of four years, followed by two years of study offered in one of three modes: general or academic; vocational (industrial, agricultural, and services); or artistic. The higher education system consists of institutes and universities. The institutes offer courses of a higher technical level. The universities offer technical careers and undergraduate and graduate degrees; these are regulated by the Ministry of Higher Education, Science and Technology. In 2012, the Dominican Republic had a murder rate of 22.1 per 100,000 population. There was a total of 2,268 murders in the Dominican Republic in 2012. The Dominican Republic has become a trans-shipment point for Colombian drugs destined for Europe as well as the United States and Canada. Money-laundering via the Dominican Republic is favored by Colombian drug cartels for the ease of illicit financial transactions. In 2004, it was estimated that 8% of all cocaine smuggled into the United States had come through the Dominican Republic. The Dominican Republic responded with increased efforts to seize drug shipments, arrest and extradite those involved, and combat money-laundering. The often light treatment of violent criminals has been a continuous source of local controversy. In April 2010, five teenagers, aged 15 to 17, shot and killed two taxi drivers and killed another five by forcing them to drink drain-cleaning acid. On September 24, 2010, the teens were sentenced to prison terms of three to five years, despite the protests of the taxi drivers' families. Due to cultural syncretism, the culture and customs of the Dominican people have a European cultural basis, influenced by both African and native Taíno elements, although endogenous elements have emerged within Dominican culture; culturally, the Dominican Republic is among the most European countries in Spanish America, alongside Puerto Rico, Cuba, Central Chile, Argentina, and Uruguay. Spanish institutions predominated in the making of Dominican culture during the colonial era, as the relative success of the acculturation and cultural assimilation of African slaves diminished African cultural influence in comparison with other Caribbean countries. Music and sport are of great importance in Dominican culture, with merengue and bachata as the national music and dance, and baseball as the favorite sport. Dominican art is perhaps most commonly associated with the bright, vibrant colors and images that are sold in every tourist gift shop across the country. However, the country has a long history of fine art that goes back to the middle of the 1800s, when the country became independent and the beginnings of a national art scene emerged. Historically, the paintings of this time were centered on images connected to national independence, historical scenes, and portraits, but also landscapes and still lifes. Styles of painting ranged between neoclassicism and romanticism. Between 1920 and 1940 the art scene was influenced by styles of realism and impressionism. Dominican artists were focused on breaking from previous, academic styles in order to develop more independent and individual styles. The architecture in the Dominican Republic represents a complex blend of diverse cultures. The deep influence of the European colonists is the most evident throughout the country. 
Characterized by ornate designs and baroque structures, the style can best be seen in the capital city of Santo Domingo, which is home to the first cathedral, castle, monastery, and fortress in all of the Americas, located in the city's Colonial Zone, an area declared a World Heritage Site by UNESCO. The designs carry over into the villas and buildings throughout the country, and can also be observed on buildings that feature stucco exteriors, arched doors and windows, and red tiled roofs. The indigenous peoples of the Dominican Republic have also had a significant influence on the architecture of the country. The Taíno people relied heavily on mahogany and guano (dried palm tree leaves) to put together crafts, artwork, furniture, and houses. Utilizing mud, thatched roofs, and mahogany trees, they gave buildings and the furniture inside a natural look, seamlessly blending in with the island's surroundings. Lately, with the rise in tourism and increasing popularity as a Caribbean vacation destination, architects in the Dominican Republic have begun to incorporate cutting-edge designs that emphasize luxury. In many ways an architectural playground, villas and hotels implement new styles, while offering new takes on the old. This new style is characterized by simplified, angular corners and large windows that blend outdoor and indoor spaces. As with the culture as a whole, contemporary architects embrace the Dominican Republic's rich history and various cultures to create something new. Surveying modern villas, one can find any combination of the three major styles: a villa may contain angular, modernist building construction, Spanish Colonial-style arched windows, and a traditional Taino hammock on the bedroom balcony. Dominican cuisine is predominantly Spanish, Taíno, and African. The typical cuisine is quite similar to what can be found in other Latin American countries. One breakfast dish consists of eggs and "mangú" (mashed, boiled plantain). Heartier versions of "mangú" are accompanied by deep-fried meat (Dominican salami, typically), cheese, or both. Lunch, generally the largest and most important meal of the day, usually consists of rice, meat, beans, and salad. "La Bandera" (literally "The Flag") is the most popular lunch dish; it consists of meat and red beans on white rice. "Sancocho" is a stew often made with seven varieties of meat. Meals tend to favor meats and starches over dairy products and vegetables. Many dishes are made with "sofrito", a mix of local herbs used as a wet rub for meats and sautéed to bring out all of a dish's flavors. Throughout the south-central coast, bulgur, or whole wheat, is a main ingredient in "quipes" or "tipili" (bulgur salad). Other favorite Dominican foods include "chicharrón", "yuca", "casabe", "pastelitos" (empanadas), "batata", yam, "pasteles en hoja", "chimichurris", and "tostones". Some treats Dominicans enjoy are "arroz con leche" (or "arroz con dulce"), "bizcocho dominicano" (lit. Dominican cake), "habichuelas con dulce", flan, "frío frío" (snow cones), dulce de leche, and "caña" (sugarcane). Beverages Dominicans enjoy include "Morir Soñando", rum, beer, "Mama Juana", "batida" (smoothie), "jugos naturales" (freshly squeezed fruit juices), "mabí", coffee, and "chaca" (also called "maiz caqueao/casqueado", "maiz con dulce" and "maiz con leche"), the last item being found only in the southern provinces of the country, such as San Juan. 
Musically, the Dominican Republic is known for the internationally popular musical style and genre called "merengue", a type of lively, fast-paced rhythm and dance music with a tempo of about 120 to 160 beats per minute (though it varies), based on musical elements like drums, brass, chorded instruments, and accordion, as well as some elements unique to the Spanish-speaking Caribbean, such as the "tambora" and "güira". Its syncopated beats use Latin percussion, brass instruments, bass, and piano or keyboard. Between 1937 and 1950 merengue music was promoted internationally by Dominican groups like Billo's Caracas Boys, Chapuseaux and Damiron "Los Reyes del Merengue," Joseito Mateo, and others. Radio, television, and international media popularized it further. Some well-known merengue performers are Wilfrido Vargas, Johnny Ventura, Los Hermanos Rosario, singer-songwriter Juan Luis Guerra, Fernando Villalona, Eddy Herrera, Sergio Vargas, Toño Rosario, Milly Quezada, and Chichí Peralta. Merengue became popular in the United States, mostly on the East Coast, during the 1980s and 1990s, when many Dominican artists residing in the U.S. (particularly New York) started performing in the Latin club scene and gained radio airplay. They included Victor Roque y La Gran Manzana, Henry Hierro, Zacarias Ferreira, Aventura, and Milly Jocelyn Y Los Vecinos. The emergence of "bachata", along with an increase in the number of Dominicans living among other Latino groups in New York, New Jersey, and Florida, has contributed to Dominican music's overall growth in popularity. Bachata, a form of music and dance that originated in the countryside and rural marginal neighborhoods of the Dominican Republic, has become quite popular in recent years. Its subjects are often romantic; especially prevalent are tales of heartbreak and sadness. In fact, the original name for the genre was "amargue" ("bitterness" or "bitter music"), until the rather ambiguous (and mood-neutral) term "bachata" became popular. Bachata grew out of, and is still closely related to, the pan-Latin American romantic style called "bolero". Over time, it has been influenced by merengue and by a variety of Latin American guitar styles. Palo is an Afro-Dominican sacred music that can be found throughout the island. The drum and human voice are the principal instruments. Palo is played at religious ceremonies—usually coinciding with saints' religious feast days—as well as at secular parties and special occasions. Its roots are in the Congo region of central-west Africa, but it is mixed with European influences in the melodies. Salsa music has had a great deal of popularity in the country. During the late 1960s Dominican musicians like Johnny Pacheco, creator of the Fania All Stars, played a significant role in the development and popularization of the genre. Dominican rock and reggaeton are also popular. Many, if not most, of their performers are based in Santo Domingo and Santiago. The country boasts one of the ten most important design schools in the region, La Escuela de Diseño de Altos de Chavón, which is making the country a key player in the world of fashion and design. Noted fashion designer Oscar de la Renta was born in the Dominican Republic in 1932 and became a US citizen in 1971. He studied under the leading Spanish designer Cristóbal Balenciaga and then worked with the house of Lanvin in Paris. By 1963, he had designs bearing his own label. After establishing himself in the US, de la Renta opened boutiques across the country. 
His work blends French and Spanish fashion with American styles. Although he settled in New York, de la Renta also marketed his work in Latin America, where it became very popular, and remained active in his native Dominican Republic, where his charitable activities and personal achievements earned him the Juan Pablo Duarte Order of Merit and the Order of Cristóbal Colón. De la Renta died of complications from cancer on October 20, 2014. Some of the Dominican Republic's important symbols are the flag, the coat of arms, and the national anthem, titled "Himno Nacional". The flag has a large white cross that divides it into four quarters. Two quarters are red and two are blue. Red represents the blood shed by the liberators. Blue expresses God's protection over the nation. The white cross symbolizes the struggle of the liberators to bequeath future generations a free nation. An alternative interpretation is that blue represents the ideals of progress and liberty, whereas white symbolizes peace and unity among Dominicans. In the center of the cross is the Dominican coat of arms, in the same colors as the national flag. The coat of arms pictures a red, white, and blue flag-draped shield with a Bible, a gold cross, and arrows; the shield is surrounded by an olive branch (on the left) and a palm branch (on the right). The Bible traditionally represents the truth and the light. The gold cross symbolizes the redemption from slavery, and the arrows symbolize the noble soldiers and their proud military. A blue ribbon above the shield reads, "Dios, Patria, Libertad" (meaning "God, Fatherland, Liberty"). A red ribbon under the shield reads, "República Dominicana" (meaning "Dominican Republic"). Out of all the flags in the world, the depiction of a Bible is unique to the Dominican flag. The national flower is the Bayahibe Rose and the national tree is the West Indian Mahogany. The national bird is the "Cigua Palmera" or Palmchat ("Dulus dominicus"). The Dominican Republic celebrates Dia de la Altagracia on January 21 in honor of its patroness, Duarte's Day on January 26 in honor of one of its founding fathers, Independence Day on February 27, Restoration Day on August 16, "Virgen de las Mercedes" on September 24, and Constitution Day on November 6. Baseball is by far the most popular sport in the Dominican Republic. The country has a baseball league of six teams. Its season usually begins in October and ends in January. After the United States, the Dominican Republic has the second-highest number of Major League Baseball (MLB) players. Ozzie Virgil Sr. became the first Dominican-born player in the MLB on September 23, 1956. Juan Marichal, Pedro Martínez, and Vladimir Guerrero are the only Dominican-born players in the Baseball Hall of Fame. Other notable baseball players born in the Dominican Republic are José Bautista, Adrián Beltré, George Bell, Robinson Canó, Rico Carty, Bartolo Colón, Nelson Cruz, Edwin Encarnación, Ubaldo Jiménez, Francisco Liriano, David Ortiz, Plácido Polanco, Albert Pujols, Hanley Ramírez, Manny Ramírez, José Reyes, Sammy Sosa, and Miguel Tejada. Felipe Alou has also enjoyed success as a manager, and Omar Minaya as a general manager. In 2013, the Dominican team went undefeated en route to winning the World Baseball Classic. In boxing, the country has produced scores of world-class fighters and several world champions, such as Carlos Cruz, his brother Leo, Juan Guzman, and Joan Guzman. Basketball also enjoys a relatively high level of popularity. 
Tito Horford, his son Al, Felipe Lopez, and Francisco Garcia are among the Dominican-born players currently or formerly in the National Basketball Association (NBA). Olympic gold medalist and world champion hurdler Félix Sánchez hails from the Dominican Republic, as does NFL defensive end Luis Castillo. Other important sports are volleyball, introduced in 1916 by U.S. Marines and controlled by the Dominican Volleyball Federation, taekwondo, in which Gabriel Mercedes won an Olympic silver medal in 2008, and judo.
https://en.wikipedia.org/wiki?curid=8060
Deutsches Institut für Normung Founded in 1917 as the "Normenausschuß der deutschen Industrie" (NADI, "Standardization Committee of German Industry"), the NADI was renamed "Deutscher Normenausschuß" (DNA, "German Standardization Committee") in 1926 to reflect that the organization now dealt with standardization issues in many fields; viz., not just for industrial products. In 1975 it was renamed again to "DIN Deutsches Institut für Normung e. V.", or 'DIN', and is recognized by the German government as the official national-standards body, representing German interests at the international and European levels. The acronym 'DIN' is often incorrectly expanded as "Deutsche Industrienorm" ("German Industry Standard"). This is largely due to the historic origin of the DIN as "NADI", which indeed published its standards as "DI-Norm" ("Deutsche Industrienorm"). For example, the first published standard was "DI-Norm 1" (about tapered pins) in 1918. Many people still mistakenly associate DIN with the old "DI-Norm" naming convention. One of the earliest, and probably the best-known, DIN standards is DIN 476 — the standard that introduced the A-series paper sizes in 1922 — adopted in 1975 as International Standard ISO 216 (a short computational sketch follows below). Common examples in modern technology include DIN and mini-DIN connectors for electronics, and the DIN rail. The designation of a DIN standard shows its origin (# denotes a number): DIN # is used for German standards with primarily domestic significance or designed as a first step toward international status; E DIN # denotes a draft standard and DIN V # a preliminary one; DIN EN # is the German edition of a European standard; DIN ISO # is the German edition of an ISO standard; and DIN EN ISO # is used where the standard has been adopted both as a European and an international standard.
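Since DIN 476/ISO 216 comes up above: the A-series is defined by two constraints, a 1:√2 height-to-width ratio for every size and an area of one square metre for A0, so every size can be computed rather than memorized. A minimal sketch of that construction (the function name is mine; note the real standard rounds each dimension down to whole millimetres at every halving step, so a computed value can differ by a millimetre):

```python
def a_series(n_max: int = 8) -> None:
    """Print approximate A-series paper sizes per DIN 476 / ISO 216.

    A0 is the rectangle of area 1 m^2 with height:width = sqrt(2);
    each following size halves the sheet across its long side.
    The real standard rounds each dimension *down* to whole
    millimetres at every step, so this sketch can be off by 1 mm
    (e.g. A1 width: 594 mm officially, 595 mm here).
    """
    width = 1000 / 2 ** 0.25   # A0 short side in mm (~840.9)
    height = 1000 * 2 ** 0.25  # A0 long side in mm (~1189.2)
    for n in range(n_max + 1):
        print(f"A{n}: {round(width)} x {round(height)} mm")
        width, height = height / 2, width  # halve across the long side

a_series()  # A0: 841 x 1189 mm ... A4: 210 x 297 mm ...
```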
https://en.wikipedia.org/wiki?curid=8062
Biopolymer Biopolymers are natural polymers produced by living organisms; in other words, they are polymeric biomolecules derived from cellular or extracellular matter. Biopolymers contain monomeric units that are covalently bonded to form larger structures. There are three main classes of biopolymers, classified according to the monomeric units used and the structure of the biopolymer formed: polynucleotides, polypeptides, and polysaccharides. More specifically, polynucleotides, such as RNA and DNA, are long polymers composed of 13 or more nucleotide monomers. Polypeptides, or proteins, are short polymers of amino acids; some major examples include collagen, actin, and fibrin. The last class, polysaccharides, are often linearly bonded polymeric carbohydrate structures; some examples include cellulose and alginate. Other examples of biopolymers include rubber, suberin, melanin and lignin. Biopolymers have various applications, such as in the food industry, manufacturing, packaging and biomedical engineering. A major defining difference between biopolymers and synthetic polymers can be found in their structures. All polymers are made of repetitive units called monomers. Biopolymers often have a well-defined structure, though this is not a defining characteristic (example: lignocellulose): the exact chemical composition and the sequence in which these units are arranged is called the primary structure, in the case of proteins. Many biopolymers spontaneously fold into characteristic compact shapes (see also "protein folding" as well as secondary structure and tertiary structure), which determine their biological functions and depend in a complicated way on their primary structures. Structural biology is the study of the structural properties of biopolymers. In contrast, most synthetic polymers have much simpler and more random (or stochastic) structures. This fact leads to a molecular mass distribution that is missing in biopolymers. In fact, as their synthesis is controlled by a template-directed process in most "in vivo" systems, all biopolymers of a type (say one specific protein) are alike: they all contain the same sequence and number of monomers and thus all have the same mass. This phenomenon is called monodispersity, in contrast to the polydispersity encountered in synthetic polymers. As a result, biopolymers have a polydispersity index of 1 (a short numerical sketch of this follows below). The convention for a polypeptide is to list its constituent amino acid residues as they occur from the amino terminus to the carboxylic acid terminus. The amino acid residues are always joined by peptide bonds. Protein, though used colloquially to refer to any polypeptide, refers to larger or fully functional forms and can consist of several polypeptide chains as well as single chains. Proteins can also be modified to include non-peptide components, such as saccharide chains and lipids. The convention for a nucleic acid sequence is to list the nucleotides as they occur from the 5' end to the 3' end of the polymer chain, where 5' and 3' refer to the numbering of carbons around the ribose ring which participate in forming the phosphate diester linkages of the chain. Such a sequence is called the primary structure of the biopolymer. Sugar-based biopolymers are often difficult with regard to convention. Sugar polymers can be linear or branched and are typically joined with glycosidic bonds. 
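Picking up the polydispersity point above before continuing with glycosidic linkages: the polydispersity index is conventionally defined as PDI = Mw/Mn, the ratio of the weight-average molar mass Mw to the number-average molar mass Mn. A minimal sketch of the computation (the function name and sample masses are illustrative):

```python
def polydispersity_index(masses):
    """Compute PDI = Mw / Mn for a collection of polymer chain masses.

    Mn = sum(m) / N          -- number-average molar mass
    Mw = sum(m^2) / sum(m)   -- weight-average molar mass
    A perfectly monodisperse sample (all chains identical, as for a
    template-synthesized biopolymer) gives exactly PDI = 1.
    """
    mn = sum(masses) / len(masses)
    mw = sum(m * m for m in masses) / sum(masses)
    return mw / mn

print(polydispersity_index([50_000] * 100))            # 1.0, biopolymer-like
print(polydispersity_index([30_000, 50_000, 90_000]))  # ~1.19, synthetic-like
```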
The exact placement of a glycosidic linkage can vary, and the orientation of the linking functional groups is also important, resulting in α- and β-glycosidic bonds, with numbering definitive of the linking carbons' location in the ring. In addition, many saccharide units can undergo various chemical modifications, such as amination, and can even form parts of other molecules, such as glycoproteins. There are a number of biophysical techniques for determining sequence information. Protein sequence can be determined by Edman degradation, in which the N-terminal residues are hydrolyzed from the chain one at a time, derivatized, and then identified. Mass spectrometry techniques can also be used. Nucleic acid sequence can be determined using gel electrophoresis and capillary electrophoresis. Lastly, mechanical properties of these biopolymers can often be measured using optical tweezers or atomic force microscopy. Dual-polarization interferometry can be used to measure the conformational changes or self-assembly of these materials when stimulated by pH, temperature, ionic strength or other binding partners. Collagen: Collagen is the primary structural protein of vertebrates and is the most abundant protein in mammals. Because of this, collagen is one of the most easily attainable biopolymers, and is used for many research purposes. Because of its mechanical structure, collagen has high tensile strength and is a non-toxic, easily absorbable, biodegradable and biocompatible material. Therefore, it has been used for many medical applications, such as in treatment for tissue infection, drug delivery systems, and gene therapy. Silk fibroin: Silk fibroin (SF) is another protein-rich biopolymer that can be obtained from different silkworm species, such as the mulberry silkworm Bombyx mori. In contrast to collagen, SF has a lower tensile strength but has strong adhesive properties due to its insoluble and fibrous protein composition. In recent studies, silk fibroin has been found to possess anticoagulation properties and platelet adhesion. Silk fibroin has additionally been found to support stem cell proliferation in vitro. Gelatin: Gelatin is obtained from type I collagen consisting of cysteine, and is produced by the partial hydrolysis of collagen from the bones, tissues and skin of animals. There are two types of gelatin, Type A and Type B. Type A gelatin is derived by acid hydrolysis of collagen and has 18.5% nitrogen. Type B is derived by alkaline hydrolysis, containing 18% nitrogen and no amide groups. Elevated temperatures cause gelatin to melt and exist as coils, whereas lower temperatures result in a coil-to-helix transformation. Gelatin contains many functional groups, like NH2, SH, and COOH, which allow it to be modified using nanoparticles and biomolecules. Gelatin is an extracellular matrix protein, which allows it to be applied in applications such as wound dressings, drug delivery and gene transfection. Starch: Starch is an inexpensive, biodegradable biopolymer that is copious in supply. Nanofibers and microfibers can be added to the polymer matrix to improve the mechanical properties of starch, increasing its elasticity and strength. Without the fibers, starch has poor mechanical properties due to its sensitivity to moisture. Being biodegradable and renewable, starch is used for many applications, including plastics and pharmaceutical tablets. Cellulose: Cellulose is very structured, with stacked chains that result in stability and strength. 
The strength and stability come from the straighter shape of cellulose, caused by glucose monomers joined together by glycosidic bonds. The straight shape allows the molecules to pack closely. Cellulose is very common in applications because of its abundant supply, its biocompatibility, and its environmental friendliness. Cellulose is used vastly in the form of nano-fibrils called nano-cellulose. Nano-cellulose at low concentrations produces a transparent gel material. This material can be used for biodegradable, homogeneous, dense films that are very useful in the biomedical field. Alginate: Alginate is the most copious marine natural polymer, derived from brown seaweed. Alginate biopolymer applications range from packaging, textiles and the food industry to biomedical and chemical engineering. The first application of alginate was in the form of wound dressing, where its gel-like and absorbent properties were discovered. When applied to wounds, alginate produces a protective gel layer that is optimal for healing and tissue regeneration, and keeps a stable temperature environment. Additionally, there have been developments with alginate as a drug delivery medium, as the drug release rate can easily be manipulated due to a variety of alginate densities and fibrous compositions. Poly(ε-caprolactone) (PCL): PCL is a biodegradable, biocompatible polyester, a type of polymer that has an ester functional group in its main chain. PCL is used widely in the biomedical field. It is used to create scaffolds for cell and tissue engineering and can support many cell types. PCL is especially useful for tissue engineering applications because under physiological conditions it is degraded by hydrolysis of its ester linkages. This property, together with its low degradation rate, makes it ideal for long-term implantable biomaterials. Because one of the main purposes of biomedical engineering is to mimic body parts to sustain normal body functions, biopolymers, with their biocompatible properties, are used widely for tissue engineering, medical devices and the pharmaceutical industry. Many biopolymers can be used for regenerative medicine, tissue engineering, drug delivery, and overall medical applications because of their mechanical properties. They provide characteristics like wound healing, catalysis of bioactivity, and non-toxicity. Compared to synthetic polymers, which can present various disadvantages, like immunogenic rejection and toxicity after degradation, many biopolymers integrate better with the body, as they also possess more complex structures, similar to those of the human body. More specifically, polypeptides like collagen and silk are biocompatible materials that are being used in groundbreaking research, as these are inexpensive and easily attainable materials. Gelatin polymer is often used in dressing wounds, where it acts as an adhesive. Scaffolds and films with gelatin allow the scaffolds to hold drugs and other nutrients that can be supplied to a wound for healing. As collagen is one of the more popular biopolymers used in biomedical science, here are some examples of its use: Collagen-based drug delivery systems: collagen films act like a barrier membrane and are used to treat tissue infections, like infected corneal tissue or liver cancer. Collagen films have also been used as gene delivery carriers, which can promote bone formation. Collagen sponges: Collagen sponges are used as a dressing to treat burn victims and other serious wounds. 
Collagen-based implants are used for cultured skin cells or as drug carriers for burn wounds and skin replacement. Collagen as haemostat: When collagen interacts with platelets, it causes rapid coagulation of blood. This rapid coagulation produces a temporary framework so the fibrous stroma can be regenerated by host cells. Collagen-based haemostats reduce blood loss in tissues and help manage bleeding in cellular organs such as the liver and spleen. Chitosan is another popular biopolymer in biomedical research. Chitosan is the main component in the exoskeleton of crustaceans and insects and the second most abundant biopolymer in the world. Chitosan has many excellent characteristics for biomedical science: it is biocompatible; it is highly bioactive, meaning it stimulates a beneficial response from the body; it is biodegradable, which can eliminate a second surgery in implant applications; it can form gels and films; and it is selectively permeable. These properties allow for various biomedical applications of chitosan. Chitosan as drug delivery: Chitosan is used mainly for drug targeting because it has the potential to improve drug absorption and stability. In addition, chitosan conjugated with anticancer agents can produce better anticancer effects by causing a gradual release of free drug into cancerous tissue. Chitosan as an antimicrobial agent: Chitosan is used to stop the growth of microorganisms. It performs antimicrobial functions against microorganisms such as algae, fungi and bacteria, including gram-positive bacteria and different yeast species. Chitosan composite for tissue engineering: Blended powders of chitosan and alginate are used together to form functional wound dressings. These dressings create a moist environment which aids in the healing process. This wound dressing is also very biocompatible and biodegradable and has porous structures that allow cells to grow into the dressing. Food: Biopolymers are being used in the food industry for things like packaging, edible encapsulation films and the coating of foods. Polylactic acid (PLA) is very common in the food industry due to its clear color and resistance to water. However, most polymers have a hydrophilic nature and start deteriorating when exposed to moisture. Biopolymers are also being used as edible films that encapsulate foods. These films can carry things like antioxidants, enzymes, probiotics, minerals, and vitamins. Food encapsulated with a biopolymer film can thereby supply these substances to the body. Packaging: The most common biopolymers used in packaging are polyhydroxyalkanoate (PHA), polylactic acid (PLA), and starch. Starch and PLA are commercially available and biodegradable, making them a common choice for packaging. However, their barrier properties and thermal properties are not ideal. Hydrophilic polymers are not water resistant and allow water to get through the packaging, which can affect the contents of the package. Polyglycolic acid (PGA) is a biopolymer that has great barrier characteristics and is now being used to correct the barrier shortcomings of PLA and starch. Water purification: Chitosan has also been used for water purification. Chitosan is used as a flocculant that takes only a few weeks or months, rather than years, to degrade in the environment. Chitosan purifies water by chelation, removing metals from the water. Chelation occurs when binding sites along the polymer chain bind with metal ions in the water, forming chelates.
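As a simplified, illustrative scheme only (the actual coordination chemistry depends on the metal ion, the pH, and the polymer's degree of deacetylation), the amino groups of the glucosamine units along the chitosan chain can be pictured coordinating a dissolved metal ion such as Cu2+:

Cu2+ + 2 R-NH2 > [R-NH2···Cu···H2N-R]2+

where R-NH2 denotes an amino group on the chitosan chain. The resulting chelate remains attached to the polymer, so the metal is removed from the water together with the flocculated chitosan.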
Chitosan has been used in many situations to clean up storm water or wastewater that may have been contaminated. Some biopolymers, such as PLA, naturally occurring zein, and poly-3-hydroxybutyrate, can be used as plastics, replacing the need for polystyrene- or polyethylene-based plastics. Some plastics are now referred to as being 'degradable', 'oxy-degradable' or 'UV-degradable'. This means that they break down when exposed to light or air, but these plastics are still primarily (as much as 98 per cent) oil-based and are not currently certified as 'biodegradable' under the European Union directive on Packaging and Packaging Waste (94/62/EC). Biopolymers will break down, and some are suitable for domestic composting. Biopolymers (also called renewable polymers) are produced from biomass for use in the packaging industry. Biomass comes from crops such as sugar beet, potatoes or wheat; when used to produce biopolymers, these are classified as non-food crops. These can be converted via the following pathways:

Sugar beet > Glyconic acid > Polyglyconic acid
Starch > (fermentation) > Lactic acid > Polylactic acid (PLA)
Biomass > (fermentation) > Bioethanol > Ethene > Polyethylene

Many types of packaging can be made from biopolymers: food trays, blown starch pellets for shipping fragile goods, and thin films for wrapping. Biopolymers can be sustainable and carbon neutral, and they are renewable, because they are made from plant materials which can be grown indefinitely. These plant materials come from agricultural non-food crops. Therefore, the use of biopolymers would create a sustainable industry. In contrast, the feedstocks for polymers derived from petrochemicals will eventually deplete. In addition, biopolymers have the potential to cut carbon emissions and reduce CO2 quantities in the atmosphere: the CO2 released when they degrade can be reabsorbed by crops grown to replace them, making them close to carbon neutral. Some biopolymers are biodegradable: they are broken down into CO2 and water by microorganisms. Some of these biodegradable biopolymers are also compostable: they can be put into an industrial composting process and will break down by 90% within six months. Biopolymers that do this can be marked with a 'compostable' symbol, under European Standard EN 13432 (2000). Packaging marked with this symbol can be put into industrial composting processes and will break down within six months or less. An example of a compostable polymer is PLA film under 20 μm thick: films thicker than that do not qualify as compostable, even though they are "biodegradable". In Europe there is a home composting standard and an associated logo that enable consumers to identify and dispose of packaging in their compost heap.
https://en.wikipedia.org/wiki?curid=3974
2001 United Kingdom general election The 2001 United Kingdom general election was held on Thursday 7 June 2001, four years after the previous election on 1 May 1997, to elect 659 members to the House of Commons. The governing Labour Party was re-elected to serve a second term in government with another landslide victory, returning 412 members of Parliament versus 418 from the 1997 general election, a net loss of six seats, though with a significantly lower turnout than before—59.4%, compared to 71.3% at the previous election. Tony Blair went on to become the first Labour Prime Minister to serve two consecutive full terms in office. As Labour retained almost all of the seats it had won in the 1997 landslide victory, the media dubbed the 2001 election "the quiet landslide". There was little change outside Northern Ireland, with 620 out of the 641 seats in Great Britain electing candidates from the same party as they did in 1997. Factors contributing to the Labour victory included a strong economy, falling unemployment, and the public perception that the Labour government had delivered on many key election pledges made in 1997. The opposition Conservative Party, under William Hague's leadership, was still deeply divided on the issue of Europe, and the party's policy platform had drifted considerably to the right. A series of publicity stunts that backfired also harmed Hague, and he resigned as party leader three months after the election, becoming the first leader of the Conservative and Unionist Party in the House of Commons since Austen Chamberlain, nearly eighty years earlier, not to serve as prime minister. The election was largely a repeat of the 1997 general election, with Labour losing only six seats overall and the Conservatives making a net gain of one seat (gaining nine seats but losing eight). The Conservatives gained a seat in Scotland, which ended the party's status as an "England-only" party in the prior parliament, but failed again to win any seats in Wales. Although they did not gain many seats, three of the few new MPs elected were the future Conservative Prime Ministers David Cameron and Boris Johnson and the future Conservative Chancellor of the Exchequer George Osborne; Osborne would serve in the same Cabinet as Cameron from 2010 to 2016. The Liberal Democrats made a net gain of six seats. The 2001 general election is the last to date in which any government has held an overall majority of more than 100 seats in the House of Commons, and the second of only two since the Second World War (the other being 1997) in which a single party won over 400 MPs. Notable departing MPs included former Prime Ministers Edward Heath (also Father of the House) and John Major, former Deputy Prime Minister Michael Heseltine, former Liberal Democrat leader Paddy Ashdown, former Cabinet ministers Tony Benn, Tom King, John Morris, Mo Mowlam, John MacGregor and Peter Brooke, Teresa Gorman, and then Mayor of London Ken Livingstone. Change was seen in Northern Ireland, with the moderate unionist Ulster Unionist Party (UUP) losing four seats to the more hardline Democratic Unionist Party (DUP). A similar transition appeared in the nationalist community, with the moderate Social Democratic and Labour Party (SDLP) losing votes to the more staunchly republican and abstentionist Sinn Féin. Exceptionally low voter turnout, which fell below 60% for the first (and so far, only) time since 1918, also marked this election.
The election was broadcast live on the BBC and presented by David Dimbleby, Jeremy Paxman, Andrew Marr, Peter Snow, and Tony King. The 2001 general election was notable for being the first in which pictures of the party logos appeared on the ballot paper. Prior to this, the ballot paper had displayed only the candidate's name, address, and party name. The election had been expected on 3 May, to coincide with local elections, but on 2 April 2001 both were postponed to 7 June because of rural movement restrictions imposed in response to the foot-and-mouth outbreak that had started in February. The elections were marked by voter apathy, with turnout falling to 59.4%, the lowest (and the first under 70%) since the Coupon Election of 1918. Throughout the election the Labour Party had maintained a significant lead in the opinion polls, and the result was deemed so certain that some bookmakers paid out for a Labour majority before election day. The previous autumn, however, the polls had shown the first Tory lead (though only by a narrow margin) for eight years, as the party benefited from public anger towards the government over the fuel protests, which had led to a severe shortage of motor fuel. By the end of 2000, however, the dispute had been resolved and Labour were firmly back in the lead of the opinion polls. In total, a mere 29 parliamentary seats changed hands at the 2001 election. 2001 also saw the rare election of an independent: Dr. Richard Taylor of Independent Kidderminster Hospital and Health Concern (usually now known simply as "Health Concern") unseated a government MP, David Lock, in Wyre Forest. There was also a high vote for British National Party leader Nick Griffin in Oldham West and Royton, in the wake of recent race riots in the town of Oldham. In Northern Ireland, the election was far more dramatic and marked a move by unionists away from support for the Good Friday Agreement, with the moderate unionist Ulster Unionist Party (UUP) losing to the more hardline Democratic Unionist Party (DUP). This polarisation was also seen in the nationalist community, with the Social Democratic and Labour Party (SDLP) vote losing out to the more left-wing and republican Sinn Féin. The election also saw a consolidation among the parties, as the small UK Unionist Party lost its only seat. For Labour, the last four years had run relatively smoothly. The party had successfully defended all of its by-election seats, and many suspected a Labour win was inevitable from the start. Many in the party, however, were afraid of voter apathy, which was epitomised in a poster of "Hague with Lady Thatcher's hair", captioned "Get out and vote. Or they get in." Despite recessions in mainland Europe and the United States following the bursting of the global tech bubble, Britain was notably unaffected, and Labour could rely on a strong economy as unemployment continued to decline toward election day, putting to rest any fears that a Labour government would put the economy at risk. For William Hague, however, the Conservative Party had still not fully recovered from the loss in 1997. The party was still divided over Europe, and talk of a referendum on joining the Eurozone was rife. As Labour remained at the political centre, the Tories moved to the right. A policy gaffe by Oliver Letwin over public spending cuts left the party with an own goal that Labour soon exploited. Margaret Thatcher also added to Hague's troubles when speaking out strongly against the Euro to applause.
Hague himself, although a witty performer at Prime Minister's Questions, was dogged in the press and reminded of his speech, given at the age of 16, at the 1977 Conservative Conference. "The Sun" newspaper only added to the Conservatives' woes by backing Labour for a second consecutive election, calling Hague a "dead parrot" during the Conservative Party's conference in October 1998. The Tories campaigned on a strongly right-wing platform, emphasising the issues of Europe, immigration and tax, the fabled "Tebbit Trinity". They also released a poster showing a heavily pregnant Tony Blair, stating "Four years of Labour and he still hasn't delivered". However, Labour countered by asking where the proposed tax cuts were going to come from, and decried the Tory policy as "cut here, cut there, cut everywhere", in reference to the widespread belief that the Conservatives would make major cuts to public services in order to fund tax cuts. Charles Kennedy contested his first election as leader of the Liberal Democrats. During the election Sharron Storer, a resident of Birmingham, criticised Prime Minister Tony Blair in front of television cameras about conditions in the National Health Service. The widely televised incident happened on 16 May during a campaign visit by Blair to the Queen Elizabeth Hospital in Birmingham. Sharron Storer's partner, Keith Sedgewick, a cancer patient with non-Hodgkin's lymphoma and therefore highly susceptible to infection, was being treated at the time in the bone marrow unit, but no bed could be found for him and he was transferred to the casualty unit for his first 24 hours. The election result was effectively a repeat of 1997, as the Labour Party retained an overwhelming majority, with the BBC announcing the victory at 02:58 on the early morning of 8 June. Having presided over relatively serene political, economic and social conditions, under which the feeling of prosperity in the United Kingdom had been maintained into the new millennium, Labour would have a free hand to assert its ideals in the subsequent parliament. Despite the victory, voter apathy was a major issue, as turnout fell below 60%, 12 percentage points down on 1997. All three of the main parties saw their total votes fall, with Labour's total vote dropping by 2.8 million on 1997, the Conservatives' by 1.3 million, and the Liberal Democrats' by 428,000. Some suggested this dramatic fall was a sign of general acceptance of the status quo and of the likelihood that Labour's majority would remain unassailable. For the Conservatives, the huge loss they had sustained in 1997 was repeated. Despite gaining nine seats, the Tories lost seven to the Liberal Democrats, and one even to Labour. William Hague was quick to announce his resignation, doing so at 07:44 outside the Conservative Party headquarters. Some believed that Hague had been unlucky; although most considered him to be a talented orator and an intelligent statesman, he had come up against the charismatic Tony Blair at the peak of his political career, and it was no surprise that little progress was made in reducing Labour's majority after a relatively smooth parliament. Staying at what they considered rock bottom, however, showed that the Conservatives had failed to improve their negative public image, had remained somewhat disunited over Europe, and had not regained the trust that they had lost in the 1990s. In Scotland, however, despite taking one seat from the Scottish National Party, the collapse of their vote continued.
They failed to retake former strongholds in Scotland as the Nationalists consolidated their grip on the northeastern portion of the country. The Liberal Democrats could point to steady progress under their new leader, Charles Kennedy, gaining more seats than the two main parties—albeit only six overall—and maintaining the performance of a pleasing 1997 election, where the party had doubled its number of seats from 20 to 46. While they had yet to become electable as a government, they underlined their growing reputation as a worthwhile alternative to Labour and the Conservatives, offering plenty of debate in Parliament and representing more than a mere protest vote. The SNP failed to gain any new seats and lost a seat to the Conservatives by just 79 votes. In Wales, Plaid Cymru both gained a seat from Labour and lost one to them. In Northern Ireland the Ulster Unionists, despite gaining North Down, lost five other seats. The results of the election give a Gallagher index of disproportionality of 17.74.
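For reference, the Gallagher (least squares) index quantifies the mismatch between parties' vote shares and seat shares: LSq = sqrt(½ × Σ (Vi − Si)²), where Vi and Si are party i's percentage of the vote and of the seats. A minimal Python sketch of the calculation follows, using approximate 2001 figures for the three main parties with all smaller parties lumped together as "others"; because the figure of 17.74 quoted above is computed over every party separately, this rough grouping yields only a close value (about 18), not the exact one.

import math

def gallagher_index(vote_pct, seat_pct):
    # Square root of half the sum of squared differences between
    # each party's vote percentage and its seat percentage.
    return math.sqrt(0.5 * sum((v - s) ** 2 for v, s in zip(vote_pct, seat_pct)))

# Approximate 2001 shares in percent: Labour, Conservative,
# Liberal Democrat, and all others combined (illustrative grouping).
votes = [40.7, 31.7, 18.3, 9.3]
seats = [62.5, 25.2, 7.9, 4.4]

print(round(gallagher_index(votes, seats), 2))  # prints 18.02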
https://en.wikipedia.org/wiki?curid=3975
Book of Mormon The Book of Mormon is a sacred text of the Latter Day Saint movement, which, according to adherents, contains writings of ancient prophets who lived on the American continent from approximately 2200 BC to AD 421. It was first published in March 1830 by Joseph Smith as "The Book of Mormon: An Account Written by the Hand of Mormon upon Plates Taken from the Plates of Nephi". According to Smith's account and the book's narrative, the Book of Mormon was originally written in otherwise unknown characters referred to as "reformed Egyptian" engraved on golden plates. Smith said that the last prophet to contribute to the book, a man named Moroni, buried it in the Hill Cumorah in present-day Manchester, New York, before his death, and then appeared in a vision to Smith in 1827 as an angel, revealing the location of the plates, and instructing him to translate the plates into English for use in the restoration of Christ's true church in the latter days. Critics claim that it was authored by Smith, drawing on material and ideas from contemporary 19th-century works rather than translating an ancient record. The Book of Mormon has a number of original and distinctive doctrinal discussions on subjects such as the fall of Adam and Eve, the nature of the Christian atonement, eschatology, redemption from physical and spiritual death, and the organization of the latter-day church. The pivotal event of the book is an appearance of Jesus Christ in the Americas shortly after his resurrection. The Book of Mormon is the earliest of the unique writings of the Latter-day Saint movement, the denominations of which typically regard the text primarily as scripture, and secondarily as a historical record of God's dealings with the ancient inhabitants of the Americas. The archaeological, historical, and scientific communities do not accept the Book of Mormon as an ancient record of actual historical events. The Book of Mormon is divided into smaller books, titled after the individuals named as primary authors and, in most versions, divided into chapters and verses. It is written in English very similar to the Early Modern English linguistic style of the King James Version of the Bible, and has since been fully or partially translated into 111 languages. As of 2011, more than 150 million copies of the Book of Mormon had been printed. According to Joseph Smith, he was seventeen years of age when an angel of God named Moroni appeared to him and said that a collection of ancient writings was buried in a nearby hill in present-day Wayne County, New York, engraved on golden plates by ancient prophets. The writings were said to describe a people whom God had led from Jerusalem to the Western hemisphere 600 years before Jesus' birth. According to the narrative, Moroni was the last prophet among these people and had buried the record, which God had promised to bring forth in the latter days. Smith stated that this vision occurred on the evening of September 21, 1823 and that on the following day, via divine guidance, he located the burial location of the plates on this hill; was instructed by Moroni to meet him at the same hill on September 22 of the following year to receive further instructions; and that, in four years from this date, the time would arrive for "bringing them forth", i.e., translating them. Smith's description of these events recounts that he was allowed to take the plates on September 22, 1827, exactly four years from that date, and was directed to translate them into English. 
Accounts vary of the way in which Smith dictated the Book of Mormon. Smith himself implied that he read the plates directly using spectacles prepared by the Lord for the purpose of translating. Other accounts variously state that he used one or more seer stones placed in a top hat. Beginning around 1832, both the special spectacles and the seer stone were at times referred to as the "Urim and Thummim". During the translating process itself, Smith sometimes separated himself from his scribe with a blanket between them. Additionally, the plates were not always present during the translating process, and when present, they were always covered up. Smith's first published description of the plates said that the plates "had the appearance of gold". They were described by Martin Harris, one of Smith's early scribes, as "fastened together in the shape of a book by wires." Smith called the engraved writing on the plates "reformed Egyptian". A portion of the text on the plates was also "sealed" according to his account, so its content was not included in the Book of Mormon. In addition to Smith's account regarding the plates, eleven others stated that they saw the golden plates and, in some cases, handled them. Their written testimonies are known as the Testimony of Three Witnesses and the Testimony of Eight Witnesses. These statements have been published in most editions of the Book of Mormon. Smith enlisted his neighbor Martin Harris as a scribe during his initial work on the text. (Harris later mortgaged his farm to underwrite the printing of the Book of Mormon.) In 1828, Harris, prompted by his wife Lucy Harris, repeatedly requested that Smith lend him the current pages that had been translated. Smith reluctantly acceded to Harris's requests. Lucy Harris is thought to have stolen the first 116 pages. After the loss, Smith recorded that he had lost the ability to translate, and that Moroni had taken back the plates to be returned only after Smith repented. Smith later stated that God allowed him to resume translation, but directed that he begin translating another part of the plates (in what is now called the Book of Mosiah). In 1829, work resumed on the Book of Mormon, with the assistance of Oliver Cowdery, and was completed in a short period (April–June 1829). Smith said that he then returned the plates to Moroni upon the publication of the book. The Book of Mormon went on sale at the bookstore of E. B. Grandin in Palmyra, New York on March 26, 1830. Today, the building in which the Book of Mormon was first published and sold is known as the Book of Mormon Historic Publication Site. The first edition print-run was 5,000 copies. The publisher charged $3,000 for the production cost (wholesale to the author Joseph Smith at 60 cents per book). Since its first publication and distribution, critics of the Book of Mormon have claimed that it was fabricated by Smith and that he drew material and ideas from various sources rather than translating an ancient record. Works that have been suggested as sources include the King James Bible, "The Wonders of Nature", "View of the Hebrews", and an unpublished manuscript written by Solomon Spalding. FairMormon maintains that all of these theories have been disproved and discredited, arguing that both Mormon and non-Mormon historians have found serious flaws in their research. 
The position of most adherents of the Latter Day Saint movement and the official position of The Church of Jesus Christ of Latter-day Saints (LDS Church) is that the book is an accurate historical record. Smith said the title page, and presumably the actual title of the 1830 edition, came from the translation of "the very last leaf" of the golden plates, and was written by the prophet-historian Moroni. The title page states that the purpose of the Book of Mormon is "to [show] unto the remnant of the house of Israel what great things the Lord hath done for their fathers; ... and also to the convincing of the Jew and Gentile that Jesus is the Christ, the eternal God, manifesting himself unto all nations." The Book of Mormon is organized as a compilation of smaller books, each named after its main named narrator or a prominent leader, beginning with the First Book of Nephi (1 Nephi) and ending with the Book of Moroni. The book's sequence is primarily chronological based on the narrative content of the book. Exceptions include the Words of Mormon and the Book of Ether. The Words of Mormon contains editorial commentary by Mormon. The Book of Ether is presented as the narrative of an earlier group of people who had come to America before the immigration described in 1 Nephi. First Nephi through Omni are written in first-person narrative, as are Mormon and Moroni. The remainder of the Book of Mormon is written in third-person historical narrative, said to be compiled and abridged by Mormon (with Moroni abridging the Book of Ether and writing the latter part of Mormon and the Book of Moroni). Most modern editions of the book have been divided into chapters and verses. Most editions of the book also contain supplementary material, including the "Testimony of Three Witnesses" and the "Testimony of Eight Witnesses". The books from First Nephi to Omni are described as being from "the small plates of Nephi". This account begins in ancient Jerusalem around 600 BC. It tells the story of a man named Lehi, his family, and several others as they are led by God from Jerusalem shortly before the fall of that city to the Babylonians in 586 BC. The book describes their journey across the Arabian peninsula, and then to the "promised land", the Americas, by ship. These books recount the group's dealings from approximately 600 BC to about 130 BC, during which time the community grew and split into two main groups, which are called the Nephites and the Lamanites, that frequently warred with each other. Following this section is the Words of Mormon. This small book, said to be written in AD 385 by Mormon, is a short introduction to the books of Mosiah, Alma, Helaman, Third Nephi, and Fourth Nephi. These books are described as being abridged from a large quantity of existing records called "the large plates of Nephi" that detailed the people's history from the time of Omni to Mormon's own life. The Book of Third Nephi is of particular importance within the Book of Mormon because it contains an account of a visit by Jesus from heaven to the Americas sometime after his resurrection and ascension. The text says that during this American visit, he repeated much of the same doctrine and instruction given in the Gospels of the Bible and he established an enlightened, peaceful society which endured for several generations, but which eventually broke into warring factions again. The portion of the greater Book of Mormon called the Book of Mormon is an account of the events during Mormon's life. 
Mormon is said to have received the charge of taking care of the records that had been hidden, once he was old enough. The book includes an account of the wars, Mormon's leading of portions of the Nephite army, and his retrieving and caring for the records. Mormon is eventually killed after having handed down the records to his son Moroni. According to the text, Moroni then made an abridgment (called the Book of Ether) of a record from a previous people called the Jaredites. The account describes a group of families led from the Tower of Babel to the Americas, headed by a man named Jared and his brother. The Jaredite civilization is presented as existing on the American continent beginning about 2500 BC—long before Lehi's family arrived shortly after 600 BC—and as being much larger and more developed. The Book of Moroni then details the final destruction of the Nephites and the idolatrous state of the remaining society. It also includes significant doctrinal teachings and closes with Moroni's testimony and an invitation to pray to God for a confirmation of the truthfulness of the account. The Book of Mormon contains doctrinal and philosophical teachings on a wide range of topics, from basic themes of Christianity and Judaism to political and ideological teachings. Stated on the title page, the Book of Mormon's central purpose is for the "convincing of the Jew and Gentile that Jesus is the Christ, the Eternal God, manifesting himself unto all nations." Jesus is mentioned every 1.7 verses and is referred to by one hundred different names. The book describes Jesus, prior to his birth, as a spirit "without flesh and blood", although with a spirit "body" that looked similar to how Jesus would appear during his physical life. Jesus is described as "the Father and the Son". He is said to be: "God himself [who] shall come down among the children of men, and shall redeem his people ... [b]eing the Father and the Son—the Father, because he was conceived by the power of God; and the Son, because of the flesh; thus becoming the Father and Son—and they are one God, yea, the very Eternal Father of heaven and of earth." Other parts of the book portray the Father, the Son, and the Holy Ghost as "one." As a result, beliefs among the churches of the Latter Day Saint movement range from nontrinitarianism (in The Church of Jesus Christ of Latter-day Saints) to trinitarianism (in the Community of Christ). See Godhead (Latter Day Saints). In furtherance of its theme of reconciling Jews and Gentiles to Jesus, the book describes a variety of visions or visitations to some early inhabitants in the Americas involving Jesus. Most notable among these is a described visit of Jesus to a group of early inhabitants shortly after his resurrection. Many of the book's contributors described other visions of Jesus, including one by the Brother of Jared who, according to the book, lived before Jesus, and saw the "body" of Jesus' spirit thousands of years prior to his birth. According to the book, a narrator named Nephi described a vision of the birth, ministry, and death of Jesus, including a prophecy of Jesus' name, said to have taken place nearly 600 years prior to Jesus' birth. In the narrative, at the time of King Benjamin (about 130 BC), the Nephite believers were called "the children of Christ". At another place, the faithful members of the church at the time of Captain Moroni (73 BC) were called "Christians" by their enemies, because of their belief in Jesus Christ.
The book also states that for nearly 200 years after Jesus' appearance at the temple in the Americas the land was filled with peace and prosperity because of the people's obedience to his commandments. Later, the prophet Mormon worked to convince the faithless people of his time (AD 360) of Christ. Many other prophets in the book write of the reality of the Messiah, Jesus Christ. In the Bible, Jesus spoke to the Jews in Jerusalem of "other sheep" who would hear his voice. The Book of Mormon claims this meant that the Nephites and other remnants of the lost tribes of Israel throughout the world were to be visited by Jesus after his resurrection. The book delves into political theology within a Christian or Jewish context. Among these themes is American exceptionalism. The Americas are portrayed as a "land of promise", the world's most exceptional land of the time. The book states that any righteous society possessing the land would be protected, whereas if they became wicked they would be destroyed and replaced with a more righteous civilization. On the issue of war and violence, the book teaches that war is justified for people to "defend themselves against their enemies". However, they were never to "give an offense," or to "raise their sword ... except it were to preserve their lives." The book praises the faith of a group of former warriors who took an oath of complete pacifism, refusing to take arms even to defend themselves and their people. However, 2,000 of their descendants, who had not taken the oath of their parents not to take up arms against their enemies, chose to go to battle against the Lamanites, and it states that in their battles the 2,000 men were protected by God through their faith and, though many were injured, none of them died. The book recommends monarchy as an ideal form of government, but only when the monarch is righteous. The book warns of the evil that occurs when the king is wicked, and therefore suggests that it is not generally good to have a king. The book further records the decision of the people to be ruled no longer by kings, choosing instead a form of democracy led by elected judges. When citizens referred to as "king-men" attempted to overthrow a democratically elected government and establish an unrighteous king, the book praises a military commander who executed pro-monarchy citizens who had vowed to destroy the church of God and were unwilling to defend their country from hostile invading forces. The book also speaks favorably of a particular instance of what appears to be a peaceful Christ-centered theocracy, which lasted approximately 194 years before contentions began again. The book supports notions of economic justice, achieved through voluntary donation of "substance, every man according to that which he had, to the poor." In one case, all the citizens held their property in common. When individuals within a society began to disdain and ignore the poor, to "wear costly apparel", and otherwise engage in wickedness for personal gain, such societies are repeatedly portrayed in the book as being ripe for destruction. Joseph Smith characterized the Book of Mormon as the "keystone" of Mormonism, and claimed that it was "the most correct of any book on earth". Smith produced a written revelation in 1832 that condemned the "whole church" for treating the Book of Mormon lightly. The Book of Mormon is one of four sacred texts or standard works of the LDS Church.
Church leaders have frequently restated Smith's claims of the book's significance to the faith. Church members believe that the Book of Mormon is more correct than the Bible because the Bible was the result of a multiple-generation translation process and the Book of Mormon was not. For most of the history of the LDS Church, the Book of Mormon was not used as much as other books of scripture such as the New Testament and the Doctrine and Covenants. This changed in the 1980s when efforts were made to reemphasize the Book of Mormon. As part of this effort, a new edition was printed with the added subtitle "Another Testament of Jesus Christ". The importance of the Book of Mormon was a focus of Ezra Taft Benson, the church's thirteenth president. Benson stated that the church was still under condemnation for treating the Book of Mormon lightly. In an August 2005 message, LDS Church president Gordon B. Hinckley challenged each member of the church to re-read the Book of Mormon before the year's end. The book's importance is commonly stressed at the twice-yearly general conference, at special devotionals by general authorities, and in the church's teaching publications. Since the late 1980s, church members have been encouraged to read from the Book of Mormon daily. The LDS Church encourages discovery of the book's truth by following the suggestion in its final chapter to study, ponder, and pray to God concerning its veracity. This passage is sometimes referred to as "Moroni's Promise". As of April 2011, the LDS Church has published more than 150 million copies of the Book of Mormon. The Community of Christ, formerly known as the Reorganized Church of Jesus Christ of Latter Day Saints, views the Book of Mormon as an additional witness of Jesus Christ and publishes two versions of the book through its official publishing arm, Herald House: the Authorized Edition, which is based on the original printer's manuscript, and the 1837 Second Edition (or "Kirtland Edition") of the Book of Mormon. Its content is similar to the Book of Mormon published by the LDS Church, but the versification is different. The Community of Christ also publishes a 1966 "Revised Authorized Edition," which attempts to modernize some language. In 2001, Community of Christ President W. Grant McMurray reflected on increasing questions about the Book of Mormon: "The proper use of the Book of Mormon as sacred scripture has been under wide discussion in the 1970s and beyond, in part because of long-standing questions about its historical authenticity and in part because of perceived theological inadequacies, including matters of race and ethnicity." At the 2007 Community of Christ World Conference, President Stephen M. Veazey ruled out of order a resolution to "reaffirm the Book of Mormon as a divinely inspired record." He stated that "while the Church affirms the Book of Mormon as scripture, and makes it available for study and use in various languages, we do not attempt to mandate the degree of belief or use. This position is in keeping with our longstanding tradition that belief in the Book of Mormon is not to be used as a test of fellowship or membership in the church." There are a number of other churches that are part of the Latter Day Saint movement. Most of these churches were created as a result of issues ranging from differing doctrinal interpretations and acceptance of the movement's scriptures, including the Book of Mormon, to disagreements as to who was the divinely chosen successor to Joseph Smith.
These groups all have in common the acceptance of the Book of Mormon as scripture. It is this acceptance which distinguishes the churches of the Latter Day Saint movement from other Christian denominations. Separate editions of the Book of Mormon have been published by a number of churches in the Latter Day Saint movement, along with private individuals and foundations not endorsed by any specific denomination. Most of the archaeological, historical and scientific communities do not consider the Book of Mormon an ancient record of actual historical events. Their skepticism tends to focus on four main areas. Most adherents of the Latter Day Saint movement, however, consider the Book of Mormon to generally be a historically accurate account. Within the Latter Day Saint movement there are several apologetic groups that disagree with the skeptics and seek to reconcile the discrepancies in diverse ways. Among these apologetic groups, much work has been published by the Foundation for Ancient Research and Mormon Studies (FARMS) and the Foundation for Apologetic Information & Research (FAIR), defending the Book of Mormon as a literal history, countering arguments critical of its historical authenticity, or reconciling historical and scientific evidence with the text. One of the more common recent arguments is the limited geography model, which states that the people of the Book of Mormon covered only a limited geographical region in either Mesoamerica, South America, or the Great Lakes area. The LDS Church has published material indicating that science will support the historical authenticity of the Book of Mormon. The Book of Mormon was dictated by Joseph Smith to several scribes over a period of 13 months, resulting in three manuscripts. The 116 lost pages contained the first portion of the Book of Lehi; the pages were lost after Smith loaned the original, uncopied manuscript to Martin Harris. The first completed manuscript, called the original manuscript, was produced by a variety of scribes. Portions of the original manuscript were also used for typesetting. In October 1841, the entire original manuscript was placed into the cornerstone of the Nauvoo House, and sealed up until nearly forty years later when the cornerstone was reopened. It was then discovered that much of the original manuscript had been destroyed by water seepage and mold. Surviving manuscript pages were handed out to various families and individuals in the 1880s. Only 28 percent of the original manuscript now survives, including a remarkable find of fragments from 58 pages in 1991. The majority of what remains of the original manuscript is now kept in the LDS Church's Archives. The second completed manuscript, called the printer's manuscript, was a copy of the original manuscript produced by Oliver Cowdery and two other scribes. It is at this point that initial copyediting of the Book of Mormon was completed. Observations of the original manuscript show little evidence of corrections to the text. Shortly before his death in 1850, Cowdery gave the printer's manuscript to David Whitmer, another of the Three Witnesses. In 1903, the manuscript was bought from Whitmer's grandson by the Reorganized Church of Jesus Christ of Latter Day Saints, now known as the Community of Christ. On September 20, 2017, the LDS Church purchased the manuscript from the Community of Christ at a reported price of $35 million. The printer's manuscript is now the earliest surviving complete copy of the Book of Mormon.
The manuscript was imaged in 1923 and was recently made available for viewing online. Critical comparisons between surviving portions of the manuscripts show an average of two to three changes per page from the original manuscript to the printer's manuscript, with most changes being corrections of scribal errors such as misspellings, or the correction or standardization of grammar, inconsequential to the meaning of the text. The printer's manuscript was further edited, adding paragraphing and punctuation to the first third of the text. The printer's manuscript was not used fully in the typesetting of the 1830 version of the Book of Mormon; portions of the original manuscript were also used for typesetting. The original manuscript was used by Smith to further correct errors printed in the 1830 and 1837 versions of the Book of Mormon for the 1840 printing of the book. In the late 19th century the extant portion of the printer's manuscript remained with the family of David Whitmer, who had been a principal founder of the Latter Day Saint movement and who, by the 1870s, led the Church of Christ (Whitmerite). During the 1870s, according to the "Chicago Tribune", the LDS Church unsuccessfully attempted to buy it from Whitmer for a record price. LDS Church president Joseph F. Smith disputed this assertion in a 1901 letter, believing such a manuscript "possesses no value whatever." In 1895, Whitmer's grandson George Schweich inherited the manuscript. By 1903, Schweich had mortgaged the manuscript for $1,800 and, needing to raise at least that sum, sold a collection including 72 percent of the original printer's manuscript (along with John Whitmer's manuscript history, parts of Joseph Smith's translation of the Bible, manuscript copies of several revelations, and a piece of paper containing copied Book of Mormon characters) to the RLDS Church (now the Community of Christ) for $2,450, with $2,300 of this amount for the printer's manuscript. The LDS Church had not sought to purchase the manuscript. In 2015, this remaining portion was published by the Church Historian's Press in its "Joseph Smith Papers" series, in Volume Three of "Revelations and Translations"; and, in 2017, the LDS Church bought the printer's manuscript for a reported $35 million. The original 1830 publication did not have verse markers, although the individual books were divided into relatively long chapters. Just as the Bible's present chapter and verse notation system is a later addition of Bible publishers to books that were originally solid blocks of undivided text, the chapter and verse markers within the books of the Book of Mormon are conventions, not part of the original text. Publishers from different factions of the Latter Day Saint movement have published different chapter and verse notation systems. The two most significant are the LDS system, introduced in 1879, and the RLDS system, which is based on the original 1830 chapter divisions. The RLDS 1908 edition, RLDS 1966 edition, the Church of Christ (Temple Lot) edition, and Restored Covenant editions use the RLDS system, while most other current editions use the LDS system. Although some earlier unpublished studies had been prepared, not until the early 1970s was true textual criticism applied to the Book of Mormon.
At that time BYU Professor Ellis Rasmussen and his associates were asked by the LDS Church to begin preparation for a new edition of the church's scriptures. One aspect of that effort entailed digitizing the text and preparing appropriate footnotes; another aspect required establishing the most dependable text. To that latter end, Stanley R. Larson (a Rasmussen graduate student) set about applying modern text critical standards to the manuscripts and early editions of the Book of Mormon as his thesis project—which he completed in 1974. Larson carefully examined the original manuscript (the one dictated by Joseph Smith to his scribes) and the printer's manuscript (the copy Oliver Cowdery prepared for the printer in 1829–1830), and compared them with the first, second, and third editions of the Book of Mormon; this was done to determine what sort of changes had occurred over time and to make judgments as to which readings were the most original. Larson proceeded to publish a useful set of well-argued articles on the phenomena which he had discovered. Many of his observations were included as improvements in the 1981 LDS edition of the Book of Mormon. By 1979, with the establishment of the Foundation for Ancient Research and Mormon Studies (FARMS) as a California non-profit research institution, an effort led by Robert F. Smith began to take full account of Larson's work and to publish a critical text of the Book of Mormon. Thus was born the FARMS Critical Text Project, which published the first volume of the three-volume Book of Mormon Critical Text in 1984. The third volume of that first edition was published in 1987, but was already being superseded by a second, revised edition of the entire work, greatly aided through the advice and assistance of a team that included Yale doctoral candidate Grant Hardy, Dr. Gordon C. Thomasson, Professor John W. Welch (the head of FARMS), and Professor Royal Skousen. However, these were merely preliminary steps to a far more exacting and all-encompassing project. In 1988, with that preliminary phase of the project completed, Skousen took over as editor and head of the FARMS Critical Text of the Book of Mormon Project and proceeded to gather still scattered fragments of the original manuscript of the Book of Mormon and to have advanced photographic techniques applied to obtain fine readings from otherwise unreadable pages and fragments. He also closely examined the printer's manuscript (then owned by the RLDS Church) for differences in types of ink or pencil, in order to determine when and by whom they were made. He also collated the various editions of the Book of Mormon down to the present to see what sorts of changes had been made over time. Thus far, Skousen has published complete transcripts of the Original and Printer's Manuscripts, as well as a six-volume analysis of textual variants. Still in preparation are a history of the text, and a complete electronic collation of editions and manuscripts (volumes 3 and 5 of the Project, respectively). Yale University has in the meantime published an edition of the Book of Mormon which incorporates all aspects of Skousen's research.
Differences between the original and printer's manuscripts, the 1830 printed version, and modern versions of the Book of Mormon have led some critics to claim that evidence that could have proven Smith fabricated the Book of Mormon has been systematically removed, or that the changes are attempts to hide embarrassing aspects of the church's past; Mormon scholars view the changes as superficial, done to clarify the meaning of the text. The LDS version of the Book of Mormon has been translated into 83 languages and selections have been translated into an additional 25 languages. In 2001, the LDS Church reported that all or part of the Book of Mormon was available in the native language of 99 percent of Latter-day Saints and 87 percent of the world's total population. Translations into languages without a tradition of writing (e.g., Kaqchikel, Tzotzil) are available on audio cassette. Translations into American Sign Language are available on videocassette and DVD. Typically, translators are members of the LDS Church who are employed by the church and translate the text from the original English. Each manuscript is reviewed several times before it is approved and published. In 1998, the LDS Church stopped translating selections from the Book of Mormon, and instead announced that each new translation it approves would be a full edition. Events of the Book of Mormon are the focus of several LDS Church films, including "The Life of Nephi" (1915), "How Rare a Possession" (1987) and "The Testaments of One Fold and One Shepherd" (2000). Films in LDS cinema (i.e., films not officially commissioned by the LDS Church) include a 2003 film adaptation and "Passage to Zarahemla" (2007). Second Nephi 9:20–27 from the Book of Mormon is quoted in a funeral service in Alfred Hitchcock's film "Family Plot". In 2003, a "South Park" episode titled "All About Mormons" parodied the origins of the Book of Mormon. In 2011, a religious satire musical titled "The Book of Mormon", written by South Park creators Trey Parker and Matt Stone in collaboration with Robert Lopez, premiered on Broadway, winning nine Tony Awards, including Best Musical; it went on to a long run. Its London production won the Olivier Award for best musical. The LDS Church, which distributes free copies of the Book of Mormon, reported in 2011 that 150 million copies of the book have been printed since its initial publication. The initial printing of the Book of Mormon in 1830 produced 5,000 copies. The 50 millionth copy was printed in 1990, with the 100 millionth following in 2000 and the 150 millionth in 2011. The Book of Mormon has occasionally been analyzed in a non-religious context for its literary merits by writers such as Terryl Givens and Grant Hardy. In 2019, Oxford University Press published "Americanist Approaches to The Book of Mormon."
https://en.wikipedia.org/wiki?curid=3978
Baptists Baptists form a major branch of Protestantism distinguished by baptizing professing believers only (believer's baptism, as opposed to infant baptism), and doing so by complete immersion (as opposed to affusion or aspersion). Baptist churches also generally subscribe to the doctrines of soul competency (the responsibility and accountability of every person before God), "sola fide" (salvation by faith alone), "sola scriptura" (scripture alone as the rule of faith and practice) and congregationalist church government. Baptists generally recognize two ordinances: baptism and communion. Diverse from their beginning, those identifying as Baptists today differ widely from one another in what they believe, how they worship, their attitudes toward other Christians, and their understanding of what is important in Christian discipleship. Historians trace the earliest "Baptist" church to 1609 in Amsterdam, Dutch Republic, with English Separatist John Smyth as its pastor. In accordance with his reading of the New Testament, he rejected baptism of infants and instituted baptism only of believing adults. Baptist practice spread to England, where the General Baptists considered Christ's atonement to extend to all people, while the Particular Baptists believed that it extended only to the elect. Thomas Helwys formulated a distinctively Baptist request that the church and the state be kept separate in matters of law, so that individuals might have freedom of religion. Helwys died in prison as a consequence of the religious conflict with English dissenters under King James I. In 1638, Roger Williams established the first Baptist congregation in the North American colonies. In the 18th and 19th centuries, the First and Second Great Awakenings increased church membership in the United States. Baptist missionaries have spread their faith to every continent. Baptist historian Bruce Gourley outlines four main views of Baptist origins. Under the first, modern Baptist churches trace their history to the English Separatist movement in the 1600s, the century after the rise of the original Protestant denominations. This view of Baptist origins has the most historical support and is the most widely accepted. Adherents to this position consider the influence of Anabaptists upon early Baptists to be minimal. It was a time of considerable political and religious turmoil. Both individuals and churches were willing to give up their theological roots if they became convinced that a more biblical "truth" had been discovered. During the Protestant Reformation, the Church of England (Anglicans) separated from the Roman Catholic Church. There were some Christians who were not content with the achievements of the mainstream Protestant Reformation. There also were Christians who were disappointed that the Church of England had not made corrections of what some considered to be errors and abuses. Of those most critical of the Church's direction, some chose to stay and try to make constructive changes from within the Anglican Church. They became known as "Puritans" and are described by Gourley as cousins of the English Separatists. Others decided they must leave the Church because of their dissatisfaction and became known as the Separatists. Historians trace the earliest Baptist church back to 1609 in Amsterdam, with John Smyth as its pastor. Three years earlier, while a Fellow of Christ's College, Cambridge, he had broken his ties with the Church of England.
Reared in the Church of England, he became "Puritan, English Separatist, and then a Baptist Separatist," and ended his days working with the Mennonites. He began meeting in England with 60–70 English Separatists, in the face of "great danger." The persecution of religious nonconformists in England led Smyth to go into exile in Amsterdam with fellow Separatists from the congregation he had gathered in Lincolnshire, separate from the established church (Anglican). Smyth and his lay supporter, Thomas Helwys, together with those they led, broke with the other English exiles because Smyth and Helwys were convinced they should be baptized as believers. In 1609 Smyth first baptized himself and then baptized the others. In 1609, while still there, Smyth wrote a tract titled "The Character of the Beast," or "The False Constitution of the Church." In it he expressed two propositions: first, infants are not to be baptized; and second, "Antichristians converted are to be admitted into the true Church by baptism." Hence, his conviction was that a scriptural church should consist only of regenerate believers who have been baptized on a personal confession of faith. He rejected the Separatist movement's doctrine of infant baptism (paedobaptism). Shortly thereafter, Smyth left the group, and layman Thomas Helwys took over the leadership, leading the church back to England in 1611. Ultimately, Smyth became committed to believers' baptism as the only biblical baptism. He was convinced on the basis of his interpretation of Scripture that infants would not be damned should they die in infancy. Smyth, convinced that his self-baptism was invalid, applied to the Mennonites for membership. He died while waiting for membership, and some of his followers became Mennonites. Thomas Helwys and others kept their baptism and their Baptist commitments. The modern Baptist denomination is an outgrowth of Smyth's movement. Baptists rejected the name Anabaptist when they were called that by opponents in derision. McBeth writes that as late as the 18th century, many Baptists referred to themselves as "the Christians commonly—though "falsely"—called Anabaptists." Another milestone in the early development of Baptist doctrine came in 1638 with John Spilsbury, a Calvinistic minister who helped to promote the strict practice of believer's baptism by immersion. According to Tom Nettles, professor of historical theology at Southern Baptist Theological Seminary, "Spilsbury's cogent arguments for a gathered, disciplined congregation of believers baptized by immersion as constituting the New Testament church gave expression to and built on insights that had emerged within separatism, advanced in the life of John Smyth and the suffering congregation of Thomas Helwys, and matured in Particular Baptists." A minority view is that early-17th-century Baptists were influenced by (but not directly connected to) continental Anabaptists. According to this view, the General Baptists shared similarities with Dutch Waterlander Mennonites (one of many Anabaptist groups), including believer's baptism only, religious liberty, separation of church and state, and Arminian views of salvation, predestination and original sin. Representative writers include A.C. Underwood and William R. Estep. Gourley wrote that among some contemporary Baptist scholars who emphasize the faith of the community over soul liberty, the Anabaptist influence theory is making a comeback. However, relations between Baptists and Anabaptists were strained early on.
In 1624, the then five existing Baptist churches of London issued a condemnation of the Anabaptists. Furthermore, the original group associated with Smyth and popularly believed to be the first Baptists broke with the Waterlander Mennonite Anabaptists after a brief period of association in the Netherlands. Traditional Baptist historians write from the perspective that Baptists had existed since the time of Christ; however, the Southern Baptist Convention passed resolutions rejecting this view in 1859. Proponents of the Baptist successionist or perpetuity view consider the Baptist movement to have existed independently from Roman Catholicism and prior to the Protestant Reformation. The perpetuity view is often identified with "The Trail of Blood", a booklet of five lectures by J. M. Carroll published in 1931. Other Baptist writers who advocate the successionist theory of Baptist origins are John T. Christian, Thomas Crosby, G. H. Orchard, J. M. Cramp, William Cathcart, Adam Taylor and D. B. Ray. This view was also held by the English Baptist preacher Charles Spurgeon, as well as Jesse Mercer, the namesake of Mercer University. In 1898 William Whitsitt was pressured to resign his presidency of the Southern Baptist Theological Seminary for denying Baptist successionism. In 1612, Thomas Helwys established a Baptist congregation in London, consisting of congregants from Smyth's church. A number of other Baptist churches sprang up, and they became known as the General Baptists. The Particular Baptists were established when a group of Calvinist Separatists adopted believer's baptism. The Particular Baptists consisted of seven churches by 1644 and had created a confession of faith called the First London Confession of Faith. Both Roger Williams and John Clarke, his compatriot and coworker for religious freedom, are variously credited as founding the earliest Baptist church in North America. In 1639, Williams established a Baptist church in Providence, Rhode Island, and Clarke began a Baptist church in Newport, Rhode Island. According to a Baptist historian who has researched the matter extensively, "There is much debate over the centuries as to whether the Providence or Newport church deserved the place of 'first' Baptist congregation in America. Exact records for both congregations are lacking." The Great Awakening energized the Baptist movement, and the Baptist community experienced spectacular growth. Baptists became the largest Christian community in many southern states, including among the black population. Baptist missionary work in Canada began in the British colony of Nova Scotia (present-day Nova Scotia and New Brunswick) in the 1760s. The first official record of a Baptist church in Canada was that of the Horton Baptist Church (now Wolfville) in Wolfville, Nova Scotia, on 29 October 1778. The church was established with the assistance of the New Light evangelist Henry Alline. Many of Alline's followers, after his death, would convert and strengthen the Baptist presence in the Atlantic region. Two major groups of Baptists formed the basis of the churches in the Maritimes. These were referred to as Regular Baptists (Calvinistic in their doctrine) and Free Will Baptists (Arminian in their doctrine). In May 1845, the Baptist congregations in the United States split over slavery and missions. The Home Mission Society prevented slaveholders from being appointed as missionaries.
The split created the Southern Baptist Convention, while the northern congregations formed their own umbrella organization, now called the American Baptist Churches USA (ABC-USA). The Methodist Episcopal Church, South had recently separated over the issue of slavery, and southern Presbyterians would do so shortly thereafter. The Baptist churches in Ukraine were preceded by the German Anabaptist and Mennonite communities, who had been living in the south of Ukraine since the 16th century and who practiced adult believer's baptism. The first Baptist baptism (adult baptism by full immersion) in Ukraine took place in 1864 on the river Inhul in the Yelizavetgrad region (now Kropyvnytskyi region), in a German settlement. In 1867, the first Baptist communities were organized in that area. From there, the Baptist movement spread across the south of Ukraine and then to other regions as well. One of the first Baptist communities was registered in Kiev in 1907, and in 1908 the First All-Russian Convention of Baptists was held there, as Ukraine was still controlled by the Russian Empire. The All-Russian Union of Baptists was established in the town of Yekaterinoslav (now Dnipro) in Southern Ukraine. At the end of the 19th century, estimates suggest that there were between 100,000 and 300,000 Baptists in Ukraine. An independent All-Ukrainian Baptist Union of Ukraine was established during the brief period of Ukraine's independence in the early 20th century and once again after the fall of the Soviet Union; the largest of these bodies is currently known as the Evangelical Baptist Union of Ukraine. Many Baptist churches choose to affiliate with organizational groups that provide fellowship without control. The largest such group in the US is the Southern Baptist Convention. There also are a substantial number of smaller cooperative groups. Finally, there are Independent Baptist churches that choose to remain independent of any denomination, organization, or association. It has been suggested that a primary Baptist principle is that local Baptist churches are independent and self-governing, and if so the term 'Baptist denomination' may be considered somewhat incongruous. In 1905, Baptists worldwide formed the Baptist World Alliance (BWA). The BWA's goals include caring for the needy, leading in world evangelism and defending human rights and religious freedom. Though it played a role in the founding of the BWA, the Southern Baptist Convention severed its affiliation with the BWA in 2004. As of 2010, about 100 million Christians identified themselves as Baptist or belonged to Baptist-type churches. In 2017, the Baptist World Alliance counted 47 million members. Not all Baptist groups cooperate with the Alliance; as noted, the Southern Baptist Convention withdrew in 2004. Baptists are present on almost every continent in large denominations. The largest communities that are part of the Baptist World Alliance are in Nigeria (3.5 million) and the Democratic Republic of the Congo (2 million) in Africa, India (2.5 million) and Myanmar (1 million) in Asia, and the United States (35 million) and Brazil (1.8 million) in the Americas. In 1991, Ukraine had the second largest Baptist community in the world, behind only the United States. According to Barna Group researchers, Baptists are the largest denominational grouping of born again Christians in the USA.
A 2009 ABCNEWS/Beliefnet phone poll of 1,022 adults suggests that fifteen percent of Americans identify themselves as Baptists. A large percentage of Baptists in North America are found in six bodies—the Southern Baptist Convention (SBC); American Baptist Association (ABA); National Baptist Convention (NBC); National Baptist Convention of America, Inc. (NBCA); American Baptist Churches USA (ABC); and Baptist Bible Fellowship International (BBFI). There are three states in the world with a Baptist majority: Mississippi and Texas in the USA (over 50%), and Nagaland in India (more than 75%). Membership policies vary due to the autonomy of churches, but the traditional method by which an individual becomes a member of a church is through believer's baptism (which is a public profession of faith in Jesus, followed by water baptism). Most Baptists do not believe that baptism is a requirement for salvation, but rather a public expression of one's inner repentance and faith. Therefore, some churches will admit into membership persons who make a profession without believer's baptism. In general, Baptist churches do not have a stated age restriction on membership, but believer's baptism requires that an individual be able to freely and earnestly profess their faith. (See Age of Accountability.) Baptists, like other Christians, are defined by doctrine—some of it common to all orthodox and evangelical groups and a portion of it distinctive to Baptists. Through the years, different Baptist groups have issued confessions of faith—without considering them to be "creeds"—to express their particular doctrinal distinctions in comparison to other Christians as well as in comparison to other Baptists. Baptist denominations are traditionally seen as belonging to two parties: General Baptists, who uphold Arminian theology, and Particular Baptists, who uphold Reformed theology. During the holiness movement, some General Baptists accepted the teaching of a second work of grace and formed denominations that emphasized this belief, such as the Ohio Valley Association of the Christian Baptist Churches of God and the Holiness Baptist Association. Most Baptists are evangelical in doctrine, but Baptist beliefs can vary due to the congregational governance system that gives autonomy to individual local Baptist churches. Historically, Baptists have played a key role in encouraging religious freedom and separation of church and state. Shared doctrines would include beliefs about one God; the virgin birth; miracles; atonement for sins through the death, burial, and bodily resurrection of Jesus; the Trinity; the need for salvation (through belief in Jesus Christ as the Son of God, his death and resurrection); grace; the Kingdom of God; last things (eschatology) (Jesus Christ will return personally and visibly in glory to the earth, the dead will be raised, and Christ will judge everyone in righteousness); and evangelism and missions. Some historically significant Baptist doctrinal documents include the 1689 London Baptist Confession of Faith, the 1742 Philadelphia Baptist Confession, the 1833 New Hampshire Baptist Confession of Faith, the Southern Baptist Convention's "Baptist Faith and Message," and written church covenants which some individual Baptist churches adopt as a statement of their faith and beliefs. Most Baptists hold that no church or ecclesiastical organization has inherent authority over a Baptist church.
Churches can properly relate to each other under this polity only through voluntary cooperation, never by any sort of coercion. Furthermore, this Baptist polity calls for freedom from governmental control. Exceptions to this form of local governance include a few churches that submit to the leadership of a body of elders, as well as the Episcopal Baptists, who have an episcopal system. Baptists generally believe in the literal Second Coming of Christ. Beliefs among Baptists regarding the "end times" include amillennialism, dispensationalism, and historic premillennialism, with views such as postmillennialism and preterism receiving some support. Many Baptists also hold a number of additional distinctive principles. Since there is no hierarchical authority and each Baptist church is autonomous, there is no official set of Baptist theological beliefs. These differences exist both among associations and even among churches within the associations, and there are several doctrinal issues on which Baptists differ widely. Baptists have faced many controversies in their 400-year history, controversies rising to the level of crises. Baptist historian Walter Shurden says the word "crisis" comes from the Greek word meaning "to decide." Shurden writes that contrary to the presumed negative view of crises, some controversies that reach a crisis level may actually be "positive and highly productive." He claims that even schism, though never ideal, has often produced positive results. In his opinion, crises among Baptists have each become decision-moments that shaped their future. Some controversies that have shaped Baptists include the "missions crisis", the "slavery crisis", the "landmark crisis", and the "modernist crisis". Early in the 19th century, the rise of the modern missions movement, and the backlash against it, led to widespread and bitter controversy among the American Baptists. During this era, the American Baptists were split between missionary and anti-missionary factions. A substantial secession of Baptists went into the movement led by Alexander Campbell to return to a more fundamental church. Leading up to the American Civil War, Baptists became embroiled in the controversy over slavery in the United States. Whereas in the First Great Awakening Methodist and Baptist preachers had opposed slavery and urged manumission, over the decades they made more of an accommodation with the institution. They worked with slaveholders in the South to urge a paternalistic version of the institution. Both denominations made direct appeals to slaves and free blacks for conversion. The Baptists particularly allowed them active roles in congregations. By the mid-19th century, northern Baptists tended to oppose slavery. As tensions increased, in 1844 the Home Mission Society refused to appoint a slaveholder as a missionary who had been proposed by Georgia. It noted that missionaries could not take servants with them, and also that the board did not want to appear to condone slavery. The Southern Baptist Convention was formed by nine state conventions in 1845. Its founders believed that the Bible sanctions slavery and that it was acceptable for Christians to own slaves, and held that slavery was a human institution which Baptist teaching could make less harsh. By this time many planters were part of Baptist congregations, and some of the denomination's prominent preachers, such as the Rev. Basil Manly, Sr., president of the University of Alabama, were also planters who owned slaves.
As early as the late 18th century, black Baptists began to organize separate churches, associations and mission agencies. Blacks set up some independent Baptist congregations in the South before the American Civil War. White Baptist associations maintained some oversight of these churches. In the postwar years, freedmen quickly left the white congregations and associations, setting up their own churches. In 1866 the Consolidated American Baptist Convention, formed from black Baptists of the South and West, helped southern associations set up black state conventions, which they did in Alabama, Arkansas, Virginia, North Carolina, and Kentucky. In 1880 black state conventions united in the national Foreign Mission Convention to support black Baptist missionary work. Two other national black conventions were formed, and in 1895 they united as the National Baptist Convention. This organization later went through its own changes, spinning off other conventions. It is the largest black religious organization and the second-largest Baptist organization in the world. Baptists are numerically most dominant in the Southeast. In 2007, the Pew Research Center's Religious Landscape Survey found that 45% of all African Americans identify with Baptist denominations, with the vast majority of those being within the historically black tradition. Elsewhere in the Americas, in the Caribbean in particular, Baptist missionaries and members took an active role in the anti-slavery movement. In Jamaica, for example, William Knibb, a prominent British Baptist missionary, worked toward the emancipation of slaves in the British West Indies (which took place in full in 1838). Knibb also supported the creation of "Free Villages" and sought funding from English Baptists to buy land for freedmen to cultivate; the Free Villages were envisioned as rural communities to be centred around a Baptist church where emancipated slaves could farm their own land. Thomas Burchell, missionary minister in Montego Bay, also was active in this movement, gaining funds from Baptists in England to buy land for what became known as Burchell Free Village. Prior to emancipation, Baptist deacon Samuel Sharpe, who served with Burchell, organized a general strike of slaves seeking better conditions. It developed into a major rebellion of as many as 60,000 slaves, which became known as the Christmas Rebellion (for the season when it took place) or the Baptist War. It was put down by government troops within two weeks. During and after the rebellion, an estimated 200 slaves were killed outright, with more than 300 judicially executed later by prosecution in the courts, sometimes for minor offenses. Baptists were active after emancipation in promoting the education of former slaves; for example, Jamaica's Calabar High School, named after the port of Calabar in Nigeria, was founded by Baptist missionaries. At the same time, during and after slavery, slaves and free blacks formed their own Spiritual Baptist movements, breakaway spiritual movements whose theology often expressed resistance to oppression. In the American South, the interpretation of the American Civil War, abolition of slavery and postwar period has differed sharply by race since those years. Americans have often interpreted great events in religious terms. Historian Wilson Fallin contrasts the interpretation of Civil War and Reconstruction in white versus black memory by analyzing Baptist sermons documented in Alabama.
Soon after the Civil War, most black Baptists in the South left the Southern Baptist Convention, reducing its numbers by hundreds of thousands or more. They quickly organized their own congregations and developed their own regional and state associations and, by the end of the 19th century, a national convention. White preachers in Alabama after Reconstruction expressed a starkly different view from that of Black preachers, who interpreted the Civil War, Emancipation and Reconstruction as "God's gift of freedom." They had a gospel of liberation, having long identified with the Book of Exodus from slavery in the Old Testament. They took opportunities to exercise their independence, to worship in their own way, to affirm their worth and dignity, and to proclaim the fatherhood of God and the brotherhood of man. Most of all, they quickly formed their own churches, associations, and conventions to operate freely without white supervision. These institutions offered self-help and racial uplift, a place to develop and use leadership, and places for proclamation of the gospel of liberation. As a result, black preachers said that God would protect and help his people; God would be their rock in a stormy land. The Southern Baptist Convention supported white supremacy and its results: disenfranchising most blacks and many poor whites at the turn of the 20th century by raising barriers to voter registration, and passage of racial segregation laws that enforced the system of Jim Crow. Its members largely resisted the civil rights movement in the South, which sought to enforce blacks' constitutional rights to public access and voting, and resisted enforcement of midcentury federal civil rights laws. On 20 June 1995, the Southern Baptist Convention voted to adopt a resolution renouncing its racist roots and apologizing for its past defense of slavery. More than 20,000 Southern Baptists registered for the meeting in Atlanta. The resolution declared that messengers, as SBC delegates are called, "unwaveringly denounce racism, in all its forms, as deplorable sin" and "lament and repudiate historic acts of evil such as slavery from which we continue to reap a bitter harvest." It offered an apology to all African Americans for "condoning and/or perpetuating individual and systemic racism in our lifetime" and repentance for "racism of which we have been guilty, whether consciously or unconsciously." Although Southern Baptists have condemned racism in the past, this was the first time the convention, predominantly white since the Reconstruction era, had specifically addressed the issue of slavery. The statement sought forgiveness "from our African-American brothers and sisters" and pledged to "eradicate racism in all its forms from Southern Baptist life and ministry." In 1995 about 500,000 members of the 15.6-million-member denomination were African Americans and another 300,000 were ethnic minorities. The resolution marked the denomination's first formal acknowledgment that racism played a role in its founding. Southern Baptist Landmarkism sought to reset the ecclesiastical separation which had characterized the old Baptist churches in an era when inter-denominational union meetings were the order of the day. James Robinson Graves was an influential Baptist of the 19th century and the primary leader of this movement. While some Landmarkers eventually separated from the Southern Baptist Convention, the movement continued to influence the Convention into the 20th and 21st centuries.
For instance, in 2005, the Southern Baptist International Mission Board forbade its missionaries to receive alien immersions for baptism. The rise of theological modernism in the latter 19th and early 20th centuries also greatly affected Baptists. The Landmark movement, already mentioned, has been described as a reaction among Southern Baptists in the United States against incipient modernism. In England, Charles Haddon Spurgeon fought against modernistic views of the Scripture in the Downgrade Controversy and severed his church from the Baptist Union as a result. The Northern Baptist Convention in the United States had internal conflict over modernism in the early 20th century, ultimately embracing it. Two new conservative associations of congregations that separated from the Convention were founded as a result: the General Association of Regular Baptist Churches in 1933 and the Conservative Baptist Association of America in 1947. Following similar conflicts over modernism, the Southern Baptist Convention adhered to conservative theology as its official position. In the late 20th century, Southern Baptists who disagreed with this direction founded two new groups: the liberal Alliance of Baptists in 1987 and the more moderate Cooperative Baptist Fellowship in 1991. Originally both schisms continued to identify as Southern Baptist, but over time they "became permanent new families of Baptists."
https://en.wikipedia.org/wiki?curid=3979
Blackjack Blackjack, formerly also Black Jack and Vingt-Un, is the American member of a global family of banking games known as Twenty-One, whose relatives include Pontoon and Vingt-et-Un. It is a comparing card game between one or more players and a dealer, where each player in turn competes against the dealer. Players do not compete against each other. It is played with one or more decks of 52 cards, and is the most widely played casino banking game in the world. Blackjack's precursor was "twenty-one", a game of unknown origin. The first written reference is found in a book by the Spanish author Miguel de Cervantes, most famous for writing "Don Quixote". Cervantes was a gambler, and the main characters of his tale "Rinconete y Cortadillo", from "Novelas Ejemplares", are a couple of cheats working in Seville. They are proficient at cheating at "veintiuna" (Spanish for twenty-one), and state that the object of the game is to reach 21 points without going over and that the ace values 1 or 11. The game is played with the Spanish "baraja" deck. This short story was written between 1601 and 1602, implying that "veintiuna" was played in Castile since the beginning of the 17th century or earlier. Later references to this game are found in France and Spain. There is a popular myth that, when Vingt-Un ("Twenty-One") was introduced into the United States in the early 1800s (other sources say during the First World War, and still others the 1930s), gambling houses offered bonus payouts to stimulate players' interest. One such bonus was a ten-to-one payout if the player's hand consisted of the ace of spades and a black jack (either the jack of clubs or the jack of spades). This hand was called a "blackjack", and it is claimed that the name stuck to the game even though the ten-to-one bonus was soon withdrawn. French card historian Thierry Depaulis has debunked this story, showing that the name Blackjack was first given to the game of American Vingt-Un by prospectors during the Klondike Gold Rush (1896–99), the bonus being the usual ace and any 10-point card. Since the term "blackjack" also refers to the mineral zincblende, which was often associated with gold or silver deposits, he suggests that the mineral name was transferred by prospectors to the top bonus in the game. He was unable to find any historical evidence for a special bonus for having the combination of an ace with a black jack. The first scientific and mathematically sound attempt to devise an optimal blackjack playing strategy was revealed in September 1956. Roger Baldwin, Wilbert Cantey, Herbert Maisel and James McDermott published a paper titled "The Optimum Strategy in Blackjack" in the Journal of the American Statistical Association. This paper would become the foundation of future sound efforts to beat the game of blackjack. Ed Thorp would use Baldwin's hand calculations to verify the basic strategy and later publish (in 1963) his famous book "Beat the Dealer". Players are each dealt two cards, face up or down depending on the casino and the table. In the U.S., the dealer is also dealt two cards, normally one up (exposed) and one down (hidden). In most other countries, the dealer only receives one card, face up. The value of cards two through ten is their pip value (2 through 10). Face cards (jack, queen, and king) are all worth ten. Aces can be worth one or eleven. A hand's value is the sum of the card values. Players are allowed to draw additional cards to improve their hands.
A hand with an ace valued as 11 is called "soft", meaning that the hand will not bust by taking an additional card; the value of the ace becomes one to prevent the hand from exceeding 21. Otherwise, the hand is called "hard". Once all the players have completed their hands, it is the dealer's turn. The dealer's hand will not be completed if all players have either busted or received blackjacks. The dealer then reveals the hidden card and must hit until the cards total 17 or more points. At 17 points or higher the dealer must stand. (At most tables the dealer also hits on a "soft" 17, i.e. a hand containing an ace and one or more other cards totaling six.) You are betting that you have a better hand than the dealer. The better hand is the hand where the sum of the card values is closer to 21 without exceeding 21. Blackjack has over 100 rule variations. Since the 1960s, blackjack has been a high-profile target of advantage players, particularly card counters, who track the profile of cards that have been dealt and adapt their wagers and playing strategies accordingly. In response, casinos have introduced counter-measures that can increase the difficulty of advantage play. Blackjack has inspired other casino games, including Spanish 21 and pontoon. At a casino blackjack table, the dealer faces five to seven playing positions from behind a semicircular table. Between one and eight standard 52-card decks are shuffled together. At the beginning of each round, up to three players can place their bets in the "betting box" at each position in play. That is, there could be up to three players at each position at a table in jurisdictions that allow back betting. The player whose bet is at the front of the betting box is deemed to have control over the position, and the dealer will consult the controlling player for playing decisions regarding the hand; the other players of that box are said to "play behind". Any player is usually allowed to control or bet in as many boxes as desired at a single table, but it is prohibited for an individual to play on more than one table at a time or to place multiple bets within a single box. In many U.S. casinos, however, players are limited to playing two or three positions at a table and often only one person is allowed to bet on each position. The dealer deals cards from their left (the position on the dealer's far left is often referred to as "first base") to their far right ("third base"). Each box is dealt an initial hand of two cards visible to the people playing on it, and often to any other players. The dealer's hand receives its first card face up, and in "hole card" games immediately receives its second card face down (the hole card), which the dealer peeks at but does not reveal unless it makes the dealer's hand a blackjack. Hole card games are sometimes played on tables with a small mirror or electronic sensor that is used to peek securely at the hole card. In European casinos, "no hole card" games are prevalent; the dealer's second card is neither drawn nor consulted until the players have all played their hands. Cards are dealt either from one or two handheld decks, from a dealer's shoe, or from a shuffling machine. Single cards are dealt to each wagered-on position clockwise from the dealer's left, followed by a single card to the dealer, followed by an additional card to each of the positions in play. The players' initial cards may be dealt face up or face down (more common in single-deck games).
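The hand-valuation and dealer-drawing rules above are mechanical enough to sketch in code. The following Python fragment is a minimal illustration only; the function names and the hit_soft_17 flag are this sketch's own choices, with the flag modeling the soft-17 table rule noted above.

def hand_value(cards):
    """Return (total, is_soft) for a list of card values.
    Pip cards are 2-10, face cards count as 10, and an ace is given as 11."""
    total = sum(cards)
    aces = cards.count(11)
    # Demote aces from 11 to 1 while the hand would otherwise bust.
    while total > 21 and aces:
        total -= 10
        aces -= 1
    return total, aces > 0  # a remaining 11-valued ace makes the hand "soft"

def dealer_plays(cards, draw, hit_soft_17=True):
    """Draw for the dealer until the stand rule is met.
    `draw` is any callable returning the next card's value."""
    total, soft = hand_value(cards)
    while total < 17 or (total == 17 and soft and hit_soft_17):
        cards.append(draw())
        total, soft = hand_value(cards)
    return total

For example, hand_value([11, 6]) returns (17, True), the soft 17 discussed above, and a dealer holding it would draw again at most tables.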
The object of the game from the player's perspective is to win money by creating card totals that are higher than those of the dealer's hand but do not exceed 21 ("busting"/"breaking"), or alternatively, by "standing" (not taking a card) at any total in the hope that the dealer will bust. On their turn, players must choose whether to "hit" (take a card), "stand" (end their turn), "double" (double wager, take a single card and finish), "split" (if the two cards have the same value, separate them to make two hands) or "surrender" (give up a half-bet and retire from the game). Number cards count as their natural value; the jack, queen, and king (also known as "face cards" or "pictures") count as 10; aces are valued as either 1 or 11 according to the player's choice. If the hand value exceeds 21 points, it busts, and all bets on it are immediately forfeit. After all boxes have finished playing, the dealer's hand is resolved by drawing cards until the hand busts or achieves a value of 17 or higher (a dealer total of 17 including an ace valued as 11, also known as a "soft 17", must be drawn to in some games and must stand in others). The dealer never doubles, splits, or surrenders. If the dealer busts, all remaining player hands win. If the dealer does not bust, each remaining bet wins if its hand is higher than the dealer's, and loses if it is lower. If a player receives 21 on the first and second cards, it is considered a "natural" or "blackjack" and the player is paid out immediately unless the dealer also has a natural, in which case the hand ties. In the case of a tied score, known as a "push" or "standoff", bets are normally returned without adjustment; however, a blackjack beats any hand that is not a blackjack, even one with a value of 21. Wins are paid out at 1:1, or equal to the wager, except for player blackjacks, which are traditionally paid at 3:2 (meaning the player receives three dollars for every two bet) or one-and-a-half times the wager. Many casinos today pay blackjacks at less than 3:2 at some tables; for instance, single-deck blackjack tables often pay 6:5 for a blackjack instead of 3:2. Blackjack games almost always provide a side bet called insurance, which may be played when the dealer's upcard is an ace. Additional side bets, such as "Dealer Match", which pays when the player's cards match the dealer's up card, are sometimes available. After receiving an initial two cards, the player has up to four standard options: "hit", "stand", "double down", or "split". Each option has a corresponding hand signal. Some games give the player a fifth option, "surrender". Hand signals are used to assist the "eye in the sky", a person or video camera located above the table and sometimes concealed behind one-way glass. The eye in the sky usually makes a video recording of the table, which helps in resolving disputes and identifying dealer mistakes, and is also used to protect the casino against dealers who steal chips or players who cheat. The recording can further be used to identify advantage players whose activities, while legal, make them undesirable customers. In the event of a disagreement between a player's hand signals and their words, the hand signal takes precedence. Each hand may normally "hit" as many times as desired so long as the total is not above hard 20. On reaching 21 (including soft 21), the hand is normally required to stand; busting is an irrevocable loss and the players' wagers are immediately forfeited to the house.
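The settlement rules just described (a natural beats an ordinary 21, ties push, blackjack traditionally paying 3:2) can be condensed into a short sketch. This is an illustration under the assumptions above, not any casino's official logic; the function name is this example's own, and setting blackjack_payout=1.2 models a 6:5 table.

def settle(player_total, dealer_total, bet,
           player_natural=False, dealer_natural=False,
           blackjack_payout=1.5):
    """Return the player's net result, in money units, for one hand."""
    if player_total > 21:
        return -bet                    # a busted player loses immediately
    if player_natural and dealer_natural:
        return 0.0                     # blackjack against blackjack: push
    if player_natural:
        return bet * blackjack_payout  # 3:2 pays 1.5x; 6:5 pays only 1.2x
    if dealer_natural:
        return -bet                    # a natural beats a non-natural 21
    if dealer_total > 21 or player_total > dealer_total:
        return bet
    if player_total == dealer_total:
        return 0.0                     # push: the bet is returned
    return -bet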
After a bust or a stand, play proceeds to the next hand clockwise around the table. When the last hand has finished being played, the dealer reveals the hole card and stands or draws further cards according to the rules of the game for dealer drawing. When the outcome of the dealer's hand is established, any hands with bets remaining on the table are resolved (usually in counterclockwise order): bets on losing hands are forfeited, the bet on a push is left on the table, and winners are paid out. If the dealer's up card (the card that is showing) is an ace, you are allowed to make an "insurance" bet. This is a side bet that the dealer has a ten-value card as the down card, giving the dealer a blackjack. The dealer will ask for insurance bets from all players before the first player plays. You make this bet by placing chips equal to a maximum of half of your current bet on the "insurance bar" just above your cards. If the dealer has a ten, the insurance bet pays 2:1. In most casinos, the dealer then peeks at the down card and pays or takes the insurance bet immediately. In other casinos, the payoff waits until the end of the play. In face-down games, if you are playing more than one hand, you are allowed to look at all of your hands before deciding. This is the only time that you are allowed to look at the second hand before playing the first hand. Using one hand, look at your hands one at a time. Players with a blackjack may also take insurance, and in taking maximum insurance they will win an amount equal to their main wager. Fully insuring a blackjack against blackjack is thus referred to as "taking even money"; there is no difference in results between taking even money and insuring a blackjack. Insurance bets are expected to lose money in the long run, because the dealer is likely to have a blackjack less than one-third of the time. However, the insurance outcome is strongly anti-correlated with that of the main wager, and if the player's priority is to reduce variance, they might choose to make this bet. The insurance bet is susceptible to advantage play: it is advantageous to make an insurance bet whenever the hole card has more than a one in three chance of being a ten, and card counting techniques can identify such situations. "Note: where changes in the house edge due to changes in the rules are stated in percentage terms, the difference is usually stated here in percentage points, not percentage. That is, if an edge of 10% is reduced to 9%, it is reduced by one percentage point, not reduced by ten percent." The rules of casino blackjack are generally determined by law or regulation, which establishes certain rule variations allowed at the discretion of the casino. The rule variations of any particular game are generally posted on or near the table; you can ask the dealer if the variations are not clearly posted. Over 100 variations of blackjack have been documented. As with all casino games, blackjack incorporates a "house edge", a statistical advantage for the casino that is built into the game. This house edge is primarily due to the fact that the player acts first and loses immediately upon busting, even if the dealer goes on to bust as well. However, blackjack players using basic strategy will lose less than 1% of their total wagered amount with average luck, which is a substantially lower house edge than most other casino games. This is not true in games where blackjack pays 6:5, as that rule increases the house edge by about 1.4%.
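The one-in-three break-even point quoted above follows directly from the 2:1 insurance payout. A small sketch (the function name is this example's own):

def insurance_ev(p_ten, stake=1.0):
    """Expected value of a 2:1 insurance bet when the dealer's hole card
    is ten-valued with probability p_ten."""
    return p_ten * 2 * stake - (1 - p_ten) * stake

# In a freshly shuffled single deck with the dealer showing an ace,
# 16 of the remaining 51 cards are ten-valued:
print(insurance_ev(16 / 51))   # about -0.059: a 5.9% loss per unit staked

The expression 2p − (1 − p) is positive exactly when p > 1/3, matching the one-in-three threshold in the text.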
The expected loss rate of players who deviate from basic strategy through poor play will be greater, often much greater. Surrender, for those games that allow it, is usually not permitted against a dealer blackjack; if the dealer's first card is an ace or ten, the hole card is checked to make sure there is no blackjack before surrender is offered. This rule protocol is consequently known as "late" surrender. The alternative, "early" surrender, gives the player the option to surrender "before" the dealer checks for blackjack, or in a no-hole-card game. Early surrender is much more favorable to the player than late surrender. For late surrender, however, while it is tempting to opt for surrender on any hand which will probably lose, the correct strategy is to surrender only on the very worst hands, because having even a one in four chance of winning the full bet is better than the guaranteed loss of half the bet that surrendering entails. In most non-U.S. casinos, a 'no hole card' game is played, meaning that the dealer does not draw nor consult his or her second card until after all players have finished making decisions. With no hole card, it is almost never correct basic strategy to double or split against a dealer ten or ace, since a dealer blackjack will result in the loss of the split and double bets; the only exception is with a pair of aces against a dealer 10, where it is still correct to split. In all other cases, a stand, hit or surrender is called for. For instance, holding 11 against a dealer 10, the correct strategy is to double in a hole card game (where the player knows the dealer's second card is not an ace), but to hit in a no hole card game. The no hole card rule adds approximately 0.11% to the house edge. The "original bets only" rule variation appearing in certain no hole card games states that if the player's hand loses to a dealer blackjack, only the mandatory initial bet ("original") is forfeited, and all optional bets, meaning doubles and splits, are pushed. "Original bets only" is also known by the acronym OBO; it has the same effect on basic strategy and house edge as reverting to a hole card game. Each blackjack game has a "basic strategy", which prescribes the optimal method of playing any hand against any dealer up-card so that the long-term house advantage (the expected loss of the player) is minimized. Basic strategy is usually presented as a chart that tabulates the recommended action for every player hand against every dealer up-card under a specific set of rules. The bulk of basic strategy is common to all blackjack games, with most rule variations calling for changes in only a few situations. For example, in a game with the stand on soft 17 rule (which favors the player, and is typically found only at higher-limit tables today), only 6 cells of a typical chart would need to be changed: hit on 11 "vs." A, hit on 15 "vs." A, stand on 17 "vs." A, stand on A,7 "vs." 2, stand on A,8 "vs." 6, and split on 8,8 "vs." A. Regardless of the specific rule variations, taking insurance or "even money" is never the correct play under basic strategy. Estimates of the house edge for blackjack games quoted by casinos and gaming regulators are generally based on the assumption that the players follow basic strategy and do not systematically change their bet size. Most blackjack games have a house edge of between 0.5% and 1%, placing blackjack among the cheapest casino table games from the perspective of the player.
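The one-in-four figure above is just an expected-value comparison between playing on and giving up half the bet. A minimal sketch, ignoring pushes for simplicity (the function name is this example's own):

def should_surrender(p_win):
    """Late surrender is correct only when playing on is worth less than
    the -0.5 units that surrendering locks in.
    Ignoring pushes, EV(play) = p_win*(+1) + (1 - p_win)*(-1) = 2*p_win - 1."""
    return 2 * p_win - 1 < -0.5   # i.e. p_win < 0.25

print(should_surrender(0.24))  # True: surrender this hand
print(should_surrender(0.26))  # False: playing on is the lesser evil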
Casino promotions such as complimentary match play vouchers or 2:1 blackjack payouts allow the player to acquire an advantage without deviating from basic strategy. Basic strategy is based upon a player's point total and the dealer's visible card. Players may be able to improve on this decision by considering the precise composition of their hand, not just the point total. For example, players should ordinarily stand when holding 12 against a dealer 4. However, in a single deck game, players should hit if their 12 consists of a 10 and a 2, because the 10 already in the player's hand changes the composition of the remaining deck in ways a point total alone does not capture. However, even when basic and composition-dependent strategy lead to different actions, the difference in expected reward is small, and it becomes even smaller with more decks. Using a composition-dependent strategy rather than basic strategy in a single deck game reduces the house edge by 4 in 10,000, which falls to 3 in 100,000 for a six-deck game. Blackjack has been a high-profile target for advantage players since the 1960s. Advantage play is the attempt to win more using skills such as memory, computation, and observation. These techniques, while generally legal, can be powerful enough to give the player a long-term edge in the game, making them an undesirable customer for the casino and potentially leading to ejection or blacklisting if they are detected. The main techniques of advantage play in blackjack are as follows. During the course of a blackjack shoe, the dealer exposes the dealt cards. Careful accounting of the exposed cards allows a player to make inferences about the cards which remain to be dealt, and these inferences can be used in several ways. A card counting system assigns a point score to each rank of card (e.g., 1 point for 2–6, 0 points for 7–9 and −1 point for 10–A). When a card is exposed, a counter adds the score of that card to a running total, the 'count'. A card counter uses this count to make betting and playing decisions according to a table which they have learned. The count starts at 0 for a freshly shuffled deck for "balanced" counting systems. Unbalanced counts are often started at a value which depends on the number of decks used in the game. Blackjack's house edge is usually between 0.5% and 1% when players use basic strategy. Card counting can give the player an edge of up to 2% over the house. Card counting is most rewarding near the end of a complete shoe when as few cards as possible remain. Single-deck games are therefore particularly advantageous to the card counting player. As a result, casinos are more likely to insist that players do not reveal their cards to one another in single-deck games. In games with more decks of cards, casinos limit penetration by ending the shoe and reshuffling when one or more decks remain undealt. Casinos also sometimes use a shuffling machine to reintroduce the exhausted cards every time a deck has been played. Card counting is legal and is not considered cheating as long as the counter is not using an external device, but if a casino realizes a player is counting, the casino might inform them that they are no longer welcome to play blackjack. Sometimes a casino might ban a card counter from the property. The use of external devices to help counting cards is illegal in all US states that license blackjack card games. Techniques other than card counting can swing the advantage of casino blackjack toward the player.
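The running-count bookkeeping described above is easy to sketch. The point values below are exactly those in the example given (the well-known Hi-Lo assignment); the dictionary name and the true-count helper, a common refinement that normalizes by the number of decks still undealt, are this sketch's own additions.

# +1 for 2-6, 0 for 7-9, -1 for ten-valued cards and aces
HI_LO = {r: +1 for r in (2, 3, 4, 5, 6)}
HI_LO.update({r: 0 for r in (7, 8, 9)})
HI_LO.update({r: -1 for r in (10, 'J', 'Q', 'K', 'A')})

def running_count(exposed_cards, start=0):
    """Add each exposed card's score to the count; `start` is 0 for a
    balanced count, or a deck-dependent value for unbalanced counts."""
    count = start
    for card in exposed_cards:
        count += HI_LO[card]
    return count

def true_count(running, decks_remaining):
    """Normalize the running count by the decks left in the shoe."""
    return running / decks_remaining

print(running_count([5, 'K', 2, 9, 'A', 3]))  # +1 -1 +1 +0 -1 +1 = 1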
All such techniques are based on the value of the cards to the player and the casino, as originally conceived by Edward O. Thorp. One technique, mainly applicable in multi-deck games, involves tracking groups of cards (also known as slugs, clumps, or packs) during the play of the shoe, following them through the shuffle, and then playing and betting accordingly when those cards come into play from the new shoe. Shuffle tracking requires excellent eyesight and powers of visual estimation but is more difficult to detect, since the player's actions are largely unrelated to the composition of the cards in the shoe. Arnold Snyder's articles in "Blackjack Forum" magazine brought shuffle tracking to the general public. His book, "The Shuffle Tracker's Cookbook," mathematically analyzed the player edge available from shuffle tracking based on the actual size of the tracked slug. Jerry L. Patterson also developed and published a shuffle-tracking method for tracking favorable clumps of cards and cutting them into play and tracking unfavorable clumps of cards and cutting them out of play. The player can also gain an advantage by identifying cards from distinctive wear markings on their backs, or by hole carding (observing during the dealing process the front of a card dealt face down). These methods are generally legal, although their status in particular jurisdictions may vary. Many blackjack tables offer side bets on various outcomes. The side wager is typically placed in a designated area next to the box for the main wager. A player wishing to wager on a side bet is usually required to place a wager on blackjack. Some games require that the blackjack wager should equal or exceed any side bet wager. A non-controlling player of a blackjack hand is usually permitted to place a side bet regardless of whether the controlling player does so. The house edge for side bets is generally far higher than for the blackjack game itself. Nonetheless, side bets can be susceptible to card counting. A side count, designed specifically for a particular side bet, can improve the player edge. Only a few side bets, like "Lucky Ladies", offer a sufficient win rate to justify the effort of advantage play. In team play it is common for team members to be dedicated toward counting only a side bet using a specialized count. Blackjack can be played in tournament form. Players start with an equal number of chips; the goal is to finish among the top chip-holders. Depending on the number of competitors, tournaments may be held over several rounds, with one or two players qualifying from each table after a set number of deals to meet the qualifiers from the other tables in the next round. Another tournament format, Elimination Blackjack, drops the lowest-stacked player from the table at pre-determined points in the tournament. Good strategy for blackjack tournaments can differ from non-tournament strategy because of the added dimension of choosing the amount to be wagered. As in poker tournaments, players pay the casino an initial entry fee to participate in a tournament, and re-buys are sometimes permitted. Some casinos, as well as general betting outlets, provide blackjack among a selection of casino-style games at electronic consoles. Video blackjack game rules are generally more favorable to the house; e.g., paying out only even money for winning blackjacks. Video and online blackjack games generally deal each round from a fresh shoe, rendering card counting ineffective in most situations.
Blackjack is a member of a large family of traditional card games played recreationally all around the world. Most of these games have not been adapted for casino play. Furthermore, the casino game development industry is very active in producing blackjack variants, most of which are ultimately not adopted for widespread use in casinos. A number of prominent twenty-one themed comparing card games have been adapted or invented for use in casinos and have become established in the gambling industry. Examples of the many local traditional and recreational blackjack-like games include French/German Blackjack, called "Vingt-et-un" (French: Twenty-one) or "Siebzehn und Vier" (German: Seventeen and Four). The French/German game does not allow splitting. An ace can only count as eleven, but two aces count as a blackjack. It is mostly played in private circles and barracks. A British variation is called "Pontoon", the name being probably a corruption of "Vingt-et-un". Blackjack is also featured in various television shows. In 2002, professional gamblers around the world were invited to nominate great blackjack players for admission into the Blackjack Hall of Fame. Seven members were inducted in 2002, with new people inducted every year after. The Hall of Fame is at the Barona Casino in San Diego. Members include Edward O. Thorp, author of the 1960s book "Beat the Dealer", which proved that the game could be beaten with a combination of basic strategy and card counting; Ken Uston, who popularized the concept of team play; Arnold Snyder, author and editor of the "Blackjack Forum" trade journal; Stanford Wong, author and popularizer of the "Wonging" technique of only playing at a positive count; and several others. Novels have been written around blackjack and the possibility of winning games via some kind of method. Among these were "The Blackjack Hijack" (Charles Einstein, 1976), later produced as the TV movie "Nowhere to Run", and "Bringing Down the House" (Ben Mezrich), also filmed as "21". An almost identical theme was shown in the 2004 Canadian film "The Last Casino". In "The Hangover", an American comedy, four friends try to count cards to win back enough money to secure the release of their friend from the clutches of a notorious criminal they stole from the previous night while blacked out. A central part of the plot of "Rain Man" is that Raymond (Dustin Hoffman), an autistic savant, is able to win at blackjack by counting cards. In the 2014 film "The Gambler", Jim Bennett (Mark Wahlberg) plays high-stakes blackjack in order to win large sums of money, and the film showcases blackjack lingo and risky, high-reward plays.
https://en.wikipedia.org/wiki?curid=3981
Bicarbonate In inorganic chemistry, bicarbonate (IUPAC-recommended nomenclature: hydrogencarbonate) is an intermediate form in the deprotonation of carbonic acid. It is a polyatomic anion with the chemical formula HCO3−. Bicarbonate serves a crucial biochemical role in the physiological pH buffering system. The term "bicarbonate" was coined in 1814 by the English chemist William Hyde Wollaston. The prefix "bi" in "bicarbonate" comes from an outdated naming system and is based on the observation that there is twice as much carbonate (CO32−) per sodium ion in sodium bicarbonate (NaHCO3) and other bicarbonates than in sodium carbonate (Na2CO3) and other carbonates. The name lives on as a trivial name. In the IUPAC nomenclature of inorganic chemistry, the prefix bi– is a deprecated way of indicating the presence of a single hydrogen ion; the recommended nomenclature today mandates explicit referencing of the presence of the single hydrogen ion: sodium hydrogen carbonate or sodium hydrogencarbonate. A parallel example is sodium bisulfite (NaHSO3). The bicarbonate ion (hydrogencarbonate ion) is an anion with the empirical formula HCO3− and a molecular mass of 61.01 daltons; it consists of one central carbon atom surrounded by three oxygen atoms in a trigonal planar arrangement, with a hydrogen atom attached to one of the oxygens. It is isoelectronic with nitric acid (HNO3). The bicarbonate ion carries a negative one formal charge and is an amphiprotic species which has both acidic and basic properties. It is both the conjugate base of carbonic acid (H2CO3) and the conjugate acid of the carbonate ion (CO32−), as shown by these equilibrium reactions: H2CO3 + H2O ⇌ HCO3− + H3O+ and HCO3− + H2O ⇌ CO32− + H3O+. A bicarbonate salt forms when a positively charged ion attaches to the negatively charged oxygen atoms of the ion, forming an ionic compound. Many bicarbonates are soluble in water at standard temperature and pressure; in particular, sodium bicarbonate contributes to total dissolved solids, a common parameter for assessing water quality. Bicarbonate (HCO3−) is a vital component of the pH buffering system of the human body (maintaining acid–base homeostasis). 70%–75% of CO2 in the body is converted into carbonic acid (H2CO3), which, being the conjugate acid of bicarbonate, can quickly turn into it. With carbonic acid as the central intermediate species, bicarbonate – in conjunction with water, hydrogen ions, and carbon dioxide – forms this buffering system, which is maintained at the equilibrium required to provide prompt resistance to pH changes in both the acidic and basic directions. This is especially important for protecting tissues of the central nervous system, where pH changes too far outside of the normal range in either direction could prove disastrous (see acidosis or alkalosis). Bicarbonate also plays a major role in the digestive system. It raises the internal pH of the stomach after highly acidic digestive juices have finished their digestion of food. Bicarbonate also acts to regulate pH in the small intestine. It is released from the pancreas in response to the hormone secretin to neutralize the acidic chyme entering the duodenum from the stomach. Bicarbonate is the dominant form of dissolved inorganic carbon in sea water and in most fresh waters. As such it is an important sink in the carbon cycle. In freshwater ecology, strong photosynthetic activity by freshwater plants in daylight releases gaseous oxygen into the water and at the same time produces bicarbonate ions.
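To make the buffering arithmetic concrete, plasma pH can be estimated from the bicarbonate concentration and the CO2 partial pressure with the Henderson–Hasselbalch equation (pKa ≈ 6.1 for this pair in plasma; 0.0301 mmol/L per mmHg converts pCO2 to dissolved CO2). The following Python sketch is an illustration added here, not part of the source text; the function name is this example's own.

import math

def blood_ph(bicarbonate_mmol_l, pco2_mmhg, pka=6.1, co2_solubility=0.0301):
    """Henderson-Hasselbalch estimate: pH = pKa + log10([HCO3-] / (s * pCO2))."""
    return pka + math.log10(bicarbonate_mmol_l / (co2_solubility * pco2_mmhg))

# Typical arterial values: [HCO3-] = 24 mmol/L, pCO2 = 40 mmHg
print(round(blood_ph(24, 40), 2))   # ~7.40, normal arterial pH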
These shift the pH upward, until in certain circumstances the degree of alkalinity can become toxic to some organisms or can make other chemical constituents such as ammonia toxic. In darkness, when no photosynthesis occurs, respiration processes release carbon dioxide, and no new bicarbonate ions are produced, resulting in a rapid fall in pH. The most common salt of the bicarbonate ion is sodium bicarbonate, NaHCO3, which is commonly known as baking soda. When heated or exposed to an acid such as acetic acid (vinegar), sodium bicarbonate releases carbon dioxide. This is used as a leavening agent in baking (the reaction with acetic acid is written out after this paragraph). The flow of bicarbonate ions from rocks weathered by the carbonic acid in rainwater is an important part of the carbon cycle. Ammonium bicarbonate is used in digestive biscuit manufacture. In diagnostic medicine, the blood value of bicarbonate is one of several indicators of the state of acid–base physiology in the body. It is measured, along with carbon dioxide, chloride, potassium, and sodium, to assess electrolyte levels in an electrolyte panel test (which has Current Procedural Terminology, CPT, code 80051). The parameter "standard bicarbonate concentration" (SBCe) is the bicarbonate concentration in the blood at a PaCO2 of 5.33 kPa (40 mmHg), full oxygen saturation and 36 °C.
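For reference, the baking-soda-and-vinegar reaction mentioned above, with sodium acetate as the other product (standard chemistry, added here as an illustration):

NaHCO3 + CH3COOH → CH3COONa + H2O + CO2↑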
https://en.wikipedia.org/wiki?curid=3982
Bernie Federko Bernard Allan "Bernie" Federko (born May 12, 1956) is a Canadian retired professional ice hockey centre of Ukrainian ancestry who played fourteen seasons in the National Hockey League from 1976 through 1990. Federko began playing hockey at a young age in his home town of Foam Lake, Saskatchewan. He was captain of the 1971 Bantam provincial champions. He also played Senior hockey with the local Foam Lake Flyers of the Fishing Lake Hockey League, winning the league scoring title as a bantam-aged player. Federko continued his career with the Saskatoon Blades of the WHL, where he set and still holds the team record for assists. He played three seasons with the Blades, and in his final year with the club he led the league in assists and points in both the regular season "and" playoffs. As a reward, Federko was drafted 7th overall by the St. Louis Blues in the 1976 NHL Amateur Draft. He started the next season with the Kansas City Blues of the Central Hockey League and was leading the league in points when he was called up mid-season to play 31 games with St. Louis. He scored three hat tricks in those 31 games. In the 1978–79 NHL season, Federko developed into a bona fide star, as he scored 95 points. Federko scored 100 points in a season four times, and was a consistent and underrated performer for the Blues. Federko scored at least 90 points in seven of the eight seasons between 1978 and 1986, and became the first player in NHL history to record at least 50 assists in 10 consecutive seasons. However, in an era when Wayne Gretzky was scoring 200 points a season, Federko never got the attention many felt he deserved. In 1986, in a poll conducted by GOAL magazine, he was named the most overlooked talent in hockey. His general manager Ron Caron said he was "a great playmaker. He makes the average or above average player look like a star at times. He's such an unselfish player." On March 19, 1988, Federko became the 22nd NHL player to record 1,000 career points. After a poor season in 1988–89, Federko was traded to the Detroit Red Wings, along with Tony McKegney, for future Blues star Adam Oates and Paul MacLean. In Detroit, Federko re-united with former Blues head coach Jacques Demers, but he had to play behind Steve Yzerman and did not get his desired ice time. After his lowest point output since his rookie season, Federko decided to retire after the 1989–90 season, having played exactly 1,000 NHL games, with his final game on April 1, 1990. Less than a year after he retired as a player, the Blues retired number 24 in his honor on March 16, 1991. Federko was eventually inducted into the Hockey Hall of Fame in 2002, the first Hall of Famer to earn his credentials primarily as a Blue. Currently, Federko is a television color commentator for Fox Sports Midwest during Blues broadcasts. He was also the head coach and general manager of the St. Louis Vipers roller hockey team of Roller Hockey International for the 1993 and 1994 seasons.
https://en.wikipedia.org/wiki?curid=3984
Buffalo, New York Buffalo is the second largest city in the U.S. state of New York and the largest city in Western New York. As of 2019 census estimates, the population was 255,284. The city is the county seat of Erie County and serves as a major gateway for commerce and travel across the Canadian border, forming part of the bi-national Buffalo Niagara Region and Buffalo–Niagara Falls metropolitan area. As of 2018, the Buffalo–Niagara Falls metropolitan area had a population of 1,130,152; the combined statistical area, which adds Cattaraugus County, had a population of 1,215,826 inhabitants.

The Buffalo area was inhabited before the 17th century by Native American Iroquois tribes and later by French colonizers. The city grew significantly in the 19th and 20th centuries as a result of immigration, the construction of the Erie Canal and rail transportation, and its close proximity to Lake Erie, which provided an abundance of fresh water and an ample trade route to the Midwestern United States while grooming the city's economy for the grain, steel and automobile industries that dominated it in the 20th century. Since the city's economy relied heavily on manufacturing, deindustrialization in the latter half of the 20th century led to a steady decline in population. While some manufacturing activity remained following the Great Recession, Buffalo's economy has transitioned to service industries with a greater emphasis on healthcare, research and higher education, including the University at Buffalo, a top research university.

Buffalo is on the eastern shore of Lake Erie, at the head of the Niagara River, south of Niagara Falls. Its early embrace of electric power led to the nickname "The City of Light". The city is also famous for its urban planning and layout by Joseph Ellicott, an extensive system of parks designed by Frederick Law Olmsted, as well as significant architectural works. Its culture blends Northeastern and Midwestern traditions, with annual festivals including the Taste of Buffalo and Allentown Art Festival, two professional sports teams and a Division I college team (Buffalo Bills, Buffalo Sabres and Buffalo Bulls), and a thriving and progressive music and arts scene.

The city of Buffalo received its name from a nearby creek called Buffalo Creek. British military engineer Captain John Montresor made reference to "Buffalo Creek" in his 1764 journal, which may be the earliest recorded appearance of the name. There are several theories regarding how Buffalo Creek received its name. While it is possible its name originated from French fur traders and Native Americans calling the creek "Beau Fleuve" (French for "Beautiful River"), it is also possible Buffalo Creek was named after the American buffalo, whose historical range may have extended into Western New York.

The first inhabitants of New York State are believed to have been nomadic Paleo-Indians, who migrated after the disappearance of Pleistocene glaciers during or before 7000 BCE. Around 1000 CE, the Woodland period began, marked by the rise of the Iroquois Confederacy and its tribes throughout the state.
During French exploration of the region in 1620, the region was occupied simultaneously by the agrarian Erie people, a tribe outside of the Five Nations of the Iroquois living southwest of Buffalo Creek, and the Wenro people or "Wenrohronon", an Iroquoian-speaking tribal offshoot of the large Neutral Nation, who lived along the inland south shore of Lake Ontario and at the east end of Lake Erie and a bit of its northern shore. The Neutral people made a living by growing tobacco and hemp to trade with the Iroquois, using animal paths or warpaths to travel and move goods across the state. These paths were later paved, and now function as major roads. Later, during the Beaver Wars of the 1640s–1650s, the combined warriors of the Five Nations of the Iroquois conquered the populous Neutrals and their peninsular territory, while the Senecas alone defeated the Wenro and took their territory, c. 1651–1653. Soon after, the Iroquois destroyed the Erie nation and seized its territory over its assistance to the Huron people during the Beaver Wars.

Louis Hennepin and Sieur de La Salle made the earliest European discoveries of the upper Niagara and Ontario regions in the late 1600s. On August 7, 1679, La Salle launched a vessel, Le Griffon, that became the first full-sized ship to sail across the Great Lakes before it disappeared in Green Bay, Wisconsin.

After the American Revolution, the Province of New York, now a U.S. state, began westward expansion, looking for habitable land by following the trails of the Iroquois, for whom land near fresh water was of considerable importance. New York and Massachusetts disputed the territory Buffalo lies on, and Massachusetts had the right to purchase all but a one-mile (1600-meter) wide portion of land. The rights to the Massachusetts territories were sold to Robert Morris in 1791, and two years later to the Holland Land Company. As a result of the war, in which the Iroquois had sided with the British Army, Iroquois territory was gradually reduced in the mid-to-late 1700s by European settlers through successive treaties statewide, such as the Treaty of Fort Stanwix (1784), the First Treaty of Buffalo Creek (1788), and the Treaty of Geneseo (1797). The Iroquois were corralled onto reservations, including Buffalo Creek. By the end of the 18th century, only of reservation territory remained.

Former slave Joseph "Black Joe" Hodges and Cornelius Winney, a Dutch trader from Albany who arrived in 1789, were early settlers along the mouth of Buffalo Creek. The first white settlers along the creek were prisoners captured during the Revolutionary War. The first resident and landowner of Buffalo with a permanent presence was Captain William Johnston, a white Iroquois interpreter who had been in the area since the days after the Revolutionary War and to whom the Senecas granted creekside land as a gift of appreciation. His house stood at present-day Washington and Seneca streets.

On July 20, 1793, the Holland Land Purchase, brokered by Dutch investors from Holland, was completed, containing the land of present-day Buffalo. The Treaty of Big Tree removed Iroquois title to lands west of the Genesee River in 1797. In the fall of 1797, Joseph Ellicott, the architect who helped survey Washington, D.C. with his brother Andrew, was appointed as the Chief of Survey for the Holland Land Company. Over the next year, he began to survey the tract of land at the mouth of Buffalo Creek.
This was completed in 1803, and the new village boundaries extended from the creekside in the south to present-day Chippewa Street in the north and Carolina Street to the west, where most settlers remained for the first decade of the 19th century. Although the company named the settlement "New Amsterdam," the name did not catch on, and it reverted to Buffalo within ten years. The first road from Buffalo to Pennsylvania was built in 1802 for migrants passing through to the Connecticut Western Reserve in Ohio.

In 1804, Ellicott designed a radial street plan that would branch out from the village like the spokes of a wheel, interrupted by diagonals, like the system used in the nation's capital. In the middle of the village was the intersection of eight streets, in what would become Niagara Square. Several blocks to the southeast he designed a semicircle fronting Main Street with an elongated park green, formerly his estate. This would be known as Shelton Square, at that time the center of the city (an area that would be dramatically altered in the mid-20th century), with the intersecting streets bearing the names of Dutch Holland Land Company members, today Erie, Church and Niagara streets. Lafayette Square, then bounded by streets bearing Iroquois names, lies one block to the north.

According to an early resident, the village had sixteen residences, a schoolhouse and two stores in 1806, primarily near Main, Swan and Seneca streets. There were also blacksmith shops, a tavern and a drugstore. The streets were narrow, at 40 feet wide, and the village was still surrounded by woods. The first lot sold by the Holland Land Company was on September 11, 1806, to Zerah Phelps. By 1808, lots sold for $25 to $50. In 1804, Buffalo's population was estimated at 400, similar to Batavia, but Erie County's growth lagged behind Chautauqua, Genesee and Wyoming counties.

The neighboring village of Black Rock to the northwest (today a Buffalo neighborhood) was also an important center. Horatio J. Spafford noted in "A Gazetteer of the State of New York" that, despite the village of Buffalo's growth, Black Rock "is deemed a better trading site for a great trading town than that of Buffalo," especially given the poor quality of the roads extending eastward. Before the east-to-west turnpike was completed, traveling from Albany to Buffalo would take a week, while even a trip from nearby Williamsville to Batavia could take upwards of three days.

Although slavery was rare in the state, limited instances of slavery had taken place in Buffalo during the early part of the 19th century. General Peter Buell Porter is said to have had five slaves during his time in Black Rock, and several newspaper ads also advertised slaves for sale.

In 1810, a courthouse was built. By 1811, the population was 500, with many people farming or doing manual labor. The first newspaper to be published was the "Buffalo Gazette" in October that same year. On December 31, 1813, Buffalo and the village of Black Rock were burned by the British after the Battle of Buffalo. The battle and subsequent fire were in response to the unprovoked destruction of Niagara-on-the-Lake, then known as "Newark," by American forces. On August 4, 1814, British forces under Lt. Colonel John Tucker and Lt.
Colonel William Drummond, General Gordon Drummond's nephew, attempted to raid Black Rock and Buffalo as part of a diversion to force an early surrender at Fort Erie the next day, but were defeated by a small force of American riflemen under Major Lodwick Morgan at the Battle of Conjocta Creek and withdrew back into Canada. Consequently, Fort Erie's siege under Gordon Drummond later failed, and British forces withdrew. Though only three buildings remained in the village, rebuilding was swift, finishing in 1815.

The village of Buffalo was part of and the seat of Niagara County until the legislature passed an act separating the two on April 2, 1861. On October 26, 1825, the Erie Canal was completed, formed from part of Buffalo Creek, with Buffalo a port-of-call for settlers heading westward. At the time, the population was about 2,400. By 1826, the 130 sq. mile Buffalo Creek Reservation at the western border of the village was transferred to Buffalo. The Erie Canal brought a surge in population and commerce, which led Buffalo to incorporate as a city in 1832. The canal area was mature by 1847, with passenger and cargo ship activity leading to congestion in the harbor.

The mid-1800s saw a population boom: the population in 1840 was 18,213, and the city doubled in size from 1845 to 1855. In 1855, almost two-thirds of the city's population were foreign-born immigrants, largely a mix of unskilled or educated Irish and German Catholics, who began self-segregating in different parts of the city. The Irish immigrants planted their roots along the railroad-heavy Buffalo River and Erie Canal to the southeast, where a heavy Irish presence remains today; German immigrants found their way to the East Side, living a more laid-back, residential life. Some immigrants were apprehensive about the change of environment and left the city for the western region, while others tried to stay behind in the hopes of expanding their native cultures.

Fugitive black slaves began to make their way northward to Buffalo in the 1840s, and many of them settled on the city's East Side. In 1845, construction began on the Macedonia Baptist Church, a meeting spot in the Michigan and William Street neighborhood where blacks first settled. Political activity surrounding the anti-slavery movement took place in Buffalo during this time, including conventions held by the National Convention of Colored Citizens and the Liberty Party and its offshoots. Buffalo was a terminus point of the Underground Railroad, with many fugitive slaves crossing the Niagara River to Fort Erie, Ontario in search of freedom.

During the 1840s, Buffalo's port continued to develop. Both passenger and commercial traffic expanded, with some 93,000 passengers heading west from the port of Buffalo. Grain and commercial goods shipments led to repeated expansion of the harbor. In 1843, the world's first steam-powered grain elevator was constructed by local merchant Joseph Dart and engineer Robert Dunbar. "Dart's Elevator" enabled faster unloading of lake freighters along with the transshipment of grain in bulk from barges, canal boats, and rail cars. By 1850, the city's population was 81,000.

By 1860, many railway companies and lines crossed through and terminated in Buffalo. Major ones were the Buffalo, Bradford and Pittsburgh Railroad (1859), the Buffalo and Erie Railroad, and the New York Central Railroad (1853). During this time, Buffalonians controlled a quarter of all shipping traffic on Lake Erie, and shipbuilding was a thriving industry for the city.
Later, the Lehigh Valley Railroad would have its line terminate at Buffalo in 1867.

At the dawn of the 20th century, local mills were among the first to benefit from hydroelectric power generated by the Niagara River, and the city got the nickname "The City of Light" at this time due to the widespread electric lighting. It was also part of the automobile revolution, hosting the brass era car builders Pierce Arrow and the Seven Little Buffaloes early in the century. At the same time, an exodus of local entrepreneurs and industrial titans began a period in which the city would lose its competitiveness against Pittsburgh, Cleveland and Detroit. President William McKinley was shot and mortally wounded by an anarchist at the Pan-American Exposition in Buffalo on September 6, 1901. McKinley died in the city eight days later, and Theodore Roosevelt was sworn in at the Wilcox Mansion.

The Great Depression of 1929–39 saw severe unemployment, especially among working-class men. New Deal relief programs operated at full force, and the city became a stronghold of labor unions and the Democratic Party. During World War II, Buffalo saw the return of prosperity and full employment due to its position as a manufacturing center.

As one of the most populous American cities of the 1950s, Buffalo had an economy that revolved almost entirely around its manufacturing base. Major companies such as Republic Steel and Lackawanna Steel employed tens of thousands of Buffalonians. Integrated national shipping routes used the Soo Locks near Lake Superior and a vast network of railroads and yards that crossed the city. Local businesses and interest groups lobbied against the St. Lawrence Seaway beginning in the 1920s, long before its construction in 1957, but its approval was reinforced by legislation shortly before construction began, and the Seaway cut the city off from valuable trade routes. Shipbuilding in Buffalo, such as at the American Ship Building Company, shut down in 1962 as a direct result of reduced waterfront activity, ending an industry that had been a sector of the city's economy since 1812. With deindustrialization and the nationwide trend of suburbanization, the city's economy began to deteriorate. Like much of the Rust Belt, Buffalo, home to more than half a million people in the 1950s, has seen its population decline as heavy industries shut down and people left for the suburbs or other cities.

Buffalo is on Lake Erie's eastern end, opposite Fort Erie, Ontario, Canada. It is at the origin of the Niagara River, which flows northward over Niagara Falls and into Lake Ontario. The city is south-southeast from Toronto. Buffalo is from Rochester, from Syracuse, from the New York State capital of Albany, and from New York City. Interstate 90 connects Buffalo to Cleveland, Ohio, and Detroit, Michigan; of major U.S. population centers, Cleveland and Detroit are closer to Buffalo than the New York metropolitan area or Albany.

Relative to downtown, the city is generally flat, with the exception of the areas surrounding North and High streets, where a hill of about 90 feet rises gradually from the south and north. The Southtowns include the Boston Hills, while the Appalachian Mountains sit in the Southern Tier below them. To the north and east, the region maintains a flatter profile descending to Lake Ontario. Various types of shale, limestone and lagerstätten are prevalent in the geologic makeup of Buffalo and surrounding areas, lining the waterbeds within and bordering the city.
Although there have not been any recent or significant earthquakes, Buffalo sits atop the Southern Great Lakes Seismic Zone, which is part of the Great Lakes tectonic zone. Buffalo has four channels that flow through its boundaries: the Niagara River, the Buffalo River and Creek, Scajaquada Creek, and the Black Rock Canal, which is adjacent to the Niagara River. According to the United States Census Bureau, the city has a total area of , of which is land and the rest water; 22.66% of the total area is water. In 2010, the city of Buffalo had a population density of 6,470.6 inhabitants per square mile.

The city consists of 31 different neighborhoods. Buffalo's most prominent neighborhoods (the J. N. Adam–AM&A Historic District, Canalside, the Buffalo Niagara Medical Campus, and University Heights) are located in or near the downtown area. The J. N. Adam–AM&A Historic District is a national historic district; its main department store was designed by Starrett & van Vleck and built in 1935. Canalside originally began as an Italian-dominated area, and the Buffalo Niagara Medical Campus was established in 2001. Canalside and University Heights are predominantly mixed-use districts. Buffalo and its suburbs have been redeveloping neighborhoods and districts since the early 2000s in efforts to mitigate a declining population and attract businesses. In June 2020, the Buffalo-based Green Organization acquired an apartment complex with the intent to remodel it and bring in new residents.

Buffalo's architecture is diverse, with a collection of buildings from the 19th and 20th centuries. Most structures and works are still standing, such as the country's largest intact parks system, designed by Frederick Law Olmsted and Calvert Vaux. At the end of the 19th century, the Guaranty Building, designed by Louis Sullivan, was a prominent example of an early high-rise skyscraper. The Darwin D. Martin House, designed by Frank Lloyd Wright and built between 1903 and 1905, is considered to be one of the most important projects from Wright's Prairie School era. The Larkin Administration Building, now demolished, was Frank Lloyd Wright's first commercial commission. The 20th century saw works such as the Art Deco-style Buffalo City Hall and Buffalo Central Terminal, the Electric Tower, the Richardson Olmsted Complex, and the Rand Building. Urban renewal from the 1950s to the 1970s gave way to the construction of the Brutalist-style Buffalo City Court Building and One Seneca Tower (formerly the HSBC Center), the city's tallest building.

Buffalo has a humid continental climate (Köppen "Dfb" bordering on "Dfa"), which is common in the Great Lakes region. Buffalo has snowy winters, but it is rarely the snowiest city in New York state. The Blizzard of 1977 resulted from a combination of high winds and snow accumulated on land and on frozen Lake Erie. Snow does not typically impair the city's operation, but can cause significant damage during the autumn, as with the October 2006 storm. In November 2014, the region had a record-breaking storm, producing over of snow; this storm was named "Snowvember".

Buffalo has the sunniest and driest summers of any major city in the Northeast, but still has enough rain to keep vegetation green and lush. Summers are marked by plentiful sunshine and moderate humidity and temperature. Obscured by the notoriety of Buffalo's winter snow is the fact that Buffalo benefits from other lake effects, such as the cooling southwest breezes off Lake Erie in summer that gently temper the warmest days.
As a result, temperatures rise above 90 °F (32 °C) only about three times in the average year, and the Buffalo station of the National Weather Service has never recorded an official temperature of 100 °F (38 °C) or more. Rainfall is moderate but typically occurs at night. Lake Erie's stabilizing effect continues to inhibit thunderstorms and enhance sunshine in the immediate Buffalo area through most of July. August usually has more showers and is hotter and more humid, as the warmer lake loses its temperature-stabilizing influence. The highest recorded temperature in Buffalo was 99 °F (37 °C) on August 27, 1948, and the lowest recorded temperature was −20 °F (−29 °C), which occurred twice, on February 9, 1934 and February 2, 1961.

In his 2019 State of the City address, Mayor Byron Brown dubbed Buffalo a "Climate Refuge City" because the city is unusually well-insulated against climate change. Experts say the region's cool climate and ample fresh water could make it an attractive destination as the planet heats up.

Like most former industrial cities of the Great Lakes region in the United States, Buffalo is recovering from an economic downturn caused by suburbanization and the loss of its industrial base. The city's population peaked in 1950, when it was the 15th largest city in the United States, down from the 8th largest in 1900, and its population has spread out to the suburbs in every census since then. In 2010, Buffalo had a population of 261,310, and in 2019 an estimated 255,284 inhabitants.

The city's median household income was $24,536 and the median family income was $30,614 in 2010. Males had a median income of $30,938 versus $23,982 for females. The city's per capita income was $14,991. 26.6% of the population and 23% of families were below the poverty line, including 38.4% of those under the age of 18 and 14% of those 65 and older. The U.S. Census Bureau determined the median household income in 2018 was $35,893 and the per capita income was $23,297. Of the population, 30.3% lived at or below the poverty line in 2018.

As was common to many U.S. cities from the 1950s to the 1990s, Buffalo has become a diverse city. The city's diversification is due in part to white flight, the Great Migration, and immigration. Since 2015, Buffalo has been a majority-minority city, primarily comprising African Americans and Hispanic or Latin Americans. According to the American Community Survey's 2018 estimates, 42.5% of the population was non-Hispanic white, 34.3% African American, 0.3% American Indian or Alaska Native, 6.5% Asian, 0.1% from some other race and 3.3% from two or more races. Approximately 13% of Buffalonians were of Hispanic or Latin American origin. The largest Latin American groups in 2018 were Puerto Ricans (9.7%), Mexicans (0.7%), and Cubans (0.3%). Since 2003, there has been an ever-growing number of Burmese refugees, mostly of the Karen ethnicity, with an estimated 4,665 residing in Buffalo as of 2016. In 2018, 10% of the population were foreign-born. At the 2010 census, the city's population was 50.4% white (45.8% non-Hispanic white), 38.6% black or African-American, 0.8% American Indian and Alaska Native, 3.2% Asian, 3.9% from some other race and 3.1% from two or more races, while 10.5% of the population was Hispanic or Latino of any race.

Per "Sperling's BestPlaces" in 2020, nearly 60% of Buffalonians identify with a religion. Overall, Buffalo and Upstate New York are more religious than Downstate New York.
Largely a result of British and French colonialism and missionary work, Christianity is the largest religion in Buffalo and Western New York. The largest Christian groups in Buffalo and the surrounding area are the Catholic Church (38.8%) and Baptists (2.9%). Buffalo's Catholic population primarily belongs to the Latin Church's Diocese of Buffalo, which covers Western New York except for the territory of the nearby Diocese of Rochester; its episcopal see is St. Joseph Cathedral. Baptists in the city mainly affiliate with the American Baptist Churches USA, the National Baptist Convention, USA, and the National Baptist Convention of America. There is one Cooperative Baptist church within the metropolitan area as of 2020.

The third largest Christian group in the city are Lutherans (2.7%), primarily served by the Evangelical Lutheran Church in America. Methodists (2.0%), Presbyterians (1.9%), and Pentecostals (1.2%) were the next largest Christian groups. The Methodist and Presbyterian Buffalonian communities are dominated by the United Methodist Church and the Presbyterian Church (USA). Pentecostals are generally affiliated with the Assemblies of God USA and the Church of God in Christ. Nearly 1% of local Christians identified as Anglican or Episcopalian. Most align themselves with the Diocese of Western New York of the Episcopal Church in the U.S., whose cathedral is St. Paul's Cathedral. The remainder are affiliated with Continuing Anglican or Evangelical Episcopal denominations; there are two Anglican Church in North America-affiliated churches further east in the Rochester metropolitan area. Approximately 0.3% professed Mormonism, and 3.3% were of another Christian faith, including the Eastern Orthodox and Oriental Orthodox churches, non-denominational Protestants, and others. The largest Eastern Orthodox jurisdictions are the Greek Orthodox Archdiocese of America (Ecumenical Patriarchate) and the Diocese of New York and New Jersey (Orthodox Church in America).

Islam is Buffalo's second largest religion (1.8%). Sunni Islam is the predominant branch practiced, and most Sunni mosques are members of the Islamic Society of North America. The Nation of Islam has one mosque in Buffalo. Judaism is the third largest religion in the area (0.9%); as of 2020, Orthodox, Conservative, and Reform Judaism were the most prevalent affiliations throughout Buffalo and the surrounding area. A little over 0.5% professed an eastern faith, including Buddhism, Hinduism, and Sikhism. The remainder of Buffalo and the surrounding area was spiritual but not religious, agnostic, deistic, or atheist, though some Buffalonians identified with contemporary pagan religions including Wicca, Nature religion, and other smaller new religious movements. Many contemporary pagan, spiritual-but-not-religious, and New Age residents attend the city's annual Winter Solstice celebrations, and they are also participants in the Western New York Pagan Pride celebrations.

Buffalo's economic sectors include industrial, light manufacturing, high technology and services. The State of New York, with over 15,000 employees, is the city's largest employer. Other major employers include the United States government, Kaleida Health, M&T Bank (which is headquartered in Buffalo), the University at Buffalo, General Motors, Time Warner Cable and Tops Friendly Markets. Buffalo is home to Rich Products, Canadian brewer Labatt, cheese company Sorrento Lactalis, Delaware North Companies and the New Era Cap Company.
More recently, the Tesla Gigafactory 2 opened in South Buffalo in summer 2017 as a result of the Buffalo Billion program.

The loss of traditional jobs in manufacturing, rapid suburbanization and high labor costs have led to economic decline and made Buffalo one of the poorest U.S. cities with populations of more than 250,000 people. An estimated 28.7–29.9% of Buffalo residents lived below the poverty line in 2011, a rate exceeded only by Detroit, or by Detroit and Cleveland, depending on the estimate. Buffalo's median household income of $27,850 was third-lowest among large cities, behind only Miami and Cleveland; however, the metropolitan area's median household income was $57,000. This, in part, has led to the Buffalo metropolitan statistical area having one of the most affordable housing markets in the United States. The quarterly NAHB/Wells Fargo Housing Opportunity Index (HOI) noted that nearly 90% of the new and existing homes sold in the metropolitan area during the second quarter were affordable to families making the area's median income of $57,000. At the time, the median home price in the city was $95,000.

Buffalo's economy has begun to see significant improvements since the early 2010s. Money from New York State Governor Andrew Cuomo through a program known locally as "Buffalo Billion" has brought new construction, increased economic development, and hundreds of new jobs to the area. As of March 2015, Buffalo's unemployment rate was 5.9%, slightly above the national average of 5.5%. In 2016, the U.S. Bureau of Economic Analysis valued the Buffalo area's economy at $54.9 billion.

Buffalo's cuisine encompasses a variety of cultural contributions, including Sicilian, Italian, Irish, Jewish, German, Polish, African-American, Greek and American influences. In 2015, the National Geographic Society ranked Buffalo third on its list of "The World's Top Ten Food Cities". Locally owned restaurants offer Chinese, German, Japanese, Korean, Vietnamese, Thai, Mexican, Sicilian, Italian, Arab, Indian, Myanmar, Caribbean, soul food and French cuisine. Buffalo's local pizza style differs from both thin-crust New York–style and deep-dish Chicago-style pizza and is locally known as a midpoint between the two. The beef on weck sandwich, kielbasa, sponge candy, pastry hearts, pierogi, pizza logs, chicken finger subs and haddock fish fries are local favorites, as is a loganberry-flavored beverage that remains relatively obscure outside of Western New York and Southern Ontario. Teressa Bellissimo first prepared the now-widespread Buffalo chicken wings at the Anchor Bar in October 1964.

Buffalo has several well-known food companies. Non-dairy whipped topping was invented in Buffalo in 1945 by Robert E. Rich, Sr.; his company, Rich Products, is one of the city's largest private employers. General Mills was organized in Buffalo, and Gold Medal brand flour, Wheaties, Cheerios and other General Mills brand cereals are manufactured there. Archer Daniels Midland operates its largest flour mill in the city. Buffalo is home to one of the world's largest privately held food companies, Delaware North Companies, which operates concessions in sports arenas, stadiums, resorts and many state and federal parks.

The Taste of Buffalo and the National Buffalo Wing Festival showcase food from the Buffalo area; these are two of the many festivals that take place in Buffalo during the summer.
Buffalo is home to over 50 private and public art galleries, most notably the Albright-Knox Art Gallery, home to a collection of modern and contemporary art, and the Burchfield-Penney Art Center. In 2012, "AmericanStyle" ranked Buffalo twenty-fifth in its list of top mid-sized cities for art. It is also home to many independent media and literary arts organizations, like the Squeaky Wheel Film and Media Arts Center.

The Buffalo area's largest theater is Shea's Performing Arts Center, designed to accommodate 4,000 people, with interiors by Louis Comfort Tiffany. Built in 1926, the theater presents Broadway musicals and concerts. The theater community in the Buffalo Theater District includes over 20 professional companies. The Allentown Art Festival showcases local and national artists every summer in Buffalo's Allentown district. Buffalo is also home to the Freedom Wall, at the corner of Michigan Avenue and East Ferry Street; the Albright-Knox Art Gallery Public Art Initiative commissioned the Freedom Wall with support from the Niagara Frontier Transportation Authority.

The Buffalo Philharmonic Orchestra, which performs at Kleinhans Music Hall, is one of the city's most prominent performing arts institutions. During the 1960s and 1970s, under the musical leadership of Lukas Foss and Michael Tilson Thomas, the Philharmonic collaborated with the Grateful Dead and toured with the Boston Pops Orchestra.

Many jazz and classical musicians have roots in Buffalo, and it is also the founding city of several mainstream bands and musicians, including Rick James, Billy Sheehan, Cannibal Corpse, Malevolent Creation, Aqueous, The Quakes, Brian McKnight, Joe Public (band) and The Goo Goo Dolls. Vincent Gallo, a Buffalo-born filmmaker and musician, played in several local bands. The jazz fusion band Spyro Gyra and jazz saxophonist Grover Washington Jr. also got their starts in Buffalo. Composer Harold Arlen, who wrote "Somewhere over the Rainbow", was born and started his career in Buffalo. Pianist and composer Leonard Pennario was born in Buffalo in 1924 and made his debut concert at Carnegie Hall in 1943. Buffalo's "Colored Musicians Club", an extension of what was long ago a separate musicians' union local, is thriving today and maintains a significant jazz history within its walls. Well-known indie artist Ani DiFranco also hails from Buffalo.

Although the region's primary tourism destination is Niagara Falls to the north, Buffalo's tourism relies on historical attractions and outdoor recreation. The city's points of interest include the Edward M. Cotter fireboat, considered the world's oldest active fireboat and a United States National Historic Landmark; the Buffalo and Erie County Botanical Gardens; the Buffalo and Erie County Historical Society; the Buffalo Museum of Science; the Buffalo Zoo, the third oldest in the United States; Forest Lawn Cemetery; the Buffalo and Erie County Naval & Military Park; the Anchor Bar; and the Darwin D. Martin House.

Redeveloped historical neighborhoods have also attracted tourism. The site of the former Erie Canal Harbor, Canalside has become a popular destination for tourists and residents since 2007, when Buffalo and the New York Power Authority began to redevelop the former site of the Buffalo Memorial Auditorium into historically accurate canals. Larkin Square, in the former "Hydraulics" neighborhood and headquarters of the Larkin Company, has also become popular, featuring food trucks, concerts, and other events during the summer.
Buffalo is one of the largest Polish American centers in the United States. As a result, many aspects of Polish culture have found a home in the city, from food to festivals. One of the best examples is the yearly celebration of Easter Monday, known to many Eastern Europeans as Dyngus Day.

Buffalo and the surrounding region are home to three major league professional sports teams. The NHL's Buffalo Sabres and the NLL's Buffalo Bandits both play in KeyBank Center, while the NFL's Buffalo Bills play in suburban Orchard Park, New York, where they have been since 1973.

The Bills, established in 1959, played in War Memorial Stadium until 1973, when Rich Stadium, now New Era Field, opened. The city of Buffalo brought home its two major league sports titles when the Bills won the American Football League Championship in both 1964 and 1965. The team competes in the AFC East division and currently has 10 division titles to its name. Since the AFL–NFL merger in 1970, the Bills have won the AFC Championship four times (1990, 1991, 1992, 1993), resulting in four Super Bowl appearances, all losses (Super Bowl XXV, Super Bowl XXVI, Super Bowl XXVII and Super Bowl XXVIII).

The Sabres, established in 1970, played in Buffalo Memorial Auditorium until 1996, when Marine Midland Arena, now KeyBank Center, opened. The team plays in the Atlantic Division of the NHL and has won one Presidents' Trophy (2006–2007) and three Prince of Wales Trophies (conference championships) (1974–1975, 1979–1980 and 1998–1999). However, unlike the Bills, the Sabres have never won a league championship, having lost the 1975 Stanley Cup Finals to the Philadelphia Flyers and the 1999 Stanley Cup Finals to the Dallas Stars. Since 2014, both the Bills and Sabres have been owned by Terrence Pegula, a key investor in Buffalo's revitalization efforts.

The Buffalo Bulls are a Division I college team representing the University at Buffalo. The Buffalo Bulls football team won the 2008 Mid-American Conference Football Championship as well as three MAC East championships (2007, 2008, 2018), and the 2019 team won the Bahamas Bowl. The Bulls men's basketball team has won four MAC Championships in a span of five years (2015, 2016, 2018, 2019) as well as four regular season championships (2009, 2015, 2018, 2019) and five division titles (2009, 2014, 2015, 2018, 2019). The Bulls women's team has won two MAC Championships (2016, 2019), advanced to the round of 32 twice (2018, 2019), and reached the Sweet Sixteen in 2018.

The Buffalo Bandits were established in 1992 and played their home games in Buffalo Memorial Auditorium until 1996, when they followed the Sabres to Marine Midland Arena. They have won eight division championships and four league championships (1991–1992, 1992–1993, 1995–1996 and 2007–2008).

The Buffalo Braves played in the National Basketball Association from 1970 to 1978, with their home games held at the Buffalo Memorial Auditorium. After the team struggled financially, it relocated to California and became the San Diego Clippers.

Buffalo is also home to several minor sports teams, including the Buffalo Bisons (baseball; an affiliate of the MLB's Toronto Blue Jays since 2014) and FC Buffalo (soccer), as well as a professional women's team, the Buffalo Beauts (hockey). The Buffalo Beauts were the NWHL Champions in 2016–2017 and have appeared in all four NWHL finals.

* American Football League (AFL) championships were earned prior to the NFL merging with the AFL in 1970.
† Date refers to the current incarnation; the Buffalo Bisons previously operated from the 1870s until 1970, and the current Bisons count this team as part of their history. Buffalo Bulls championships are Mid-American Conference championships; the University at Buffalo joined the conference in 1998.

The Buffalo parks system has over 20 parks, with several accessible from any part of the city. The Olmsted Park and Parkway System is the hallmark of Buffalo's many green spaces. Three-fourths of city parkland is part of the system, which comprises six major parks, eight connecting parkways, nine circles and seven smaller spaces. Constructed in 1868 by Frederick Law Olmsted and his partner Calvert Vaux, the system was integrated into the city and marks the first attempt in America to lay out a coordinated system of public parks and parkways. The Olmsted-designed portions of the Buffalo park system are listed on the National Register of Historic Places and are maintained by the Buffalo Olmsted Parks Conservancy (BOPC), a non-profit public-benefit corporation that serves as the city's parks department. It is the first non-governmental organization of its kind to serve in such a capacity in the United States.

Situated at the confluence of Lake Erie and the Buffalo and Niagara rivers, Buffalo is a waterfront city. Its rise to economic power came through its waterways in the form of transshipment, manufacturing, and an endless source of energy. Buffalo's waterfront remains, though to a lesser degree, a hub of commerce, trade and industry. Beginning in 2009, a significant portion of Buffalo's waterfront began to be transformed into a focal point for social and recreational activity; to this end, Buffalo Harbor State Park, nicknamed the "Outer Harbor", was opened in 2014. Buffalo's intent was to stress its architectural and historical heritage to create a tourism destination, and early data indicates the effort was successful.

At the municipal level, the city of Buffalo has a mayor and a council of nine councilmembers. Buffalo also serves as the seat of Erie County, with some of the 11 members of the county legislature representing at least a portion of Buffalo. At the state level, there are three state assemblymembers and two state senators representing parts of the city proper. At the federal level, Buffalo is the heart of New York's 26th congressional district in the House of Representatives, represented by Democrat Brian Higgins.

In a trend common to northern "Rust Belt" regions, the Democratic Party has dominated Buffalo's political life for the last half-century. The last time anyone other than a Democrat held the position of Mayor in Buffalo was Chester A. Kowal in 1965. In 1977, Democratic Mayor James D. Griffin was elected as the nominee of two minor parties, the Conservative Party and the Right to Life Party, after he lost the Democratic primary for Mayor to then Deputy State Assembly Speaker Arthur Eve. Griffin switched political allegiances several times during his 16 years as Mayor, generally hewing to socially conservative platforms. Griffin's successor, Democrat Anthony M. Masiello (elected in 1993), continued to campaign on social conservatism, often crossing party lines in his endorsements and alliances. However, in 2005, Democrat Byron Brown was elected the city's first African-American mayor in a landslide (64%–27%) over Republican Kevin Helfer, who ran on a conservative platform. In 2013, the Conservative Party endorsed Brown for a third term because of his pledge to cut taxes.
This change in local politics was preceded by a fiscal crisis in 2003, when years of economic decline, a diminishing tax base and civic mismanagement left the city deep in debt and on the edge of bankruptcy. At New York State Comptroller Alan Hevesi's urging, the state took over the management of Buffalo's finances, appointing the Buffalo Fiscal Stability Authority, a New York State public-benefit corporation. Mayor Tony Masiello began conversations about merging the city with the larger Erie County government the following year, but they came to naught.

The offices of the Buffalo District, US Army Corps of Engineers, are next to the Black Rock Lock in the Erie Canal's Black Rock channel. In addition to maintaining and operating the lock, the District plans, designs, constructs and maintains water resources projects from Toledo, Ohio to Massena, New York. These include the flood-control dam at Mount Morris, New York, oversight of the lower Great Lakes (Lake Erie and Lake Ontario), review and permitting of wetlands construction, and remedial action for hazardous waste sites. Buffalo is also the home of a major office of the National Weather Service (NOAA), which serves all of western and much of central New York State.

Buffalo is home to one of the 56 national FBI field offices. The field office covers all of Western New York and parts of the Southern Tier and Central New York, and operates several task forces in conjunction with local agencies to help combat issues such as gang violence, terrorism threats and health care fraud. Buffalo is also the location of the chief judge, United States Attorney and administrative offices for the United States District Court for the Western District of New York.

Buffalo's crime rate in 2015 was higher than the national average; during that year, 41 murders, 1,033 robberies and 1,640 assaults were reported. In 2016, bizjournals.com published an article citing an FBI report that ranked Buffalo's violent crime rate as the 15th-worst in the nation.

Buffalo's major newspaper is "The Buffalo News". Established in 1880 as the "Buffalo Evening News", the newspaper has a daily circulation of 181,540 and a Sunday circulation of 266,123. Through its radio stations WBEN (later WBEN-AM) and WBEN-FM and its television station WBEN-TV, Buffalo's first and for several years only television station, the Buffalo Evening News dominated the local media market until 1977, when the newspaper and the stations were separated; the stations showed their affiliation with the newspaper in their WBEN call signs. Other newspapers in the Buffalo area include "The Public", "The Challenger Community News", and "Buffalo Business First". According to Nielsen Media Research, the Buffalo television market is the 52nd largest in the United States.

Movies shot with significant footage of Buffalo include "Hide in Plain Sight" (1980), "Tuck Everlasting" (1981), "Best Friends" (1982), "The Natural" (1984), "Vamping" (1984), "Canadian Bacon" (1995), "Buffalo '66" (1998), "Manna from Heaven" (2002), "Bruce Almighty" (2003), "The Savages" (2007), "Henry's Crime" (2011), "" (2014), "" (2016), "Marshall" (2016), "Accidental Switch" (2016), and "The American Side" (2017). Although additional movies, such as "Promised Land" (2012), have used Buffalo as a setting, filming often takes place in other locations such as Pittsburgh or Canada. High production costs are blamed for filmmakers shooting all or most of their Buffalo-based scenes elsewhere.
The Buffalo History Museum has compiled a lengthy and comprehensive filmography of feature films, documentary films, and television productions filmed or set in the Buffalo area.

Buffalo Public Schools serve most of the city of Buffalo. The city has 78 public schools, including a growing number of charter schools. The total enrollment was 41,089 students, with a student–teacher ratio of 13.5 to 1. The graduation rate rose to 52% in 2008, from 45% in 2007 and 50% in 2006. More than 27% of teachers have a master's degree or higher, and the median amount of experience in the field is 15 years. The metropolitan area has 292 schools with 172,854 students.

Buffalo's magnet school system attracts students with special interests, such as science, bilingual studies, and Native American studies. Specialized facilities include the Buffalo Elementary School of Technology; the Dr. Martin Luther King Jr. Multicultural Institute; the International School; the Dr. Charles R. Drew Science Magnet; BUILD Academy; Leonardo da Vinci High School; PS 32 Bennett Park Montessori; the Buffalo Academy for Visual and Performing Arts (BAVPA); the Riverside Institute of Technology; Lafayette High School/Buffalo Academy of Finance; Hutchinson Central Technical High School; Burgard Vocational High School; South Park High School; and the Emerson School of Hospitality.

The city is home to 47 private schools, and the metropolitan region has 150 such institutions. Most private schools, such as Bishop Timon – St. Jude High School, Canisius High School (the city's only Jesuit school), Mount Mercy Academy, and Nardin Academy, have a Catholic affiliation. In addition, there are two Islamic schools, Darul Uloom Al-Madania and the Universal School of Buffalo. There are also nonsectarian options, including The Buffalo Seminary (the only private, nonsectarian, all-girls school in Western New York state), Nichols School and numerous charter schools. Private school tuition is approximately 40% less than Buffalo Public Schools' per-student spending, and private schools graduate nearly 100% of students, while public schools graduate only approximately 30%.

Complementing its standard function, the Buffalo Public Schools Adult and Continuing Education Division provides education and services to adults throughout the community. In addition, the Career and Technical Education Department offers more than 20 academic programs and is attended by about 6,000 students each year.

The State University of New York (SUNY) operates three institutions within the city of Buffalo. The State University of New York at Buffalo, known as "Buffalo" or "UB", is the largest public university in New York; it is the only university in Buffalo and is a nationally ranked tier 1 research university. Buffalo State College and Erie Community College are a college and a community college, respectively. Additionally, the private institutions Canisius College and D'Youville College are within the city.

The city is home to two private healthcare systems, which combined operate eight hospitals and countless clinics in the greater metropolitan area, as well as three public hospitals operated by Erie County and the State of New York. Oishei Children's Hospital opened in November 2017 and is one of the only free-standing children's hospitals in New York. Buffalo General Medical Center and the Gates Vascular Institute have earned top rankings in the US for their cutting-edge research and treatment in stroke and neurological care.
Erie County Medical Center has been accredited as a Level One Trauma Center and serves as the trauma and burn care center for Western New York, much of the Southern Tier, and portions of northwestern Pennsylvania and Ontario, Canada. Roswell Park has also become recognized as one of the United States' leading cancer treatment and research centers, and it recruits physicians and researchers from across the world to live and work in the Buffalo area.

The Niagara Frontier Transportation Authority (NFTA) operates Buffalo Niagara International Airport, reconstructed in 1997, in the suburb of Cheektowaga. The airport serves Western New York and much of the Finger Lakes and Southern Tier regions. The Buffalo Metro Rail, also operated by the NFTA, is a 6.4-mile (10.3 km) single-line light rail system that extends from Erie Canal Harbor in downtown Buffalo to the University Heights district (specifically, the South Campus of the University at Buffalo) in the city's northeastern part. The line's downtown section runs above ground and is free of charge to passengers. North of Fountain Plaza Station, at the northern end of downtown, the line moves underground until it reaches its northern terminus at University Heights; passengers pay a fare to ride this section of the rail. Two train stations, Buffalo-Depew and Buffalo-Exchange Street, serve the city and are operated by Amtrak. Historically, the city was a major stop on through routes between Chicago and New York City through the lower Ontario peninsula.

Buffalo is at Lake Erie's eastern end and serves as a playground for many personal yachts, sailboats, power boats and watercraft. The city's extensive breakwall system protects its inner and outer harbors, which are maintained at commercial navigation depths for Great Lakes freighters. The Buffalo River and Buffalo Creek, Lake Erie tributaries, flow through south Buffalo.

Eight New York State highways, one three-digit Interstate Highway and one U.S. Highway traverse the city of Buffalo. New York State Route 5, commonly referred to as Main Street within the city, enters through Lackawanna as a limited-access highway and intersects with Interstate 190, a north–south highway connecting Interstate 90 in the southeastern suburb of Cheektowaga with Niagara Falls. NY 354 (Clinton Street) and NY 130 (Broadway) are east–west highways connecting south and downtown Buffalo to the eastern suburbs of West Seneca and Depew. NY 265 (Delaware Avenue) and NY 266 (Niagara Street and River Road) both start in downtown Buffalo and end in the city of Tonawanda. U.S. 62 (Bailey Avenue), one of three U.S. highways in Erie County, the other two being U.S. 20 (Transit Road) and U.S. 219 (Southern Expressway), is a north–south trunk road that enters the city through Lackawanna and exits at the Amherst town border at a junction with NY 5. Within the city, the route passes light industrial developments and high-density areas, and Bailey Avenue has major intersections with Interstate 190 and the Kensington Expressway.

Three major expressways serve Buffalo. The Scajaquada Expressway (NY 198) is primarily a limited-access highway connecting Interstate 190 near Unity Island to New York State Route 33, which starts at the edge of downtown and the city's East Side, continues through heavily populated areas of the city, intersects with Interstate 90 in Cheektowaga and ends at the airport.
The Peace Bridge is a major international crossing near the city's Black Rock district that connects Buffalo with Fort Erie and Toronto via the Queen Elizabeth Way.

The city of Buffalo has a higher than average percentage of households without a car. In 2015, 30 percent of Buffalo households lacked a car, a share that decreased slightly to 28.2 percent in 2016; the national average was 8.7 percent in 2016. Buffalo averaged 1.03 cars per household in 2016, compared to a national average of 1.8.

Buffalo's water system is operated by Veolia Water. To reduce large-scale ice blockage in the Niagara River, with its resultant flooding, ice damage to docks and other waterfront structures, and blockage of the water intakes for the hydroelectric power plants at Niagara Falls, the New York Power Authority and Ontario Power Generation have jointly operated the Lake Erie–Niagara River Ice Boom since 1964. The boom is installed on December 16, or when the water temperature reaches , whichever happens first. The boom is opened on April 1 unless there is more than of ice remaining in eastern Lake Erie. When in place, the boom stretches from the outer breakwall at Buffalo Harbor almost to the Canadian shore near the ruins of the pier at Erie Beach in Fort Erie. The boom was originally made of wooden timbers, but these have been replaced by steel pontoons.

Buffalo has 15 sister cities.
https://en.wikipedia.org/wiki?curid=3985
Benjamin Franklin Benjamin Franklin (January 17, 1706 – April 17, 1790) was an American polymath and one of the Founding Fathers of the United States. Franklin was a leading writer, printer, political philosopher, politician, Freemason, postmaster, scientist, inventor, humorist, civic activist, statesman, and diplomat. As a scientist, he was a major figure in the American Enlightenment and the history of physics for his discoveries and theories regarding electricity. As an inventor, he is known for the lightning rod, bifocals, and the Franklin stove, among other inventions. He founded many civic organizations, including the Library Company, Philadelphia's first fire department and the University of Pennsylvania.

Franklin earned the title of "The First American" for his early and indefatigable campaigning for colonial unity, initially as an author and spokesman in London for several colonies. As the first United States Ambassador to France, he exemplified the emerging American nation. Franklin was foundational in defining the American ethos as a marriage of the practical values of thrift, hard work, education, community spirit, self-governing institutions, and opposition to authoritarianism both political and religious, with the scientific and tolerant values of the Enlightenment. In the words of historian Henry Steele Commager, "In a Franklin could be merged the virtues of Puritanism without its defects, the illumination of the Enlightenment without its heat." To Walter Isaacson, this makes Franklin "the most accomplished American of his age and the most influential in inventing the type of society America would become."

Franklin became a successful newspaper editor and printer in Philadelphia, the leading city in the colonies, publishing the "Pennsylvania Gazette" at the age of 23. He became wealthy publishing this and "Poor Richard's Almanack", which he authored under the pseudonym "Richard Saunders". After 1767, he was associated with the "Pennsylvania Chronicle", a newspaper known for its revolutionary sentiments and criticisms of the policies of the British Parliament and the Crown.

He pioneered and was the first president of the Academy and College of Philadelphia, which opened in 1751 and later became the University of Pennsylvania. He organized and was the first secretary of the American Philosophical Society and was elected its president in 1769. Franklin became a national hero in America as an agent for several colonies when he spearheaded an effort in London to have the Parliament of Great Britain repeal the unpopular Stamp Act. An accomplished diplomat, he was widely admired among the French as American minister to Paris and was a major figure in the development of positive Franco-American relations. His efforts proved vital for the American Revolution in securing shipments of crucial munitions from France.

He was promoted to deputy postmaster-general for the British colonies in 1753, having been Philadelphia postmaster for many years, and this enabled him to set up the first national communications network. During the revolution, he became the first United States Postmaster General. He was active in community affairs and colonial and state politics, as well as national and international affairs. From 1785 to 1788, he served as governor of Pennsylvania. He initially owned and dealt in slaves but, by the late 1750s, he began arguing against slavery and became an abolitionist.
His life and legacy of scientific and political achievement, and his status as one of America's most influential Founding Fathers, have seen Franklin honored more than two centuries after his death on coinage and the $100 bill, on warships, and in the names of many towns, counties, educational institutions, and corporations, as well as in countless cultural references.

Benjamin Franklin's father, Josiah Franklin, was a tallow chandler, soaper, and candlemaker. Josiah Franklin was born at Ecton, Northamptonshire, England on December 23, 1657, the son of blacksmith and farmer Thomas Franklin and Jane White. Benjamin's father and all four of his grandparents were born in England. Josiah Franklin had a total of seventeen children with his two wives. He married his first wife, Anne Child, in about 1677 in Ecton and emigrated with her to Boston in 1683; they had three children before emigration and four after. Following her death, Josiah married Abiah Folger on July 9, 1689, in the Old South Meeting House by Reverend Samuel Willard, and would eventually have ten children with her. Benjamin, their eighth child, was Josiah Franklin's fifteenth child overall and his tenth and final son.

Benjamin Franklin's mother, Abiah Folger, was born in Nantucket, Massachusetts Bay Colony, on August 15, 1667, to Peter Folger, a miller and schoolteacher, and his wife, Mary Morrell Folger, a former indentured servant. Mary Folger came from a Puritan family that was among the first Pilgrims to flee to Massachusetts for religious freedom, sailing for Boston in 1635 after King Charles I of England had begun persecuting Puritans. Her father Peter was "the sort of rebel destined to transform colonial America." As clerk of the court, he was jailed for disobeying the local magistrate in defense of middle-class shopkeepers and artisans in conflict with wealthy landowners. Benjamin Franklin followed in his grandfather's footsteps in his battles against the wealthy Penn family that owned the Pennsylvania Colony.

Benjamin Franklin was born on Milk Street in Boston, Massachusetts, on January 17, 1706, and baptized at Old South Meeting House. He was one of seventeen children born to Josiah Franklin, and one of ten borne by Josiah's second wife, Abiah Folger, the daughter of Peter Folger and Mary Morrell. Among Benjamin's siblings were his older brother James and his younger sister Jane.

As a child growing up along the Charles River, Franklin recalled that he was "generally the leader among the boys." Josiah wanted Ben to attend school with the clergy but only had enough money to send him to school for two years. He attended Boston Latin School but did not graduate; he continued his education through voracious reading. Although "his parents talked of the church as a career" for Franklin, his schooling ended when he was ten. He worked for his father for a time, and at 12 he became an apprentice to his brother James, a printer, who taught Ben the printing trade. When Ben was 15, James founded "The New-England Courant", which was the first truly independent newspaper in the colonies.

When denied the chance to write a letter to the paper for publication, Franklin adopted the pseudonym of "Silence Dogood", a middle-aged widow. Mrs. Dogood's letters were published and became a subject of conversation around town. Neither James nor the "Courant"'s readers were aware of the ruse, and James was unhappy with Ben when he discovered the popular correspondent was his younger brother. Franklin was an advocate of free speech from an early age.
When his brother was jailed for three weeks in 1722 for publishing material unflattering to the governor, young Franklin took over the newspaper and had Mrs. Dogood (quoting "Cato's Letters") proclaim: "Without freedom of thought there can be no such thing as wisdom and no such thing as public liberty without freedom of speech." Franklin left his apprenticeship without his brother's permission, and in so doing became a fugitive. At age 17, Franklin ran away to Philadelphia, Pennsylvania, seeking a new start in a new city. When he first arrived, he worked in several printer shops around town, but he was not satisfied by the immediate prospects. After a few months, while working in a printing house, Franklin was convinced by Pennsylvania Governor Sir William Keith to go to London, ostensibly to acquire the equipment necessary for establishing another newspaper in Philadelphia. Finding Keith's promises of backing a newspaper empty, Franklin worked as a typesetter in a printer's shop in what is now the Church of St Bartholomew-the-Great in the Smithfield area of London. Following this, he returned to Philadelphia in 1726 with the help of Thomas Denham, a merchant who employed Franklin as clerk, shopkeeper, and bookkeeper in his business. In 1727, Benjamin Franklin, then 21, formed the Junto, a group of "like minded aspiring artisans and tradesmen who hoped to improve themselves while they improved their community." The Junto was a discussion group for issues of the day; it subsequently gave rise to many organizations in Philadelphia. The Junto was modeled after English coffeehouses that Franklin knew well, and which had become the center of the spread of Enlightenment ideas in Britain. Reading was a great pastime of the Junto, but books were rare and expensive. The members created a library initially assembled from their own books. This did not suffice, however. Franklin conceived the idea of a subscription library, which would pool the funds of the members to buy books for all to read. This was the birth of the Library Company of Philadelphia: its charter was composed by Franklin in 1731. In 1732, Franklin hired the first American librarian, Louis Timothee. The Library Company is now a great scholarly and research library. Upon Denham's death, Franklin returned to his former trade. In 1728, Franklin had set up a printing house in partnership with Hugh Meredith; the following year he became the publisher of a newspaper called "The Pennsylvania Gazette". The "Gazette" gave Franklin a forum for agitation about a variety of local reforms and initiatives through printed essays and observations. Over time, his commentary, and his adroit cultivation of a positive image as an industrious and intellectual young man, earned him a great deal of social respect. But even after Franklin had achieved fame as a scientist and statesman, he habitually signed his letters with the unpretentious 'B. Franklin, Printer.' In 1732, Ben Franklin published the first German-language newspaper in America – "Die Philadelphische Zeitung" – although it failed after only one year because four other newly founded German papers quickly dominated the newspaper market. Franklin printed Moravian religious books in German. Franklin often visited Bethlehem, Pennsylvania, staying at the Moravian Sun Inn.
In a 1751 pamphlet on demographic growth and its implications for the colonies, he called the Pennsylvania Germans "Palatine Boors" who could never acquire the "Complexion" of the English settlers and referred to "Blacks and Tawneys" as weakening the social structure of the colonies. Although Franklin apparently reconsidered shortly thereafter, and the phrases were omitted from all later printings of the pamphlet, his views may have played a role in his political defeat in 1764. Franklin saw the printing press as a device to instruct colonial Americans in moral virtue. In "Benjamin Franklin's Journalism", Ralph Frasca argues that he saw this as a service to God, because he understood moral virtue in terms of actions; thus, doing good provided a service to God. Despite his own moral lapses, Franklin saw himself as uniquely qualified to instruct Americans in morality. He tried to influence American moral life through the construction of a printing network based on a chain of partnerships from the Carolinas to New England. Franklin thereby invented the first newspaper chain. It was more than a business venture, for, like many publishers since, he believed that the press had a public-service duty. When Franklin established himself in Philadelphia, shortly before 1730, the town boasted two "wretched little" news sheets: Andrew Bradford's "The American Weekly Mercury" and Samuel Keimer's "Universal Instructor in all Arts and Sciences, and Pennsylvania Gazette". This instruction in all arts and sciences consisted of weekly extracts from "Chambers's Universal Dictionary". Franklin quickly did away with all this when he took over the "Instructor" and made it "The Pennsylvania Gazette". The "Gazette" soon became Franklin's characteristic organ, which he freely used for satire, for the play of his wit, even for sheer excess of mischief or of fun. From the first, he had a way of adapting his models to his own uses. The series of essays called "The Busy-Body", which he wrote for Bradford's "American Mercury" in 1729, followed the general Addisonian form, already modified to suit homelier conditions. The thrifty Patience, in her busy little shop, complaining of the useless visitors who waste her valuable time, is related to the ladies who address Mr. Spectator. The Busy-Body himself is a true Censor Morum, as Isaac Bickerstaff had been in the "Tatler". And a number of the fictitious characters, Ridentius, Eugenius, Cato, and Cretico, represent traditional 18th-century classicism. Even this Franklin could use for contemporary satire, since Cretico, the "sowre Philosopher", is evidently a portrait of Franklin's rival, Samuel Keimer. The "Pennsylvania Gazette", like most other newspapers of the period, was often poorly printed. Franklin was busy with matters outside of his printing office and never seriously attempted to raise the mechanical standards of his trade. Nor did he ever properly edit or collate the chance medley of stale items that passed for news in the "Gazette". His influence on the practical side of journalism was minimal. On the other hand, his advertisements of books show his very great interest in popularizing secular literature. Undoubtedly his paper contributed to the broader culture that distinguished Pennsylvania from her neighbors before the Revolution. Like many publishers, Franklin built up a book shop in his printing office; he took the opportunity to read new books before selling them.
Franklin had mixed success in his plan to establish an inter-colonial network of newspapers that would produce a profit for him and disseminate virtue. He began in Charleston, South Carolina, in 1731. After the second editor died, his widow Elizabeth Timothy took over and made it a success from 1738 to 1746. She was one of the colonial era's first woman printers. For three decades Franklin maintained a close business relationship with her and her son Peter, who took over in 1746. The "Gazette" had a policy of impartiality in political debates, while creating the opportunity for public debate, which encouraged others to challenge authority. Editor Peter Timothy avoided blandness and crude bias, and after 1765 increasingly took a patriotic stand in the growing crisis with Great Britain. However, Franklin's "Connecticut Gazette" (1755–68) proved unsuccessful. In 1730 or 1731, Franklin was initiated into the local Masonic lodge. He became Grand Master in 1734, indicating his rapid rise to prominence in Pennsylvania. The same year, he edited and published the first Masonic book in the Americas, a reprint of James Anderson's "Constitutions of the Free-Masons". He was the Secretary of St. John's Lodge in Philadelphia from 1735 to 1738. Franklin remained a Freemason for the rest of his life. At age 17 in 1723, Franklin proposed to 15-year-old Deborah Read while a boarder in the Read home. At that time, Read's mother was wary of allowing her young daughter to marry Franklin, who was about to leave for London at Governor Sir William Keith's request and whose finances were unstable. Her own husband had recently died, and she declined Franklin's request to marry her daughter. While Franklin was in London, his trip was extended, and there were problems with Sir William's promises of support. Perhaps because of the circumstances of this delay, Deborah married a man named John Rodgers. This proved to be a regrettable decision. Rodgers soon fled to Barbados with her dowry to avoid his debts and prosecution, leaving Deborah behind. Rodgers's fate was unknown, and because of bigamy laws, Deborah was not free to remarry. Franklin established a common-law marriage with Deborah Read on September 1, 1730. They took in Franklin's recently acknowledged young illegitimate son, William, and raised him in their household. They had two children together. Their son, Francis Folger Franklin, was born in October 1732 and died of smallpox in 1736. Their daughter, Sarah "Sally" Franklin, was born in 1743 and grew up to marry Richard Bache, have seven children, and look after her father in his old age. Deborah's fear of the sea meant that she never accompanied Franklin on any of his extended trips to Europe. Another possible reason they spent so much time apart is that Franklin may have blamed her for preventing their son Francis from being inoculated against the smallpox that subsequently killed him. Deborah wrote to him in November 1769 saying she was ill due to "dissatisfied distress" from his prolonged absence, but he did not return until his business was done. Deborah Read Franklin died of a stroke in 1774, while Franklin was on an extended mission to England; he returned in 1775. In 1730, 24-year-old Franklin publicly acknowledged the existence of his son William, who was deemed "illegitimate," as he was born out of wedlock, and raised him in his household. William was born February 22, 1730; his mother's identity is still unknown.
He was educated in Philadelphia and, beginning at about age 30, studied law in London in the early 1760s. William himself fathered an illegitimate son, William Temple Franklin, born on the same date, February 22, 1760. The boy's mother was never identified, and he was placed in foster care. In 1762, the elder William Franklin married Elizabeth Downes, daughter of a planter from Barbados, in London. After William passed the bar, his father helped him gain an appointment in 1763 as the last Royal Governor of New Jersey. William, a Loyalist to the British Empire, and his father eventually broke relations over their differences about the American Revolutionary War, as Benjamin Franklin could never accept William's position. Deposed in 1776 by the revolutionary government of New Jersey, William was placed under house arrest at his home in Perth Amboy for six months. After the Declaration of Independence, he was formally taken into custody by order of the Provincial Congress of New Jersey, an entity which he refused to recognize, regarding it as an "illegal assembly." He was incarcerated in Connecticut for two years, in Wallingford and Middletown, and, after being caught surreptitiously recruiting Americans to the Loyalist cause, was held in solitary confinement at Litchfield for eight months. When finally released in a prisoner exchange in 1778, he moved to New York City, which was still occupied by the British at the time. While in New York City, he became leader of the Board of Associated Loyalists, a quasi-military organization chartered by King George III and headquartered in New York City. They initiated guerrilla forays into New Jersey, southern Connecticut, and New York counties north of the city. When British troops evacuated from New York, William Franklin left with them and sailed to England. He settled in London, never to return to North America. In the preliminary peace talks in 1782 with Britain, "... Benjamin Franklin insisted that loyalists who had borne arms against the United States would be excluded from this plea (that they be given a general pardon). He was undoubtedly thinking of William Franklin." In 1733, Franklin began to publish the noted "Poor Richard's Almanack" (with content both original and borrowed) under the pseudonym Richard Saunders, on which much of his popular reputation is based. Franklin frequently wrote under pseudonyms. Although it was no secret that Franklin was the author, his Richard Saunders character repeatedly denied it. "Poor Richard's Proverbs", adages from this almanac, such as "A penny saved is twopence dear" (often misquoted as "A penny saved is a penny earned") and "Fish and visitors stink in three days", remain common quotations in the modern world. Wisdom in folk society meant the ability to provide an apt adage for any occasion, and Franklin's readers became well prepared. He sold about ten thousand copies per year; it became an institution. In 1741, Franklin began publishing "The General Magazine and Historical Chronicle for all the British Plantations in America", the first monthly magazine of its type published in America. In 1758, the year he ceased writing for the Almanack, he printed "Father Abraham's Sermon", also known as "The Way to Wealth". Franklin's autobiography, begun in 1771 but published after his death, has become one of the classics of the genre.
Daylight saving time (DST) is often erroneously attributed to a 1784 satire that Franklin published anonymously. Modern DST was first proposed by George Vernon Hudson in 1895. Franklin was a prodigious inventor. Among his many creations were the lightning rod, the glass harmonica (a glass instrument, not to be confused with the metal harmonica), the Franklin stove, bifocal glasses, and the flexible urinary catheter. Franklin never patented his inventions; in his autobiography he wrote, "... as we enjoy great advantages from the inventions of others, we should be glad of an opportunity to serve others by any invention of ours; and this we should do freely and generously." Franklin started exploring the phenomenon of electricity in 1746 when he saw some of Archibald Spencer's lectures using static electricity for illustrations. Franklin proposed that "vitreous" and "resinous" electricity were not different types of "electrical fluid" (as electricity was called then), but the same "fluid" under different pressures. (The same proposal was made independently that same year by William Watson.) Franklin was the first to label them as positive and negative respectively, and he was the first to discover the principle of conservation of charge. In 1748, he constructed a multiple-plate capacitor, which he called an "electrical battery" (not to be confused with Volta's pile), by placing eleven panes of glass sandwiched between lead plates, suspended with silk cords and connected by wires. In pursuit of more pragmatic uses for electricity, remarking in spring 1749 that he felt "chagrin'd a little" that his experiments had heretofore resulted in "Nothing in this Way of Use to Mankind," Franklin planned a practical demonstration. He proposed a dinner party where a turkey was to be killed with electric shock and roasted on an electrical spit. After having prepared several turkeys this way, Franklin noted that "the birds kill'd in this manner eat uncommonly tender." Franklin recounted that in the process of one of these experiments, he was shocked by a pair of Leyden jars, resulting in numbness in his arms that persisted for one evening, noting "I am Ashamed to have been Guilty of so Notorious a Blunder." In recognition of his work with electricity, Franklin received the Royal Society's Copley Medal in 1753, and in 1756, he became one of the few 18th-century Americans elected as a Fellow of the Society. He received his first honorary degrees, from Harvard and Yale universities. The CGS unit of electric charge has been named after him: one "franklin" (Fr) is equal to one statcoulomb. Franklin advised Harvard University in its acquisition of new electrical laboratory apparatus after the complete loss of its original collection, in a fire that destroyed the original Harvard Hall in 1764. The collection he assembled would later become part of the Harvard Collection of Historical Scientific Instruments, now on public display in its Science Center. Franklin briefly investigated electrotherapy, including the use of the electric bath. This work led to the field becoming widely known. Franklin published a proposal for an experiment to prove that lightning is electricity by flying a kite in a storm that appeared capable of becoming a lightning storm. On May 10, 1752, Thomas-François Dalibard of France conducted Franklin's experiment using an iron rod instead of a kite, and he extracted electrical sparks from a cloud.
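The franklin is a CGS electrostatic unit, and its relation to the SI coulomb follows from the definition of the statcoulomb (1 Fr = 0.1/c coulombs, with c the speed of light in m/s). As a hedged illustration only, not something stated in this article, a few lines of Python make the conversion concrete:

```python
# Convert between franklins (statcoulombs) and SI coulombs.
# 1 Fr = 1 statC = 0.1 / c coulombs, where c = 299792458 m/s,
# giving the familiar figure of roughly 2.998e9 Fr per coulomb.
C_PER_FR = 0.1 / 299_792_458  # about 3.3356e-10 C per franklin

def franklins_to_coulombs(fr: float) -> float:
    return fr * C_PER_FR

def coulombs_to_franklins(c: float) -> float:
    return c / C_PER_FR

print(coulombs_to_franklins(1.0))  # ~2.998e9 franklins in one coulomb
```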
On June 15, 1752, Franklin may have conducted his well-known kite experiment in Philadelphia, successfully extracting sparks from a cloud. Franklin described the experiment in the "Pennsylvania Gazette" on October 19, 1752, without mentioning that he himself had performed it. This account was read to the Royal Society on December 21 and printed as such in the "Philosophical Transactions". Joseph Priestley published an account with additional details in his 1767 "History and Present Status of Electricity". Franklin was careful to stand on an insulator, keeping dry under a roof to avoid the danger of electric shock. Others, such as Prof. Georg Wilhelm Richmann in Russia, were indeed electrocuted in performing lightning experiments during the months immediately following Franklin's experiment. In his writings, Franklin indicates that he was aware of the dangers and offered alternative ways to demonstrate that lightning was electrical, as shown by his use of the concept of electrical ground. Franklin did not perform this experiment in the way that is often pictured in popular literature, flying the kite and waiting to be struck by lightning, as it would have been dangerous. Instead he used the kite to collect some electric charge from a storm cloud, showing that lightning was electrical. On October 19, Franklin sent a letter to England with directions for repeating the experiment. Franklin's electrical experiments led to his invention of the lightning rod. He said that conductors with a sharp rather than a smooth point could discharge silently, and at a far greater distance. He surmised that this could help protect buildings from lightning by attaching "upright Rods of Iron, made sharp as a Needle and gilt to prevent Rusting, and from the Foot of those Rods a Wire down the outside of the Building into the Ground; ... Would not these pointed Rods probably draw the Electrical Fire silently out of a Cloud before it came nigh enough to strike, and thereby secure us from that most sudden and terrible Mischief!" Following a series of experiments on Franklin's own house, lightning rods were installed on the Academy of Philadelphia (later the University of Pennsylvania) and the Pennsylvania State House (later Independence Hall) in 1752. Franklin had a major influence on the emerging science of demography, or population studies. In the 1730s and 1740s, Franklin began taking notes on population growth, finding that the American population had the fastest growth rate on earth. Emphasizing that population growth depended on food supplies, Franklin emphasized the abundance of food and available farmland in America. He calculated that America's population was doubling every twenty years and would surpass that of England in a century. In 1751, he drafted "Observations concerning the Increase of Mankind, Peopling of Countries, etc." Four years later, it was anonymously printed in Boston, and it was quickly reproduced in Britain, where it influenced the economist Adam Smith and later the demographer Thomas Malthus, who credited Franklin for discovering a rule of population growth. Franklin's prediction that British mercantilism was unsustainable alarmed British leaders who did not want to be surpassed by the colonies, so they became more willing to impose restrictions on the colonial economy.
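Franklin's doubling estimate is a straightforward exponential-growth computation. As a sketch, taking the twenty-year doubling period reported above as the only input, a population P_0 grows as

```latex
P(t) = P_0 \cdot 2^{t/20}, \qquad P(100) = P_0 \cdot 2^{100/20} = 32\, P_0 ,
```

so a century of such growth multiplies the starting population thirty-two-fold, which is why Franklin expected the fast-growing colonies to overtake England within a hundred years.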
Kammen (1990) and Drake (2011) say Franklin's "Observations concerning the Increase of Mankind" (1755) stands alongside Ezra Stiles' "Discourse on Christian Union" (1760) as the leading works of eighteenth-century Anglo-American demography; Drake credits Franklin's "wide readership and prophetic insight." Franklin was also a pioneer in the study of slave demography, as shown in his 1755 essay. Benjamin Franklin, in his capacity as a farmer, wrote at least one critique about the negative consequences of price controls, trade restrictions, and subsidy of the poor. This is succinctly preserved in his letter to the London Chronicle published November 29, 1766, titled 'On the Price of Corn, and Management of the Poor'. As deputy postmaster, Franklin became interested in North Atlantic Ocean circulation patterns. While in England in 1768, he heard a complaint from the Colonial Board of Customs: Why did it take British packet ships carrying mail several weeks longer to reach New York than it took an average merchant ship to reach Newport, Rhode Island? The merchantmen had a longer and more complex voyage because they left from London, while the packets left from Falmouth in Cornwall. Franklin put the question to his cousin Timothy Folger, a Nantucket whaler captain, who told him that merchant ships routinely avoided a strong eastbound mid-ocean current. The mail packet captains sailed dead into it, fighting a strong adverse current the whole way. Franklin worked with Folger and other experienced ship captains, learning enough to chart the current and name it the Gulf Stream, by which it is still known today. Franklin published his Gulf Stream chart in 1770 in England, where it was completely ignored. Subsequent versions were printed in France in 1778 and the U.S. in 1786. The British edition of the chart, which was the original, was so thoroughly ignored that everyone assumed it was lost forever until Phil Richardson, a Woods Hole oceanographer and Gulf Stream expert, discovered it in the Bibliothèque Nationale in Paris in 1980. This find received front-page coverage in "The New York Times". It took many years for British sea captains to adopt Franklin's advice on navigating the current; once they did, they were able to trim two weeks from their sailing time.
https://en.wikipedia.org/wiki?curid=3986
Banach space In mathematics, more specifically in functional analysis, a Banach space is a complete normed vector space. Thus, a Banach space is a vector space with a metric that allows the computation of vector length and distance between vectors and is complete in the sense that a Cauchy sequence of vectors always converges to a well-defined limit that is within the space. Banach spaces are named after the Polish mathematician Stefan Banach, who introduced this concept and studied it systematically in 1920–1922 along with Hans Hahn and Eduard Helly. Banach spaces originally grew out of the study of function spaces by Hilbert, Fréchet, and Riesz earlier in the century. Banach spaces play a central role in functional analysis. In other areas of analysis, the spaces under study are often Banach spaces. A Banach space is a vector space X over the field K of real or complex numbers that is equipped with a norm ||·|| and that is complete with respect to the distance function induced by the norm, d(x, y) = ||x − y||; that is to say, for every Cauchy sequence (x_n) in X, there exists an element x in X such that lim_n x_n = x, or equivalently lim_n ||x_n − x|| = 0. The vector space structure allows one to relate the behavior of Cauchy sequences to that of converging series of vectors. A normed space X is a Banach space if and only if each absolutely convergent series in X converges in X: if Σ_n ||v_n|| < ∞, then Σ_n v_n converges in X. Completeness of a normed space is preserved if the given norm is replaced by an equivalent one. All norms on a finite-dimensional vector space are equivalent. Every finite-dimensional normed space over R or C is a Banach space. If X and Y are normed spaces over the same ground field K, the set of all continuous K-linear maps T : X → Y is denoted by B(X, Y). In infinite-dimensional spaces, not all linear maps are continuous. A linear mapping from a normed space X to another normed space is continuous if and only if it is bounded on the closed unit ball of X. Thus, the vector space B(X, Y) can be given the operator norm ||T|| = sup { ||Tx|| : x ∈ X, ||x|| ≤ 1 }. For Y a Banach space, the space B(X, Y) is a Banach space with respect to this norm. If X is a Banach space, the space B(X) = B(X, X) forms a unital Banach algebra; the multiplication operation is given by the composition of linear maps. If X and Y are normed spaces, they are isomorphic normed spaces if there exists a linear bijection T : X → Y such that T and its inverse T⁻¹ are continuous. If one of the two spaces X or Y is complete (or reflexive, separable, etc.) then so is the other space. Two normed spaces X and Y are isometrically isomorphic if, in addition, T is an isometry, i.e., ||T(x)|| = ||x|| for every x in X. The Banach–Mazur distance between two isomorphic but not isometric spaces X and Y gives a measure of how much the two spaces differ. Every normed space can be isometrically embedded in a Banach space. More precisely, for every normed space X, there exist a Banach space Y and a mapping T : X → Y such that T is an isometric mapping and T(X) is dense in Y. If Z is another Banach space such that there is an isometric isomorphism from X onto a dense subset of Z, then Z is isometrically isomorphic to Y. This Banach space Y is the completion of the normed space X. The underlying metric space for Y is the same as the metric completion of X, with the vector space operations extended from X to Y. The completion of X is often denoted by X̂. The cartesian product X × Y of two normed spaces is not canonically equipped with a norm. However, several equivalent norms are commonly used, such as ||(x, y)|| = ||x|| + ||y|| and ||(x, y)|| = max(||x||, ||y||); they give rise to isomorphic normed spaces. In this sense, the product X × Y (or the direct sum X ⊕ Y) is complete if and only if the two factors are complete.
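The operator norm just defined can be explored numerically. The following is a minimal sketch in Python with NumPy (the matrix, the sampling scheme, and NumPy itself are assumptions of the illustration, not anything this article prescribes): for a matrix A acting on Euclidean space, sampling random unit vectors bounds sup { ||Ax|| : ||x|| ≤ 1 } from below, and for the Euclidean norm the exact value is the largest singular value of A:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

# Monte Carlo lower bound for ||A|| = sup{ ||Ax|| : ||x|| <= 1 }.
xs = rng.standard_normal((100_000, 4))
xs /= np.linalg.norm(xs, axis=1, keepdims=True)   # project onto the unit sphere
estimate = np.max(np.linalg.norm(xs @ A.T, axis=1))

exact = np.linalg.norm(A, 2)   # spectral norm: the largest singular value of A
print(estimate, exact)         # the estimate approaches the exact value from below
```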
If M is a closed linear subspace of a normed space X, there is a natural norm on the quotient space X/M: ||x + M|| = inf { ||x + m|| : m ∈ M }. The quotient X/M is a Banach space when X is complete. The quotient map from X onto X/M, sending x in X to its class x + M, is linear, onto, and has norm 1, except when M = X, in which case the quotient is the null space. The closed linear subspace M of X is said to be a complemented subspace of X if M is the range of a bounded linear projection P from X onto M. In this case, the space X is isomorphic to the direct sum of M and Ker(P), the kernel of the projection P. Suppose that X and Y are Banach spaces and that T ∈ B(X, Y). There exists a canonical factorization of T as T = T₁ ∘ π, where the first map π is the quotient map from X onto X/Ker(T), and the second map T₁ sends every class x + Ker(T) in the quotient to the image T(x) in Y. This is well defined because all elements in the same class have the same image. The mapping T₁ is a linear bijection from X/Ker(T) onto the range T(X), whose inverse need not be bounded. Basic examples of Banach spaces include: the spaces L^p and their special cases, the sequence spaces ℓ^p that consist of scalar sequences indexed by the natural numbers; among them, the space ℓ^1 of absolutely summable sequences and the space ℓ^2 of square summable sequences; the space c₀ of sequences tending to zero and the space ℓ^∞ of bounded sequences; the space C(K) of continuous scalar functions on a compact Hausdorff space K, equipped with the max norm ||f|| = max { |f(x)| : x ∈ K }. According to the Banach–Mazur theorem, every Banach space is isometrically isomorphic to a subspace of some C(K). For every separable Banach space X, there is a closed subspace M of ℓ^1 such that X is isomorphic to ℓ^1/M. Any Hilbert space serves as an example of a Banach space. A Hilbert space H is complete for a norm of the form ||x|| = √⟨x, x⟩, where ⟨·, ·⟩ is the inner product, linear in its first argument, that satisfies the following: ⟨y, x⟩ is the complex conjugate of ⟨x, y⟩, ⟨x, x⟩ ≥ 0, and ⟨x, x⟩ = 0 if and only if x = 0. For example, the space L^2 is a Hilbert space. The Hardy spaces and the Sobolev spaces are examples of Banach spaces that are related to L^p spaces and have additional structure. They are important in different branches of analysis, harmonic analysis and partial differential equations among others. A Banach algebra is a Banach space A over K = R or C, together with a structure of algebra over K, such that the product map A × A ∋ (a, b) ↦ ab ∈ A is continuous. An equivalent norm on A can be found so that ||ab|| ≤ ||a|| ||b|| for all a, b ∈ A. If X is a normed space and K the underlying field (either the real or the complex numbers), the continuous dual space is the space of continuous linear maps from X into K, or continuous linear functionals. The notation for the continuous dual is X′ = B(X, K) in this article. Since K is a Banach space (using the absolute value as norm), the dual X′ is a Banach space, for every normed space X. The main tool for proving the existence of continuous linear functionals is the Hahn–Banach theorem. In particular, every continuous linear functional on a subspace of a normed space can be continuously extended to the whole space, without increasing the norm of the functional. An important special case is the following: for every vector x in a normed space X, there exists a continuous linear functional f on X such that f(x) = ||x|| and ||f|| ≤ 1. When x is not equal to the 0 vector, the functional f must have norm one, and is called a norming functional for x. The Hahn–Banach separation theorem states that two disjoint non-empty convex sets in a real Banach space, one of them open, can be separated by a closed affine hyperplane. The open convex set lies strictly on one side of the hyperplane, the second convex set lies on the other side but may touch the hyperplane. A subset S in a Banach space X is total if the linear span of S is dense in X.
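For the sequence spaces just listed, the inclusions ℓ^1 ⊂ ℓ^2 ⊂ c₀ ⊂ ℓ^∞ come with the norm inequalities ||x||_∞ ≤ ||x||_2 ≤ ||x||_1. A short hedged check in Python with NumPy (necessarily on a truncated, finitely supported sequence, since a computer holds only finitely many terms):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(1000)      # a finitely supported "sequence"

n_inf = np.max(np.abs(x))          # sup norm, as in l^infinity
n_2 = np.sqrt(np.sum(x ** 2))      # l^2 norm
n_1 = np.sum(np.abs(x))            # l^1 norm

assert n_inf <= n_2 <= n_1         # ||x||_inf <= ||x||_2 <= ||x||_1
print(n_inf, n_2, n_1)
```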
The subset S is total in X if and only if the only continuous linear functional that vanishes on S is the 0 functional: this equivalence follows from the Hahn–Banach theorem. If X is the direct sum of two closed linear subspaces M and N, then the dual X′ of X is isomorphic to the direct sum of the duals of M and N. If M is a closed linear subspace in X, one can associate the orthogonal of M in the dual, M^⊥ = { f ∈ X′ : f(m) = 0 for every m ∈ M }. The orthogonal M^⊥ is a closed linear subspace of the dual. The dual of M is isometrically isomorphic to X′/M^⊥. The dual of X/M is isometrically isomorphic to M^⊥. The dual of a separable Banach space need not be separable, but if the dual X′ is separable, then X is separable. When X′ is separable, the above criterion for totality can be used for proving the existence of a countable total subset in X. The weak topology on a Banach space X is the coarsest topology on X for which all elements in the continuous dual space X′ are continuous. The norm topology is therefore finer than the weak topology. It follows from the Hahn–Banach separation theorem that the weak topology is Hausdorff, and that a norm-closed convex subset of a Banach space is also weakly closed. A norm-continuous linear map between two Banach spaces X and Y is also weakly continuous, i.e., continuous from the weak topology of X to that of Y. If X is infinite-dimensional, there exist linear maps which are not continuous. The space of all linear maps from X to the underlying field K (this space is called the algebraic dual space, to distinguish it from X′) also induces a topology on X which is finer than the weak topology, and much less used in functional analysis. On a dual space X′, there is a topology weaker than the weak topology of X′, called the weak* topology. It is the coarsest topology on X′ for which all evaluation maps f ↦ f(x), x ∈ X, are continuous. Its importance comes from the Banach–Alaoglu theorem. The Banach–Alaoglu theorem depends on Tychonoff's theorem about infinite products of compact spaces. When X is separable, the unit ball of the dual is a metrizable compact in the weak* topology. The dual of c₀ is isometrically isomorphic to ℓ^1: for every bounded linear functional f on c₀, there is a unique element y = (y_n) ∈ ℓ^1 such that f(x) = Σ_n x_n y_n for every x = (x_n) ∈ c₀. The dual of ℓ^1 is isometrically isomorphic to ℓ^∞. The dual of L^p([0, 1]) is isometrically isomorphic to L^q([0, 1]) when 1 ≤ p < ∞ and 1/p + 1/q = 1. For every vector y in a Hilbert space H, the mapping x ↦ ⟨x, y⟩ defines a continuous linear functional on H. The Riesz representation theorem states that every continuous linear functional on H is of the form x ↦ ⟨x, y⟩ for a uniquely defined vector y in H. The mapping y ↦ ⟨·, y⟩ is an antilinear isometric bijection from H onto its dual H′. When the scalars are real, this map is an isometric isomorphism. When K is a compact Hausdorff topological space, the dual M(K) of C(K) is the space of Radon measures in the sense of Bourbaki. The subset P(K) of M(K) consisting of non-negative measures of mass 1 (probability measures) is a convex w*-closed subset of the unit ball of M(K). The extreme points of P(K) are the Dirac measures on K. The set of Dirac measures on K, equipped with the w*-topology, is homeomorphic to K. By the Banach–Stone theorem, if C(K) and C(L) are isometrically isomorphic, then the compact spaces K and L are homeomorphic; this result has been extended by Amir and Cambern to the case when the multiplicative Banach–Mazur distance between C(K) and C(L) is less than 2. The theorem is no longer true when the distance is equal to 2. In the commutative Banach algebra C(K), the maximal ideals are precisely the kernels of Dirac measures on K: M_x = ker(δ_x) = { f ∈ C(K) : f(x) = 0 }, for x ∈ K. More generally, by the Gelfand–Mazur theorem, the maximal ideals of a unital commutative Banach algebra can be identified with its characters, not merely as sets but as topological spaces: the former with the hull-kernel topology and the latter with the w*-topology.
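The L^p–L^q duality described above is governed by Hölder's inequality, |Σ x_n y_n| ≤ ||x||_p ||y||_q with 1/p + 1/q = 1, which is exactly what makes each y ∈ ℓ^q act as a bounded linear functional on ℓ^p. A NumPy sketch (the random finite vectors and the particular exponents are assumptions of the illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
p, q = 3.0, 1.5                     # conjugate exponents: 1/3 + 1/1.5 = 1
x = rng.standard_normal(500)
y = rng.standard_normal(500)

pairing = np.abs(np.sum(x * y))     # |<x, y>| = |sum of x_n * y_n|
bound = (np.sum(np.abs(x) ** p) ** (1 / p)
         * np.sum(np.abs(y) ** q) ** (1 / q))   # ||x||_p * ||y||_q

assert pairing <= bound             # Hölder's inequality holds
print(pairing, bound)
```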
In this identification, the maximal ideal space can be viewed as a w*-compact subset of the unit ball in the dual C(K)′. Not every unital commutative Banach algebra is of the form C(K) for some compact Hausdorff space K. However, this statement holds if one places C(K) in the smaller category of commutative C*-algebras. Gelfand's representation theorem for commutative C*-algebras states that every commutative unital C*-algebra A is isometrically isomorphic to a C(K) space. The Hausdorff compact space K here is again the maximal ideal space, also called the spectrum of A in the C*-algebra context. If X is a normed space, the (continuous) dual X″ of the dual X′ is called the bidual, or second dual, of X. For every normed space X, there is a natural map F_X : X → X″, defined by F_X(x)(f) = f(x) for every x in X and every f in X′. This defines F_X(x) as a continuous linear functional on X′, i.e., an element of X″. The map F_X is a linear map from X to X″. As a consequence of the existence of a norming functional f for every x in X, this map is isometric, thus injective. For example, the dual of X = c₀ is identified with ℓ^1, and the dual of ℓ^1 is identified with ℓ^∞, the space of bounded scalar sequences. Under these identifications, F_X is the inclusion map from c₀ to ℓ^∞. It is indeed isometric, but not onto. If F_X is surjective, then the normed space X is called reflexive (see below). Being the dual of a normed space, the bidual X″ is complete; therefore, every reflexive normed space is a Banach space. Using the isometric embedding F_X, it is customary to consider a normed space X as a subset of its bidual. When X is a Banach space, it is viewed as a closed linear subspace of X″. If X is not reflexive, the unit ball of X is a proper subset of the unit ball of X″. The Goldstine theorem states that the unit ball of a normed space is weakly*-dense in the unit ball of the bidual. In other words, for every x″ in the bidual, there exists a net (x_j) in X that converges to x″ in the weak* topology of X″. The net may be replaced by a weakly*-convergent sequence when the dual X′ is separable. On the other hand, elements of the bidual of ℓ^1 that are not in ℓ^1 cannot be weak*-limits of sequences in ℓ^1, since ℓ^1 is weakly sequentially complete. Here are the main general results about Banach spaces that go back to the time of Banach's book (1932) and are related to the Baire category theorem. According to this theorem, a complete metric space (such as a Banach space, a Fréchet space or an F-space) cannot be equal to a union of countably many closed subsets with empty interiors. Therefore, a Banach space cannot be the union of countably many closed subspaces, unless it is already equal to one of them; a Banach space with a countable Hamel basis is finite-dimensional. The Banach–Steinhaus theorem is not limited to Banach spaces. It can be extended, for example, to the case where X is a Fréchet space, provided the conclusion is modified as follows: under the same hypothesis, there exists a neighborhood U of 0 in X such that all maps T in the family F are uniformly bounded on U, sup { ||T(x)|| : x ∈ U, T ∈ F } < ∞. This result is a direct consequence of the preceding "Banach isomorphism theorem" and of the canonical factorization of bounded linear maps. This is another consequence of Banach's isomorphism theorem, applied to the continuous bijection from the direct sum M ⊕ N of two complementary closed subspaces onto X, sending (m, n) to the sum m + n. The normed space X is called reflexive when the natural map F_X : X → X″ is surjective. Reflexive normed spaces are Banach spaces. This is a consequence of the Hahn–Banach theorem. Further, by the open mapping theorem, if there is a bounded linear operator from a reflexive Banach space X onto a Banach space Y, then Y is reflexive. Indeed, if the dual Y′ of a Banach space Y is separable, then Y is separable. If X is reflexive and separable, then the dual of X′ is separable, so X′ is separable.
Hilbert spaces are reflexive. The L^p spaces are reflexive when 1 < p < ∞. More generally, uniformly convex spaces are reflexive, by the Milman–Pettis theorem. The spaces c₀, ℓ^1, L^1([0, 1]) and C([0, 1]) are not reflexive. In these examples of non-reflexive spaces X, the bidual X″ is "much larger" than X. Namely, under the natural isometric embedding of X into X″ given by the Hahn–Banach theorem, the quotient X″/X is infinite-dimensional, and even nonseparable. However, Robert C. James has constructed an example of a non-reflexive space, usually called "the James space" and denoted by J, such that the quotient J″/J is one-dimensional. Furthermore, this space J is isometrically isomorphic to its bidual. When X is reflexive, it follows that all closed and bounded convex subsets of X are weakly compact. In a Hilbert space H, the weak compactness of the unit ball is very often used in the following way: every bounded sequence in H has weakly convergent subsequences. Weak compactness of the unit ball provides a tool for finding solutions in reflexive spaces to certain optimization problems. For example, every convex continuous function on the unit ball B of a reflexive space attains its minimum at some point in B. As a special case of the preceding result, when X is a reflexive space over R, every continuous linear functional f in X′ attains its maximum ||f|| on the unit ball of X. The following theorem of Robert C. James provides a converse statement: a Banach space X is reflexive if and only if every continuous linear functional on X attains its norm on the unit ball of X. The theorem can be extended to give a characterization of weakly compact convex sets. On every non-reflexive Banach space X, there exist continuous linear functionals that are not "norm-attaining". However, the Bishop–Phelps theorem states that norm-attaining functionals are norm dense in the dual X′ of X. A sequence (x_n) in a Banach space X is weakly convergent to a vector x ∈ X if f(x_n) converges to f(x) for every continuous linear functional f in the dual X′. The sequence (x_n) is a weakly Cauchy sequence if f(x_n) converges to a scalar limit L(f), for every f in X′. A sequence (f_n) in the dual X′ is weakly* convergent to a functional f ∈ X′ if f_n(x) converges to f(x) for every x in X. Weakly Cauchy sequences, weakly convergent and weakly* convergent sequences are norm bounded, as a consequence of the Banach–Steinhaus theorem. When the sequence (x_n) in X is a weakly Cauchy sequence, the limit L above defines a bounded linear functional on the dual X′, i.e., an element L of the bidual of X, and L is the limit of (x_n) in the weak*-topology of the bidual. The Banach space X is weakly sequentially complete if every weakly Cauchy sequence is weakly convergent in X. It follows from the preceding discussion that reflexive spaces are weakly sequentially complete. An orthonormal sequence in a Hilbert space is a simple example of a weakly convergent sequence, with limit equal to the 0 vector. The unit vector basis of ℓ^p for 1 < p < ∞, or of c₀, is another example of a weakly null sequence, i.e., a sequence that converges weakly to 0. For every weakly null sequence in a Banach space, there exists a sequence of convex combinations of vectors from the given sequence that is norm-converging to 0. The unit vector basis of ℓ^1 is not weakly Cauchy. Weakly Cauchy sequences in ℓ^1 are weakly convergent, since L^1-spaces are weakly sequentially complete. Actually, weakly convergent sequences in ℓ^1 are norm convergent. This means that ℓ^1 satisfies Schur's property. Weakly Cauchy sequences and the ℓ^1 basis are the opposite cases of the dichotomy established in the following deep result of H. P. Rosenthal: every bounded sequence in a Banach space has either a weakly Cauchy subsequence or a subsequence equivalent to the unit vector basis of ℓ^1. A complement to this result is due to Odell and Rosenthal (1975). By the Goldstine theorem, every element of the unit ball of X″ is the weak*-limit of a net in the unit ball of X.
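The weak nullity of an orthonormal sequence is easy to see concretely: in the Hilbert space ℓ^2, pairing the standard unit vectors e_n against a fixed square-summable y gives ⟨e_n, y⟩ = y_n → 0, even though ||e_n|| = 1 for every n, so no subsequence converges in norm to 0. A minimal sketch with truncated vectors in Python/NumPy (the choice y_n = 1/n is an assumption of the illustration):

```python
import numpy as np

N = 10_000
y = 1.0 / np.arange(1, N + 1)   # y_n = 1/n, a square-summable sequence

# <e_n, y> is just the n-th coordinate of y: it tends to 0,
# so (e_n) converges weakly to 0 although ||e_n|| = 1 for all n.
for n in (1, 10, 100, 1000):
    print(n, y[n - 1])          # the pairing shrinks like 1/n
```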
When X does not contain ℓ^1, every element of X″ is the weak*-limit of a sequence in the unit ball of X. When the Banach space X is separable, the unit ball of the dual X′, equipped with the weak*-topology, is a metrizable compact space K, and every element x″ in the bidual X″ defines a bounded function on K: x′ ↦ x″(x′), with |x″(x′)| ≤ ||x″||. This function is continuous for the compact topology of K if and only if x″ is actually in X, considered as a subset of X″. Assume in addition for the rest of the paragraph that X does not contain ℓ^1. By the preceding result of Odell and Rosenthal, the function x″ is the pointwise limit on K of a sequence of continuous functions on K; it is therefore a first Baire class function on K. The unit ball of the bidual is a pointwise compact subset of the first Baire class on K. When X is separable, the unit ball of the dual is weak*-compact by the Banach–Alaoglu theorem and metrizable for the weak* topology; hence, every bounded sequence in the dual has weakly* convergent subsequences. This applies to separable reflexive spaces, but more is true in this case, as stated below. The weak topology of a Banach space X is metrizable if and only if X is finite-dimensional. If the dual X′ is separable, the weak topology of the unit ball of X is metrizable. This applies in particular to separable reflexive Banach spaces. Although the weak topology of the unit ball is not metrizable in general, one can characterize weak compactness using sequences. A Banach space X is reflexive if and only if each bounded sequence in X has a weakly convergent subsequence. A weakly compact subset A in ℓ^1 is norm-compact. Indeed, every sequence in A has weakly convergent subsequences by the Eberlein–Šmulian theorem, and these are norm convergent by the Schur property of ℓ^1. A Schauder basis in a Banach space X is a sequence (e_n) of vectors in X with the property that for every vector x in X, there exist uniquely defined scalars (x_n) depending on x, such that x = Σ_n x_n e_n, i.e., x is the limit of the partial sums P_n(x) = Σ_{k ≤ n} x_k e_k. Banach spaces with a Schauder basis are necessarily separable, because the countable set of finite linear combinations with rational coefficients (say) is dense. It follows from the Banach–Steinhaus theorem that the linear mappings (P_n) are uniformly bounded by some constant C. Let (e*_n) denote the coordinate functionals which assign to every x in X the coordinate x_n of x in the above expansion. They are called biorthogonal functionals. When the basis vectors have norm 1, the coordinate functionals (e*_n) have norm at most 2C in the dual of X. Most classical separable spaces have explicit bases. The Haar system is a basis for L^p([0, 1]) when 1 ≤ p < ∞. The trigonometric system is a basis in L^p(T) when 1 < p < ∞. The Schauder system is a basis in the space C([0, 1]). The question of whether the disk algebra A(D) has a basis remained open for more than forty years, until Bočkarev showed in 1974 that A(D) admits a basis constructed from the Franklin system. Since every vector x in a Banach space X with a basis is the limit of P_n(x), with P_n of finite rank and uniformly bounded, the space X satisfies the bounded approximation property. The first example by Enflo of a space failing the approximation property was at the same time the first example of a separable Banach space without a Schauder basis. Robert C. James characterized reflexivity in Banach spaces with a basis: the space X with a Schauder basis is reflexive if and only if the basis is both shrinking and boundedly complete. In this case, the biorthogonal functionals form a basis of the dual of X. Let X and Y be two K-vector spaces. The tensor product X ⊗ Y of X and Y is a K-vector space Z with a bilinear mapping T : X × Y → Z which has the following universal property: every bilinear mapping from X × Y into a K-vector space W factors uniquely through T via a linear map from Z to W. The image under T of a couple (x, y) in X × Y is denoted by x ⊗ y, and called a simple tensor.
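The basis expansions just described can be mimicked in finite precision: truncating x = Σ x_n e_n gives the partial-sum projections P_n, and ||x − P_n(x)|| → 0 as n grows. A short sketch in the Hilbert space ℓ^2 with the unit vector basis, chosen because the coordinates are explicit (the particular coordinates are an assumption of the illustration):

```python
import numpy as np

N = 10_000
x = 1.0 / np.arange(1, N + 1)        # coordinates x_n = 1/n, an element of l^2

def tail_norm(x, n):
    """||x - P_n(x)||_2: the error of the n-term basis expansion."""
    return np.sqrt(np.sum(x[n:] ** 2))

for n in (10, 100, 1000):
    print(n, tail_norm(x, n))        # decreases toward 0 as n grows
```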
Every element of X ⊗ Y is a finite sum of such simple tensors. There are various norms that can be placed on the tensor product of the underlying vector spaces, amongst others the projective cross norm and the injective cross norm introduced by A. Grothendieck in 1955. In general, the tensor product of complete spaces is not complete again. When working with Banach spaces, it is customary to say that the projective tensor product of two Banach spaces X and Y is the completion X ⊗̂_π Y of the algebraic tensor product equipped with the projective tensor norm, and similarly for the injective tensor product X ⊗̂_ε Y. Grothendieck proved in particular that C(K) ⊗̂_ε Y is isometrically isomorphic to C(K, Y), and L^1([0, 1]) ⊗̂_π Y to L^1([0, 1], Y), where K is a compact Hausdorff space, C(K, Y) the Banach space of continuous functions from K to Y and L^1([0, 1], Y) the space of Bochner-measurable and integrable functions from [0, 1] to Y, and where the isomorphisms are isometric. The two isomorphisms above are the respective extensions of the map sending the tensor f ⊗ y to the vector-valued function s ↦ f(s)y. Let X be a Banach space. The tensor product X′ ⊗̂_ε X is identified isometrically with the closure in B(X) of the set of finite rank operators. When X has the approximation property, this closure coincides with the space of compact operators on X. For every Banach space Y, there is a natural norm-one linear map Y ⊗̂_π X → Y ⊗̂_ε X obtained by extending the identity map of the algebraic tensor product. Grothendieck related the approximation problem to the question of whether this map is one-to-one when Y is the dual of X. Precisely, for every Banach space X, the map X′ ⊗̂_π X → X′ ⊗̂_ε X is one-to-one if and only if X has the approximation property. Grothendieck conjectured that X ⊗̂_π Y and X ⊗̂_ε Y must be different whenever X and Y are infinite-dimensional Banach spaces. This was disproved by Gilles Pisier in 1983. Pisier constructed an infinite-dimensional Banach space X such that X ⊗̂_π X and X ⊗̂_ε X are equal. Furthermore, just as in Enflo's example, this space X is a "hand-made" space that fails to have the approximation property. On the other hand, Szankowski proved that the classical space B(ℓ^2) does not have the approximation property. A necessary and sufficient condition for the norm of a Banach space X to be associated to an inner product is the parallelogram identity: ||x + y||² + ||x − y||² = 2(||x||² + ||y||²) for all x, y in X. It follows, for example, that the Lebesgue space L^p([0, 1]) is a Hilbert space only when p = 2. If this identity is satisfied, the associated inner product is given by the polarization identity. In the case of real scalars, this gives ⟨x, y⟩ = (||x + y||² − ||x − y||²)/4. For complex scalars, defining the inner product so as to be linear in x, antilinear in y, the polarization identity gives ⟨x, y⟩ = (||x + y||² − ||x − y||²)/4 + i(||x + iy||² − ||x − iy||²)/4. To see that the parallelogram law is sufficient, one observes in the real case that ⟨x, y⟩ is symmetric, and in the complex case, that it satisfies the Hermitian symmetry property and ⟨ix, y⟩ = i⟨x, y⟩. The parallelogram law implies that x ↦ ⟨x, y⟩ is additive in x. It follows that it is linear over the rationals, thus linear by continuity. Several characterizations of spaces isomorphic (rather than isometric) to Hilbert spaces are available. The parallelogram law can be extended to more than two vectors, and weakened by the introduction of a two-sided inequality with a constant c ≥ 1: Kwapień proved that if c⁻² Σ_{k=1}^n ||x_k||² ≤ Ave_± ||Σ_{k=1}^n ±x_k||² ≤ c² Σ_{k=1}^n ||x_k||² for every integer n and all families of vectors x_1, ..., x_n, then the Banach space X is isomorphic to a Hilbert space. Here, Ave_± denotes the average over the 2^n possible choices of signs ±1. In the same article, Kwapień proved that the validity of a Banach-valued Parseval's theorem for the Fourier transform characterizes Banach spaces isomorphic to Hilbert spaces.
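The parallelogram criterion is easy to test numerically: it holds identically for the Euclidean norm and generically fails for the ℓ^1 norm, one way to see that ℓ^1 is not a Hilbert space. A minimal NumPy check (random vectors, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(3)
x, y = rng.standard_normal(5), rng.standard_normal(5)

def gap(norm):
    """Parallelogram defect ||x+y||^2 + ||x-y||^2 - 2(||x||^2 + ||y||^2)
    for the given norm; it vanishes identically for a Hilbert norm."""
    return (norm(x + y) ** 2 + norm(x - y) ** 2
            - 2 * (norm(x) ** 2 + norm(y) ** 2))

print(gap(lambda v: np.linalg.norm(v, 2)))   # ~0: the law holds for the 2-norm
print(gap(lambda v: np.linalg.norm(v, 1)))   # generally nonzero for the 1-norm
```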
Lindenstrauss and Tzafriri proved that a Banach space in which every closed linear subspace is complemented (that is, is the range of a bounded linear projection) is isomorphic to a Hilbert space. The proof rests upon Dvoretzky's theorem about Euclidean sections of high-dimensional centrally symmetric convex bodies. In other words, Dvoretzky's theorem states that for every integer n, any finite-dimensional normed space, with dimension sufficiently large compared to n, contains subspaces nearly isometric to the n-dimensional Euclidean space. The next result gives the solution of the so-called "homogeneous space problem". An infinite-dimensional Banach space X is said to be homogeneous if it is isomorphic to all its infinite-dimensional closed subspaces. A Banach space isomorphic to ℓ^2 is homogeneous, and Banach asked for the converse; Gowers proved that a homogeneous Banach space is indeed isomorphic to ℓ^2. An infinite-dimensional Banach space is hereditarily indecomposable when no subspace of it can be isomorphic to the direct sum of two infinite-dimensional Banach spaces. The Gowers dichotomy theorem asserts that every infinite-dimensional Banach space X contains either a subspace Y with an unconditional basis, or a hereditarily indecomposable subspace Z; in particular, Z is not isomorphic to its closed hyperplanes. If X is homogeneous, it must therefore have an unconditional basis. It follows then from the partial solution obtained by Komorowski and Tomczak–Jaegermann, for spaces with an unconditional basis, that X is isomorphic to ℓ^2. If T : X → Y is an isometry from the Banach space X onto the Banach space Y (where both X and Y are vector spaces over R), then the Mazur–Ulam theorem states that T must be an affine transformation. In particular, if T(0_X) = 0_Y, that is, if T maps the zero of X to the zero of Y, then T must be linear. This result implies that the metric in Banach spaces, and more generally in normed spaces, completely captures their linear structure. Finite-dimensional Banach spaces are homeomorphic as topological spaces if and only if they have the same dimension as real vector spaces. The Anderson–Kadec theorem (1965–66) states that any two infinite-dimensional separable Banach spaces are homeomorphic as topological spaces. Kadec's theorem was extended by Torunczyk, who proved that any two Banach spaces are homeomorphic if and only if they have the same density character, the minimum cardinality of a dense subset. When two compact Hausdorff spaces K and L are homeomorphic, the Banach spaces C(K) and C(L) are isometric. Conversely, when K is not homeomorphic to L, the (multiplicative) Banach–Mazur distance between C(K) and C(L) must be greater than or equal to 2; see above the results by Amir and Cambern. Although uncountable compact metric spaces can have different homeomorphy types, one has the following result due to Milutin: for every uncountable compact metric space K, the space C(K) is isomorphic to C([0, 1]). The situation is different for countably infinite compact Hausdorff spaces. Every countably infinite compact K is homeomorphic to some closed interval of ordinal numbers ⟨1, α⟩ = { γ : 1 ≤ γ ≤ α }, equipped with the order topology, where α is a countably infinite ordinal. The Banach space C(K) is then isometric to C(⟨1, α⟩). When α and β are two countably infinite ordinals, and assuming α ≤ β, the spaces C(⟨1, α⟩) and C(⟨1, β⟩) are isomorphic if and only if β < α^ω. For example, the Banach spaces C(⟨1, ω⟩) and C(⟨1, ω^ω⟩) are non-isomorphic. Several concepts of a derivative may be defined on a Banach space. See the articles on the Fréchet derivative and the Gateaux derivative for details.
The Fréchet derivative allows for an extension of the concept of a directional derivative to Banach spaces. The Gateaux derivative allows for an extension of a directional derivative to locally convex topological vector spaces. Fréchet differentiability is a stronger condition than Gateaux differentiability. The quasi-derivative is another generalization of the directional derivative; quasi-differentiability is a stronger condition than Gateaux differentiability, but a weaker condition than Fréchet differentiability. Several important spaces in functional analysis, for instance the space of all infinitely differentiable functions R → R, or the space of all distributions on R, are complete but are not normed vector spaces and hence not Banach spaces. In Fréchet spaces one still has a complete metric, while LF-spaces are complete uniform vector spaces arising as limits of Fréchet spaces.
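The Gateaux derivative mentioned above is the directional limit of (f(x + th) − f(x))/t as t → 0, which suggests a simple finite-difference approximation. A hedged sketch in Python/NumPy (the step size and the test function are assumptions of the illustration, and a finite difference only approximates the limit):

```python
import numpy as np

def gateaux(f, x, h, t=1e-6):
    """Central finite-difference estimate of the Gateaux derivative of f
    at x in direction h, i.e. d/dt f(x + t*h) evaluated at t = 0."""
    return (f(x + t * h) - f(x - t * h)) / (2 * t)

f = lambda v: np.sum(v ** 2)     # f(v) = ||v||_2^2, Gateaux derivative 2<x, h>
x = np.array([1.0, 2.0])
h = np.array([0.0, 1.0])

print(gateaux(f, x, h))          # exact value is 2*<x, h> = 4
```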
https://en.wikipedia.org/wiki?curid=3989
Bram Stoker Abraham "Bram" Stoker (8 November 1847 – 20 April 1912) was an Irish author, best known today for his 1897 Gothic horror novel "Dracula". During his lifetime, he was better known as the personal assistant of actor Sir Henry Irving and business manager of the Lyceum Theatre, which Irving owned. Stoker was born on 8 November 1847 at 15 Marino Crescent, Clontarf, on the northside of Dublin, Ireland. His parents were Abraham Stoker (1799–1876) from Dublin and Charlotte Mathilda Blake Thornley (1818–1901), who was raised in County Sligo. Stoker was the third of seven children, the eldest of whom was Sir Thornley Stoker, 1st Baronet. Abraham and Charlotte were members of the Church of Ireland Parish of Clontarf and attended the parish church with their children, who were baptised there, and Abraham was a senior civil servant. Stoker was bedridden with an unknown illness until he started school at the age of seven, when he made a complete recovery. Of this time, Stoker wrote, "I was naturally thoughtful, and the leisure of long illness gave opportunity for many thoughts which were fruitful according to their kind in later years." He was educated in a private school run by the Rev. William Woods. After his recovery, he grew up without further serious illnesses, even excelling as an athlete (he was named University Athlete, participating in multiple sports) at Trinity College, Dublin, which he attended from 1864 to 1870. He graduated with a BA in 1870, and pursued his MA in 1875. Though he later in life recalled graduating "with honours in mathematics," this appears to have been a mistake. He was auditor of the College Historical Society ("the Hist") and president of the University Philosophical Society, where his first paper was on "Sensationalism in Fiction and Society". Stoker became interested in the theatre while a student, through his friend Dr. Maunsell. While working for the Irish Civil Service, he became the theatre critic for the "Dublin Evening Mail", which was co-owned by Sheridan Le Fanu, an author of Gothic tales. Theatre critics were held in low esteem, but he attracted notice by the quality of his reviews. In December 1876, he gave a favourable review of Henry Irving's "Hamlet" at the Theatre Royal in Dublin. Irving invited Stoker for dinner at the Shelbourne Hotel where he was staying, and they became friends. Stoker also wrote stories, and "The Crystal Cup" was published by the London Society in 1872, followed by "The Chain of Destiny" in four parts in "The Shamrock". In 1876, while a civil servant in Dublin, Stoker wrote the non-fiction book "The Duties of Clerks of Petty Sessions in Ireland" (published 1879), which remained a standard work. Furthermore, he possessed an interest in art, and was a founder of the Dublin Sketching Club in 1879. In 1878 Stoker married Florence Balcombe, daughter of Lieutenant-Colonel James Balcombe of 1 Marino Crescent. She was a celebrated beauty whose former suitor had been Oscar Wilde. Stoker had known Wilde from his student days, having proposed him for membership of the university's Philosophical Society while he was president. Wilde was upset at Florence's decision, but Stoker later resumed the acquaintanceship, and after Wilde's fall visited him on the Continent. The Stokers moved to London, where Stoker became acting manager and then business manager of Irving's Lyceum Theatre, London, a post he held for 27 years. On 31 December 1879, Bram and Florence's only child was born, a son whom they christened Irving Noel Thornley Stoker.
The collaboration with Henry Irving was important for Stoker, and through him he became involved in London's high society, where he met James Abbott McNeill Whistler and Sir Arthur Conan Doyle (to whom he was distantly related). Working for Irving, the most famous actor of his time, and managing one of the most successful theatres in London made Stoker a notable if busy man. He was dedicated to Irving, and his memoirs show he idolised him. In London, Stoker also met Hall Caine, who became one of his closest friends; he dedicated "Dracula" to him. In the course of Irving's tours, Stoker travelled the world, although he never visited Eastern Europe, a setting for his most famous novel. Stoker enjoyed the United States, where Irving was popular. With Irving he was invited twice to the White House, and knew William McKinley and Theodore Roosevelt. Stoker set two of his novels in America, and used Americans as characters, the most notable being Quincey Morris. He also met one of his literary idols, Walt Whitman. Stoker was a regular visitor to Cruden Bay in Scotland between 1893 and 1910. His month-long holidays to the Aberdeenshire coastal village provided a large portion of the time available for writing his books. Two novels were set in Cruden Bay: "The Watter's Mou'" (1895) and "The Mystery of the Sea" (1902). He started writing "Dracula" here in 1895 while in residence at the Kilmarnock Arms Hotel. The guest book with his signatures from 1894 and 1895 still survives. The nearby Slains Castle (also known as New Slains Castle) is linked with Bram Stoker and plausibly provided the visual palette for the descriptions of Castle Dracula during the writing phase. A distinctive room in Slains Castle, the octagonal hall, matches the description of the octagonal room in Castle Dracula. Stoker visited the English coastal town of Whitby in 1890, and that visit was said to be part of the inspiration for "Dracula". He began writing novels while working as manager for Henry Irving and secretary and director of London's Lyceum Theatre, beginning with "The Snake's Pass" in 1890 and "Dracula" in 1897. During this period, Stoker was part of the literary staff of "The Daily Telegraph" in London, and he wrote other fiction, including the horror novels "The Lady of the Shroud" (1909) and "The Lair of the White Worm" (1911). He published his "Personal Reminiscences of Henry Irving" in 1906, after Irving's death, which proved successful, and managed productions at the Prince of Wales Theatre. Before writing "Dracula", Stoker met Ármin Vámbéry, a Hungarian-Jewish writer and traveller (born in Szent-György, Kingdom of Hungary, now Svätý Jur, Slovakia). Dracula likely emerged from Vámbéry's dark stories of the Carpathian mountains. Stoker then spent several years researching Central and East European folklore and mythological stories of vampires. The 1972 book "In Search of Dracula" by Radu Florescu and Raymond McNally claimed that the Count in Stoker's novel was based on Vlad III Dracula. At most, however, Stoker borrowed only the name and "scraps of miscellaneous information" about Romanian history, according to one expert, Elizabeth Miller; further, there are no comments about Vlad III in the author's working notes. "Dracula" is an epistolary novel, written as a collection of realistic but completely fictional diary entries, telegrams, letters, ship's logs, and newspaper clippings, all of which added a level of detailed realism to the story, a skill which Stoker had developed as a newspaper writer.
At the time of its publication, "Dracula" was considered a "straightforward horror novel" based on imaginary creations of supernatural life. "It gave form to a universal fantasy ... and became a part of popular culture." Stoker was a deeply private man, but his almost sexless marriage, his intense adoration of Walt Whitman, Henry Irving and Hall Caine, his shared interests with Oscar Wilde, and the homoerotic aspects of "Dracula" have led to scholarly speculation that he was a repressed homosexual who used his fiction as an outlet for his sexual frustrations. In 1912 he demanded the imprisonment of all homosexual authors in Britain; it has been suggested that this stemmed from self-loathing and a wish to disguise his own vulnerability. Possibly fearful of, and inspired by, the monstrous image and threat of otherness generated by the press coverage of his friend Wilde's trials, Stoker began writing "Dracula" only weeks after Wilde's conviction. According to the "Encyclopedia of World Biography", Stoker's stories are today included in the categories of "horror fiction", "romanticized Gothic" stories, and "melodrama". They are classified alongside other "works of popular fiction" such as Mary Shelley's "Frankenstein", which also used the "myth-making" and story-telling method of having multiple narrators telling the same tale from different perspectives. According to historian Jules Zanger, this leads the reader to the assumption that "they can't all be lying". The original 541-page typescript of "Dracula" was believed to have been lost until it was found in a barn in northwestern Pennsylvania in the early 1980s. It consisted of typed sheets with many emendations, and handwritten on the title page was "THE UN-DEAD". The author's name was shown at the bottom as Bram Stoker. Author Robert Latham remarked that it was "the most famous horror novel ever published, its title changed at the last minute." The typescript was purchased by Microsoft co-founder Paul Allen. Stoker's inspirations for the story, in addition to Whitby, may have included a visit to Slains Castle in Aberdeenshire, a visit to the crypts of St. Michan's Church in Dublin, and the novella "Carmilla" by Sheridan Le Fanu. Stoker's original research notes for the novel are kept by the Rosenbach Museum and Library in Philadelphia; a facsimile edition of the notes was created by Elizabeth Miller and Robert Eighteen-Bisang in 1998. Stoker was a member of The London Library, and it is there that he conducted much of the research for "Dracula". In 2018 the Library discovered some of the books that Stoker used for his research, complete with notes and marginalia. After suffering a number of strokes, Stoker died at No. 26 St George's Square, London on 20 April 1912. Some biographers attribute the cause of death to tertiary syphilis, others to overwork. He was cremated, and his ashes were placed in a display urn at Golders Green Crematorium in north London. The ashes of Irving Noel Stoker, the author's son, were added to his father's urn following his death in 1961. The original plan had been to keep his parents' ashes together, but after Florence Stoker's death her ashes were scattered at the Gardens of Rest. Stoker was raised a Protestant in the Church of Ireland. He was a strong supporter of the Liberal Party and took a keen interest in Irish affairs. As a "philosophical home ruler", he supported Home Rule for Ireland brought about by peaceful means.
He remained an ardent monarchist who believed that Ireland should remain within the British Empire, an entity that he saw as a force for good. He was an admirer of Prime Minister William Ewart Gladstone, whom he knew personally, and supported his plans for Ireland. Stoker believed in progress and took a keen interest in science and science-based medicine. Some Stoker novels, such as "The Lady of the Shroud" (1909), represent early examples of science fiction. He had a writer's interest in the occult, notably mesmerism, but despised fraud and believed in the superiority of the scientific method over superstition. Stoker counted among his friends J.W. Brodie-Innis, a member of the Hermetic Order of the Golden Dawn, and hired member Pamela Colman Smith as an artist for the Lyceum Theatre, but no evidence suggests that Stoker ever joined the Order himself. Although Irving was an active Freemason, no evidence has been found of Stoker taking part in Masonic activities in London; the Grand Lodge of Ireland also has no record of his membership. The short story collection "Dracula's Guest and Other Weird Stories" was published in 1914 by Stoker's widow, Florence Stoker, who was also his literary executrix. The first film adaptation of "Dracula" was F. W. Murnau's "Nosferatu", released in 1922, with Max Schreck starring as Count Orlok. Florence Stoker eventually sued the filmmakers, represented by the attorneys of the British Incorporated Society of Authors. Her chief legal complaint was that she had neither been asked for permission for the adaptation nor paid any royalty. The case dragged on for some years, with Mrs. Stoker demanding the destruction of the negative and all prints of the film. The suit was finally resolved in the widow's favour in July 1925. A single print of the film survived, however, and it has become well known. The first authorised film version of "Dracula" did not come about until almost a decade later, when Universal Studios released Tod Browning's "Dracula" starring Bela Lugosi. Canadian writer Dacre Stoker, a great-grandnephew of Bram Stoker, decided to write "a sequel that bore the Stoker name" to "reestablish creative control over" the original novel, with encouragement from screenwriter Ian Holt, because of the Stokers' frustrating history with "Dracula's" copyright. In 2009, "Dracula: The Un-Dead" was released, written by Dacre Stoker and Ian Holt. Both writers "based [their work] on Bram Stoker's own handwritten notes for characters and plot threads excised from the original edition", along with their own research for the sequel. This also marked Dacre Stoker's writing debut. In spring 2012, Dacre Stoker (in collaboration with Prof. Elizabeth Miller) presented the "lost" Dublin Journal written by Bram Stoker, which had been kept by his great-grandson Noel Dobbs. Stoker's diary entries shed light on the issues that concerned him before his London years. A remark about a boy who caught flies in a bottle may prefigure the later development of the Renfield character in "Dracula". On 8 November 2012, Stoker was honoured with a Google Doodle on Google's homepage commemorating the 165th anniversary of his birth. An annual festival takes place in Dublin, the birthplace of Bram Stoker, in honour of his literary achievements. The 2014 Bram Stoker Festival encompassed literary, film, family, street, and outdoor events, and ran from 24–27 October in Dublin. The festival is supported by the Bram Stoker Estate and funded by Dublin City Council and Fáilte Ireland.
https://en.wikipedia.org/wiki?curid=3992
Contract bridge Contract bridge, or simply bridge, is a trick-taking card game using a standard 52-card deck. In its basic format, it is played by four players in two competing partnerships, with partners sitting opposite each other around a table. Millions of people play bridge worldwide in clubs, tournaments, online and with friends at home, making it one of the world's most popular card games, particularly among seniors. The World Bridge Federation (WBF) is the governing body for international competitive bridge, with numerous other bodies governing bridge at the regional level. The game consists of several deals, each progressing through four phases. The cards are dealt to the players, and then the players "call" (or "bid") in an auction seeking to take the contract, specifying how many tricks the partnership receiving the contract (the declaring side) needs to take to receive points for the deal. During the auction, partners endeavor to exchange information about their hands, including overall strength and distribution of the suits. The cards are then played, the declaring side trying to fulfill the contract, and the defenders trying to stop the declaring side from achieving its goal. The deal is scored based on the number of tricks taken, the contract, and various other factors which depend to some extent on the variation of the game being played. Rubber bridge is the most popular variation for casual play, but most club and tournament play involves some variant of duplicate bridge, in which the cards are not re-dealt on each occasion, but the same deal is played by two or more sets of players (or "tables") to enable comparative scoring. Bridge is a member of the family of trick-taking games and is a development of whist, which had become the dominant such game and enjoyed a loyal following for centuries. The idea of a trick-taking 52-card game has its first documented origins in Italy and France. The French physician and author Rabelais (1493–1553) mentions a game called "La Triomphe" in one of his works. In 1526 the Italian Francesco Berni wrote the oldest known (as of 1960) textbook on a game very similar to whist, known as "Triomfi". Also, a Spanish textbook in Latin from the first half of the 16th century, "Triumphens Historicus", deals with the same subject. Bridge departed from whist with the creation of "Biritch" in the 19th century, and evolved through the late 19th and early 20th centuries to form the present game. The first rule book for bridge, dated 1886, is "Biritch, or Russian Whist", written by John Collinson, an English financier working in Ottoman Istanbul. It and his subsequent letter to "The Saturday Review" dated May 28, 1906, document the origin of "Biritch" as being the Russian community in Istanbul. The word "biritch" is thought to be a transliteration of the Russian word Бирюч (бирчий, бирич), an occupation of a diplomatic clerk or an announcer. Another theory is that British soldiers invented the game while serving in the Crimean War, and named it after the Galata Bridge, which they crossed on their way to a coffeehouse to play cards. Biritch had many significant bridge-like developments: the dealer chose the trump suit, or nominated his partner to do so; there was a call of no trumps ("biritch"); dealer's partner's hand became dummy; points were scored above and below the line; game was 3NT, 4♥ and 5♦ (although 8 club odd tricks and 15 spade odd tricks would have been needed for game in those low-valued suits); the score could be doubled and redoubled; and there were slam bonuses. It has some features in common with Solo Whist.
This game, and variants of it known as "bridge" and "bridge whist", became popular in the United States and the United Kingdom in the 1890s despite the long-established dominance of whist. Its breakthrough was its acceptance in 1894 at London's Portland Club, at the instigation of Lord Brougham. In 1904 auction bridge was developed, in which the players bid in a competitive auction to decide the contract and declarer. The object became to make at least as many tricks as were contracted for, and penalties were introduced for failing to do so. In auction bridge, bidding beyond what is needed to win the auction is pointless: if a partnership takes all 13 tricks, the score is the same whether the final bid was at the one level or the seven level, as no bonus for game, small slam or grand slam exists. The modern game of contract bridge was the result of innovations to the scoring of auction bridge by Harold Stirling Vanderbilt and others. The most significant change was that only the tricks contracted for were scored below the line toward game or a slam bonus, a change that resulted in bidding becoming much more challenging and interesting. Also new was the concept of "vulnerability", which made sacrifices to protect the lead in a rubber more expensive. The various scores were adjusted to produce a more balanced and interesting game. Vanderbilt set out his rules in 1925, and within a few years contract bridge had so supplanted other forms of the game that "bridge" became synonymous with "contract bridge". In the US and many other countries, most of the bridge played today is duplicate bridge, which is played at clubs, in tournaments and online. The number of people playing contract bridge has declined since its peak in the 1940s, when a survey found it was played in 44% of US households. The game is still widely played, especially amongst retirees, and in 2005 the ACBL estimated there were 25 million players in the US. Bridge is a four-player partnership trick-taking game with thirteen tricks per deal. The dominant variations of the game are rubber bridge, more common in social play, and duplicate bridge, which enables comparative scoring in tournament play. Each player is dealt thirteen cards from a standard 52-card deck. A trick starts when a player leads, i.e. plays the first card. The leader to the first trick is determined by the auction; the leader to each subsequent trick is the player who won the preceding trick. Each player, in clockwise order, plays one card on the trick. Players must play a card of the same suit as the original card led, unless they have none (they are said to be "void"), in which case they may play any card. The player who played the highest-ranked card wins the trick. Within a suit, the ace is ranked highest, followed by the king, queen and jack, and then the ten through to the two. In a deal where the auction has determined that there is no trump suit, the trick must be won by a card of the suit led. However, in a deal where there is a trump suit, cards of that suit are superior in rank to cards of any other suit. If one or more players plays a trump to a trick when void in the suit led, the highest trump wins. For example, if the trump suit is spades and a player is void in the suit led and plays a spade card, he wins the trick if no other player plays a higher spade. If a trump suit is led, the usual rule for trick-taking applies. Unlike its predecessor whist, the goal of bridge is not simply to take the most tricks in a deal; instead, the goal is to successfully estimate how many tricks one's partnership can take.
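The trick-winning rule just described is mechanical enough to express directly in code. The following Python sketch is illustrative only (the card encoding and function name are inventions, not a standard bridge library API): the highest trump wins if any trump was played; otherwise the highest card of the suit led wins, and discards in other suits never win.

```python
# Minimal sketch of the trick-winning rule; the Card type and function
# name are illustrative inventions, not a standard bridge library API.
from typing import List, NamedTuple, Optional

RANK_ORDER = "23456789TJQKA"  # deuce lowest, ace highest

class Card(NamedTuple):
    rank: str  # one of RANK_ORDER
    suit: str  # 'S', 'H', 'D' or 'C'

def trick_winner(cards: List[Card], trump: Optional[str]) -> int:
    """Return the index (0-3, counted from the leader) of the winning card.

    cards[0] is the card led; trump is a suit letter or None (no trump).
    """
    led_suit = cards[0].suit

    def power(card: Card):
        if trump is not None and card.suit == trump:
            return (2, RANK_ORDER.index(card.rank))  # any trump beats plain cards
        if card.suit == led_suit:
            return (1, RANK_ORDER.index(card.rank))  # suit led wins if no trump appears
        return (0, RANK_ORDER.index(card.rank))      # discards in other suits never win

    return max(range(len(cards)), key=lambda i: power(cards[i]))

# Hearts are trump and a spade is led: the lone small trump wins the trick.
trick = [Card('K', 'S'), Card('3', 'S'), Card('2', 'H'), Card('A', 'S')]
assert trick_winner(trick, trump='H') == 2
# With no trump suit, the highest spade (the ace) would win instead.
assert trick_winner(trick, trump=None) == 3
```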
To illustrate this, the simpler partnership trick-taking game of Spades has a similar mechanism: the usual trick-taking rules apply with the trump suit being spades, but at the beginning of the game, players "bid", or estimate, how many tricks they can win, and the numbers of tricks bid by the two partners are added. If a partnership takes at least that many tricks, they receive points for the round; otherwise, they receive penalty points. Bridge extends the concept of bidding into an auction, where partnerships compete to take a contract, specifying how many tricks they will need to take in order to receive points, and also specifying the trump suit (or no trump, meaning that there will be no trump suit). Players take turns to call in clockwise order: each player in turn either passes; doubles, which increases the penalties for not making the contract specified by the opposing partnership's last bid, but also increases the reward for making it (or redoubles); or states a contract that their partnership will adopt, which must be higher than the previous highest bid (if any). Eventually, the player who bid the highest contract, ranked by the contract's level as well as by the trump suit or no trump, wins the contract for their partnership. The auction concludes when a bid is followed by three successive passes. Note that six tricks are added to contract values, so a six-level contract is actually a contract for twelve tricks. In practice, establishing a contract without enough information about the partner's hand is difficult, so there exist many bidding systems assigning meanings to bids, common ones including Standard American, Acol, and 2/1 game forcing. Contrast this with Spades, where players bid only on their own hands. After the contract is decided and the first lead is made, the declarer's partner (dummy) lays his cards face up on the table, and the declarer plays the dummy's cards as well as his own. The opposing partnership is called the defenders, and their goal is to stop the declarer from fulfilling his contract. Once all the cards have been played, the hand is scored: if the declaring side makes its contract, it receives points based on the level of the contract, with some trump suits being worth more points than others and no trump being the highest, as well as bonus points for games and slams. But if the declarer fails to fulfil the contract, the defenders receive points depending on the declaring side's undertricks (the number of tricks short of the contract) and whether the contract was doubled by the defenders. The four players sit in two partnerships, with each player sitting opposite his partner. A cardinal direction is assigned to each seat, so that one partnership sits in North and South, while the other sits in West and East. The cards may be freshly dealt or, in duplicate bridge games, pre-dealt. All that is needed in basic games are the cards and a method of keeping score, but there is often other equipment on the table, such as a board containing the cards to be played (in duplicate bridge), bidding boxes, or screens. In rubber bridge, each player draws a card at the start of the game: the two players who drew the highest cards are partners and play against the other two. The deck is shuffled and cut, usually by the player to the left of the dealer, before dealing. Players take turns to deal, in clockwise order. The dealer deals the cards clockwise, one card at a time.
In duplicate bridge, the cards are pre-dealt, either by hand or by a computerized dealing machine, in order to allow for comparative scoring. Once dealt, the cards are placed in a device called a "board", with slots designated for each player's cardinal-direction seating position. After a deal has been played, players return their cards to the appropriate slot in the board, ready to be played at the next table. The dealer opens the auction and can make the first call, and the auction proceeds clockwise. When it is their turn to call, a player may pass (but may enter the bidding later) or may bid a contract, specifying the level of the contract and either the trump suit or no trump (the denomination), provided that it is higher than the last bid by any player, including their partner. All bids promise to take a number of tricks in excess of six, so a bid must be between one (seven tricks) and seven (thirteen tricks). A bid is higher than another bid if either the level is greater (e.g., 2♣ over 1NT) or the denomination is higher, the ascending order being ♣ (clubs), ♦ (diamonds), ♥ (hearts), ♠ (spades) and NT (no trump). Calls may be made orally, with a bidding box, or digitally in online bridge. If the last bid was by the opposing partnership, one may also double the opponents' bid, increasing the penalties for undertricks but also increasing the reward for making the contract. Doubling does not carry over to future bids by the opponents unless those bids are doubled again. A player on the partnership being doubled may also redouble, which increases the penalties and rewards further. Players may not see their partner's hand during the auction, only their own. There exist many bidding conventions that assign agreed meanings to various calls to assist players in reaching an optimal contract (or to obstruct the opponents). The auction ends when, after a player bids, doubles, or redoubles, every other player has passed, in which case the action proceeds to the play; or when every player has passed and no bid has been made, in which case the round is considered to be "passed out" and not played. The player from the declaring side who first bid the denomination named in the final contract becomes declarer. The player to the left of the declarer leads to the first trick. Dummy then lays his or her cards face up on the table, organized in columns by suit. Play proceeds clockwise, with each player required to follow suit if possible. Tricks are won by the highest trump, or, if no trump was played, the highest card of the led suit. The player who won the previous trick leads to the next trick. The declarer has control of the dummy's cards and tells his partner which card to play at dummy's turn. There also exist conventions that communicate further information between defenders about their hands during the play. At any time, a player may claim, stating that their side will win a specific number of the remaining tricks. The claiming player lays his cards down on the table and explains the order in which he intends to play the remaining cards. The opponents can either accept the claim, in which case the round is scored accordingly, or dispute it. If the claim is disputed, play continues with the claiming player's cards face up in rubber games; in duplicate games, play ceases and the tournament director is called to adjudicate the hand. At the end of the hand, points are awarded to the declaring side if they make the contract, or else to the defenders.
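The bid-ranking rule lends itself to a small worked example. This Python sketch (the names are illustrative assumptions, not from any bridge software) encodes the ascending denomination order and the rule that every bid promises six tricks plus its level:

```python
from typing import Optional, Tuple

DENOMINATIONS = ['C', 'D', 'H', 'S', 'NT']  # clubs lowest, no trump highest

def bid_key(level: int, denom: str) -> Tuple[int, int]:
    """Rank a bid: a higher level wins; at equal level, a higher denomination wins."""
    assert 1 <= level <= 7 and denom in DENOMINATIONS
    return (level, DENOMINATIONS.index(denom))

def is_sufficient(new_bid: Tuple[int, str], last_bid: Optional[Tuple[int, str]]) -> bool:
    """A new bid is legal only if it outranks the last bid, if any."""
    return last_bid is None or bid_key(*new_bid) > bid_key(*last_bid)

def tricks_required(level: int) -> int:
    """Every bid promises six tricks plus the level bid."""
    return level + 6

assert is_sufficient((2, 'C'), (1, 'NT'))     # 2 clubs outranks 1NT: higher level
assert not is_sufficient((1, 'H'), (1, 'S'))  # 1 heart does not outrank 1 spade
assert tricks_required(7) == 13               # a seven-level bid is all thirteen tricks
```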
Partnerships can be vulnerable, increasing the rewards for making the contract, but also increasing the penalties for undertricks. In rubber bridge, a side that has won 100 contract points has won a game and is vulnerable for the remaining rounds; in duplicate bridge, vulnerability is predetermined by the number of each board. If the declaring side makes its contract, it receives points for odd tricks, the tricks bid and made in excess of six. In both rubber and duplicate bridge, the declaring side is awarded 20 points per odd trick for a contract in clubs or diamonds, and 30 points per odd trick for a contract in hearts or spades. For a contract in no trump, the declaring side is awarded 40 points for the first odd trick and 30 points for each remaining odd trick. Contract points are doubled or quadrupled if the contract is respectively doubled or redoubled. In rubber bridge, a partnership wins one game once it has accumulated 100 contract points; excess contract points do not carry over to the next game. A partnership that wins two games wins the rubber, receiving a bonus of 500 points if the opponents have won a game, and 700 points if they have not. Overtricks score the same number of points per odd trick, although their doubled and redoubled values differ. Bonuses vary between the two bridge variations both in score and in type (for example, rubber bridge awards a bonus for holding a certain combination of high cards), although some are common to both. A larger bonus is awarded if the declaring side makes a small slam or grand slam, a contract of 12 or 13 tricks respectively. If the declaring side is not vulnerable, a small slam earns 500 points and a grand slam 1,000 points; if the declaring side is vulnerable, a small slam earns 750 points and a grand slam 1,500. In rubber bridge, the rubber finishes when a partnership has won two games, but the partnership receiving the most overall points wins the rubber. Duplicate bridge is scored comparatively, meaning that the score for the hand is compared with those of other tables playing the same cards, and match points are awarded according to the comparative results: usually either "matchpoint scoring", where each partnership receives 2 points (or 1 point) for each pair that it beats and 1 point (or ½ point) for each tie, or IMP (international matchpoint) scoring, where the number of IMPs varies (but less than proportionately) with the points difference between the teams. Undertricks are scored in both variations according to a schedule that depends on vulnerability and on whether the contract was doubled or redoubled. The rules of the game are referred to as the "laws", as promulgated by various bridge organizations. The official rules of duplicate bridge are promulgated by the WBF as "The Laws of Duplicate Bridge 2017". The Laws Committee of the WBF, composed of world experts, updates the Laws every 10 years; it also issues a Laws Commentary advising on interpretations it has rendered. In addition to the basic rules of play, there are many additional rules covering playing conditions and the rectification of irregularities, which are primarily for use by tournament directors, who act as referees and have overall control of procedures during competitions. But various details of procedure are left to the discretion of the zonal bridge organisation for tournaments under their aegis, and some (for example, the choice of "movement") to the sponsoring organisation (for example, the club). Some zonal organisations of the WBF also publish editions of the Laws.
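The contract-point and bonus figures above can be collected into a small scoring sketch. This follows duplicate-style bonuses (game and partscore bonuses are added per board rather than accumulated toward a rubber), covers only undoubled contracts bid and made exactly, and is a simplification rather than a full implementation of the Laws:

```python
def contract_points(level: int, denom: str) -> int:
    """Points for odd tricks bid and made, per the schedule above."""
    if denom in ('C', 'D'):            # minor suits: 20 per odd trick
        return 20 * level
    if denom in ('H', 'S'):            # major suits: 30 per odd trick
        return 30 * level
    return 40 + 30 * (level - 1)       # no trump: 40 for the first, then 30

def duplicate_score(level: int, denom: str, vulnerable: bool) -> int:
    """Duplicate-style score for an undoubled contract, bid and made exactly."""
    pts = contract_points(level, denom)
    pts += (500 if vulnerable else 300) if pts >= 100 else 50  # game or partscore bonus
    if level == 6:                                             # small slam bonus
        pts += 750 if vulnerable else 500
    elif level == 7:                                           # grand slam bonus
        pts += 1500 if vulnerable else 1000
    return pts

assert duplicate_score(3, 'NT', False) == 400   # 100 + 300 game bonus
assert duplicate_score(4, 'H', False) == 420    # the +420 scored in the deal below
assert duplicate_score(6, 'NT', True) == 1440   # 190 + 500 game + 750 slam
```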
For example, the American Contract Bridge League (ACBL) publishes the "Laws of Duplicate Bridge" and additional documentation for club and tournament directors. There are no universally accepted rules for rubber bridge, but some zonal organisations have published their own. An example for those wishing to abide by a published standard is "The Laws of Rubber Bridge" as published by the American Contract Bridge League. The majority of its rules mirror those of duplicate bridge in the bidding and play, differing primarily in the procedures for dealing and scoring. In 2001, the WBF promulgated a set of Laws for online play. Bridge is a game of skill played with randomly dealt cards, which makes it also a game of chance, or, more exactly, a tactical game with inbuilt randomness, imperfect knowledge and restricted communication. The chance element is in the deal of the cards; in duplicate bridge some of the chance element is eliminated by comparing the results of multiple pairs in identical situations. This is achievable when there are eight or more players, sitting at two or more tables, and the deals from each table are preserved and passed to the next table, thereby "duplicating" them for the other table(s) of players. At the end of a session, the scores for each deal are compared, and the most points are awarded to the players doing the best with each particular deal. This measures relative skill (but still with an element of luck), because each pair or team is judged only on its ability to bid with, and play, the same cards as other players. Duplicate bridge is played in clubs and tournaments, which can gather as many as several hundred players. Duplicate bridge is a mind sport, and its popularity gradually became comparable to that of chess, with which it is often compared for its complexity and the mental skills required for high-level competition. Bridge and chess are the only "mind sports" recognized by the International Olympic Committee, although they were not found eligible for the main Olympic program. In October 2017, the British High Court ruled against the English Bridge Union, finding that bridge is not a sport under a definition of sport as involving physical activity, but it did not rule on the "broad, somewhat philosophical question" of whether or not bridge is a sport more generally. The basic premise of duplicate bridge had previously been used for whist matches as early as 1857. Initially, bridge was not thought to be suitable for duplicate competition; it was not until the 1920s that (auction) bridge tournaments became popular. In 1925, when contract bridge first evolved, bridge tournaments were becoming popular, but the rules were somewhat in flux, and several different organizing bodies were involved in tournament sponsorship: the American Bridge League (formerly the "American Auction Bridge League", which changed its name in 1929), the American Whist League, and the United States Bridge Association. In 1935, the first officially recognized world championship was held. By 1937, however, the American Contract Bridge League had come to power (a union of the ABL and the USBA), and it remains the sanctioning body for bridge tournaments in North America. In 1958, the World Bridge Federation (WBF) was founded to promote bridge worldwide, coordinate the periodic revision of the Laws (every ten years, next in 2027) and conduct world championships. In tournaments, "bidding boxes" are frequently used, as noted above. These avoid the possibility of players at other tables hearing any spoken bids.
The bidding cards are laid out in sequence as the auction progresses. Although it is not a formal rule, many clubs adopt a protocol that the bidding cards stay revealed until the first playing card is tabled, after which point the bidding cards are put away. In top national and international events, "bidding screens" are used. These are placed diagonally across the table, preventing partners from seeing each other during the game; often the screen is removed after the auction is complete. Much of the complexity in bridge arises from the difficulty of arriving at a good final contract in the auction (or of deciding to let the opponents declare the contract). This is a difficult problem: the two players in a partnership must try to communicate enough information about their hands to arrive at a makeable contract, but the information they can exchange is restricted – information may be passed only by the calls made and later by the cards played, not by other means; in addition, the agreed-upon meaning of each call and play must be available to the opponents. Since a partnership that has freedom to bid gradually at leisure can exchange more information, and since a partnership that can interfere with the opponents' bidding (for example, by raising the bidding level rapidly) can cause difficulties for its opponents, bidding systems are both informational and strategic. It is this mixture of information exchange and evaluation, deduction, and tactics that is at the heart of bidding in bridge. A number of basic rules of thumb in bridge bidding and play are summarized as bridge maxims. A "bidding system" is a set of partnership agreements on the meanings of bids. A partnership's bidding system is usually made up of a core system, modified and complemented by specific conventions (optional customizations incorporated into the main system for handling specific bidding situations) which are pre-chosen between the partners prior to play. The line between a well-known convention and a part of a system is not always clear-cut: some bidding systems include specified conventions by default. Bidding systems can be divided into mainly natural systems, such as Acol and Standard American, and mainly artificial systems, such as the Precision Club and Polish Club. Calls are usually considered to be either "natural" or "conventional" (artificial). A natural call carries a meaning that reflects the call itself: a natural bid intuitively shows hand or suit strength based on the level or suit of the bid, and a natural double expresses that the player believes that the opposing partnership will not make its contract. By contrast, a conventional (artificial) call offers and/or asks for information by means of pre-agreed coded interpretations, in which some calls convey very specific information or requests that are not part of the natural meaning of the call. Thus in response to 4NT, a "natural" bid of 5♦ would state a preference towards a diamond suit or a desire to play the contract in 5 diamonds, whereas if the partners have agreed to use the common Blackwood convention, a bid of 5♦ in the same situation would say nothing about the diamond suit, but would tell the partner that the hand in question contains exactly one ace.
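Blackwood's coded responses illustrate how far a conventional call's meaning can sit from its surface meaning. The mapping below is the standard Blackwood scheme (partnership agreements vary; Roman Key Card Blackwood, for example, uses a different table), shown as a trivial Python lookup:

```python
# Standard responses to a 4NT Blackwood inquiry: each bid names no suit
# preference at all, only an ace count agreed in advance by the partnership.
BLACKWOOD_RESPONSES = {
    '5C': '0 or 4 aces',
    '5D': '1 ace',
    '5H': '2 aces',
    '5S': '3 aces',
}
# The same 5D call made "naturally" would instead suggest playing in diamonds;
# identical calls carry entirely different meanings under different agreements.
print(BLACKWOOD_RESPONSES['5D'])   # -> 1 ace
```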
Conventions are valuable in bridge because of the need to pass information beyond a simple like or dislike of a particular suit, and because the limited bidding space can be used more efficiently by adopting a conventional (artificial) meaning for a given call where a natural meaning would have less utility, either because the information it would convey is not valuable or because the desire to convey that information would arise only rarely. The conventional meaning thus conveys more useful (or more frequently useful) information. There are a very large number of conventions from which players can choose, and many books have been written detailing bidding conventions. Well-known conventions include Stayman (asking the opening 1NT bidder to show any four-card major suit), Jacoby transfers (a request, usually by the weak hand, for the partner to bid a particular suit first, and therefore to become the declarer), and the Blackwood convention (asking for information on the number of aces and kings held, used in slam bidding situations). The term "preempt" refers to a high-level tactical bid made on a weak hand, relying upon a very long suit rather than high cards for tricks. Preemptive bids serve a double purpose: they allow players to indicate that they are bidding on the basis of a long suit in an otherwise weak hand, which is important information to share, and they consume substantial bidding space, which prevents a possibly strong opposing pair from exchanging information about their cards. Several systems include the use of opening bids or other early bids with weak hands including long (usually six- to eight-card) suits at the 2, 3 or even 4 or 5 levels as preempts. As a rule, a natural suit bid indicates a holding of at least four (or more, depending on the situation and the system) cards in that suit as an opening bid, or a lesser number when supporting partner; a natural NT bid indicates a balanced hand. Most systems use a count of high card points as the basic evaluation of the strength of a hand, refining this by reference to shape and distribution where appropriate. In the most commonly used point-count system, aces are counted as 4 points, kings as 3, queens as 2, and jacks as 1 point; the deck therefore contains 40 points. In addition, the "distribution" of the cards in a hand into suits may also contribute to the strength of a hand and be counted as distribution points. A better-than-average hand, containing 12 or 13 points, is usually considered sufficient to "open" the bidding, i.e., to make the first bid in the auction. A combination of two such hands (i.e., 25 or 26 points shared between the partners) is often sufficient for a partnership to bid, and generally to make, game in a major suit or no trump (more points are usually needed for a minor-suit game, as the level is higher). In natural systems, a 1NT opening bid usually reflects a hand that has a relatively balanced shape (usually between two and four, or less often five, cards in each suit) and a sharply limited number of high card points, usually somewhere between 12 and 18; the most common ranges span exactly three points (for example, 12–14, 15–17 or 16–18), but some systems use a four-point range, usually 15–18. Opening bids of three or higher are preemptive bids, i.e., bids made with weak hands that especially favor a particular suit, opened at a high level in order to define the hand's value quickly and to frustrate the opposition.
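The Milton Work count described above is easy to make concrete. A minimal Python sketch follows; the hand encoding and the "balanced" test are illustrative simplifications (real systems define balance by exact suit patterns), and 15–17 is just one common agreed 1NT range:

```python
HCP = {'A': 4, 'K': 3, 'Q': 2, 'J': 1}   # the 4-3-2-1 point count

def high_card_points(hand):
    """hand is a list of 13 cards like 'AS' (ace of spades); only honors score."""
    return sum(HCP.get(card[0], 0) for card in hand)

def is_balanced(hand):
    """Crude balance test: no suit shorter than two or longer than five cards."""
    lengths = [sum(card[1] == suit for card in hand) for suit in 'SHDC']
    return min(lengths) >= 2 and max(lengths) <= 5

def opens_1nt(hand, lo=15, hi=17):
    """A balanced hand in the agreed point range qualifies for a 1NT opening."""
    return is_balanced(hand) and lo <= high_card_points(hand) <= hi

hand = ['AS', 'KS', '3S', 'AH', 'QH', '7H', '2H', 'KD', '8D', '4D', '9C', '5C', '2C']
assert high_card_points(hand) == 16 and opens_1nt(hand)
```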
For example, a weak hand containing a long, robust spade suit and little strength outside it would be a candidate for an opening bid of 3♠, designed to make it difficult for the opposing pair to bid and find their optimum contract even if they hold the bulk of the points: such a hand is nearly valueless unless spades are trumps, its spades are good enough that the penalty for being set should not exceed the value of an opponents' game, and its weakness in high cards makes it likely that the opponents have enough strength to make a game themselves. Openings at the 2 level are either unusually strong (2NT, natural, and 2♣, artificial) or preemptive, depending on the system. Unusually strong bids communicate an especially high number of points (normally 20 or more) or a high trick-taking potential (normally 8 tricks or more). The use of a 2-level opening as the strongest bid (by high card points, or by high card points plus distribution points) has also become more common, perhaps especially at websites that offer duplicate bridge. In one such scheme, one 2-level opening is used either for hands with a good six-card or longer suit (at most one losing card) and from 18 HCP up to 23 total points, or for a hand like a 2NT opening but with 22–23 HCP, while another 2-level opening takes care of all hands with 24 or more points (HCP, or with distribution points included), with the only exception of the "Gambling 3NT". Opening bids at the one level are made with hands containing 12–13 points or more that are not suitable for one of the preceding bids. Using Standard American with 5-card majors, opening one heart or one spade usually promises a 5-card suit. Partnerships who agree to play 5-card majors open a minor suit with 4-card majors and then bid their major suit at the next opportunity. This means that an opening bid of 1♣ or 1♦ will sometimes be made with only 3 cards in that suit. Doubles are sometimes given conventional meanings in otherwise mostly natural systems. A natural, or "penalty", double is one used to try to gain extra points when the defenders are confident of setting (defeating) the contract. The most common example of a conventional double is the takeout double of a low-level suit bid, implying support for the unbid suits, or the unbid major suits, and asking partner to choose one of them. Bidding systems depart from these basic ideas in varying degrees. Standard American, for instance, is a collection of conventions designed to bolster the accuracy and power of these basic ideas, while Precision Club is a system that uses the 1♣ opening bid for all or almost all strong hands (but sets the threshold for "strong" rather lower than most other systems – usually 16 high card points) and may include other artificial calls to handle other situations (though it may contain natural calls as well). Many experts today use a system called 2/1 game forcing (enunciated as "two over one game forcing"), which amongst other features adds some complexity to the treatment of the one notrump response as used in Standard American. In the UK, Acol is the most common system; its main features are a weak one notrump opening with 12–14 high card points and several variations for 2-level openings. There are also a variety of advanced techniques used for hand evaluation. The most basic is the Milton Work point count (the 4-3-2-1 count detailed above), but this is sometimes modified in various ways, or either augmented or replaced by other approaches such as losing trick count, honor point count, law of total tricks, or Zar Points.
Common conventions and variations within natural systems are numerous, and within play it is also commonly agreed which systems of opening leads, signals and discards will be used. Every call (including "pass", also sometimes called "no bid") serves two purposes: it confirms or passes some information to the partner, and it also denies, by implication, any other kind of hand which would have tended to support an alternative call. For example, a bid of 2NT immediately after partner's 1NT not only shows a balanced hand of a certain point range, but would also almost always deny possession of a five-card major suit (otherwise the player would have bid it) or even a four-card major suit (in that case, the player would probably have used the Stayman convention). Likewise, in some partnerships the bid of 2♥ in the sequence 1NT–2♣–2♦–2♥ between partners (opponents passing throughout) explicitly shows five hearts but also confirms four cards in spades: the bidder must hold at least five hearts to make it worth looking for a heart fit after 2♦ denied a four-card major, and with at least five hearts, a Stayman bid must have been justified by holding exactly four spades, the other major (since Stayman, as used by this partnership, is not useful with anything except a four-card major suit). Thus an astute partner can read much more than the surface meaning into the bidding. Alternatively, many partnerships play this same bidding sequence as "Crawling Stayman", by which the responder shows a weak hand (fewer than eight high card points) with shortness in diamonds but at least four hearts and four spades; the opening bidder may correct to spades if that appears to be the better contract. The situations detailed here are extremely simple examples; many instances of advanced bidding involve specific agreements related to very specific situations and subtle inferences regarding entire sequences of calls. Terence Reese, a prolific author of bridge books, points out that there are only four ways of taking a trick by force, two of which are very easy; nearly all trick-taking techniques in bridge can be reduced to one of these four methods. The optimum play of the cards can require much thought and experience, and is the subject of whole books on bridge. In the following example deal, North is the dealer and starts the auction, which proceeds as described. As neither North nor East has sufficient strength to "open" the bidding, each passes, denying such strength. South, next in turn, opens with the bid of 1♥, which denotes a reasonable heart suit (at least 4 or 5 cards long, depending on the bidding system) and at least 12 high card points; on this hand, South has 14 high card points. West "overcalls" with 1♠, holding a long spade suit of reasonable quality and 10 high card points (an overcall can be made on a hand that is not quite strong enough for an opening bid). North "supports" partner's suit with 2♥, showing heart support and modest values. East supports spades with 2♠. South inserts a "game try" of 3♣, "inviting" the partner to bid the "game" of 4♥ with good club support and overall values. North complies, as North is at the higher end of the range for the 2♥ bid, holds a fourth trump (the 2♥ bid promised only three), and holds the "doubleton" queen of clubs to fit with partner's strength there. (North could instead have bid 3♥, indicating not enough strength for game and asking South to pass, so playing 3♥.)
In the auction, north–south are trying to investigate whether their cards are sufficient to make a game (nine tricks at no trump, ten tricks in hearts or spades, eleven tricks in clubs or diamonds), which yields bonus points if bid and made. East–west are "competing" in spades, hoping to play a contract in spades at a low level. 4♥ is the final contract, ten tricks being required for north–south to make it, with hearts as trump. South is the "declarer", having been first to bid hearts, and the player to South's left, West, has to choose the first card in the play, known as the "opening lead". West chooses the spade king, because spades is the suit the partnership has shown strength in, and because the partners have agreed that when they hold two "touching honors" (or "adjacent honors") they will play the higher one first. West plays the card face down, to give their partner and the declarer (but not dummy) a chance to ask any last questions about the bidding, or to object if they believe West is not the correct hand to lead. After that, North's cards are laid on the table and North becomes "dummy", as both the North and South hands will be controlled by the declarer. West turns the lead card face up, and the declarer studies the two hands to make a plan for the play. On this hand, the trump ace, a spade, and a diamond trick must be lost, so declarer must not lose a trick in clubs. If the ♣K is held by West, South will find it very hard to prevent it from making a trick (unless West leads a club). However, there is an almost-equal chance that it is held by East, in which case it can be "trapped" against the ace and will be beaten, using a tactic known as a "finesse". After considering the cards, the declarer directs dummy (North) to play a small spade. East plays "low" (a small card) and South takes the ♠A, gaining the "lead". (South may also elect to "duck", but for the purpose of this example, let us assume South wins the ♠A at trick 1.) South proceeds by "drawing trump", leading the ♥K. West decides there is no benefit to holding back, and so wins the trick with the ♥A and then cashes the ♠Q. For fear of conceding a "ruff and discard", West then exits with a low diamond instead of another spade. Declarer plays low from the table, and East scores the ♦Q. Not having anything better to do, East returns the remaining trump, taken in South's hand. The trumps now accounted for, South can execute the finesse, perhaps trapping the king as planned. South "enters" the dummy (i.e. wins a trick in the dummy's hand) by leading a low diamond to dummy's ♦A, and leads the ♣Q from dummy to the next trick. East "covers" the queen with the king, and South takes the trick with the ace, and proceeds by "cashing" the remaining "master", the ♣J. (If East does not play the king, then South plays a low club from hand and the queen wins anyway, this being the essence of the finesse.) The game is now safe: South "ruffs" a small club with one of dummy's trumps, then ruffs a diamond in hand for an "entry" back, and ruffs the last club in dummy (a sequence sometimes described as a "crossruff"). Finally, South "claims" the remaining tricks by showing his or her hand, as it now contains only high trumps and there is no need to play the hand out to prove they are all winners. (The trick-by-trick description used above can also be expressed in tabular form, but a textual explanation is usually preferred in practice, for the reader's convenience. Plays of small cards or "discards" are often omitted from such a description, unless they were important for the outcome.)
North–South score the required ten tricks, and their opponents take the remaining three. The contract is fulfilled, and North enters the pair numbers, the contract, and the score of +420 for the winning side (North is in charge of bookkeeping in duplicate tournaments) on the traveling sheet. North asks East to check the score entered on the traveller. All players return their own cards to the board, and the next deal is played. On the prior hand, it is quite possible that the ♣K is held by West instead, for example if the ♣K and the ♥A were swapped between the defending hands. Then the 4♥ contract would fail by one trick (unless West had led a club early in the play). However, the failure of the contract would not mean that 4♥ was a bad contract on this hand: the contract depends on the club finesse working, or on a misdefense. The bonus points awarded for making a game contract far outweigh the penalty for going one off, so it is the best strategy in the long run to bid game contracts such as this one. Similarly, there is a minuscule chance that the ♣K is in the west hand but that West has no other clubs; in that case, declarer could succeed by simply cashing the ♣A, felling the ♣K and setting up the ♣Q as a winner. However, the chance of this is far lower than the simple chance of approximately 50% that East started with the ♣K. Therefore, the superior "percentage" play is to take the club finesse, as described above. After many years of little progress, computer bridge advanced greatly at the end of the 20th century. In 1996, the ACBL initiated an official world championship for computer bridge, to be held annually alongside a major bridge event. The first computer bridge championship took place in 1997 at the North American Bridge Championships in Albuquerque, New Mexico. Strong bridge-playing programs such as Jack (World Champion in 2001, 2002, 2003, 2004, 2006, 2009, 2010, 2012, 2013 and 2015), Wbridge5 (World Champion in 2005, 2007, 2008, 2016, 2017 and 2018), RoboBridge and many-time finalist Bridge Baron would probably rank among the top few thousand human pairs worldwide. A series of articles published in 2005 and 2006 in the Dutch bridge magazine IMP describes matches between Jack and seven top Dutch pairs. A total of 196 boards were played. Overall, the program Jack lost, but by a small margin (359 versus 385 IMPs). There are several free and subscription-based services available for playing bridge on the internet. Some national contract bridge organizations now offer online bridge play to their members, including the English Bridge Union, the Dutch Bridge Federation and the Australian Bridge Federation. MSN and Yahoo! Games have several online rubber bridge rooms. In 2001, the WBF issued a special edition of the lawbook adapted for internet and other electronic forms of the game.
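The long-run argument for bidding thin games can be checked with simple expected-value arithmetic. The sketch below assumes standard duplicate-style non-vulnerable scores (420 for 4♥ made, −50 for one down, and 170 or 140 for stopping in 3♥ and taking ten or nine tricks) and a contract that hinges on a 50% finesse, as in the deal above:

```python
# Expected value of bidding a game that depends on a 50% finesse,
# versus stopping safely in a partscore (non-vulnerable duplicate scores).
p_make = 0.5
ev_game      = p_make * 420 + (1 - p_make) * (-50)   # 4H: make it, or go one down
ev_partscore = p_make * 170 + (1 - p_make) * 140     # 3H: ten or nine tricks

print(ev_game, ev_partscore)   # 185.0 vs 155.0 - bidding the game pays in the long run
```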
https://en.wikipedia.org/wiki?curid=3995
Boat A boat is a watercraft of a large range of types and sizes, but generally smaller than a ship, which is distinguished by its larger size, shape, cargo or passenger capacity, or its ability to carry boats. Small boats are typically found on inland waterways such as rivers and lakes, or in protected coastal areas. However, some boats, such as the whaleboat, were intended for use in an offshore environment. In modern naval terms, a boat is a vessel small enough to be carried aboard a ship. Anomalous definitions exist, as even very large lake freighters on the Great Lakes are called "boats". Boats vary in proportion and construction methods with their intended purpose, available materials, or local traditions. Canoes have been used since prehistoric times and remain in use throughout the world for transportation, fishing, and sport. Fishing boats vary widely in style, partly to match local conditions. Pleasure craft used in recreational boating include ski boats, pontoon boats, and sailboats. House boats may be used for vacationing or long-term residence. Lighters are used to convey cargo to and from large ships unable to get close to shore. Lifeboats have rescue and safety functions. Boats can be propelled by manpower (e.g. rowboats and paddle boats), wind (e.g. sailboats), and motor (including gasoline, diesel, and electric). Boats have served as transportation since the earliest times. Circumstantial evidence, such as the early settlement of Australia over 40,000 years ago, findings in Crete dated 130,000 years ago, and findings in Flores dated to 900,000 years ago, suggests that boats have been used since prehistoric times. The earliest boats are thought to have been dugouts, and the oldest boats found by archaeological excavation date from around 7,000–10,000 years ago. The oldest recovered boat in the world, the Pesse canoe, found in the Netherlands, is a dugout made from the hollowed tree trunk of a "Pinus sylvestris", constructed somewhere between 8200 and 7600 BC. This canoe is exhibited in the Drents Museum in Assen, Netherlands. Other very old dugout boats have also been recovered. Rafts have operated for at least 8,000 years. A 7,000-year-old seagoing reed boat has been found in Kuwait. Boats were used between 4000 and 3000 BC in Sumer, ancient Egypt and the Indian Ocean. Boats played an important role in the commerce between the Indus Valley Civilization and Mesopotamia. Evidence of varying models of boats has also been discovered at various Indus Valley archaeological sites. Uru craft originate in Beypore, a village in south Calicut, Kerala, in southwestern India. This type of mammoth wooden ship was constructed solely of teak, with a transport capacity of 400 tonnes. The ancient Arabs and Greeks used such boats as trading vessels. The historians Herodotus, Pliny the Elder and Strabo record the use of boats for commerce, travel, and military purposes. Boats can be categorized into three main types: unpowered or human-powered boats, sailing boats, and motorboats. The hull is the main, and in some cases only, structural component of a boat. It provides both capacity and buoyancy. The keel is a boat's "backbone", a lengthwise structural member to which the perpendicular frames are fixed. On most boats a deck covers the hull, in part or whole. While a ship often has several decks, a boat is unlikely to have more than one. Above the deck are often lifelines connected to stanchions, bulwarks perhaps topped by gunnels, or some combination of the two.
A cabin may protrude above the deck forward, aft, along the centerline, or covering much of the length of the boat. Vertical structures dividing the internal spaces are known as bulkheads. The forward end of a boat is called the bow, the aft end the stern. Facing forward, the right side is referred to as starboard and the left side as port. Until the mid-19th century most boats were made of natural materials, primarily wood, although reed, bark and animal skins were also used. Early boats include the bound-reed style of boat seen in Ancient Egypt, the birch-bark canoe, the animal-hide-covered kayak and coracle, and the dugout canoe made from a single log. By the mid-19th century, many boats had been built with iron or steel frames but were still planked in wood. In 1855 ferro-cement boat construction was patented by the French, who coined the name "ferciment". This is a system by which a steel or iron wire framework is built in the shape of a boat's hull and covered over with cement. Reinforced with bulkheads and other internal structure, it is strong but heavy, easily repaired, and, if sealed properly, will not leak or corrode. As the forests of Britain and Europe continued to be over-harvested to supply the keels of larger wooden boats, and as the Bessemer process (patented in 1855) cheapened the cost of steel, steel ships and boats began to be more common. By the 1930s, boats built entirely of steel, from frames to plating, were replacing wooden boats in many industrial uses and fishing fleets. Private recreational boats of steel remain uncommon. In 1895 WH Mullins produced boats of galvanized iron, and by 1930 had become the world's largest producer of pleasure boats. Mullins also offered boats in aluminum from 1895 through 1899, and once again in the 1920s, but it was not until the mid-20th century that aluminium gained widespread popularity. Though much more expensive than steel, there are aluminum alloys that do not corrode in salt water, allowing a similar load-carrying capacity to steel at much less weight. Around the mid-1960s, boats made of fiberglass (or "glassfibre") became popular, especially for recreational boats. Fiberglass is also known as "GRP" (glass-reinforced plastic) in the UK and "FRP" (fiber-reinforced plastic) in the US. Fiberglass boats are strong and do not rust, corrode, or rot. They are, however, susceptible to structural degradation from sunlight and extremes in temperature over their lifespan. Fiberglass structures can be made stiffer with sandwich panels, where the fiberglass encloses a lightweight core such as balsa or foam. Cold moulding is a modern construction method that uses wood as the structural component. In cold moulding, very thin strips of wood are layered over a form. Each layer is coated with resin, followed by another, directionally alternating, layer laid on top. Subsequent layers may be stapled or otherwise mechanically fastened to the previous, or weighted or vacuum-bagged to provide compression and stabilization until the resin sets. The most common means of boat propulsion, as noted above, are human power (rowing and paddling), wind (sails), and engines (gasoline, diesel or electric). A boat displaces its weight in water, regardless of whether it is made of wood, steel, fiberglass, or even concrete. If weight is added to the boat, the volume of the hull drawn below the waterline will increase to keep the balance above and below the surface equal. Boats have a natural or designed level of buoyancy.
Exceeding that level of buoyancy will cause the boat first to ride lower in the water, second to take on water more readily than when properly loaded, and ultimately, if overloaded by any combination of structure, cargo, and water, to sink. As commercial vessels must be correctly loaded to be safe, and as the sea becomes less buoyant in brackish areas such as the Baltic, the Plimsoll line was introduced to prevent overloading. Since 1998, all new leisure boats and barges built in Europe between 2.5 m and 24 m must comply with the EU's Recreational Craft Directive (RCD). The Directive establishes four design categories, from A ("ocean") through B ("offshore") and C ("inshore") to D ("sheltered waters"), which specify the allowable wind and wave conditions for vessels in each class.
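The displacement relation described above (a floating boat displaces exactly its own weight of water) gives a quick way to estimate how much lower a loaded boat rides. The sketch below idealizes the hull as wall-sided near the waterline, and the cargo mass and waterplane area are illustrative assumptions:

```python
# Archimedes sketch: added load translates into extra displaced volume,
# and hence a deeper draft. Hull is idealized as wall-sided at the waterline.
RHO_FRESH = 1000.0   # kg/m^3, fresh water
RHO_SEA   = 1025.0   # kg/m^3, typical seawater (brackish water, as in the Baltic, is lower)

def displaced_volume_m3(mass_kg: float, rho: float) -> float:
    """Volume of water whose weight equals the given mass."""
    return mass_kg / rho

def extra_draft_m(load_kg: float, waterplane_area_m2: float, rho: float) -> float:
    """Extra sinkage of a wall-sided hull when load is added."""
    return displaced_volume_m3(load_kg, rho) / waterplane_area_m2

# 400 kg of extra cargo on a small boat with 6 m^2 of waterplane area:
print(round(extra_draft_m(400, 6.0, RHO_FRESH), 3))  # ~0.067 m lower in fresh water
print(round(extra_draft_m(400, 6.0, RHO_SEA), 3))    # slightly less in denser seawater
```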
https://en.wikipedia.org/wiki?curid=3996
Blood Blood is a body fluid in humans and other animals that delivers necessary substances such as nutrients and oxygen to the cells and transports metabolic waste products away from those same cells. In vertebrates, it is composed of blood cells suspended in blood plasma. Plasma, which constitutes 55% of blood fluid, is mostly water (92% by volume), and contains proteins, glucose, mineral ions, hormones, carbon dioxide (plasma being the main medium for excretory product transportation), and the blood cells themselves. Albumin is the main protein in plasma, and it functions to regulate the colloidal osmotic pressure of blood. The blood cells are mainly red blood cells (also called RBCs or erythrocytes), white blood cells (also called WBCs or leukocytes) and platelets (also called thrombocytes). The most abundant cells in vertebrate blood are red blood cells. These contain hemoglobin, an iron-containing protein, which facilitates oxygen transport by reversibly binding to this respiratory gas and greatly increasing its solubility in blood. In contrast, carbon dioxide is mostly transported extracellularly as bicarbonate ion carried in plasma. Vertebrate blood is bright red when its hemoglobin is oxygenated and dark red when it is deoxygenated. Some animals, such as crustaceans and mollusks, use hemocyanin to carry oxygen instead of hemoglobin. Insects and some mollusks use a fluid called hemolymph instead of blood, the difference being that hemolymph is not contained in a closed circulatory system. In most insects, this "blood" does not contain oxygen-carrying molecules such as hemoglobin, because their bodies are small enough for their tracheal system to suffice for supplying oxygen. Jawed vertebrates have an adaptive immune system, based largely on white blood cells. White blood cells help to resist infections and parasites. Platelets are important in the clotting of blood. Arthropods, using hemolymph, have hemocytes as part of their immune system. Blood is circulated around the body through blood vessels by the pumping action of the heart. In animals with lungs, arterial blood carries oxygen from inhaled air to the tissues of the body, and venous blood carries carbon dioxide, a waste product of metabolism produced by cells, from the tissues to the lungs to be exhaled. Medical terms related to blood often begin with hemo- or hemato- (also spelled haemo- and haemato-), from the Greek word ("haima") for "blood". In terms of anatomy and histology, blood is considered a specialized form of connective tissue, given its origin in the bones and the presence of potential molecular fibers in the form of fibrinogen. Blood performs many important functions within the body, including the transport of oxygen, nutrients and wastes. Blood accounts for 7% of the human body weight, with an average density around 1060 kg/m3, very close to pure water's density of 1000 kg/m3. The average adult has a blood volume of roughly 5 litres, or 1.3 gallons, which is composed of plasma and several kinds of cells. These blood cells (which are also called corpuscles or "formed elements") consist of erythrocytes (red blood cells, RBCs), leukocytes (white blood cells), and thrombocytes (platelets). By volume, the red blood cells constitute about 45% of whole blood, the plasma about 54.3%, and white cells about 0.7%. Whole blood (plasma and cells) exhibits non-Newtonian fluid dynamics. One microliter of blood contains several million red blood cells, several thousand white blood cells, and a few hundred thousand platelets. About 55% of blood is blood plasma, a fluid that is the blood's liquid medium, which by itself is straw-yellow in color.
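The 7%-of-body-weight figure and the density quoted above are consistent with the "roughly 5 litres" volume, as a quick worked check shows (the 70 kg body mass is an assumed typical value, not from the text):

```python
# Worked check of the blood-volume figures above for an assumed 70 kg adult.
body_mass_kg   = 70.0
blood_mass_kg  = 0.07 * body_mass_kg        # blood is ~7% of body weight -> ~4.9 kg
blood_volume_l = blood_mass_kg / 1.060      # density ~1060 kg/m^3 = 1.060 kg/L

print(round(blood_volume_l, 1))                        # ~4.6 L, near 'roughly 5 litres'
print(round(blood_volume_l * 0.45, 1), "L red cells")  # ~45% of whole blood by volume
print(round(blood_volume_l * 0.55, 1), "L plasma")     # plasma is itself ~92% water
```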
The blood plasma volume totals 2.7–3.0 liters (2.8–3.2 quarts) in an average human. It is essentially an aqueous solution containing 92% water, 8% blood plasma proteins, and trace amounts of other materials. Plasma circulates dissolved nutrients, such as glucose, amino acids, and fatty acids (dissolved in the blood or bound to plasma proteins), and removes waste products, such as carbon dioxide, urea, and lactic acid. Other important components include clotting factors, immunoglobulins, lipoproteins, and various electrolytes. The term serum refers to plasma from which the clotting proteins have been removed. Most of the proteins remaining are albumin and immunoglobulins. Blood pH is regulated to stay within the narrow range of 7.35 to 7.45, making it slightly basic. Blood that has a pH below 7.35 is too acidic, whereas blood pH above 7.45 is too basic. Blood pH, partial pressure of oxygen (pO2), partial pressure of carbon dioxide (pCO2), and bicarbonate (HCO3−) are carefully regulated by a number of homeostatic mechanisms, which exert their influence principally through the respiratory system and the urinary system to control the acid-base balance and respiration. An arterial blood gas test measures these. Plasma also circulates hormones, transmitting their messages to various tissues. The list of normal reference ranges for various blood electrolytes is extensive. Human blood is typical of that of mammals, although the precise details concerning cell numbers, size, protein structure, and so on, vary somewhat between species. In non-mammalian vertebrates, however, there are some key differences; for example, the red blood cells of most non-mammalian vertebrates retain their nuclei. Blood is circulated around the body through blood vessels by the pumping action of the heart. In humans, blood is pumped from the strong left ventricle of the heart through arteries to peripheral tissues and returns to the right atrium of the heart through veins. It then enters the right ventricle and is pumped through the pulmonary artery to the lungs and returns to the left atrium through the pulmonary veins. Blood then enters the left ventricle to be circulated again. Arterial blood carries oxygen from inhaled air to all of the cells of the body, and venous blood carries carbon dioxide, a waste product of metabolism by cells, to the lungs to be exhaled. One exception is the pulmonary circulation: the pulmonary arteries contain the most deoxygenated blood in the body, while the pulmonary veins contain oxygenated blood. Additional return flow may be generated by the movement of skeletal muscles, which can compress veins and push blood through the valves in veins toward the right atrium. The blood circulation was famously described by William Harvey in 1628. In vertebrates, the various cells of blood are made in the bone marrow in a process called hematopoiesis, which includes erythropoiesis, the production of red blood cells; and myelopoiesis, the production of white blood cells and platelets. During childhood, almost every human bone produces red blood cells; as adults, red blood cell production is limited to the larger bones: the bodies of the vertebrae, the breastbone (sternum), the ribcage, the pelvic bones, and the bones of the upper arms and legs. In addition, during childhood, the thymus gland, found in the mediastinum, is an important source of T lymphocytes. The proteinaceous component of blood (including clotting proteins) is produced predominantly by the liver, while hormones are produced by the endocrine glands and the watery fraction is regulated by the hypothalamus and maintained by the kidney.
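The text does not give the quantitative relation behind this regulation, but a standard way to connect blood pH, pCO2, and bicarbonate is the Henderson–Hasselbalch equation for the bicarbonate buffer. As a sketch, plugging in typical arterial values (about 24 mmol/L HCO3− and 40 mm Hg pCO2, both assumed here rather than taken from the text):

$$ \text{pH} = 6.1 + \log_{10}\frac{[\mathrm{HCO_3^-}]}{0.03 \times p\mathrm{CO_2}} = 6.1 + \log_{10}\frac{24}{0.03 \times 40} = 6.1 + \log_{10}20 \approx 7.40, $$

which lands squarely in the 7.35–7.45 range quoted above, and shows why the respiratory system (which sets pCO2) and the kidneys (which set HCO3−) are the two main levers of acid-base control.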
Healthy erythrocytes have a lifespan of about 120 days before they are degraded by the spleen and the Kupffer cells in the liver. The liver also clears some proteins, lipids, and amino acids. The kidney actively secretes waste products into the urine. About 98.5% of the oxygen in a sample of arterial blood in a healthy human breathing air at sea-level pressure is chemically combined with the hemoglobin. About 1.5% is physically dissolved in the other blood liquids and not connected to hemoglobin. The hemoglobin molecule is the primary transporter of oxygen in mammals and many other species (for exceptions, see below). Hemoglobin has an oxygen binding capacity between 1.36 and 1.40 ml O2 per gram of hemoglobin, which increases the total blood oxygen capacity seventyfold compared to oxygen carried in physical solution alone, at a solubility of 0.03 ml O2 per liter of blood per mm Hg partial pressure of oxygen (about 100 mm Hg in arteries). With the exception of pulmonary and umbilical arteries and their corresponding veins, arteries carry oxygenated blood away from the heart and deliver it to the body via arterioles and capillaries, where the oxygen is consumed; afterwards, venules and veins carry deoxygenated blood back to the heart. Under normal conditions in adult humans at rest, hemoglobin in blood leaving the lungs is about 98–99% saturated with oxygen, achieving an oxygen delivery of between 950 and 1150 ml/min to the body. In a healthy adult at rest, oxygen consumption is approximately 200–250 ml/min, and deoxygenated blood returning to the lungs is still roughly 75% (70 to 78%) saturated. Increased oxygen consumption during sustained exercise reduces the oxygen saturation of venous blood, which can reach less than 15% in a trained athlete; although breathing rate and blood flow increase to compensate, oxygen saturation in arterial blood can drop to 95% or less under these conditions. Oxygen saturation this low is considered dangerous in an individual at rest (for instance, during surgery under anesthesia). Sustained hypoxia (oxygenation less than 90%) is dangerous to health, and severe hypoxia (saturations less than 30%) may be rapidly fatal. A fetus, receiving oxygen via the placenta, is exposed to much lower oxygen pressures (about 21% of the level found in an adult's lungs), so fetuses produce another form of hemoglobin with a much higher affinity for oxygen (hemoglobin F) to function under these conditions. CO2 is carried in blood in three different ways. (The exact percentages vary depending on whether it is arterial or venous blood.) Most of it (about 70%) is converted to bicarbonate ions by the enzyme carbonic anhydrase in the red blood cells by the reaction CO2 + H2O → H2CO3 → H+ + HCO3−; about 7% is dissolved in the plasma; and about 23% is bound to hemoglobin as carbamino compounds. Hemoglobin, the main oxygen-carrying molecule in red blood cells, carries both oxygen and carbon dioxide. However, the CO2 bound to hemoglobin does not bind to the same site as oxygen. Instead, it combines with the N-terminal groups on the four globin chains. However, because of allosteric effects on the hemoglobin molecule, the binding of CO2 decreases the amount of oxygen that is bound for a given partial pressure of oxygen. The decreased binding of carbon dioxide in the blood due to increased oxygen levels is known as the Haldane effect, and is important in the transport of carbon dioxide from the tissues to the lungs.
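The seventyfold figure can be checked with a short calculation. Assuming a typical hemoglobin concentration of about 150 g per liter of blood (a typical value, not stated in the text):

$$ 1.36\ \tfrac{\text{ml O}_2}{\text{g Hb}} \times 150\ \tfrac{\text{g Hb}}{\text{L}} \approx 204\ \tfrac{\text{ml O}_2}{\text{L}} \ \text{(hemoglobin-bound)}, \qquad 0.03\ \tfrac{\text{ml O}_2}{\text{L}\cdot\text{mm Hg}} \times 100\ \text{mm Hg} = 3\ \tfrac{\text{ml O}_2}{\text{L}} \ \text{(dissolved)}, $$

giving a ratio of about 204/3 ≈ 68, i.e. roughly seventyfold, as stated.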
A rise in the partial pressure of CO2 or a lower pH will cause offloading of oxygen from hemoglobin, which is known as the Bohr effect. Some oxyhemoglobin loses oxygen and becomes deoxyhemoglobin. Deoxyhemoglobin binds most of the hydrogen ions, as it has a much greater affinity for hydrogen ions than does oxyhemoglobin. In mammals, blood is in equilibrium with lymph, which is continuously formed in tissues from blood by capillary ultrafiltration. Lymph is collected by a system of small lymphatic vessels and directed to the thoracic duct, which drains into the left subclavian vein, where lymph rejoins the systemic blood circulation. Blood circulation transports heat throughout the body, and adjustments to this flow are an important part of thermoregulation. Increasing blood flow to the surface (e.g., during warm weather or strenuous exercise) causes warmer skin, resulting in faster heat loss. In contrast, when the external temperature is low, blood flow to the extremities and the surface of the skin is reduced to prevent heat loss, and blood is preferentially circulated to the vital organs of the body. The rate of blood flow varies greatly between different organs. The liver has the most abundant blood supply, with an approximate flow of 1350 ml/min. The kidneys and the brain are the second and third most supplied organs, with 1100 ml/min and ~700 ml/min, respectively. Relative rates of blood flow per 100 g of tissue are different, with the kidney, adrenal gland, and thyroid being the first, second, and third most supplied tissues, respectively. The restriction of blood flow can also be used in specialized tissues to cause engorgement, resulting in an erection of that tissue; examples are the erectile tissue in the penis and clitoris. Another example of a hydraulic function is the jumping spider, in which blood forced into the legs under pressure causes them to straighten for a powerful jump, without the need for bulky muscular legs. In insects, the blood (more properly called hemolymph) is not involved in the transport of oxygen. (Openings called tracheae allow oxygen from the air to diffuse directly to the tissues.) Insect blood moves nutrients to the tissues and removes waste products in an open system. Other invertebrates use respiratory proteins to increase the oxygen-carrying capacity. Hemoglobin is the most common respiratory protein found in nature. Hemocyanin (blue) contains copper and is found in crustaceans and mollusks. It is thought that tunicates (sea squirts) might use vanabins (proteins containing vanadium) as a respiratory pigment (bright-green, blue, or orange). In many invertebrates, these oxygen-carrying proteins are freely soluble in the blood; in vertebrates they are contained in specialized red blood cells, allowing for a higher concentration of respiratory pigments without increasing viscosity or damaging blood-filtering organs like the kidneys. Giant tube worms have unusual hemoglobins that allow them to live in extraordinary environments. These hemoglobins also carry sulfides that are normally fatal to other animals. The coloring matter of blood (hemochrome) is largely due to the protein in the blood responsible for oxygen transport. Different groups of organisms use different proteins. Hemoglobin is the principal determinant of the color of blood in vertebrates. Each molecule has four heme groups, and their interaction with various molecules alters the exact color.
In vertebrates and other hemoglobin-using creatures, arterial blood and capillary blood are bright red, as oxygen imparts a strong red color to the heme group. Deoxygenated blood is a darker shade of red; this is present in veins, and can be seen during blood donation and when venous blood samples are taken. This is because the spectrum of light absorbed by hemoglobin differs between the oxygenated and deoxygenated states. Blood in carbon monoxide poisoning is bright red, because carbon monoxide causes the formation of carboxyhemoglobin. In cyanide poisoning, the body cannot utilize oxygen, so the venous blood remains oxygenated, increasing the redness. There are some conditions affecting the heme groups present in hemoglobin that can make the skin appear blue – a symptom called cyanosis. If the heme is oxidized, methemoglobin, which is more brownish and cannot transport oxygen, is formed. In the rare condition sulfhemoglobinemia, arterial hemoglobin is partially oxygenated, and appears dark red with a bluish hue. Veins close to the surface of the skin appear blue for a variety of reasons. However, the factors that contribute to this alteration of color perception are related to the light-scattering properties of the skin and the processing of visual input by the visual cortex, rather than the actual color of the venous blood. Skinks in the genus "Prasinohaema" have green blood due to a buildup of the waste product biliverdin. The blood of most mollusks – including cephalopods and gastropods – as well as some arthropods, such as horseshoe crabs, is blue, as it contains the copper-containing protein hemocyanin at concentrations of about 50 grams per liter. Hemocyanin is colorless when deoxygenated and dark blue when oxygenated. The blood in the circulation of these creatures, which generally live in cold environments with low oxygen tensions, is grey-white to pale yellow, and it turns dark blue when exposed to the oxygen in the air, as seen when they bleed. This is due to the change in color of hemocyanin when it is oxygenated. Hemocyanin carries oxygen in extracellular fluid, which is in contrast to the intracellular oxygen transport in mammals by hemoglobin in RBCs. The blood of most annelid worms and some marine polychaetes uses chlorocruorin to transport oxygen. It is green in color in dilute solutions. Hemerythrin is used for oxygen transport in the marine invertebrates sipunculids, priapulids, brachiopods, and the annelid worm "Magelona". Hemerythrin is violet-pink when oxygenated. The blood of some species of ascidians and tunicates, also known as sea squirts, contains proteins called vanabins. These proteins are based on vanadium, and give the creatures a concentration of vanadium in their bodies 100 times higher than the surrounding sea water. Unlike hemocyanin and hemoglobin, hemovanadin is not an oxygen carrier. When exposed to oxygen, however, vanabins turn a mustard yellow. Substances other than oxygen can bind to hemoglobin; in some cases, this can cause irreversible damage to the body. Carbon monoxide, for example, is extremely dangerous when carried to the blood via the lungs by inhalation, because carbon monoxide binds almost irreversibly to hemoglobin to form carboxyhemoglobin, so that less hemoglobin is free to bind oxygen, and fewer oxygen molecules can be transported throughout the blood. This can insidiously cause suffocation. A fire burning in an enclosed room with poor ventilation presents a serious hazard, since it can create a build-up of carbon monoxide in the air.
Some carbon monoxide binds to hemoglobin when smoking tobacco. Blood for transfusion is obtained from human donors by blood donation and stored in a blood bank. There are many different blood types in humans, with the ABO blood group system and the Rhesus blood group system being the most important. Transfusion of blood of an incompatible blood group may cause severe, often fatal, complications, so crossmatching is done to ensure that a compatible blood product is transfused. Other blood products administered intravenously are platelets, blood plasma, cryoprecipitate, and specific coagulation factor concentrates. Many forms of medication (from antibiotics to chemotherapy) are administered intravenously, as they are not readily or adequately absorbed by the digestive tract. After severe acute blood loss, liquid preparations, generically known as plasma expanders, can be given intravenously, either solutions of salts (NaCl, KCl, CaCl2 etc.) at physiological concentrations, or colloidal solutions, such as dextrans, human serum albumin, or fresh frozen plasma. In these emergency situations, a plasma expander is a more effective life-saving procedure than a blood transfusion, because the metabolism of transfused red blood cells does not restart immediately after a transfusion. In modern evidence-based medicine, bloodletting is used in the management of a few rare diseases, including hemochromatosis and polycythemia. However, bloodletting and leeching were common unvalidated interventions used until the 19th century, as many diseases were incorrectly thought to be due to an excess of blood, according to Hippocratic medicine. English "blood" (Old English "blod") derives from Germanic and has cognates with a similar range of meanings in all other Germanic languages (e.g. German "Blut", Swedish "blod", Gothic "blōþ"). There is no accepted Indo-European etymology. Robin Fåhræus (a Swedish physician who devised the erythrocyte sedimentation rate) suggested that the Ancient Greek system of humorism, wherein the body was thought to contain four distinct bodily fluids (associated with different temperaments), was based upon the observation of blood clotting in a transparent container. When blood is drawn in a glass container and left undisturbed for about an hour, four different layers can be seen. A dark clot forms at the bottom (the "black bile"). Above the clot is a layer of red blood cells (the "blood"). Above this is a whitish layer of white blood cells (the "phlegm"). The top layer is clear yellow serum (the "yellow bile"). The ABO blood group system was discovered in the year 1900 by Karl Landsteiner. Jan Janský is credited with the first classification of blood into the four types (A, B, AB, and O) in 1907, which remains in use today. In 1907 the first blood transfusion was performed that used the ABO system to predict compatibility. The first non-direct transfusion was performed on March 27, 1914. The Rhesus factor was discovered in 1937. Due to its importance to life, blood is associated with a large number of beliefs. One of the most basic is the use of blood as a symbol for family relationships through birth and parentage; to be "related by blood" is to be related by ancestry or descent, rather than by marriage. This is closely tied to bloodlines and to sayings such as "blood is thicker than water", "bad blood", and "blood brother". Blood is given particular emphasis in the Jewish and Christian religions, because Leviticus 17:11 says "the life of a creature is in the blood."
This phrase is part of the Levitical law forbidding the drinking of blood or the eating of meat with the blood still intact, instead of being poured off. Mythic references to blood can sometimes be connected to the life-giving nature of blood, seen in such events as childbirth, as contrasted with the blood of injury or death. In many indigenous Australian Aboriginal peoples' traditions, ochre (particularly red) and blood, both high in iron content and considered Maban, are applied to the bodies of dancers for ritual. As Lawlor states: In many Aboriginal rituals and ceremonies, red ochre is rubbed all over the naked bodies of the dancers. In secret, sacred male ceremonies, blood extracted from the veins of the participants' arms is exchanged and rubbed on their bodies. Red ochre is used in similar ways in less-secret ceremonies. Blood is also used to fasten the feathers of birds onto people's bodies. Bird feathers contain a protein that is highly magnetically sensitive. Lawlor comments that blood employed in this fashion is held by these peoples to attune the dancers to the invisible energetic realm of the Dreamtime. Lawlor then connects these invisible energetic realms with magnetic fields, because iron is magnetic. Among the Germanic tribes, blood was used during their sacrifices, the "Blóts". The blood was considered to have the power of its originator, and, after the butchering, the blood was sprinkled on the walls, on the statues of the gods, and on the participants themselves. This act of sprinkling blood was called "blóedsian" in Old English, and the terminology was borrowed by the Roman Catholic Church, becoming "to bless" and "blessing". The Hittite word for blood, "ishar", was cognate with words for "oath" and "bond" (see Ishara). The Ancient Greeks believed that the blood of the gods, "ichor", was a substance that was poisonous to mortals. As a relic of Germanic law, the cruentation, an ordeal in which the corpse of the victim was supposed to start bleeding in the presence of the murderer, was used until the early 17th century. In Genesis 9:4, God prohibited Noah and his sons from eating blood (see Noahide Law). This command continued to be observed by the Eastern Orthodox Church. The Bible also relates that, when the Angel of Death came to the Hebrew houses in Egypt, the first-born child would not die if the angel saw lamb's blood wiped across the doorway. At the Council of Jerusalem, the apostles prohibited certain Christians from consuming blood – this is documented in Acts 15:20 and 29. This chapter specifies a reason (especially in verses 19–21): it was to avoid offending Jews who had become Christians, because the Mosaic Law Code prohibited the practice. In Christian teaching, Christ's blood is the means for the atonement of sins. Also, "... the blood of Jesus Christ his [God] Son cleanseth us from all sin" (1 John 1:7); "... Unto him [God] that loved us, and washed us from our sins in his own blood" (Revelation 1:5); and "And they overcame him (Satan) by the blood of the Lamb [Jesus the Christ], and by the word of their testimony ..." (Revelation 12:11). Some Christian churches, including Roman Catholicism, Eastern Orthodoxy, Oriental Orthodoxy, and the Assyrian Church of the East teach that, when consecrated, the Eucharistic wine actually becomes the blood of Jesus for worshippers to drink. Thus in the consecrated wine, Jesus becomes spiritually and physically present.
This teaching is rooted in the Last Supper, as written in the four gospels of the Bible, in which Jesus stated to his disciples that the bread that they ate was his body, and the wine was his blood: "This cup is the new testament in my blood, which is shed for you" (Luke 22:20). Most forms of Protestantism, especially those of a Methodist or Presbyterian lineage, teach that the wine is no more than a symbol of the blood of Christ, who is spiritually but not physically present. Lutheran theology teaches that the body and blood are present together "in, with, and under" the bread and wine of the Eucharistic feast. In Judaism, animal blood may not be consumed even in the smallest quantity (Leviticus 3:17 and elsewhere); this is reflected in Jewish dietary laws (Kashrut). Blood is purged from meat by rinsing and soaking in water (to loosen clots), salting, and then rinsing with water again several times. Eggs must also be checked and any blood spots removed before consumption. Although blood from fish is biblically kosher, it is rabbinically forbidden to consume fish blood to avoid the appearance of breaking the Biblical prohibition. Another ritual involving blood involves the covering of the blood of fowl and game after slaughtering (Leviticus 17:13); the reason given by the Torah is: "Because the life of the animal is [in] its blood" (ibid 17:14). In relation to human beings, Kabbalah expounds on this verse that the animal soul of a person is in the blood, and that physical desires stem from it. Likewise, the mystical reason for salting temple sacrifices and slaughtered meat is to remove the blood of animal-like passions from the person. By removing the animal's blood, the animal energies and life-force contained in the blood are removed, making the meat fit for human consumption. Consumption of food containing blood is forbidden by Islamic dietary laws. This is derived from the statement in the Qur'an, sura Al-Ma'ida (5:3): "Forbidden to you (for food) are: dead meat, blood, the flesh of swine, and that on which has been invoked the name of other than Allah." Blood is considered unclean, hence there are specific methods to obtain physical and ritual cleanliness once bleeding has occurred. Specific rules and prohibitions apply to menstruation, postnatal bleeding and irregular vaginal bleeding. When an animal has been slaughtered, the animal's neck is cut in a way to ensure that the spine is not severed, so that the brain may continue to send commands to the heart to pump blood for oxygen. In this way, blood is removed from the body, and the meat is generally now safe to cook and eat. In modern times, blood transfusions are generally not considered against the rules. Based on their interpretation of scriptures such as Acts 15:28, 29 ("Keep abstaining...from blood."), many Jehovah's Witnesses neither consume blood nor accept transfusions of whole blood or its major components: red blood cells, white blood cells, platelets (thrombocytes), and plasma. Members may personally decide whether they will accept medical procedures that involve their own blood or substances that are further fractionated from the four major components. In East Asian popular culture, it is often said that if a man's nose produces a small flow of blood, he is experiencing sexual desire. This often appears in Chinese-language and Hong Kong films, as well as in Japanese and Korean culture, where it is parodied in anime, manga, and drama.
Characters, mostly males, will often be shown with a nosebleed if they have just seen someone nude or in little clothing, or if they have had an erotic thought or fantasy; this is based on the idea that a male's blood pressure will spike dramatically when aroused. Vampires are mythical creatures that drink blood directly for sustenance, usually with a preference for human blood. Cultures all over the world have myths of this kind; for example, the 'Nosferatu' legend, a human who achieves damnation and immortality by drinking the blood of others, originates from Eastern European folklore. Ticks, leeches, female mosquitoes, vampire bats, and an assortment of other natural creatures consume the blood of other animals, but in folklore only bats became associated with vampires. The association owes nothing to vampire bats themselves, which are New World creatures that became known in Europe well after the origins of the vampire myths. Blood residue can help forensic investigators identify weapons, reconstruct a criminal action, and link suspects to the crime. Through bloodstain pattern analysis, forensic information can also be gained from the spatial distribution of bloodstains. Blood residue analysis is also a technique used in archeology. Blood is one of the body fluids that has been used in art. In particular, the performances of Viennese Actionist Hermann Nitsch, Istvan Kantor, Franko B, Lennie Lee, Ron Athey, Yang Zhichao, Lucas Abela and Kira O'Reilly, along with the photography of Andres Serrano, have incorporated blood as a prominent visual element. Marc Quinn has made sculptures using frozen blood, including a cast of his own head made using his own blood. The term "blood" is used in genealogical circles to refer to one's ancestry, origins, and ethnic background, as in the word "bloodline". Other terms where blood is used in a family history sense are "blue-blood", "royal blood", "mixed-blood" and "blood relative".
https://en.wikipedia.org/wiki?curid=3997
Benoit Mandelbrot Benoit B. Mandelbrot (20 November 1924 – 14 October 2010) was a Polish-born French and American mathematician and polymath with broad interests in the practical sciences, especially regarding what he labeled as "the art of roughness" of physical phenomena and "the uncontrolled element in life". He referred to himself as a "fractalist" and is recognized for his contribution to the field of fractal geometry, which included coining the word "fractal", as well as developing a theory of "roughness and self-similarity" in nature. In 1936, while he was a child, Mandelbrot's family emigrated to France from Warsaw, Poland. After World War II ended, Mandelbrot studied mathematics, graduating from universities in Paris and the United States and receiving a master's degree in aeronautics from the California Institute of Technology. He spent most of his career in both the United States and France, having dual French and American citizenship. In 1958, he began a 35-year career at IBM, where he became an IBM Fellow, and periodically took leaves of absence to teach at Harvard University. At Harvard, following the publication of his study of U.S. commodity markets in relation to cotton futures, he taught economics and applied sciences. Because of his access to IBM's computers, Mandelbrot was one of the first to use computer graphics to create and display fractal geometric images, leading to his discovery of the Mandelbrot set in 1980. He showed how visual complexity can be created from simple rules. He said that things typically considered to be "rough", a "mess" or "chaotic", like clouds or shorelines, actually had a "degree of order". His math and geometry-centered research career included contributions to such fields as statistical physics, meteorology, hydrology, geomorphology, anatomy, taxonomy, neurology, linguistics, information technology, computer graphics, economics, geology, medicine, physical cosmology, engineering, chaos theory, econophysics, metallurgy and the social sciences. Toward the end of his career, he was Sterling Professor of Mathematical Sciences at Yale University, where he was the oldest professor in Yale's history to receive tenure. Mandelbrot also held positions at the Pacific Northwest National Laboratory, Université Lille Nord de France, Institute for Advanced Study and Centre National de la Recherche Scientifique. During his career, he received over 15 honorary doctorates, served on the boards of many science journals, and won numerous awards. His autobiography, "The Fractalist: Memoir of a Scientific Maverick", was published posthumously in 2012. Mandelbrot was born into a Jewish family in Warsaw during the Second Polish Republic. His father made his living trading clothing; his mother was a dental surgeon. During his first two school years, he was tutored privately by an uncle who despised rote learning: "Most of my time was spent playing chess, reading maps and learning how to open my eyes to everything around me." Later, the family's move to France, the war, and his acquaintance with his father's brother, the mathematician Szolem Mandelbrojt who had moved to Paris around 1920, further prevented a standard education. The family emigrated from Poland to France in 1936, when he was 11. "The fact that my parents, as economic and political refugees, joined Szolem in France saved our lives," he writes. Mandelbrot attended the Lycée Rolin in Paris until the start of World War II, when his family moved to Tulle, France.
He was helped by Rabbi David Feuerwerker, the Rabbi of Brive-la-Gaillarde, to continue his studies. Much of France was occupied by the Nazis at the time, a period Mandelbrot later recalled in his memoir. In 1944, Mandelbrot returned to Paris, studied at the Lycée du Parc in Lyon, and from 1945 to 1947 attended the École Polytechnique, where he studied under Gaston Julia and Paul Lévy. From 1947 to 1949 he studied at the California Institute of Technology, where he earned a master's degree in aeronautics. Returning to France, he obtained his PhD degree in Mathematical Sciences at the University of Paris in 1952. From 1949 to 1958, Mandelbrot was a staff member at the Centre National de la Recherche Scientifique. During this time he spent a year at the Institute for Advanced Study in Princeton, New Jersey, where he was sponsored by John von Neumann. In 1955 he married Aliette Kagan and moved to Geneva, Switzerland (to collaborate with Jean Piaget at the International Centre for Genetic Epistemology) and later to the Université Lille Nord de France. In 1958 the couple moved to the United States, where Mandelbrot joined the research staff at the IBM Thomas J. Watson Research Center in Yorktown Heights, New York. He remained at IBM for 35 years, becoming an IBM Fellow, and later Fellow Emeritus. From 1951 onward, Mandelbrot worked on problems and published papers not only in mathematics but in applied fields such as information theory, economics, and fluid dynamics. Mandelbrot saw financial markets as an example of "wild randomness", characterized by concentration and long-range dependence. He developed several original approaches for modelling financial fluctuations. In his early work, he found that the price changes in financial markets did not follow a Gaussian distribution, but rather Lévy stable distributions having infinite variance. He found, for example, that cotton prices followed a Lévy stable distribution with parameter "α" equal to 1.7 rather than 2 as in a Gaussian distribution. "Stable" distributions have the property that the sum of many instances of a random variable follows the same distribution but with a larger scale parameter. As a visiting professor at Harvard University, Mandelbrot began to study fractals called Julia sets that were invariant under certain transformations of the complex plane. Building on previous work by Gaston Julia and Pierre Fatou, Mandelbrot used a computer to plot images of the Julia sets. While investigating the topology of these Julia sets, he studied the Mandelbrot set, which he introduced in 1979. In 1982, Mandelbrot expanded and updated his ideas in "The Fractal Geometry of Nature". This influential work brought fractals into the mainstream of professional and popular mathematics, as well as silencing critics, who had dismissed fractals as "program artifacts". In 1975, Mandelbrot had coined the term "fractal" to describe these structures, and first published his ideas in the French book "Les Objets fractals: forme, hasard et dimension", later translated as "Fractals: Form, Chance and Dimension". According to computer scientist and physicist Stephen Wolfram, the book was a "breakthrough" for Mandelbrot, who until then would typically "apply fairly straightforward mathematics … to areas that had barely seen the light of serious mathematics before".
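The stability property mentioned above can be written compactly. As a sketch, using the standard definition of a stable law (which the text paraphrases): if X, X_1, ..., X_n are independent copies of a stable random variable with stability index α, then

$$ X_1 + X_2 + \cdots + X_n \overset{d}{=} n^{1/\alpha}\, X + c_n $$

for some constant c_n. For a Gaussian (α = 2) the scale of the sum grows as √n, while for Mandelbrot's cotton-price fit (α = 1.7) it grows faster, as n^{1/1.7} ≈ n^{0.59}, one signature of the "wild randomness" he described.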
Wolfram adds that as a result of this new research, he was no longer a "wandering scientist", and later called him "the father of fractals": Wolfram briefly describes fractals as a form of geometric repetition, "in which smaller and smaller copies of a pattern are successively nested inside each other, so that the same intricate shapes appear no matter how much you zoom in to the whole. Fern leaves and Romanesco broccoli are two examples from nature." Mandelbrot used the term "fractal" as it derived from the Latin word "fractus", meaning broken or shattered. Using the newly developed IBM computers at his disposal, Mandelbrot was able to create fractal images using graphic computer code, images that an interviewer described as looking like "the delirious exuberance of the 1960s psychedelic art with forms hauntingly reminiscent of nature and the human body". He also saw himself as a "would-be Kepler", after the 17th-century scientist Johannes Kepler, who calculated and described the orbits of the planets. Mandelbrot, however, never felt he was inventing a new idea, as he explained in a documentary with science writer Arthur C. Clarke. According to Clarke, "the Mandelbrot set is indeed one of the most astonishing discoveries in the entire history of mathematics. Who could have dreamed that such an incredibly simple equation could have generated images of literally "infinite" complexity?" Clarke also notes an "odd coincidence: the name Mandelbrot, and the word "mandala"—for a religious symbol—which I'm sure is a pure coincidence, but indeed the Mandelbrot set does seem to contain an enormous number of mandalas". Mandelbrot left IBM in 1987, after 35 years and 12 days, when IBM decided to end pure research in his division. He joined the Department of Mathematics at Yale, and obtained his first tenured post in 1999, at the age of 75. At the time of his retirement in 2005, he was Sterling Professor of Mathematical Sciences. Mandelbrot created the first-ever "theory of roughness", and he saw "roughness" in the shapes of mountains, coastlines and river basins; the structures of plants, blood vessels and lungs; the clustering of galaxies. His personal quest was to create some mathematical formula to measure the overall "roughness" of such objects in nature. He began by asking himself various kinds of questions related to nature, such as how the length of a coastline could be measured. In his paper "How Long Is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension", published in "Science" in 1967, Mandelbrot discusses self-similar curves that have Hausdorff dimension between 1 and 2 and that are examples of "fractals", although Mandelbrot does not use this term in the paper, as he did not coin it until 1975. The paper is one of Mandelbrot's first publications on the topic of fractals. Mandelbrot emphasized the use of fractals as realistic and useful models for describing many "rough" phenomena in the real world. He concluded that "real roughness is often fractal and can be measured." Although Mandelbrot coined the term "fractal", some of the mathematical objects he presented in "The Fractal Geometry of Nature" had been previously described by other mathematicians. Before Mandelbrot, however, they were regarded as isolated curiosities with unnatural and non-intuitive properties. Mandelbrot brought these objects together for the first time and turned them into essential tools for the long-stalled effort to extend the scope of science to explaining non-smooth, "rough" objects in the real world.
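The text does not spell out the simple rule behind the set, so here is a minimal sketch. The Mandelbrot set is conventionally defined as the set of complex numbers c for which the iteration z → z² + c, started at z = 0, stays bounded; the escape-time rendering below is a modern toy illustration in Python, not Mandelbrot's original IBM code, and all names in it are hypothetical.

# Minimal escape-time sketch of the Mandelbrot set (illustrative only,
# not Mandelbrot's original code): a point c belongs to the set if the
# orbit of z -> z*z + c, starting from z = 0, never leaves |z| <= 2.

MAX_ITER = 50

def escape_time(c: complex, max_iter: int = MAX_ITER) -> int:
    """Return the iteration at which |z| first exceeds 2, or max_iter."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2.0:
            return n
        z = z * z + c
    return max_iter

# Coarse ASCII rendering of the region [-2, 0.6] x [-1.2, 1.2];
# '#' marks points that never escaped within MAX_ITER iterations.
for row in range(24):
    y = 1.2 - row * (2.4 / 23)
    line = ""
    for col in range(72):
        x = -2.0 + col * (2.6 / 71)
        line += "#" if escape_time(complex(x, y)) == MAX_ITER else " "
    print(line)

Even this crude rule reproduces the familiar cardioid-and-bulb silhouette, which is exactly the "visual complexity from simple rules" described above.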
His methods of research were both old and new. Fractals are also found in human pursuits, such as music, painting, architecture, and stock market prices. Mandelbrot believed that fractals, far from being unnatural, were in many ways more intuitive and natural than the artificially smooth objects of traditional Euclidean geometry: Clouds are not spheres, mountains are not cones, coastlines are not circles, and bark is not smooth, nor does lightning travel in a straight line. —Mandelbrot, in his introduction to "The Fractal Geometry of Nature" Mandelbrot's work has been called a work of art, and Mandelbrot himself a visionary and a maverick. His informal and passionate style of writing and his emphasis on visual and geometric intuition (supported by the inclusion of numerous illustrations) made "The Fractal Geometry of Nature" accessible to non-specialists. The book sparked widespread popular interest in fractals and contributed to chaos theory and other fields of science and mathematics. Mandelbrot also put his ideas to work in cosmology. He offered in 1974 a new explanation of Olbers' paradox (the "dark night sky" riddle), demonstrating the consequences of fractal theory as a sufficient, but not necessary, resolution of the paradox. He postulated that if the stars in the universe were fractally distributed (for example, like Cantor dust), it would not be necessary to rely on the Big Bang theory to explain the paradox. His model would not rule out a Big Bang, but would allow for a dark sky even if the Big Bang had not occurred. Mandelbrot's awards include the Wolf Prize for Physics in 1993, the Lewis Fry Richardson Prize of the European Geophysical Society in 2000, the Japan Prize in 2003, and the Einstein Lectureship of the American Mathematical Society in 2006. The small asteroid 27500 Mandelbrot was named in his honor. In November 1990, he was made a Chevalier in France's Legion of Honour. In December 2005, Mandelbrot was appointed to the position of Battelle Fellow at the Pacific Northwest National Laboratory. Mandelbrot was promoted to an Officer of the Legion of Honour in January 2006. An honorary degree from Johns Hopkins University was bestowed on Mandelbrot in the May 2010 commencement exercises. Mandelbrot died from pancreatic cancer at the age of 85 in a hospice in Cambridge, Massachusetts on 14 October 2010. Reacting to news of his death, mathematician Heinz-Otto Peitgen said: "[I]f we talk about impact inside mathematics, and applications in the sciences, he is one of the most important figures of the last fifty years." Chris Anderson, TED conference curator, described Mandelbrot as "an icon who changed how we see the world". Nicolas Sarkozy, President of France at the time of Mandelbrot's death, said Mandelbrot had "a powerful, original mind that never shied away from innovating and shattering preconceived notions [… h]is work, developed entirely outside mainstream research, led to modern information theory." Mandelbrot's obituary in "The Economist" points out his fame as "celebrity beyond the academy" and lauds him as the "father of fractal geometry". Best-selling essayist-author Nassim Nicholas Taleb has remarked that Mandelbrot's book "The (Mis)Behavior of Markets" is in his opinion "The deepest and most realistic finance book ever published".
https://en.wikipedia.org/wiki?curid=3999
Benedict of Nursia Benedict of Nursia (c. 480 – c. 547) is a Christian saint venerated in the Catholic Church, the Eastern Orthodox Church, the Oriental Orthodox Churches, the Anglican Communion and Old Catholic Churches. He is a patron saint of Europe. Benedict founded twelve communities for monks at Subiaco, Lazio, Italy (about 40 miles (64 km) to the east of Rome), before moving to Monte Cassino in the mountains of southern Italy. The Order of Saint Benedict is of later origin and, moreover, not an "order" as commonly understood but merely a confederation of autonomous congregations. Benedict's main achievement, his "Rule of Saint Benedict", contains a set of rules for his monks to follow. Heavily influenced by the writings of John Cassian, it shows strong affinity with the Rule of the Master, but it also has a unique spirit of balance, moderation and reasonableness (ἐπιείκεια, "epieíkeia"), which persuaded most Christian religious communities founded throughout the Middle Ages to adopt it. As a result, his Rule became one of the most influential religious rules in Western Christendom. For this reason, Giuseppe Carletti regarded Benedict as the founder of Western Christian monasticism. Apart from a short poem attributed to Mark of Monte Cassino, the only ancient account of Benedict is found in the second volume of Pope Gregory I's four-book "Dialogues", thought to have been written in 593, although the authenticity of this work has been disputed. Gregory's account of this saint's life is not, however, a biography in the modern sense of the word. It provides instead a spiritual portrait of the gentle, disciplined abbot. In a letter to Bishop Maximilian of Syracuse, Gregory states his intention for his "Dialogues", saying they are a kind of "floretum" (an "anthology", literally, 'flowers') of the most striking miracles of Italian holy men. Gregory did not set out to write a chronological, historically anchored story of Saint Benedict, but he did base his anecdotes on direct testimony. To establish his authority, Gregory explains that his information came from what he considered the best sources: a handful of Benedict's disciples who lived with the saint and witnessed his various miracles. These followers, he says, are Constantinus, who succeeded Benedict as Abbot of Monte Cassino; Valentinianus; Simplicius; and Honoratus, who was abbot of Subiaco when St Gregory wrote his "Dialogues". In Gregory's day, history was not recognised as an independent field of study; it was a branch of grammar or rhetoric, and "historia" was an account that summed up the findings of the learned when they wrote what was, at that time, considered 'history.' Gregory's "Dialogues" Book Two, then, an authentic medieval hagiography cast as a conversation between the Pope and his deacon Peter, is designed to teach spiritual lessons. He was the son of a Roman noble of Nursia, the modern Norcia, in Umbria. A tradition which Bede accepts makes him a twin with his sister Scholastica. If 480 is accepted as the year of his birth, the year of his abandonment of his studies and leaving home would be about 500. Saint Gregory's narrative makes it impossible to suppose him younger than 20 at the time. He was old enough to be in the midst of his literary studies, to understand the real meaning and worth of the dissolute and licentious lives of his companions, and to have been deeply affected by the love of a woman. He was at the beginning of life, and he had at his disposal the means to a career as a Roman noble; clearly he was not a child.
Benedict was sent to Rome to study, but was disappointed by the life he found there. He does not seem to have left Rome for the purpose of becoming a hermit, but only to find some place away from the life of the great city. He took his old nurse with him as a servant and they settled down to live in Enfide. Enfide, which the tradition of Subiaco identifies with the modern Affile, is in the Simbruini mountains, about forty miles from Rome and two from Subiaco. A short distance from Enfide is the entrance to a narrow, gloomy valley, penetrating the mountains and leading directly to Subiaco. The path continues to ascend, and the side of the ravine on which it runs becomes steeper, until a cave is reached, above which the mountain now rises almost perpendicularly, while on the right it drops in a rapid descent to where, in Saint Benedict's day, the blue waters of the lake lay below. The cave has a large triangular-shaped opening and is about ten feet deep. On his way from Enfide, Benedict met a monk, Romanus of Subiaco, whose monastery was on the mountain above the cliff overhanging the cave. Romanus had discussed with Benedict the purpose which had brought him to Subiaco, and had given him the monk's habit. By his advice Benedict became a hermit and for three years, unknown to men, lived in this cave above the lake. Gregory tells us little of these years. He now speaks of Benedict no longer as a youth ("puer"), but as a man ("vir") of God. Romanus, Gregory tells us, served the saint in every way he could. The monk apparently visited him frequently, and on fixed days brought him food. During these three years of solitude, broken only by occasional communications with the outer world and by the visits of Romanus, Benedict matured both in mind and character, in knowledge of himself and of his fellow-man, and at the same time he became not merely known to, but secured the respect of, those about him; so much so that on the death of the abbot of a monastery in the neighbourhood (identified by some with Vicovaro), the community came to him and begged him to become its abbot. Benedict was acquainted with the life and discipline of the monastery, and knew that "their manners were diverse from his and therefore that they would never agree together: yet, at length, overcome with their entreaty, he gave his consent" (ibid., 3). The experiment failed; the monks tried to poison him. The legend goes that they first tried to poison his drink; he prayed a blessing over the cup, and the cup shattered. Thus he left the group and went back to his cave at Subiaco. There lived in the neighbourhood a priest called Florentius who, moved by envy, tried to ruin him, first by sending him a loaf of poisoned bread. When Benedict prayed a blessing over the bread, a raven swept in and took the loaf away. From this time his miracles seem to have become frequent, and many people, attracted by his sanctity and character, came to Subiaco to be under his guidance. Having failed with the poisoned bread, Florentius tried to tempt his monks with prostitutes. To avoid further temptations, in about 530 Benedict left Subiaco. He founded 12 monasteries in the vicinity of Subiaco, and, eventually, in 530 he founded the great Benedictine monastery of Monte Cassino, which lies on a hilltop between Rome and Naples. During the invasion of Italy, Totila, King of the Goths, ordered a general to wear his kingly robes and to see whether Benedict would discover the truth.
Immediately the Saint detected the impersonation, and Totila came to pay him due respect. He is believed to have died of a fever at Monte Cassino not long after his twin sister, Saint Scholastica, and was buried in the same place as his sister. According to tradition, this occurred on 21 March 547. He was named patron protector of Europe by Pope Paul VI in 1964. In 1980, Pope John Paul II declared him co-patron of Europe, together with Saints Cyril and Methodius. Furthermore, he is the patron saint of speleologists. In the pre-1970 General Roman Calendar, his feast is kept on 21 March, the day of his death according to some manuscripts of the "Martyrologium Hieronymianum" and that of Bede. Because on that date his liturgical memorial would always be impeded by the observance of Lent, the 1969 revision of the General Roman Calendar moved his memorial to 11 July, the date that appears in some Gallic liturgical books of the end of the 8th century as the feast commemorating his birth ("Natalis S. Benedicti"). There is some uncertainty about the origin of this feast. Accordingly, on 21 March the Roman Martyrology mentions in a line and a half that it is Benedict's day of death and that his memorial is celebrated on 11 July, while on 11 July it devotes seven lines to speaking of him, and mentions the tradition that he died on 21 March. The Eastern Orthodox Church commemorates Saint Benedict on 14 March. The Anglican Communion has no single universal calendar, but a provincial calendar of saints is published in each province. In almost all of these, Saint Benedict is commemorated on 11 July. Benedict wrote the "Rule" in 516 for monks living communally under the authority of an abbot. The "Rule" comprises seventy-three short chapters. Its wisdom is twofold: spiritual (how to live a Christocentric life on earth) and administrative (how to run a monastery efficiently). More than half of the chapters describe how to be obedient and humble, and what to do when a member of the community is not. About one-fourth regulate the work of God (the "opus Dei"). One-tenth outline how, and by whom, the monastery should be managed. Following the motto of "Ora et labora" ("pray and work"), the monks each day devoted eight hours to prayer, eight hours to sleep, and eight hours to manual work, sacred reading, or works of charity. The Saint Benedict Medal, a devotional medal, originally came from a cross in honour of Saint Benedict. On one side, the medal has an image of Saint Benedict, holding the Holy Rule in his left hand and a cross in his right, with a raven on one side of him and a cup on the other. Around the medal's outer margin are the words "Eius in obitu nostro praesentia muniamur" ("May we be strengthened by his presence in the hour of our death"). The other side of the medal has a cross with the initials CSSML on the vertical bar, which signify "Crux Sacra Sit Mihi Lux" ("May the Holy Cross be my light"), and on the horizontal bar the initials NDSMD, which stand for "Non Draco Sit Mihi Dux" ("Let not the dragon be my guide"). The initials CSPB stand for "Crux Sancti Patris Benedicti" ("The Cross of the Holy Father Benedict") and are located on the interior angles of the cross. Either the inscription "PAX" (Peace) or the Christogram "IHS" may be found at the top of the cross in most cases.
Around the medal's margin on this side are the "Vade Retro Satana" initials VRSNSMV, which stand for "Vade Retro Satana, Numquam Suade Mihi Vana" ("Begone Satan, do not suggest to me thy vanities"), then a space, followed by the initials SMQLIVB, which signify "Sunt Mala Quae Libas, Ipse Venena Bibas" ("Evil are the things thou profferest, drink thou thy own poison"). This medal was first struck in 1880 to commemorate the fourteenth centenary of Saint Benedict's birth and is also called the Jubilee Medal; its exact origin, however, is unknown. In 1647, during a witchcraft trial at Natternberg near Metten Abbey in Bavaria, the accused women testified they had no power over Metten, which was under the protection of the cross. An investigation found a number of painted crosses on the walls of the abbey with the letters now found on St Benedict medals, but their meaning had been forgotten. A manuscript written in 1415 was eventually found that had a picture of Saint Benedict holding a scroll in one hand and a staff which ended in a cross in the other. On the scroll and staff were written the full words of the initials contained on the crosses. Medals then began to be struck in Germany, and their use spread throughout Europe. The medal was first approved by Pope Benedict XIV in his briefs of 23 December 1741 and 12 March 1742. Saint Benedict has also been the motif of many collectors' coins around the world. The Austrian 50-euro coin 'The Christian Religious Orders', issued on 13 March 2002, is one of them. The early Middle Ages have been called "the Benedictine centuries." In April 2008, Pope Benedict XVI discussed the influence St Benedict had on Western Europe. The pope said that "with his life and work St Benedict exercised a fundamental influence on the development of European civilization and culture" and helped Europe to emerge from the "dark night of history" that followed the fall of the Roman empire. Saint Benedict contributed more than anyone else to the rise of monasticism in the West. His Rule was the foundational document for thousands of religious communities in the Middle Ages. To this day, The Rule of St. Benedict is the most common and influential Rule used by monasteries and monks, more than 1,400 years after its writing. Today the Benedictine family is represented by two branches: the Benedictine Confederation and the Cistercians. The influence of Saint Benedict produced "a true spiritual ferment" in Europe, and over the coming decades his followers spread across the continent to establish a new cultural unity based on Christian faith. A basilica was built upon the birthplace of Saints Benedict and Scholastica in the 1400s. Ruins of their familial home were excavated from beneath the church and preserved. The earthquake of 30 October 2016 completely devastated the structure of the basilica, leaving only the front facade and altar standing.
https://en.wikipedia.org/wiki?curid=4001
Battle of Pharsalus The Battle of Pharsalus was the decisive battle of Caesar's Civil War. On 9 August 48 BC at Pharsalus in central Greece, Gaius Julius Caesar and his allies formed up opposite the army of the republic under the command of Gnaeus Pompeius Magnus ("Pompey the Great"). Pompey had the backing of a majority of the senators, of whom many were optimates, and his army significantly outnumbered the veteran Caesarian legions. The two armies confronted each other over several months of uncertainty, Caesar being in a much weaker position than Pompey. The former found himself isolated in a hostile country with only 22,000 legionaries and short of provisions, while on the other side of the river he was faced by Pompey with an army about twice as large. Pompey wanted to delay, knowing the enemy would eventually surrender from hunger and exhaustion. Pressured by the senators present and by his officers, he reluctantly engaged in battle and suffered an overwhelming defeat, ultimately fleeing the camp and his men, disguised as an ordinary citizen. He was later assassinated in Ptolemaic Egypt on the orders of Ptolemy XIII. A dispute between Caesar and the "optimates" faction in the Senate of Rome culminated in Caesar marching his army on Rome and forcing Pompey, accompanied by much of the Roman Senate, to flee in 49 BC from Italy to Greece, where he could better conscript an army to face his former ally. Caesar, lacking a fleet to immediately give chase, solidified his control over the western Mediterranean – Spain specifically – before assembling ships to follow Pompey. Marcus Calpurnius Bibulus, whom Pompey had appointed to command his 600-ship fleet, set up a massive blockade to prevent Caesar from crossing to Greece and to prevent any aid to Italy. Caesar, defying convention, chose to cross the Adriatic during the winter, with only half his fleet at a time. Caesar was now in a precarious position, holding a beachhead in Epirus with only half his army, no ability to supply his troops by sea, and limited local support, as the Greek cities were mostly loyal to Pompey. Caesar's only choice was to fortify his position, forage what supplies he could, and wait for his remaining army to attempt another crossing. Pompey by now had a massive international army; however, his troops were mostly untested raw recruits, while Caesar's troops were hardened veterans. Realizing Caesar's difficulty in keeping his troops supplied, Pompey decided to simply mirror Caesar's forces and let hunger do the fighting for him. Caesar began to despair and used every channel he could think of to pursue peace with Pompey. When this was rebuffed, he made an attempt to cross back to Italy to collect his missing troops, but was turned back by a storm. Finally, Mark Antony rallied the remaining forces in Italy, fought through the blockade, and made the crossing, reinforcing Caesar's forces in both men and spirit. Now at full strength, Caesar felt confident to take the fight to Pompey. Pompey was camped in a strong position just south of Dyrrhachium with the sea to his back and surrounded by hills, making a direct assault impossible. Caesar ordered a wall to be built around Pompey's position in order to cut off water and pasture land for his horses. Pompey built a parallel wall, and in between a kind of no man's land was created, with fighting comparable to the trench warfare of World War I. Ultimately the standoff was broken when a traitor in Caesar's army informed Pompey of a weakness in Caesar's wall.
Pompey immediately exploited this information and forced Caesar's army into a full retreat, but ordered his army not to pursue, fearing Caesar's reputation for setting elaborate traps. This caused Caesar to remark, "Today the victory had been the enemy's, had there been any one among them to gain it." Pompey continued his strategy of mirroring Caesar's forces and avoiding any direct engagements. After trapping Caesar in Thessaly, the prominent senators in Pompey's camp began to argue loudly for a more decisive victory. Although Pompey was strongly against it — he wanted to surround and starve Caesar's army instead — he eventually gave in and accepted battle from Caesar on a field near Pharsalus. An excerpt from Cassius Dio's "Roman History" gives a more ancient flavor of the prelude to the Battle of Pharsalus: [41.56] "As a result of these circumstances and of the very cause and purpose of the war a most notable struggle took place. For the city of Rome and its entire empire, even then great and mighty, lay before them as the prize, since it was clear to all that it would be the slave of him who then conquered. When they reflected on this fact and furthermore thought of their former deeds [...41.57] they were wrought up to the highest pitch of excitement...they now, led by their insatiable lust of power, hastened to break, tear, and rend asunder. Because of them Rome was being compelled to fight both in her own defense and against herself, so that even if victorious she would be vanquished." The date of the actual decisive battle is given as 9 August 48 BC according to the republican calendar. According to the Julian calendar, however, the date was either 29 June (according to Le Verrier's chronological reconstruction) or possibly 7 June (according to Drumann/Groebe). As Pompey was assassinated on 3 September 48 BC, the battle must have taken place in the true month of August, when the harvest was becoming ripe (or Pompey's strategy of starving Caesar would not be plausible). The location of the battlefield was for a long time the subject of controversy among scholars. Caesar himself, in his Commentarii de Bello Civili, mentions few place-names; and although the battle is named after Pharsalos by modern authors, four ancient writers – the author of the "Bellum Alexandrinum" (48.1), Frontinus ("Strategemata" 2.3.22), Eutropius (20), and Orosius (6.15.27) – place it specifically at "Palae"pharsalus ("Old" Pharsalus). Strabo in his "Geographica" ("Γεωγραφικά") mentions both old and new Pharsaloi, and notes that the Thetideion, the temple to Thetis south of Scotoussa, was near both. In 198 BC, in the Second Macedonian War, Philip V of Macedon sacked Palaepharsalos (Livy, "Ab Urbe Condita" 32.13.9), but left new Pharsalos untouched. These two details perhaps imply that the two cities were not close neighbours. Many scholars, therefore, unsure of the site of Palaepharsalos, followed Appian (2.75) and located the battle of 48 BC south of the Enipeus or close to Pharsalos (today's Pharsala). Among the scholars arguing for the south side are Béquignon (1928), Bruère (1951), and Gwatkin (1956). An increasing number of scholars, however, have argued for a location on the north side of the river. These include Perrin (1885), Holmes (1908), Lucas (1921), Rambaud (1955), Pelling (1973), Morgan (1983), and Sheppard (2006). John D.
Morgan, in his definitive "Palae-pharsalus – the Battle and the Town", shows that Palaepharsalus cannot have been at Palaiokastro, as Béquignon thought (a site abandoned c. 500 BC), nor at the hill of Fatih-Dzami within the walls of Pharsalus itself, as Kromayer (1903, 1931) and Gwatkin thought; and Morgan argues that it is probably also not the hill of Khtouri (Koutouri), some 7 miles north-west of Pharsalus on the south bank of the Enipeus, as Lucas and Holmes thought, although that remains a possibility. Morgan believes it is most likely to have been the hill just east of the village of Krini (formerly Driskoli), very close to the ancient highway from Larisa to Pharsalus. This site is some six miles (10 km) north of Pharsalus, and three miles north of the river Enipeus, and not only has remains dating back to Neolithic times but also shows signs of habitation in the 1st century BC and later. The identification seems to be confirmed by the location of a place misspelled "Palfari" or "Falaphari" shown on a medieval route map of the road just north of Pharsalus. Morgan places Pompey's camp a mile to the west of Krini, just north of the village of Avra (formerly Sarikayia), and Caesar's camp some four miles to the east-south-east of Pompey's. According to this reconstruction, therefore, the battle took place not between Pharsalus and the river, as Appian wrote, but between Old Pharsalus and the river. An interesting side-note on Palaepharsalus is that it was sometimes identified in ancient sources with Phthia, the home of Achilles. Near Old and New Pharsalus was a "Thetideion", or temple dedicated to Thetis, the mother of Achilles. However, Phthia, the kingdom of Achilles and his father Peleus, is more usually identified with the lower valley of the Spercheios river, much further south. Although it is often called the Battle of Pharsalus by modern historians, this name was rarely used in the ancient sources. Caesar merely calls it the "proelium in Thessaliā" ("battle in Thessaly"); Marcus Tullius Cicero and Hirtius call it the "Pharsālicum proelium" ("Pharsalic battle") or "pugna Pharsālia" ("Pharsalian battle"), and similar expressions are also used by other authors. But Hirtius (if he is the author of the "de Bello Alexandrino") also refers to the battle as having taken place at "Palaepharsalus", and this name also occurs in Strabo, Frontinus, Eutropius, and Orosius. Lucan in his poem about the Civil War regularly uses the name "Pharsālia", and this term is also used by the epitomiser of Livy and by Tacitus. The only ancient sources to refer to the battle as being at Pharsalus are a certain calendar known as the Fasti Amiternini and the Greek authors Plutarch, Appian, and Polyaenus. It has therefore been argued by some scholars that "Pharsalia" would be a more accurate name for the battle than Pharsalus. Caesar gives his own numbers as 22,000 men in eighty cohorts (it should be remembered that these numbers refer to legionaries, and do not include non-Roman infantry) and 1,000 cavalry (mainly Germanic and Gallic auxiliaries). Caesar had eight legions with him – the VI, VII, VIII, IX, X, XI, XII, and XIII, the same units later named in his battle line – but all of them were understrength. Some only had about a thousand men at the time of Pharsalus, due partly to losses at Dyrrhachium and partly to Caesar's wish to advance rapidly with a picked body as opposed to moving ponderously with a large army. Aside from the five legions Pompey had brought over from Italy, he had one from Cilicia, one from Greece, and two more from Asia raised by the consul Lentulus.
He distributed amongst them a number of veteran re-enlistees from Greece, as well as the elements of 15 cohorts captured from Caesar's supporter Gaius Antonius in Illyria. Pompey was at some point reinforced by cohorts which had fled Caesar's onslaught in Spain, and was later joined by another two legions brought from Syria by his father-in-law, Metellus Scipio. This brought the army's strength to a total of 11 legions, of which 88 cohorts were fielded at Pharsalus, comprising altogether some 40,000 heavy infantry. Pompey's auxiliary infantry and cavalry outnumbered Caesar's own by far (though their exact numbers are unclear) and were remarkably diverse, including a handful of Gallic and Germanic cavalry and a polyglot array of eastern peoples – Phoenicians, Cretan slingers and other Greeks, Jews, Arabs, Anatolians, Armenians, and others – to which heterogeneous force Pompey added horsemen conscripted from his own slaves. Many of the foreigners were serving under their own rulers, for more than a dozen despots and petty kings under Roman influence in the east were Pompey's personal clients; some elected to attend in person, others sent proxies. Of Pompey's cavalry specifically, Sheppard suggests that Pompey still had 6,700 men with him at Pharsalus, but Hans Delbrück believed the number to be much lower, perhaps 3,000. On the Pharsalian plain, Pompey deployed his army with its right flank against the river. Each cohort of Roman infantry was formed in a much thicker formation than usual, 10 men deep, in order to prevent the men in the front line from fleeing and to enable his troops to absorb the shock of Caesar's attack. With this in mind, they were to tie down Caesar's infantry and thus give time for the superior Pompeian cavalry to overwhelm the enemy's own and subsequently attack Caesar's flank and rear. As a precaution, 500–600 Pontic horsemen and some Cappadocian light infantry were placed on the right flank; but, trusting that the river would provide sufficient protection to this wing, Pompey concentrated the bulk of the cavalry, his key to victory, on the left flank. Pompey's legions were arrayed in the traditional three-line formation ("triplex acies"): four cohorts in the front line and three each in the second and third lines. He stationed in the center and wings the troops in which he placed most confidence: on the left stood the two legions which Caesar had given to the Senate shortly before the civil war began, while the two legions brought from Syria by Scipio were placed in the middle, and on the right stood the legion from Cilicia together with the cohorts brought from Spain; the space between these experienced soldiers was filled with raw recruits. Pompey also dispersed 2,000 re-enlisted veterans from his previous campaigns throughout the entire army in order to strengthen its ranks. The infantry line was divided under the command of three subordinates, with L. Lentulus in charge of Pompey's left, Scipio of the center, and L. Domitius Ahenobarbus of the right. Pompey himself took up a position behind the left wing in order to oversee the course of the battle, while the cavalry on that wing was placed under the command of Titus Labienus, a former lieutenant of Caesar. Caesar also deployed his men in three lines, but, being outnumbered, had to thin his ranks to a depth of only six men in order to match the frontage presented by Pompey.
His left flank, resting on the Enipeus River, consisted of his battle-worn IXth legion supplemented by the VIIIth legion; these were commanded by Mark Antony. The VI, XII, XI and XIII formed the center and were commanded by Domitius; then came the VII, and on his right he placed his favored Xth legion, giving Sulla command of this flank. Caesar himself took his stand on the right, across from Pompey. Upon seeing the disposition of Pompey's army, Caesar grew uneasy and further thinned his third line in order to form a fourth line on his right: this was to counter the onslaught of the enemy cavalry, which he knew his numerically inferior cavalry could not withstand. He gave this new line detailed instructions for the role they would play, hinting that upon them would rest the fortunes of the day, and gave strict orders to his third line not to charge until specifically ordered. There was significant distance between the two armies, according to Caesar. Pompey ordered his men not to charge, but to wait until Caesar's legions came into close quarters; Pompey's adviser Gaius Triarius believed that Caesar's infantry would be fatigued and fall into disorder if they were forced to cover twice the expected distance of a battle march. Also, stationary troops were expected to be able to defend better against pila throws. Seeing that Pompey's army was not advancing, Caesar's infantry under Mark Antony and Gnaeus Domitius Calvinus started the advance. As Caesar's men neared throwing distance, they stopped without orders to rest and regroup before continuing the charge; Pompey's right and center lines held as the two armies collided. As Pompey's infantry fought, Labienus ordered the Pompeian cavalry on his left flank to attack Caesar's cavalry; as expected, they successfully pushed back Caesar's cavalry. Caesar then revealed his hidden fourth line of infantry and surprised Pompey's cavalry charge; his men were ordered to leap up and use their pila to thrust at Pompey's cavalry instead of throwing them. Pompey's cavalry panicked and suffered hundreds of casualties; after they failed to re-form, the rest of the cavalry retreated to the hills, leaving the left wing of Pompey's legions exposed. Caesar then ordered in his third line, containing his most battle-hardened veterans, to attack. This broke Pompey's left-wing troops, who fled the battlefield. With Pompey's cavalry routed and Caesar's last line of reserves committed, the battle was at this point more or less decided. Pompey lost the will to fight as he watched both cavalry and legions under his command break formation and flee from battle, and he retreated to his camp, leaving the rest of his troops at the center and right flank to their own devices. He ordered the garrisoned auxiliaries to defend the camp as he gathered his family, loaded up gold, and threw off his general's cloak to make a quick escape. As the rest of Pompey's army was left confused, Caesar urged his men to end the day by routing the rest of Pompey's troops and capturing the Pompeian camp. They complied with his wishes; after finishing off the remains of Pompey's men, they furiously attacked the camp walls. The Thracians and the other auxiliaries who were left in the Pompeian camp, in total seven cohorts, defended bravely, but were not able to fend off the assault. Caesar had won his greatest victory, claiming to have lost only about 200 soldiers and 30 centurions.
In his history of the war, Caesar praised his own men's discipline and experience, and remembered each of his centurions by name. He also questioned Pompey's decision not to charge. Pompey fled from Pharsalus to Egypt, where he was assassinated on the order of Ptolemy XIII. Ptolemy XIII sent Pompey's head to Caesar in an effort to win his favor, but instead made Caesar a furious enemy. Ptolemy, advised by his regent, the eunuch Pothinus, and his rhetoric tutor Theodotus of Chios, had failed to take into account that Caesar was granting amnesty to a great number of those of the senatorial faction in their defeat. Even men who had been bitter enemies were allowed not only to return to Rome but to assume their previous positions in Roman society. Pompey's assassination had deprived Caesar of his ultimate public relations moment: pardoning his most ardent rival. The Battle of Pharsalus ended the wars of the First Triumvirate. The Roman Civil War, however, was not ended. Pompey's two sons, Gnaeus Pompeius and Sextus Pompey, and the Pompeian faction, led now by Metellus Scipio and Cato, survived and fought for their cause in the name of Pompey the Great. Caesar spent the next few years 'mopping up' remnants of the senatorial faction. After seemingly vanquishing all his enemies and bringing peace to Rome, he was assassinated in 44 BC by friends, in a conspiracy organized by Marcus Junius Brutus and Gaius Cassius Longinus. Paul K. Davis wrote that "Caesar's victory took him to the pinnacle of power, effectively ending the Republic." The battle itself did not end the civil war, but it was decisive and gave Caesar a much-needed boost in legitimacy. Until then much of the Roman world outside Italy supported Pompey and his allies due to the extensive list of clients he held in all corners of the Republic. After Pompey's defeat, former allies began to align themselves with Caesar, as some came to believe the gods favored him, while for others it was simple self-preservation. The ancients put great stock in success as a sign of favoritism by the gods, especially success in the face of almost certain defeat, as Caesar had experienced at Pharsalus. This allowed Caesar to parlay this single victory into a huge network of willing clients to better secure his hold over power, and forced the Optimates into near exile in search of allies to continue the fight against him. In Alexandre Dumas' "The Three Musketeers", the author makes reference to Caesar's purported order that his men try to cut the faces of their opponents, their vanity supposedly being of more value to them than their lives.
https://en.wikipedia.org/wiki?curid=4005
Bigfoot In North American folklore, Bigfoot or Sasquatch are said to be hairy, upright-walking, ape-like creatures that dwell in the wilderness and leave giant, humanlike footprints. Depictions often portray them as a missing link between humans and human ancestors or other great apes. They are strongly associated with the Pacific Northwest, particularly Oregon, Washington, British Columbia and Northern California. Individuals have claimed to see the creatures all across North America over the years. These creatures have inspired numerous commercial ventures and hoaxes. The plural nouns 'Bigfoots' and 'Bigfeet' are both in use. Folklorists trace the figure of Bigfoot to a combination of factors and sources, including folklore surrounding the European wild man figure, folk belief among Native Americans and loggers, and a cultural increase in environmental concerns. A majority of scientists have historically discounted the existence of Bigfoot, considering it to be a combination of folklore, misidentification, and hoax, rather than living animals. People who claim to have seen it describe Bigfoot as a large, muscular, bipedal ape-like creature covered in hair described as black, dark brown, or dark reddish. The enormous, humanlike footprints for which the creatures are named are central to many reports. Some footprint casts have also contained claw marks, making it likely that they came from known animals such as bears, which have five toes and claws. According to David Daegling, the legends predate the name "Bigfoot". They differ in their details both regionally and between families in the same community. Ecologist Robert Pyle says that most cultures have accounts of human-like giants in their folk history, expressing a need for "some larger-than-life creature." Each language had its own name for the creatures featured in the local version of such legends. Many names meant something along the lines of "wild man" or "hairy man", although other names described common actions that it was said to perform, such as eating clams or shaking trees. Chief Mischelle of the Nlaka'pamux at Lytton, British Columbia, told such a story to Charles Hill-Tout in 1898; he named the creature by a Salishan variant meaning "the benign-faced-one". Members of the Lummi tell tales about "Ts'emekwes", the local version of Bigfoot. The stories are similar to each other in the general descriptions of "Ts'emekwes", but details differed among various family accounts concerning the creatures' diet and activities. Some regional versions tell of more threatening creatures. The "stiyaha" or "kwi-kwiyai" were a nocturnal race. Children were warned against saying the names, lest the monsters hear and come to carry someone off, sometimes to kill them. In 1847 Paul Kane reported stories by the Indians about "skoocooms", a race of cannibalistic wildmen living on the peak of Mount St. Helens in southern Washington state. Less-menacing versions have also been recorded, such as one in 1840 by Elkanah Walker, a Protestant missionary who recorded stories of giants among the Indians living near Spokane, Washington. The Indians said that these giants lived on and around the peaks of nearby mountains and stole salmon from the fishermen's nets. In the 1920s, Indian Agent J. W. Burns compiled local stories and published them in a series of Canadian newspaper articles. They were accounts told to him by the Sts'Ailes people of Chehalis and others.
The Sts'Ailes and other regional tribes maintained that the Sasquatch were real. They were offended by people telling them that the figures were legendary. According to Sts'Ailes accounts, the Sasquatch preferred to avoid white men and spoke the Lillooet language of the people at Port Douglas, British Columbia, at the head of Harrison Lake. These accounts were published again in 1940. Burns borrowed the term Sasquatch from the Halkomelem "sásq'ets" and used it in his articles to describe a hypothetical single type of creature portrayed in the local stories. About one-third of all claims of Bigfoot sightings are located in the Pacific Northwest, with the remaining reports spread throughout the rest of North America. Bigfoot has become better known and a phenomenon in popular culture, and sightings have spread throughout North America. Rural areas of the Great Lakes region and the Southeastern United States have been sources of numerous reports of Bigfoot sightings, in addition to the Pacific Northwest. In the "Bigfoot Casebook", authors Janet and Colin Bord document sightings from 1818 to 1980, listing over 1,000 reports. The debate over the legitimacy of Bigfoot sightings reached a peak in the 1970s, and Bigfoot has been regarded as the first widely popularized example of pseudoscience in American culture, so much so that, according to a 2014 Associated Press poll, more Americans believe in Bigfoot than in the Big Bang. Various explanations have been suggested for the sightings, along with conjecture about what type of creature Bigfoot might be. Scientists typically attribute sightings either to hoaxes or to misidentification of known animals and their tracks, particularly black bears. In 2007 the Bigfoot Field Researchers Organization put forward some photos which they claimed showed a juvenile Bigfoot. The Pennsylvania Game Commission, however, said that the photos were of a bear with mange. Anthropologist Jeffrey Meldrum and Ohio scientist Jason Jarvis countered that the limb proportions of the creature were not bear-like but "more like a chimpanzee." Both Bigfoot believers and non-believers agree that many of the reported sightings are hoaxes or misidentified animals. Author Jerome Clark argues that the Jacko Affair, an 1884 newspaper report of an apelike creature captured in British Columbia, was a hoax. He cites research by John Green, who found that several contemporaneous British Columbia newspapers regarded the alleged capture as highly dubious, and notes that the "Mainland Guardian" of New Westminster, British Columbia, wrote, "Absurdity is written on the face of it." Tom Biscardi is a long-time Bigfoot enthusiast and CEO of Searching for Bigfoot Inc. He appeared on the "Coast to Coast AM" paranormal radio show on July 14, 2005, and said that he was "98% sure that his group will be able to capture a Bigfoot which they had been tracking in the Happy Camp, California area." A month later, he announced on the same radio show that he had access to a captured Bigfoot and was arranging a pay-per-view event for people to see it. He appeared on "Coast to Coast AM" again a few days later to announce that there was no captive Bigfoot. He blamed an unnamed woman for misleading him, and said that the show's audience was gullible. On July 9, 2008, Rick Dyer and Matthew Whitton posted a video to YouTube, claiming that they had discovered the body of a dead Sasquatch in a forest in northern Georgia. Tom Biscardi was contacted to investigate.
Dyer and Whitton received US$50,000 from Searching for Bigfoot, Inc. as a good faith gesture. The story was covered by many major news networks, including BBC, CNN, ABC News, and Fox News. Soon after a press conference, the alleged Bigfoot body was delivered in a block of ice in a freezer to the Searching for Bigfoot team. When the contents were thawed, observers found that the hair was not real, the head was hollow, and the feet were rubber. Dyer and Whitton admitted that it was a hoax after being confronted by Steve Kulls, executive director of SquatchDetective.com. In August 2012, a man in Montana was killed by a car while perpetrating a Bigfoot hoax using a ghillie suit. In January 2014, Rick Dyer, perpetrator of a previous Bigfoot hoax, said that he had killed a Bigfoot creature in September 2012 outside San Antonio, Texas. He said that he had scientific tests performed on the body, "from DNA tests to 3D optical scans to body scans. It is the real deal. It's Bigfoot, and Bigfoot's here, and I shot it, and now I'm proving it to the world." He said that he had kept the body in a hidden location, and he intended to take it on tour across North America in 2014. He released photos of the body and a video showing a few individuals' reactions to seeing it, but never released any of the tests or scans. He refused to disclose the test results or to provide biological samples. He said that the DNA results were done by an undisclosed lab and could not be matched to identify any known animal. Dyer said that he would reveal the body and tests on February 9, 2014, at a news conference at Washington University, but he never made the test results available. After the Phoenix tour, the Bigfoot body was taken to Houston. On March 28, 2014, Dyer admitted on his Facebook page that his "Bigfoot corpse" was another hoax. He had paid Chris Russel of Twisted Toy Box to manufacture the prop, which he nicknamed "Hank", from latex, foam, and camel hair. Dyer earned approximately $60,000 from the tour of this second fake Bigfoot corpse. He said that he did kill a Bigfoot, but did not take the real body on tour for fear that it would be stolen. Bigfoot proponents Grover Krantz and Geoffrey H. Bourne believed that Bigfoot could be a relict population of "Gigantopithecus". All "Gigantopithecus" fossils were found in Asia, but according to Bourne, many species of animals migrated across the Bering land bridge, and he suggested that "Gigantopithecus" might have done so as well. No "Gigantopithecus" fossils have been found in the Americas, and the only recovered fossils are of mandibles and teeth, leaving uncertainty about "Gigantopithecus"'s locomotion. Krantz has argued that "Gigantopithecus blacki" could have been bipedal, based on his extrapolation of the shape of its mandible. However, the relevant part of the mandible is not present in any fossils. An alternative view is that "Gigantopithecus" was quadrupedal; its enormous mass would have made it difficult for it to adopt a bipedal gait. American anthropologist Matt Cartmill has likewise criticized the "Gigantopithecus" hypothesis, and Bernard G. Campbell writes: "That "Gigantopithecus" is in fact extinct has been questioned by those who believe it survives as the Yeti of the Himalayas and the Sasquatch of the north-west American coast. But the evidence for these creatures is not convincing." Primatologist John R.
Napier and anthropologist Gordon Strasenburg have suggested a species of "Paranthropus" as a possible candidate for Bigfoot's identity, such as "Paranthropus robustus", with its gorilla-like crested skull and bipedal gait, despite the fact that fossils of "Paranthropus" are found only in Africa. Michael Rugg of the Bigfoot Discovery Museum presented a comparison between human, "Gigantopithecus," and "Meganthropus" skulls (reconstructions made by Grover Krantz) in episodes 131 and 132 of the Bigfoot Discovery Museum Show. He favorably compares a modern tooth suspected of coming from a Bigfoot to the "Meganthropus" fossil teeth, noting the worn enamel on the occlusal surface; the "Meganthropus" fossils originated from Asia, however, while the tooth was found near Santa Cruz, California. Some suggest Neanderthal, "Homo erectus", or "Homo heidelbergensis" to be the creature, but no remains of any of those species have been found in the Americas. Scientists do not consider the subject of Bigfoot to be a fertile area for credible science, and there have been a limited number of formal scientific studies of Bigfoot. Evidence such as the 1967 Patterson–Gimlin film has provided "no supportive data of any scientific value". Great apes have not been found in the fossil record in the Americas, and no Bigfoot remains are known to have been found. Phillips Stevens, a cultural anthropologist at the University at Buffalo, has summarized this scientific consensus. McLeod writes that in the 1970s, when Bigfoot "experts" were frequently given high-profile media coverage, the scientific community generally avoided lending credence to the theories by debating them. The first scientific study of available evidence was conducted by John Napier and published in his book, "Bigfoot: The Yeti and Sasquatch in Myth and Reality," in 1973. Napier wrote that, if a conclusion is to be reached based on the scant extant "hard" evidence, science must declare "Bigfoot does not exist." However, he found it difficult to entirely reject thousands of alleged tracks, "scattered over 125,000 square miles" (325,000 km²), or to dismiss all "the many hundreds" of eyewitness accounts. Napier concluded, "I am convinced that Sasquatch exists, but whether it is all it is cracked up to be is another matter altogether. There must be "something" in north-west America that needs explaining, and that something leaves man-like footprints." Anthropologists such as George Gaylord Simpson rejected Napier's conclusion, noting that much of the data cited by Napier came from hoaxes and that no evidence for Bigfoot had been found since his book was published. In 1974, the National Wildlife Federation funded a field study seeking Bigfoot evidence. No formal federation members were involved, and the study made no notable discoveries. Few qualified anthropologists have written on the subject. The few who did included Grover Krantz, Carleton S. Coon, George Allen Agogino and William Charles Osman Hill, although they came to no definite conclusions and later drifted from this research. Beginning in the late 1970s, physical anthropologist Grover Krantz published several articles and four book-length treatments of Sasquatch. However, his work was found to contain multiple scientific failings, including falling for hoaxes. A study published in the "Journal of Biogeography" in 2009 by J.D. Lozier et al. used ecological niche modeling on reported sightings of Bigfoot, using their locations to infer Bigfoot's preferred ecological parameters.
They found a very close match with the ecological parameters of the American black bear, "Ursus americanus". They also note that an upright bear looks much like Bigfoot's purported appearance and consider it highly improbable that two species should have very similar ecological preferences, concluding that Bigfoot sightings are likely sightings of black bears. In the first systematic genetic analysis of 30 hair samples that were suspected to be from Bigfoot, yeti, sasquatch, almasty or other anomalous primates, only one was found to be primate in origin, and that was identified as human. The study, a joint effort by the University of Oxford and Lausanne's Cantonal Museum of Zoology published in the "Proceedings of the Royal Society B" in 2014, used a previously published cleaning method to remove all surface contamination; the ribosomal mitochondrial DNA 12S fragment of each sample was then sequenced and compared to GenBank to identify the species of origin. The samples submitted were from different parts of the world, including the United States, Russia, the Himalayas, and Sumatra. Other than the one sample of human origin, all but two were from common animals. Black and brown bears accounted for most of the samples; other animals included cow, horse, dog/wolf/coyote, sheep, goat, raccoon, porcupine, deer and tapir. The last two samples were thought to match a fossilized genetic sample of a 40,000-year-old polar bear of the Pleistocene epoch; however, a later study disputed this finding, with its tests identifying the hairs as being from a rare type of brown bear. After what "The Huffington Post" described as "a five-year study of purported Bigfoot (also known as Sasquatch) DNA samples", but prior to peer review of the work, DNA Diagnostics, a veterinary laboratory headed by veterinarian Melba Ketchum, issued a press release on November 24, 2012, claiming that they had found proof that the Sasquatch "is a human relative that arose approximately 15,000 years ago as a hybrid cross of modern "Homo sapiens" with an unknown primate species." Ketchum called for this to be recognized officially, saying that "Government at all levels must recognize them as an indigenous people and immediately protect their human and Constitutional rights against those who would see in their physical and cultural differences a 'license' to hunt, trap, or kill them." In 2012, Ketchum registered with ZooBank, a non-governmental organization adjunct to the International Commission on Zoological Nomenclature (ICZN), the name "Homo sapiens cognatus" for the reputed hominid more familiarly known as Bigfoot or Sasquatch. According to Ari Grossman of Midwestern University, the lack of a formal differential diagnosis, type specimen, or designated location of a type specimen to verify the organism named leaves the registered name open to challenge. Failing to find a scientific journal that would publish their results, Ketchum announced on February 13, 2013, that their research had been published in the "DeNovo Journal of Science". "The Huffington Post" discovered that the journal's domain had been registered anonymously only nine days before the announcement. This was the only edition of "DeNovo" and was listed as Volume 1, Issue 1, with its only content being the Ketchum paper. Shortly after publication, the paper was analyzed and outlined by Sharon Hill of Doubtful News for the Committee for Skeptical Inquiry.
Hill reported on the questionable journal, mismanaged DNA testing and poor-quality paper, stating that "The few experienced geneticists who viewed the paper reported a dismal opinion of it noting it made little sense." "The Scientist" magazine also analyzed the paper and reported critically on it. Claims about the origins and characteristics of Bigfoot have also crossed over with other paranormal claims, including that Bigfoot and UFOs are related or that Bigfoot creatures are psychic or even completely supernatural. The evidence advanced supporting the existence of such a large, ape-like creature has often been attributed to hoaxes or delusion rather than to sightings of a genuine creature. In a 1996 "USA Today" article, Washington State zoologist John Crane said, "There is no such thing as Bigfoot. No data other than material that's clearly been fabricated has ever been presented." In addition, scientists cite the fact that Bigfoot is alleged to live in regions unusual for a large, nonhuman primate, i.e., temperate latitudes in the northern hemisphere; all recognized apes are found in the tropics of Africa and Asia. There are several organizations dedicated to the research and investigation of Bigfoot sightings in the United States. The oldest and largest is the Bigfoot Field Researchers Organization (BFRO). The BFRO also provides a free database to individuals and other organizations. Their website includes reports from across North America that have been investigated by researchers to determine credibility. In February 2016, the University of New Mexico at Gallup held a two-day Bigfoot conference, at a cost of $7,000 in university funds. Bigfoot has had a demonstrable impact as a popular culture phenomenon. When asked for her opinion of Bigfoot in a September 27, 2002, interview on National Public Radio's "Science Friday", Jane Goodall said "I'm sure they exist", and later said, chuckling, "Well, I'm a romantic, so I always wanted them to exist", and finally, "You know, why isn't there a body? I can't answer that, and maybe they don't exist, but I want them to." In 2012, when asked again by "The Huffington Post", Goodall said, "I'm fascinated and would actually love them to exist," adding, "Of course, it's strange that there has never been a single authentic hide or hair of the Bigfoot, but I've read all the accounts."
https://en.wikipedia.org/wiki?curid=4009
Bing Crosby Harry Lillis "Bing" Crosby Jr. (May 3, 1903 – October 14, 1977) was an American singer, comedian and actor. The first multimedia star, Crosby was a leader in record sales, radio ratings, and motion picture grosses from 1931 to 1954. He made over seventy feature films and sold one billion records worldwide, on which were recorded more than 1,600 different songs ("White Christmas" alone sold over 50 million copies). His early career coincided with recording innovations that allowed him to develop an intimate singing style that influenced many male singers who followed him, including Perry Como, Frank Sinatra, Dick Haymes, Elvis Presley, John Lennon, and Dean Martin. "Yank" magazine said that he was "the person who had done the most for the morale of overseas servicemen" during World War II. In 1948, American polls declared him the "most admired man alive", ahead of Jackie Robinson and Pope Pius XII. Also in 1948, "Music Digest" estimated that his recordings filled more than half of the 80,000 weekly hours allocated to recorded radio music. Crosby won an Oscar for Best Actor for his role as Father Chuck O'Malley in the 1944 motion picture "Going My Way" and was nominated for his reprise of the role in "The Bells of St. Mary's" opposite Ingrid Bergman the next year, becoming the first of six actors to be nominated twice for playing the same character. In 1963, Crosby received the first Grammy Global Achievement Award. He is one of 33 people to have three stars on the Hollywood Walk of Fame, in the categories of motion pictures, radio, and audio recording. He was also known for his collaborations with longtime friend Bob Hope, starring in the "Road to..." films from 1940 to 1962. Crosby influenced the development of the postwar recording industry. After seeing a demonstration of a German broadcast-quality reel-to-reel tape recorder brought to America by John T. Mullin, he invested $50,000 in a California electronics company called Ampex to build copies. He then convinced ABC to allow him to tape his shows. He became the first performer to pre-record his radio shows and master his commercial recordings onto magnetic tape. Through the medium of recording, he constructed his radio programs with the same directorial tools and craftsmanship (editing, retaking, rehearsal, time shifting) used in motion picture production, a practice that became an industry standard. In addition to his work with early audio tape recording, he helped to finance the development of videotape, bought television stations, bred racehorses, and co-owned the Pittsburgh Pirates baseball team. Crosby was born on May 3, 1903, in Tacoma, Washington, in a house his father built at 1112 North J Street. In 1906, his family moved to Spokane in Eastern Washington state, where he was raised. In 1913, his father built a house at 508 E. Sharp Avenue. The house sits on the campus of his alma mater, Gonzaga University. It functions today as a museum housing over 200 artifacts from his life and career, including his Oscar. He was the fourth of seven children: brothers Laurence Earl (Larry) (1895–1975), Everett Nathaniel (1896–1966), Edward John (Ted) (1900–1973), and George Robert (Bob) (1913–1993); and two sisters, Catherine Cordelia (1904–1974) and Mary Rose (1906–1990). His parents were Harry Lowe Crosby (1870–1950), a bookkeeper, and Catherine Helen "Kate" (née Harrigan; 1873–1964). His mother was a second-generation Irish-American.
His father was of English descent; an ancestor, Simon Crosby, emigrated from England to New England in the 1630s during the Puritan migration to New England. Through another line, also on his father's side, Crosby is descended from "Mayflower" passenger William Brewster (c. 1567 – April 10, 1644). On November 8, 1937, after Lux Radio Theatre's adaptation of "She Loves Me Not", Joan Blondell asked Crosby how he got his nickname:
Crosby: "Well, I'll tell you, back in the knee-britches day, when I was a wee little tyke, a mere broth of a lad, as we say in Spokane, I used to totter around the streets, with a gun on each hip, my favorite after school pastime was a game known as "Cops and Robbers", I didn't care which side I was on, when a cop or robber came into view, I would haul out my trusty six-shooters, made of wood, and loudly exclaim "bing"! "bing"!, as my luckless victim fell clutching his side, I would shout "bing"! "bing"!, and I would let him have it again, and then as his friends came to his rescue, shooting as they came, I would shout "bing"! "bing"! "bing"! "bing"! "bing"! "bing"! "bing"! "bing"!"
Blondell: "I'm surprised they didn't call you "Killer" Crosby! Now tell me another story, Grandpa!"
Crosby: "No, so help me, it's the truth, ask Mister De Mille."
De Mille: "I'll vouch for it, Bing."
That story was pure whimsy for dramatic effect; the truth is that a neighbor – Valentine Hobart – named him "Bingo from Bingville" after a comic feature in the local paper called "The Bingville Bugle" which the young Harry liked. In time, Bingo got shortened to Bing. In 1917, Crosby took a summer job as property boy at Spokane's "Auditorium", where he witnessed some of the finest acts of the day, including Al Jolson, who held him spellbound with ad libbing and parodies of Hawaiian songs. He later described Jolson's delivery as "electric." Crosby graduated from Gonzaga High School (today's Gonzaga Prep) in 1920 and enrolled at Gonzaga University. He attended Gonzaga for three years but did not earn a degree. As a freshman, he played on the university's baseball team. The university granted him an honorary doctorate in 1937. Today, Gonzaga University houses a large collection of photographs, correspondence, and other material related to Crosby. In 1923, Crosby was invited to join a new band composed of high-school students a few years younger than himself. Al and Miles Rinker (brothers of singer Mildred Bailey), James Heaton, Claire Pritchard and Robert Pritchard, along with drummer Crosby, formed the Musicaladers, who performed at dances both for high-school students and club-goers. The group performed on Spokane radio station KHQ, but disbanded after two years. Crosby and Al Rinker then obtained work at the Clemmer Theatre in Spokane (now known as the Bing Crosby Theater). Crosby was initially a member of a vocal trio called 'The Three Harmony Aces', with Al Rinker accompanying on piano from the pit, entertaining between the films. Bing and Al continued at the Clemmer Theatre for several months, often with three other men – Wee Georgie Crittenden, Frank McBride and Lloyd Grinnell – and they were billed as The Clemmer Trio or The Clemmer Entertainers, depending on who performed. In October 1925, Crosby and Rinker decided to seek fame in California. They traveled to Los Angeles, where Bailey introduced them to her show business contacts.
The Fanchon and Marco Time Agency hired them for thirteen weeks for the revue "The Syncopation Idea", starting at the Boulevard Theater in Los Angeles and then on the Loew's circuit. They each earned $75 a week. Performing minor parts in "The Syncopation Idea", Crosby and Rinker started to develop as entertainers. They had a lively style that was popular with college students. After "The Syncopation Idea" closed, they worked in the Will Morrissey Music Hall Revue. They honed their skills with Morrissey. When they got a chance to present an independent act, they were spotted by a member of the Paul Whiteman organization. Whiteman needed something different to break up his musical selections, and Crosby and Rinker filled this requirement. After less than a year in show business, they were attached to one of the biggest names. Hired for $150 a week in 1926, they debuted with Whiteman on December 6 at the Tivoli Theatre in Chicago. Their first recording, in October 1926, was "I've Got the Girl" with Don Clark's Orchestra, but the Columbia-issued record was inadvertently recorded at a slow speed, which increased the singers' pitch when played at 78 rpm. Throughout his career, Crosby often credited Bailey for getting him his first important job in the entertainment business. Success with Whiteman was followed by disaster when they reached New York, and Whiteman considered letting them go. However, the addition of pianist and aspiring songwriter Harry Barris made the difference, and "The Rhythm Boys" were born. The additional voice meant they could be heard more easily in large New York theaters. Crosby gained valuable experience on tour for a year with Whiteman, performing and recording with Bix Beiderbecke, Jack Teagarden, Tommy Dorsey, Jimmy Dorsey, Eddie Lang, and Hoagy Carmichael. He matured as a performer and was in demand as a solo singer. Crosby became the star attraction of the Rhythm Boys. In 1928, he had his first number one hit, a jazz-influenced rendition of "Ol' Man River". In 1929, the Rhythm Boys appeared in the film "King of Jazz" with Whiteman, but Crosby's growing dissatisfaction with Whiteman led to the Rhythm Boys leaving his organization. They joined the Gus Arnheim Orchestra, performing nightly in the Cocoanut Grove of the Ambassador Hotel. Singing with the Arnheim Orchestra, Crosby's solos began to steal the show, while the Rhythm Boys act gradually became redundant. Harry Barris wrote several of Crosby's hits, including "At Your Command", "I Surrender Dear", and "Wrap Your Troubles in Dreams". When Mack Sennett signed Crosby to a solo film contract in 1931, a break with the Rhythm Boys became almost inevitable. Crosby married Dixie Lee in September 1930. After a threat of divorce in March 1931, he applied himself to his career. On September 2, 1931, Crosby made his nationwide solo radio debut. Before the end of the year, he signed with both Brunswick and CBS Radio. Doing a weekly 15-minute radio broadcast, Crosby became a hit. "Out of Nowhere", "Just One More Chance", "At Your Command" and "I Found a Million Dollar Baby (in a Five and Ten Cent Store)" were among the best selling songs of 1931. Ten of the top 50 songs of 1931 featured Crosby, either with others or as a solo act. A "Battle of the Baritones" with singer Russ Columbo proved short-lived, replaced with the slogan "Bing Was King".
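As a brief technical aside on that flawed 1926 disc: playback pitch scales directly with turntable speed, so a disc cut at v rpm but reproduced at 78 rpm raises every frequency by the factor 78/v. With a purely illustrative cutting speed of 70 rpm (the session's actual speed is not documented here), the shift would be 78/70 ≈ 1.11, or about 12 × log2(1.11) ≈ 1.9 semitones, enough to audibly brighten the singers' voices.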
Crosby played the lead in a series of musical comedy short films for Mack Sennett, signed with Paramount, and starred in his first full-length film, "The Big Broadcast" (1932), the first of 55 films in which he received top billing. He would appear in 79 pictures in all. He signed a contract with Jack Kapp's new record company, Decca, in late 1934. His first commercial sponsor on radio was Cremo Cigars, and his fame spread nationwide. After a long run in New York, he went back to Hollywood to film "The Big Broadcast". His appearances, records, and radio work substantially increased his impact. The success of his first film brought him a contract with Paramount, and he began a pattern of making three films a year. He led his radio show for Woodbury Soap for two seasons while his live appearances dwindled. His records produced hits during the Depression, when sales were otherwise down. Audio engineer Steve Hoffman stated, "By the way, Bing actually saved the record business in 1934 when he agreed to support Decca founder Jack Kapp's crazy idea of lowering the price of singles from a dollar to 35 cents and getting a royalty for records sold instead of a flat fee. Bing's name and his artistry saved the recording industry. All the other artists signed to Decca after Bing did. Without him, Jack Kapp wouldn't have had a chance in hell of making Decca work and the Great Depression would have wiped out phonograph records for good." His social life was frantic. His first son Gary was born in 1933, with twin boys following in 1934. By 1936, he replaced his former boss, Paul Whiteman, as host of the weekly NBC radio program "Kraft Music Hall", where he remained for the next ten years. "Where the Blue of the Night (Meets the Gold of the Day)", with his trademark whistling, became his theme song and signature tune. Crosby's vocal style helped take popular singing beyond the "belting" associated with Al Jolson and Billy Murray, who had been obligated to reach the back seats in New York theaters without the aid of the microphone. As music critic Henry Pleasants noted in "The Great American Popular Singers", something new had entered American music, a style that might be called "singing in American" with conversational ease. This new sound led to the popular epithet "crooner". Crosby admired Louis Armstrong for his musical ability, and the trumpet maestro was a formative influence on Crosby's singing style. When the two met, they immediately became friends. In 1936, Crosby exercised an option in his Paramount contract to star in a film for an outside studio. Signing an agreement with Columbia for a single motion picture, Crosby wanted Armstrong to appear in a screen adaptation of "The Peacock Feather" that eventually became "Pennies from Heaven". Crosby asked Columbia head Harry Cohn to engage Armstrong, but Cohn had no desire to pay for the flight or to meet Armstrong's "crude, mob-linked but devoted manager, Joe Glaser." Crosby threatened to leave the film and refused to discuss the matter. Cohn gave in; Armstrong's musical scenes and comic dialogue extended his influence to the silver screen, creating more opportunities for him and other African Americans to appear in future films. Crosby also ensured behind the scenes that Armstrong received equal billing with his white co-stars. Armstrong appreciated Crosby's progressive attitudes on race, and often expressed gratitude for the role in later years. During the Second World War, Crosby made live appearances before American troops who had been fighting in the European Theater.
He learned how to pronounce German from written scripts and read propaganda broadcasts intended for German forces. The nickname "Der Bingle" was common among Crosby's German listeners and came to be used by his English-speaking fans. In a poll of U.S. troops at the close of World War II, Crosby topped the list as the person who had done the most for G.I. morale, ahead of President Franklin Delano Roosevelt, General Dwight Eisenhower, and Bob Hope. The June 18, 1945, issue of "Life" magazine stated, "America's number one star, Bing Crosby, has won more fans, made more money than any entertainer in history. Today he is a kind of national institution." "In all, 60,000,000 Crosby discs have been marketed since he made his first record in 1931. His biggest best seller is 'White Christmas', 2,000,000 impressions of which have been sold in the U.S. and 250,000 in Great Britain." "Nine out of ten singers and bandleaders listen to Crosby's broadcasts each Thursday night and follow his lead. The day after he sings a song over the air – any song – some 50,000 copies of it are sold throughout the U.S. Time and again Crosby has taken some new or unknown ballad, has given it what is known in trade circles as the 'big goose' and made it a hit single-handed and overnight...Precisely what the future holds for Crosby neither his family nor his friends can conjecture. He has achieved greater popularity, made more money, attracted vaster audiences than any other entertainer in history. And his star is still in the ascendant. His contract with Decca runs until 1955. His contract with Paramount runs until 1954. Records which he made ten years ago are selling better than ever before. The nation's appetite for Crosby's voice and personality appears insatiable. To soldiers overseas and to foreigners he has become a kind of symbol of America, of the amiable, humorous citizen of a free land. Crosby, however, seldom bothers to contemplate his future. For one thing, he enjoys hearing himself sing, and if ever a day should dawn when the public wearies of him, he will complacently go right on singing—to himself." The biggest hit song of Crosby's career was his recording of Irving Berlin's "White Christmas", which he introduced on a Christmas Day radio broadcast in 1941. (A copy of the recording from the radio program is owned by the estate of Bing Crosby and was loaned to "CBS Sunday Morning" for their December 25, 2011, program.) The song then appeared in his movie "Holiday Inn" (1942). His record hit the charts on October 3, 1942, and rose to No. 1 on October 31, where it stayed for 11 weeks. A holiday perennial, the song was repeatedly re-released by Decca, charting another sixteen times. It topped the charts again in 1945 and a third time in January 1947. The song remains the bestselling single of all time. According to "Guinness World Records", his recording of "White Christmas" has sold over 50 million copies around the world. His recording was so popular that he was obliged to re-record it in 1947 using the same musicians and backup singers; the original 1942 master had become damaged due to its frequent use in pressing additional singles. Although the two versions are similar, the 1947 recording is more familiar today. In 1977, after Crosby died, the song was re-released and reached No. 5 in the UK Singles Chart. Crosby was dismissive of his role in the song's success, saying "a jackdaw with a cleft palate could have sung it successfully." 
In the wake of a solid decade of headlining mainly smash-hit musical comedy films in the 1930s, Crosby starred with Bob Hope and Dorothy Lamour in seven "Road to" musical comedies between 1940 and 1962, cementing Crosby and Hope as an on-and-off duo, despite their never officially declaring themselves a "team" in the sense that Laurel and Hardy or Martin and Lewis (Dean Martin and Jerry Lewis) were teams. The series consists of "Road to Singapore" (1940), "Road to Zanzibar" (1941), "Road to Morocco" (1942), "Road to Utopia" (1946), "Road to Rio" (1947), "Road to Bali" (1952), and "The Road to Hong Kong" (1962). When they appeared solo, Crosby and Hope each frequently made note of the other in a comically insulting fashion. They performed together countless times on stage, radio, film, and television, and made numerous brief and not-so-brief appearances together in movies aside from the "Road" pictures, "Variety Girl" (1947) being an example of lengthy scenes and songs together along with billing. In the 1949 Disney animated film "The Adventures of Ichabod and Mr. Toad", Crosby provided the narration and song vocals for "The Legend of Sleepy Hollow" segment. In 1960, he starred in "High Time", a collegiate comedy with Fabian Forte and Tuesday Weld that predicted the emerging gap between him and the new young generation of musicians and actors who had begun their careers after WWII. The following year, Crosby and Hope reunited for one more "Road" movie, "The Road to Hong Kong", which teamed them up with the much younger Joan Collins and Peter Sellers. Collins was used in place of their longtime partner Dorothy Lamour, who Crosby felt was getting too old for the role, though Hope refused to do the movie without her, and she instead made a lengthy and elaborate cameo appearance. Shortly before his death in 1977, he had planned another "Road" film in which he, Hope, and Lamour search for the Fountain of Youth. He won an Academy Award for Best Actor for "Going My Way" in 1944 and was nominated for the 1945 sequel, "The Bells of St. Mary's". He received critical acclaim and his third Academy Award nomination for his performance as an alcoholic entertainer in "The Country Girl". "The Fireside Theater" (1950) was his first television production. The series of 26-minute shows was filmed at Hal Roach Studios rather than performed live on the air. The "telefilms" were syndicated to individual television stations. He was a frequent guest on the musical variety shows of the 1950s and 1960s, appearing countless times on variety shows as well as numerous late-night talk shows and his own highly rated specials. Bob Hope memorably devoted one of his monthly NBC specials to his long, intermittent partnership with Crosby, titled "On the Road With Bing". Crosby was associated with ABC's "The Hollywood Palace" as the show's first and most frequent guest host and appeared annually on its Christmas edition with his wife Kathryn and his younger children, a tradition he continued in other specials after "The Hollywood Palace" was eventually canceled. In the early 1970s, he made two late appearances on the "Flip Wilson Show", singing duets with the comedian. His last TV appearance was a Christmas special taped in London in September 1977 and aired weeks after his death. It was on this special that he recorded a duet of "The Little Drummer Boy" and "Peace on Earth" with rock star David Bowie. Their duet was released in 1982 as a single 45-rpm record and reached No. 3 in the UK singles charts.
It has since become a staple of holiday radio and the final popular hit of Crosby's career. At the end of the 20th century, "TV Guide" listed the Crosby-Bowie duet as one of the 25 most memorable musical moments of 20th-century television. Bing Crosby Productions, affiliated with Desilu Studios and later CBS Television Studios, produced a number of television series, including Crosby's own unsuccessful ABC sitcom "The Bing Crosby Show" in the 1964–1965 season (with co-stars Beverly Garland and Frank McHugh). The company produced two ABC medical dramas, "Ben Casey" (1961–1966) and "Breaking Point" (1963–1964), the popular "Hogan's Heroes" (1965–1971) military comedy on CBS, as well as the lesser-known show "Slattery's People" (1964–1965). Crosby was one of the first singers to exploit the intimacy of the microphone rather than use the deep, loud vaudeville style associated with Al Jolson. He was, by his own definition, a "phraser", a singer who placed equal emphasis on both the lyrics and the music. Paul Whiteman's hiring of Crosby, whose phrasing echoed jazz, particularly the trumpet playing of his bandmate Bix Beiderbecke, helped bring the genre to a wider audience. In the framework of the novelty-singing style of the Rhythm Boys, he bent notes and added off-tune phrasing, an approach that was rooted in jazz. He had already been introduced to Louis Armstrong and Bessie Smith before his first appearance on record. Crosby and Armstrong remained warm acquaintances for decades, occasionally singing together in later years, e.g. "Now You Has Jazz" in the film "High Society" (1956). During the early portion of his solo career (about 1931–1934), Crosby's emotional, often pleading style of crooning was popular. But Jack Kapp, manager of Brunswick and later Decca, talked him into dropping many of his jazzier mannerisms in favor of a clear vocal style. Crosby credited Kapp for choosing hit songs, working with many other musicians, and most importantly, diversifying his repertoire into several styles and genres. Kapp helped Crosby have number one hits in Christmas music, Hawaiian music, and country music, and top-thirty hits in Irish music, French music, rhythm and blues, and ballads. Crosby elaborated on an idea of Al Jolson's: phrasing, or the art of making a song's lyric ring true. "I used to tell Sinatra over and over," said Tommy Dorsey, "there's only one singer you ought to listen to and his name is Crosby. All that matters to him is the words, and that's the only thing that ought to for you, too." Critic Henry Pleasants likewise found in Crosby's conversational delivery something genuinely new in American singing. Crosby's was among the most popular and successful musical acts of the 20th century. "Billboard" magazine used different methodologies during his career, but his chart success remains impressive: 396 chart singles, including roughly 25 No. 1 hits. Crosby had separate charting singles every year between 1931 and 1954; the annual re-release of "White Christmas" extended that streak to 1957. He had 24 separate popular singles in 1939 alone. Statistician Joel Whitburn at "Billboard" determined that Crosby was America's most successful recording act of the 1930s and again in the 1940s. For fifteen years (1934, 1937, 1940, 1943–1954), Crosby was among the top ten acts in box-office sales, and for five of those years (1944–1948) he topped the list.
He sang four Academy Award-winning songs – "Sweet Leilani" (1937), "White Christmas" (1942), "Swinging on a Star" (1944), and "In the Cool, Cool, Cool of the Evening" (1951) – and won the Academy Award for Best Actor for his role in "Going My Way" (1944). A survey in 2000 found that, with 1,077,900,000 movie tickets sold, Crosby was the third most popular actor of all time, behind Clark Gable (1,168,300,000) and John Wayne (1,114,000,000). The "International Motion Picture Almanac" lists him in a tie for second-most years at number one on the All Time Number One Stars List with Clint Eastwood, Tom Hanks, and Burt Reynolds. His most popular film, "White Christmas", grossed $30 million in 1954. He received 23 gold and platinum records, according to the book "Million Selling Records". The Recording Industry Association of America did not institute its gold record certification program until 1958, by which time Crosby's record sales were low; before then, gold records were awarded by the record companies themselves. Crosby charted 23 "Billboard" hits from 47 songs recorded with the Andrews Sisters, whose Decca record sales were second only to Crosby's throughout the 1940s. They were his most frequent collaborators on disc from 1939 to 1952, a partnership that produced four million-selling singles: "Pistol Packin' Mama", "Jingle Bells", "Don't Fence Me In", and "South America, Take it Away". They made one film appearance together, in "Road to Rio", singing "You Don't Have to Know the Language", and sang together on radio throughout the 1940s and 1950s. They appeared as guests on each other's shows and on Armed Forces Radio Service during and after World War II. The quartet's Top-10 "Billboard" hits from 1943 to 1945, including "The Vict'ry Polka", "There'll Be a Hot Time in the Town of Berlin (When the Yanks Go Marching In)", and "Is You Is or Is You Ain't (Ma' Baby?)", helped boost the morale of the American public. In 1962, Crosby was given the Grammy Lifetime Achievement Award. He has been inducted into the halls of fame for both radio and popular music. In 2007, he was inducted into the Hit Parade Hall of Fame and in 2008 the Western Music Hall of Fame. According to Shoshana Klebanoff, during the Golden Age of Radio, performers had to create their shows live, sometimes even redoing the program a second time for the West Coast time zone. Crosby had to do two live radio shows on the same day, three hours apart, for the East and West Coasts. Crosby's radio career took a significant turn in 1945, when he clashed with NBC over his insistence that he be allowed to pre-record his radio shows. (The live production of radio shows was also reinforced by the musicians' union and ASCAP, which wanted to ensure continued work for their members.) In "On the Air: The Encyclopedia of Old-Time Radio", John Dunning wrote about German engineers having developed a tape recorder of near-professional broadcast quality. Crosby's insistence eventually factored into the further development of magnetic tape sound recording and the radio industry's widespread adoption of it. He used his clout, both professional and financial, for innovations in audio. But NBC and CBS refused to broadcast prerecorded radio programs. Crosby left the network and remained off the air for seven months, creating a legal battle with his sponsor Kraft that was settled out of court. He returned to broadcasting for the last 13 weeks of the 1945–1946 season.
The Mutual network, on the other hand, had pre-recorded some of its programs as early as 1938 for "The Shadow" with Orson Welles. ABC, formed from the sale of the NBC Blue Network in 1943 after a federal antitrust suit, was willing to join Mutual in breaking the tradition. ABC offered Crosby $30,000 per week to produce a recorded show every Wednesday that would be sponsored by Philco. He would get an additional $40,000 from 400 independent stations for the rights to broadcast the 30-minute show, which was sent to them every Monday on three 16-inch (40-cm) lacquer discs that played ten minutes per side at 33 rpm. Crosby wanted to change to recorded production for several reasons. The legend most often told is that it would give him more time for golf; he did record his first "Philco Radio Time" program in August 1947 so he could enter the Jasper National Park Invitational Golf Tournament in September, when the radio season was to start. But golf was not the most important reason. He wanted better-quality recording, the ability to eliminate mistakes, freedom from performing a second live show for the West Coast, and control over the timing of his performances. Because Bing Crosby Enterprises produced the show, he could purchase the best audio equipment and arrange the microphones his way; microphone placement had been debated in studios since the beginning of the electrical era. He would no longer have to wear the toupee that CBS and NBC required for his live audience shows; he preferred a hat. He could also record short promotions for his latest investment, the world's first frozen orange juice, sold under the brand name Minute Maid. The investment also took advantage of a tax loophole that kept the resulting income from being taxed at the era's 77% top rate. Murdo MacKenzie of Bing Crosby Enterprises had seen a demonstration of the German Magnetophon in June 1947, the same device that Jack Mullin had brought back from Radio Frankfurt, along with 50 reels of tape, at the end of the war. It was one of the magnetic tape recorders that BASF and AEG had built in Germany starting in 1935. The 6.5 mm ferric-oxide-coated tape could record 20 minutes per reel of high-quality sound. Alexander M. Poniatoff directed Ampex, the company he had founded in 1944, to manufacture an improved version of the Magnetophon. Crosby hired Mullin to start recording his "Philco Radio Time" show on his German-made machine in August 1947, using the same 50 reels of I.G. Farben magnetic tape that Mullin had found at a radio station at Bad Nauheim near Frankfurt while working for the U.S. Army Signal Corps. The advantage was editing; Crosby's autobiography and Mullin's 1976 memoir of these early days of experimental recording describe it in closely agreeing terms. Crosby invested US$50,000 in Ampex with the intent to produce more machines. In 1948, the second season of Philco shows was recorded with the Ampex Model 200A and Scotch 111 tape from 3M. Mullin later recounted how one new broadcasting technique was invented on the Crosby show with these machines. Crosby started the tape recorder revolution in America. In his 1950 film "Mr. Music", he is seen singing into an Ampex tape recorder that reproduced his voice with unprecedented fidelity. Also quick to adopt tape recording was his friend Bob Hope. Crosby gave one of the first Ampex Model 300 recorders to his friend, guitarist Les Paul, which led to Paul's invention of multitrack recording.
His organization, the Crosby Research Foundation, held tape recording patents and developed equipment and recording techniques, such as the laugh track, that are still in use today. With Frank Sinatra, Crosby was one of the principal backers of the United Western Recorders studio complex in Los Angeles. Mullin continued to work for Crosby to develop a videotape recorder (VTR). Television production was mostly live television in its early years, but Crosby wanted the same ability to record that he had achieved in radio. "The Fireside Theater" (1950), sponsored by Procter & Gamble, was his first television production. Mullin had not yet succeeded with videotape, so Crosby filmed the series of 26-minute shows at the Hal Roach Studios, and the "telefilms" were syndicated to individual television stations. Crosby continued to finance the development of videotape. Bing Crosby Enterprises gave the world's first demonstration of videotape recording in Los Angeles on November 11, 1951. Developed by John T. Mullin and Wayne R. Johnson since 1950, the device played back what were described as "blurred and indistinct" images, using a modified Ampex 200 tape recorder and standard quarter-inch (6.3 mm) audio tape moving at 360 inches (9.1 m) per second. A Crosby-led group purchased station KCOP-TV, in Los Angeles, California, in 1954. NAFI Corporation and Crosby purchased television station KPTV in Portland, Oregon, for $4 million on September 1, 1959. In 1960, NAFI purchased KCOP from Crosby's group. In the early 1950s, Crosby helped establish the CBS television affiliate in his hometown of Spokane, Washington. He partnered with Ed Craney, who owned the CBS radio affiliate KXLY (AM), and built a television studio west of Crosby's alma mater, Gonzaga University. After it began broadcasting, the station was sold within a year to Northern Pacific Radio and Television Corporation. Crosby was a fan of thoroughbred horse racing and bought his first racehorse in 1935. In 1937, he became a founding partner of the Del Mar Thoroughbred Club and a member of its board of directors. Operating from the Del Mar Racetrack at Del Mar, California, the group included millionaire businessman Charles S. Howard, who owned a successful racing stable that included Seabiscuit. Charles's son, Lindsay C. Howard, became one of Crosby's closest friends; Crosby named his son Lindsay after him and would purchase Howard's 40-room Hillsborough, California, estate from him in 1965. Crosby and Lindsay Howard formed Binglin Stable to race and breed thoroughbred horses at a ranch in Moorpark in Ventura County, California. They also established the Binglin stock farm in Argentina, where they raced horses at Hipódromo de Palermo in Palermo, Buenos Aires. A number of Argentine-bred horses were purchased and shipped to race in the United States. On August 12, 1938, the Del Mar Thoroughbred Club hosted a $25,000 winner-take-all match race, won by Charles S. Howard's Seabiscuit over Binglin's horse Ligaroti. In 1943, Binglin's horse Don Bingo won the Suburban Handicap at Belmont Park in Elmont, New York. The Binglin Stable partnership came to an end in 1953 as a result of a liquidation of assets by Crosby, who needed to raise enough funds to pay the hefty federal and state inheritance taxes on his deceased wife's estate. The Bing Crosby Breeders' Cup Handicap at Del Mar Racetrack is named in his honor. Crosby's interest in sports extended well beyond racing.
In the 1930s, his friend and former college classmate, Gonzaga head coach Mike Pecarovich, appointed Crosby as an assistant football coach. From 1946 until his death, he owned a 25% share of the Pittsburgh Pirates. Although he was passionate about the team, he was too nervous to watch the deciding Game 7 of the 1960 World Series, choosing to go to Paris with Kathryn and listen to its radio broadcast. Crosby had arranged for Ampex, another of his financial investments, to record the NBC telecast on kinescope. The game was one of the most famous in baseball history, capped off by Bill Mazeroski's walk-off home run. He apparently viewed the complete film just once, and then stored it in his wine cellar, where it remained undisturbed until it was discovered in December 2009. The restored broadcast was shown on MLB Network in December 2010. Crosby was also an avid golfer, and in 1978, he and Bob Hope were awarded the Bob Jones Award, the highest honor given by the United States Golf Association in recognition of distinguished sportsmanship. He is a member of the World Golf Hall of Fame. In 1937, Crosby hosted the first "Crosby Clambake", as it was popularly known, at Rancho Santa Fe Golf Club in Rancho Santa Fe, California, the event's location prior to World War II. Sam Snead won the first tournament, in which the first-place check was for $500. After the war, the event resumed play in 1947 on golf courses in Pebble Beach, where it has been played ever since. Now the AT&T Pebble Beach Pro-Am, it has been a leading event in the world of professional golf. Crosby first took up golf at 12 as a caddy, dropped it, and started again in 1930 with some fellow cast members in Hollywood during the filming of "The King of Jazz". Crosby was accomplished at the sport, with a handicap of two. He competed in both the British and U.S. Amateur championships, was a five-time club champion at Lakeside Golf Club in Hollywood, and once made a hole-in-one on the 16th at Cypress Point. Crosby was a keen fisherman, especially in his younger days, and it remained a pastime he enjoyed throughout his life. In the summer of 1966 he spent a week as the guest of Lord Egremont, staying in Cockermouth and fishing on the River Derwent. His trip was filmed for "The American Sportsman" on ABC, although the week began poorly, as the salmon were not running; he made up for it at the end by catching a number of sea trout. Crosby was married twice. His first wife was actress and nightclub singer Dixie Lee, to whom he was married from 1930 until her death from ovarian cancer in 1952. They had four sons: Gary, twins Dennis and Phillip, and Lindsay. The film "Smash-Up, the Story of a Woman" (1947) is based on Lee's life. The Crosby family lived at 10500 Camarillo Street in North Hollywood for over five years. After his wife died, Crosby had relationships with model Pat Sheehan (who married his son Dennis in 1958) and actresses Inger Stevens and Grace Kelly before marrying actress Kathryn Grant, who converted to Catholicism, in 1957. They had three children: Harry Lillis III (who played Bill in "Friday the 13th"), Mary (best known for portraying Kristin Shepard on TV's "Dallas"), and Nathaniel (the 1981 U.S. Amateur champion in golf). Crosby reportedly had an alcohol problem in his youth, and may have been dismissed from Paul Whiteman's orchestra because of it, but he later brought his drinking under control. According to Giddins, Crosby told his son Gary to stay away from alcohol, adding, "It killed your mother", and suggesting he smoke marijuana instead.
Crosby told Barbara Walters in a 1977 televised interview that he thought marijuana should be legalized. After Crosby's death, his eldest son, Gary, wrote a highly critical memoir, "Going My Own Way", depicting his father as cruel, cold, remote, and physically and psychologically abusive. Crosby's younger son Phillip vociferously disputed his brother Gary's claims about their father. Around the time Gary made his claims, Phillip stated to the press that "Gary is a whining, bitching crybaby, walking around with a two-by-four on his shoulder and just daring people to nudge it off." Nevertheless, Phillip did not deny that Crosby believed in corporal punishment. In an interview with "People", Phillip stated that "we never got an extra whack or a cuff we didn't deserve", and he defended his father again in a 1999 interview with the "Globe". However, Dennis and Lindsay Crosby confirmed that Bing sometimes subjected his sons to harsh physical discipline and verbal put-downs. Regarding the writing of Gary's memoir, Lindsay said, "I'm glad [Gary] did it. I hope it clears up a lot of the old lies and rumors." Unlike Gary, though, Lindsay stated that he preferred to remember "all the good things I did with my dad and forget the times that were rough." When the book was published, Dennis distanced himself by calling it "Gary's business" but did not publicly deny its claims. Bing's younger brother, singer and jazz bandleader Bob Crosby, recalled at the time of Gary's revelations that Bing was a "disciplinarian", as their mother and father had been. He added, "We were brought up that way." In an interview for the same article, Gary clarified that Bing "was like a lot of fathers of that time. He was not out to be vicious, to beat children for his kicks." Crosby's will established a blind trust in which none of the sons received an inheritance until they reached the age of 65. Lindsay Crosby died in 1989 at age 51, and Dennis Crosby died in 1991 at age 56, both by suicide from self-inflicted gunshot wounds. Gary Crosby died of lung cancer in 1995 at age 62, and Phillip Crosby died of a heart attack in 2004 at age 69. Crosby's widow, Kathryn, appeared intermittently in local theater productions and in television tributes to her late husband. Nathaniel Crosby, Crosby's younger son from his second marriage, is a former high-level golfer who won the U.S. Amateur in 1981 at age 19, becoming the youngest winner in the history of that event at the time. Harry Crosby is an investment banker who occasionally makes singing appearances. Denise Crosby, Dennis Crosby's daughter, is also an actress and is known for her role as Tasha Yar on "Star Trek: The Next Generation", and for the recurring role of the Romulan Sela after her withdrawal from the series as a regular cast member. She also appeared in the film adaptation of Stephen King's novel "Pet Sematary". In 2006, Crosby's niece through his sister Mary Rose, Carolyn Schneider, published the laudatory book "Me and Uncle Bing". Disputes between Crosby's two families began in the late 1990s. When Dixie died in 1952, her will provided that her share of the community property be distributed in trust to her sons. After Crosby's death in 1977, the residue of his estate went to a marital trust for the benefit of his widow, Kathryn, and HLC Properties, Ltd., was formed for the purpose of managing his interests, including his right of publicity.
In 1996, Dixie's trust sued HLC and Kathryn for declaratory relief as to the trust's entitlement to interest, dividends, royalties, and other income derived from the community property of Crosby and Dixie. In 1999, the parties settled for approximately $1.5 million. Relying on a retroactive amendment to the California Civil Code, Dixie's trust brought suit again, in 2010, alleging that Crosby's right of publicity was community property and that Dixie's trust was entitled to a share of the revenue it produced. The trial court granted Dixie's trust's claim. The California Court of Appeal reversed, however, holding that the 1999 settlement barred the claim. In light of that ruling, it was unnecessary for the court to decide whether a right of publicity can be characterized as community property under California law. Following his recovery from a life-threatening fungal infection of his right lung in January 1974, Crosby emerged from semi-retirement for a new round of albums and concerts. In March 1977, after videotaping a concert at the Ambassador Auditorium in Pasadena for CBS to commemorate his 50th anniversary in show business, and with Bob Hope looking on, Crosby fell off the stage into an orchestra pit, rupturing a disc in his back, an injury that required a month in the hospital. His first performance after the accident was his last American concert, on August 16, 1977 (the day singer Elvis Presley died). When the electric power failed during his performance, he continued singing without amplification. In September, Crosby, his family, and singer Rosemary Clooney began a concert tour of Britain that included two weeks at the London Palladium. While in the UK, Crosby recorded his final album, "Seasons", and his final TV Christmas special with guest David Bowie on September 11 (which aired a little over a month after Crosby's death). His last concert was in the Brighton Centre on October 10, four days before his death, with British entertainer Dame Gracie Fields in attendance. The following day he made his final appearance in a recording studio and sang eight songs at the BBC Maida Vale studios for a radio program, which also included an interview with Alan Dell. His last recorded performance, accompanied by the Gordon Rose Orchestra, was of the song "Once in a While". Later that afternoon, he met with Chris Harding to take photographs for the "Seasons" album jacket. On October 13, 1977, Crosby flew alone to Spain to play golf and hunt partridge. On October 14, at the La Moraleja Golf Course near Madrid, Crosby played 18 holes of golf. His partner was World Cup champion Manuel Piñero; their opponents were club president César de Zulueta and Valentín Barrios. According to Barrios, Crosby was in good spirits throughout the day and was photographed several times during the round. At the ninth hole, construction workers building a house nearby recognized him, and when asked for a song, Crosby sang "Strangers in the Night". Crosby, who by then had a 13 handicap, lost to his partner by one stroke. As Crosby and his party headed back to the clubhouse, he said, "That was a great game of golf, fellas"; his last words were reportedly "Let's get a Coke." At about 6:30 pm, Crosby collapsed about 20 yards from the clubhouse entrance and died instantly from a massive heart attack. At the clubhouse and later in the ambulance, house physician Dr. Laiseca tried to revive him, but was unsuccessful.
At Reina Victoria Hospital he was administered the last rites of the Catholic Church and was pronounced dead. On October 18, following a private funeral Mass at St. Paul's Catholic Church in Westwood, Crosby was buried at Holy Cross Cemetery in Culver City, California. A plaque was placed at the golf course in his memory. He is a member of the National Association of Broadcasters Hall of Fame in the radio division. The family created an official website on October 14, 2007, the 30th anniversary of Crosby's death. In his autobiography "Don't Shoot, It's Only Me!" (1990), Bob Hope wrote, "Dear old Bing. As we called him, the 'Economy-sized Sinatra'. And what a voice. God I miss that voice. I can't even turn on the radio around Christmas time without crying anymore." Calypso musician Roaring Lion wrote a tribute song in 1939 titled "Bing Crosby", in which he wrote: "Bing has a way of singing with his very heart and soul / Which captivates the world / His millions of listeners never fail to rejoice / At his golden voice ..." Bing Crosby Stadium in Front Royal, Virginia, was named after Crosby in honor of his fundraising and cash contributions for its construction from 1948 to 1950. In 2006, the former Metropolitan Theater of Performing Arts ('The Met') in Spokane, Washington, was renamed the Bing Crosby Theater. On June 25, 2019, "The New York Times Magazine" listed Bing Crosby among hundreds of artists whose material was reportedly destroyed in the 2008 Universal fire. Crosby wrote or co-wrote lyrics to 22 songs. His composition "At Your Command" was No. 1 for three weeks on the U.S. pop singles chart beginning on August 8, 1931. "I Don't Stand a Ghost of a Chance With You" was his most successful composition, recorded by Duke Ellington, Frank Sinatra, Thelonious Monk, Billie Holiday, and Mildred Bailey, among others. Four performances by Bing Crosby have been inducted into the Grammy Hall of Fame, a special Grammy award established in 1973 to honor recordings that are at least 25 years old and that have "qualitative or historical significance".
https://en.wikipedia.org/wiki?curid=4010
Basel Convention The Basel Convention on the Control of Transboundary Movements of Hazardous Wastes and Their Disposal, usually known as the Basel Convention, is an international treaty that was designed to reduce the movements of hazardous waste between nations, and specifically to prevent the transfer of hazardous waste from developed to less developed countries (LDCs). It does not, however, address the movement of radioactive waste. The Convention is also intended to minimize the amount and toxicity of wastes generated, to ensure their environmentally sound management as closely as possible to the source of generation, and to assist LDCs in environmentally sound management of the hazardous and other wastes they generate. The Convention was opened for signature on 22 March 1989, and entered into force on 5 May 1992. As of October 2018, 186 states and the European Union are parties to the Convention. Haiti and the United States have signed the Convention but not ratified it. With the tightening of environmental laws (for example, RCRA) in developed nations in the 1970s, disposal costs for hazardous waste rose dramatically. At the same time, the globalization of shipping made transboundary movement of waste more accessible, and many LDCs were desperate for foreign currency. Consequently, the trade in hazardous waste, particularly to LDCs, grew rapidly. One of the incidents which led to the creation of the Basel Convention was the "Khian Sea" waste disposal incident, in which a ship carrying incinerator ash from the city of Philadelphia in the United States dumped half of its load on a beach in Haiti before being forced away. It sailed for many months, changing its name several times; unable to unload the cargo in any port, the crew was believed to have dumped much of it at sea. Another was the 1988 Koko case, in which five ships transported 8,000 barrels of hazardous waste from Italy to the small town of Koko in Nigeria in exchange for $100 monthly rent paid to a Nigerian for the use of his farmland. These practices have been deemed "toxic colonialism" by many developing countries. At its meeting that took place from 27 November to 1 December 2006, the Conference of the Parties to the Basel Convention focused on issues of electronic waste and the dismantling of ships. According to Maureen Walsh, only around 4% of hazardous wastes that come from OECD countries are actually shipped across international borders. These wastes include, among others, chemical waste, radioactive waste, municipal solid waste, asbestos, incinerator ash, and old tires. Of internationally shipped waste that comes from developed countries, more than half is shipped for recovery and the remainder for final disposal. Increased trade in recyclable materials has led to an increase in the market for used products such as computers, a market valued in billions of dollars. At issue is when used computers stop being a "commodity" and become "waste". As of October 2018, there are 187 parties to the treaty, comprising 184 UN member states, the Cook Islands, the European Union, and the State of Palestine. The nine UN member states that are not party to the treaty are East Timor, Fiji, Grenada, Haiti, San Marino, Solomon Islands, South Sudan, Tuvalu, and the United States. A waste falls under the scope of the Convention if it is within a category of wastes listed in Annex I of the Convention and it exhibits one of the hazardous characteristics contained in Annex III.
In other words, it must both be listed and possess a characteristic such as being explosive, flammable, toxic, or corrosive. The other way that a waste may fall under the scope of the Convention is if it is defined as or considered to be a hazardous waste under the laws of the exporting country, the importing country, or any of the countries of transit. The term "disposal" is defined in Article 2(4) simply by reference to Annex IV, which lists the operations that are understood as disposal or recovery; the listed operations are broad, including recovery and recycling. Alternatively, to fall under the scope of the Convention, it is sufficient for a waste to be included in Annex II, which lists other wastes, such as household wastes and residue from the incineration of household waste. Radioactive waste that is covered under other international control systems, and wastes from the normal operation of ships, are not covered. Annex IX attempts to define "commodities" which are not considered wastes and which would be excluded. In addition to conditions on the import and export of the above wastes, there are stringent requirements for notice, consent, and tracking for movement of wastes across national boundaries. Notably, the Convention places a general prohibition on the exportation or importation of wastes between Parties and non-Parties. The exception to this rule is where the waste is subject to another treaty that does not take away from the Basel Convention. The United States is a notable non-Party to the Convention and has a number of such agreements allowing the shipping of hazardous wastes to Basel Party countries. The OECD Council also has its own control system that governs the trans-boundary movement of hazardous materials between OECD member countries. This allows, among other things, the OECD countries to continue trading in wastes with countries like the United States that have not ratified the Basel Convention. Parties to the Convention must honor import bans of other Parties. Article 4 of the Basel Convention calls for an overall reduction of waste generation. By encouraging countries to keep wastes within their boundaries and as close as possible to the source of generation, the internal pressures should provide incentives for waste reduction and pollution prevention. Parties are generally prohibited from exporting covered wastes to, or importing covered wastes from, non-parties to the Convention. The Convention states that illegal hazardous waste traffic is criminal but contains no enforcement provisions. According to Article 12, Parties are directed to adopt a protocol that establishes liability rules and procedures that are appropriate for damage that comes from the movement of hazardous waste across borders. The current consensus is that, as space is not classed as a "country" under the specific definition, export of e-waste to non-terrestrial locations would not be covered. This has been suggested, somewhat facetiously, as a way to deal with the "Fridge Mountain" and related deposits of waste in the UK and elsewhere, should a cheap means of access to space, such as an orbital tether, ever be built. After the initial adoption of the Convention, some least developed countries and environmental organizations argued that it did not go far enough. Many nations and NGOs argued for a total ban on shipment of all hazardous waste to LDCs.
In particular, the original Convention did not prohibit waste exports to any location except Antarctica, but merely required a notification and consent system known as "prior informed consent" or PIC. Further, many waste traders sought to exploit the good name of recycling and began to justify all exports as moving to recycling destinations. Many believed a full ban was needed, including on exports for recycling. These concerns led to several regional waste trade bans, including the Bamako Convention. Lobbying at the 1995 Basel conference by LDCs, Greenpeace, and several European countries such as Denmark led to the adoption of an amendment to the Convention in 1995, termed the Ban Amendment. The amendment has been accepted by 86 countries and the European Union, but has not entered into force (as that requires ratification by three-fourths of the member states to the Convention). On September 6, 2019, Croatia became the 97th country to ratify the amendment, which will enter into force 90 days later, on December 5, 2019. The Amendment prohibits the export of hazardous waste from a list of developed (mostly OECD) countries to developing countries. The Basel Ban applies to export for any reason, including recycling. An area of special concern for advocates of the Amendment was the sale of ships for salvage, shipbreaking. The Ban Amendment was strenuously opposed by a number of industry groups as well as nations including Australia and Canada. The number of ratifications required for the entry into force of the Ban Amendment is under debate: amendments to the Convention enter into force after ratification by "three-fourths of the Parties who accepted them" [Art. 17.5]; so far, the Parties of the Basel Convention have not agreed whether this means three-fourths of the Parties that were Party to the Basel Convention when the Ban was adopted, or three-fourths of the current Parties of the Convention [see Report of COP 9 of the Basel Convention]. The status of the amendment ratifications can be found on the Basel Secretariat's web page. The European Union fully implemented the Basel Ban in its Waste Shipment Regulation (EWSR), making it legally binding in all EU member states. Norway and Switzerland have similarly fully implemented the Basel Ban in their legislation. In light of the blockage concerning the entry into force of the Ban Amendment, Switzerland and Indonesia have launched a "Country-led Initiative" (CLI) to discuss informally a way forward to ensure that transboundary movements of hazardous wastes, especially to developing countries and countries with economies in transition, do not lead to unsound management of hazardous wastes. The discussion aims to identify why hazardous wastes are still brought to countries that are not able to treat them in a safe manner, and to find solutions. It is hoped that the CLI will contribute to the realization of the objectives of the Ban Amendment. The Basel Convention's website reports on the progress of this initiative.
https://en.wikipedia.org/wiki?curid=4012
BASIC BASIC (Beginners' All-purpose Symbolic Instruction Code) is a family of general-purpose, high-level programming languages whose design philosophy emphasizes ease of use. The original version was designed by John G. Kemeny and Thomas E. Kurtz and released at Dartmouth College in 1964. They wanted to enable students in fields other than science and mathematics to use computers. At the time, nearly all use of computers required writing custom software, which was something only scientists and mathematicians tended to learn. In addition to the language itself, Kemeny and Kurtz developed the Dartmouth Time Sharing System (DTSS), which allowed multiple users to edit and run BASIC programs at the same time. This general model became very popular on minicomputer systems like the PDP-11 and Data General Nova in the late 1960s and early 1970s. Hewlett-Packard produced an entire computer line for this method of operation, introducing the HP2000 series in the late 1960s and continuing sales into the 1980s. Many early video games trace their history to one of these versions of BASIC. The emergence of early microcomputers in the mid-1970s led to the development of a number of BASIC dialects, including Microsoft BASIC in 1975. Due to the tiny main memory available on these machines, often 4 kB, a variety of Tiny BASIC dialects was also created. BASIC was available for almost any system of the era, and naturally became the "de facto" programming language for the home computer systems that emerged in the late 1970s. These machines almost always had a BASIC interpreter installed by default, often in the machine's firmware or sometimes on a ROM cartridge. BASIC fell out of use during the later 1980s as newer machines with far greater capabilities came to market and other programming languages (such as Pascal and C) became viable on them. In 1991, Microsoft released Visual Basic, combining a greatly updated version of BASIC with a visual forms builder. This reignited use of the language, and "VB" remains a major programming language in the form of VB.NET. John G. Kemeny was the math department chairman at Dartmouth College. Based largely on his reputation as an innovator in math teaching, in 1959 the school won an Alfred P. Sloan Foundation award of $500,000 to build a new department building. Thomas E. Kurtz had joined the department in 1956, and from the early 1960s Kemeny and Kurtz agreed on the need for programming literacy among students outside the traditional STEM fields. Kemeny later noted that "Our vision was that every student on campus should have access to a computer, and any faculty member should be able to use a computer in the classroom whenever appropriate. It was as simple as that." Kemeny and Kurtz had made two previous experiments with simplified languages, DARSIMCO (Dartmouth Simplified Code) and DOPE (Dartmouth Oversimplified Programming Experiment). These did not progress past a single freshman class. New experiments using Fortran and ALGOL followed, but Kurtz concluded these languages were too tricky for what they desired. As Kurtz noted, Fortran had numerous oddly formed commands, notably an "almost impossible-to-memorize convention for specifying a loop: 'DO 100, I = 1, 10, 2'. Is it '1, 10, 2' or '1, 2, 10', and is the comma after the line number required or not?" Moreover, the lack of any sort of immediate feedback was a key problem; the machines of the era used batch processing and took a long time to complete a run of a program.
While Kurtz was visiting MIT, Marvin Minsky suggested that time-sharing offered a solution; a single machine could divide up its processing time among many users, giving them the illusion of having a (slow) computer to themselves. Small programs would return results in a few seconds. This led to increasing interest in a system using time-sharing and a new language specifically for use by non-STEM students. Kemeny wrote the first version of BASIC. The acronym "BASIC" comes from the name of an unpublished paper by Thomas Kurtz. The new language was heavily patterned on FORTRAN II; statements were one-to-a-line, numbers were used to indicate the target of loops and branches, and many of the commands were similar or identical to Fortran. However, the syntax was changed wherever it could be improved. For instance, the difficult-to-remember DO loop was replaced by the much easier to remember FOR...TO...STEP form, and where Fortran's DO named the line number of the loop's final statement, BASIC closed the loop with the NEXT statement (a contrast sketched in the example below). Likewise, the cryptic IF statement of Fortran, whose syntax matched a particular instruction of the machine on which it was originally written, became the simpler IF...THEN. These changes made the language much less idiosyncratic while still having an overall structure and feel similar to the original FORTRAN. The project received a $300,000 grant from the National Science Foundation, which was used to purchase a GE-225 computer for processing and a Datanet-30 realtime processor to handle the Teletype Model 33 teleprinters used for input and output. A team of a dozen undergraduates worked on the project for about a year, writing both the DTSS system and the BASIC compiler. The first version of the BASIC language was released on 1 May 1964. One of the graduate students on the implementation team was Mary Kenneth Keller, one of the first people in the United States to earn a Ph.D. in computer science and the first woman to do so. Initially, BASIC concentrated on supporting straightforward mathematical work, with matrix arithmetic support from its initial implementation as a batch language; character string functionality was added by 1965. Usage in the university rapidly expanded, requiring the main CPU to be replaced by a GE-235, and still later by a GE-635. By the early 1970s there were hundreds of terminals connected to the machines at Dartmouth, some of them remotely. General Electric formed a new division to begin selling commercial access to similar machines at other locations. Wanting use of the language to become widespread, its designers made the compiler available free of charge. (In the 1960s, software became a chargeable commodity; until then, it was provided without charge as a service with the very expensive computers, usually available only to lease.) They also made it available to high schools in the Hanover, New Hampshire area and regionally throughout New England on Teletype Model 33 and Model 35 teleprinter terminals connected to Dartmouth via dial-up phone lines, and they put considerable effort into promoting the language. In the following years, as other dialects of BASIC appeared, Kemeny and Kurtz's original BASIC dialect became known as "Dartmouth BASIC". New Hampshire recognized the accomplishment in 2019 when it erected a highway historical marker recognizing the creation of BASIC. BASIC, by its very nature of being small, was naturally suited to porting to the newly emerging minicomputer market.
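To make Kurtz's loop complaint concrete, here is a short sketch in classic line-numbered BASIC of the counting loop he cited ('DO 100, I = 1, 10, 2' in Fortran), expressed in the FOR/NEXT form that Dartmouth BASIC introduced; the summation and the PRINT are illustrative additions, not taken from the Dartmouth manual.

10 REM Fortran's "DO 100 I = 1, 10, 2" expressed in BASIC:
20 REM the loop runs I = 1, 3, 5, 7, 9 and is closed by NEXT,
30 REM not by a numeric statement label
40 LET S = 0
50 FOR I = 1 TO 10 STEP 2
60 LET S = S + I
70 NEXT I
80 PRINT "SUM IS"; S
90 END

Run under a classic interpreter, this prints SUM IS 25. The bounds and step read left to right, resolving exactly the "1, 10, 2 or 1, 2, 10" ambiguity Kurtz complained about in Fortran's DO.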
These minicomputers had very small main memory, perhaps as little as 4 kB in modern terminology, and lacked the high-performance storage, like hard drives, that makes compilers practical. Interpreters for BASIC on these platforms thus became the norm. A particularly important example was HP Time-Shared BASIC, which, like the original Dartmouth system, used two computers working together to implement a timesharing system. One, a low-end machine in the HP 2100 series, was used to control user input and save and load their programs. The other, a high-end version of the same underlying machine, ran the programs and generated output. For the low cost of about $100,000, one could own a machine capable of running between 16 and 32 users at the same time. The system, bundled as the HP 2000, was the first mini platform to offer timesharing and was an immediate runaway success, catapulting HP to become the third-largest vendor in the minicomputer space, behind DEC and Data General (DG). This cemented BASIC as the language of choice for the timesharing market, on systems large and small. DEC initially ignored BASIC. This was due to their work with RAND Corporation, who had purchased a PDP-6 to run their JOSS language, which was conceptually very similar to BASIC. This led DEC to introduce a smaller, cleaned-up version of JOSS known as FOCAL, which they heavily promoted in the late 1960s. However, with timesharing systems widely offering BASIC, and all of their competition in the minicomputer space doing the same, customers were clamouring for BASIC. After management repeatedly ignored their pleas, David H. Ahl took it upon himself to buy a BASIC for the PDP-8, which was a major success in the education market. By the early 1970s, FOCAL and JOSS had been forgotten and BASIC had become almost universal in the minicomputer market. DEC would go on to introduce their updated version, BASIC-PLUS, for use on the RSTS/E time-sharing operating system. During this period a number of simple text-based games were written in BASIC, most notably Mike Mayfield's "Star Trek". David Ahl collected these, some ported from FOCAL, and published them in an educational newsletter he compiled. He later collected a number of these into book form, "101 BASIC Computer Games", published in 1973. During the same period, Ahl was involved in the creation of a small computer for education use, an early personal computer. When management refused to support the concept, Ahl left DEC in 1974 to found the seminal computer magazine "Creative Computing". The book remained popular, and was re-published on several occasions. The introduction of the first microcomputers in the mid-1970s was the start of explosive growth for BASIC. It had the advantage that it was fairly well known to the young designers and computer hobbyists who took an interest in microcomputers. Despite Dijkstra's famous judgement in 1975, "It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration", BASIC was one of the few languages that was both high-level enough to be usable by those without training and small enough to fit into the microcomputers of the day, making it the de facto standard programming language on early microcomputers. The first microcomputer version of BASIC was co-written by Bill Gates, Paul Allen, and Monte Davidoff for their newly formed company, Micro-Soft.
This was released by MITS in punched-tape format for the Altair 8800 shortly after the machine itself, immediately cementing BASIC as the primary language of early microcomputers. Members of the Homebrew Computer Club began circulating copies of the program, causing Gates to write his Open Letter to Hobbyists, complaining about this early example of software piracy. Partially in response, Bob Albrecht urged Dennis Allison to write their own variation of the language. Albrecht had seen BASIC on minicomputers and felt it would be the perfect match for new machines. How to design and implement a stripped-down version of an interpreter for the BASIC language was covered in articles by Allison in the first three quarterly issues of the "People's Computer Company" newsletter published in 1975, and implementations with source code were published in "Dr. Dobb's Journal". This led to a wide variety of versions with added features or other improvements, with versions from Tom Pittman and Li-Chen Wang becoming particularly well known. Micro-Soft, by this time Microsoft, ported their interpreter to the MOS 6502, which quickly became one of the most popular microprocessors of the 8-bit era. When new microcomputers began to appear, notably the "1977 trinity" of the TRS-80, Commodore PET, and Apple II, they either included a version of the MS code or quickly introduced new models with it. By 1978, MS BASIC was a "de facto" standard, and practically every home computer of the 1980s included it in ROM. Upon boot, a BASIC interpreter in direct mode was presented. Commodore Business Machines included Commodore BASIC, based on Microsoft BASIC. The Apple II and TRS-80 each had two versions of BASIC, a smaller introductory version introduced with the initial releases of the machines and a more advanced version developed as interest in the platforms increased. As new companies entered the field, additional versions were added that subtly changed the BASIC family. The Atari 8-bit family had its own Atari BASIC, modified in order to fit on an 8 kB ROM cartridge. Sinclair BASIC was introduced in 1980 with the Sinclair ZX-80 and was later extended for the Sinclair ZX-81 and the Sinclair ZX Spectrum. The BBC published BBC BASIC, developed by Acorn Computers Ltd, incorporating many extra structured programming keywords and advanced floating-point operation features. As the popularity of BASIC grew in this period, computer magazines published complete source code in BASIC for video games, utilities, and other programs. Given BASIC's straightforward nature, it was a simple matter to type in the code from the magazine and execute the program. Different magazines were published featuring programs for specific computers, though some BASIC programs were considered universal and could be used in machines running any variant of BASIC (sometimes with minor adaptations). Many books of type-in programs were also available, and in particular, Ahl published versions of the original 101 BASIC games converted into the Microsoft dialect, issuing the collection from "Creative Computing" as "BASIC Computer Games". This book, and its sequels, provided hundreds of ready-to-go programs that could be easily converted to practically any BASIC-running platform. The book reached the stores in 1978, just as the home computer market was starting off, and it became the first million-selling computer book. Later packages, such as Learn to Program BASIC, would also have gaming as an introductory focus.
On the business-focused CP/M computers, which soon became widespread in small business environments, Microsoft BASIC (MBASIC) was one of the leading applications. When IBM was designing the IBM PC, they followed the paradigm of existing home computers in wanting to have a built-in BASIC. They sourced this from Microsoft – IBM Cassette BASIC – but Microsoft also produced several other versions of BASIC for MS-DOS/PC DOS, including IBM Disk BASIC (BASIC D), IBM BASICA (BASIC A), GW-BASIC (a BASICA-compatible version that did not need IBM's ROM), and QBasic, all typically bundled with the machine. In addition, they produced the Microsoft BASIC Compiler, aimed at professional programmers. Turbo Pascal publisher Borland published Turbo Basic 1.0 in 1985 (successor versions are still being marketed by the original author under the name PowerBASIC). Microsoft wrote the windowed AmigaBASIC that was supplied with version 1.1 of the pre-emptive multitasking GUI Amiga computers (late 1985 / early 1986), although the product unusually did not bear any Microsoft marks. These later variations introduced many extensions, such as improved string manipulation and graphics support, access to the file system, and additional data types. More important were the facilities for structured programming, including additional control structures and proper subroutines supporting local variables. However, by the latter half of the 1980s, users were increasingly using pre-made applications written by others rather than learning programming themselves, while professional programmers now had a wide range of more advanced languages available on small computers. C and later C++ became the languages of choice for professional "shrink wrap" application development. In 1991 Microsoft introduced Visual Basic, an evolutionary development of QuickBasic. It included constructs from that language such as block-structured control statements, parameterized subroutines, and optional static typing, as well as object-oriented constructs from other languages such as "With" and "For Each" (sketched in the example below). The language retained some compatibility with its predecessors, such as the Dim keyword for declarations, "Gosub"/Return statements, and optional line numbers which could be used to locate errors. An important driver for the development of Visual Basic was its role as the new macro language for Microsoft Excel, a spreadsheet program. To the surprise of many at Microsoft, who still initially marketed it as a language for hobbyists, the language came into widespread use for small custom business applications shortly after the release of VB version 3.0, which is widely considered the first relatively stable version. While many advanced programmers still scoffed at its use, VB met the needs of small businesses efficiently, as by that time computers running Windows 3.1 had become fast enough that many business-related processes could be completed "in the blink of an eye" even using a "slow" language, as long as large amounts of data were not involved. Many small business owners found they could create their own small yet useful applications in a few evenings to meet their own specialized needs. Eventually, during the lengthy lifetime of VB3, knowledge of Visual Basic had become a marketable job skill. Microsoft also produced VBScript in 1996 and Visual Basic .NET in 2001. The latter has essentially the same power as C# and Java but with syntax that reflects the original Basic language.
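The following is a minimal, hypothetical sketch of the two constructs named above, in classic Visual Basic style; the type Point, the procedure Demo, and the array names() are invented for illustration and are not from any Microsoft sample.

' Hypothetical classic-VB sketch of "With" and "For Each"
Private Type Point
    X As Single
    Y As Single
End Type

Public Sub Demo()
    Dim p As Point
    With p                    ' "With" scopes repeated member access
        .X = 3
        .Y = 4
    End With

    Dim names(2) As String
    names(0) = "Ann": names(1) = "Bing": names(2) = "Cole"

    Dim n As Variant          ' classic VB requires a Variant (or Object)
    For Each n In names       ' loop variable for "For Each"
        Debug.Print n
    Next n
End Sub

Both constructs reduce boilerplate: "With" avoids repeating the object name, and "For Each" avoids manual index bookkeeping.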
Many other BASIC dialects have also sprung up since 1990, including the open source QB64 and FreeBASIC, inspired by QBasic, and the Visual Basic-styled RapidQ, Basic For Qt, and Gambas. Modern commercial incarnations include PureBasic, PowerBASIC, Xojo, Monkey X, and True BASIC (the direct successor to Dartmouth BASIC from a company controlled by Kurtz). Several web-based simple BASIC interpreters also now exist, including Quite BASIC and Microsoft's Small Basic. Many versions of BASIC are also now available for smartphones and tablets via the Apple App Store or Google Play store for Android. On game consoles, an application for the Nintendo 3DS and Nintendo DSi called "Petit Computer" allows for programming in a slightly modified version of BASIC with DS button support. Variants of BASIC are available on graphing and otherwise programmable calculators made by Texas Instruments, HP, Casio, and others. QBasic, a version of Microsoft QuickBASIC without the linker to make EXE files, is present in the Windows NT and DOS-Windows 95 streams of operating systems and can be obtained for more recent releases like Windows 7, which do not include it. Prior to DOS 5, the Basic interpreter was GW-Basic. QuickBasic is part of a series of three languages issued by Microsoft for the home and office power user and small-scale professional development; QuickC and QuickPascal are the other two. For Windows 95 and 98, which do not have QBasic installed by default, it can be copied from the installation disc, which has a set of directories for old and optional software; other missing commands, like Exe2Bin, are in these same directories. The various Microsoft, Lotus, and Corel office suites and related products are programmable with Visual Basic in one form or another, including LotusScript, which is very similar to VBA 6. The Host Explorer terminal emulator uses WWB as a macro language; more recently, the program and the suite in which it is contained are programmable in an in-house Basic variant known as Hummingbird Basic. The VBScript variant is used for programming web content, Outlook 97, Internet Explorer, and the Windows Script Host. WSH also has a Visual Basic for Applications (VBA) engine installed as the third of the default engines, along with VBScript, JScript, and the numerous proprietary or open source engines which can be installed, like PerlScript, a couple of Rexx-based engines, Python, Ruby, Tcl, Delphi, XLNT, PHP, and others. This means that the two versions of Basic can be used along with the other mentioned languages, as well as LotusScript, in a WSF file, through the component object model, and other WSH and VBA constructions. VBScript is one of the languages that can be accessed by the 4DOS, 4NT, and Take Command enhanced shells. SaxBasic and WWB are also very similar to the Visual Basic line of Basic implementations. The pre-Office 97 macro language for Microsoft Word is known as WordBASIC. Excel 4 and 5 use Visual Basic itself as a macro language. Chipmunk Basic, an old-school interpreter similar to BASICs of the 1970s, is available for Linux, Microsoft Windows, and macOS. The ubiquity of BASIC interpreters on personal computers was such that textbooks once included simple "Try It In BASIC" exercises that encouraged students to experiment with mathematical and computational concepts on classroom or home computers. Popular computer magazines of the day typically included type-in programs.
Futurist and sci-fi writer David Brin mourned the loss of ubiquitous BASIC in a 2006 "Salon" article, as have others who first used computers during this era. In turn, the article prompted Microsoft to develop and release Small Basic. Dartmouth College held a 50th-anniversary celebration for BASIC with a day of events on 30 April and 1 May 2014, as did other organisations; at least one organisation of VBA programmers had organised a 35th-anniversary observance in 1999. A short documentary film was produced for the Dartmouth event. Minimal versions of BASIC had only integer variables and one- or two-letter variable names, which minimized demands on the limited and expensive memory (RAM) of the day. More powerful versions had floating-point arithmetic, and variables could be labelled with names six or more characters long. There were some problems and restrictions in early implementations; for example, Applesoft BASIC allowed variable names to be several characters long, but only the first two were significant. It was thus possible to inadvertently write a program with the variables "LOSS" and "LOAN", which would be treated as the same variable: assigning a value to "LOAN" would silently overwrite the value intended as "LOSS". Keywords could not be used in variables in many early BASICs; "SCORE" would be interpreted as "SC" OR "E", where OR was a keyword. String variables are usually distinguished in many microcomputer dialects by having $ suffixed to their name, and values are often identified as strings by being delimited by "double quotation marks". Arrays in BASIC could contain integers, floating-point numbers, or strings. Some dialects of BASIC supported matrices and matrix operations, useful for the solution of sets of simultaneous linear algebraic equations. These dialects would directly support matrix operations such as assignment, addition, multiplication (of compatible matrix types), and evaluation of a determinant. Many microcomputer BASICs did not support this data type; matrix operations were still possible, but had to be programmed explicitly on array elements. The original Dartmouth Basic was unusual in having a matrix keyword, MAT. Although not implemented by most later microprocessor derivatives, it is used in this example from the 1968 manual, which averages the numbers that are input:

5 LET S = 0
10 MAT INPUT V
20 LET N = NUM
30 IF N = 0 THEN 99
40 FOR I = 1 TO N
45 LET S = S + V(I)
50 NEXT I
60 PRINT S/N
70 GO TO 5
99 END

New BASIC programmers on a home computer might start with a simple program, perhaps using the language's PRINT statement to display a message on the screen; a well-known and often-replicated example is Kernighan and Ritchie's "Hello, World!" program:

10 PRINT "Hello, World!"
20 END

An infinite loop could be used to fill the display with the message. Most first-generation BASIC versions, such as MSX BASIC and GW-BASIC, supported simple data types, loop cycles, and arrays. The following example is written for GW-BASIC, but will work in most versions of BASIC with minimal changes:

10 INPUT "What is your name: "; U$
20 PRINT "Hello "; U$
30 INPUT "How many stars do you want: "; N
40 S$ = ""
50 FOR I = 1 TO N
60 S$ = S$ + "*"
70 NEXT I
80 PRINT S$
90 INPUT "Do you want more stars? "; A$
100 IF LEN(A$) = 0 THEN GOTO 90
110 A$ = LEFT$(A$, 1)
120 IF A$ = "Y" OR A$ = "y" THEN GOTO 30
130 PRINT "Goodbye "; U$
140 END
"; A$ 100 IF LEN(A$) = 0 THEN GOTO 90 110 A$ = LEFT$(A$, 1) 120 IF A$ = "Y" OR A$ = "y" THEN GOTO 30 130 PRINT "Goodbye "; U$ 140 END The resulting dialog might resemble: Second-generation BASICs (for example, VAX Basic, SuperBASIC, True BASIC, QuickBASIC, BBC BASIC, Pick BASIC, PowerBASIC, Liberty BASIC and (arguably) COMAL) introduced a number of features into the language, primarily related to structured and procedure-oriented programming. Usually, line numbering is omitted from the language and replaced with labels (for GOTO) and procedures to encourage easier and more flexible design. In addition keywords and structures to support repetition, selection and procedures with local variables were introduced. The following example is in Microsoft QuickBASIC: REM QuickBASIC example REM Forward declaration - allows the main code to call a REM subroutine that is defined later in the source code DECLARE SUB PrintSomeStars (StarCount!) REM Main program follows INPUT "What is your name: ", UserName$ PRINT "Hello "; UserName$ DO LOOP WHILE UCASE$(Answer$) = "Y" PRINT "Goodbye "; UserName$ END REM subroutine definition SUB PrintSomeStars (StarCount) END SUB Third-generation BASIC dialects such as Visual Basic, Xojo, StarOffice Basic, BlitzMax and PureBasic introduced features to support object-oriented and event-driven programming paradigm. Most built-in procedures and functions are now represented as "methods" of standard objects rather than "operators". Also, the operating system became increasingly accessible to the BASIC language. The following example is in Visual Basic .NET: Public Module StarsProgram End Module
https://en.wikipedia.org/wiki?curid=4015
Butterfly effect In chaos theory, the butterfly effect is the sensitive dependence on initial conditions, in which a small change in one state of a deterministic nonlinear system can result in large differences in a later state. The term, closely associated with the work of Edward Lorenz, is derived from the metaphorical example of the details of a tornado (the exact time of formation, the exact path taken) being influenced by minor perturbations such as the flapping of the wings of a distant butterfly several weeks earlier. Lorenz discovered the effect when he observed that runs of his weather model with initial-condition data that were rounded in a seemingly inconsequential manner would fail to reproduce the results of runs with the unrounded initial-condition data: a very small change in initial conditions had created a significantly different outcome. The idea that small causes may have large effects, in general and in weather specifically, was recognized earlier by French mathematician and engineer Henri Poincaré and American mathematician and philosopher Norbert Wiener. Edward Lorenz's work placed the concept of "instability" of the Earth's atmosphere onto a quantitative base and linked the concept of instability to the properties of large classes of dynamic systems which are undergoing nonlinear dynamics and deterministic chaos.

In "The Vocation of Man" (1800), Johann Gottlieb Fichte says "you could not remove a single grain of sand from its place without thereby ... changing something throughout all parts of the immeasurable whole". Chaos theory and the sensitive dependence on initial conditions were described in the literature in a particular case of the three-body problem by Henri Poincaré in 1890. He later proposed that such phenomena could be common, for example, in meteorology. In 1898, Jacques Hadamard noted general divergence of trajectories in spaces of negative curvature. Pierre Duhem discussed the possible general significance of this in 1908. The idea that the death of one butterfly could eventually have a far-reaching ripple effect on subsequent historical events made its earliest known appearance in "A Sound of Thunder", a 1952 short story by Ray Bradbury about time travel.

In 1961, Lorenz was running a numerical computer model to redo a weather prediction from the middle of the previous run as a shortcut. He entered the initial condition 0.506 from the printout instead of entering the full-precision value 0.506127. The result was a completely different weather scenario. In 1963, Lorenz published a theoretical study of this effect in a highly cited, seminal paper called "Deterministic Nonperiodic Flow" (the calculations were performed on a Royal McBee LGP-30 computer). Following suggestions from colleagues, in later speeches and papers Lorenz used the more poetic butterfly as his example. According to Lorenz, when he failed to provide a title for a talk he was to present at the 139th meeting of the American Association for the Advancement of Science in 1972, Philip Merrilees concocted "Does the flap of a butterfly's wings in Brazil set off a tornado in Texas?" as a title. Although a butterfly flapping its wings has remained constant in the expression of this concept, the location of the butterfly, the consequences, and the location of the consequences have varied widely.
The phrase refers to the idea that a butterfly's wings might create tiny changes in the atmosphere that may ultimately alter the path of a tornado, or delay, accelerate, or even prevent the occurrence of a tornado in another location. The butterfly does not power or directly create the tornado, but the term is intended to imply that the flap of the butterfly's wings can "cause" the tornado, in the sense that the flap of the wings is a part of the initial conditions of an interconnected complex web; one set of conditions leads to a tornado, while the other set does not. The flapping wing represents a small change in the initial condition of the system, which cascades to large-scale alterations of events (compare: domino effect). Had the butterfly not flapped its wings, the trajectory of the system might have been vastly different; it is equally possible that the set of conditions without the butterfly flapping its wings is the set that leads to a tornado. The butterfly effect presents an obvious challenge to prediction, since initial conditions for a system such as the weather can never be known to complete accuracy. This problem motivated the development of ensemble forecasting, in which a number of forecasts are made from perturbed initial conditions.

Some scientists have since argued that the weather system is not as sensitive to initial conditions as previously believed. David Orrell argues that the major contributor to weather forecast error is model error, with sensitivity to initial conditions playing a relatively small role. Stephen Wolfram also notes that the Lorenz equations are highly simplified and do not contain terms that represent viscous effects; he believes that these terms would tend to damp out small perturbations.

While the "butterfly effect" is often explained as being synonymous with sensitive dependence on initial conditions of the kind described by Lorenz in his 1963 paper (and previously observed by Poincaré), the butterfly metaphor was originally applied to work Lorenz published in 1969, which took the idea a step further. Lorenz proposed a mathematical model for how tiny motions in the atmosphere scale up to affect larger systems. He found that the systems in that model could only be predicted up to a specific point in the future, and beyond that, reducing the error in the initial conditions would not increase the predictability (as long as the error is not zero). This demonstrated that a deterministic system could be "observationally indistinguishable" from a non-deterministic one in terms of predictability. Recent re-examinations of this paper suggest that it offered a significant challenge to the idea that our universe is deterministic, comparable to the challenges offered by quantum physics.

Recurrence, the approximate return of a system towards its initial conditions, together with sensitive dependence on initial conditions, are the two main ingredients for chaotic motion. They have the practical consequence of making complex systems, such as the weather, difficult to predict past a certain time range (approximately a week in the case of weather), since it is impossible to measure the starting atmospheric conditions completely accurately. A dynamical system displays sensitive dependence on initial conditions if points arbitrarily close together separate over time at an exponential rate. The definition is not topological, but essentially metrical.
If "M" is the state space for the map formula_1, then formula_1 displays sensitive dependence to initial conditions if for any x in "M" and any δ > 0, there are y in "M", with distance "d"(. , .) such that formula_3 and such that for some positive parameter "a". The definition does not require that all points from a neighborhood separate from the base point "x", but it requires one positive Lyapunov exponent. The simplest mathematical framework exhibiting sensitive dependence on initial conditions is provided by a particular parametrization of the logistic map: which, unlike most chaotic maps, has a closed-form solution: where the initial condition parameter formula_7 is given by formula_8. For rational formula_7, after a finite number of iterations formula_10 maps into a periodic sequence. But almost all formula_7 are irrational, and, for irrational formula_7, formula_10 never repeats itself – it is non-periodic. This solution equation clearly demonstrates the two key features of chaos – stretching and folding: the factor 2"n" shows the exponential growth of stretching, which results in sensitive dependence on initial conditions (the butterfly effect), while the squared sine function keeps formula_10 folded within the range [0, 1]. The butterfly effect is most familiar in terms of weather; it can easily be demonstrated in standard weather prediction models, for example. The climate scientists James Annan and William Connolley explain that chaos is important in the development of weather prediction methods; models are sensitive to initial conditions. They add the caveat: "Of course the existence of an unknown butterfly flapping its wings has no direct bearing on weather forecasts, since it will take far too long for such a small perturbation to grow to a significant size, and we have many more immediate uncertainties to worry about. So the direct impact of this phenomenon on weather prediction is often somewhat wrong." The potential for sensitive dependence on initial conditions (the butterfly effect) has been studied in a number of cases in semiclassical and quantum physics including atoms in strong fields and the anisotropic Kepler problem. Some authors have argued that extreme (exponential) dependence on initial conditions is not expected in pure quantum treatments; however, the sensitive dependence on initial conditions demonstrated in classical motion is included in the semiclassical treatments developed by Martin Gutzwiller and Delos and co-workers. Other authors suggest that the butterfly effect can be observed in quantum systems. Karkuszewski et al. consider the time evolution of quantum systems which have slightly different Hamiltonians. They investigate the level of sensitivity of quantum systems to small changes in their given Hamiltonians. Poulin et al. presented a quantum algorithm to measure fidelity decay, which "measures the rate at which identical initial states diverge when subjected to slightly different dynamics". They consider fidelity decay to be "the closest quantum analog to the (purely classical) butterfly effect". Whereas the classical butterfly effect considers the effect of a small change in the position and/or velocity of an object in a given Hamiltonian system, the quantum butterfly effect considers the effect of a small change in the Hamiltonian system with a given initial position and velocity. This quantum butterfly effect has been demonstrated experimentally. 
Quantum and semiclassical treatments of system sensitivity to initial conditions are known as quantum chaos. The journalist Peter Dizikes, writing in "The Boston Globe" in 2008, notes that popular culture likes the idea of the butterfly effect, but gets it wrong. Whereas Lorenz suggested correctly with his butterfly metaphor that predictability "is inherently limited", popular culture supposes that each event can be explained by finding the small reasons that caused it. Dizikes explains: "It speaks to our larger expectation that the world should be comprehensible – that everything happens for a reason, and that we can pinpoint all those reasons, however small they may be. But nature itself defies this expectation."
https://en.wikipedia.org/wiki?curid=4024
Borland Borland Software Corporation was founded in 1983 by Niels Jensen, Ole Henriksen, Mogens Glad, and Philippe Kahn. Its main business was the development and sale of software development and software deployment products. Borland was first headquartered in Scotts Valley, California, then in Cupertino, California, and then in Austin, Texas. In 2009 the company became a full subsidiary of the British firm Micro Focus International plc.

Three Danish citizens, Niels Jensen, Ole Henriksen, and Mogens Glad, founded Borland Ltd. in August 1981 to develop products like Word Index for the CP/M operating system, using an off-the-shelf company. However, the response to the company's products at the CP/M-82 show in San Francisco showed that a U.S. company would be needed to reach the American market. They met Philippe Kahn, who had just moved to Silicon Valley and who had been a key developer of the Micral. The three Danes had embarked, at first successfully, on marketing software first from Denmark and later from Ireland, before running into difficulties around the time they met Kahn. Kahn was chairman, president, and CEO of Borland Inc. from its inception in 1983 until 1995. The main shareholders at the incorporation of Borland were Niels Jensen (250,000 shares), Ole Henriksen (160,000), Mogens Glad (100,000), and Kahn (80,000).

Borland developed a series of software development tools. Its first product was Turbo Pascal in 1983, developed by Anders Hejlsberg (who later developed .NET and C# for Microsoft); before Borland acquired it, the product was sold in Scandinavia under the name Compas Pascal. 1984 saw the launch of Borland Sidekick, a time-organization, notebook, and calculator utility that was an early terminate-and-stay-resident (TSR) program for DOS operating systems. The company had the largest exhibit at the 1985 West Coast Computer Faire other than IBM or AT&T. Bruce Webster reported that "the legend of Turbo Pascal has by now reached mythic proportions, as evidenced by the number of firms that, in marketing meetings, make plans to become 'the next Borland'". After Turbo Pascal and Sidekick the company launched other applications such as SuperKey and Lightning, all developed in Denmark.

While the Danes remained majority shareholders, board members included Kahn, Tim Berry, John Nash, and David Heller. With the assistance of John Nash and David Heller, both British members of the Borland board, the company was taken public on London's Unlisted Securities Market (USM) in 1986. Schroders was the lead investment banker. According to the London IPO filings, the management team was Philippe Kahn as president, Spencer Ozawa as VP of Operations, Marie Bourget as CFO, and Spencer Leyton as VP of sales and business development, while all software development continued to take place in Denmark, and later London, as the Danish co-founders moved there. A first US IPO followed in 1989, after Ben Rosen joined the Borland board, with Goldman Sachs as the lead banker, and a second offering followed in 1991 with Lazard as the lead banker.

In 1985 Borland acquired Analytica and its Reflex database product. The engineering team of Analytica, managed by Brad Silverberg and including Reflex co-founder Adam Bosworth, became the core of Borland's engineering team in the USA. Brad Silverberg was VP of engineering until he left in early 1990 to head up the Personal Systems division at Microsoft.
Adam Bosworth initiated and headed up the Quattro project until moving to Microsoft later in 1990 to take over the project which eventually became Access.

In 1987 Borland purchased Wizard Systems and incorporated portions of the Wizard C technology into Turbo C; Bob Jervis, the author of Wizard C, became a Borland employee. Turbo C was released on May 18, 1987. This apparently drove a wedge between Borland and Niels Jensen and the other members of his team, who had been working on a brand-new series of compilers at their London development centre. An agreement was reached, and they spun off a company called Jensen & Partners International (JPI), later TopSpeed. JPI first launched an MS-DOS compiler named JPI Modula-2, which later became TopSpeed Modula-2, and followed up with TopSpeed C, TopSpeed C++, and TopSpeed Pascal compilers for both the MS-DOS and OS/2 operating systems. The TopSpeed compiler technology exists today as the underlying technology of the Clarion 4GL programming language, a Windows development tool.

In September 1987 Borland purchased Ansa-Software, including their Paradox (version 2.0) database management tool. Richard Schwartz, a cofounder of Ansa, became Borland's CTO, and Ben Rosen joined the Borland board. The Quattro Pro spreadsheet was launched in 1989 with what were, at the time, notable improvements, including charting capabilities. Lotus Development, under the leadership of Jim Manzi, sued Borland for copyright infringement (see look and feel). The litigation, "Lotus Dev. Corp. v. Borland Int'l, Inc.", brought forward Borland's open-standards position, as opposed to Lotus' closed approach. Borland, under Kahn's leadership, took a position of principle and announced that it would defend against Lotus' legal position and "fight for programmer's rights". After a decision in favor of Borland by the First Circuit Court of Appeals, the case went to the United States Supreme Court. Because Justice John Paul Stevens had recused himself, only eight Justices heard the case, and it ended in a 4–4 tie. As a result, the First Circuit decision remained standing, but the Supreme Court result, being a tie, did not bind any other court and set no national precedent. Additionally, Borland's approach towards software piracy and intellectual property (IP) included its "Borland no-nonsense license agreement", which allowed the developer or user to utilize its products "just like a book": he or she was allowed to make multiple copies of a program, as long as only one copy was in use at any point in time.

In September 1991 Borland purchased Ashton-Tate in an all-stock transaction, bringing the dBASE and InterBase databases in-house. Competition with Microsoft was fierce. Microsoft launched the competing database Microsoft Access and bought the dBASE clone FoxPro in 1992, undercutting Borland's prices. During the early 1990s Borland's implementations of C and C++ outsold Microsoft's. Borland survived as a company, but no longer had the dominance in software tools that it once had. It went through a radical transition in products, financing, and staff, and became a very different company from the one which challenged Microsoft and Lotus in the early 1990s.

The internal problems that arose with the Ashton-Tate merger were a large part of the downfall. Ashton-Tate's product portfolio proved to be weak, with no provision for evolution into the GUI environment of Windows. Almost all product lines were discontinued. The consolidation of duplicate support and development offices was costly and disruptive.
Worst of all, the highest revenue earner of the combined company was dBASE, which had no Windows version ready. Borland had an internal project to clone dBASE for Windows, which was part of the strategy of the acquisition, but by late 1992 this was abandoned due to technical flaws, and the company had to constitute a replacement team (the ObjectVision team, redeployed), headed by Bill Turpin, to redo the job. Borland lacked the financial strength to sustain its marketing and to move internal resources off other products to shore up the dBASE-for-Windows effort. Layoffs occurred in 1993 to keep the company afloat, the third instance of this in five years. By the time dBASE for Windows eventually shipped, the developer community had moved on to other products such as Clipper or FoxBase, and dBASE never regained a significant share of Ashton-Tate's former market. This happened against the backdrop of the rise of Microsoft's combined Office product marketing.

A change in market conditions also contributed to Borland's fall from prominence. In the 1980s, companies had few people who understood the growing personal computer phenomenon, and so most technical people were given free rein to purchase whatever software they thought they needed. Borland had done an excellent job marketing to those with a highly technical bent. By the mid-1990s, however, companies were beginning to ask what the return was on the investment they had made in this loosely controlled PC software buying spree. Company executives were starting to ask questions that were hard for technically minded staff to answer, and so corporate standards began to be created. This required new kinds of marketing and support materials from software vendors, but Borland remained focused on the technical side of its products.

During 1993 Borland explored ties with WordPerfect as a possible way to form a suite of programs to rival Microsoft's nascent integration strategy. WordPerfect itself was struggling with a late and troubled transition to Windows. The eventual joint company effort, named Borland Office for Windows (a combination of the WordPerfect word processor, the Quattro Pro spreadsheet, and the Paradox database), was introduced at the 1993 Comdex computer show. Borland Office never made significant inroads against Microsoft Office. WordPerfect was then bought by Novell. In October 1994, Borland sold Quattro Pro and the rights to sell up to a million copies of Paradox to Novell for $140 million in cash, repositioning the company on its core software development tools and the InterBase database engine, and shifting toward client-server scenarios in corporate applications. This later proved a good foundation for the shift to web development tools.

Philippe Kahn and the Borland board disagreed on how to focus the company, and Kahn resigned as chairman, CEO, and president, after 12 years, in January 1995. Kahn remained on the board until November 7, 1996. Borland named Gary Wetsel as CEO, but he resigned in July 1996. William F. Miller was interim CEO until September of that year, when Whitney G. Lynn became interim president and CEO (along with other executive changes), followed by a succession of CEOs including Dale Fuller and Tod Nielsen.

The Delphi 1 rapid application development (RAD) environment was launched in 1995, under the leadership of Anders Hejlsberg. In 1996 Borland acquired Open Environment Corporation, a Cambridge-based company founded by John J. Donovan. On November 25, 1996, Del Yocam was hired as Borland CEO and chairman.
In 1997, Borland sold Paradox to Corel, but retained all development rights for the core BDE. In November 1997, Borland acquired Visigenic, a middleware company that was focused on implementations of CORBA.

On April 29, 1998, Borland International, Inc. announced that it had become Inprise Corporation. For a number of years, both before and during the Inprise era, Borland suffered from serious financial losses and a poor public image; when the name was changed to Inprise, many thought Borland had gone out of business. In March 1999, dBase was sold to KSoft, Inc., which was soon renamed dBASE Inc. (In 2004 dBASE Inc. was renamed DataBased Intelligence, Inc.) In 1999, Dale L. Fuller replaced Yocam. At this time Fuller's title was "interim president and CEO"; the "interim" was dropped in December 2000. Keith Gottfried served in senior executive positions with the company from 2000 to 2004.

A proposed merger between Inprise and Corel was announced in February 2000, aimed at producing Linux-based products. The scheme was abandoned when Corel's shares fell and it became clear that there was no real strategic fit. InterBase 6.0 was made available as open-source software in July 2000.

On November 14, 2000, Inprise Corporation announced its intention to officially change its name to Borland Software Corporation; the legal name of the company would continue to be Inprise Corporation until the renaming process was completed during the first quarter of 2001, after which the company also expected to change its Nasdaq market symbol from 'INPR' to 'BORL'. On January 22, 2001, Borland Software Corporation announced that it had completed its name change from Inprise Corporation; effective at the open of trading on Nasdaq, the company's market symbol was changed from 'INPR' to 'BORL'.

Under the Borland name and a new management team headed by president and CEO Dale L. Fuller, a now-smaller and profitable Borland refocused on Delphi, and created a version of Delphi and C++Builder for Linux, both under the name Kylix. This brought Borland's expertise in integrated development environments to the Linux platform for the first time. Kylix was launched in 2001. Plans to spin off the InterBase division as a separate company were abandoned after Borland and the people who were to run the new company could not agree on terms for the separation. Borland stopped open-source releases of InterBase and has developed and sold new versions at a fast pace.

In 2001 Delphi 6 became the first integrated development environment to support web services. All of the company's development platforms now support web services. C#Builder was released in 2003 as a native C# development tool, competing with Visual Studio .NET. As of the 2005 release, C#Builder, Delphi for Win32, and Delphi for .NET had been combined into a single IDE called "Borland Developer Studio" (though the combined IDE is still popularly known as "Delphi"). In late 2002 Borland purchased design-tool vendor TogetherSoft and tool publisher Starbase, makers of the StarTeam configuration management tool and the CaliberRM requirements management tool (CaliberRM was eventually renamed "Caliber"). The latest releases of JBuilder and Delphi integrate these tools to give developers a broader set of tools for development.

Former CEO Dale Fuller quit in July 2005, but remained on the board of directors.
Former COO Scott Arnold took the title of interim president and chief executive officer until November 8, 2005, when it was announced that Tod Nielsen would take over as CEO effective November 9, 2005. Nielsen remained with the company until January 2009, when he accepted the position of chief operating officer at VMware; CFO Erik Prusch then took over as acting president and CEO.

In early 2007 Borland announced new branding reflecting its focus on open application lifecycle management. In April 2007 Borland announced that it would relocate its headquarters and development facilities to Austin, Texas. It also had development centers in Singapore; Santa Ana, California; and Linz, Austria.

On May 6, 2009, the company announced it was to be acquired by Micro Focus for $75 million. The transaction was approved by Borland shareholders on July 22, 2009, with Micro Focus acquiring the company for $1.50 per share. Following Micro Focus shareholder approval and the required corporate filings, the transaction was completed in late July 2009. Borland was estimated to have 750 employees at the time. On April 5, 2015, Micro Focus announced the completion of the integration of the Attachmate Group of companies, which had been merged into Micro Focus on November 20, 2014; during the integration period, the affected companies had been merged into a single organization, and in the announced reorganization Borland products would become part of the Micro Focus portfolio. The products acquired from Segue Software include Silk Central, Silk Performer, and Silk Test. The Silk line was first announced in 1997.

Along with the renaming from Borland International, Inc. to Inprise Corporation, the company refocused its efforts on targeting enterprise applications development. Borland hired marketing firm Lexicon Branding to come up with a new name for the company. Yocam explained that the new name, Inprise, was meant to evoke "integrating the enterprise". The idea was to integrate Borland's tools, Delphi, C++Builder, and JBuilder, with enterprise environment software, including Visigenic's implementations of CORBA, VisiBroker for C++ and Java, and the new product, Application Server.

Frank Borland is a mascot character for Borland products. According to Philippe Kahn, the mascot first appeared in advertisements and on the cover of the Borland Sidekick 1.0 manual, in 1984, during the Borland International, Inc. era. Frank Borland also appeared in Turbo Tutor – A Turbo Pascal Tutorial and in Borland JBuilder 2. A live-action version of Frank Borland, created by True Agency Limited, was made after Micro Focus plc had acquired Borland Software Corporation, along with an introductory film about the mascot.
https://en.wikipedia.org/wiki?curid=4027
Buckminster Fuller Richard Buckminster Fuller (July 12, 1895 – July 1, 1983) was an American architect, systems theorist, author, designer, inventor, and futurist. Fuller published more than 30 books, coining or popularizing terms such as "Spaceship Earth", "Dymaxion" (house, car, map...), "ephemeralization", "synergetics", and "tensegrity". He also developed numerous inventions, mainly architectural designs, and popularized the widely known geodesic dome. Carbon molecules known as fullerenes were later named by scientists for their structural and mathematical resemblance to geodesic spheres. Fuller was the second World President of Mensa, from 1974 to 1983.

Fuller was born on July 12, 1895, in Milton, Massachusetts, the son of Richard Buckminster Fuller and Caroline Wolcott Andrews, and grand-nephew of Margaret Fuller, an American journalist, critic, and women's rights advocate associated with the American transcendentalism movement. The unusual middle name, Buckminster, was an ancestral family name. As a child, Richard Buckminster Fuller tried numerous variations of his name; he used to sign his name differently each year in the guest register of his family's summer vacation home at Bear Island, Maine. He finally settled on R. Buckminster Fuller.

Fuller spent much of his youth on Bear Island, in Penobscot Bay off the coast of Maine. He attended a Froebelian kindergarten. He disagreed with the way geometry was taught in school, being unable to experience for himself that a chalk dot on the blackboard represented an "empty" mathematical point, or that a line could stretch off to infinity. To him these were illogical, and they led to his work on synergetics. He often made items from materials he found in the woods, and sometimes made his own tools. He experimented with designing a new apparatus for human propulsion of small boats. By age 12, he had invented a "push-pull" system for propelling a rowboat by use of an inverted umbrella connected to the transom with a simple oarlock, which allowed the user to face forward and point the boat toward its destination. Later in life, Fuller took exception to the term "invention". Years later, he decided that this sort of experience had provided him with not only an interest in design, but also a habit of being familiar with and knowledgeable about the materials that his later projects would require. Fuller earned a machinist's certification, and knew how to use the press brake, stretch press, and other tools and equipment used in the sheet metal trade.

Fuller attended Milton Academy in Massachusetts, and after that began studying at Harvard College, where he was affiliated with Adams House. He was expelled from Harvard twice: first for spending all his money partying with a vaudeville troupe, and then, after having been readmitted, for his "irresponsibility and lack of interest". By his own appraisal, he was a non-conforming misfit in the fraternity environment. Between his sessions at Harvard, Fuller worked in Canada as a mechanic in a textile mill, and later as a laborer in the meat-packing industry. He also served in the U.S. Navy in World War I, as a shipboard radio operator, as an editor of a publication, and as commander of the crash rescue boat USS "Inca". After discharge, he worked again in the meat-packing industry, acquiring management experience. In 1917, he married Anne Hewlett.
During the early 1920s, he and his father-in-law developed the Stockade Building System for producing lightweight, weatherproof, and fireproof housing, although the company would ultimately fail in 1927.

Buckminster Fuller recalled 1927 as a pivotal year of his life. His daughter Alexandra had died in 1922, of complications from polio and spinal meningitis, just before her fourth birthday. Stanford historian Barry Katz found signs that around this time in his life Fuller was suffering from depression and anxiety. Fuller dwelled on his daughter's death, suspecting that it was connected with the Fullers' damp and drafty living conditions. This provided motivation for Fuller's involvement in Stockade Building Systems, a business which aimed to provide affordable, efficient housing.

In 1927, at age 32, Fuller lost his job as president of Stockade. The Fuller family had no savings, and the birth of their daughter Allegra in 1927 added to the financial challenges. Fuller drank heavily and reflected upon the solution to his family's struggles on long walks around Chicago. During the autumn of 1927, Fuller contemplated suicide by drowning in Lake Michigan, so that his family could benefit from a life insurance payment. Fuller said that he had then experienced a profound incident which would provide direction and purpose for his life: he felt as though he was suspended several feet above the ground, enclosed in a white sphere of light, and a voice spoke directly to him. Fuller stated that this experience led to a profound re-examination of his life. He ultimately chose to embark on "an experiment, to find what a single individual could contribute to changing the world and benefiting all humanity". Speaking to audiences later in life, Fuller would regularly recount the story of his Lake Michigan experience and its transformative impact on his life. Historians have been unable to identify direct evidence for this experience within the 1927 papers of Fuller's Chronofile archives, housed at Stanford University. Stanford historian Barry Katz suggests that the suicide story may be a myth which Fuller constructed later in life to summarize this formative period of his career.

In 1927 Fuller resolved to think independently, which included a commitment to "the search for the principles governing the universe and help advance the evolution of humanity in accordance with them ... finding ways of 'doing more with less' to the end that all people everywhere can have more and more". By 1928, Fuller was living in Greenwich Village and spending much of his time at the popular café Romany Marie's, where he had spent an evening in conversation with Marie and Eugene O'Neill several years earlier. Fuller accepted a job decorating the interior of the café in exchange for meals, giving informal lectures several times a week, and models of the Dymaxion house were exhibited at the café. Isamu Noguchi arrived during 1929 (Constantin Brâncuși, an old friend of Marie's, had directed him there), and Noguchi and Fuller were soon collaborating on several projects, including the modeling of the Dymaxion car, based on recent work by Aurel Persu. It was the beginning of their lifelong friendship.

Fuller taught at Black Mountain College in North Carolina during the summers of 1948 and 1949, serving as its Summer Institute director in 1949.
Fuller had been shy and withdrawn, but he was persuaded to participate in a theatrical performance of Erik Satie's "Le piège de Méduse" produced by John Cage, who was also teaching at Black Mountain. During rehearsals, under the tutelage of Arthur Penn, then a student at Black Mountain, Fuller broke through his inhibitions and became confident as a performer and speaker.

At Black Mountain, with the support of a group of professors and students, he began reinventing a project that would make him famous: the geodesic dome. Although the geodesic dome had been created, built, and awarded a German patent by Dr. Walther Bauersfeld on June 19, 1925, Fuller was awarded United States patents; in his patent applications, Fuller neglected to cite that the self-supporting dome had already been built some 26 years prior. Although Fuller undoubtedly popularized this type of structure, he is mistakenly given credit for its design. One of his early models was first constructed in 1945 at Bennington College in Vermont, where he lectured often. Although Bauersfeld's dome could support a full skin of concrete, it was not until 1949 that Fuller erected a geodesic dome building that could sustain its own weight with no practical limits. It was constructed of aluminium aircraft tubing and a vinyl-plastic skin, in the form of an icosahedron. To prove his design, Fuller suspended from the structure's framework several students who had helped him build it. The U.S. government recognized the importance of this work, and employed his firm Geodesics, Inc. in Raleigh, North Carolina, to make small domes for the Marines. Within a few years, there were thousands of such domes around the world.

Fuller's first "continuous tension – discontinuous compression" geodesic dome (a full sphere in this case) was constructed at the University of Oregon Architecture School in 1959, with the help of students. These continuous-tension, discontinuous-compression structures featured single-force compression members (no flexure or bending moments) that did not touch each other and were 'suspended' by the tensional members.

For half a century, Fuller developed many ideas, designs, and inventions, particularly regarding practical, inexpensive shelter and transportation. He documented his life, philosophy, and ideas scrupulously in a daily diary (later called the "Dymaxion Chronofile") and in twenty-eight publications. Fuller financed some of his experiments with inherited funds, sometimes augmented by funds invested by his collaborators, one example being the Dymaxion car project.

International recognition began with the success of huge geodesic domes during the 1950s. Fuller lectured at North Carolina State University in Raleigh in 1949, where he met James Fitzgibbon, who would become a close friend and colleague. Fitzgibbon was director of Geodesics, Inc. and Synergetics, Inc., the first licensees to design geodesic domes. Thomas C. Howard was lead designer, architect, and engineer for both companies. Richard Lewontin, a new faculty member in population genetics at North Carolina State University, provided Fuller with computer calculations for the lengths of the domes' edges.

Fuller began working with architect Shoji Sadao in 1954, and in 1964 they co-founded the architectural firm Fuller & Sadao Inc., whose first project was to design the large geodesic dome for the U.S. Pavilion at Expo 67 in Montreal. This building is now the "Montreal Biosphère".
In 1962, the artist and researcher John McHale wrote the first monograph on Fuller, published by George Braziller in New York.

After employing several Southern Illinois University Carbondale graduate students to rebuild his models following an apartment fire in the summer of 1959, Fuller was recruited by longtime friend Harold Cohen to serve as a research professor of "design science exploration" at the institution's School of Art and Design. According to SIU architecture professor Jon Davey, the position was "unlike most faculty appointments [...] more a celebrity role than a teaching job", in which Fuller offered few courses and was only stipulated to spend two months per year on campus. Nevertheless, his time in Carbondale was "extremely productive", and Fuller was promoted to university professor in 1968 and distinguished university professor in 1972. Working as a designer, scientist, developer, and writer, he continued to lecture for many years around the world. He collaborated at SIU with John McHale. In 1965, they inaugurated the World Design Science Decade (1965 to 1975) at the meeting of the International Union of Architects in Paris, which was, in Fuller's own words, devoted to "applying the principles of science to solving the problems of humanity."

From 1972 until retiring as university professor emeritus in 1975, Fuller held a joint appointment at Southern Illinois University Edwardsville, where he had designed the dome for the campus Religious Center in 1971. During this period, he also held a joint fellowship at a consortium of Philadelphia-area institutions, including the University of Pennsylvania, Bryn Mawr College, Haverford College, Swarthmore College, and the University City Science Center; as a result of this affiliation, the University of Pennsylvania appointed him university professor emeritus in 1975.

Fuller believed human societies would soon rely mainly on renewable sources of energy, such as solar- and wind-derived electricity. He hoped for an age of "omni-successful education and sustenance of all humanity". Fuller referred to himself as "the property of universe", and during one radio interview he gave later in life, he declared himself and his work "the property of all humanity". For his lifetime of work, the American Humanist Association named him the 1969 Humanist of the Year. In 1976, Fuller was a key participant at UN Habitat I, the first UN forum on human settlements.

Fuller was awarded 28 United States patents and many honorary doctorates. In 1960, he was awarded the Frank P. Brown Medal from The Franklin Institute. Fuller was elected as an honorary member of Phi Beta Kappa in 1967, on the occasion of the 50-year reunion of his Harvard class of 1917 (from which he had been expelled in his first year). He was elected a Fellow of the American Academy of Arts and Sciences in 1968. In 1968 he was also elected into the National Academy of Design as an Associate member, becoming a full Academician in 1970. In 1970, he received the Gold Medal award from the American Institute of Architects. In 1976, he received the St. Louis Literary Award from the Saint Louis University Library Associates. In 1977, Fuller received the Golden Plate Award of the American Academy of Achievement. He also received numerous other awards, including the Presidential Medal of Freedom, presented to him on February 23, 1983, by President Ronald Reagan.

Fuller's last filmed interview took place on June 21, 1983, in which he spoke at Norman Foster's Royal Gold Medal for architecture ceremony.
His speech can be watched in the archives of the AA School of Architecture, in which he spoke after Sir Robert Sainsbury's introductory speech and Foster's keynote address. Fuller died on July 1, 1983, 11 days before his 88th birthday. During the period leading up to his death, his wife had been lying comatose in a Los Angeles hospital, dying of cancer. It was while visiting her there that he exclaimed, at a certain point, "She is squeezing my hand!" He then stood up, suffered a heart attack, and died an hour later, at age 87. His wife of 66 years died 36 hours later. They are buried in Mount Auburn Cemetery in Cambridge, Massachusetts.

Buckminster Fuller was a Unitarian, like his grandfather Arthur Buckminster Fuller, a Unitarian minister. Fuller was also an early environmental activist, aware of the Earth's finite resources, and promoted a principle he termed "ephemeralization", which, according to futurist and Fuller disciple Stewart Brand, was defined as "doing more with less". Resources and waste from crude, inefficient products could be recycled into making more valuable products, thus increasing the efficiency of the entire process. Fuller also coined the word "synergetics", a catch-all term used broadly for communicating experiences using geometric concepts and, more specifically, for the empirical study of systems in transformation; his focus was on total system behavior unpredicted by the behavior of any isolated components.

Fuller was a pioneer in thinking globally, and he explored energy and material efficiency in the fields of architecture, engineering, and design. He cited François de Chardenèdes' opinion that petroleum, from the standpoint of its replacement cost in our current energy "budget" (essentially, the net incoming solar flux), had cost nature "over a million dollars" per U.S. gallon (US$300,000 per litre) to produce. From this point of view, its use as a transportation fuel by people commuting to work represents a huge net loss compared to their actual earnings. His views might best be summed up by the quotation: "There is no energy crisis, only a crisis of ignorance."

Though Fuller was concerned about sustainability and human survival under the existing socio-economic system, he remained optimistic about humanity's future. Defining wealth in terms of knowledge, as the "technological ability to protect, nurture, support, and accommodate all growth needs of life", his analysis of the condition of "Spaceship Earth" caused him to conclude that at a certain time during the 1970s, humanity had attained an unprecedented state. He was convinced that the accumulation of relevant knowledge, combined with the quantities of major recyclable resources that had already been extracted from the earth, had attained a critical level, such that competition for necessities had become unnecessary. Cooperation had become the optimum survival strategy. He declared: "selfishness is unnecessary and henceforth unrationalizable ... War is obsolete." He criticized previous utopian schemes as too exclusive, and thought this was a major source of their failure. To work, he thought that a utopia needed to include everyone.

Fuller was influenced by Alfred Korzybski's idea of general semantics. In the 1950s, Fuller attended seminars and workshops organized by the Institute of General Semantics, and he delivered the annual Alfred Korzybski Memorial Lecture in 1955. Korzybski is mentioned in the introduction of his book "Synergetics".
The two shared a remarkable amount of similarity in their formulations of general semantics. In his 1970 book "I Seem To Be a Verb", he wrote: "I live on Earth at present, and I don't know what I am. I know that I am not a category. I am not a thing—a noun. I seem to be a verb, an evolutionary process—an integral function of the universe."

Fuller wrote that the natural analytic geometry of the universe was based on arrays of tetrahedra. He developed this in several ways, from the close-packing of spheres and the number of compressive or tensile members required to stabilize an object in space. One confirming result was that the strongest possible homogeneous truss is cyclically tetrahedral.

He had become a guru of the design, architecture, and 'alternative' communities, such as Drop City, the community of experimental artists to whom he awarded the 1966 "Dymaxion Award" for "poetically economic" domed living structures. Fuller was most famous for his lattice shell structures – geodesic domes, which have been used as parts of military radar stations, civic buildings, environmental protest camps, and exhibition attractions. An examination of the geodesic design by Walther Bauersfeld for the Zeiss-Planetarium, built some 28 years prior to Fuller's work, reveals that Fuller's geodesic dome patent (U.S. 2,682,235, awarded in 1954) is the same design as Bauersfeld's. The construction of such domes is based on extending some basic principles to build simple "tensegrity" structures (tetrahedron, octahedron, and the closest packing of spheres), making them lightweight and stable. The geodesic dome was a result of Fuller's exploration of nature's constructing principles to find design solutions. The Fuller Dome is referenced in the Hugo Award-winning novel "Stand on Zanzibar" by John Brunner, in which a geodesic dome is said to cover the entire island of Manhattan, floating on air due to the hot-air balloon effect of the large air mass under the dome (and perhaps its construction of lightweight materials).

The Dymaxion car was a vehicle designed by Fuller, featured prominently at Chicago's 1933–1934 Century of Progress World's Fair. During the Great Depression, Fuller formed the Dymaxion Corporation and built three prototypes with noted naval architect Starling Burgess and a team of 27 workmen, using donated money as well as a family inheritance. Fuller associated the word "Dymaxion", a portmanteau of the words "dynamic", "maximum", and "tension", with much of his work, to sum up the goal of his study, "maximum gain of advantage from minimal energy input". The Dymaxion was not an automobile per se, but rather the 'ground-taxying mode' of a vehicle that might one day be designed to fly, land, and drive: an "Omni-Medium Transport" for air, land, and water. Fuller focused on the landing and taxiing qualities, and noted severe limitations in its handling. The team made constant improvements and refinements to the platform, and Fuller noted the Dymaxion "was an invention that could not be made available to the general public without considerable improvements". The bodywork was aerodynamically designed for increased fuel efficiency and speed as well as light weight, and its platform featured a lightweight chromoly-steel hinged chassis, a rear-mounted V8 engine, front-wheel drive, and three wheels. The vehicle was steered via the third wheel at the rear, capable of a 90° steering lock. Thus able to steer in a tight circle, the Dymaxion often caused a sensation, bringing nearby traffic to a halt.
Shortly after launch, a prototype crashed after being hit by another car, killing the Dymaxion's driver. The other car was driven by a local politician and was illegally removed from the accident scene, leaving reporters who arrived subsequently to blame the Dymaxion's unconventional design, though investigations exonerated the prototype. Fuller would himself later crash another prototype with his young daughter aboard. Despite courting the interest of important figures from the auto industry, Fuller used his family inheritance to finish the second and third prototypes, eventually selling all three, dissolving the Dymaxion Corporation, and maintaining that the Dymaxion was never intended as a commercial venture. One of the three original prototypes survives.

Fuller's energy-efficient and inexpensive Dymaxion house garnered much interest, but only two prototypes were ever produced. Here the term "Dymaxion" is used in effect to signify a "radically strong and light tensegrity structure". One of Fuller's Dymaxion Houses is on display as a permanent exhibit at the Henry Ford Museum in Dearborn, Michigan. Designed and developed during the mid-1940s, this prototype is a round structure (not a dome), shaped something like the flattened "bell" of certain jellyfish. It has several innovative features, including revolving dresser drawers and a fine-mist shower that reduces water consumption. According to Fuller biographer Steve Crooks, the house was designed to be delivered in two cylindrical packages, with interior color panels available at local dealers. A circular structure at the top of the house was designed to rotate around a central mast, using natural winds for cooling and air circulation.

Conceived nearly two decades earlier, and developed in Wichita, Kansas, the house was designed to be lightweight, adapted to windy climates, cheap to produce, and easy to assemble. Because of its light weight and portability, the Dymaxion House was intended to be the ideal housing for individuals and families who wanted the option of easy mobility. The design included a "Go-Ahead-With-Life Room" stocked with maps, charts, and helpful tools for travel "through time and space". It was to be produced using factories, workers, and technologies that had produced World War II aircraft. It looked ultramodern at the time, built of metal and sheathed in polished aluminum. Due to publicity, there were many orders during the early post-war years, but the company that Fuller and others had formed to produce the houses failed due to management problems.

In 1967, Fuller developed a concept for an offshore floating city named Triton City and published a report on the design the following year. Models of the city aroused the interest of President Lyndon B. Johnson, who, after leaving office, had them placed in the Lyndon Baines Johnson Library and Museum.

In 1969, Fuller began the Otisco Project, named after its location in Otisco, New York. The project developed and demonstrated concrete sprayed onto mesh-covered wireforms as a means of producing large-scale, load-bearing spanning structures built on-site, without the use of pouring molds, other adjacent surfaces, or hoisting. The initial method used a circular concrete footing in which anchor posts were set. Tubes cut to length and with their ends flattened were then bolted together to form a duodeca-rhombicahedron (22-sided hemisphere) geodesic structure.
The form was then draped with layers of ¼-inch wire mesh attached by twist ties. Concrete was sprayed onto the structure, building up a solid layer which, when cured, would support additional concrete to be added by a variety of traditional means. Fuller referred to these buildings as monolithic ferroconcrete geodesic domes. However, the tubular frame form proved problematic for setting windows and doors. It was replaced by iron rebar set vertically in the concrete footing and then bent inward and welded in place to create the dome's wireform structure, a method which performed satisfactorily. Domes up to three stories tall built with this method proved to be remarkably strong. Other shapes such as cones, pyramids, and arches proved equally adaptable. The project was enabled by a grant underwritten by Syracuse University and sponsored by U.S. Steel (rebar), the Johnson Wire Corp. (mesh), and Portland Cement Company (concrete). The ability to build large, complex, load-bearing concrete spanning structures in free space would open many possibilities in architecture, and it is considered one of Fuller's greatest contributions.

Fuller, along with co-cartographer Shoji Sadao, also designed an alternative projection map, called the Dymaxion map. This was designed to show Earth's continents with minimum distortion when projected or printed on a flat surface. In the 1960s, Fuller developed the World Game, a collaborative simulation game played on a 70-by-35-foot Dymaxion map, in which players attempt to solve world problems. The object of the simulation game is, in Fuller's words, to "make the world work, for 100% of humanity, in the shortest possible time, through spontaneous cooperation, without ecological offense or the disadvantage of anyone".

Buckminster Fuller wore thick-lensed spectacles to correct his extreme hyperopia, a condition that went undiagnosed for the first five years of his life. Fuller's hearing was damaged during his Naval service in World War I and deteriorated during the 1960s. After experimenting with bullhorns as hearing aids during the mid-1960s, Fuller adopted electronic hearing aids from the 1970s onward. In public appearances, Fuller always wore dark-colored suits, appearing like "an alert little clergyman". Previously, he had experimented with unconventional clothing immediately after his 1927 epiphany, but found that breaking social fashion customs made others devalue or dismiss his ideas. Fuller learned the importance of physical appearance as part of one's credibility, and decided to become "the invisible man" by dressing in clothes that would not draw attention to himself. With self-deprecating humor, Fuller described this black-suited appearance as resembling a "second-rate bank clerk". Writer Guy Davenport met him in 1965 and described him thus:

He's a dwarf, with a worker's hands, all callouses and squared fingers. He carries an ear trumpet, of green plastic, with WORLD SERIES 1965 printed on it. His smile is golden and frequent; the man's temperament is angelic, and his energy is just a touch more than that of [Robert] Gallway (champeen runner, footballeur, and swimmer). One leg is shorter than the other, and the prescription shoe worn to correct the imbalance comes from a country doctor deep in the wilderness of Maine. Blue blazer, Khrushchev trousers, and a briefcase full of Japanese-made wonderments; ...

Following his global prominence from the 1960s onward, Fuller became a frequent flier, often crossing time zones to lecture.
In the 1960s and 1970s, he wore three watches simultaneously: one for the time zone of his office at Southern Illinois University, one for the time zone of the location he would next visit, and one for the time zone he was currently in. In the 1970s, Fuller was only in 'homely' locations (his personal home in Carbondale, Illinois; his holiday retreat on Bear Island, Maine; and his daughter's home in Pacific Palisades, California) roughly 65 nights per year—the other 300 nights were spent in hotel beds in the locations he visited on his lecturing and consulting circuits. In the 1920s, Fuller experimented with polyphasic sleep, which he called "Dymaxion sleep". Inspired by the sleep habits of animals such as dogs and cats, Fuller worked until he was tired and then slept short naps. This generally resulted in Fuller taking 30-minute naps every six hours. This allowed him "twenty-two thinking hours a day", which aided his work productivity. Fuller reportedly kept this Dymaxion sleep habit for two years, before quitting the routine because it conflicted with his business associates' sleep habits. Despite no longer personally partaking in the habit, in 1943 Fuller suggested Dymaxion sleep as a strategy that the United States could adopt to win World War II. Despite only practicing true polyphasic sleep for a period during the 1920s, Fuller was known for his stamina throughout his life. He was described as "tireless" by Barry Farrell in "Life" magazine, who noted that Fuller stayed up all night replying to mail during Farrell's 1970 trip to Bear Island. In his seventies, Fuller generally slept for 5–8 hours per night. Fuller documented his life copiously from 1915 to 1983, leaving approximately 270 feet of papers in a collection called the Dymaxion Chronofile. He also kept copies of all incoming and outgoing correspondence. The enormous R. Buckminster Fuller Collection is currently housed at Stanford University. In his youth, Fuller experimented with several ways of presenting himself: R. B. Fuller, Buckminster Fuller; but as an adult he finally settled on R. Buckminster Fuller, and signed his letters as such. However, he preferred to be addressed as simply "Bucky". Buckminster Fuller spoke and wrote in a unique style and said it was important to describe the world as accurately as possible. Fuller often created long run-on sentences and used unusual compound words (omniwell-informed, intertransformative, omni-interaccommodative, omniself-regenerative) as well as terms he himself invented. Fuller used the word "Universe" without the definite or indefinite articles ("the" or "a") and always capitalized the word. Fuller wrote that "by Universe I mean: the aggregate of all humanity's consciously apprehended and communicated (to self or others) Experiences". The words "down" and "up", according to Fuller, are awkward in that they refer to a planar concept of direction inconsistent with human experience. The words "in" and "out" should be used instead, he argued, because they better describe an object's relation to a gravitational center, the Earth. "I suggest to audiences that they say, 'I'm going "outstairs" and "instairs."' At first that sounds strange to them; they all laugh about it. But if they try saying in and out for a few days in fun, they find themselves beginning to realize that they are indeed going inward and outward in respect to the center of Earth, which is our Spaceship Earth. And for the first time they begin to feel real 'reality.'" "World-around" is a term coined by Fuller to replace "worldwide".
The general belief in a flat Earth died out in classical antiquity, so using "wide" is an anachronism when referring to the surface of the Earth—a spheroidal surface has area and encloses a volume but has no width. Fuller held that unthinking use of obsolete scientific ideas detracts from and misleads intuition. Other neologisms collectively invented by the Fuller family, according to Allegra Fuller Snyder, are the terms "sunsight" and "sunclipse", replacing "sunrise" and "sunset" to overturn the geocentric bias of most pre-Copernican celestial mechanics. Fuller also invented the word "livingry", as opposed to weaponry (or "killingry"), to mean that which is in support of all human, plant, and Earth life. "The architectural profession—civil, naval, aeronautical, and astronautical—has always been the place where the most competent thinking is conducted regarding livingry, as opposed to weaponry." As well as contributing significantly to the development of tensegrity technology, Fuller invented the term "tensegrity", a portmanteau of "tensional integrity". "Tensegrity describes a structural-relationship principle in which structural shape is guaranteed by the finitely closed, comprehensively continuous, tensional behaviors of the system and not by the discontinuous and exclusively local compressional member behaviors. Tensegrity provides the ability to yield increasingly without ultimately breaking or coming asunder." "Dymaxion" is a portmanteau of "dynamic maximum tension". It was coined around 1929 by two admen at Marshall Field's department store in Chicago to describe Fuller's concept house, which was shown as part of a house-of-the-future store display. They created the term using three words that Fuller used repeatedly to describe his design – dynamic, maximum, and tension. Fuller also helped to popularize the concept of Spaceship Earth: "The most important fact about Spaceship Earth: an instruction manual didn't come with it." Among the many people who were influenced by Buckminster Fuller are: Constance Abernathy, Ruth Asawa, J. Baldwin, Michael Ben-Eli, Pierre Cabrol, John Cage, Joseph Clinton, Peter Floyd, Medard Gabel, Michael Hays, Ted Nelson, David Johnston, Peter Jon Pearce, Shoji Sadao, Edwin Schlossberg, Kenneth Snelson, Robert Anton Wilson and Stewart Brand. An allotrope of carbon, fullerene, and a particular molecule of that allotrope, C60 (buckminsterfullerene, or the "buckyball"), have been named after him. The buckminsterfullerene molecule, which consists of 60 carbon atoms, very closely resembles a spherical version of Fuller's geodesic dome. The 1996 Nobel Prize in Chemistry was awarded to Kroto, Curl, and Smalley for their discovery of the fullerenes. He is quoted in the lyric of "The Tower of Babble" in the musical "Godspell": "Man is a complex of patterns and processes." The indie band Driftless Pony Club named their 2011 album, "Buckminster", after him. All the songs within the album are based upon his life and works. On July 12, 2004, the United States Post Office released a new commemorative stamp honoring R. Buckminster Fuller on the 50th anniversary of his patent for the geodesic dome and on the occasion of his 109th birthday. The stamp's design replicated the January 10, 1964 cover of "Time Magazine". Fuller was the subject of two documentary films: "The World of Buckminster Fuller" (1971) and "Buckminster Fuller: Thinking Out Loud" (1996).
Additionally, filmmaker Sam Green and the band Yo La Tengo collaborated on a 2012 "live documentary" about Fuller, "The Love Song of R. Buckminster Fuller". In June 2008, the Whitney Museum of American Art presented "Buckminster Fuller: Starting with the Universe", the most comprehensive retrospective to date of his work and ideas. The exhibition traveled to the Museum of Contemporary Art, Chicago in 2009. It presented a combination of models, sketches, and other artifacts, representing six decades of the artist's integrated approach to housing, transportation, communication, and cartography. It also featured his extensive connections with Chicago from the years he spent living, teaching, and working in the city. In 2009, a number of US companies decided to repackage spherical magnets and sell them as toys. One company, Maxfield & Oberton, told "The New York Times" that they saw the product on YouTube and decided to repackage the magnets as "Buckyballs", because they could self-form and hold together in shapes reminiscent of the Fuller-inspired buckyballs. The buckyball toy launched at the New York International Gift Fair in 2009 and sold in the hundreds of thousands, but by 2010 it began to run into toy-safety problems, and the company was forced to recall the packages that were labelled as toys. Robert Kiyosaki's 2015 book "Second Chance" is largely about Kiyosaki's interactions with Fuller, and Fuller's unusual final book "Grunch of Giants". In 2012, the San Francisco Museum of Modern Art hosted "The Utopian Impulse" – a show about Buckminster Fuller's influence in the Bay Area. Featured were concepts, inventions and designs for creating "free energy" from natural forces, and for sequestering carbon from the atmosphere. The show ran January through July. Fuller is briefly mentioned in the 2014 superhero film "X-Men: Days of Future Past", when Kitty Pryde is giving a lecture to a group of students regarding utopian architecture. On a different note, Fuller's quote "Those who play with the Devil's toys, will be brought by degree to wield his sword" was used and referenced as the first display seen in the strategy sci-fi video game "XCOM: Enemy Unknown" developed by Firaxis Games. "The House of Tomorrow" is a 2017 American independent drama film written and directed by Peter Livolsi, based on Peter Bognanni's 2010 novel of the same name, featuring Asa Butterfield, Alex Wolff, Nick Offerman, Maude Apatow, and Ellen Burstyn. Burstyn's character is obsessed with all things Buckminster Fuller, providing retro-futurist tours of her geodesic home; the film includes authentic video of Fuller talking and sailing with Burstyn, who had befriended him in real life.
https://en.wikipedia.org/wiki?curid=4031
Bill Watterson William Boyd Watterson II (born July 5, 1958) is an American former cartoonist and the author of the comic strip "Calvin and Hobbes", which was syndicated from 1985 to 1995. Watterson stopped drawing "Calvin and Hobbes" at the end of 1995 with a short statement to newspaper editors and his readers that he felt he had achieved all he could in the medium. Watterson is known for his negative views on licensing and comic syndication, his efforts to expand and elevate the newspaper comic as an art form, and his move back into private life after he stopped drawing "Calvin and Hobbes". Watterson was born in Washington, D.C., where his father, James G. Watterson (1932–2016), worked as a patent attorney, and grew up in Chagrin Falls, Ohio; this suburban Midwestern setting was part of the inspiration for "Calvin and Hobbes". The family relocated to Chagrin Falls in 1965, when Watterson was six, because his mother Kathryn wanted to be closer to her family and felt that the small town was a good place to raise children. Watterson drew his first cartoon at age eight, and spent much time in childhood alone, drawing and cartooning. This continued through his school years, during which time he discovered comic strips such as "Pogo", "Krazy Kat", and Charles Schulz's "Peanuts", which subsequently inspired and influenced his desire to become a professional cartoonist. On one occasion when he was in fourth grade, he wrote a letter to Charles Schulz, who responded — to Watterson's surprise — making a big impression on him at the time. His parents encouraged him in his artistic pursuits. Later, they recalled him as a "conservative child" — imaginative, but "not in a fantasy way", and certainly nothing like the character of Calvin that he later created. Watterson found avenues for his cartooning talents throughout primary and secondary school, creating high school-themed superhero comics with his friends and contributing cartoons and art to the school newspaper and yearbook. From 1976 to 1980, Watterson attended Kenyon College and graduated with a Bachelor of Arts degree in political science. He had already decided on a career in cartooning, but he felt his studies would help him move into editorial cartooning. At college, he continued to develop his art skills; during his sophomore year, he painted Michelangelo's "Creation of Adam" on the ceiling of his dorm room. He also contributed cartoons to the college newspaper, some of which included the original "Spaceman Spiff" cartoons. Later, when Watterson was creating names for the characters in his comic strip, he decided on Calvin (after the Protestant reformer John Calvin) and Hobbes (after the social philosopher Thomas Hobbes), allegedly as a "tip of the hat" to Kenyon's political science department. In "The Complete Calvin and Hobbes", Watterson stated that Calvin was named for "a 16th-century theologian who believed in predestination," and Hobbes for "a 17th-century philosopher with a dim view of human nature." Watterson wrote a brief, tongue-in-cheek autobiography in the late 1980s. Watterson was inspired by the work of "Cincinnati Enquirer" political cartoonist Jim Borgman, a 1976 graduate of Kenyon College who currently draws "Zits", and decided to try to follow the same career path as Borgman, who in turn offered support and encouragement to the aspiring artist.
Watterson graduated in 1980 and was hired on a trial basis at the "Cincinnati Post", a competitor of the "Enquirer". Watterson quickly discovered that the job was full of unexpected challenges which prevented him from performing his duties to the standards set for him. Not the least of these challenges was his unfamiliarity with the Cincinnati political scene: he had never resided in or near the city, having grown up in the Cleveland area and attended college in central Ohio. The "Post" abruptly fired Watterson before his contract was up. He then joined a small advertising agency and worked there for four years as a designer, creating grocery advertisements while also working on his own projects, including development of his own cartoon strip and contributions to "Target: The Political Cartoon Quarterly". As a freelance artist, Watterson has drawn other works for various merchandise, including album art for his brother's band, calendars, clothing graphics, educational books, magazine covers, posters, and postcards. Watterson has said that he works for personal fulfillment. As he told the graduating class of 1990 at Kenyon College, "It's surprising how hard we'll work when the work is done just for ourselves." "Calvin and Hobbes" was first published on November 18, 1985. In "The Calvin and Hobbes Tenth Anniversary Book", he wrote that his influences included Charles Schulz's "Peanuts", Walt Kelly's "Pogo", and George Herriman's "Krazy Kat". Watterson wrote the introduction to the first volume of "The Komplete Kolor Krazy Kat". Watterson's style also reflects the influence of Winsor McCay's "Little Nemo in Slumberland". Like many artists, Watterson incorporated elements of his life, interests, beliefs, and values into his work—for example, his hobby as a cyclist, memories of his own father's speeches about "building character", and his views on merchandising and corporations. Watterson's cat Sprite very much inspired the personality and physical features of Hobbes. Watterson spent much of his career trying to change the climate of newspaper comics. He believed that the artistic value of comics was being undermined, and that the space they occupied in newspapers continually decreased, subject to the arbitrary whims of shortsighted publishers. Furthermore, he opined that art should not be judged by the medium for which it is created (i.e., there is no "high" art or "low" art—just art). For years, Watterson battled against pressure from publishers to merchandise his work, something that he felt would cheapen his comic. He refused to merchandise his creations on the grounds that displaying "Calvin and Hobbes" images on commercially sold mugs, stickers, and T-shirts would devalue the characters and their personalities. Watterson said that Universal kept putting pressure on him, and that he had signed his contract without fully perusing it because, as a new artist, he was happy to find a syndicate willing to give him a chance (two syndicates had previously turned him down). He added that the contract was so one-sided that, if Universal really wanted to, it could license his characters against his will, and could even fire him and continue "Calvin and Hobbes" with a new artist. Watterson's position eventually won out and he was able to renegotiate his contract so that he would receive all rights to his work, but he later added that the licensing fight exhausted him and contributed to the need for a nine-month sabbatical in 1991.
Despite Watterson's efforts, many unofficial knockoffs have been found, including items that depict Calvin and Hobbes consuming alcohol or Calvin urinating on a logo. Watterson has said, "Only thieves and vandals have made money on 'Calvin and Hobbes' merchandise." Watterson was critical of the prevailing format for the Sunday comic strip that was in place when he began drawing (and still is, to varying degrees). The typical layout consists of three rows with eight total squares, which take up half a page if published at their normal size. (In this context, a half-page is an absolute size, approximately half a nominal page size, and is not related to the actual page size on which a cartoon might eventually be printed for distribution.) Some newspapers have limited space for their Sunday features and reduce the size of the strip. One of the more common ways is to cut out the top two panels, which Watterson believed forced him to waste that space on throwaway jokes that did not always fit the strip. While he was set to return from his first sabbatical (a second took place during 1994), Watterson discussed with his syndicate a new format for "Calvin and Hobbes" that would enable him to use his space more efficiently and would almost require the papers to publish it as a half-page. Universal agreed that they would sell the strip as the half-page and nothing else, which angered papers and drew criticism of Watterson from both editors and some of his fellow cartoonists (whom he described as "unnecessarily hot-tempered"). Eventually, Universal compromised and agreed to offer papers a choice between the full half-page and a reduced-size version to alleviate concerns about the size issue. Watterson conceded that this caused him to lose space in many papers, but he said that, in the end, it was a benefit because he felt that he was giving the papers' readers a better strip for their money, and editors were free not to run "Calvin and Hobbes" at their own risk. He added that he was not going to apologize for drawing a popular feature. Watterson announced the end of "Calvin and Hobbes" on November 9, 1995, in a letter to newspaper editors. The last strip of "Calvin and Hobbes" was published on December 31, 1995. In the years since "Calvin and Hobbes" ended, many attempts have been made to contact Watterson. Both "The Plain Dealer" and the "Cleveland Scene" sent reporters, in 1998 and 2003 respectively, but neither was able to make contact with the media-shy Watterson. Since 1995, Watterson has taken up painting, at one point drawing landscapes of the woods with his father. He has kept away from the public eye and shown no interest in resuming the strip, creating new works based on the strip's characters, or embarking on new commercial projects, though he has published several "Calvin and Hobbes" "treasury collection" anthologies. He does not sign autographs or license his characters, staying true to his stated principles. In previous years, Watterson was known to sneak autographed copies of his books onto the shelves of the Fireside Bookshop, a family-owned bookstore in his hometown of Chagrin Falls, Ohio. He ended this practice after discovering that some of the autographed books were being sold online for high prices. Watterson rarely gives interviews or makes public appearances. His lengthiest interviews include the cover story in "The Comics Journal" No.
127 in February 1989, an interview that appeared in a 1987 issue of "Honk Magazine", and one in a 2015 Watterson exhibition catalogue. On December 21, 1999, a short piece was published in the "Los Angeles Times", written by Watterson to mark the forthcoming retirement of iconic "Peanuts" creator Charles Schulz. In or around 2003, Gene Weingarten of "The Washington Post" sent Watterson the first edition of the "Barnaby" book as an incentive, hoping to land an interview. Weingarten passed the book to Watterson's parents, along with a message, and declared that he would wait in his hotel for as long as it took Watterson to contact him. Watterson's editor Lee Salem called the next day to tell Weingarten that the cartoonist would not be coming. In 2004, Watterson and his wife Melissa bought a home in the Cleveland suburb of Cleveland Heights, Ohio. In 2005, they completed the move from their home in Chagrin Falls to their new residence. In October 2005, Watterson answered 15 questions submitted by readers. In October 2007, he wrote a review of "Schulz and Peanuts", a biography of Charles Schulz, in "The Wall Street Journal". In 2008, he provided a foreword for the first book collection of Richard Thompson's "Cul de Sac" comic strip. In April 2011, a representative for Andrews McMeel received a package from a "William Watterson in Cleveland Heights, Ohio" which contained an oil-on-board painting of the "Cul de Sac" character Petey Otterloop, done by Watterson for the "Team Cul de Sac" fundraising project for Parkinson's disease, in honor of Richard Thompson, who had been diagnosed in 2009. Watterson's syndicate has since become Universal Uclick, which said that the painting was the first new artwork of his that the syndicate had seen since "Calvin and Hobbes" ended in 1995. October 2009 saw the publication of "Looking for Calvin and Hobbes", Nevin Martell's humorous account of seeking an interview with Watterson. In his search he interviews friends, co-workers and family, but never gets to meet the artist himself. In early 2010, Watterson was interviewed by "The Plain Dealer" on the 15th anniversary of the end of "Calvin and Hobbes", explaining his decision to discontinue the strip. In October 2013, the magazine "Mental Floss" published an interview with Watterson, only the second since the strip ended. Watterson again confirmed that he would not be revisiting "Calvin and Hobbes", and that he was satisfied with his decision. He also gave his opinion on the changes in the comic-strip industry and where he thought it was headed. In 2013 the documentary "Dear Mr. Watterson", exploring the cultural impact of "Calvin and Hobbes", was released. On February 26, 2014, Watterson published his first cartoon since the end of "Calvin and Hobbes": a poster for the documentary "Stripped". In 2014, Watterson co-authored "The Art of Richard Thompson" with Washington Post cartoonist Nick Galifianakis and David Apatoff. In June 2014, three strips of "Pearls Before Swine" (published June 4, June 5, and June 6, 2014) featured guest illustrations by Watterson, after mutual friend Nick Galifianakis connected him with cartoonist Stephan Pastis; the two communicated via e-mail. Pastis likened this unexpected collaboration to getting "a glimpse of Bigfoot". "I thought maybe Stephan and I could do this goofy collaboration and then use the result to raise some money for Parkinson's research in honor of Richard Thompson. It seemed like a perfect convergence", Watterson told the "Washington Post".
The day that Stephan Pastis returned to his own strip, he paid tribute to Watterson by alluding to the final strip of "Calvin and Hobbes" from December 31, 1995. On November 5, 2014, a poster drawn by Watterson was unveiled for the 2015 Angoulême International Comics Festival, where he had been awarded the Grand Prix in 2014. In 2015, three "original" Calvin and Hobbes comic strips were listed for sale on eBay. These pieces proved to be fakes after the Billy Ireland Cartoon Library & Museum at Ohio State released a statement saying the actual originals were in its archives. On April 1, 2016, for April Fools' Day, Berkeley Breathed posted on Facebook that Watterson had signed "the franchise over to my 'administration'". He then posted a comic featuring Calvin, Hobbes, and Opus. The comic is signed by Watterson, though the extent of his actual involvement is unclear. Breathed posted another "Calvin County" strip featuring Calvin and Hobbes, also "signed" by Watterson, on April 1, 2017, along with a fake "The New York Times" story ostensibly detailing the "merger" of the two strips. Berkeley Breathed included Hobbes in a November 27, 2017, strip as a stand-in for the character Steve Dallas. In 2001, the Billy Ireland Cartoon Library & Museum at Ohio State University mounted an exhibition of Watterson's Sunday strips. He chose thirty-six of his favorites, displaying them with both the original drawing and the colored finished product, with most pieces featuring personal annotations. Watterson also wrote an accompanying essay that served as the foreword for the exhibit, called "Calvin and Hobbes: Sunday Pages 1985–1995", which opened on September 10, 2001. It was taken down in January 2002. The accompanying published catalog had the same title. From March 22 to August 3, 2014, Watterson exhibited again at the Billy Ireland Cartoon Library & Museum at Ohio State University. In conjunction with this exhibition, Watterson also participated in an interview with the school. An exhibition catalog named "Exploring Calvin and Hobbes" was released with the exhibit. The book contained a lengthy interview with Bill Watterson, conducted by Jenny Robb, the curator of the museum. Watterson was awarded the National Cartoonists Society's Reuben Award in both 1986 and 1988. Watterson's second Reuben win made him the youngest cartoonist to be so honored, and only the sixth person to win twice, following Milton Caniff, Charles Schulz, Dik Browne, Chester Gould, and Jeff MacNelly. (Gary Larson is the only cartoonist to win a second Reuben since Watterson.) In 2014, Watterson was awarded the Grand Prix at the Angoulême International Comics Festival for his body of work, becoming just the fourth non-European cartoonist to be so honored in the first 41 years of the event. Bill Watterson has been heavily influenced by Charles M. Schulz, Walt Kelly, and George Herriman. Schulz and Kelly in particular were big influences on his outlook on the comic strip format.
https://en.wikipedia.org/wiki?curid=4032
Black Black is the darkest color, the result of the absence or complete absorption of visible light. It is an achromatic color, a color without hue, like white and gray. It is often used symbolically or figuratively to represent darkness, while white represents light. Black and white have often been used to describe opposites such as good and evil, the Dark Ages versus the Age of Enlightenment, and night versus day. Since the Middle Ages, black has been the symbolic color of solemnity and authority, and for this reason it is still commonly worn by judges and magistrates. Black was one of the first colors used by artists in Neolithic cave paintings. In the 14th century, it was worn by royalty, clergy, judges and government officials in much of Europe. It became the color worn by English romantic poets, businessmen and statesmen in the 19th century, and a high-fashion color in the 20th century. In the Roman Empire, it became the color of mourning, and over the centuries it was frequently associated with death, evil, witches and magic. According to surveys in Europe and North America, it is the color most commonly associated with mourning, the end, secrets, magic, force, violence, evil, and elegance. Black ink is the most common color used for printing books, newspapers and documents, as it provides the highest contrast with white paper and is thus the easiest color to read. Similarly, black text on a white screen is the most common format used on computer screens. The word "black" comes from Old English "blæc" ("black, dark", also "ink"), from Proto-Germanic *"blakkaz" ("burned"), from Proto-Indo-European *"bhleg-" ("to burn, gleam, shine, flash"), from base *"bhel-" ("to shine"), related to Old Saxon "blak" ("ink"), Old High German "blach" ("black"), Old Norse "blakkr" ("dark"), Dutch "blaken" ("to burn"), and Swedish "bläck" ("ink"). More distant cognates include Latin "flagrare" ("to blaze, glow, burn") and Ancient Greek "phlegein" ("to burn, scorch"). The Ancient Greeks sometimes used the same word to name different colors if they had the same intensity: "kuanos" could mean both dark blue and black. The Ancient Romans had two words for black: "ater" was a flat, dull black, while "niger" was a brilliant, saturated black. "Ater" has vanished from the vocabulary, but "niger" was the source of the country name Nigeria, the English word "Negro", and the word for "black" in most modern Romance languages (French: "noir"; Spanish and Portuguese: "negro"; Italian: "nero"). Old High German also had two words for black: "swartz" for dull black and "blach" for a luminous black. These are paralleled in Middle English by the terms "swart" for dull black and "blaek" for luminous black. "Swart" still survives as the word "swarthy", while "blaek" became the modern English "black". In heraldry, the word used for the black color is sable, named for the black fur of the sable, an animal. Black was one of the first colors used in art. The Lascaux Cave in France contains drawings of bulls and other animals drawn by Paleolithic artists between 18,000 and 17,000 years ago. They began by using charcoal, and then made more vivid black pigments by burning bones or grinding a powder of manganese oxide. For the ancient Egyptians, black had positive associations, being the color of fertility and of the rich black soil flooded by the Nile. It was the color of Anubis, the god of the underworld, who took the form of a black jackal, and offered protection against evil to the dead.
For the ancient Greeks, black was also the color of the underworld, separated from the world of the living by the river Acheron, whose water was black. Those who had committed the worst sins were sent to Tartarus, the deepest and darkest level. In the center was the palace of Hades, the king of the underworld, where he was seated upon a black ebony throne. Black was one of the most important colors used by ancient Greek artists. In the 6th century BC, they began making black-figure pottery and later red-figure pottery, using a highly original technique. In black-figure pottery, the artist would paint figures with a glossy clay slip on a red clay pot. When the pot was fired, the figures painted with the slip would turn black, against a red background. Later they reversed the process, painting the spaces between the figures with slip. This created magnificent red figures against a glossy black background. In the social hierarchy of ancient Rome, purple was the color reserved for the Emperor; red was the color worn by soldiers (red cloaks for the officers, red tunics for the soldiers); white was the color worn by the priests, and black was worn by craftsmen and artisans. The black they wore was not deep and rich; the vegetable dyes used to make black were not solid or lasting, so the blacks often turned out faded gray or brown. In Latin, the word for black, "ater", and the verb to darken, "atrare", were associated with cruelty, brutality and evil. They were the root of the English words "atrocious" and "atrocity". Black was also the Roman color of death and mourning. In the 2nd century BC, Roman magistrates began to wear a dark toga, called a "toga pulla", to funeral ceremonies. Later, under the Empire, the family of the deceased also wore dark colors for a long period; then, after a banquet to mark the end of mourning, exchanged the black for a white toga. In Roman poetry, death was called the "hora nigra", the black hour. The German and Scandinavian peoples worshipped their own goddess of the night, Nótt, who crossed the sky in a chariot drawn by a black horse. They also feared Hel, the goddess of the kingdom of the dead, whose skin was black on one side and red on the other. They also held the raven sacred. They believed that Odin, the king of the Nordic pantheon, had two black ravens, Huginn and Muninn, who served as his agents, traveling the world for him, watching and listening. In the early Middle Ages, black was commonly associated with darkness and evil. In medieval paintings, the devil was usually depicted as having human form, but with wings and black skin or hair. In fashion, black did not have the prestige of red, the color of the nobility. It was worn by Benedictine monks as a sign of humility and penitence. In the 12th century a famous theological dispute broke out between the Cistercian monks, who wore white, and the Benedictines, who wore black. A Benedictine abbot, Pierre the Venerable, accused the Cistercians of excessive pride in wearing white instead of black. Saint Bernard of Clairvaux, a leading figure of the Cistercian order, responded that black was the color of the devil, hell, "of death and sin," while white represented "purity, innocence and all the virtues". Black symbolized both power and secrecy in the medieval world. The emblem of the Holy Roman Empire of Germany was a black eagle. The black knight in the poetry of the Middle Ages was an enigmatic figure, hiding his identity, usually wrapped in secrecy.
Black ink, invented in China, was traditionally used in the Middle Ages for writing, for the simple reason that black was the darkest color and therefore provided the greatest contrast with white paper or parchment, making it the easiest color to read. It became even more important in the 15th century, with the invention of printing. A new kind of ink, printer's ink, was created out of soot, turpentine and walnut oil. The new ink made it possible to spread ideas to a mass audience through printed books, and to popularize art through black and white engravings and prints. Because of its contrast and clarity, black ink on white paper continued to be the standard for printing books, newspapers and documents; and for the same reason black text on a white background is the most common format used on computer screens. In the early Middle Ages, princes, nobles and the wealthy usually wore bright colors, particularly scarlet cloaks from Italy. Black was rarely part of the wardrobe of a noble family. The one exception was the fur of the sable. This glossy black fur, from an animal of the marten family, was the finest and most expensive fur in Europe. It was imported from Russia and Poland and used to trim the robes and gowns of royalty. In the 14th century, the status of black began to change. First, high-quality black dyes began to arrive on the market, allowing garments of a deep, rich black. Second, magistrates and government officials began to wear black robes, as a sign of the importance and seriousness of their positions. A third reason was the passage of sumptuary laws in some parts of Europe which prohibited the wearing of costly clothes and certain colors by anyone except members of the nobility. The famous bright scarlet cloaks from Venice and the peacock-blue fabrics from Florence were restricted to the nobility. The wealthy bankers and merchants of northern Italy responded by changing to black robes and gowns, made with the most expensive fabrics. The change to the more austere but elegant black was quickly picked up by the kings and nobility. It began in northern Italy, where the Duke of Milan, the Count of Savoy and the rulers of Mantua, Ferrara, Rimini and Urbino began to dress in black. It then spread to France, led by Louis I, Duke of Orleans, younger brother of King Charles VI of France. It moved to England at the end of the reign of King Richard II (1377–1399), where all the court began to wear black. In 1419–20, black became the color of the powerful Duke of Burgundy, Philip the Good. It moved to Spain, where it became the color of the Spanish Habsburgs, of Charles V and of his son, Philip II of Spain (1527–1598). European rulers saw it as the color of power, dignity, humility and temperance. By the end of the 16th century, it was the color worn by almost all the monarchs of Europe and their courts. While black was the color worn by the Catholic rulers of Europe, it was also the emblematic color of the Protestant Reformation in Europe and of the Puritans in England and America. John Calvin, Philip Melanchthon and other Protestant theologians denounced the richly colored and decorated interiors of Roman Catholic churches. They saw the color red, worn by the Pope and his Cardinals, as the color of luxury, sin, and human folly. In some northern European cities, mobs attacked churches and cathedrals, smashed the stained glass windows and defaced the statues and decoration. In Protestant doctrine, clothing was required to be sober, simple and discreet.
Bright colors were banished and replaced by blacks, browns and grays; women and children were recommended to wear white. In the Protestant Netherlands, Rembrandt used this sober new palette of blacks and browns to create portraits whose faces emerged from the shadows expressing the deepest human emotions. The Catholic painters of the Counter-Reformation, like Rubens, went in the opposite direction; they filled their paintings with bright and rich colors. The new Baroque churches of the Counter-Reformation were usually shining white inside and filled with statues, frescoes, marble, gold and colorful paintings, to appeal to the public. But European Catholics of all classes, like Protestants, eventually adopted a sober wardrobe that was mostly black, brown and gray. In the second part of the 17th century, Europe and America experienced an epidemic of fear of witchcraft. People widely believed that the devil appeared at midnight in a ceremony called a Black Mass or black sabbath, usually in the form of a black animal, often a goat, a dog, a wolf, a bear, a deer or a rooster, accompanied by their familiar spirits, black cats, serpents and other black creatures. This was the origin of the widespread superstition about black cats and other black animals. In medieval Flanders, in a ceremony called "Kattenstoet", black cats were thrown from the belfry of the Cloth Hall of Ypres to ward off witchcraft. Witch trials were common in both Europe and America during this period. During the notorious Salem witch trials in New England in 1692–93, one of those on trial was accused of being able to turn into a "black thing with a blue cap," and others of having familiars in the form of a black dog, a black cat and a black bird. Nineteen women and men were hanged as witches. In the 18th century, during the European Age of Enlightenment, black receded as a fashion color. Paris became the fashion capital, and pastels, blues, greens, yellow and white became the colors of the nobility and upper classes. But after the French Revolution, black again became the dominant color. Black was the color of the industrial revolution, largely fueled by coal, and later by oil. Thanks to coal smoke, the buildings of the large cities of Europe and America gradually turned black. By 1846 the industrial area of the West Midlands of England was "commonly called 'the Black Country'". Charles Dickens and other writers described the dark streets and smoky skies of London, and they were vividly illustrated in the engravings of French artist Gustave Doré. A different kind of black was an important part of the romantic movement in literature. Black was the color of melancholy, the dominant theme of romanticism. The novels of the period were filled with castles, ruins, dungeons, storms, and meetings at midnight. The leading poets of the movement were usually portrayed dressed in black, usually with a white shirt and open collar, and a scarf carelessly over their shoulder; Percy Bysshe Shelley and Lord Byron helped create the enduring stereotype of the romantic poet. The invention of new, inexpensive synthetic black dyes and the industrialization of the textile industry meant that good-quality black clothes were available for the first time to the general population. In the 19th century black gradually became the most popular color of business dress of the upper and middle classes in England, the Continent, and America. Black dominated literature and fashion in the 19th century, and played a large role in painting.
James McNeill Whistler made the color the subject of his most famous painting, "Arrangement in Grey and Black No. 1" (1871), better known as "Whistler's Mother". Some 19th-century French painters had a low opinion of black: "Reject black," Paul Gauguin said, "and that mix of black and white they call gray. Nothing is black, nothing is gray." But Édouard Manet used blacks for their strength and dramatic effect. Manet's portrait of painter Berthe Morisot was a study in black which perfectly captured her spirit of independence. The black gave the painting power and immediacy; he even changed her eyes, which were green, to black to strengthen the effect. Henri Matisse quoted the French impressionist Pissarro telling him, "Manet is stronger than us all – he made light with black." Pierre-Auguste Renoir used luminous blacks, especially in his portraits. When someone told him that black was not a color, Renoir replied: "What makes you think that? Black is the queen of colors. I always detested Prussian blue. I tried to replace black with a mixture of red and blue, I tried using cobalt blue or ultramarine, but I always came back to ivory black." Vincent van Gogh used black lines to outline many of the objects in his paintings, such as the bed in the famous painting of his bedroom, making them stand apart. His painting of black crows over a cornfield, painted shortly before he died, was particularly agitated and haunting. In the late 19th century, black also became the color of anarchism. (See the section on political movements.) In the 20th century, black was the color of Italian and German fascism. (See the section on political movements.) In art, black regained some of the territory that it had lost during the 19th century. The Russian painter Kasimir Malevich, a member of the Suprematist movement, created the "Black Square" in 1915, widely considered the first purely abstract painting. He wrote, "The painted work is no longer simply the imitation of reality, but is this very reality ... It is not a demonstration of ability, but the materialization of an idea." Black was also appreciated by Henri Matisse. "When I didn't know what color to put down, I put down black," he said in 1945. "Black is a force: I used black as ballast to simplify the construction ... Since the impressionists it seems to have made continuous progress, taking a more and more important part in color orchestration, comparable to that of the double bass as a solo instrument." In the 1950s, black came to be a symbol of individuality and intellectual and social rebellion, the color of those who didn't accept established norms and values. In Paris, it was worn by Left Bank intellectuals and performers such as Juliette Gréco, and by some members of the Beat Movement in New York and San Francisco. Black leather jackets were worn by motorcycle gangs such as the Hells Angels and by street gangs on the fringes of society in the United States. Black as a color of rebellion was celebrated in such films as "The Wild One", with Marlon Brando. By the end of the 20th century, black was the emblematic color of the punk subculture and its fashion, and of the goth subculture. Goth fashion, which emerged in England in the 1980s, was inspired by Victorian-era mourning dress. In men's fashion, black gradually ceded its dominance to navy blue, particularly in business suits. Black evening dress and formal dress in general were worn less and less. In 1960, John F.
Kennedy was the last American President to be inaugurated wearing formal dress; President Lyndon Johnson and all his successors were inaugurated wearing business suits. Women's fashion was revolutionized and simplified in 1926 by the French designer Coco Chanel, who published a drawing of a simple black dress in "Vogue" magazine. She famously said, "A woman needs just three things: a black dress, a black sweater, and, on her arm, a man she loves." French designer Jean Patou also followed suit by creating a black collection in 1929. Other designers contributed to the trend of the little black dress. The Italian designer Gianni Versace said, "Black is the quintessence of simplicity and elegance," and French designer Yves Saint Laurent said, "black is the liaison which connects art and fashion." One of the most famous black dresses of the century was designed by Hubert de Givenchy and was worn by Audrey Hepburn in the 1961 film "Breakfast at Tiffany's". The American civil rights movement in the 1950s was a struggle for the political equality of African Americans. It developed into the Black Power movement in the late 1960s and 1970s, and popularized the slogan "Black is Beautiful". In the 1990s, the Black Standard became the banner of several Islamic extremist, jihadist groups. (See the section on political movements.) In the visible spectrum, black is the absorption of all colors. Black can be defined as the visual impression experienced when no visible light reaches the eye. Pigments or dyes that absorb light rather than reflect it back to the eye "look black". A black pigment can, however, result from a "combination" of several pigments that collectively absorb all colors. If appropriate proportions of three primary pigments are mixed, the result reflects so little light as to be called "black". This provides two superficially opposite but actually complementary descriptions of black: black is the absorption of all colors of light, or an exhaustive combination of multiple colors of pigment. In physics, a black body is a perfect absorber of light, but, by a thermodynamic rule, it is also the best emitter. Thus, the best radiative cooling, out of sunlight, is by using black paint, though it is important that it be black (a nearly perfect absorber) in the infrared as well. In elementary science, near-ultraviolet light is called "black light" because, while itself unseen, it causes many minerals and other substances to fluoresce. On January 16, 2008, researchers from Troy, New York's Rensselaer Polytechnic Institute announced the creation of the then-darkest material on the planet. The material, which reflected only 0.045 percent of light, was created from carbon nanotubes stood on end. This is 1/30 of the light reflected by the current standard for blackness, and one third the light reflected by the previous record holder for darkest substance. As of February 2016, the darkest material known is claimed to be Vantablack. Absorption of light is contrasted with transmission, reflection and diffusion, where the light is only redirected, causing objects to appear transparent, reflective or white respectively. A material is said to be black if most incoming light is absorbed equally in the material. Light (electromagnetic radiation in the visible spectrum) interacts with the atoms and molecules, which causes the energy of the light to be converted into other forms of energy, usually heat. This means that black surfaces can act as thermal collectors, absorbing light and generating heat (see Solar thermal collector).
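The pigment point above can be made concrete with a little arithmetic. The following Python sketch (with illustrative reflectance values invented for the example, not taken from this article) treats each pigment as a filter that reflects only part of the red, green and blue light falling on it; mixing multiplies the filters together, so combining three primary pigments leaves almost no reflected light, which the eye reads as black.

```python
# A minimal sketch of subtractive pigment mixing. Each pigment reflects
# only part of the light in each channel; a mixture's reflectance is the
# channel-wise product of its components' reflectances.

# Approximate fraction of red, green and blue light each idealized pigment reflects.
PIGMENTS = {
    "cyan":    (0.1, 0.9, 0.9),  # absorbs mostly red
    "magenta": (0.9, 0.1, 0.9),  # absorbs mostly green
    "yellow":  (0.9, 0.9, 0.1),  # absorbs mostly blue
}

def mix(*names: str) -> tuple[float, float, float]:
    """Reflectance of a mixture: channel-wise product of each pigment's reflectance."""
    r, g, b = 1.0, 1.0, 1.0
    for name in names:
        pr, pg, pb = PIGMENTS[name]
        r, g, b = r * pr, g * pg, b * pb
    return r, g, b

print(mix("cyan", "magenta", "yellow"))  # ≈ (0.081, 0.081, 0.081): almost no light reflected
```

With these toy values each channel ends up at 0.9 × 0.9 × 0.1 ≈ 8% reflectance, a very dark near-black; real pigments absorb less evenly, which is why artists' three-pigment mixtures tend toward muddy near-blacks rather than a perfect black.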
The earliest pigments used by Neolithic man were charcoal, red ocher and yellow ocher. The black lines of cave art were drawn with the tips of burnt torches made of resinous wood. Different charcoal pigments were made by burning different woods and animal products, each of which produced a different tone. The charcoal would be ground and then mixed with animal fat to make the pigment. The 15th-century painter Cennino Cennini described how this pigment was made during the Renaissance in his famous handbook for artists: "...there is a black which is made from the tendrils of vines. And these tendrils need to be burned. And when they have been burned, throw some water onto them and put them out and then mull them in the same way as the other black. And this is a lean and black pigment and is one of the perfect pigments that we use." Cennini also noted that "There is another black which is made from burnt almond shells or peaches and this is a perfect, fine black." Similar fine blacks were made by burning the pits of the peach, cherry or apricot. The powdered charcoal was then mixed with gum arabic or the yellow of an egg to make a paint. Different civilizations burned different plants to produce their charcoal pigments. The Inuit of Alaska used wood charcoal mixed with the blood of seals to paint masks and wooden objects. The Polynesians burned coconuts to produce their pigment. Good-quality black dyes were not known until the middle of the 14th century. The most common early dyes were made from the bark, roots or fruits of different trees, usually the walnut, chestnut, or certain oaks. The blacks produced were often more gray, brown or bluish. The cloth had to be dyed several times to darken the color. One solution used by dyers was to add some iron filings, rich in iron oxide, to the dye, which gave a deeper black. Another was to first dye the fabric dark blue, and then to dye it black. A much richer and deeper black dye was eventually found, made from the oak apple or gall-nut. The gall-nut is a small round tumor which grows on oak and other varieties of trees. They range in size from 2 to 5 cm, and are caused by chemicals injected by the larva of certain kinds of gall wasp in the family Cynipidae. The dye was very expensive; a great quantity of gall-nuts was needed for a very small amount of dye. The gall-nuts which made the best dye came from Poland, eastern Europe, the Near East and North Africa. Beginning in about the 14th century, dye from gall-nuts was used for the clothes of the kings and princes of Europe. Another important source of natural black dyes from the 17th century onwards was the logwood tree, or Haematoxylum campechianum, which also produced reddish and bluish dyes. It is a species of flowering tree in the legume family, Fabaceae, that is native to southern Mexico and northern Central America. The modern nation of Belize grew from 17th-century English logwood logging camps. Since the mid-19th century, synthetic black dyes have largely replaced natural dyes. One of the important synthetic blacks is nigrosin, a mixture of synthetic black dyes (CI 50415, Solvent Black 5) made by heating a mixture of nitrobenzene, aniline and aniline hydrochloride in the presence of a copper or iron catalyst. Its main industrial uses are as a colorant for lacquers and varnishes and in marker-pen inks.
The first known inks were made by the Chinese, and date back to the 23rd century BC. They used natural plant dyes and minerals such as graphite ground with water and applied with an ink brush. Early Chinese inks similar to the modern inkstick have been found dating to about 256 BC at the end of the Warring States period. They were produced from soot, usually produced by burning pine wood, mixed with animal glue. To make ink from an inkstick, the stick is continuously ground against an inkstone with a small quantity of water to produce a dark liquid which is then applied with an ink brush. Artists and calligraphists could vary the thickness of the resulting ink by reducing or increasing the intensity and time of ink grinding. These inks produced the delicate shading and subtle or dramatic effects of Chinese brush painting. India ink (or Indian ink in British English) is a black ink once widely used for writing and printing and now more commonly used for drawing, especially when inking comic books and comic strips. The technique of making it probably came from China. India ink has been in use in India since at least the 4th century BC, where it was called "masi". In India, the black color of the ink came from bone char, tar, pitch and other substances. The Ancient Romans had a black writing ink they called "atramentum librarium". Its name came from the Latin word "atrare", which meant to make something black. (This was the same root as the English word "atrocious".) It was usually made, like India ink, from soot, although one variety, called "atramentum elephantinum", was made by burning the ivory of elephants. Gall-nuts were also used for making fine black writing ink. Iron gall ink (also known as iron gall nut ink or oak gall ink) was a purple-black or brown-black ink made from iron salts and the tannic acids of gall nuts. It was the standard writing and drawing ink in Europe from about the 12th century to the 19th century, and remained in use well into the 20th century. The question of why outer space is black despite the enormous number of stars is sometimes called Olbers' paradox. In theory, because the universe is full of stars and is believed to be infinitely large, it would be expected that the light of an infinite number of stars would be enough to brilliantly light the whole universe all the time. However, the background color of outer space is black. The paradox was posed in 1823 by the German astronomer Heinrich Wilhelm Matthias Olbers, who asked why the night sky was black. The currently accepted answer is that, although the universe is infinitely large, it is not infinitely old. It is thought to be about 13.8 billion years old, so we can only see objects as far away as the distance light can travel in 13.8 billion years. Light from stars farther away has not reached Earth, and cannot contribute to making the sky bright. Furthermore, as the universe is expanding, many stars are moving away from Earth. As they move, the wavelength of their light becomes longer, through the Doppler effect, and shifts toward red, or even becomes invisible. As a result of these two phenomena, there is not enough starlight to make space anything but black. The daytime sky on Earth is blue because light from the Sun strikes molecules in Earth's atmosphere, scattering it in all directions. Blue light is scattered more than other colors, and reaches the eye in greater quantities, making the daytime sky appear blue. This is known as Rayleigh scattering.
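The wavelength dependence behind Rayleigh scattering can be sketched numerically. In the following Python snippet (a minimal illustration; the wavelengths are rough textbook values chosen for the example, not data from this article), scattered intensity is taken as proportional to 1/λ⁴, the scaling that makes short blue wavelengths dominate the scattered light.

```python
# A minimal sketch of the Rayleigh 1/wavelength**4 scaling that makes
# the daytime sky blue. Wavelengths are approximate textbook values.

def rayleigh_relative_intensity(wavelength_nm: float, reference_nm: float = 650.0) -> float:
    """Scattered intensity relative to red light at the reference wavelength."""
    return (reference_nm / wavelength_nm) ** 4

for name, wavelength in [("violet", 400.0), ("blue", 450.0), ("green", 550.0), ("red", 650.0)]:
    print(f"{name:>6} ({wavelength:.0f} nm): {rayleigh_relative_intensity(wavelength):.2f}x red")
```

Blue light at about 450 nm scatters (650/450)⁴ ≈ 4.4 times as strongly as red light at about 650 nm, and violet scatters nearly 7 times as strongly (though the eye is less sensitive to violet and sunlight contains less of it), which is why scattered daylight seen away from the Sun looks blue.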
The nighttime sky on Earth is black because the part of Earth experiencing night is facing away from the Sun, the light of the Sun is blocked by Earth itself, and there is no other bright nighttime source of light in the vicinity. Thus, there is not enough light to undergo Rayleigh scattering and make the sky blue. On the Moon, on the other hand, because there is no atmosphere to scatter the light, the sky is black both day and night. This also holds true for other bodies without an atmosphere. In China, the color black is associated with water, one of the five fundamental elements believed to compose all things, and with winter, cold, and the direction north, usually symbolized by a black tortoise. It is also associated with disorder, including the positive disorder which leads to change and new life. When the first Emperor of China, Qin Shi Huang, seized power from the Zhou Dynasty, he changed the Imperial color from red to black, saying that black extinguished red. Only when the Han Dynasty appeared in 206 BC was red restored as the imperial color. The Chinese and Japanese character for black ("kuro" in Japanese) can, depending upon the context, also mean dark or evil. In Japan, black is associated with mystery, the night, the unknown, the supernatural, the invisible and death. Combined with white, it can symbolize intuition. In 10th- and 11th-century Japan, it was believed that wearing black could bring misfortune. It was worn at court by those who wanted to set themselves apart from the established powers or who had renounced material possessions. In Japan black can also symbolize experience, as opposed to white, which symbolizes naiveté. The black belt in martial arts symbolizes experience, while a white belt is worn by novices. Japanese men traditionally wear a black kimono with some white decoration on their wedding day. In Indonesia black is associated with depth, the subterranean world, demons, disaster, and the left hand. When black is combined with white, however, it symbolizes harmony and equilibrium. Anarchism is a political philosophy, most popular in the late 19th and early 20th centuries, which holds that governments and capitalism are harmful and undesirable. The symbol of anarchism was usually either a black flag or a black letter A. More recently it is usually represented with a bisected red and black flag, to emphasise the movement's socialist roots in the First International. Anarchism was most popular in Spain, France, Italy, Ukraine and Argentina. There were also small but influential movements in the United States and Russia. In the latter, the movement initially allied itself with the Bolsheviks. The Black Army was a collection of anarchist military units which fought in the Russian Civil War, sometimes on the side of the Bolshevik Red Army, and sometimes for the opposing White Army. It was officially known as the Revolutionary Insurrectionary Army of Ukraine, and it was under the command of the famous anarchist Nestor Makhno. Fascism. The Blackshirts (Italian: "camicie nere") were Fascist paramilitary groups in Italy during the period immediately following World War I and until the end of World War II. The Blackshirts were officially known as the Voluntary Militia for National Security ("Milizia Volontaria per la Sicurezza Nazionale", or MVSN). Inspired by the black uniforms of the Arditi, Italy's elite storm troops of World War I, the Fascist Blackshirts were organized by Benito Mussolini as the military tool of his political movement.
They used violence and intimidation against Mussolini's opponents. The emblem of the Italian fascists was a black flag with fasces, an axe in a bundle of sticks, an ancient Roman symbol of authority. Mussolini came to power in 1922 through his March on Rome with the Blackshirts. Black was also adopted by Adolf Hitler and the Nazis in Germany. Red, white and black were the colors of the flag of the German Empire from 1871 to 1918. In "Mein Kampf", Hitler explained that they were "revered colors expressive of our homage to the glorious past." Hitler also wrote that "the new flag ... should prove effective as a large poster" because "in hundreds of thousands of cases a really striking emblem may be the first cause of awakening interest in a movement." The black swastika was meant to symbolize the Aryan race, which, according to the Nazis, "was always anti-Semitic and will always be anti-Semitic." Several designs by a number of different authors were considered, but the one adopted in the end was Hitler's personal design. Black became the color of the uniform of the SS, the "Schutzstaffel" or "defense corps", the paramilitary wing of the Nazi Party, and was worn by SS officers from 1932 until the end of World War II. The Nazis used a black triangle to symbolize anti-social elements. The symbol originates from Nazi concentration camps, where every prisoner had to wear one of the Nazi concentration camp badges on their jacket, the color of which categorized them according to "their kind." Many Black Triangle prisoners were either mentally disabled or mentally ill. The homeless were also included, as were alcoholics, the Romani people, the habitually "work-shy," prostitutes, draft dodgers and pacifists. More recently the black triangle has been adopted as a symbol in lesbian culture and by disabled activists. Black shirts were also worn by the British Union of Fascists before World War II, and by members of fascist movements in the Netherlands. Patriotic resistance. The Lützow Free Corps, composed of volunteer German students and academics fighting against Napoleon in 1813, could not afford to make special uniforms and therefore adopted black, as the only color that could be used to dye their civilian clothing without the original color showing. In 1815 the students began to carry a red, black and gold flag, which they believed (incorrectly) had been the colors of the Holy Roman Empire (the imperial flag had actually been gold and black). In 1848, this banner became the flag of the German Confederation. In 1871, Prussia unified Germany under its rule and imposed the red, white and black of its own flag, which remained the colors of the German flag, apart from the Weimar years, until the end of the Second World War. In 1949 the Federal Republic of Germany returned to the original flag and colors of the students and professors of 1815, which is the flag of Germany today. Islamism. The Black Standard (also known as the "banner of the eagle" or simply as "the banner") is the historical flag flown by Muhammad in Islamic tradition, an eschatological symbol in Shi'a Islam (heralding the advent of the Mahdi), and a symbol used in Islamism and Jihadism. Black has been a traditional color of cavalry and armoured or mechanized troops. German armoured troops (the Panzerwaffe) traditionally wore black uniforms, and in many other armies a black beret is common among armoured crews. In Finland, black is the symbolic color for both armoured troops and combat engineers, and military units of these specialities have black flags and unit insignia. 
The black beret and the color black are also symbols of special forces in many countries. Soviet and Russian OMON special police and Russian naval infantry wear a black beret. A black beret is also worn by military police in the Canadian, Czech, Croatian, Portuguese, Spanish and Serbian armies. The silver-on-black skull and crossbones symbol, or Totenkopf, and a black uniform were used by Hussars and Black Brunswickers, the German Panzerwaffe and the Nazi Schutzstaffel, and the U.S. 400th Missile Squadron (with crossed missiles), and continue in use with the Estonian Kuperjanov Battalion. In Christianity, the devil is often called the "prince of darkness." The term was used in John Milton's poem "Paradise Lost", published in 1667, referring to Satan, who is viewed as the embodiment of evil. It is an English translation of the Latin phrase "princeps tenebrarum", which occurs in the "Acts of Pilate", written in the fourth century; in the 11th-century hymn "Rhythmus de die mortis" by Pietro Damiani; and in a sermon by Bernard of Clairvaux from the 12th century. The phrase also occurs in "King Lear" by William Shakespeare (c. 1606), Act III, Scene IV, l. 14: "The prince of darkness is a gentleman." Priests and pastors of the Roman Catholic, Eastern Orthodox and Protestant churches commonly wear black, as do monks of the Benedictine Order, who consider it the color of humility and penitence. In Europe and America, black is commonly associated with mourning and bereavement, and usually worn at funerals and memorial services. In some traditional societies, for example in Greece and Italy, some widows wear black for the rest of their lives. In contrast, across much of Africa and parts of Asia, such as Vietnam, white is a color of mourning. In Victorian England, the colors and fabrics of mourning were specified in an unofficial dress code: "non-reflective black paramatta and crape for the first year of deepest mourning, followed by nine months of dullish black silk, heavily trimmed with crape, and then three months when crape was discarded". Paramatta was a fabric of combined silk and wool or cotton; crape was a harsh black silk fabric with a crimped appearance produced by heat. Widows were allowed to change into the colors of half-mourning, such as gray and lavender, black and white, for the final six months. A "black day" (or week or month) usually refers to a tragic date. The Romans marked "fasti" days with white stones and "nefasti" days with black. The term is often used to remember massacres. Black months include the Black September in Jordan, when large numbers of Palestinians were killed, and Black July in Sri Lanka, the killing of members of the Tamil population by the Sinhalese government. In the financial world, the term often refers to a dramatic drop in the stock market. For example, the Wall Street Crash of October 29, 1929, which marked the start of the Great Depression, is nicknamed Black Tuesday; it was preceded by Black Thursday, a downturn on October 24 the previous week. In Western popular culture, black has long been associated with evil and darkness. It is the traditional color of witchcraft and black magic. In the Book of Revelation, the last book in the New Testament of the Bible, the Four Horsemen of the Apocalypse are supposed to announce the Apocalypse before the Last Judgment. The horseman representing famine rides a black horse. 
The vampire of literature and films, such as Count Dracula of the Bram Stoker novel, dressed in black and could move about only at night. The Wicked Witch of the West in the 1939 film "The Wizard of Oz" became the archetype of witches for generations of children. Whereas witches and sorcerers inspired real fear in the 17th century, in the 21st century children and adults dress as witches for Halloween parties and parades. Black is frequently used as a color of power, law and authority. In many countries judges and magistrates wear black robes. That custom began in Europe in the 13th and 14th centuries. Jurists, magistrates and certain other court officials in France began to wear long black robes during the reign of Philip IV of France (1285–1314), and in England from the time of Edward I (1272–1307). The custom spread to the cities of Italy at about the same time, between 1300 and 1320. The robes of judges resembled those worn by the clergy, and represented the law and authority of the King, while those of the clergy represented the law of God and authority of the church. Most police uniforms were black until the 20th century, when they were largely replaced by a less menacing blue in France, the U.S. and other countries. In the United States, police cars are frequently black and white. The riot control units of the Basque Autonomous Police in Spain are known as "beltzak" ("blacks") after their uniform. Black today is the most common color for limousines and the official cars of government officials. Black formal attire is still worn at many solemn occasions or ceremonies, from graduations to formal balls. Graduation gowns are copied from the gowns worn by university professors in the Middle Ages, which in turn were copied from the robes worn by judges and priests, who often taught at the early universities. The mortarboard hat worn by graduates is adapted from a square cap called a biretta worn by Medieval professors and clerics. In the 19th and 20th centuries, many machines and devices, large and small, were painted black to stress their functionality. These included telephones, sewing machines, steamships, railroad locomotives, and automobiles. The Ford Model T, the first mass-produced car, was available only in black from 1914 to 1926. Among means of transportation, only airplanes were rarely painted black. Black house paint is becoming more popular, with Sherwin-Williams reporting that the color Tricorn Black was the 6th most popular exterior house paint color in Canada and the 12th most popular in the United States in 2018. Black is also commonly used as a racial description in the United Kingdom, where ethnicity has been measured in the census since 1991. The 2011 British census asked residents to describe themselves, and categories offered included Black, African, Caribbean, or Black British. Other possible categories were African British, African Scottish, Caribbean British and Caribbean Scottish. Of the total UK population in 2001, 1.0 percent identified themselves as Black Caribbean, 0.8 percent as Black African, and 0.2 percent as Black (others). In Canada, census respondents can identify themselves as Black. In the 2006 census, 2.5 percent of the population identified themselves as black. In Australia, the term black is not used in the census. In the 2006 census, 2.3 percent of Australians identified themselves as Aboriginal and/or Torres Strait Islanders. 
In Brazil, the Brazilian Institute of Geography and Statistics (IBGE) asks people to identify themselves as "branco" (white), "pardo" (brown), "preto" (black), or "amarelo" (yellow). In 2008, 6.8 percent of the population identified themselves as "preto". Black is commonly associated with secrecy. Black is the color most commonly associated with elegance in Europe and the United States, followed by silver, gold, and white. Black first became a fashionable color for men in Europe in the 17th century, in the courts of Italy and Spain. (See history above.) In the 19th century, it was the fashion for men both in business and for evening wear, in the form of a black coat whose tails came down to the knees. In the evening it was the custom of the men to leave the women after dinner to go to a special smoking room to enjoy cigars or cigarettes. This meant that their tailcoats eventually smelled of tobacco. According to legend, in 1865 Edward VII, then the Prince of Wales, had his tailor make a special short smoking jacket. The smoking jacket then evolved into the dinner jacket. Again according to legend, the first Americans to wear the jacket were members of the Tuxedo Club in New York State. Thereafter the jacket became known as a tuxedo in the U.S. The term "smoking" is still used today in Russia and other countries. The tuxedo was always black until the 1930s, when the Duke of Windsor began to wear a tuxedo that was a very dark midnight blue. He did so because a black tuxedo looked greenish in artificial light, while a dark blue tuxedo looked blacker than black itself. For women's fashion, the defining moment was the invention of the simple black dress by Coco Chanel in 1926. (See history.) Thereafter, a long black gown was used for formal occasions, while the simple black dress could be used for everything else. The designer Karl Lagerfeld, explaining why black was so popular, said: "Black is the color that goes with everything. If you're wearing black, you're on sure ground." Skirts have gone up and down and fashions have changed, but the black dress has not lost its position as the essential element of a woman's wardrobe. The fashion designer Christian Dior said, "Elegance is a combination of distinction, naturalness, care and simplicity," and black exemplified elegance. The expression "X is the new black" is a reference to the latest trend or fad that is considered a wardrobe basic for the duration of the trend, on the basis that black is always fashionable. The phrase has taken on a life of its own and has become a cliché. Many performers of both popular and European classical music, including French singers Edith Piaf and Juliette Gréco, and violinist Joshua Bell, have traditionally worn black on stage during performances. A black costume was usually chosen as part of their image or stage persona, or because it did not distract from the music, or sometimes for a political reason. Country-western singer Johnny Cash always wore black on stage. In 1971, Cash wrote the song "Man in Black" to explain why he dressed in that color: "We're doing mighty fine I do suppose / In our streak of lightning cars and fancy clothes / But just so we're reminded of the ones who are held back / Up front there ought to be a man in black."
https://en.wikipedia.org/wiki?curid=4035
Bletchley Park Bletchley Park is an English country house and estate in Milton Keynes (Buckinghamshire) that became the principal centre of Allied code-breaking during the Second World War. The mansion was constructed during the years following 1883 for the financier and politician Sir Herbert Leon in the Victorian Gothic, Tudor, and Dutch Baroque styles, on the site of older buildings of the same name. During World War II, the estate housed the Government Code and Cypher School (GC&CS), which regularly penetrated the secret communications of the Axis Powers, most importantly the German Enigma and Lorenz ciphers; among its most notable early personnel were the codebreakers Alan Turing, Gordon Welchman, Hugh Alexander and Stuart Milner-Barry. The nature of the work there was secret until many years after the war. According to the official historian of British Intelligence, the "Ultra" intelligence produced at Bletchley shortened the war by two to four years, and without it the outcome of the war would have been uncertain. The team at Bletchley Park devised automatic machinery to help with decryption, culminating in the development of Colossus, the world's first programmable digital electronic computer. Codebreaking operations at Bletchley Park came to an end in 1946 and all information about the wartime operations was classified until the mid-1970s. After the war, the Post Office took over the site and used it as a management school, but by 1990 the huts in which the codebreakers worked were being considered for demolition and redevelopment. The Bletchley Park Trust was formed in 1991 to save large portions of the site from developers. More recently, Bletchley Park has been open to the public and houses interpretive exhibits and rebuilt huts as they would have appeared during their wartime operations. It receives hundreds of thousands of visitors annually. The separate National Museum of Computing, which includes a working replica Bombe machine and a rebuilt Colossus computer, is housed in Block H on the site. The site appears in the Domesday Book as part of the Manor of Eaton. Browne Willis built a mansion there in 1711, but after Thomas Harrison purchased the property in 1793 this was pulled down. It was first known as Bletchley Park after its purchase by Samuel Lipscomb Seckham in 1877. The estate was bought in 1883 by Sir Herbert Samuel Leon, who expanded the then-existing farmhouse into what architect Landis Gores called a "maudlin and monstrous pile" combining Victorian Gothic, Tudor, and Dutch Baroque styles. At his Christmas family gatherings there was a fox hunting meet on Boxing Day with glasses of sloe gin from the butler, and the house was always "humming with servants". With 40 gardeners, a flower bed of yellow daffodils could become a sea of red tulips overnight. In 1938, the mansion and much of the site were bought by a builder for a housing estate, but in May 1938 Admiral Sir Hugh Sinclair, head of the Secret Intelligence Service (SIS or MI6), bought the mansion and surrounding land for £6,000 for use by GC&CS and SIS in the event of war. He used his own money, as the Government said they did not have the budget to do so. A key advantage seen by Sinclair and his colleagues (inspecting the site under the cover of "Captain Ridley's shooting party") was Bletchley's geographical centrality. 
It was almost immediately adjacent to Bletchley railway station, where the "Varsity Line" between Oxford and Cambridge (whose universities were expected to supply many of the code-breakers) met the main West Coast railway line connecting London, Birmingham, Manchester, Liverpool, Glasgow and Edinburgh. Watling Street, the main road linking London to the north-west (subsequently the A5), was close by, and high-volume communication links were available at the telegraph and telephone repeater station in nearby Fenny Stratford. Bletchley Park was known as "B.P." to those who worked there. "Station X" (X = Roman numeral ten), "London Signals Intelligence Centre", and "Government Communications Headquarters" were all cover names used during the war. The formal posting of the many "Wrens" (members of the Women's Royal Naval Service) working there was to HMS Pembroke V. Royal Air Force names of Bletchley Park and its outstations included RAF Eastcote, RAF Lime Grove and RAF Church Green. The postal address that staff had to use was "Room 47, Foreign Office". After the war, the Government Code & Cypher School became the Government Communications Headquarters (GCHQ), moving to Eastcote in 1946 and to Cheltenham in the 1950s. The site was used by various government agencies, including the GPO and the Civil Aviation Authority. One large building, Block F, was demolished in 1987, by which time the site was being run down and tenants were leaving. In 1990 the site was at risk of being sold for housing development. However, Milton Keynes Council made it into a conservation area. Bletchley Park Trust was set up in 1991 by a group of people who recognised the site's importance. The initial trustees included Roger Bristow, Ted Enever, Peter Wescombe, Dr Peter Jarvis of the Bletchley Archaeological & Historical Society, and Tony Sale, who in 1994 became the first director of the Bletchley Park Museums. Commander Alastair Denniston was operational head of GC&CS from 1919 to 1942, beginning with its formation from the Admiralty's Room 40 (NID25) and the War Office's MI1b. Key GC&CS cryptanalysts who moved from London to Bletchley Park included John Tiltman, Dillwyn "Dilly" Knox, Josh Cooper, Oliver Strachey and Nigel de Grey. These people came from a variety of backgrounds: linguists and chess champions were common, and in Knox's case papyrology. The British War Office recruited top solvers of cryptic crossword puzzles, as these individuals had strong lateral thinking skills. On the day Britain declared war on Germany, Denniston wrote to the Foreign Office about recruiting "men of the professor type". Personal networking drove early recruitments, particularly of men from the universities of Cambridge and Oxford. Trustworthy women were similarly recruited for administrative and clerical jobs. In one 1941 recruiting stratagem, "The Daily Telegraph" was asked to organise a crossword competition, after which promising contestants were discreetly approached about "a particular type of work as a contribution to the war effort". Denniston recognised, however, that the enemy's use of electromechanical cipher machines meant that formally trained mathematicians would also be needed; Oxford's Peter Twinn joined GC&CS in February 1939; Cambridge's Alan Turing and Gordon Welchman began training in 1938 and reported to Bletchley the day after war was declared, along with John Jeffreys. 
Later-recruited cryptanalysts included the mathematicians Derek Taunt, Jack Good, Bill Tutte, and Max Newman; historian Harry Hinsley; and chess champions Hugh Alexander and Stuart Milner-Barry. Joan Clarke was one of the few women employed at Bletchley as a full-fledged cryptanalyst. This eclectic staff of "Boffins and Debs" (scientists and debutantes, young women of high society) caused GC&CS to be whimsically dubbed the "Golf, Cheese and Chess Society". During a September 1941 morale-boosting visit, Winston Churchill reportedly remarked to Denniston: "I told you to leave no stone unturned to get staff, but I had no idea you had taken me so literally." Six weeks later, having failed to get sufficient typing and unskilled staff to achieve the productivity that was possible, Turing, Welchman, Alexander and Milner-Barry wrote directly to Churchill. His response was: "Action this day. Make sure they have all they want on extreme priority and report to me that this has been done." The Chief of the Imperial General Staff, Alan Brooke, wrote on 16 April 1942: "Took lunch in car and went to see the organization for breaking down ciphers – a wonderful set of professors and genii! I marvel at the work they succeed in doing." After initial training at the Inter-Service Special Intelligence School set up by John Tiltman (initially at an RAF depot in Buckingham and later in Bedford, where it was known locally as "the Spy School"), staff worked a six-day week, rotating through three shifts: 4 p.m. to midnight, midnight to 8 a.m. (the most disliked shift), and 8 a.m. to 4 p.m., each with a half-hour meal break. At the end of the third week, a worker went off at 8 a.m. and came back at 4 p.m., thus putting in sixteen hours on that last day. The irregular hours affected workers' health and social life, as well as the routines of the nearby homes at which most staff lodged. The work was tedious and demanded intense concentration; staff got one week's leave four times a year, but some "girls" collapsed and required extended rest. Recruitment took place to combat a shortage of experts in Morse code and German. In January 1945, at the peak of codebreaking efforts, nearly 10,000 personnel were working at Bletchley and its outstations. About three-quarters of these were women. Many of the women came from middle-class backgrounds and held degrees in the areas of mathematics, physics and engineering; they were given the chance because of the lack of men, who had been sent to war. They performed calculations and coding and hence were integral to the computing processes. Among them were Eleanor Ireland, who worked on the Colossus computers, and Ruth Briggs, a German scholar, who worked within the Naval Section. The female staff in Dillwyn Knox's section were sometimes termed "Dilly's Fillies". Knox's methods enabled Mavis Lever (who married mathematician and fellow code-breaker Keith Batey) and Margaret Rock to solve a German code, the Abwehr cipher. Many of the women had backgrounds in languages, particularly French, German and Italian; among them were Rozanne Colchester, a translator who worked mainly for the Italian air force section, and Cicely Mayhew, recruited straight from university, who worked in Hut 8, translating decoded German Navy signals. For a long time the British Government did not recognise the contribution the personnel at Bletchley Park had made. Their work achieved official recognition only in 2009. 
Properly used, the German Enigma and Lorenz ciphers should have been virtually unbreakable, but flaws in German cryptographic procedures, and poor discipline among the personnel carrying them out, created vulnerabilities that made Bletchley's attacks just barely feasible. These vulnerabilities, however, could have been remedied by relatively simple improvements in enemy procedures, and such changes would certainly have been implemented had Germany had any hint of Bletchley's success. Thus the intelligence Bletchley produced was considered wartime Britain's "Ultra" secret, rated higher even than the normally highest classification, and security was paramount. All staff signed the Official Secrets Act (1939) and a 1942 security warning emphasised the importance of discretion even within Bletchley itself: "Do not talk at meals. Do not talk in the transport. Do not talk travelling. Do not talk in the billet. Do not talk by your own fireside. Be careful even in your Hut..." Nevertheless, there were security leaks. Jock Colville, the Assistant Private Secretary to Winston Churchill, recorded in his diary on 31 July 1941 that the newspaper proprietor Lord Camrose had discovered Ultra and that security leaks "increase in number and seriousness". Without doubt, the most serious of these was that Bletchley Park had been infiltrated by John Cairncross, the notorious Soviet mole and member of the Cambridge Spy Ring, who leaked Ultra material to Moscow. Despite the high degree of secrecy surrounding Bletchley Park during the Second World War, unique and hitherto unknown amateur film footage of the outstation at Whaddon Hall came to light in 2020, after being anonymously donated to the Bletchley Park Trust. A spokesman for the Trust noted the film's existence was all the more incredible because it was "very, very rare even to have still photographs" of the park and its associated sites. The first personnel of the Government Code and Cypher School (GC&CS) moved to Bletchley Park on 15 August 1939. The Naval, Military, and Air Sections were on the ground floor of the mansion, together with a telephone exchange, teleprinter room, kitchen, and dining room; the top floor was allocated to MI6. Construction of the wooden huts began in late 1939, and Elmers School, a neighbouring boys' boarding school in a Victorian Gothic redbrick building by a church, was acquired for the Commercial and Diplomatic Sections. After the United States joined World War II, a number of American cryptographers were posted to Hut 3, and from May 1943 onwards there was close co-operation between British and American intelligence. (See 1943 BRUSA Agreement.) In contrast, the Soviet Union was never officially told of Bletchley Park and its activities, a reflection of Churchill's distrust of the Soviets even during the US-UK-USSR alliance imposed by the Nazi threat. The only direct enemy damage to the site was done on 20–21 November 1940 by three bombs probably intended for Bletchley railway station; Hut 4, shifted two feet off its foundation, was winched back into place as work inside continued. Initially, when only a very limited amount of Enigma traffic was being read, deciphered non-Naval Enigma messages were sent from Hut 6 to Hut 3, which handled their translation and onward transmission. Subsequently, under Group Captain Eric Jones, Hut 3 expanded to become the heart of Bletchley Park's intelligence effort, with input from decrypts of "Tunny" (Lorenz SZ42) traffic and many other sources. 
Early in 1942 it moved into Block D, but its functions were still referred to as Hut 3. Hut 3 contained a number of sections: Air Section "3A", Military Section "3M", a small Naval Section "3N", a multi-service Research Section "3G" and a large liaison section "3L". It also housed the Traffic Analysis Section, SIXTA. An important function that allowed the synthesis of raw messages into valuable military intelligence was the indexing and cross-referencing of information in a number of different filing systems. Intelligence reports were sent out to the Secret Intelligence Service, the intelligence chiefs in the relevant ministries, and later on to high-level commanders in the field. Naval Enigma deciphering was in Hut 8, with translation in Hut 4. Verbatim translations were sent to the Naval Intelligence Division (NID) of the Admiralty's Operational Intelligence Centre (OIC), supplemented by information from indexes as to the meaning of technical terms and cross-references from a knowledge store of German naval technology. Where relevant to non-naval matters, they would also be passed to Hut 3. Hut 4 also decoded a manual system known as the dockyard cipher, which sometimes carried messages that were also sent on an Enigma network. Feeding these back to Hut 8 provided excellent "cribs" for known-plaintext attacks on the daily naval Enigma key. Initially, a wireless room was established at Bletchley Park. It was set up in the mansion's water tower under the code name "Station X", a term now sometimes applied to the codebreaking efforts at Bletchley as a whole. The "X" is the Roman numeral "ten", this being the Secret Intelligence Service's tenth such station. Due to the long radio aerials stretching from the wireless room, the radio station was moved from Bletchley Park to nearby Whaddon Hall to avoid drawing attention to the site. Subsequently, other listening stations (the Y-stations, such as those at Chicksands in Bedfordshire, Beaumanor Hall in Leicestershire, where the headquarters of the War Office "Y" Group was located, and Beeston Hill Y Station in Norfolk) gathered raw signals for processing at Bletchley. Coded messages were taken down by hand and sent to Bletchley on paper by motorcycle despatch riders or (later) by teleprinter. Wartime needs required the building of additional accommodation. Often a hut's number became so strongly associated with the work performed inside that even when the work was moved to another building it was still referred to by the original "Hut" designation. In addition to the wooden huts, there were a number of brick-built "blocks". Most German messages decrypted at Bletchley were produced by one or another version of the Enigma cipher machine, but an important minority were produced by the even more complicated twelve-rotor Lorenz SZ42 on-line teleprinter cipher machine. Five weeks before the outbreak of war, Warsaw's Cipher Bureau revealed its achievements in breaking Enigma to astonished French and British personnel. The British used the Poles' information and techniques, and the Enigma clone sent to them in August 1939, which greatly increased their (previously very limited) success in decrypting Enigma messages. The bombe was an electromechanical device whose function was to discover some of the daily settings of the Enigma machines on the various German military networks. 
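It did so by exploiting "cribs" of the sort described above: guessed plaintext tested against intercepted ciphertext. The Python sketch below is a hypothetical, drastically simplified illustration of that principle, not the bombe's actual logic. It uses a toy machine with a single stepping rotor and a reflector (no plugboard and no ring settings), so an exhaustive test of all 26 starting offsets is trivial; the real bombe attacked an enormously larger setting space. The rotor and reflector wirings are the historically documented Enigma rotor I and reflector B.

import string

ALPHABET = string.ascii_uppercase
ROTOR = "EKMFLGDQVZNTOWYHXUSPAIBRCJ"      # wiring of Enigma rotor I
REFLECTOR = "YRUHQSLDPXNGOKMIEBFZCWVJAT"  # wiring of Enigma reflector B

def encipher(text, offset):
    """Toy one-rotor machine; the reflector makes it self-reciprocal."""
    out = []
    for i, ch in enumerate(text):
        step = (offset + i + 1) % 26  # the rotor advances before each letter
        c = ALPHABET.index(ch)
        c = (ALPHABET.index(ROTOR[(c + step) % 26]) - step) % 26  # through the rotor
        c = ALPHABET.index(REFLECTOR[c])                          # off the reflector
        c = (ROTOR.index(ALPHABET[(c + step) % 26]) - step) % 26  # back through the rotor
        out.append(ALPHABET[c])
    return "".join(out)

def candidate_offsets(ciphertext, crib, at):
    """Keep only those starting offsets consistent with the crib."""
    return [o for o in range(26)
            if encipher(ciphertext, o)[at:at + len(crib)] == crib]

intercept = encipher("WEATHERREPORTFOLLOWS", offset=7)
print(candidate_offsets(intercept, "WEATHER", at=0))
# Prints the offsets consistent with the crib; the true offset 7 survives.

Because the toy machine, like Enigma, is self-reciprocal, deciphering is simply enciphering again with the same setting; settings that fail to reproduce the crib are eliminated, which is the same principle of elimination the bombe applied at scale.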
Its pioneering design was developed by Alan Turing (with an important contribution from Gordon Welchman), and the machine was engineered by Harold 'Doc' Keen of the British Tabulating Machine Company. Each machine weighed about a ton. At its peak, GC&CS was reading approximately 4,000 messages per day. As a hedge against enemy attack, most bombes were dispersed to installations at Adstock and Wavendon (both later supplanted by installations at Stanmore and Eastcote), and at Gayhurst. Luftwaffe messages were the first to be read in quantity. The German navy had much tighter procedures, and the capture of code books was needed before they could be broken. When, in February 1942, the German navy introduced the four-rotor Enigma for communications with its Atlantic U-boats, this traffic became unreadable for a period of ten months. Britain produced modified bombes, but it was the success of the US Navy bombe that was the main source of reading messages from this version of Enigma for the rest of the war. Messages were sent to and fro across the Atlantic by enciphered teleprinter links. The Lorenz messages were codenamed "Tunny" at Bletchley Park. They were only sent in quantity from mid-1942. The Tunny networks were used for high-level messages between German High Command and field commanders. With the help of German operator errors, the cryptanalysts in the Testery (named after Ralph Tester, its head) worked out the logical structure of the machine despite not knowing its physical form. They devised automatic machinery to help with decryption, which culminated in Colossus, the world's first programmable digital electronic computer. This was designed and built by Tommy Flowers and his team at the Post Office Research Station at Dollis Hill. The prototype first worked in December 1943, was delivered to Bletchley Park in January and first worked operationally on 5 February 1944. Enhancements were developed for the Mark 2 Colossus, the first of which was working at Bletchley Park on the morning of 1 June in time for D-Day. Flowers then produced one Colossus a month for the rest of the war, making a total of ten with an eleventh part-built. The machines were operated mainly by Wrens in a section named the Newmanry after its head Max Newman. Bletchley's work was essential to defeating the U-boats in the Battle of the Atlantic, and to the British naval victories in the Battle of Cape Matapan and the Battle of North Cape. In 1941, Ultra exerted a powerful effect on the North African desert campaign against German forces under General Erwin Rommel. General Sir Claude Auchinleck wrote that were it not for Ultra, "Rommel would have certainly got through to Cairo". While not changing the course of events, "Ultra" decrypts featured prominently in the story of Operation SALAM, László Almásy's mission across the desert behind Allied lines in 1942. Prior to the Normandy landings on D-Day in June 1944, the Allies knew the locations of all but two of Germany's fifty-eight Western-front divisions. Italian signals had been of interest since Italy's attack on Abyssinia in 1935. During the Spanish Civil War the Italian Navy used the K model of the commercial Enigma without a plugboard; this was solved by Knox in 1937. When Italy entered the war in 1940 an improved version of the machine was used, though little traffic was sent by it and there were "wholesale changes" in Italian codes and cyphers. 
Knox was given a new section for work on Enigma variations, which he staffed with women ("Dilly's girls"), who included Margaret Rock, Jean Perrin, Clare Harding, Rachel Ronald, Elisabeth Granger, and Mavis Lever. Mavis Lever solved the signals revealing the Italian Navy's operational plans before the Battle of Cape Matapan in 1941, leading to a British victory. Although most Bletchley staff did not know the results of their work, Admiral Cunningham visited Bletchley in person a few weeks later to congratulate them. On entering World War II in June 1940, the Italians were using book codes for most of their military messages. The exception was the Italian Navy, which after the Battle of Cape Matapan started using the C-38 version of the Boris Hagelin rotor-based cipher machine, particularly to route their navy and merchant marine convoys to the conflict in North Africa. As a consequence, JRM Butler recruited his former student Bernard Willson to join a team with two others in Hut 4. In June 1941, Willson became the first of the team to decode the Hagelin system, thus enabling military commanders to direct the Royal Navy and Royal Air Force to sink enemy ships carrying supplies from Europe to Rommel's Afrika Korps. This led to increased shipping losses and, from reading the intercepted traffic, the team learnt that between May and September 1941 the stock of fuel for the Luftwaffe in North Africa was reduced by 90 percent. After an intensive language course, in March 1944 Willson switched to Japanese language-based codes. A Middle East Intelligence Centre (MEIC) was set up in Cairo in 1939. When Italy entered the war in June 1940, delays in forwarding intercepts to Bletchley via congested radio links resulted in cryptanalysts being sent to Cairo. A Combined Bureau Middle East (CBME) was set up in November, though the Middle East authorities made "increasingly bitter complaints" that GC&CS was giving too little priority to work on Italian cyphers. However, the principle of concentrating high-grade cryptanalysis at Bletchley was maintained. John Chadwick started cryptanalysis work in 1942 on Italian signals at the naval base 'HMS Nile' in Alexandria. Later, he was with GC&CS in the Heliopolis Museum, Cairo, and then in the Villa Laurens, Alexandria. Soviet signals had been studied since the 1920s. In 1939–40, John Tiltman (who had worked on Russian Army traffic from 1930) set up two Russian sections, one at Wavendon (a country house near Bletchley) and one at Sarafand in Palestine. Two Russian high-grade army and navy systems were broken early in 1940. Tiltman spent two weeks in Finland, where he obtained Russian traffic from Finland and Estonia in exchange for radio equipment. In June 1941, when the Soviet Union became an ally, Churchill ordered a halt to intelligence operations against it. In December 1941, the Russian section was closed down, but in late summer 1943 or late 1944, a small GC&CS Russian cypher section was set up in London, first overlooking Park Lane and then in Sloane Square. An outpost of the Government Code and Cypher School had been set up in Hong Kong in 1935, the Far East Combined Bureau (FECB). The FECB naval staff moved in 1940 to Singapore, then to Colombo, Ceylon, and then to Kilindini, Mombasa, Kenya. They succeeded in deciphering Japanese codes with a mixture of skill and good fortune. The Army and Air Force staff went from Singapore to the Wireless Experimental Centre at Delhi, India. 
In early 1942, a six-month crash course in Japanese, for 20 undergraduates from Oxford and Cambridge, was started by the Inter-Services Special Intelligence School in Bedford, in a building across from the main Post Office. This course was repeated every six months until war's end. Most of those completing these courses worked on decoding Japanese naval messages in Hut 7, under John Tiltman. By mid-1945, well over 100 personnel were involved with this operation, which co-operated closely with the FECB and the US Signal Intelligence Service at Arlington Hall, Virginia. In 1999, Michael Smith wrote: "Only now are the British codebreakers (like John Tiltman, Hugh Foss, and Eric Nave) beginning to receive the recognition they deserve for breaking Japanese codes and cyphers". After the war, the secrecy imposed on Bletchley staff remained in force, so that most relatives never knew more than that a child, spouse, or parent had done some kind of secret war work. Churchill referred to the Bletchley staff as "the geese that laid the golden eggs and never cackled". That said, occasional mentions of the work performed at Bletchley Park slipped the censor's net and appeared in print. With the publication of F.W. Winterbotham's "The Ultra Secret" (1974), public discussion of Bletchley's work finally became possible (though even today some former staff still consider themselves bound to silence), and in July 2009 the British government announced that Bletchley personnel would be recognised with a commemorative badge. After the war, the site passed through a succession of hands and saw a number of uses, including as a teacher-training college and local GPO headquarters. By 1991, the site was nearly empty and the buildings were at risk of demolition for redevelopment. In February 1992, the Milton Keynes Borough Council declared most of the Park a conservation area, and the Bletchley Park Trust was formed to maintain the site as a museum. The site opened to visitors in 1993, and was formally inaugurated by the Duke of Kent as Chief Patron in July 1994. In 1999 the land owners, the Property Advisors to the Civil Estate and BT, granted a lease to the Trust giving it control over most of the site. June 2014 saw the completion of an £8 million restoration project by the museum design specialist Event Communications, which was marked by a visit from Catherine, Duchess of Cambridge. The Duchess' paternal grandmother, Valerie, and Valerie's twin sister, Mary (née Glassborow), both worked at Bletchley Park during the war. The twin sisters worked as Foreign Office civilians in Hut 6, where they managed the interception of enemy and neutral diplomatic signals for decryption. Valerie married Catherine's grandfather, Captain Peter Middleton. A memorial at Bletchley Park commemorates Mary and Valerie Middleton's work as code-breakers. The Bletchley Park Learning Department offers educational group visits with active learning activities for schools and universities. Visits can be booked in advance during term time; students can engage with the history of Bletchley Park and understand its wider relevance for computer history and national security. Its workshops cover introductions to codebreaking, cyber security and the story of Enigma and Lorenz. In October 2005, American billionaire Sidney Frank donated £500,000 to Bletchley Park Trust to fund a new Science Centre dedicated to Alan Turing. 
Simon Greenish joined as Director in 2006 to lead the fund-raising effort, a post he held until 2012, when Iain Standen took over the leadership role. In July 2008, a letter to "The Times" from more than a hundred academics condemned the neglect of the site. In September 2008, PGP, IBM, and other technology firms announced a fund-raising campaign to repair the facility. On 6 November 2008 it was announced that English Heritage would donate £300,000 to help maintain the buildings at Bletchley Park, and that they were in discussions regarding the donation of a further £600,000. In October 2011, the Bletchley Park Trust received a £4.6m Heritage Lottery Fund grant to be used "to complete the restoration of the site, and to tell its story to the highest modern standards", on the condition that £1.7m of 'match funding' was raised by the Bletchley Park Trust. Just weeks later, Google contributed £550,000, and by June 2012 the trust had successfully raised £2.4m to unlock the grants to restore Huts 3 and 6, as well as to develop its exhibition centre in Block C. Additional income is raised by renting Block H to the National Museum of Computing, and some office space in various parts of the park to private firms. The National Museum of Computing is housed in Block H, which is rented from the Bletchley Park Trust. Its Colossus and Tunny galleries tell an important part of the story of the Allied breaking of German codes during World War II. There is a working reconstruction of a Bombe and a rebuilt Colossus computer of the type used on the high-level Lorenz cipher, codenamed "Tunny" by the British. The museum, which opened in 2007, is an independent voluntary organisation that is governed by its own board of trustees. Its aim is "To collect and restore computer systems particularly those developed in Britain and to enable people to explore that collection for inspiration, learning and enjoyment." Through its many exhibits, the museum displays the story of computing through the mainframes of the 1960s and 1970s, and the rise of personal computing in the 1980s. It has a policy of having as many of the exhibits as possible in full working order. The Bletchley Park Science and Innovation Centre consists of serviced office accommodation housed in Bletchley Park's Blocks A and E, and the upper floors of the Mansion. Its aim is to foster the growth and development of dynamic knowledge-based start-ups and other businesses. In April 2020 Bletchley Park Capital Partners, a private company run by Tim Reynolds, Deputy Chairman of the National Museum of Computing, announced plans to sell off the freehold to part of the site containing former Block G for commercial development. Offers of between £4m and £6m were reportedly being sought for the 3-acre plot, for which planning permission for employment purposes was granted in 2005. Previously, the construction of a National College of Cyber Security for students aged 16 to 19 had been envisaged on the site, to be housed in Block G after renovation with funds supplied by the Bletchley Park Science and Innovation Centre. The Radio Society of Great Britain's National Radio Centre (including a library, radio station, museum and bookshop) is in a newly constructed building close to the main Bletchley Park entrance. Not until July 2009 did the British government fully acknowledge the contribution of the many people working for the Government Code and Cypher School ('G C & C S') at Bletchley. Only then was a commemorative medal struck to be presented to those involved. 
The gilded medal bears the inscription "G C & C S 1939–1945 Bletchley Park and its Outstations". Bletchley Park is opposite Bletchley railway station. It is close to junctions 13 and 14 of the M1, to the northwest of London.
https://en.wikipedia.org/wiki?curid=4037
Bede Bede (672/3 – 26 May 735), also known as Saint Bede, Venerable Bede, and Bede the Venerable, was an English Benedictine monk at the monastery of St. Peter and its companion monastery of St. Paul in the Kingdom of Northumbria of the Angles (now Monkwearmouth–Jarrow Abbey in Tyne and Wear, England). Born on lands belonging to the twin monastery of Monkwearmouth-Jarrow in present-day Tyne and Wear, Bede was sent to Monkwearmouth at the age of seven and later joined Abbot Ceolfrith at Jarrow; both survived a plague that struck in 686, an outbreak that killed a majority of the population there. While he spent most of his life in the monastery, Bede travelled to several abbeys and monasteries across the British Isles, even visiting the archbishop of York and King Ceolwulf of Northumbria. He is well known as an author, teacher (Alcuin was a student of one of his pupils), and scholar, and his most famous work, "Ecclesiastical History of the English People", gained him the title "The Father of English History". His ecumenical writings were extensive and included a number of Biblical commentaries and other theological works of exegetical erudition. Another important area of study for Bede was the academic discipline of "computus", otherwise known to his contemporaries as the science of calculating calendar dates. One of the more important dates Bede tried to compute was Easter, an effort that was mired in controversy. He also helped popularize the practice of dating forward from the birth of Christ ("Anno Domini", in the year of our Lord), a practice which eventually became commonplace in medieval Europe. Bede was one of the greatest teachers and writers of the Early Middle Ages and is considered by many historians to be the most important scholar of antiquity for the period between the death of Pope Gregory I in 604 and the coronation of Charlemagne in 800. In 1899, Pope Leo XIII declared him a Doctor of the Church. He is the only native of Great Britain to achieve this designation; Anselm of Canterbury, also a Doctor of the Church, was originally from Italy. Bede was moreover a skilled linguist and translator, and his work made the Latin and Greek writings of the early Church Fathers much more accessible to his fellow Anglo-Saxons, which contributed significantly to English Christianity. Bede's monastery had access to an impressive library which included works by Eusebius, Orosius, and many others. Almost everything that is known of Bede's life is contained in the last chapter of his "Ecclesiastical History of the English People", a history of the church in England. It was completed in about 731, and Bede implies that he was then in his fifty-ninth year, which would give a birth date in 672 or 673. A minor source of information is the letter by his disciple Cuthbert (not to be confused with the saint Cuthbert, who is mentioned in Bede's work), which relates Bede's death. Bede, in the "Historia", gives his birthplace as "on the lands of this monastery". He is referring to the twinned monasteries of Monkwearmouth and Jarrow, in modern-day Wearside and Tyneside respectively; there is also a tradition that he was born at Monkton, two miles from the site where the monastery at Jarrow was later built. Bede says nothing of his origins, but his connections with men of noble ancestry suggest that his own family was well-to-do. 
Bede's first abbot was Benedict Biscop, and the names "Biscop" and "Beda" both appear in a list of the kings of Lindsey from around 800, further suggesting that Bede came from a noble family. Bede's name reflects West Saxon "Bīeda" (Northumbrian "Bǣda", Anglian "Bēda"). It is an Anglo-Saxon short name formed on the root of "bēodan" "to bid, command". The name also occurs in the "Anglo-Saxon Chronicle", s.a. 501, as "Bieda", one of the sons of the Saxon founder of Portsmouth. The "Liber Vitae" of Durham Cathedral names two priests with this name, one of whom is presumably Bede himself. Some manuscripts of the "Life of Cuthbert", one of Bede's works, mention that Cuthbert's own priest was named Bede; it is possible that this priest is the other name listed in the "Liber Vitae". At the age of seven, Bede was sent as a "puer oblatus" to the monastery of Monkwearmouth by his family to be educated by Benedict Biscop and later by Ceolfrith. Bede does not say whether it was already intended at that point that he would be a monk. It was fairly common in Ireland at this time for young boys, particularly those of noble birth, to be fostered out as an oblate; the practice was also likely to have been common among the Germanic peoples in England. Monkwearmouth's sister monastery at Jarrow was founded by Ceolfrith in 682, and Bede probably transferred to Jarrow with Ceolfrith that year. The dedication stone for the church has survived to the present day; it is dated 23 April 685, and as Bede would have been required to assist with menial tasks in his day-to-day life it is possible that he helped in building the original church. In 686, plague broke out at Jarrow. The "Life of Ceolfrith", written in about 710, records that only two surviving monks were capable of singing the full offices; one was Ceolfrith and the other a young boy, who according to the anonymous writer had been taught by Ceolfrith. The two managed to do the entire service of the liturgy until others could be trained. The young boy was almost certainly Bede, who would have been about 14. When Bede was about 17 years old, Adomnán, the abbot of Iona Abbey, visited Monkwearmouth and Jarrow. Bede would probably have met the abbot during this visit, and it may be that Adomnán sparked Bede's interest in the Easter dating controversy. In about 692, in Bede's nineteenth year, he was ordained a deacon by his diocesan bishop, John, who was bishop of Hexham. The canonical age for the ordination of a deacon was 25; Bede's early ordination may mean that his abilities were considered exceptional, but it is also possible that the minimum age requirement was often disregarded. There might have been minor orders ranking below a deacon, but there is no record of whether Bede held any of these offices. In Bede's thirtieth year (about 702), he became a priest, with the ordination again performed by Bishop John. In about 701 Bede wrote his first works, the "De Arte Metrica" and "De Schematibus et Tropis"; both were intended for use in the classroom. He continued to write for the rest of his life, eventually completing over 60 books, most of which have survived. Not all his output can be easily dated, and Bede may have worked on some texts over a period of many years. His last surviving work is a letter to Ecgbert of York, a former student, written in 734. 
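Much of that output concerned the "computus", the calendar reckoning mentioned earlier, and above all the contentious dating of Easter. As a modern illustration only, the Python sketch below implements the Meeus/Jones/Butcher algorithm for the date of Easter in the Gregorian calendar; Bede himself worked with Julian-calendar 19-year lunar cycle tables, so this shows the flavour of the arithmetic involved rather than his actual method.

def gregorian_easter(year):
    """Date of Easter Sunday in the Gregorian calendar (Meeus/Jones/Butcher)."""
    a = year % 19                         # position in the 19-year lunar cycle
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30    # locates the paschal full moon
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7  # days until the following Sunday
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1

print(gregorian_easter(2024))  # (3, 31): Easter fell on 31 March 2024

The interleaving of a 19-year lunar cycle with the solar calendar, visible in the arithmetic above, is precisely what made Easter reckoning contentious in Bede's day.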
A 6th-century Greek and Latin manuscript of the "Acts of the Apostles" that is believed to have been used by Bede survives and is now in the Bodleian Library at the University of Oxford; it is known as the Codex Laudianus. Bede may also have worked on some of the Latin Bibles that were copied at Jarrow, one of which, the Codex Amiatinus, is now held by the Laurentian Library in Florence. Bede was a teacher as well as a writer; he enjoyed music and was said to be accomplished as a singer and as a reciter of poetry in the vernacular. It is possible that he suffered a speech impediment, but this depends on a phrase in the introduction to his verse life of Saint Cuthbert. Translations of this phrase differ, and it is uncertain whether Bede intended to say that he was cured of a speech problem, or merely that he was inspired by the saint's works. In 708, some monks at Hexham accused Bede of having committed heresy in his work "De Temporibus". The standard theological view of world history at the time was known as the Six Ages of the World; in his book, Bede calculated the age of the world for himself, rather than accepting the authority of Isidore of Seville, and came to the conclusion that Christ had been born 3,952 years after the creation of the world, rather than the figure of over 5,000 years that was commonly accepted by theologians. The accusation was made in front of Wilfrid, the bishop of Hexham, who was present at a feast when some drunken monks voiced it. Wilfrid did not respond to the accusation, but a monk present relayed the episode to Bede, who replied within a few days, writing a letter setting forth his defence and asking that the letter also be read to Wilfrid. Bede had another brush with Wilfrid, for Bede himself says that he met Wilfrid sometime between 706 and 709 and discussed Æthelthryth, the abbess of Ely. Wilfrid had been present at the exhumation of her body in 695, and Bede questioned the bishop about the exact circumstances of the body and asked for more details of her life, as Wilfrid had been her advisor. In 733, Bede travelled to York to visit Ecgbert, who was then bishop of York. The See of York was elevated to an archbishopric in 735, and it is likely that Bede and Ecgbert discussed the proposal for the elevation during his visit. Bede hoped to visit Ecgbert again in 734 but was too ill to make the journey. Bede also travelled to the monastery of Lindisfarne and at some point visited the otherwise-unknown monastery of a fellow monk, a visit that is mentioned in a letter to that monk. Because of his widespread correspondence with others throughout the British Isles, and because many of the letters imply that Bede had met his correspondents, it is likely that Bede travelled to some other places, although nothing further about timing or locations can be guessed. It seems certain that he did not visit Rome, however, as he did not mention it in the autobiographical chapter of his "Historia Ecclesiastica". Nothhelm, a correspondent of Bede's who assisted him by finding documents for him in Rome, is known to have visited Bede, though the date cannot be determined beyond the fact that it was after Nothhelm's visit to Rome. Except for a few visits to other monasteries, his life was spent in a round of prayer, observance of the monastic discipline and study of the Sacred Scriptures. He was considered the most learned man of his time and wrote excellent biblical and historical books. 
Bede died on the Feast of the Ascension, Thursday, 26 May 735, on the floor of his cell, singing "Glory be to the Father and to the Son and to the Holy Spirit", and was buried at Jarrow. Cuthbert, a disciple of Bede's, wrote a letter to a Cuthwin (of whom nothing else is known), describing Bede's last days and his death. According to Cuthbert, Bede fell ill, "with frequent attacks of breathlessness but almost without pain", before Easter. On the Tuesday, two days before Bede died, his breathing became worse and his feet swelled. He continued to dictate to a scribe, however, and despite spending the night awake in prayer he dictated again the following day. At three o'clock, according to Cuthbert, he asked for a box of his to be brought, and distributed among the priests of the monastery "a few treasures" of his: "some pepper, and napkins, and some incense". That night he dictated a final sentence to the scribe, a boy named Wilberht, and died soon afterwards. The account of Cuthbert does not make entirely clear whether Bede died before midnight or after. However, by the reckoning of Bede's time, passage from the old day to the new occurred at sunset, not midnight, and Cuthbert is clear that he died after sunset. Thus, while his box was brought at three o'clock on the Wednesday afternoon of 25 May, by the time of the final dictation it might be considered already 26 May in that ecclesiastical sense, although 25 May in the ordinary sense. Cuthbert's letter also relates a five-line poem in the vernacular that Bede composed on his deathbed, known as "Bede's Death Song". It is the most widely copied Old English poem and appears in 45 manuscripts, but its attribution to Bede is not certain: not all manuscripts name Bede as the author, and the ones that do are of later origin than those that do not. Bede's remains may have been transferred to Durham Cathedral in the 11th century; his tomb there was looted in 1541, but the contents were probably re-interred in the Galilee chapel at the cathedral. One further oddity in his writings is that in one of his works, the "Commentary on the Seven Catholic Epistles", he writes in a manner that gives the impression he was married. The section in question is the only one in that work that is written in the first person. Bede says: "Prayers are hindered by the conjugal duty because as often as I perform what is due to my wife I am not able to pray." Another passage, in the "Commentary on Luke", also mentions a wife in the first person: "Formerly I possessed a wife in the lustful passion of desire and now I possess her in honourable sanctification and true love of Christ." The historian Benedicta Ward argues that these passages are Bede employing a rhetorical device. Bede wrote scientific, historical and theological works, reflecting the range of his writings from music and metrics to exegetical Scripture commentaries. He knew patristic literature, as well as Pliny the Elder, Virgil, Lucretius, Ovid, Horace and other classical writers. He knew some Greek. Bede's scriptural commentaries employed the allegorical method of interpretation, and his history includes accounts of miracles, which to modern historians has seemed at odds with his critical approach to the materials in his history. Modern studies have shown the important role such concepts played in the world-view of Early Medieval scholars. 
Although Bede is mainly studied as an historian now, in his time his works on grammar, chronology, and biblical studies were as important as his historical and hagiographical works. The non-historical works contributed greatly to the Carolingian renaissance. He has been credited with writing a penitential, though his authorship of this work is disputed.

Bede's best-known work is the "Historia ecclesiastica gentis Anglorum", or "An Ecclesiastical History of the English People", completed in about 731. Bede was aided in writing this book by Albinus, abbot of St Augustine's Abbey, Canterbury. The first of the five books begins with some geographical background and then sketches the history of England, beginning with Caesar's invasion in 55 BC. A brief account of Christianity in Roman Britain, including the martyrdom of St Alban, is followed by the story of Augustine's mission to England in 597, which brought Christianity to the Anglo-Saxons. The second book begins with the death of Gregory the Great in 604 and follows the further progress of Christianity in Kent and the first attempts to evangelise Northumbria. These ended in disaster when Penda, the pagan king of Mercia, killed the newly Christian Edwin of Northumbria at the Battle of Hatfield Chase in about 632. The setback was temporary, and the third book recounts the growth of Christianity in Northumbria under kings Oswald of Northumbria and Oswy. The climax of the third book is the account of the Council of Whitby, traditionally seen as a major turning point in English history. The fourth book begins with the consecration of Theodore as Archbishop of Canterbury and recounts Wilfrid's efforts to bring Christianity to the Kingdom of Sussex. The fifth book brings the story up to Bede's day and includes an account of missionary work in Frisia and of the conflict with the British church over the correct dating of Easter.

Bede wrote a preface for the work, in which he dedicates it to Ceolwulf, king of Northumbria. The preface mentions that Ceolwulf received an earlier draft of the book; presumably Ceolwulf knew enough Latin to understand it, and he may even have been able to read it. The preface makes it clear that Ceolwulf had requested the earlier copy, and Bede had asked for Ceolwulf's approval; this correspondence with the king indicates that Bede's monastery had connections among the Northumbrian nobility.

The monastery at Wearmouth-Jarrow had an excellent library. Both Benedict Biscop and Ceolfrith had acquired books from the Continent, and in Bede's day the monastery was a renowned centre of learning. It has been estimated that there were about 200 books in the monastic library. For the period prior to Augustine's arrival in 597, Bede drew on earlier writers, including Solinus. He had access to two works of Eusebius: the "Historia Ecclesiastica", and also the "Chronicon", though he had neither in the original Greek; instead he had a Latin translation of the "Historia", by Rufinus, and Saint Jerome's translation of the "Chronicon". He also knew Orosius's "Adversus Paganos", and Gregory of Tours' "Historia Francorum", both Christian histories, as well as the work of Eutropius, a pagan historian. He used Constantius's "Life of Germanus" as a source for Germanus's visits to Britain. Bede's account of the invasion of the Anglo-Saxons is drawn largely from Gildas's "De Excidio et Conquestu Britanniae".
Bede would also have been familiar with more recent accounts, such as Stephen of Ripon's "Life of Wilfrid" and the anonymous "Life of Gregory the Great" and "Life of Cuthbert". He also drew on Josephus's "Antiquities" and the works of Cassiodorus, and there was a copy of the "Liber Pontificalis" in Bede's monastery. Bede quotes from several classical authors, including Cicero, Plautus, and Terence, but he may have had access to their work via a Latin grammar rather than directly. However, it is clear he was familiar with the works of Virgil and with Pliny the Elder's "Natural History", and his monastery also owned copies of the works of Dionysius Exiguus. He probably drew his account of St. Alban from a life of that saint which has not survived. He acknowledges two other lives of saints directly; one is a life of Fursa, and the other of St. Æthelburh; the latter no longer survives. He also had access to a life of Ceolfrith.

Some of Bede's material came from oral traditions, including a description of the physical appearance of Paulinus of York, who had died nearly 90 years before Bede's "Historia Ecclesiastica" was written. Bede also had correspondents who supplied him with material. Albinus, the abbot of the monastery in Canterbury, provided much information about the church in Kent and, with the assistance of Nothhelm, at that time a priest in London, obtained copies of Gregory the Great's correspondence from Rome relating to Augustine's mission. Almost all of Bede's information regarding Augustine is taken from these letters. Bede acknowledged his correspondents in the preface to the "Historia Ecclesiastica"; he was in contact with Bishop Daniel of Winchester for information about the history of the church in Wessex, and also wrote to the monastery at Lastingham for information about Cedd and Chad. Bede also mentions an Abbot Esi as a source for the affairs of the East Anglian church, and Bishop Cynibert for information about Lindsey.

The historian Walter Goffart argues that Bede based the structure of the "Historia" on three works, using them as the framework around which the three main sections of the work were structured. For the early part of the work, up until the Gregorian mission, Goffart feels that Bede used the "De excidio". The second section, detailing the Gregorian mission of Augustine of Canterbury, was framed on the "Life of Gregory the Great" written at Whitby. The last section, detailing events after the Gregorian mission, was, Goffart feels, modelled on the "Life of Wilfrid". Most of Bede's informants for the period after Augustine's mission came from the eastern part of Britain, leaving significant gaps in the knowledge of the western areas, which were those areas likely to have a native Briton presence.

Bede's stylistic models included some of the same authors from whom he drew the material for the earlier parts of his history. His introduction imitates the work of Orosius, and his title is an echo of Eusebius's "Historia Ecclesiastica". Bede also followed Eusebius in taking the "Acts of the Apostles" as the model for the overall work: where Eusebius used the "Acts" as the theme for his description of the development of the church, Bede made it the model for his history of the Anglo-Saxon church. Bede quoted his sources at length in his narrative, as Eusebius had done. Bede also appears to have taken quotes directly from his correspondents at times.
For example, he almost always uses the terms "Australes" and "Occidentales" for the South and West Saxons respectively, but in a passage in the first book he uses "Meridiani" and "Occidui" instead, as perhaps his informant had done. At the end of the work, Bede adds a brief autobiographical note; this was an idea taken from Gregory of Tours' earlier "History of the Franks". Bede's work as a hagiographer and his detailed attention to dating were both useful preparations for the task of writing the "Historia Ecclesiastica". His interest in computus, the science of calculating the date of Easter, was also useful in the account he gives of the controversy between the British and Anglo-Saxon church over the correct method of obtaining the Easter date.

Bede is described by Michael Lapidge as "without question the most accomplished Latinist produced in these islands in the Anglo-Saxon period". His Latin has been praised for its clarity, but his style in the "Historia Ecclesiastica" is not simple. He knew rhetoric and often used figures of speech and rhetorical forms which cannot easily be reproduced in translation, depending as they often do on the connotations of the Latin words. However, unlike contemporaries such as Aldhelm, whose Latin is full of difficulties, Bede's own text is easy to read. In the words of Charles Plummer, one of the best-known editors of the "Historia Ecclesiastica", Bede's Latin is "clear and limpid ... it is very seldom that we have to pause to think of the meaning of a sentence ... Alcuin rightly praises Bede for his unpretending style."

Bede's primary intention in writing the "Historia Ecclesiastica" was to show the growth of the united church throughout England. The native Britons, whose Christian church survived the departure of the Romans, earn Bede's ire for refusing to help convert the Saxons; by the end of the "Historia" the English, and their church, are dominant over the Britons. This goal, of showing the movement towards unity, explains Bede's animosity towards the British method of calculating Easter: much of the "Historia" is devoted to a history of the dispute, including the final resolution at the Synod of Whitby in 664. Bede is also concerned to show the unity of the English, despite the disparate kingdoms that still existed when he was writing. He also wants to instruct the reader by spiritual example and to entertain, and to the latter end he adds stories about many of the places and people about which he wrote.

N.J. Higham argues that Bede designed his work to promote his reform agenda to Ceolwulf, the Northumbrian king. Bede painted a highly optimistic picture of the current situation in the Church, as opposed to the more pessimistic picture found in his private letters. Bede's extensive use of miracles can prove difficult for readers who consider him a more or less reliable historian but do not accept the possibility of miracles. Yet his miracle stories and his critical history alike reflect an integrity and regard for accuracy and truth, expressed in terms both of historical events and of a tradition of Christian faith that continues to the present day. Bede, like Gregory the Great whom Bede quotes on the subject in the "Historia", felt that faith brought about by miracles was a stepping stone to a higher, truer faith, and that as a result miracles had their place in a work designed to instruct.

Bede is somewhat reticent about the career of Wilfrid, a contemporary and one of the most prominent clerics of his day.
This may be because Wilfrid's opulent lifestyle was uncongenial to Bede's monastic mind; it may also be that the events of Wilfrid's life, divisive and controversial as they were, simply did not fit with Bede's theme of the progression to a unified and harmonious church. Bede's account of the early migrations of the Angles and Saxons to England omits any mention of a movement of those peoples across the English Channel from Britain to Brittany described by Procopius, who was writing in the sixth century. Frank Stenton describes this omission as "a scholar's dislike of the indefinite"; traditional material that could not be dated or used for Bede's didactic purposes had no interest for him.

Bede was a Northumbrian, and this tinged his work with a local bias. The sources to which he had access gave him less information about the west of England than for other areas. He says relatively little about the achievements of Mercia and Wessex, omitting, for example, any mention of Boniface, a West Saxon missionary to the continent of some renown, of whom Bede had almost certainly heard, though Bede does discuss Northumbrian missionaries to the continent. He also is parsimonious in his praise for Aldhelm, a West Saxon who had done much to convert the native Britons to the Roman form of Christianity. He lists seven kings of the Anglo-Saxons whom he regards as having held "imperium", or overlordship; only one king of Wessex, Ceawlin, is listed, and none from Mercia, though elsewhere he acknowledges the secular power several of the Mercians held. The historian Robin Fleming states that Bede was so hostile to Mercia, because Northumbria had been diminished by Mercian power, that he consulted no Mercian informants and included no stories about its saints.

Bede relates the story of Augustine's mission from Rome, and tells how the British clergy refused to assist Augustine in the conversion of the Anglo-Saxons. This, combined with Gildas's negative assessment of the British church at the time of the Anglo-Saxon invasions, led Bede to a very critical view of the native church. However, Bede ignores the fact that at the time of Augustine's mission, the history between the two peoples was one of warfare and conquest, which, in the words of Barbara Yorke, would have naturally "curbed any missionary impulses towards the Anglo-Saxons from the British clergy."

At the time Bede wrote the "Historia Ecclesiastica", there were two common ways of referring to dates. One was to use indictions, which were 15-year cycles counting from 312 AD. There were three different varieties of indiction, each starting on a different day of the year. The other approach was to use regnal years—those of the reigning Roman emperor, for example, or of the ruler of whichever kingdom was under discussion. This meant that in discussing conflicts between kingdoms, the date would have to be given in the regnal years of all the kings involved. Bede used both these approaches on occasion but adopted a third method as his main approach to dating: the "Anno Domini" method invented by Dionysius Exiguus. Although Bede did not invent this method, his adoption of it, and his promulgation of it in "De Temporum Ratione", his work on chronology, is the main reason it is now so widely used. Beda Venerabilis' Easter table, contained in "De Temporum Ratione", was developed from Dionysius Exiguus' famous Paschal table.

The "Historia Ecclesiastica" was copied often in the Middle Ages, and about 160 manuscripts containing it survive.
About half of those are located on the European continent, rather than in the British Isles. Most of the 8th- and 9th-century texts of Bede's "Historia" come from the northern parts of the Carolingian Empire. This total does not include manuscripts with only a part of the work, of which another 100 or so survive. It was printed for the first time between 1474 and 1482, probably at Strasbourg.

Modern historians have studied the "Historia" extensively, and several editions have been produced. For many years, early Anglo-Saxon history was essentially a retelling of the "Historia", but recent scholarship has focused as much on what Bede did not write as on what he did. The belief that the "Historia" was the culmination of Bede's works, the aim of all his scholarship, was common among historians in the past but is no longer accepted by most scholars. Modern historians and editors of Bede have been lavish in their praise of his achievement in the "Historia Ecclesiastica". Stenton regards it as one of the "small class of books which transcend all but the most fundamental conditions of time and place", and regards its quality as dependent on Bede's "astonishing power of co-ordinating the fragments of information which came to him through tradition, the relation of friends, or documentary evidence ... In an age where little was attempted beyond the registration of fact, he had reached the conception of history." Patrick Wormald describes him as "the first and greatest of England's historians".

The "Historia Ecclesiastica" has given Bede a high reputation, but his concerns were different from those of a modern writer of history. His focus on the history of the organisation of the English church, and on heresies and the efforts made to root them out, led him to exclude the secular history of kings and kingdoms except where a moral lesson could be drawn or where they illuminated events in the church. Besides the "Anglo-Saxon Chronicle", the medieval writers William of Malmesbury, Henry of Huntingdon, and Geoffrey of Monmouth used his works as sources and inspirations. Early modern writers, such as Polydore Vergil and Matthew Parker, the Elizabethan Archbishop of Canterbury, also utilised the "Historia", and his works were used by both Protestant and Catholic sides in the wars of religion. Some historians have questioned the reliability of some of Bede's accounts. One historian, Charlotte Behr, thinks that the "Historia's" account of the arrival of the Germanic invaders in Kent should not be considered to relate what actually happened, but rather to relate myths that were current in Kent during Bede's time. It is likely that Bede's work, because it was so widely copied, discouraged others from writing histories and may even have led to the disappearance of manuscripts containing older historical works.

In 725, Bede wrote the "Greater Chronicle" ("chronica maiora") as Chapter 66 of his "On the Reckoning of Time"; it sometimes circulated as a separate work. For recent events the "Chronicle", like his "Ecclesiastical History", relied upon Gildas, upon a version of the "Liber Pontificalis" current at least to the papacy of Pope Sergius I (687–701), and upon other sources. For earlier events he drew on Eusebius's "Chronikoi Kanones". The dating of events in the "Chronicle" is inconsistent with his other works, using the era of creation, the "Anno Mundi".
His other historical works included lives of the abbots of Wearmouth and Jarrow, as well as verse and prose lives of Saint Cuthbert of Lindisfarne, an adaptation of Paulinus of Nola's "Life of St Felix", and a translation of the Greek "Passion of St Anastasius". He also created a listing of saints, the "Martyrology".

In his own time, Bede was as well known for his biblical commentaries and other exegetical and theological works. The majority of his writings were of this type and covered the Old Testament and the New Testament. Most survived the Middle Ages, but a few were lost. It was for his theological writings that he earned the title of "Doctor Anglorum" and was declared a saint. Bede synthesised and transmitted the learning of his predecessors, and also made careful, judicious innovations in knowledge that had theological implications, such as recalculating the age of the earth, for which he was censured before surviving the heresy accusations; his views were eventually championed by Archbishop Ussher in the seventeenth century (see below). In order to do this, he learned Greek and attempted to learn Hebrew. He spent time reading and rereading both the Old and the New Testaments. He mentions that he studied from a text of Jerome's Vulgate, which itself was translated from the Hebrew text. He also studied both the Latin and the Greek Fathers of the Church.

In the monastic library at Jarrow were numerous books by theologians, including works by Basil, Cassian, John Chrysostom, Isidore of Seville, Origen, Gregory of Nazianzus, Augustine of Hippo, Jerome, Pope Gregory I, Ambrose of Milan, Cassiodorus, and Cyprian. He used these, in conjunction with the Biblical texts themselves, to write his commentaries and other theological works. He had a Latin translation by Evagrius of Athanasius's "Life of Antony" and a copy of Sulpicius Severus' "Life of St. Martin". He also used lesser-known writers, such as Fulgentius, Julian of Eclanum, Tyconius, and Prosper of Aquitaine. Bede was the first to refer to Jerome, Augustine, Pope Gregory and Ambrose as the four Latin Fathers of the Church. It is clear from Bede's own comments that he felt his calling was to explain to his students and readers the theology and thoughts of the Church Fathers.

Bede also wrote homilies, works written to explain theology used in worship services. He wrote homilies on the major Christian seasons such as Advent, Lent, or Easter, as well as on other subjects such as anniversaries of significant events. Both types of Bede's theological works circulated widely in the Middle Ages. Several of his biblical commentaries were incorporated into the "Glossa Ordinaria", an 11th-century collection of biblical commentaries. Some of Bede's homilies were collected by Paul the Deacon, and they were used in that form in the Monastic Office. Saint Boniface used Bede's homilies in his missionary efforts on the continent.

Bede sometimes included in his theological books an acknowledgement of the predecessors on whose works he drew. In two cases he left instructions that his marginal notes, which gave the details of his sources, should be preserved by the copyist, and he may have originally added marginal comments about his sources to others of his works. Where he does not specify, it is still possible to identify books to which he must have had access by the quotations that he uses.
A full catalogue of the library available to Bede in the monastery cannot be reconstructed, but it is possible to tell, for example, that Bede was very familiar with the works of Virgil. There is little evidence that he had access to any other of the pagan Latin writers—he quotes many of these writers, but the quotations are almost all found in the Latin grammars that were common in his day, one or more of which would certainly have been at the monastery. Another difficulty is that manuscripts of early writers were often incomplete: it is apparent that Bede had access to Pliny's "Encyclopedia", for example, but it seems that the version he had was missing book xviii, since he did not quote from it in his "De temporum ratione".

Bede's works included "Commentary on Revelation", "Commentary on the Catholic Epistles", "Commentary on Acts", "Reconsideration on the Books of Acts", "On the Gospel of Mark", "On the Gospel of Luke", and "Homilies on the Gospels". At the time of his death he was working on a translation of the Gospel of St. John into English, a task that occupied the last 40 days of his life. When the last passage had been translated, he said: "All is finished." The works dealing with the Old Testament included "Commentary on Samuel", "Commentary on Genesis", "Commentaries on Ezra and Nehemiah", "On the Temple", "On the Tabernacle", "Commentaries on Tobit", "Commentaries on Proverbs", "Commentaries on the Song of Songs", and "Commentaries on the Canticle of Habakkuk". The works on Ezra, the tabernacle and the temple were especially influenced by Gregory the Great's writings.

"De temporibus", or "On Time", written in about 703, provides an introduction to the principles of Easter computus. This was based on parts of Isidore of Seville's "Etymologies", and Bede also included a chronology of the world which was derived from Eusebius, with some revisions based on Jerome's translation of the Bible. In about 723, Bede wrote a longer work on the same subject, "On the Reckoning of Time", which was influential throughout the Middle Ages. He also wrote several shorter letters and essays discussing specific aspects of computus.

"On the Reckoning of Time" ("De temporum ratione") included an introduction to the traditional ancient and medieval view of the cosmos, including an explanation of how the spherical earth influenced the changing length of daylight and of how the seasonal motion of the Sun and Moon influenced the changing appearance of the new moon at evening twilight. Bede also records the effect of the moon on tides. He shows that the twice-daily timing of tides is related to the Moon and that the lunar monthly cycle of spring and neap tides is also related to the Moon's position. He goes on to note that the times of tides vary along the same coast and that the water movements cause low tide at one place when there is high tide elsewhere.

Since the focus of his book was the computus, Bede gave instructions for computing the date of Easter from the date of the Paschal full moon, for calculating the motion of the Sun and Moon through the zodiac, and for many other calculations related to the calendar. He gives some information about the months of the Anglo-Saxon calendar. Any codex of Beda Venerabilis' Easter table is normally found together with a codex of his "De temporum ratione".
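The length of the 532-year Paschal cycle described below follows from simple arithmetic: the dates of the Paschal full moon repeat every 19 years (the lunar, or Metonic, cycle mentioned below), while in the Julian calendar the pairing of calendar dates with weekdays repeats every 28 years (7 weekdays across the 4-year leap cycle), so the full sequence of Easter Sundays can only repeat after the least common multiple of the two, 19 × 28 = 532 years. A minimal sketch of this calculation in Python (the variable names are illustrative only, not Bede's terminology; Bede, of course, worked the tables out by hand):

    from math import gcd

    def lcm(a, b):
        # least common multiple of two cycle lengths
        return a * b // gcd(a, b)

    LUNAR_CYCLE = 19   # years until the Paschal full moon dates repeat (Metonic cycle)
    SOLAR_CYCLE = 28   # years until Julian dates fall on the same weekdays again (7 * 4)

    paschal_cycle = lcm(LUNAR_CYCLE, SOLAR_CYCLE)
    print(paschal_cycle)       # 532
    # Bede's table, covering AD 532-1063 inclusive, spans exactly one such cycle:
    print(1063 - 532 + 1)      # 532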
Bede's Easter table, being an exact extension of Dionysius Exiguus' Paschal table and covering the time interval AD 532–1063, contains a 532-year Paschal cycle based on the so-called classical Alexandrian 19-year lunar cycle, a close variant of bishop Theophilus' 19-year lunar cycle that was proposed by Annianus and adopted by bishop Cyril of Alexandria around AD 425. Its ultimate predecessor, a similar but rather different Metonic 19-year lunar cycle, was the one invented by Anatolius around AD 260.

For calendric purposes, Bede made a new calculation of the age of the world since the creation, which he dated as 3952 BC. Because of his innovations in computing the age of the world, he was accused of heresy at the table of Bishop Wilfrid, his chronology being contrary to accepted calculations. Once informed of the accusations of these "lewd rustics", Bede refuted them in his Letter to Plegwin.

In addition to these works on astronomical timekeeping, he also wrote "De natura rerum", or "On the Nature of Things", modelled in part after the work of the same title by Isidore of Seville. His works were so influential that late in the ninth century Notker the Stammerer, a monk of the Monastery of St. Gall in Switzerland, wrote that "God, the orderer of natures, who raised the Sun from the East on the fourth day of Creation, in the sixth day of the world has made Bede rise from the West as a new Sun to illuminate the whole Earth".

Bede wrote some works designed to help teach grammar in the abbey school. One of these was "De arte metrica", a discussion of the composition of Latin verse, drawing on previous grammarians' work. It was based on Donatus' "De pedibus" and Servius' "De finalibus" and used examples from Christian poets as well as Virgil. It became a standard text for the teaching of Latin verse during the next few centuries. Bede dedicated this work to Cuthbert, apparently a student, for he is named "beloved son" in the dedication, and Bede says, "I have laboured to educate you in divine letters and ecclesiastical statutes." "De orthographia" is a work on orthography, designed to help a medieval reader of Latin with unfamiliar abbreviations and words from classical Latin works. Although it could serve as a textbook, it appears to have been mainly intended as a reference work. The date of composition for both of these works is unknown.

"De schematibus et tropis sacrae scripturae" discusses the Bible's use of rhetoric. Bede was familiar with pagan authors such as Virgil, but it was not considered appropriate to teach biblical grammar from such texts, and Bede argues for the superiority of Christian texts in understanding Christian literature. Similarly, his text on poetic metre uses only Christian poetry for examples.

According to his disciple Cuthbert, Bede was "doctus in nostris carminibus" ("learned in our songs"). Cuthbert's letter on Bede's death, the "Epistola Cuthberti de obitu Bedae", moreover, is commonly understood to indicate that Bede composed the five-line vernacular poem known to modern scholars as "Bede's Death Song". As Opland notes, however, it is not entirely clear that Cuthbert is attributing this text to Bede: most manuscripts of the letter do not use a finite verb to describe Bede's presentation of the song, and the theme was relatively common in Old English and Anglo-Latin literature.
The fact that Cuthbert's description places the performance of the Old English poem in the context of a series of quoted passages from Sacred Scripture might, indeed, be taken as evidence simply that Bede also cited analogous vernacular texts. On the other hand, the inclusion of the Old English text of the poem in Cuthbert's Latin letter, the observation that Bede "was learned in our song", and the fact that Bede composed a Latin poem on the same subject all point to the possibility of his having written it. By citing the poem directly, Cuthbert seems to imply that its particular wording was somehow important, either because it was a vernacular poem endorsed by a scholar who evidently frowned upon secular entertainment or because it is a direct quotation of Bede's last original composition.

There is no evidence for a cult being paid to Bede in England in the 8th century. One reason for this may be that he died on the feast day of Augustine of Canterbury. Later, when he was venerated in England, he was either commemorated after Augustine on 26 May, or his feast was moved to 27 May. However, he was venerated outside England, mainly through the efforts of Boniface and Alcuin, both of whom promoted the cult on the continent. Boniface wrote repeatedly back to England during his missionary efforts, requesting copies of Bede's theological works. Alcuin, who was taught at the school set up in York by Bede's pupil Ecgbert, praised Bede as an example for monks to follow and was instrumental in disseminating Bede's works to all of Alcuin's friends. Bede's cult became prominent in England during the 10th-century revival of monasticism and by the 14th century had spread to many of the cathedrals of England. Wulfstan, Bishop of Worcester, was a particular devotee of Bede's, dedicating a church to him in 1062, which was Wulfstan's first undertaking after his consecration as bishop.

His body was 'translated' (the ecclesiastical term for relocation of relics) from Jarrow to Durham Cathedral around 1020, where it was placed in the same tomb with Saint Cuthbert of Lindisfarne. Bede's remains were later moved to a shrine in the Galilee Chapel at Durham Cathedral in 1370. The shrine was destroyed during the English Reformation, but the bones were reburied in the chapel. In 1831 the bones were dug up and then reburied in a new tomb, which is still there. Other relics were claimed by York, Glastonbury and Fulda.

His scholarship and importance to Catholicism were recognised in 1899 when he was declared a Doctor of the Church; he is the only Englishman so named. He is also the only Englishman in Dante's "Paradise" ("Paradiso" X.130), mentioned among theologians and doctors of the church in the same canto as Isidore of Seville and the Scot Richard of St. Victor. His feast day was included in the General Roman Calendar in 1899, for celebration on 27 May rather than on his date of death, 26 May, which was then the feast day of Saint Philip Neri. He is venerated in both the Anglican and Catholic Church, with a feast day of 25 May, and in the Eastern Orthodox Church, with a feast day on 27 May (Βεδέα του Ομολογητού, "Bede the Confessor").

Bede became known as "Venerable Bede" (Latin: "Beda Venerabilis") by the 9th century because of his holiness, but this was not linked to consideration for sainthood by the Catholic Church. According to a legend, the epithet was miraculously supplied by angels, thus completing his unfinished epitaph.
The epithet is first attested in connection with Bede in the 9th century, when he was grouped with others who were called "venerable" at two ecclesiastical councils held at Aachen in 816 and 836. Paul the Deacon referred to him consistently as venerable. By the 11th and 12th centuries, the epithet had become commonplace.

Bede's reputation as a historian, based mostly on the "Historia Ecclesiastica", remains strong; the historian Walter Goffart says of Bede that he "holds a privileged and unrivalled place among first historians of Christian Europe". His life and work have been celebrated with the annual Jarrow Lecture, held at St. Paul's Church, Jarrow, since 1958. Jarrow Hall – Anglo-Saxon Farm, Village and Bede Museum (previously known as Bede's World) is a museum that celebrates the history of Bede and other parts of English heritage, on the site where he lived.
https://en.wikipedia.org/wiki?curid=4041
Bubble tea
Bubble tea (also known as pearl milk tea, bubble milk tea, or boba) is a tea-based drink invented in Taiwan during the 1980s, which is shaken with ice to create the "bubbles", a foamy layer on top of the drink; chewy tapioca balls ("pearls") are added as well. Ice-blended versions are frozen and put into a blender, resulting in a slushy consistency. There are many varieties of the drink with a wide range of flavors. The two most popular varieties are black pearl milk tea and green pearl milk tea.

Bubble teas fall into two categories: teas (without milk) and milk teas. Both varieties come with a choice of black, green, or oolong tea, and come in many flavors (both fruit and non-fruit). Milk teas include condensed milk, powdered milk, almond milk, soy milk, coconut milk, 2% milk, skim milk, or fresh milk. Some shops offer non-dairy creamer options as well (many milk tea drinks in North America are made with non-dairy creamer). In addition, many boba shops sell Asian-style smoothies, which include a dairy base and either fresh fruit or fruit-flavored powder, creating fruity flavors such as honeydew, lemon, and many more (but no tea). Hot versions are now available at most shops as well.

The oldest known bubble tea consisted of a mixture of hot Taiwanese black tea, small tapioca pearls (粉圆), condensed milk, and syrup (糖浆) or honey. Many variations followed; the most common are served cold rather than hot. The most prevalent varieties of tea have changed frequently. The tapioca pearls are made from the starch of the cassava plant, which was introduced to Taiwan from South America during Japanese colonial rule. Bubble tea first became popular in Taiwan in the 1980s, but the original inventor is unknown. Larger tapioca pearls (波霸/黑珍珠) were adapted and quickly replaced the small pearls. Soon after, different flavors, especially fruit flavors, became popular.

Flavors may be added in the form of powder, pulp, or syrup to oolong, black or green tea, which is then shaken with ice in a cocktail shaker. The tea mixture is then poured into a cup with the toppings in it. Today, there are stores that specialize in bubble tea. Some cafés use plastic lids, but more authentic bubble tea shops serve drinks using a machine to seal the top of the cup with plastic cellophane. The latter method allows the tea to be shaken in the serving cup and makes it spill-free until one is ready to drink it. The cellophane is then pierced with an oversize straw large enough to allow the toppings to pass through.

Today, in Taiwan, it is most common for people to refer to the drink as pearl milk tea (zhēn zhū nǎi chá, or zhēn nǎi for short). More flavors, such as black tea and brown sugar, have appeared. Bubble tea has now become a signature flavor itself and has inspired a variety of bubble-tea-flavored snacks, such as bubble tea ice cream and bubble tea candy.

Each of the ingredients of bubble tea can have many variations depending on the tea store. Typically, different types of black tea, green tea, oolong tea, and sometimes white tea are used. Another variation, called yuenyeung (鸳鸯, named after the Mandarin duck), originated in Hong Kong and consists of black tea, coffee, and milk. Decaffeinated versions of teas are sometimes available when the tea house freshly brews the tea base. Other varieties of the drink can include blended tea drinks. Some may be blended with ice cream. There are also smoothies that contain both tea and fruit.
Although bubble tea originated in Taiwan, some bubble tea shops are starting to add flavors that originate from other countries. For example, hibiscus flowers, saffron, cardamom, and rosewater are becoming popular.

Tapioca pearls (boba) are the prevailing chewy spheres in bubble tea, but a wide range of other options can be used to add similar texture to the drink. These are usually black due to the brown sugar mixed in with the tapioca. Green pearls have a small hint of green tea flavor and are chewier than the traditional tapioca balls. White pearls, not to be confused with the original pearls, are made with seaweed extract, which makes them slightly healthier but gives them a crunchier texture. Jelly comes in different shapes (small cubes, stars, or rectangular strips) and in flavors such as coconut, konjac, lychee, grass jelly, mango, coffee and green tea, available at some shops. Azuki bean or mung bean paste, typical toppings for Taiwanese shaved ice desserts, give the drinks an added subtle flavor as well as texture. Aloe, egg pudding (custard), grass jelly, and sago can be found in most tea houses. Popping boba, spheres filled with fruit juice or syrup, are also popular toppings. The many flavors include mango, lychee, strawberry, green apple, passion fruit, pomegranate, orange, cantaloupe, blueberry, coffee, chocolate, yogurt, kiwi, peach, banana, lime, cherry, pineapple, red guava, etc. Some shops also offer milk or cheese foam to top off the drink, which has a thicker consistency, similar to that of whipped cream. In some cases, the foam is meant to be drunk with the tea by tilting the cup to get a good balance, instead of mixing the foam into the tea.

Bubble tea cafés will frequently offer drinks without coffee or tea in them. The dairy base for these drinks is flavoring blended with ice, often called snow bubble. All mix-ins that can be added to bubble tea can be added to these slushie-like drinks. One drawback is that the coldness of the iced drink may cause the tapioca balls to harden, making them difficult to suck up through a straw and chew. To prevent this from happening, these slushies must be consumed more quickly than bubble tea.

Bubble tea stores often give customers the option of choosing the amount of ice or sugar, usually expressed in percentages. Bubble tea is also offered in some restaurants, such as the Michelin-awarded Din Tai Fung.

There are two competing stories for the origin of bubble tea. The Hanlin Tea Room of Tainan, Taiwan, claims that it was invented in 1986 when teahouse owner Tu Tsong-he was inspired by white tapioca balls he saw in the Ya Mu Liao market. He then made tea using the tapioca balls, resulting in the so-called "pearl tea". Shortly after, Hanlin changed the white tapioca balls to the black version, mixed with brown sugar or honey, that is seen today. At many locations, one can purchase both black tapioca balls and white tapioca balls. The other claim is from the Chun Shui Tang tearoom in Taichung, Taiwan. Its founder, Liu Han-Chieh, began serving Chinese tea cold after he observed that coffee was served cold in Japan while on a visit in the 1980s. The new style of serving tea propelled his business, and multiple chains were established, beginning the rapid expansion of bubble tea. By this account, bubble tea was created by Lin Hsiu Hui, the teahouse's product development manager, who on a whim poured her fen yuan (tapioca balls) into her iced tea during a meeting in 1988.
The beverage was well received at the meeting, leading to its inclusion on the menu. It ultimately became the franchise's top-selling product.

The drink became popular in most parts of East and Southeast Asia during the 1990s, especially in Vietnam. In Malaysia, the number of brands selling the beverage has grown to over 50.

The drink is well received by foreign consumers in North America, specifically around areas with high populations of Chinese and Taiwanese expatriates. Bubble tea has a very large presence in the Bay Area, New York City, Chicago and other large American cities with many residents of Chinese and Vietnamese backgrounds. Jollibee, a Filipino fast food chain with a branch established in Daly City, California in 1998, introduced boba on a wider scale with its semi-discontinued "Pearl Coolers", which included the tapioca in popular flavors such as ube and Buko Pandan (coconut). In contemporary times, bubble tea has achieved cultural significance outside Taiwan in areas with major East Asian diaspora populations. In the United States there is a geographic split, with the west coast referring to the drink as "boba" and the east coast calling it "bubble tea".

In May 2011, a food-safety scandal occurred in Taiwan when DEHP (a chemical plasticizer) was found being used as a stabilizer in drinks and juice syrups. In June the Health Minister of Malaysia, Liow Tiong Lai, instructed companies selling "Strawberry Syrup", a material used in some bubble teas, to stop selling it after chemical tests showed it was tainted with DEHP.

In August 2012, scientists from the Technical University of Aachen (RWTH) in Germany analyzed bubble tea samples in a research project to look for allergenic substances. The results indicated that the products contained styrene, acetophenone, and brominated substances, which can negatively affect health. The report was published by the German newspaper "Rheinische Post" and caused Taiwan's representative office in Germany to issue a statement saying that food items in Taiwan are monitored. Taiwan's Food and Drug Administration confirmed in September that, in a second round of tests conducted by German authorities, Taiwanese bubble tea was found to be free of cancer-causing chemicals. The products were also found to contain no excessive levels of heavy-metal contaminants or other health-threatening agents.

In May 2013, the Taiwan Food and Drug Administration issued an alert on the detection of maleic acid, an unapproved food additive, in some food products, including tapioca pearls. The Agri-Food & Veterinary Authority of Singapore conducted its own tests and found that additional brands of tapioca pearls and some other starch-based products sold in Singapore were similarly affected.

In May 2019, around 100 undigested tapioca pearls were found in the abdomen of a 14-year-old girl in Zhejiang province, China, after she complained of constipation. However, physicians believe that consuming tapioca pearls should not generally be a concern, as they are made from starch from the cassava root, which is easily digested by the body, similarly to fiber.

In July 2019, Singapore's Mount Alvernia Hospital warned against the sugar content of bubble tea, since the drink had become extremely popular in Singapore in recent years.
While it recognises the benefits of drinking green tea and black tea in reducing the risk of cardiovascular disease, diabetes, arthritis and cancer, the hospital cautions against the addition of other ingredients, such as non-dairy creamer and toppings, which raise the fat and sugar content of the tea and increase the risk of chronic diseases. Non-dairy creamer is a milk substitute that contains trans fat in the form of hydrogenated palm oil. The hospital warns that this oil has been strongly correlated with an increased risk of heart disease and stroke.

According to Al Jazeera, bubble tea has become synonymous with Taiwan and is an important symbol of Taiwanese identity both domestically and internationally. Within Taiwan, bubble tea is iconic, to the point of serving as a representation of the nation. A stylized embossed gold image of bubble tea has been suggested as an alternative cover for the country's passport.

Many Taiwanese immigrants settled in California, leading to a number of bubble tea shops opening around Los Angeles. Two of the first dedicated bubble tea shops were Tapioca Express and Lollicup, both of which were originally owned by Taiwanese immigrants. Bubble tea has become an icon for Chinese Americans in Los Angeles and is commonly known as simply "boba" in California. The symbolism has also been criticised for its superficiality and lack of inclusiveness, and it is used in the pejorative "boba liberal".

A bubble tea emoji has been accepted as part of the Unicode standard and will be issued in 2020. Bubble tea is used to represent Taiwan in the context of the Milk Tea Alliance.
https://en.wikipedia.org/wiki?curid=4045
Battle of Blenheim
The Battle of Blenheim (German: "Zweite Schlacht bei Höchstädt"; French: "Bataille de Höchstädt"), fought on 13 August 1704, was a major battle of the War of the Spanish Succession. The overwhelming Allied victory ensured the safety of Vienna from the Franco-Bavarian army, thus preventing the collapse of the Grand Alliance.

Louis XIV of France sought to knock the Holy Roman Emperor, Leopold, out of the war by seizing Vienna, the Habsburg capital, and to gain a favourable peace settlement. The dangers to Vienna were considerable: the Elector of Bavaria and Marshal Marsin's forces in Bavaria threatened from the west, and Marshal Vendôme's large army in northern Italy posed a serious danger with a potential offensive through the Brenner Pass. Vienna was also under pressure from Rákóczi's Hungarian revolt on its eastern approaches. Realising the danger, the Duke of Marlborough resolved to alleviate the peril to Vienna by marching his forces south from Bedburg to help maintain Emperor Leopold within the Grand Alliance.

A combination of deception and skilled administration – designed to conceal his true destination from friend and foe alike – enabled Marlborough to march unhindered from the Low Countries to the River Danube in five weeks. After securing Donauwörth on the Danube, Marlborough sought to engage the Elector's and Marsin's army before Marshal Tallard could bring reinforcements through the Black Forest. However, the Franco-Bavarian commanders proved reluctant to fight until their numbers were deemed sufficient, and the Duke failed in his attempts to force an engagement. When Tallard arrived to bolster the Elector's army, and Prince Eugene arrived with reinforcements for the Allies, the two armies finally met on the banks of the Danube in and around the small village of Blindheim, from which the English "Blenheim" is derived.

Blenheim was one of the battles that altered the course of the war, which until then had been leaning in favour of Louis' coalition, and it ended French plans of knocking the Emperor out of the war. France suffered as many as 38,000 casualties, including the commander-in-chief, Marshal Tallard, who was taken captive to England. Before the 1704 campaign ended, the Allies had taken Landau, and the towns of Trier and Trarbach on the Moselle, in preparation for the following year's campaign into France itself. That offensive never materialised, as the Grand Alliance's army had to depart the Moselle to defend Liège from a French counteroffensive. The war would rage on for another decade.

By 1704, the War of the Spanish Succession was in its fourth year. The previous year had been one of success for France and her allies, most particularly on the Danube, where Marshal Villars and the Elector of Bavaria had created a direct threat to Vienna, the Habsburg capital. Vienna had been saved by dissension between the two commanders, leading to the brilliant Villars being replaced by the less dynamic Marshal Marsin. Nevertheless, by 1704, the threat was still real: Rákóczi's Hungarian revolt was already threatening the Empire's eastern approaches, and Marshal Vendôme's forces threatened an invasion from northern Italy. In the courts of Versailles and Madrid, Vienna's fall was confidently anticipated, an event which would almost certainly have led to the collapse of the Grand Alliance.
To isolate the Danube from any Allied intervention, Marshal Villeroi's 46,000 troops were expected to pin the 70,000 Dutch and English troops around Maastricht in the Low Countries, while General de Coigny protected Alsace against surprise with a further corps. The only forces immediately available for Vienna's defence were Prince Louis of Baden's force of 36,000, stationed in the Lines of Stollhofen to watch Marshal Tallard at Strasbourg, and a weak force of 10,000 men under Field Marshal Count Limburg Styrum observing Ulm.

Both the Imperial Austrian Ambassador in London, Count Wratislaw, and the Duke of Marlborough realised the implications of the situation on the Danube. The Dutch, however, who clung to their troops for their country's protection, were against any adventurous military operation as far south as the Danube and would never willingly permit any major weakening of the forces in the Spanish Netherlands. Marlborough, realising that the only way around Dutch objections was secrecy and guile, set out to deceive his Dutch allies by pretending simply to move his troops to the Moselle – a plan approved by The Hague – but once there, he would slip the Dutch leash and link up with Austrian forces in southern Germany. "My intentions", wrote the Duke from The Hague on 29 April to his governmental confidant, Sidney Godolphin, "are to march with the English to Coblenz and declare that I intend to campaign on the Moselle. But when I come there, to write to the Dutch States that I think it absolutely necessary for the saving of the Empire to march with the troops under my command and to join with those that are in Germany ... in order to make measures with Prince Lewis of Baden for the speedy reduction of the Elector of Bavaria."

Marlborough's march started on 19 May from Bedburg, northwest of Cologne. The army (assembled by the Duke's brother, General Charles Churchill) consisted of 66 squadrons, 31 battalions and 38 guns and mortars, totalling 21,000 men (16,000 of whom were English troops). This force was to be augmented "en route", such that by the time Marlborough reached the Danube it would number 40,000 (47 battalions, 88 squadrons). Whilst Marlborough led his army, General Overkirk would maintain a defensive position in the Dutch Republic in case Villeroi mounted an attack. The Duke had assured the Dutch that if the French were to launch an offensive he would return in good time, but Marlborough calculated that as he marched south, the French commander would be drawn after him. In this assumption Marlborough proved correct: Villeroi shadowed the Duke with 30,000 men in 60 squadrons and 42 battalions.

The military dangers in such an enterprise were numerous: Marlborough's lines of communication along the Rhine would be hopelessly exposed to French interference, for Louis' generals controlled the left bank of the river and its central reaches, and such a long march would almost certainly involve a high wastage of men and horses through exhaustion and disease. However, Marlborough was convinced of the urgency – "I am very sensible that I take a great deal upon me", he had earlier written to Godolphin, "but should I act otherwise, the Empire would be undone ..."

Whilst Allied preparations had progressed, the French were striving to maintain and re-supply Marshal Marsin.
Marsin had been operating with the Elector of Bavaria against the Imperial commander, Prince Louis of Baden, and was somewhat isolated from France: his only lines of communication lay through the rocky passes of the Black Forest. However, on 14 May, with considerable skill Marshal Tallard managed to bring 10,000 reinforcements and vast supplies and munitions through the difficult terrain, whilst outmanoeuvring Baron Thüngen, the Imperial general who sought to block his path. Tallard then returned with his own force to the Rhine, once again side-stepping Thüngen's efforts to intercept him. The whole operation was an outstanding military achievement. On 26 May, Marlborough reached Coblenz, where the Moselle meets the Rhine. If he intended an attack along the Moselle the Duke must now turn west, but, instead, the following day the army crossed to the right bank of the Rhine, (pausing to add 5,000 waiting Hanoverians and Prussians). "There will be no campaign on the Moselle", wrote Villeroi who had taken up a defensive position on the river, "the English have all gone up into Germany." A second possible objective now occurred to the French – an Allied incursion into Alsace and an attack on the city of Strasbourg. Marlborough skilfully encouraged this apprehension by constructing bridges across the Rhine at Philippsburg, a ruse that not only encouraged Villeroi to come to Tallard's aid in the defence of Alsace, but one that ensured the French plan to march on Vienna remained paralysed by uncertainty. With Villeroi shadowing Marlborough's every move, Marlborough's gamble that the French would not move against the weakened Dutch position in the Netherlands paid off. In any case, Marlborough had promised to return to the Netherlands if a French attack developed there, transferring his troops down the Rhine on barges at a rate of a day. Encouraged by this promise (whatever it was worth) the States General agreed to release the Danish contingent of seven battalions and 22 squadrons as a reinforcement. Marlborough reached Ladenburg, in the plain of the Neckar and the Rhine, and there halted for three days to rest his cavalry and allow the guns and infantry to close up. On 6 June he arrived at Wiesloch, south of Heidelberg. The following day, the Allied army swung away from the Rhine towards the hills of the Swabian Jura and the Danube beyond. At last Marlborough's destination was established without doubt. On 10 June, the Duke met for the first time the President of the Imperial War Council, Prince Eugene – accompanied by Count Wratislaw – at the village of Mundelsheim, halfway between the Danube and the Rhine. By 13 June, the Imperial Field Commander, Prince Louis of Baden, had joined them in Großheppach. The three generals commanded a force of nearly 110,000 men. At conference it was decided that Eugene would return with 28,000 men to the Lines of Stollhofen on the Rhine to keep an eye on Villeroi and Tallard and prevent them going to the aid of the Franco-Bavarian army on the Danube. Meanwhile, Marlborough's and Baden's forces would combine, totalling 80,000 men, for the march on the Danube to seek out the Elector and Marsin before they could be reinforced. Knowing Marlborough's destination, Tallard and Villeroi met at Landau in the Palatinate on 13 June to rapidly construct a plan to save Bavaria but the rigidity of the French command system was such that any variations from the original plan had to be sanctioned by Versailles. 
The Count of Mérode-Westerloo, commander of the Flemish troops in Tallard's army, wrote – "One thing is certain: we delayed our march from Alsace for far too long and quite inexplicably." Approval from Louis arrived on 27 June: Tallard was to reinforce Marsin and the Elector on the Danube via the Black Forest with 40 battalions and 50 squadrons; Villeroi was to pin down the Allies defending the Lines of Stollhofen or, if the Allies should move all their forces to the Danube, to join with Marshal Tallard; and General de Coigny, with 8,000 men, would protect Alsace. On 1 July Tallard's army of 35,000 re-crossed the Rhine at Kehl and began its march.

On 22 June, Marlborough's forces linked up with Baden's Imperial forces at Launsheim. The distance had been covered in five weeks. Thanks to a carefully planned timetable, the effects of wear and tear had been kept to a minimum. Captain Parker described the march discipline – "As we marched through the country of our Allies, commissars were appointed to furnish us with all manner of necessaries for man and horse ... the soldiers had nothing to do but pitch their tents, boil kettles and lie down to rest."

In response to Marlborough's manoeuvres, the Elector and Marsin, conscious of their numerical disadvantage with only 40,000 men, moved their forces to the entrenched camp at Dillingen on the north bank of the Danube. Marlborough could not attack Dillingen because of a lack of siege guns – he was unable to bring any from the Low Countries, and Baden had failed to supply any, despite assurances to the contrary. The Allies nevertheless needed a base for provisions and a good river crossing. On 2 July, therefore, at the Battle of Schellenberg, Marlborough stormed the fortress of Schellenberg on the heights above the town of Donauwörth. Count Jean d'Arco had been sent with 12,000 men from the Franco-Bavarian camp to hold the town and grassy hill, but after a ferocious and bloody battle that inflicted enormous casualties on both sides, the Schellenberg finally succumbed, and Donauwörth surrendered shortly afterwards. The Elector, knowing his position at Dillingen was now untenable, took up a position behind the strong fortifications of Augsburg.

Tallard's march presented a dilemma for Eugene. If the Allies were not to be outnumbered on the Danube, Eugene realised he must either try to cut Tallard off before he could get there, or hasten to reinforce Marlborough; yet if he withdrew from the Rhine to the Danube, Villeroi might also move south to link up with the Elector and Marsin. Eugene compromised: leaving 12,000 troops behind to guard the Lines of Stollhofen, he marched off with the rest of his army to forestall Tallard.

Lacking in numbers, Eugene could not seriously disrupt Tallard's march, but the French Marshal's progress was proving pitifully slow. Tallard's force had suffered considerably more than Marlborough's troops on their march – many of his cavalry horses were suffering from glanders, and the mountain passes were proving tough for the 2,000 wagons of provisions. Local German peasants, angry at French plundering, compounded Tallard's problems, leading Mérode-Westerloo to bemoan – "the enraged peasantry killed several thousand of our men before the army was clear of the Black Forest." Tallard had insisted on besieging the little town of Villingen for six days (16–22 July) but abandoned the enterprise on discovering the approach of Eugene.
The Elector in Augsburg was informed on 14 July that Tallard was on his way through the Black Forest. This good news bolstered the Elector's policy of inaction, further encouraging him to wait for the reinforcements. But this reluctance to fight induced Marlborough to undertake a controversial policy of spoliation in Bavaria, burning buildings and crops throughout the rich lands south of the Danube. This had two aims: firstly, to put pressure on the Elector to fight or come to terms before Tallard arrived with reinforcements; and secondly, to ruin Bavaria as a base from which the French and Bavarian armies could attack Vienna, or pursue the Duke into Franconia if, at some stage, he had to withdraw northwards. But this destruction, coupled with a protracted siege of Rain (9–16 July), caused Prince Eugene to lament "... since the Donauwörth action I cannot admire their performances", and later to conclude "If he has to go home without having achieved his objective, he will certainly be ruined." Nevertheless, strategically the Duke had been able to place his numerically stronger forces between the Franco-Bavarian army and Vienna. Marshal Tallard, with 34,000 men, reached Ulm, joining with the Elector and Marsin in Augsburg on 5 August (although Tallard was not impressed to find that the Elector had dispersed his army in response to Marlborough's campaign of ravaging the region). Also on 5 August, Eugene reached Höchstädt, riding that same night to meet with Marlborough at Schrobenhausen. Marlborough knew that another crossing point over the Danube would be required in case Donauwörth fell to the enemy. On 7 August, therefore, the first of Baden's 15,000 Imperial troops (the remainder following two days later) left Marlborough's main force to besiege the heavily defended city of Ingolstadt, farther down the Danube. With Eugene's forces at Höchstädt on the north bank of the Danube, and Marlborough's at Rain on the south bank, Tallard and the Elector debated their next move. Tallard preferred to bide his time, replenish supplies and allow Marlborough's Danube campaign to founder in the colder weeks of autumn; the Elector and Marsin, however, newly reinforced, were keen to push ahead. The French and Bavarian commanders eventually agreed on a plan and decided to attack Eugene's smaller force. On 9 August, the Franco-Bavarian forces began to cross to the north bank of the Danube. On 10 August, Eugene sent an urgent dispatch reporting that he was falling back to Donauwörth – "The enemy have marched. It is almost certain that the whole army is crossing the Danube at Lauingen ... The plain of Dillingen is crowded with troops ... Everything, milord, consists in speed and that you put yourself forthwith in movement to join me tomorrow, without which I fear it will be too late." By a series of brilliant marches Marlborough concentrated his forces on Donauwörth and, by noon on 11 August, the link-up was complete. During 11 August, Tallard pushed forward from the river crossings at Dillingen; by 12 August, the Franco-Bavarian forces were encamped behind the small river Nebel near the village of Blenheim on the plain of Höchstädt. That same day Marlborough and Eugene carried out their own reconnaissance of the French position from the church spire at Tapfheim, and moved their combined forces to Münster, close to the French camp.
A French reconnaissance under the Marquis de Silly went forward to probe the enemy, but was driven off by Allied troops who had deployed to cover the pioneers of the advancing army, labouring to bridge the numerous streams in the area and improve the passage leading westwards to Höchstädt. Marlborough quickly moved forward two brigades under the command of General Wilkes and Brigadier Rowe to secure the narrow strip of land between the Danube and the wooded Fuchsberg hill, at the Schwenningen defile. Tallard's army numbered 56,000 men and 90 guns; the army of the Grand Alliance, 52,000 men and 66 guns. Some Allied officers who were acquainted with the superior numbers of the enemy, and aware of their strong defensive position, ventured to remonstrate with Marlborough about the hazards of attacking; but the Duke was resolute – "I know the danger, yet a battle is absolutely necessary, and I rely on the bravery and discipline of the troops, which will make amends for our disadvantages". Marlborough and Eugene decided to risk everything, and agreed to attack on the following day. The battlefield stretched for several miles. The extreme right flank of the Franco-Bavarian army was covered by the Danube; on the extreme left flank lay the undulating pine-covered hills of the Swabian Jura. A small stream, the Nebel (the ground either side of which was soft and marshy and only fordable intermittently), fronted the French line. The French right rested on the village of Blenheim near where the Nebel flows into the Danube; the village itself was surrounded by hedges, fences, enclosed gardens and meadows. Between Blenheim and the next village of Oberglauheim the fields of wheat had been cut to stubble and were now ideal for deploying troops. From Oberglauheim to the next hamlet of Lutzingen the terrain of ditches, thickets and brambles was potentially difficult ground for the attackers. At 02:00 on 13 August, 40 squadrons were sent forward towards the enemy, followed at 03:00, in eight columns, by the main Allied force pushing over the Kessel. At about 06:00 they reached Schwenningen. The English and German troops who had held Schwenningen through the night joined the march, making a ninth column on the left of the army. Marlborough and Eugene made their final plans. The Allied commanders agreed that Marlborough would command 36,000 troops and attack Tallard's force of 33,000 on the left, including capturing the village of Blenheim, whilst Eugene, commanding 16,000 men, would attack the Elector and Marsin's combined forces of 23,000 troops on the right wing; if this attack was pressed hard, the Elector and Marsin would have no troops to send to aid Tallard on their right. Lieutenant-General John Cutts would attack Blenheim in concert with Eugene's attack. With the French flanks busy, Marlborough could cross the Nebel and deliver the fatal blow to the French centre. However, Marlborough would have to wait until Eugene was in position before the general engagement could begin. The last thing Tallard expected that morning was to be attacked by the Allies – deceived by intelligence gathered from prisoners taken by de Silly the previous day, and confident in their strong natural position, Tallard and his colleagues were convinced that Marlborough and Eugene were about to retreat north-eastwards towards Nördlingen. Tallard wrote a report to this effect to King Louis that morning, but hardly had he sent the messenger when the Allied army began to appear opposite his camp.
"I could see the enemy advancing ever closer in nine great columns", wrote Mérode-Westerloo, " ... filling the whole plain from the Danube to the woods on the horizon." Signal guns were fired to bring in the foraging parties and pickets as the French and Bavarian troops tried to draw into battle-order to face the unexpected threat. At about 08:00 the French artillery on their right wing opened fire, answered by Colonel Blood's batteries. The guns were heard by Baden in his camp before Ingolstadt, "The Prince and the Duke are engaged today to the westward", he wrote to the Emperor. "Heaven bless them." An hour later Tallard, the Elector, and Marsin climbed Blenheim's church tower to finalise their plans. It was settled that the Elector and Marsin would hold the front from the hills to Oberglauheim, whilst Tallard would defend the ground between Oberglauheim and the Danube. The French commanders were, however, divided as to how to utilise the Nebel: Tallard's tactic – opposed by Marsin and the Elector who felt it better to close their infantry right up to the stream itself – was to lure the allies across before unleashing their cavalry upon them, causing panic and confusion; whilst the enemy was struggling in the marshes, they would be caught in crossfire from Blenheim and Oberglauheim. The plan was sound if all its parts were implemented, but it allowed Marlborough to cross the Nebel without serious interference and fight the battle he had in mind. The Franco-Bavarian commanders deployed their forces. In the village of Lutzingen, Count Maffei positioned five Bavarian battalions with a great battery of 16 guns at the village's edge. In the woods to the left of Lutzingen, seven French battalions under the Marquis de Rozel moved into place. Between Lutzingen and Oberglauheim the Elector placed 27 squadrons of cavalry – Count d'Arco commanded 14 Bavarian squadrons and Count Wolframsdorf had 13 more in support nearby. To their right stood Marsin's 40 French squadrons and 12 battalions. The village of Oberglauheim was packed with 14 battalions commanded by the Marquis de Blainville (including the effective Irish Brigade known as the 'Wild Geese'). Six batteries of guns were ranged alongside the village. On the right of these French and Bavarian positions, between Oberglauheim and Blenheim, Tallard deployed 64 French and Walloon squadrons (16 drawn from Marsin) supported by nine French battalions standing near the Höchstädt road. In the cornfield next to Blenheim stood three battalions from the Regiment de Roi. Nine battalions occupied the village itself, commanded by the Marquis de Clérambault. Four battalions stood to the rear and a further 11 were in reserve. These battalions were supported by Hautefeuille's 12 squadrons of dismounted dragoons. By 11:00 Tallard, the Elector, and Marsin were in place. Many of the Allied generals were hesitant to attack such a relatively strong position. The Earl of Orkney later confessed that, "had I been asked to give my opinion, I had been against it." Prince Eugene was expected to be in position by 11:00, but due to the difficult terrain and enemy fire, progress was slow. Lord Cutts' column – who by 10:00 had expelled the enemy from two water mills upon the Nebel – had already deployed by the river against Blenheim, enduring over the next three hours severe fire from a heavy six-gun battery posted near the village. 
The rest of Marlborough's army, waiting in their ranks on the forward slope, were also forced to bear the cannonade from the French artillery, suffering 2,000 casualties before the attack could even start. Meanwhile, engineers repaired a stone bridge across the Nebel, and constructed five additional bridges or causeways across the marsh between Blenheim and Oberglauheim. Marlborough's anxiety was finally allayed when, just past noon, Colonel Cadogan reported that Eugene's Prussian and Danish infantry were in place – the order for the general advance was given. At 13:00, Cutts was ordered to attack the village of Blenheim whilst Prince Eugene was requested to assault Lutzingen on the Allied right flank. Cutts ordered Brigadier-General Archibald Rowe's brigade to attack. The English infantry rose from the edge of the Nebel and silently marched towards Blenheim. John Ferguson's Scottish brigade supported Rowe's left, and moved in perfect order towards the barricades between the village and the river, defended by Hautefeuille's dragoons. As the range closed, the French fired a deadly volley. Rowe had ordered that there should be no firing from his men until he struck his sword upon the palisades, but as he stepped forward to give the signal, he fell mortally wounded. The survivors of the leading companies closed up the gaps in their torn ranks and rushed forward. Small parties penetrated the defences, but repeated French volleys forced the English back towards the Nebel, sustaining heavy casualties. As the attack faltered, eight squadrons of elite Gens d'Armes, commanded by the veteran Swiss officer, Beat-Jacques von Zurlauben, fell upon the English troops, cutting at the exposed flank of Rowe's own regiment. However, Wilkes' Hessian brigade, lying nearby in the marshy grass at the water's edge, stood firm and repulsed the Gens d'Armes with steady fire, enabling the English and Hessians to re-order and launch another attack. Although the Allies were again repulsed, these persistent attacks on Blenheim eventually bore fruit, panicking Clérambault into making the worst French error of the day. Without consulting Tallard, Clérambault ordered his reserve battalions into the village, upsetting the balance of the French position and nullifying the French numerical superiority. "The men were so crowded in upon one another", wrote Mérode-Westerloo, "that they couldn't even fire – let alone receive or carry out any orders." Marlborough, spotting this error, countermanded Cutts' intention to launch a third attack, and ordered him simply to contain the enemy within Blenheim; no more than 5,000 Allied soldiers were able to pen in twice the number of French infantry and dragoons. On the Allied right, Eugene's Prussian and Danish forces were desperately fighting the numerically superior forces of the Elector and Marsin. The Prince of Anhalt-Dessau led forward four brigades across the Nebel to assault the well-fortified position of Lutzingen. Here the Nebel was less of an obstacle, but the great battery positioned on the edge of the village enjoyed a good field of fire across the open ground stretching to the hamlet of Schwennenbach. As soon as the infantry crossed the stream, they were struck by Maffei's infantry and by salvoes from the Bavarian guns positioned both in front of the village and in enfilade on the wood-line to the right.
Despite heavy casualties the Prussians attempted to storm the great battery, whilst the Danes, under Count Scholten, attempted to drive the French infantry out of the copses beyond the village. With the infantry heavily engaged, Eugene's cavalry picked its way across the Nebel. After an initial success, his first line of cavalry, under the Imperial General of Horse, Prince Maximilian of Hanover, was pressed by the second line of Marsin's cavalry and forced back across the Nebel in confusion. Nevertheless, the exhausted French were unable to follow up their advantage, and the two cavalry forces tried to regroup and reorder their ranks. However, without cavalry support, and threatened with envelopment, the Prussian and Danish infantry were in turn forced to pull back across the Nebel. Panic gripped some of Eugene's troops as they crossed the stream. Ten infantry colours were lost to the Bavarians, and hundreds of prisoners taken; it was only through the leadership of Eugene and the Prussian Prince that the Imperial infantry were prevented from abandoning the field. After rallying his troops near Schwennenbach – well beyond their starting point – Eugene prepared to launch a second attack, led by the second-line squadrons under the Duke of Württemberg-Teck. Yet again they were caught in the murderous cross-fire from the artillery in Lutzingen and Oberglauheim, and were once again thrown back in disarray. The French and Bavarians, however, were almost as disordered as their opponents, and they too were in need of inspiration from their commander, the Elector, who was seen " ... riding up and down, and inspiring his men with fresh courage." Anhalt-Dessau's Danish and Prussian infantry attacked a second time but could not sustain the advance without proper support. Once again they fell back across the stream. Whilst these events around Blenheim and Lutzingen were taking place, Marlborough was preparing to cross the Nebel. The centre, commanded by the Duke's brother, General Charles Churchill, consisted of 18 battalions of infantry arranged in two lines: seven battalions in the front line to secure a foothold across the Nebel, and 11 battalions in the rear providing cover from the Allied side of the stream. Between these two lines of infantry were placed 72 squadrons of cavalry, themselves in two lines. The first line of foot was to pass the stream first and march as far to the other side as could conveniently be done. This line would then form and cover the passage of the horse, leaving gaps in the line of infantry large enough for the cavalry to pass through and take their position in front. Marlborough ordered the formation forward. Once again Zurlauben's Gens d'Armes charged, looking to rout Lumley's English cavalry, who linked Cutts' column facing Blenheim with Churchill's infantry. As these elite French cavalry attacked, they were faced by five English squadrons under Colonel Francis Palmes. To the consternation of the French, the Gens d'Armes were pushed back in terrible confusion and pursued well beyond the Maulweyer stream that flows through Blenheim. "What? Is it possible?" exclaimed the Elector, "the gentlemen of France fleeing?" Palmes attempted to follow up his success but was repulsed in some confusion by other French cavalry and musket fire from the edge of Blenheim.
Nevertheless, Tallard was alarmed by the repulse of the elite Gens d'Armes and urgently rode across the field to ask Marsin for reinforcements; but Marsin, hard pressed by Eugene – whose second attack was in full flood – refused. As Tallard consulted with Marsin, more of his infantry was being taken into Blenheim by Clérambault. Fatally, Tallard, although aware of the situation, did nothing to rectify this grave mistake, leaving him with just the nine battalions of infantry near the Höchstädt road to oppose the massed enemy ranks in the centre. Zurlauben tried several more times to disrupt the Allies forming on Tallard's side of the stream, his front-line cavalry darting forward down the gentle slope towards the Nebel. But the attacks lacked co-ordination, and the Allied infantry's steady volleys disconcerted the French horsemen. During these skirmishes Zurlauben fell mortally wounded; he died two days later. The time was just after 15:00. The Danish cavalry, under the Duke of Württemberg-Neuenstadt (not to be confused with the Duke of Württemberg-Teck who fought with Eugene), had made slow work of crossing the Nebel near Oberglau; harassed by Marsin's infantry near the village, the Danes were driven back across the stream. Count Horn's Dutch infantry managed to push the French back from the water's edge, but it was apparent that before Marlborough could launch his main effort against Tallard, Oberglauheim would have to be secured. Count Horn directed the Prince of Holstein-Beck to take the village, but his two Dutch brigades were cut down by the French and Irish troops, who captured and badly wounded the Prince during the action. The battle was now in the balance. If Holstein-Beck's Dutch column were destroyed, the Allied army would be split in two: Eugene's wing would be isolated from Marlborough's, passing the initiative to the Franco-Bavarian forces now engaged across the whole plain. Seeing the opportunity, Marsin ordered his cavalry to wheel away from Eugene and turn towards their right and the open flank of Churchill's infantry drawn up in front of Unterglau. Marlborough (who had crossed the Nebel on a makeshift bridge to take personal control) ordered Hulsen's Hanoverian battalions to support the Dutch infantry. A Dutch cavalry brigade under Averock was also called forward but soon came under pressure from Marsin's more numerous squadrons. Marlborough now requested Eugene to release Count Hendrick Fugger and his Imperial Cuirassier brigade to help repel the French cavalry thrust. Despite his own desperate struggle, the Imperial Prince at once complied, demonstrating the high degree of confidence and mutual co-operation between the two generals. Although the Nebel stream lay between Fugger's and Marsin's squadrons, the French were forced to change front to meet this new threat, thus forestalling the chance for Marsin to strike at Marlborough's infantry. Fugger's cuirassiers charged and, striking at a favourable angle, threw back Marsin's squadrons in disorder. With support from Colonel Blood's batteries, the Hessian, Hanoverian and Dutch infantry – now commanded by Count Berensdorf – succeeded in pushing the French and Irish infantry back into Oberglauheim so that they could not again threaten Churchill's flank as he moved against Tallard. The French commander in the village, the Marquis de Blainville, was numbered amongst the heavy casualties.
By 16:00, with the Franco-Bavarian troops besieged in Blenheim and Oberglau, the Allied centre of 81 squadrons (nine squadrons had been transferred from Cutts' column), supported by 18 battalions, was firmly planted amidst the French line of 64 squadrons and nine battalions of raw recruits. There was now a pause in the battle: Marlborough wanted to concert the attack upon the whole front, and Eugene, after his second repulse, needed time to reorganise. Just after 17:00 all was ready along the Allied front. Marlborough's two lines of cavalry had now moved to the front of the Duke's line of battle, with the two supporting lines of infantry behind them. Mérode-Westerloo attempted to extricate some of the French infantry crowded into Blenheim, but Clérambault ordered the troops back into the village. The French cavalry exerted themselves once more against the first line – Lumley's English and Scots on the Allied left, and Hompesch's Dutch and German squadrons on the Allied right. Tallard's squadrons, lacking infantry support, were tired and ragged but managed to push the Allied first line back upon its supporting infantry. With the battle still not won, Marlborough had to rebuke one of his cavalry officers who was attempting to leave the field – "Sir, you are under a mistake, the enemy lies that way ..." Now, at the Duke's command, the second Allied line under Cuno Josua von Bülow and Bothmer was ordered forward and, driving through the centre, the Allies finally put Tallard's tired horse to rout, though not without cost: the Prussian Life Dragoons' colonel, Ludwig von Blumenthal, and his second-in-command, Lieutenant-Colonel von Hacke, fell next to each other. But the charge succeeded, and with their cavalry in headlong flight, the remaining nine French infantry battalions fought with desperate valour, trying to form square. It was futile: they were overwhelmed by Colonel Blood's close-range artillery and platoon fire. Mérode-Westerloo later wrote – "[They] died to a man where they stood, stationed right out in the open plain – supported by nobody." The majority of Tallard's retreating troops headed for Höchstädt, but most did not make the safety of the town, plunging instead into the Danube, where upwards of 3,000 French horsemen drowned; others were cut down by the pursuing cavalry. The Marquis de Gruignan attempted a counter-attack, but he was easily brushed aside by the triumphant Allies. After a final rally behind his camp's tents, shouting entreaties to stand and fight, Marshal Tallard was caught up in the rout and pushed towards Sonderheim. Surrounded by a squadron of Hessian troops, Tallard surrendered to Lieutenant-Colonel de Boinenburg, the Prince of Hesse-Kassel's "aide-de-camp", and was sent under escort to Marlborough. The Duke welcomed the French commander – "I am very sorry that such a cruel misfortune should have fallen upon a soldier for whom I have the highest regard." With salutes and courtesies, the Marshal was escorted to Marlborough's coach. Meanwhile, the Allies had once again attacked the Bavarian stronghold at Lutzingen. Eugene, however, became exasperated with the performance of his Imperial cavalry, whose third attack had failed: he had already shot two of his troopers to prevent a general flight. Declaring in disgust that he wished to "fight among brave men and not among cowards", Eugene went into the attack with the Prussian and Danish infantry, as did the Dessauer, waving a regimental colour to inspire his troops.
This time the Prussians were able to storm the great Bavarian battery and overwhelm the guns' crews. Beyond the village, Scholten's Danes defeated the French infantry in a desperate hand-to-hand bayonet struggle. When they saw that the centre had broken, the Elector and Marsin decided the battle was lost and, like the remnants of Tallard's army, fled the battlefield (albeit in better order than Tallard's men). Attempts to organise an Allied force to prevent Marsin's withdrawal failed owing to the exhaustion of the cavalry and the growing confusion in the field. Marlborough now had to turn his attention from the fleeing enemy and direct Churchill to detach more infantry to storm Blenheim. Orkney's infantry, Hamilton's English brigade and St Paul's Hanoverians moved across the trampled wheat to the cottages. Fierce hand-to-hand fighting gradually forced the French towards the village centre, in and around the walled churchyard, which had been prepared for defence. Hay and Ross's dismounted dragoons were also sent in, but suffered under a counter-charge delivered by the regiments of Artois and Provence under the command of Colonel de la Silvière. Colonel Belville's Hanoverians were fed into the battle to steady the resolve of the dragoons, and once more went to the attack. The Allied progress was slow and hard, and, like the defenders, they suffered many casualties. Many of the cottages were now burning, obscuring the field of fire and driving the defenders out of their positions. Hearing the din of battle in Blenheim, Tallard sent a message to Marlborough offering to order the garrison to withdraw from the field. "Inform Monsieur Tallard", replied the Duke, "that, in the position in which he is now, he has no command." Nevertheless, as dusk came the Allied commander was anxious for a quick conclusion. The French infantry fought tenaciously to hold on to their position in Blenheim, but their commander was nowhere to be found. Clérambault's insistence on confining his huge force to the village had sealed his fate that day. Realising that his tactical mistake had contributed to Tallard's defeat in the centre, Clérambault deserted Blenheim and the 27 battalions defending the village, and reportedly drowned in the Danube while attempting to make his escape. By now Blenheim was under assault from every side by three British generals: Cutts, Churchill and Orkney. The French had repulsed every attack with heavy slaughter, but many had seen what had happened on the plain and what its consequences for them would be: their army was routed and they were cut off. Orkney, attacking from the rear, now tried a different tactic – "... it came into my head to beat parley", he later wrote, "which they accepted of and immediately their Brigadier de Nouville capitulated with me to be prisoner at discretion and lay down their arms." Threatened by Allied guns, other units followed their example. However, it was not until 21:00 that the Marquis de Blanzac, who had taken charge in Clérambault's absence, reluctantly accepted the inevitability of defeat, and some 10,000 of France's best infantry laid down their arms. During these events Marlborough was still in the saddle conducting the pursuit of the broken enemy. Pausing for a moment, he scribbled on the back of an old tavern bill a note addressed to his wife, Sarah: "I have no time to say more but to beg you will give my duty to the Queen, and let her know her army has had a glorious victory." French losses were immense, with over 27,000 killed, wounded and captured.
Moreover, the myth of French invincibility had been destroyed, and Louis's hopes of an early and victorious peace had been wrenched from his grasp. Mérode-Westerloo summarised the case against Tallard's army: "The French lost this battle for a wide variety of reasons. For one thing they had too good an opinion of their own ability ... Another point was their faulty field dispositions, and in addition there was rampant indiscipline and inexperience displayed ... It took all these faults to lose so celebrated a battle." It was a hard-fought contest, leading Prince Eugene to observe – "I have not a squadron or battalion which did not charge four times at least." Although the war dragged on for years, the Battle of Blenheim was probably its most decisive victory; Marlborough and Eugene, working indivisibly together, had saved the Habsburg Empire and thereby preserved the Grand Alliance from collapse. Munich, Augsburg, Ingolstadt, Ulm and all the remaining territory of Bavaria soon fell to the Allies. By the Treaty of Ilbersheim, signed on 7 November 1704, Bavaria was placed under Austrian military rule, allowing the Habsburgs to use its resources for the rest of the conflict. The remnants of the Elector of Bavaria's and Marshal Marsin's wing limped back to Strasbourg, losing another 7,000 men through desertion. Despite being offered the chance to remain as ruler of Bavaria (under strict terms of an alliance with Austria), the Elector left his country and family in order to continue the war against the Allies from the Spanish Netherlands, where he still held the post of governor-general. Their commander-in-chief that day, Marshal Tallard – who, unlike his subordinates, was not ransomed or exchanged – was taken to England and imprisoned in Nottingham until his release in 1711. The 1704 campaign lasted considerably longer than usual, as the Allies sought to extract the maximum advantage from their victory. Realising that France was too powerful to be forced to make peace by a single victory, Eugene, Marlborough and Baden met to plan their next moves. For the following year the Duke proposed a campaign along the valley of the River Moselle to carry the war deep into France. This required the capture of the major fortress of Landau, which guarded the Rhine, and the towns of Trier and Trarbach on the Moselle itself. Trier was taken on 27 October, and Landau fell on 23 November to the Margrave of Baden and Prince Eugene; with the fall of Trarbach on 20 December, the campaign season for 1704 came to an end. Marlborough returned to England on 14 December (O.S.) to the acclamation of Queen Anne and the country. In the first days of January the 110 cavalry standards and the 128 infantry colours taken during the battle were borne in procession to Westminster Hall. In February 1705, Queen Anne, who had made Marlborough a Duke in 1702, granted him the Park of Woodstock and promised a sum of £240,000 to build a suitable house as a gift from a grateful crown in recognition of his victory – a victory which the British historian Sir Edward Shepherd Creasy considered one of the pivotal battles in history, writing – "Had it not been for Blenheim, all Europe might at this day suffer under the effect of French conquests resembling those of Alexander in extent and those of the Romans in durability." The military historian John A. Lynn, however, considers this claim unjustified, for Louis XIV never harboured such an objective; the campaign in Bavaria was intended to bring only a favourable peace settlement, not domination over Europe.
The Lake poet Robert Southey scathingly criticised the Battle of Blenheim in his anti-war poem "After Blenheim", although Southey himself is said to have later praised the victory. The poem points out the complacency of the public and its lack of curiosity about the purpose and human cost of the battle.
Battle of Ramillies The Battle of Ramillies, fought on 23 May 1706, was a battle of the War of the Spanish Succession. For the Grand Alliance – Austria, England, and the Dutch Republic – the battle had followed an indecisive campaign against the Bourbon armies of King Louis XIV of France in 1705. Although the Allies had captured Barcelona that year, they had been forced to abandon their campaign on the Moselle, had stalled in the Spanish Netherlands and had suffered defeat in northern Italy. Yet despite his opponents' setbacks Louis XIV wanted peace, but on reasonable terms. Because of this, as well as to maintain their momentum, the French and their allies took the offensive in 1706. The campaign began well for Louis XIV's generals: in Italy Marshal Vendôme defeated the Austrians at the Battle of Calcinato in April, while in Alsace Marshal Villars forced the Margrave of Baden back across the Rhine. Encouraged by these early gains, Louis XIV urged Marshal Villeroi to go over to the offensive in the Spanish Netherlands and, with victory, gain a 'fair' peace. Accordingly, the French Marshal set off from Leuven ("Louvain") at the head of 60,000 men and marched towards Tienen ("Tirlemont"), as if to threaten Zoutleeuw ("Léau"). Also determined to fight a major engagement, the Duke of Marlborough, commander-in-chief of Anglo-Dutch forces, assembled his army – some 62,000 men – near Maastricht, and marched past Zoutleeuw. With both sides seeking battle, they soon encountered each other on the dry ground between the Mehaigne and Petite Gheete rivers, close to the small village of Ramillies. In less than four hours Marlborough's Dutch, English and Danish forces overwhelmed Villeroi's and Max Emanuel's Franco-Spanish-Bavarian army. The Duke's subtle moves and changes in emphasis during the battle – something his opponents failed to realise until it was too late – caught the French in a tactical vice. With their foe broken and routed, the Allies were able to exploit their victory to the full. Town after town fell, including Brussels, Bruges and Antwerp; by the end of the campaign Villeroi's army had been driven from most of the Spanish Netherlands. With Prince Eugene's subsequent success at the Battle of Turin in northern Italy, the Allies had imposed the greatest loss of territory and resources that Louis XIV would suffer during the war. Thus the year 1706 proved, for the Allies, to be an "annus mirabilis". After their disastrous defeat at Blenheim in 1704, the next year brought the French some respite. The Duke of Marlborough had intended the 1705 campaign – an invasion of France through the Moselle valley – to complete the work of Blenheim and persuade King Louis XIV to make peace, but the plan had been thwarted by friend and foe alike. The reluctance of his Dutch allies to see their frontiers denuded of troops for another gamble in Germany had denied Marlborough the initiative, but of far greater importance was the Margrave of Baden's pronouncement that he could not join the Duke in strength for the coming offensive. This was due in part to the sudden switching of troops from the Rhine to reinforce Prince Eugene in Italy, and in part to the deterioration of Baden's health, brought on by the re-opening of a severe foot wound he had received at the storming of the Schellenberg the previous year. Marlborough also had to cope with the death of Emperor Leopold I in May and the accession of Joseph I, which unavoidably complicated matters for the Grand Alliance.
The resilience of the French King and the efforts of his generals also added to Marlborough's problems. Marshal Villeroi, exerting considerable pressure on the Dutch commander, Count Overkirk, along the Meuse, took Huy on 10 June before pressing on towards Liège. With Marshal Villars standing firm on the Moselle, the Allied commander – whose supplies had by now become very short – was forced to call off his campaign on 16 June. "What a disgrace for Marlborough," exulted Villeroi, "to have made false movements without any result!" With Marlborough's departure north, the French transferred troops from the Moselle valley to reinforce Villeroi in Flanders, while Villars marched off to the Rhine. The Anglo-Dutch forces gained minor compensation for the failed Moselle campaign with the success at Elixheim and the crossing of the Lines of Brabant in the Spanish Netherlands (Huy was also retaken on 11 July), but a chance to bring the French to a decisive engagement eluded Marlborough. The year 1705 proved almost entirely barren for the Duke, whose military disappointments were only partly compensated by efforts on the diplomatic front, where, at the courts of Düsseldorf, Frankfurt, Vienna, Berlin and Hanover, Marlborough sought to bolster support for the Grand Alliance and extract promises of prompt assistance for the following year's campaign. On 11 January 1706, Marlborough finally reached London at the end of his diplomatic tour, but he had already been planning his strategy for the coming season. The first option (although it is debatable to what extent the Duke was committed to such an enterprise) was a plan to transfer his forces from the Spanish Netherlands to northern Italy; once there, he intended to link up with Prince Eugene in order to defeat the French and safeguard Savoy from being overrun. Savoy would then serve as a gateway into France by way of the mountain passes, or for an invasion with naval support along the Mediterranean coast via Nice and Toulon, in conjunction with redoubled Allied efforts in Spain. It seems that the Duke's favoured scheme was to return to the Moselle valley (where Marshal Marsin had recently taken command of French forces) and once more attempt an advance into the heart of France. But these decisions soon became academic. Shortly after Marlborough landed in the Dutch Republic on 14 April, news arrived of serious Allied setbacks in the wider war. Determined to show the Grand Alliance that France was still resolute, Louis XIV had prepared to launch a double surprise in Alsace and northern Italy. On the latter front Marshal Vendôme defeated the Imperial army at Calcinato on 19 April, pushing the Imperialists back in confusion (French forces were now in a position to prepare for the long-anticipated siege of Turin). In Alsace, Marshal Villars took Baden by surprise and captured Haguenau, driving him back across the Rhine in some disorder and creating a threat to Landau. With these reverses, the Dutch refused to contemplate Marlborough's ambitious march to Italy or any plan that denuded their borders of the Duke and their army. In the interest of coalition harmony, Marlborough prepared to campaign in the Low Countries. The Duke left The Hague on 9 May. "God knows I go with a heavy heart," he wrote six days later to his friend and political ally in England, Lord Godolphin, "for I have no hope of doing anything considerable, unless the French do what I am very confident they will not ..." – in other words, court battle.
On 17 May the Duke concentrated his Dutch and English troops at Tongeren, near Maastricht. The Hanoverians, Hessians and Danes, despite earlier undertakings, found, or invented, pressing reasons for withholding their support. Marlborough wrote an appeal to the Duke of Württemberg, the commander of the Danish contingent – "I send you this express to request your Highness to bring forward by a double march your cavalry so as to join us at the earliest moment ..." Additionally, the King "in" Prussia, Frederick I, had kept his troops in quarters behind the Rhine while his personal disputes with Vienna and the States General at The Hague remained unresolved. Nevertheless, the Duke could think of no circumstances in which the French would leave their strong positions and attack his army, even if Villeroi were first reinforced by substantial transfers from Marsin's command. But in this he had miscalculated. Although Louis XIV wanted peace, he wanted it on reasonable terms; for that, he needed victory in the field, and to convince the Allies that his resources were by no means exhausted. Following the successes in Italy and along the Rhine, Louis XIV was now hopeful of similar results in Flanders. Far from standing on the defensive, therefore – and unbeknown to Marlborough – Louis XIV was persistently goading his marshal into action. "[Villeroi] began to imagine," wrote St Simon, "that the King doubted his courage, and resolved to stake all at once in an effort to vindicate himself." Accordingly, on 18 May, Villeroi set off from Leuven at the head of 70 battalions, 132 squadrons and 62 cannon – an overall force of some 60,000 troops – and crossed the river Dyle to seek battle with the enemy. Spurred on by growing confidence in his ability to out-general his opponent, and by Versailles' determination to avenge Blenheim, Villeroi and his generals anticipated success. Neither opponent expected the clash at the exact moment or place where it occurred. The French moved first to Tienen (as if to threaten Zoutleeuw, abandoned by the French in October 1705), before turning southwards, heading for Jodoigne – a line of march that took Villeroi's army towards the narrow aperture of dry ground between the Mehaigne and Petite Gheete rivers close to the small villages of Ramillies and Taviers; but neither commander quite appreciated how far his opponent had travelled. Villeroi still believed (on 22 May) that the Allies were a full day's march away, when in fact they had camped near Corswaren waiting for the Danish squadrons to catch up; for his part, Marlborough deemed Villeroi still at Jodoigne when in reality he was approaching the plateau of Mont St. André with the intention of pitching camp near Ramillies. However, the Prussian infantry was not there: Marlborough wrote to Lord Raby, the English resident at Berlin – "If it should please God to give us victory over the enemy, the Allies will be little obliged to the King [Frederick] for the success." The following day, at 01:00, Marlborough dispatched Cadogan, his Quartermaster-General, with an advanced guard to reconnoitre the same dry ground that Villeroi's army was now heading toward, country that was well known to the Duke from previous campaigns. Two hours later the Duke followed with the main body: 74 battalions, 123 squadrons, 90 pieces of artillery and 20 mortars, totalling 62,000 troops.
At about 08:00, after Cadogan had just passed Merdorp, his force made brief contact with a party of French hussars gathering forage on the edge of the plateau of Jandrenouille. After a brief exchange of shots the French retired and Cadogan's dragoons pressed forward. When the mist lifted briefly, Cadogan discovered the smartly ordered lines of Villeroi's advance guard in the distance; a galloper hastened back to warn Marlborough. Two hours later the Duke, accompanied by the Dutch field commander Field Marshal Overkirk, General Daniel Dopff, and the Allied staff, rode up to Cadogan, and on the horizon to the westward could discern the massed ranks of the French army deploying for battle along the front. Marlborough later told Bishop Burnet that "the French army looked the best of any he had ever seen". The battlefield of Ramillies is very similar to that of Blenheim, for here too there is an immense area of arable land unimpeded by woods or hedges. Villeroi's right rested on the villages of Franquenée and Taviers, with the river Mehaigne protecting his flank. A large open plain lay between Taviers and Ramillies but, unlike at Blenheim, there was no stream to hinder the cavalry. His centre was secured by Ramillies itself, lying on a slight eminence which gave distant views to the north and east. The French left flank was protected by broken country, and by a stream, the Petite Gheete, which runs deep between steep and slippery slopes. On the French side of the stream the ground rises to Offus, the village which, together with Autre-Eglise farther north, anchored Villeroi's left flank. To the west of the Petite Gheete rises the plateau of Mont St. André; a second plain, the plateau of Jandrenouille – upon which the Anglo-Dutch army amassed – rises to the east. At 11:00, the Duke ordered the army to take up standard battle formation. On the far right, towards Foulz, the British battalions and squadrons took up their posts in a double line near the Jeuche stream. The centre was formed by the mass of Dutch, German, Protestant Swiss and Scottish infantry – perhaps 30,000 men – facing Offus and Ramillies. Also facing Ramillies, Marlborough placed a powerful battery of thirty 24-pounders, dragged into position by a team of oxen; further batteries were positioned overlooking the Petite Gheete. On their left, on the broad plain between Taviers and Ramillies – where Marlborough thought the decisive encounter must take place – Overkirk drew up the 69 squadrons of the Dutch and Danish horse, supported by 19 battalions of Dutch infantry and two artillery pieces. Meanwhile, Villeroi deployed his forces. In Taviers on his right, he placed two battalions of the Greder Suisse Régiment, with a smaller force forward in Franquenée; the whole position was protected by the boggy ground of the Mehaigne river, thus preventing an Allied flanking movement. In the open country between Taviers and Ramillies, he placed 82 squadrons under General de Guiscard, supported by several interleaved brigades of French, Swiss and Bavarian infantry. Along the Ramillies–Offus–Autre-Eglise ridge-line, Villeroi positioned Walloon and Bavarian infantry, supported by the Elector of Bavaria's 50 squadrons of Bavarian and Walloon cavalry placed behind on the plateau of Mont St. André. Ramillies, Offus and Autre-Eglise were all packed with troops and put in a state of defence, with alleys barricaded and walls loop-holed for muskets. Villeroi also positioned powerful batteries near Ramillies.
These guns (some of which were of the three-barrelled kind first seen at Elixheim the previous year) enjoyed good arcs of fire, able to cover fully the approaches over the plateau of Jandrenouille across which the Allied infantry would have to pass. Marlborough, however, noticed several important weaknesses in the French dispositions. Tactically, it was imperative for Villeroi to occupy Taviers on his right and Autre-Eglise on his left, but by adopting this posture he had been forced to over-extend his forces. Moreover, this disposition – concave in relation to the Allied army – gave Marlborough the opportunity to form a more compact line, drawn up on a shorter front between the 'horns' of the French crescent; when the Allied blow came it would be more concentrated and carry more weight. Additionally, the Duke's disposition allowed troops to be transferred across his front far more easily than his foe could manage, a tactical advantage that would grow in importance as the events of the afternoon unfolded. Although Villeroi had the option of enveloping the flanks of the Allied army as it deployed on the plateau of Jandrenouille – threatening to encircle it – the Duke correctly gauged that the characteristically cautious French commander was intent on a defensive battle along the ridge-line. At 13:00 the batteries went into action; a little later two Allied columns set out from the extremities of their line and attacked the flanks of the Franco-Bavarian army. To the south the Dutch Guards, under the command of Colonel Wertmüller, came forward with their two field guns to seize the hamlet of Franquenée. The small Swiss garrison in the village, shaken by the sudden onslaught and unsupported by the battalions to their rear, was soon compelled to fall back towards the village of Taviers. Taviers was of particular importance to the Franco-Bavarian position: it protected the otherwise unsupported flank of General de Guiscard's cavalry on the open plain, while at the same time it allowed the French infantry to threaten the flanks of the Dutch and Danish squadrons as they came forward into position. But hardly had the retreating Swiss rejoined their comrades in that village when the Dutch Guards renewed their attack. The fighting amongst the alleys and cottages soon deteriorated into a fierce bayonet and clubbing "mêlée", but the superiority of Dutch firepower soon told. The accomplished French officer, Colonel de la Colonie, standing on the plain nearby, remembered – "this village was the opening of the engagement, and the fighting there was almost as murderous as the rest of the battle put together." By about 15:00 the Swiss had been pushed out of the village into the marshes beyond. Villeroi's right flank fell into chaos and was now open and vulnerable. Alerted to the situation, de Guiscard ordered an immediate attack with 14 squadrons of French dragoons stationed in the rear. Two further battalions of the Greder Suisse Régiment were also sent, but the attack was poorly co-ordinated and consequently went in piecemeal. The Anglo-Dutch commanders now sent dismounted Dutch dragoons into Taviers, which, together with the Guards and their field guns, poured concentrated musketry and canister fire into the advancing French troops. Colonel d'Aubigni, leading his regiment, fell mortally wounded.
As the French ranks wavered, the leading squadrons of Württemberg's Danish horse – now unhampered by enemy fire from either village – were also sent into the attack and fell upon the exposed flank of the Franco-Swiss infantry and dragoons. De la Colonie, with his Grenadiers Rouge regiment, together with the Cologne Guards who were brigaded with them, was now ordered forward from his post south of Ramillies to support the faltering counter-attack on the village. But on his arrival all was chaos – "Scarcely had my troops got over when the dragoons and Swiss who had preceded us, came tumbling down upon my battalions in full flight ... My own fellows turned about and fled along with them." De la Colonie managed to rally some of his grenadiers, together with the remnants of the French dragoons and Greder Suisse battalions, but it was an entirely peripheral operation, offering only fragile support for Villeroi's right flank. While the attack on Taviers went on, the Earl of Orkney launched his first line of English across the Petite Gheete in a determined attack against the barricaded villages of Offus and Autre-Eglise on the Allied right. Villeroi, posting himself near Offus, watched anxiously the redcoats' advance, mindful of the counsel he had received on 6 May from Louis XIV – "Have particular care to that part of the line which will endure the first shock of the English troops." Heeding this advice, the French commander began to transfer battalions from his centre to reinforce the left, drawing more foot from the already weakened right to replace them. As the English battalions descended the gentle slope of the Petite Gheete valley, struggling through the boggy stream, they were met by Major General de la Guiche's disciplined Walloon infantry, sent forward from around Offus. After concentrated volleys that exacted heavy casualties on the redcoats, the Walloons reformed back on the ridgeline in good order. The English took some time to reform their ranks on the dry ground beyond the stream and press on up the slope towards the cottages and barricades on the ridge. The vigour of the English assault, however, was such that they threatened to break through the line of the villages and out onto the open plateau of Mont St André beyond. This was potentially dangerous for the Allied infantry, who would then be at the mercy of the Elector's Bavarian and Walloon squadrons patiently waiting on the plateau for the order to move. Although Henry Lumley's English cavalry had managed to cross the marshy ground around the Petite Gheete, it was soon evident to Marlborough that sufficient cavalry support would not be practicable and that the battle could not be won on the Allied right. The Duke therefore called off the attack against Offus and Autre-Eglise. To make sure that Orkney obeyed his order to withdraw, Marlborough sent his Quartermaster-General in person with the command. Despite Orkney's protestations, Cadogan insisted on compliance and, reluctantly, Orkney gave the word for his troops to fall back to their original positions on the edge of the plateau of Jandrenouille. It is still not clear how far Orkney's advance was planned only as a feint; according to the historian David Chandler, it is probably more accurate to surmise that Marlborough launched Orkney in a serious probe with a view to sounding out the possibilities of the sector. Nevertheless, the attack had served its purpose.
Villeroi had given his personal attention to that wing and strengthened it with large bodies of horse and foot that ought to have been taking part in the decisive struggle south of Ramillies. Meanwhile, the Dutch assault on Ramillies was gaining pace. Marlborough's younger brother, General of Infantry Charles Churchill, ordered four brigades of foot to attack the village. The assault consisted of 12 battalions of Dutch infantry commanded by Major Generals Schultz and Spaar; two brigades of Saxons under Count Schulenburg; a Scottish brigade in Dutch service led by the 2nd Duke of Argyle; and a small brigade of Protestant Swiss. The 20 French and Bavarian battalions in Ramillies – supported by the Irish dragoons who had left Ireland in the Flight of the Wild Geese to join Clare's Dragoons, and by a small brigade of Cologne and Bavarian Guards under the Marquis de Maffei – put up a determined defence, initially driving back the attackers with severe losses, as commemorated in the song "Clare's Dragoons". Seeing that Schultz and Spaar were faltering, Marlborough now ordered Orkney's second-line British and Danish battalions (who had not been used in the assault on Offus and Autre-Eglise) to move south towards Ramillies. Shielded as they were from observation by a slight fold in the land, their commander, Brigadier-General Van Pallandt, ordered the regimental colours to be left in place on the edge of the plateau to convince their opponents that they were still in their initial position. The French thus remained oblivious to the Allies' real strength and intentions on the opposite side of the Petite Gheete: Marlborough was throwing his full weight against Ramillies and the open plain to the south. Villeroi, meanwhile, was still moving more reserves of infantry in the opposite direction, towards his left flank; crucially, it would be some time before the French commander noticed the subtle change in emphasis of the Allied dispositions. At around 15:30, Overkirk advanced his massed squadrons on the open plain in support of the infantry attack on Ramillies. Overkirk's squadrons – 48 Dutch, supported on their left by 21 Danish – steadily advanced towards the enemy (taking care not to tire the horses prematurely), before breaking into a trot to gain the impetus for their charge. The Marquis de Feuquières, writing after the battle, described the scene – "They advanced in four lines ... As they approached they advanced their second and fourth lines into the intervals of their first and third lines; so that when they made their advance upon us, they formed only one front, without any intermediate spaces." The initial clash favoured the Dutch and Danish squadrons. The disparity in numbers – exacerbated by Villeroi stripping the French ranks of infantry to reinforce his left flank – enabled Overkirk's cavalry to throw the first line of French horse back in some disorder towards their second-line squadrons. This line also came under severe pressure and, in turn, was forced back upon the third line of cavalry and the few battalions still remaining on the plain. But these French horsemen were amongst the best in Louis XIV's army – the "Maison du Roi", supported by four elite squadrons of Bavarian Cuirassiers. Ably led by de Guiscard, the French cavalry rallied, thrusting back the Allied squadrons in successful local counter-attacks.
On Overkirk's right flank, close to Ramillies, ten of his squadrons suddenly broke ranks and scattered, riding headlong to the rear to recover their order, leaving the left flank of the Allied assault on Ramillies dangerously exposed. Notwithstanding his lack of infantry support, de Guiscard threw his cavalry forward in an attempt to split the Allied army in two. A crisis threatened the centre, but from his vantage point Marlborough was at once aware of the situation. The Allied commander summoned the cavalry on the right wing to reinforce his centre, leaving only the English squadrons in support of Orkney. Thanks to a combination of battle-smoke and favourable terrain, his redeployment went unnoticed by Villeroi, who made no attempt to transfer any of his own 50 unused squadrons. While he waited for the fresh reinforcements to arrive, Marlborough flung himself into the "mêlée", rallying some of the Dutch cavalry who were in confusion. But his personal involvement nearly led to his undoing. A number of French horsemen, recognising the Duke, came surging towards his party. Marlborough's horse tumbled and the Duke was thrown – "Milord Marlborough was rid over," wrote Orkney some time later. It was a critical moment of the battle. "Major-General Murray," recalled one eyewitness, " ... seeing him fall, marched up in all haste with two Swiss battalions to save him and stop the enemy who were hewing all down in their way." Fortunately Marlborough's newly appointed aide-de-camp, Richard Molesworth, galloped to the rescue, mounted the Duke on his horse and made good their escape, before Murray's disciplined ranks threw back the pursuing French troopers. After a brief pause, Marlborough's equerry, Colonel Bringfield (or Bingfield), led up another of the Duke's spare horses; but while assisting him onto his mount, the unfortunate Bringfield was hit by an errant cannonball that sheared off his head. One account has it that the cannonball flew between the Captain-General's legs before hitting the colonel, whose torso fell at Marlborough's feet – a moment subsequently depicted in a lurid set of contemporary playing cards. Nevertheless, the danger passed, enabling the Duke to attend to the positioning of the cavalry reinforcements feeding down from his right flank – a change of which Villeroi remained blissfully unaware. The time was about 16:30, and the two armies were in close contact across the whole front: from the skirmishing in the marshes in the south, through the vast cavalry battle on the open plain, to the fierce struggle for Ramillies at the centre, and to the north, where, around the cottages of Offus and Autre-Eglise, Orkney and de la Guiche faced each other across the Petite Gheete, ready to renew hostilities. The arrival of the transferred squadrons now began to tip the balance in favour of the Allies. Tired, and suffering a growing list of casualties, Guiscard's squadrons battling on the plain at last began to feel the effects of their numerical inferiority. After the earlier failure to hold or retake Franquenée and Taviers, Guiscard's right flank had become dangerously exposed, and a fatal gap had opened on the right of his line. Taking advantage of this breach, Württemberg's Danish cavalry swept forward, wheeling to penetrate the flank of the Maison du Roi, whose attention was almost entirely fixed on holding back the Dutch.
Sweeping forwards, virtually without resistance, the 21 Danish squadrons reformed behind the French around the area of the Tomb of Ottomond, facing north across the plateau of Mont St André towards the exposed flank of Villeroi's army. The final Allied reinforcements for the cavalry contest to the south were at last in position; Marlborough's superiority on the left could no longer be denied, and his fast-moving plan took hold of the battlefield. Now, far too late, Villeroi tried to redeploy his 50 unused squadrons, but a desperate attempt to form line facing south, stretching from Offus to Mont St André, floundered amongst the baggage and tents of the French camp carelessly left there after the initial deployment. The Allied commander ordered his cavalry forward against the now heavily outnumbered French and Bavarian horsemen. De Guiscard's right flank, without proper infantry support, could no longer resist the onslaught and, turning their horses northwards, the French broke and fled in complete disorder. Even the squadrons then being scrambled together by Villeroi behind Ramillies could not withstand the onslaught. "We had not got forty yards on our retreat," remembered Captain Peter Drake, an Irishman serving with the French, "when the words 'sauve qui peut' went through the great part, if not the whole army, and put all to confusion." In Ramillies the Allied infantry, now reinforced by the English troops brought down from the north, at last broke through. The Régiment de Picardie stood their ground but were caught between Colonel Borthwick's Scots-Dutch regiment and the English reinforcements. Borthwick was killed, as was Charles O'Brien, the Irish Viscount Clare in French service, fighting at the head of his regiment. The Marquis de Maffei attempted one last stand with his Bavarian and Cologne Guards, but it proved in vain. Noticing a rush of horsemen fast approaching from the south, he later recalled: "… I went towards the nearest of these squadrons to instruct their officer, but instead of being listened to [I] was immediately surrounded and called upon to ask for quarter." The roads leading north and west were choked with fugitives. Orkney now sent his English troops back across the Petite Gheete stream to once again storm Offus, where de la Guiche's infantry had begun to drift away in the confusion. To the right of the infantry, Lord John Hay's 'Scots Greys' also picked their way across the stream and charged the Régiment du Roi within Autre-Eglise. "Our dragoons," wrote John Deane, "pushing into the village … made terrible slaughter of the enemy." The Bavarian Horse Grenadiers and the Electoral Guards withdrew and formed a shield about Villeroi and the Elector, but were scattered by Lumley's cavalry. Stuck in the mass of fugitives fleeing the battlefield, the French and Bavarian commanders narrowly escaped capture by General Cornelius Wood who, unaware of their identity, had to content himself with the seizure of two Bavarian Lieutenant-Generals. Far to the south, the remnants of de la Colonie's brigade headed in the opposite direction towards the French-held fortress of Namur. The retreat became a rout. Individual Allied commanders drove their troops forward in pursuit, allowing their beaten enemy no chance to recover. Soon the Allied infantry could no longer keep up, but their cavalry were off the leash, heading through the gathering night for the crossings on the Dyle river. 
At last, shortly after midnight, Marlborough called a halt to the pursuit near Meldert, some miles from the field. "It was indeed a truly shocking sight to see the miserable remains of this mighty army," wrote Captain Drake, "… reduced to a handful." What was left of Villeroi's army was now broken in spirit; the imbalance of the casualty figures amply demonstrates the extent of the disaster for Louis XIV's army (see below). In addition, hundreds of French soldiers were fugitives, many of whom would never remuster to the colours. Villeroi also lost 52 artillery pieces and his entire engineer pontoon train. In the words of Marshal Villars, the French defeat at Ramillies was "the most shameful, humiliating and disastrous of routs". Town after town now succumbed to the Allies. Leuven fell on 25 May 1706; three days later, the Allies entered Brussels, the capital of the Spanish Netherlands. Marlborough realised the great opportunity created by the early victory of Ramillies: "We now have the whole summer before us," wrote the Duke from Brussels to Robert Harley, "and with the blessing of God I shall make the best use of it." Malines, Lierre, Ghent, Alost, Damme, Oudenaarde, Bruges, and on 6 June Antwerp, all subsequently fell to Marlborough's victorious army and, like Brussels, proclaimed the Austrian candidate for the Spanish throne, the Archduke Charles, as their sovereign. Villeroi was helpless to arrest the process of collapse. When Louis XIV learnt of the disaster he recalled Marshal Vendôme from northern Italy to take command in Flanders; but it would be weeks before the command changed hands. As news spread of the Allies' triumph, the Prussian, Hessian and Hanoverian contingents, long delayed by their respective rulers, eagerly joined the pursuit of the broken French and Bavarian forces. "This," wrote Marlborough wearily, "I take to be owing to our late success." Meanwhile, Overkirk took the port of Ostend on 4 July, thus opening a direct route to the English Channel for communication and supply; but the Allies were making scant progress against Dendermonde, whose governor, the Marquis de Valée, was stubbornly resisting. Only later, when Cadogan and Churchill went to take charge, did the town's defences begin to fail. Vendôme formally took over command in Flanders on 4 August; Villeroi would never again receive a major command – "I cannot foresee a happy day in my life save only that of my death." Louis XIV was more forgiving to his old friend – "At our age, Marshal, we must no longer expect good fortune." In the meantime, Marlborough invested the elaborate fortress of Menin which, after a costly siege, capitulated on 22 August. Dendermonde finally succumbed on 6 September, followed by Ath – the last conquest of 1706 – on 2 October. By the time Marlborough had closed down the Ramillies campaign he had denied the French most of the Spanish Netherlands west of the Meuse and north of the Sambre – an unsurpassed operational triumph for the English Duke, but once again not a decisive one: these gains did not defeat France. The immediate question for the Allies was how to deal with the Spanish Netherlands, a subject on which the Austrians and the Dutch were diametrically opposed. Emperor Joseph I, acting on behalf of his younger brother King 'Charles III', absent in Spain, claimed that reconquered Brabant and Flanders should be placed under the immediate control of a governor named by himself. 
The Dutch, however, who had supplied the major share of the troops and money to secure the victory (the Austrians had produced nothing of either), claimed the government of the region until the war was over, and demanded that after the peace they should continue to garrison Barrier Fortresses stronger than those which had fallen so easily to Louis XIV's forces in 1701. Marlborough mediated between the two parties but favoured the Dutch position. To sway the Duke's opinion, the Emperor offered Marlborough the governorship of the Spanish Netherlands. It was a tempting offer, but in the name of Allied unity it was one he refused. In the end England and the Dutch Republic took control of the newly won territory for the duration of the war, after which it was to be handed over to the direct rule of 'Charles III', subject to the reservation of a Dutch Barrier, the extent and nature of which had yet to be settled. Meanwhile, on the Upper Rhine, Villars had been forced onto the defensive as battalion after battalion was sent north to bolster collapsing French forces in Flanders; there was now no possibility of his undertaking the re-capture of Landau. Further good news for the Allies arrived from northern Italy where, on 7 September, Prince Eugene had routed a French army before the Piedmontese capital, Turin, driving the Franco-Spanish forces from northern Italy. Only from Spain did Louis XIV receive any good news: Das Minas and Galway had been forced to retreat from Madrid towards Valencia, allowing Philip V to re-enter his capital on 4 October. All in all, though, the situation had changed considerably, and Louis XIV began to look for ways to end what was fast becoming a ruinous war for France. For Queen Anne also, the Ramillies campaign had one overriding significance – "Now we have God be thanked so hopeful a prospect of peace." Instead of continuing the momentum of victory, however, cracks in Allied unity would enable Louis XIV to reverse some of the major setbacks suffered at Turin and Ramillies. The total number of French casualties cannot be calculated precisely, so complete was the collapse of the Franco-Bavarian army that day. David G. Chandler's "Marlborough as Military Commander" and "A Guide to the Battlefields of Europe" are consistent with regard to French casualty figures: 12,000 dead and wounded plus some 7,000 taken prisoner. James Falkner, in "Ramillies 1706: Year of Miracles", also notes 12,000 dead and wounded, and states 'up to 10,000' taken prisoner. In "The Collins Encyclopaedia of Military History", Dupuy puts Villeroi's dead and wounded at 8,000, with a further 7,000 captured. Neil Litten, using French archives, suggests 7,000 killed and wounded and 6,000 captured, with a further 2,000 choosing to desert. John Millner's memoir, "Compendious Journal" (1733), is more specific, recording that 12,087 of Villeroi's army were killed or wounded, with another 9,729 taken prisoner. In "Marlborough", however, Correlli Barnett puts the total casualty figure as high as 30,000: 15,000 dead and wounded with an additional 15,000 taken captive. Trevelyan estimates Villeroi's casualties at 13,000, but adds, 'his losses by desertion may have doubled that number'. La Colonie omits a casualty figure in his "Chronicles of an Old Campaigner", but Saint-Simon in his "Memoirs" states 4,000 killed, adding 'many others were wounded and many important persons were taken prisoner'. Voltaire, however, in "Histoire du siècle de Louis XIV", records that 'the French lost there twenty thousand men'.
https://en.wikipedia.org/wiki?curid=4050
Brian Kernighan Brian Wilson Kernighan (born January 1, 1942) is a Canadian computer scientist. He worked at Bell Labs and contributed to the development of Unix alongside Unix creators Ken Thompson and Dennis Ritchie. Kernighan's name became widely known through co-authorship of the first book on the C programming language ("The C Programming Language") with Dennis Ritchie. Kernighan affirmed that he had no part in the design of the C language ("it's entirely Dennis Ritchie's work"). He authored many Unix programs, including ditroff. Kernighan is coauthor of the AWK and AMPL programming languages. The "K" of K&R C and the "K" in AWK both stand for "Kernighan". In collaboration with Shen Lin he devised well-known heuristics for two NP-complete optimization problems: graph partitioning and the travelling salesman problem. In a display of authorial equity, the former is usually called the Kernighan–Lin algorithm, while the latter is known as the Lin–Kernighan heuristic. Kernighan has been a Professor of Computer Science at Princeton University since 2000 and is the Director of Undergraduate Studies in the Department of Computer Science. In 2015, he co-authored the book "The Go Programming Language". Kernighan was born in Toronto. He attended the University of Toronto between 1960 and 1964, earning his bachelor's degree in engineering physics. He received his Ph.D. in electrical engineering from Princeton University in 1969, completing a doctoral dissertation titled "Some graph partitioning problems related to program segmentation" under the supervision of Peter G. Weiner. Each fall at Princeton he teaches a course called "Computers in Our World", which introduces the fundamentals of computing to non-majors. Kernighan was the software editor for Prentice Hall International. His "Software Tools" series spread the essence of "C/Unix thinking" with makeovers for BASIC, FORTRAN, and Pascal; most notably, his Ratfor ("rational FORTRAN") preprocessor was put in the public domain. He has said that if stranded on an island with only one programming language, it would have to be C. Kernighan coined the term "Unix" and helped popularize Thompson's Unix philosophy. He is also known for coining the expression "What You See Is All You Get" (WYSIAYG), a sarcastic variant of the original "What You See Is What You Get" (WYSIWYG), used to indicate that WYSIWYG systems might throw away information in a document that could be useful in other contexts. Kernighan's original 1978 implementation of "Hello, World!" was sold at The Algorithm Auction, the world's first auction of computer algorithms. In 1996, Kernighan taught CS50, the Harvard University introductory course in computer science. He was elected a member of the National Academy of Engineering in 2002 and a member of the American Academy of Arts and Sciences in 2019.
https://en.wikipedia.org/wiki?curid=4051
BCPL BCPL ("Basic Combined Programming Language") is a procedural, imperative, and structured programming language. Originally intended for writing compilers for other languages, BCPL is no longer in common use. However, its influence is still felt because a stripped down and syntactically changed version of BCPL, called B, was the language on which the C programming language was based. BCPL introduced several features of many modern programming languages, including using curly braces to delimit code blocks. BCPL was first implemented by Martin Richards of the University of Cambridge in 1967. BCPL was designed so that small and simple compilers could be written for it; reputedly some compilers could be run in 16 kilobytes. Further, the original compiler, itself written in BCPL, was easily portable. BCPL was thus a popular choice for bootstrapping a system. A major reason for the compiler's portability lay in its structure. It was split into two parts: the front end parsed the source and generated O-code, an intermediate language. The back end took the O-code and translated it into the machine code for the target machine. Only of the compiler's code needed to be rewritten to support a new machine, a task that usually took between 2 and 5 man-months. This approach became common practice later (e.g. Pascal, Java). The language is unusual in having only one data type: a word, a fixed number of bits, usually chosen to align with the architecture's machine word and of adequate capacity to represent any valid storage address. For many machines of the time, this data type was a 16-bit word. This choice later proved to be a significant problem when BCPL was used on machines in which the smallest addressable item was not a word but a byte or on machines with larger word sizes such as 32-bit or 64-bit. The interpretation of any value was determined by the operators used to process the values. (For example, codice_1 added two values together, treating them as integers; codice_2 indirected through a value, effectively treating it as a pointer.) In order for this to work, the implementation provided no type checking. Hungarian notation was developed to help programmers avoid inadvertent type errors. The mismatch between BCPL's word orientation and byte-oriented hardware was addressed in several ways. One was by providing standard library routines for packing and unpacking words into byte strings. Later, two language features were added: the bit-field selection operator and the infix byte indirection operator (denoted by codice_3). BCPL handles bindings spanning separate compilation units in a unique way. There are no user-declarable global variables; instead there is a global vector, similar to "blank common" in Fortran. All data shared between different compilation units comprises scalars and pointers to vectors stored in a pre-arranged place in the global vector. Thus the header files (files included during compilation using the "GET" directive) become the primary means of synchronizing global data between compilation units, containing "GLOBAL" directives that present lists of symbolic names, each paired with a number that associates the name with the corresponding numerically addressed word in the global vector. As well as variables, the global vector contains bindings for external procedures. This makes dynamic loading of compilation units very simple to achieve. 
Instead of relying on the link loader of the underlying implementation, BCPL effectively gives the programmer control of the linking process. The global vector also made it very simple to replace or augment standard library routines. A program could save the pointer from the global vector to the original routine and replace it with a pointer to an alternative version. The alternative might call the original as part of its processing. This could be used as a quick "ad hoc" debugging aid. BCPL was the first brace programming language, and the braces survived the syntactical changes to become a common means of denoting program source code statements. In practice, on the limited keyboards of the day, source programs often used the sequences "$(" and "$)" in place of the symbols "{" and "}". The single-line "//" comments of BCPL, which were not adopted by C, reappeared in C++ and later in C99. The book "BCPL: The language and its compiler" describes a permissive design philosophy: the programmer is assumed to know what he is doing, and the language places few restrictions in his way. BCPL was a response to difficulties with its predecessor, Cambridge Programming Language, later renamed Combined Programming Language (CPL), which was designed during the early 1960s. Richards created BCPL by "removing those features of the full language which make compilation difficult". The first compiler implementation, for the IBM 7094 under Compatible Time-Sharing System (CTSS), was written while Richards was visiting Project MAC at the Massachusetts Institute of Technology (MIT) in the spring of 1967. The language was first described in a paper presented to the 1969 Spring Joint Computer Conference. BCPL has been rumored to have originally stood for "Bootstrap Cambridge Programming Language", but the full CPL was never implemented, development having stopped at BCPL, and the acronym was later reinterpreted for the BCPL book. BCPL is the language in which the original hello world program was written. The first MUD was also written in BCPL (MUD1). Several operating systems were written partially or wholly in BCPL (for example, TRIPOS and the earliest versions of AmigaDOS). BCPL was also the initial language used in the seminal Xerox PARC Alto project, the first modern personal computer; among other projects, the Bravo document preparation system was written in BCPL. An early compiler, bootstrapped in 1969 by starting with a paper tape of the O-code of Martin Richards's Atlas 2 compiler, targeted the ICT 1900 series. The two machines had different word lengths (48 vs 24 bits), different character encodings, and different packed string representations—and the successful bootstrapping increased confidence in the practicality of the method. By late 1970, implementations existed for the Honeywell 635 and Honeywell 645, the IBM 360, the PDP-10, the TX-2, the CDC 6400, the UNIVAC 1108, the PDP-9, the KDF 9 and the Atlas 2. In 1974 a dialect of BCPL was implemented at BBN without using the intermediate O-code. The initial implementation was a cross-compiler hosted on BBN's TENEX PDP-10s, and directly targeted the PDP-11s used in BBN's implementation of the second-generation IMPs used in the Arpanet. There was also a version produced for the BBC Micro in the mid-1980s by Richards Computer Products, a company started by John Richards, the brother of Dr. Martin Richards. The BBC Domesday Project made use of the language. 
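The routine-replacement trick described above can be sketched as follows. This is a reconstruction for illustration rather than code from the BCPL book; it assumes the classic LIBHDR names WRCH (the standard character-output routine, reached through the global vector) and WRITES (which emits its characters via WRCH), and the spare slot number is arbitrary:

GET "LIBHDR"

GLOBAL $(
   OLDWRCH: 200             // spare global slot to remember the original routine
$)

LET TRACEWRCH(CH) BE $(
   // replacement output routine: a real debugging aid might log CH here first
   OLDWRCH(CH)              // call the saved original through its pointer
$)

LET START() = VALOF $(
   OLDWRCH := WRCH          // save the library routine's global-vector binding
   WRCH := TRACEWRCH        // splice the replacement into the global vector
   WRITES("HELLO*N")        // library output now flows through TRACEWRCH
   WRCH := OLDWRCH          // restore the original binding
   RESULTIS 0
$)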
Versions of BCPL for the Amstrad CPC and Amstrad PCW computers were also released in 1986 by UK software house Arnor Ltd. MacBCPL was released for the Apple Macintosh in 1985 by Topexpress Ltd, of Kensington, England. Both the design and philosophy of BCPL strongly influenced B, which in turn influenced C. Programmers at the time debated whether an eventual successor to C would be called "D", the next letter in the alphabet, or "P", the next letter in the parent language's name. The language most accepted as being C's successor is C++ ("++" being C's increment operator), although meanwhile a D programming language also exists. In 1979, implementations of BCPL existed for at least 25 architectures; the language gradually fell out of favour as C became popular on non-Unix systems. Martin Richards maintains a modern version of BCPL on his website, last updated in 2018. This can be set up to run on various systems including Linux, FreeBSD, Mac OS X and the Raspberry Pi. The latest distribution includes graphics and sound libraries, and there is a comprehensive manual in PDF format. He continues to program in it, including for his research on automated musical score following. A common informal MIME type for BCPL is . If these programs are run using Martin Richards' current version of Cintsys (December 2018), LIBHDR, START and WRITEF must be changed to lower case to avoid errors. Two short example programs are given: one prints factorials; the other counts solutions to the N queens problem.
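The factorial program is commonly reproduced along the following lines. This is a reconstruction built around the surviving fragments rather than a verbatim listing, assuming the classic LIBHDR interface (with the lower-case renaming noted above for modern Cintsys); the loop bound is arbitrary:

GET "LIBHDR"

LET START() = VALOF $(
   FOR I = 1 TO 10 DO
      WRITEF("F(%N) = %N*N", I, FACT(I))   // print I and the factorial of I
   RESULTIS 0
$)

AND FACT(N) = N = 0 -> 1, N * FACT(N - 1)  // "E1 -> E2, E3" is BCPL's conditional expression

The N queens counterpart follows the same shape, declaring its shared state with GLOBAL and using a recursive routine TRY(LD, ROW, RD) whose three word-sized parameters carry the attacked diagonals and columns.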
https://en.wikipedia.org/wiki?curid=4052
Battleship A battleship is a large armored warship with a main battery consisting of large-caliber guns. During the late 19th and early 20th centuries the battleship was the most powerful type of warship, and a fleet centered around the battleship was part of the command-of-the-sea doctrine for several decades. By the time of World War II, however, the battleship was rendered obsolete as other vessels – primarily the smaller and faster destroyer, the stealthy submarine, and the more versatile aircraft carrier – came to be far more useful in naval warfare. While a few battleships were repurposed as fire support ships and as platforms for guided missiles, few countries maintained battleships after the 1950s, with the last battleships being decommissioned in the early 1990s. The term "battleship" came into formal use in the late 1880s to describe a type of ironclad warship now referred to by historians as the pre-dreadnought battleship. In 1906, the commissioning of HMS "Dreadnought" into the United Kingdom's Royal Navy heralded a revolution in battleship design. Subsequent battleship designs, influenced by HMS "Dreadnought", were referred to as "dreadnoughts", though the term eventually fell out of use as dreadnoughts became the only type of battleship in common use. Battleships were a symbol of naval dominance and national might, and for decades the battleship was a major factor in both diplomacy and military strategy. A global arms race in battleship construction began in Europe in the 1890s and culminated at the decisive Battle of Tsushima in 1905, the outcome of which significantly influenced the design of HMS "Dreadnought". The launch of "Dreadnought" in 1906 commenced a new naval arms race. Three major fleet actions between steel battleships took place: the long-range gunnery duel at the Battle of the Yellow Sea in 1904, the decisive Battle of Tsushima in 1905 (both during the Russo-Japanese War) and the inconclusive Battle of Jutland in 1916, during the First World War. Jutland was the largest naval battle and the only full-scale clash of dreadnoughts of the war, and it was the last major battle in naval history fought primarily by battleships. The Naval Treaties of the 1920s and 1930s limited the number of battleships, though technical innovation in battleship design continued. Both the Allied and Axis powers built battleships during World War II, though the increasing importance of the aircraft carrier meant that the battleship played a less important role than had been expected. The value of the battleship has been questioned, even during its heyday. There were few of the decisive fleet battles that battleship proponents expected and used to justify the vast resources spent on building battlefleets. Despite their huge firepower and protection, battleships were increasingly vulnerable to much smaller and relatively inexpensive weapons: initially the torpedo and the naval mine, and later aircraft and the guided missile. The growing range of naval engagements led to the aircraft carrier replacing the battleship as the leading capital ship during World War II, the last battleship to be launched being HMS "Vanguard" in 1944. Four battleships were retained by the United States Navy until the end of the Cold War for fire support purposes and were last used in combat during the Gulf War in 1991. The last battleships were struck from the U.S. Naval Vessel Register in the 2000s. Many World War II-era battleships remain in use today as museum ships. A ship of the line was the dominant warship of its age. 
It was a large, unarmored wooden sailing ship which mounted a battery of up to 120 smoothbore guns and carronades. The ship of the line developed gradually over centuries and, apart from growing in size, it changed little between the adoption of line-of-battle tactics in the early 17th century and the end of the sailing battleship's heyday in the 1830s. From 1794, the alternative term 'line of battle ship' was contracted (informally at first) to 'battle ship' or 'battleship'. The sheer number of guns fired broadside meant a ship of the line could wreck any wooden enemy, holing her hull, knocking down masts, wrecking her rigging, and killing her crew. However, the effective range of the guns was as little as a few hundred yards, so the battle tactics of sailing ships depended in part on the wind. The first major change to the ship-of-the-line concept was the introduction of steam power as an auxiliary propulsion system. Steam power was gradually introduced to the navy in the first half of the 19th century, initially for small craft and later for frigates. The French Navy introduced steam to the line of battle with the 90-gun "Napoléon" in 1850—the first true steam battleship. "Napoléon" was armed as a conventional ship of the line, but her steam engines could propel her regardless of the wind condition. This was a potentially decisive advantage in a naval engagement. The introduction of steam accelerated the growth in size of battleships. France and the United Kingdom were the only countries to develop fleets of wooden steam screw battleships, although several other navies operated small numbers of screw battleships, including Russia (9), the Ottoman Empire (3), Sweden (2), Naples (1), Denmark (1) and Austria (1). The adoption of steam power was only one of a number of technological advances which revolutionized warship design in the 19th century. The ship of the line was overtaken by the ironclad: powered by steam, protected by metal armor, and armed with guns firing high-explosive shells. Guns that fired explosive or incendiary shells were a major threat to wooden ships, and these weapons quickly became widespread after the introduction of 8-inch shell guns as part of the standard armament of French and American line-of-battle ships in 1841. In the Crimean War, six line-of-battle ships and two frigates of the Russian Black Sea Fleet destroyed seven Turkish frigates and three corvettes with explosive shells at the Battle of Sinop in 1853. Later in the war, French ironclad floating batteries used similar weapons against the defenses at the Battle of Kinburn. Nevertheless, wooden-hulled ships stood up comparatively well to shells, as shown in the 1866 Battle of Lissa, where the modern Austrian steam two-decker "Kaiser" ranged across a confused battlefield, rammed an Italian ironclad and took 80 hits from Italian ironclads, many of which were shells, but including at least one 300-pound shot at point-blank range. Despite losing her bowsprit and her foremast, and being set on fire, she was ready for action again the very next day. The development of high-explosive shells made the use of iron armor plate on warships necessary. In 1859 France launched "Gloire", the first ocean-going ironclad warship. She had the profile of a ship of the line, cut down to one deck due to weight considerations. Although made of wood and reliant on sail for most journeys, "Gloire" was fitted with a propeller, and her wooden hull was protected by a layer of thick iron armor. 
"Gloire" prompted further innovation from the Royal Navy, anxious to prevent France from gaining a technological lead. The superior armored frigate followed "Gloire" by only 14 months, and both nations embarked on a program of building new ironclads and converting existing screw ships of the line to armored frigates. Within two years, Italy, Austria, Spain and Russia had all ordered ironclad warships, and by the time of the famous clash of the and the at the Battle of Hampton Roads at least eight navies possessed ironclad ships. Navies experimented with the positioning of guns, in turrets (like the USS "Monitor"), central-batteries or barbettes, or with the ram as the principal weapon. As steam technology developed, masts were gradually removed from battleship designs. By the mid-1870s steel was used as a construction material alongside iron and wood. The French Navy's , laid down in 1873 and launched in 1876, was a central battery and barbette warship which became the first battleship in the world to use steel as the principal building material. The term "battleship" was officially adopted by the Royal Navy in the re-classification of 1892. By the 1890s, there was an increasing similarity between battleship designs, and the type that later became known as the 'pre-dreadnought battleship' emerged. These were heavily armored ships, mounting a mixed battery of guns in turrets, and without sails. The typical first-class battleship of the pre-dreadnought era displaced 15,000 to 17,000 tons, had a speed of , and an armament of four guns in two turrets fore and aft with a mixed-caliber secondary battery amidships around the superstructure. An early design with superficial similarity to the pre-dreadnought is the British of 1871. The slow-firing main guns were the principal weapons for battleship-to-battleship combat. The intermediate and secondary batteries had two roles. Against major ships, it was thought a 'hail of fire' from quick-firing secondary weapons could distract enemy gun crews by inflicting damage to the superstructure, and they would be more effective against smaller ships such as cruisers. Smaller guns (12-pounders and smaller) were reserved for protecting the battleship against the threat of torpedo attack from destroyers and torpedo boats. The beginning of the pre-dreadnought era coincided with Britain reasserting her naval dominance. For many years previously, Britain had taken naval supremacy for granted. Expensive naval projects were criticised by political leaders of all inclinations. However, in 1888 a war scare with France and the build-up of the Russian navy gave added impetus to naval construction, and the British Naval Defence Act of 1889 laid down a new fleet including eight new battleships. The principle that Britain's navy should be more powerful than the two next most powerful fleets combined was established. This policy was designed to deter France and Russia from building more battleships, but both nations nevertheless expanded their fleets with more and better pre-dreadnoughts in the 1890s. In the last years of the 19th century and the first years of the 20th, the escalation in the building of battleships became an arms race between Britain and Germany. The German naval laws of 1890 and 1898 authorised a fleet of 38 battleships, a vital threat to the balance of naval power. Britain answered with further shipbuilding, but by the end of the pre-dreadnought era, British supremacy at sea had markedly weakened. 
In 1883, the United Kingdom had 38 battleships, twice as many as France and almost as many as the rest of the world put together. In 1897, Britain's lead was far smaller due to competition from France, Germany, and Russia, as well as the development of pre-dreadnought fleets in Italy, the United States and Japan. The Ottoman Empire, Spain, Sweden, Denmark, Norway, the Netherlands, Chile and Brazil all had second-rate fleets led by armored cruisers, coastal defence ships or monitors. Pre-dreadnoughts continued the technical innovations of the ironclad. Turrets, armor plate, and steam engines were all improved over the years, and torpedo tubes were also introduced. A small number of designs, including the American "Kearsarge" and "Virginia" classes, experimented with all or part of the 8-inch intermediate battery superimposed over the 12-inch primary. Results were poor: recoil factors and blast effects resulted in the 8-inch battery being completely unusable, and the inability to train the primary and intermediate armaments on different targets led to significant tactical limitations. Even though such innovative designs saved weight (a key reason for their inception), they proved too cumbersome in practice. In 1906, the British Royal Navy launched the revolutionary HMS "Dreadnought". Created as a result of pressure from Admiral Sir John ("Jackie") Fisher, HMS "Dreadnought" made existing battleships obsolete. Combining an "all-big-gun" armament of ten 12-inch (305 mm) guns with unprecedented speed (from steam turbine engines) and protection, she prompted navies worldwide to re-evaluate their battleship building programs. While the Japanese had laid down an all-big-gun battleship, "Satsuma", in 1904, and the concept of an all-big-gun ship had been in circulation for several years, it had yet to be validated in combat. "Dreadnought" sparked a new arms race, principally between Britain and Germany but reflected worldwide, as the new class of warships became a crucial element of national power. Technical development continued rapidly through the dreadnought era, with step changes in armament, armor and propulsion. Ten years after "Dreadnought"'s commissioning, much more powerful ships, the super-dreadnoughts, were being built. In the first years of the 20th century, several navies worldwide experimented with the idea of a new type of battleship with a uniform armament of very heavy guns. Admiral Vittorio Cuniberti, the Italian Navy's chief naval architect, articulated the concept of an all-big-gun battleship in 1903. When the "Regia Marina" did not pursue his ideas, Cuniberti wrote an article in "Jane's" proposing an "ideal" future British battleship: a large armored warship of 17,000 tons, armed solely with a single-calibre main battery of twelve 12-inch (305 mm) guns, carrying belt armor, and capable of 24 knots (44 km/h). The Russo-Japanese War provided operational experience to validate the "all-big-gun" concept. During the Battle of the Yellow Sea on August 10, 1904, Admiral Togo of the Imperial Japanese Navy commenced deliberate 12-inch gun fire at the Russian flagship "Tzesarevich" at 14,200 yards (13,000 meters). At the Battle of Tsushima on May 27, 1905, Russian Admiral Rozhestvensky's flagship fired the first 12-inch guns at the Japanese flagship "Mikasa" at 7,000 meters. It is often held that these engagements demonstrated the importance of the heavy gun over its smaller counterparts, though some historians take the view that secondary batteries were just as important as the larger weapons when dealing with smaller, fast-moving torpedo craft. 
Such was the case, albeit unsuccessfully, when the Russian battleship "Knyaz Suvorov" at Tsushima was sent to the bottom by destroyer-launched torpedoes. The Japanese "Satsuma", designed in 1903–04, was in the event completed with a mixed 10- and 12-inch armament, and the design also retained traditional triple-expansion steam engines. As early as 1904, Jackie Fisher had been convinced of the need for fast, powerful ships with an all-big-gun armament. If Tsushima influenced his thinking, it was to persuade him of the need to standardise on guns. Fisher's concerns were submarines and destroyers equipped with torpedoes, then threatening to outrange battleship guns, making speed imperative for capital ships. Fisher's preferred option was his brainchild, the battlecruiser: lightly armored but heavily armed with eight 12-inch guns and propelled to high speed by steam turbines. It was to prove this revolutionary technology that "Dreadnought" was designed in January 1905, laid down in October 1905 and sped to completion by 1906. She carried ten 12-inch guns, had an 11-inch armor belt, and was the first large ship powered by turbines. She mounted her guns in five turrets; three on the centerline (one forward, two aft) and two on the wings, giving her at her launch twice the broadside of any other warship. She retained a number of 12-pound (3-inch, 76 mm) quick-firing guns for use against destroyers and torpedo boats. Her armor was heavy enough for her to go head-to-head with any other ship in a gun battle, and conceivably win. "Dreadnought" was to have been followed by three "Invincible"-class battlecruisers, their construction delayed to allow lessons from "Dreadnought" to be used in their design. While Fisher may have intended "Dreadnought" to be the last Royal Navy battleship, the design was so successful he found little support for his plan to switch to a battlecruiser navy. Although there were some problems with the ship (the wing turrets had limited arcs of fire and strained the hull when firing a full broadside, and the top of the thickest armor belt lay below the waterline at full load), the Royal Navy promptly commissioned another six ships to a similar design in the "Bellerophon" and "St. Vincent" classes. An American design, "South Carolina", authorized in 1905 and laid down in December 1906, was another of the first dreadnoughts, but she and her sister, "Michigan", were not launched until 1908. Both used triple-expansion engines and had a superior layout of the main battery, dispensing with "Dreadnought"'s wing turrets. They thus retained the same broadside, despite having two fewer guns. In 1897, before the revolution in design brought about by HMS "Dreadnought", the Royal Navy had 62 battleships in commission or building, a lead of 26 over France and 50 over Germany. From the 1906 launching of "Dreadnought", an arms race with major strategic consequences was prompted. Major naval powers raced to build their own dreadnoughts. Possession of modern battleships was not only seen as vital to naval power, but also, as with nuclear weapons after World War II, represented a nation's standing in the world. Germany, France, Japan, Italy, Austria, and the United States all began dreadnought programmes; while the Ottoman Empire, Argentina, Russia, Brazil, and Chile commissioned dreadnoughts to be built in British and American yards. 
By virtue of geography, the Royal Navy was able to use her imposing battleship and battlecruiser fleet to impose a strict and successful naval blockade of Germany, and kept Germany's smaller battleship fleet bottled up in the North Sea: only narrow channels led to the Atlantic Ocean, and these were guarded by British forces. Both sides were aware that, because of the greater number of British dreadnoughts, a full fleet engagement would be likely to result in a British victory. The German strategy was therefore to try to provoke an engagement on their terms: either to induce a part of the Grand Fleet to enter battle alone, or to fight a pitched battle near the German coastline, where friendly minefields, torpedo boats and submarines could be used to even the odds. This did not happen, however, due in large part to the necessity of keeping submarines for the Atlantic campaign. Submarines were the only vessels in the Imperial German Navy able to break out and raid British commerce in force, but even though they sank many merchant ships, they could not successfully counter-blockade the United Kingdom; the Royal Navy successfully adopted convoy tactics to combat Germany's submarine counter-blockade and eventually defeated it. This was in stark contrast to Britain's successful blockade of Germany. The first two years of war saw the Royal Navy's battleships and battlecruisers regularly "sweep" the North Sea, making sure that no German ships could get in or out. Only a few German surface ships that were already at sea, such as the famous light cruiser "Emden", were able to raid commerce. Even some of those that did manage to get out were hunted down by battlecruisers, as in the Battle of the Falklands on December 7, 1914. The results of the sweeping actions in the North Sea were battles including Heligoland Bight and Dogger Bank, and German raids on the English coast, all of which were attempts by the Germans to lure out portions of the Grand Fleet in an effort to defeat the Royal Navy in detail. On May 31, 1916, a further attempt to draw British ships into battle on German terms resulted in a clash of the battlefleets in the Battle of Jutland. The German fleet withdrew to port after two short encounters with the British fleet. Less than two months later, the Germans once again attempted to draw portions of the Grand Fleet into battle. The resulting Action of 19 August 1916 proved inconclusive. This reinforced German determination not to engage in a fleet-to-fleet battle. In the other naval theatres there were no decisive pitched battles. In the Black Sea, engagements between Russian and Ottoman battleships were restricted to skirmishes. In the Baltic Sea, action was largely limited to the raiding of convoys and the laying of defensive minefields; the only significant clash of battleship squadrons there was the Battle of Moon Sound, at which one Russian pre-dreadnought was lost. The Adriatic was in a sense the mirror of the North Sea: the Austro-Hungarian dreadnought fleet remained bottled up by the British and French blockade. And in the Mediterranean, the most important use of battleships was in support of the amphibious assault on Gallipoli. In September 1914, the threat posed to surface ships by German U-boats was confirmed by successful attacks on British cruisers, including the sinking of three British armored cruisers by the German submarine "U-9" in less than an hour. The British super-dreadnought "Audacious" soon followed suit, as she struck a mine laid by a German U-boat in October 1914 and sank. 
The threat that German U-boats posed to British dreadnoughts was enough to cause the Royal Navy to change their strategy and tactics in the North Sea to reduce the risk of U-boat attack. Further near-misses from submarine attacks on battleships, and casualties amongst cruisers, led to growing concern in the Royal Navy about the vulnerability of battleships. As the war wore on, however, it turned out that whilst submarines did prove to be a very dangerous threat to older pre-dreadnought battleships – as shown by the sinking of the Ottoman "Mesûdiye", caught in the Dardanelles by a British submarine, and of "Triumph" and "Majestic", torpedoed by "U-21" – the threat posed to dreadnought battleships proved to have been largely a false alarm. HMS "Audacious" turned out to be the only dreadnought sunk by a submarine in World War I. While battleships were never intended for anti-submarine warfare, there was one instance of a submarine being sunk by a dreadnought battleship: HMS "Dreadnought" rammed and sank the German submarine "U-29" on March 18, 1915, off the Moray Firth. Whilst the escape of the German fleet from the superior British firepower at Jutland was effected by the German cruisers and destroyers successfully turning away the British battleships, the German attempt to rely on U-boat attacks on the British fleet failed. Torpedo boats did have some successes against battleships in World War I, as demonstrated by the sinking of the British pre-dreadnought "Goliath" by the Ottoman destroyer "Muâvenet-i Millîye" during the Dardanelles Campaign and the destruction of the Austro-Hungarian dreadnought "Szent István" by Italian motor torpedo boats in June 1918. In large fleet actions, however, destroyers and torpedo boats were usually unable to get close enough to the battleships to damage them. The only battleship sunk in a fleet action by either torpedo boats or destroyers was the obsolescent German pre-dreadnought "Pommern", sunk by destroyers during the night phase of the Battle of Jutland. The German High Seas Fleet, for their part, were determined not to engage the British without the assistance of submarines; and since the submarines were needed more for raiding commercial traffic, the fleet stayed in port for much of the war. For many years afterwards, Germany would have no battleships at all. The Armistice with Germany required that most of the High Seas Fleet be disarmed and interned in a neutral port; largely because no neutral port could be found, the ships remained in British custody in Scapa Flow, Scotland. The Treaty of Versailles specified that the ships should be handed over to the British. Instead, most of them were scuttled by their German crews on June 21, 1919, just before the signature of the peace treaty. The treaty also limited the German Navy, and prevented Germany from building or possessing any capital ships. The inter-war period saw the battleship subjected to strict international limitations to prevent a costly arms race breaking out. While the victors were not limited by the Treaty of Versailles, many of the major naval powers were crippled after the war. Faced with the prospect of a naval arms race against the United Kingdom and Japan, which would in turn have led to a possible Pacific war, the United States was keen to conclude the Washington Naval Treaty of 1922. This treaty limited the number and size of battleships that each major nation could possess, and required Britain to accept parity with the U.S. and to abandon the British alliance with Japan. 
The Washington treaty was followed by a series of other naval treaties, including the First Geneva Naval Conference (1927), the First London Naval Treaty (1930), the Second Geneva Naval Conference (1932), and finally the Second London Naval Treaty (1936), which all set limits on major warships. These treaties became effectively obsolete on September 1, 1939, at the beginning of World War II, but the ship classifications that had been agreed upon still apply. The treaty limitations meant that fewer new battleships were launched in 1919–1939 than in 1905–1914. The treaties also inhibited development by imposing upper limits on the weights of ships. Designs like the projected British N3 class, the first American "South Dakota" class, and the Japanese "Kii" class—all of which continued the trend to larger ships with bigger guns and thicker armor—never got off the drawing board. Those designs which were commissioned during this period were referred to as treaty battleships. As early as 1914, the British Admiral Percy Scott predicted that battleships would soon be made irrelevant by aircraft. By the end of World War I, aircraft had successfully adopted the torpedo as a weapon. In 1921 the Italian general and air theorist Giulio Douhet completed a hugely influential treatise on strategic bombing titled "The Command of the Air", which foresaw the dominance of air power over naval units. In the 1920s, General Billy Mitchell of the United States Army Air Service, believing that air forces had rendered navies around the world obsolete, testified in front of Congress that "1,000 bombardment airplanes can be built and operated for about the price of one battleship" and that a squadron of these bombers could sink a battleship, making for more efficient use of government funds. This infuriated the U.S. Navy, but Mitchell was nevertheless allowed to conduct a careful series of bombing tests alongside Navy and Marine bombers. In 1921, he bombed and sank numerous ships, including the "unsinkable" German World War I battleship "Ostfriesland" and the American pre-dreadnought "Alabama". Although Mitchell had required "war-time conditions", the ships sunk were obsolete, stationary, defenseless and had no damage control. The sinking of "Ostfriesland" was accomplished by violating an agreement that would have allowed Navy engineers to examine the effects of various munitions: Mitchell's airmen disregarded the rules, and sank the ship within minutes in a coordinated attack. The stunt made headlines, and Mitchell declared, "No surface vessels can exist wherever air forces acting from land bases are able to attack them." While far from conclusive, Mitchell's test was significant because it put the proponents of the battleship against naval aviation on the back foot. Rear Admiral William A. Moffett used public relations against Mitchell to make headway toward expansion of the U.S. Navy's nascent aircraft carrier program. The Royal Navy, United States Navy, and Imperial Japanese Navy extensively upgraded and modernized their World War I–era battleships during the 1930s. Among the new features were an increased tower height and stability for the optical rangefinder equipment (for gunnery control), more armor (especially around turrets) to protect against plunging fire and aerial bombing, and additional anti-aircraft weapons. Some British ships received a large block superstructure nicknamed the "Queen Anne's castle", such as in "Queen Elizabeth" and "Warspite", which would be used in the new conning towers of the fast battleships. 
External bulges were added to improve buoyancy, counteracting weight increases, and to provide underwater protection against mines and torpedoes. The Japanese rebuilt all of their battleships, plus their battlecruisers, with distinctive "pagoda" structures, though the "Hiei" received a more modern bridge tower that would influence the new "Yamato" class. Bulges were fitted, including steel tube arrays to improve both underwater and vertical protection along the waterline. The U.S. experimented with cage masts and later tripod masts, though after the Japanese attack on Pearl Harbor some of the most severely damaged ships (such as "West Virginia" and "California") were rebuilt with tower masts, for an appearance similar to their contemporaries. Radar, which was effective beyond visual range and in complete darkness or adverse weather, was introduced to supplement optical fire control. Even when war threatened again in the late 1930s, battleship construction did not regain the level of importance it had held in the years before World War I. The "building holiday" imposed by the naval treaties meant that the capacity of dockyards worldwide had shrunk, and the strategic position had changed. In Germany, the ambitious Plan Z for naval rearmament was abandoned in favor of a strategy of submarine warfare supplemented by the use of battlecruisers and commerce raiding (in particular by pocket battleships). In Britain, the most pressing need was for air defenses and convoy escorts to safeguard the civilian population from bombing or starvation, and re-armament construction plans consisted of five ships of the "King George V" class. It was in the Mediterranean that navies remained most committed to battleship warfare. France intended to build six battleships of the "Dunkerque" and "Richelieu" classes, and the Italians four "Littorio"-class ships. Neither navy built significant aircraft carriers. The U.S. preferred to spend limited funds on aircraft carriers until the "North Carolina" class. Japan, also prioritising aircraft carriers, nevertheless began work on three mammoth "Yamato"-class ships (although the third, "Shinano", was later completed as a carrier); a planned fourth was cancelled. At the outbreak of the Spanish Civil War, the Spanish navy included only two small dreadnought battleships, "España" and "Jaime I". "España" (originally named "Alfonso XIII"), by then in reserve at the northwestern naval base of El Ferrol, fell into Nationalist hands in July 1936. The crew aboard "Jaime I" remained loyal to the Republic, killed their officers, who apparently supported Franco's attempted coup, and joined the Republican Navy. Thus each side had one battleship; however, the Republican Navy generally lacked experienced officers. The Spanish battleships mainly restricted themselves to mutual blockades, convoy escort duties, and shore bombardment, rarely engaging other surface units directly. In April 1937, "España" ran into a mine laid by friendly forces and sank with little loss of life. In May 1937, "Jaime I" was damaged by Nationalist air attacks and a grounding incident. The ship was forced to go back to port to be repaired, where she was again hit by several aerial bombs. It was then decided to tow the battleship to a more secure port, but during the transport she suffered an internal explosion that caused 300 deaths and her total loss. Several Italian and German capital ships participated in the non-intervention blockade. On May 29, 1937, two Republican aircraft managed to bomb the German pocket battleship "Deutschland" outside Ibiza, causing severe damage and loss of life. 
"Deutschland" retaliated two days later by bombarding Almería, causing much destruction, and the resulting "Deutschland" incident meant the end of German and Italian participation in non-intervention. The German "Schleswig-Holstein"—an obsolete pre-dreadnought—fired the first shots of World War II with the bombardment of the Polish garrison at Westerplatte; and the final surrender of the Japanese Empire took place aboard a United States Navy battleship, "Missouri". Between those two events, it had become clear that aircraft carriers were the new principal ships of the fleet and that battleships now performed a secondary role. Battleships played a part in major engagements in the Atlantic, Pacific and Mediterranean theaters; in the Atlantic, the Germans used their battleships as independent commerce raiders. However, clashes between battleships were of little strategic importance. The Battle of the Atlantic was fought between destroyers and submarines, and most of the decisive fleet clashes of the Pacific war were determined by aircraft carriers. In the first year of the war, armored warships defied predictions that aircraft would dominate naval warfare: "Scharnhorst" and "Gneisenau" surprised and sank the aircraft carrier "Glorious" off western Norway in June 1940, the only time a fleet carrier was sunk by surface gunnery. In the attack on Mers-el-Kébir, British battleships opened fire on the French battleships in the harbor near Oran in Algeria with their heavy guns. The fleeing French ships were then pursued by planes from aircraft carriers. The subsequent years of the war saw many demonstrations of the maturity of the aircraft carrier as a strategic naval weapon and its potential against battleships. The British air attack on the Italian naval base at Taranto sank one Italian battleship and damaged two more. The same Swordfish torpedo bombers played a crucial role in sinking the German battleship "Bismarck". On December 7, 1941, the Japanese launched a surprise attack on Pearl Harbor. Within a short time, five of eight U.S. battleships were sunk or sinking, with the rest damaged. All three American aircraft carriers were out to sea, however, and evaded destruction. The sinking of the British battleship "Prince of Wales" and the battlecruiser "Repulse" demonstrated the vulnerability of a battleship to air attack while at sea without sufficient air cover, settling the argument begun by Mitchell in 1921. Both warships were under way and en route to attack the Japanese amphibious force that had invaded Malaya when they were caught by Japanese land-based bombers and torpedo bombers on December 10, 1941. At many of the early crucial battles of the Pacific, for instance Coral Sea and Midway, battleships were either absent or overshadowed as carriers launched wave after wave of planes into the attack at a range of hundreds of miles. In later battles in the Pacific, battleships primarily performed shore bombardment in support of amphibious landings and provided anti-aircraft defense as escort for the carriers. Even the largest battleships ever constructed, Japan's "Yamato" class, which carried a main battery of nine 18-inch (46 cm) guns and were designed as a principal strategic weapon, were never given a chance to show their potential in the decisive battleship action that figured in Japanese pre-war planning. The last battleship confrontation in history was the Battle of Surigao Strait, on October 25, 1944, in which a numerically and technically superior American battleship group destroyed a lesser Japanese battleship group by gunfire after it had already been devastated by destroyer torpedo attacks. 
All but one of the American battleships in this confrontation had previously been sunk during the attack on Pearl Harbor and subsequently raised and repaired. When "Mississippi" fired the last salvo of this battle, the last salvo fired by a battleship against another heavy ship, she was "firing a funeral salute to a finished era of naval warfare". In April 1945, during the battle for Okinawa, the world's most powerful battleship, the "Yamato", was sent out on a suicide mission against a massive U.S. force and sunk by overwhelming carrier air attack, with nearly all hands lost. After World War II, several navies retained their existing battleships, but they were no longer strategically dominant military assets. Indeed, it soon became apparent that they were no longer worth the considerable cost of construction and maintenance, and only one new battleship was commissioned after the war, HMS "Vanguard". During the war it had been demonstrated that battleship-on-battleship engagements like Leyte Gulf were the exception and not the rule, and with the growing role of aircraft, engagement ranges were becoming longer and longer, making heavy gun armament irrelevant. The armor of a battleship was equally irrelevant in the face of a nuclear attack, as tactical missiles of ever-increasing range could be mounted on far smaller Soviet vessels. By the end of the 1950s, smaller vessel classes such as destroyers, which formerly offered no noteworthy opposition to battleships, were capable of eliminating battleships from outside the range of the ship's heavy guns. The remaining battleships met a variety of ends. "Arkansas" and "Nagato" were sunk during the testing of nuclear weapons in Operation Crossroads in 1946. Both battleships proved resistant to nuclear air burst but vulnerable to underwater nuclear explosions. The Italian "Giulio Cesare" was taken by the Soviets as reparations and renamed "Novorossiysk"; she was sunk by a leftover German mine in the Black Sea on October 29, 1955. The two "Andrea Doria"-class ships were scrapped in 1956. The French "Lorraine" was scrapped in 1954, "Richelieu" in 1968, and "Jean Bart" in 1970. The United Kingdom's four surviving "King George V"-class ships were scrapped in 1957, and "Vanguard" followed in 1960. All other surviving British battleships had been sold or broken up by 1949. The Soviet Union's remaining "Gangut"-class ships were scrapped between 1953 and 1957. Brazil's "Minas Geraes" was scrapped in Genoa in 1953, and her sister ship "São Paulo" sank during a storm in the Atlantic en route to the breakers in Italy in 1951. Argentina kept its two ships until 1956, and Chile kept "Almirante Latorre" (formerly HMS "Canada") until 1959. The Turkish battlecruiser "Yavuz" (formerly the German "Goeben", launched in 1911) was scrapped in 1976 after an offer to sell her back to Germany was refused. Sweden had several small coastal-defense battleships, one of which, "Gustav V", survived until 1970. The Soviets scrapped four large incomplete cruisers in the late 1950s, whilst plans to build a number of new "Stalingrad"-class battlecruisers were abandoned following the death of Joseph Stalin in 1953. The three old German battleships "Hessen", "Schleswig-Holstein" and "Schlesien" all met similar ends: "Hessen" was taken over by the Soviet Union, renamed "Tsel", and scrapped in 1960; "Schleswig-Holstein" was renamed "Borodino" and used as a target ship until 1960; "Schlesien", too, was used as a target ship, and was broken up between 1952 and 1957. The "Iowa"-class battleships gained a new lease of life in the U.S. Navy as fire support ships, their radar- and computer-controlled gunfire aimed with pinpoint accuracy at targets. The U.S. recommissioned all four "Iowa"-class battleships for the Korean War and "New Jersey" for the Vietnam War. 
These were primarily used for shore bombardment, "New Jersey" firing nearly 6,000 rounds of 16-inch shells and over 14,000 rounds of 5-inch projectiles during her tour on the gunline, seven times more rounds against shore targets in Vietnam than she had fired in the Second World War. As part of Navy Secretary John F. Lehman's effort to build a 600-ship Navy in the 1980s, and in response to the commissioning of "Kirov" by the Soviet Union, the United States recommissioned all four "Iowa"-class battleships. On several occasions, battleships were support ships in carrier battle groups, or led their own battleship battle group. These ships were modernized to carry Tomahawk land-attack missiles (TLAMs), with "New Jersey" seeing action bombarding Lebanon in 1983 and 1984, while "Missouri" and "Wisconsin" fired their 16-inch (406 mm) guns at land targets and launched missiles during Operation Desert Storm in 1991. "Wisconsin" served as the TLAM strike commander for the Persian Gulf, directing the sequence of launches that marked the opening of "Desert Storm", firing a total of 24 TLAMs during the first two days of the campaign. The primary threat to the battleships was Iraqi shore-based surface-to-surface missiles; "Missouri" was targeted by two Iraqi Silkworm missiles, one missing and the other intercepted by the British destroyer "Gloucester". After "Indiana" was stricken in 1962, the four "Iowa"-class ships were the only battleships in commission or reserve anywhere in the world. There was an extended debate over their retention when the four "Iowa" ships were finally decommissioned in the early 1990s. "Iowa" and "Wisconsin" were maintained to a standard whereby they could be rapidly returned to service as fire support vessels, pending the development of a superior fire support vessel. These last two battleships were finally stricken from the U.S. Naval Vessel Register in 2006. The Military Balance and a Russian defense periodical stated that the U.S. Navy listed one battleship in the reserve (Naval Inactive Fleet/Reserve 2nd Turn) in 2010. The Military Balance states the U.S. Navy listed no battleships in the reserve in 2014. When the last "Iowa"-class ship was finally stricken from the Naval Vessel Register, no battleships remained in service or in reserve with any navy worldwide. A number are preserved as museum ships, either afloat or in drydock. The U.S. has eight battleships on display: "Missouri", "New Jersey", "Iowa", "Wisconsin", "Massachusetts", "Texas", "North Carolina", and "Alabama". "Missouri" and "New Jersey" are museums at Pearl Harbor and Camden, New Jersey, respectively. "Iowa" is on display as an educational attraction at the Los Angeles Waterfront in San Pedro, California. "Wisconsin" now serves as a museum ship in Norfolk, Virginia. "Massachusetts", which has the distinction of never having lost a man during service, is on display at the Battleship Cove naval museum in Fall River, Massachusetts. "Texas", the first battleship turned into a museum, is on display at the San Jacinto Battleground State Historic Site, near Houston. "North Carolina" is on display in Wilmington, North Carolina. "Alabama" is on display in Mobile, Alabama. The wreck of the "Arizona", sunk during the Pearl Harbor attack in 1941, is designated a historical landmark and national gravesite. The only other 20th-century battleship on display is the Japanese pre-dreadnought "Mikasa". A replica of the ironclad battleship "Dingyuan" was built by the Weihai Port Bureau in 2003 and is on display in Weihai, China. Battleships were the embodiment of sea power.
For Alfred Thayer Mahan and his followers, a strong navy was vital to the success of a nation, and control of the seas was vital for the projection of force on land and overseas. Mahan's theory, proposed in "The Influence of Sea Power Upon History, 1660–1783" of 1890, dictated that the role of the battleship was to sweep the enemy from the seas. While the work of escorting, blockading, and raiding might be done by cruisers or smaller vessels, the presence of the battleship was a potential threat to any convoy escorted by any vessels other than capital ships. This concept of "potential threat" can be further generalized to the mere existence (as opposed to presence) of a powerful fleet tying the opposing fleet down. This concept came to be known as a "fleet in being"—an idle yet mighty fleet forcing others to spend time, resources, and effort to actively guard against it. Mahan went on to say that victory could only be achieved by engagements between battleships, which came to be known as the "decisive battle" doctrine in some navies, while targeting merchant ships (commerce raiding or "guerre de course", as posited by the "Jeune École") could never succeed. Mahan was highly influential in naval and political circles throughout the age of the battleship, calling for a large fleet of the most powerful battleships possible. Mahan's work developed in the late 1880s, and by the end of the 1890s it had acquired much international influence on naval strategy; in the end, it was adopted by many major navies (notably the British, American, German, and Japanese). The strength of Mahanian opinion was important in the development of the battleship arms races, and equally important in the agreement of the Powers to limit battleship numbers in the interwar era. The "fleet in being" concept suggested that battleships could, simply by their existence, tie down superior enemy resources. This in turn was believed to be able to tip the balance of a conflict even without a battle, which suggested that even for inferior naval powers a battleship fleet could have an important strategic effect. While the role of battleships in both World Wars reflected Mahanian doctrine, the details of battleship deployment were more complex. Unlike ships of the line, the battleships of the late 19th and early 20th centuries were significantly vulnerable to torpedoes and mines, weapons that had not existed in efficient form before that era, and which could be used by relatively small and inexpensive craft. The "Jeune École" doctrine of the 1870s and 1880s recommended placing torpedo boats alongside battleships; these would hide behind the larger ships until gun-smoke obscured visibility enough for them to dart out and fire their torpedoes. While this tactic was vitiated by the development of smokeless propellant, the threat from more capable torpedo craft (later including submarines) remained. By the 1890s, the Royal Navy had developed the first destroyers, which were initially designed to intercept and drive off any attacking torpedo boats. During the First World War and subsequently, battleships were rarely deployed without a protective screen of destroyers. Battleship doctrine emphasised the concentration of the battlegroup. In order for this concentrated force to be able to bring its power to bear on a reluctant opponent (or to avoid an encounter with a stronger enemy fleet), battlefleets needed some means of locating enemy ships beyond horizon range.
This was provided by scouting forces; at various stages battlecruisers, cruisers, destroyers, airships, submarines and aircraft were all used. (With the development of radio, direction finding and traffic analysis came into play as well, so even shore stations, broadly speaking, joined the battlegroup.) For most of their history, then, battleships operated surrounded by squadrons of destroyers and cruisers. The North Sea campaign of the First World War illustrates how, despite this support, the threat of mine and torpedo attack, and the failure to integrate or appreciate the capabilities of new techniques, seriously inhibited the operations of the Royal Navy Grand Fleet, the greatest battleship fleet of its time. The presence of battleships had a great psychological and diplomatic impact. Similar to possessing nuclear weapons today, the ownership of battleships served to enhance a nation's force projection. Even during the Cold War, the psychological impact of a battleship was significant. In 1946, USS "Missouri" was dispatched to carry home the remains of the Turkish ambassador to the United States, and her presence in Turkish and Greek waters staved off a possible Soviet thrust into the Balkan region. In September 1983, when Druze militia in Lebanon's Shouf Mountains fired upon U.S. Marine peacekeepers, the arrival of USS "New Jersey" stopped the firing. Gunfire from "New Jersey" later killed militia leaders. Battleships were the largest and most complex, and hence the most expensive, warships of their time; as a result, the value of investment in battleships has always been contested. As the French politician Étienne Lamy wrote in 1879, "The construction of battleships is so costly, their effectiveness so uncertain and of such short duration, that the enterprise of creating an armored fleet seems to leave fruitless the perseverance of a people". The "Jeune École" school of thought of the 1870s and 1880s sought alternatives to the crippling expense and debatable utility of a conventional battlefleet. It proposed what would nowadays be termed a sea denial strategy, based on fast, long-ranged cruisers for commerce raiding and torpedo boat flotillas to attack enemy ships attempting to blockade French ports. The ideas of the "Jeune École" were ahead of their time; it was not until the 20th century that efficient mines, torpedoes, submarines, and aircraft became available, allowing similar ideas to be implemented effectively. The determination of powers such as Germany to build battlefleets with which to confront much stronger rivals has been criticised by historians, who emphasise the futility of investment in a battlefleet that has no chance of matching its opponent in an actual battle.
https://en.wikipedia.org/wiki?curid=4054
Bifröst In Norse mythology, Bifröst (sometimes Bilröst or Bivrost) is a burning rainbow bridge that reaches between Midgard (Earth) and Asgard, the realm of the gods. The bridge is attested as "Bilröst" in the "Poetic Edda", compiled in the 13th century from earlier traditional sources, and as "Bifröst" in the "Prose Edda", written in the 13th century by Snorri Sturluson, as well as in the poetry of skalds. Both the "Poetic Edda" and the "Prose Edda" alternately refer to the bridge as Ásbrú (Old Norse "Æsir's bridge"). According to the "Prose Edda", the bridge ends in heaven at Himinbjörg, the residence of the god Heimdallr, who guards it from the jötnar. The bridge's destruction during Ragnarök by the forces of Muspell is foretold. Scholars have proposed that the bridge may have originally represented the Milky Way and have noted parallels between the bridge and another bridge in Norse mythology, Gjallarbrú. Scholar Andy Orchard posits that "Bifröst" may mean "shimmering path." He notes that the first element of "Bilröst"—"bil" (meaning "a moment")—"suggests the fleeting nature of the rainbow," which he connects to the first element of "Bifröst"—the Old Norse verb "bifa" (meaning "to shimmer" or "to shake")—noting that the element evokes notions of the "lustrous sheen" of the bridge. Austrian Germanist Rudolf Simek says that "Bifröst" either means "the swaying road to heaven" (also citing "bifa") or, if "Bilröst" is the original form of the two (which Simek says is likely), "the fleetingly glimpsed rainbow" (possibly connected to "bil", perhaps meaning "moment, weak point"). Two poems in the "Poetic Edda" and two books in the "Prose Edda" provide information about the bridge. In the "Poetic Edda", the bridge is mentioned in the poems "Grímnismál" and "Fáfnismál", where it is referred to as "Bilröst". In one of the two stanzas in the poem "Grímnismál" that mention the bridge, Grímnir (the god Odin in disguise) provides the young Agnarr with cosmological knowledge, including that Bilröst is the best of bridges. Later in "Grímnismál", Grímnir notes that Asbrú "burns all with flames" and that, every day, the god Thor wades through the waters of Körmt and Örmt and the two Kerlaugar. In "Fáfnismál", the dying wyrm Fafnir tells the hero Sigurd that, during the events of Ragnarök, the gods, bearing spears, will meet at Óskópnir. From there, the gods will cross Bilröst, which will break apart as they cross over it, causing their horses to dredge through an immense river. The bridge is mentioned in the "Prose Edda" books "Gylfaginning" and "Skáldskaparmál", where it is referred to as "Bifröst". In chapter 13 of "Gylfaginning", Gangleri (King Gylfi in disguise) asks the enthroned figure of High what way exists between heaven and earth. Laughing, High replies that the question isn't an intelligent one, and goes on to explain that the gods built a bridge from heaven to earth. He incredulously asks Gangleri if he has not heard the story before. High says that Gangleri must have seen it, and notes that Gangleri may call it a rainbow. High says that the bridge consists of three colors, has great strength, "and is built with art and skill to a greater extent than other constructions." High notes that, although the bridge is strong, it will break when "Muspell's lads" attempt to cross it, and their horses will have to make do with swimming over "great rivers." Gangleri says that it doesn't seem that the gods "built the bridge in good faith if it is liable to break, considering that they can do as they please."
High responds that the gods do not deserve blame for the breaking of the bridge, for "there is nothing in this world that will be secure when Muspell's sons attack." In chapter 15 of "Gylfaginning", Just-As-High says that Bifröst is also called "Asbrú", and that every day the gods ride their horses across it (with the exception of Thor, who instead wades through the boiling waters of the rivers Körmt and Örmt) to reach Urðarbrunnr, a holy well where the gods have their court. As a reference, Just-As-High quotes the second of the two stanzas in "Grímnismál" that mention the bridge (see above). Gangleri asks if fire burns over Bifröst. High says that the red in the bridge is burning fire and that, without it, the frost jötnar and mountain jötnar would "go up into heaven" if anyone who wanted to could cross Bifröst. High adds that, in heaven, "there are many beautiful places" and that "everywhere there has divine protection around it." In chapter 17, High tells Gangleri that the location of Himinbjörg "stands at the edge of heaven where Bifrost reaches heaven." While describing the god Heimdallr in chapter 27, High says that Heimdallr lives in Himinbjörg by Bifröst, and guards the bridge from mountain jötnar while sitting at the edge of heaven. In chapter 34, High quotes the first of the two "Grímnismál" stanzas that mention the bridge. In chapter 51, High foretells the events of Ragnarök. High says that, during Ragnarök, the sky will split open, and from the split will ride forth the "sons of Muspell". When the "sons of Muspell" ride over Bifröst it will break, "as was said above." In the "Prose Edda" book "Skáldskaparmál", the bridge receives a single mention. In chapter 16, a work by the 10th-century skald Úlfr Uggason is quoted, in which Bifröst is referred to as "the powers' way." In his translation of the "Poetic Edda", Henry Adams Bellows comments that the "Grímnismál" stanza mentioning Thor and the bridge may mean that "Thor has to go on foot in the last days of the destruction, when the bridge is burning. Another interpretation, however, is that when Thor leaves the heavens (i.e., when a thunder-storm is over) the rainbow-bridge becomes hot in the sun." John Lindow points to a parallel between Bifröst, which he notes is "a bridge between earth and heaven, or earth and the world of the gods", and the bridge Gjallarbrú, "a bridge between earth and the underworld, or earth and the world of the dead." Several scholars have proposed that Bifröst may represent the Milky Way. In the final scene of Richard Wagner's 1869 opera "Das Rheingold", the god Froh summons a rainbow bridge, over which the gods cross to enter Valhalla. The Bifröst appears in comic books associated with the Marvel Comics character Thor and in subsequent adaptations of those comic books. In the Marvel Cinematic Universe film "Thor", Jane Foster describes the Bifröst as an Einstein–Rosen bridge, which functions as a means of transportation across space in a short period of time.
https://en.wikipedia.org/wiki?curid=4055