Atari 5200

The Atari 5200 SuperSystem, commonly known as the Atari 5200, is a home video game console introduced in 1982 by Atari, Inc. as a higher-end complement to the popular Atari 2600. The 5200 was created to compete with the Intellivision, but wound up competing more directly with the ColecoVision shortly after its release. The 5200's internal hardware is almost identical to that of Atari's 8-bit computers, although software is not directly compatible between the two systems. The 5200's controllers have an analog joystick and a numeric keypad along with start, pause, and reset buttons. The 360-degree non-centering joystick was touted as offering more control than the eight-way joystick shipped with the Atari 2600. On May 21, 1984, during a press conference at which the Atari 7800 was introduced, company executives revealed that the 5200 had been discontinued after just two years on the market. Total sales of the 5200 were reportedly in excess of 1 million units, far short of its predecessor's sales of over 30 million.

Much of the technology in the Atari 8-bit family of home computers was originally developed for a second-generation games console intended to replace the 2600. However, as the system was reaching completion, the personal computer revolution was starting with the release of machines like the Commodore PET, TRS-80, and Apple II. These machines had less advanced hardware than the new Atari technology, but sold for much higher prices with correspondingly higher profit margins. Atari's management decided to enter this market, and the technology was repackaged into the Atari 400 and 800. The chipset used in these machines was created with the expectation that the 2600 would likely be obsolete by about 1980. Atari later decided to re-enter the games market with a design that closely matched its original 1978 specifications.

In its prototype stage, the Atari 5200 was called the "Atari Video System X – Advanced Video Computer System" and was codenamed "Pam" after a female employee at Atari, Inc. It is also rumored that PAM actually stood for "Personal Arcade Machine", as the majority of games for the system ended up being arcade conversions. Actual working "Atari Video System X" machines, whose hardware is identical to the Atari 5200's, do exist but are extremely rare.

The initial 1982 release of the system featured four controller ports, where nearly all other systems of the day had only one or two. The 5200 also featured a new style of controller with an analog joystick, a numeric keypad, two fire buttons on each side, and game function keys for Start, Pause, and Reset. Another innovation was the first automatic TV switchbox, which switched automatically from regular TV viewing to the game system signal when the console was powered on; previous RF adapters required the user to slide a switch on the adapter by hand. The RF box was also where the power supply connected, in a unique dual power/television-signal setup similar to the RCA Studio II's: a single cable coming out of the 5200 plugged into the switchbox and carried both electricity and the television signal.

The 1983 revision of the Atari 5200 has two controller ports instead of four, and reverted to a conventional separate power supply and a standard non-autoswitching RF switchbox. It also has changes in the cartridge port address lines to allow for the Atari 2600 adapter released that year.
While the adapter was only made to work on the two-port version, the four-port model can be modified to be compatible with it. In fact, towards the end of the four-port model's production run, a limited number of consoles were produced with these modifications; they can be identified by an asterisk in their serial numbers.

The controller prototypes used in the electrical development lab employed a yoke-and-gimbal mechanism taken from an RC airplane controller kit. The production analog joystick, which used a weak rubber boot rather than springs to provide centering, proved ungainly and unreliable. The controllers quickly became the Achilles' heel of the system because they combined an overly complex mechanical design with a very low-cost internal flex-circuit system. Another major flaw was that the stick's response was not linear from the center through the full arc of its travel. The controllers did, however, include a pause button, a unique feature at the time. Various third-party replacement joysticks were also released, including those made by Wico. Atari Inc. released the Pro-Line Trak-Ball controller for the system, used primarily for games such as "Centipede" and "Missile Command". A paddle controller and an updated self-centering version of the original controller were also in development, but never made it to market. Games were shipped with plastic card overlays that snapped in over the keypad; each card indicated which game functions, such as changing the view or vehicle speed, were assigned to each key. The primary controller was ranked the 10th-worst video game controller by IGN editor Craig Harris. An editor for "Next Generation" said that the non-centering joysticks "rendered many games nearly unplayable".

David H. Ahl in 1983 described the Atari 5200 as "a 400 computer in disguise". Its internal design was extensively based on that of the Atari 8-bit family, including the ANTIC, POKEY, and GTIA chips. Software designed for one does not run on the other, but porting the source code is not difficult as long as it does not use computer-specific features. "Antic" magazine reported in 1984 that "the similarities grossly outweigh the differences, so that a 5200 program can be developed and almost entirely debugged [on an Atari 8-bit computer] before testing on a 5200". John J. Anderson of "Creative Computing" alluded to the incompatibility being intentional, caused by rivalries between Atari's computer and console divisions. Besides the 5200's lack of a computer keyboard, there were several other hardware differences. In 1987, Atari Corporation released the XE Game System console, which was essentially a repackaged 65XE (the then-current iteration of the 8-bit computer line) with a detachable keyboard that could run home computer titles directly, unlike the 5200. Anderson wrote in 1984 that Atari could have released a console compatible with computer software in 1981.

The Atari 5200 did not fare well commercially compared to its predecessor, the Atari 2600. While it touted superior graphics to the 2600 and Mattel's Intellivision, the system was initially incompatible with the 2600's expansive library of games, and some market analysts have speculated that this hurt its sales, especially since an Atari 2600 cartridge adapter had been released for the Intellivision II. (A revised two-port model was released in 1983, along with a game adapter that allowed gamers to play all 2600 games.)
A shortage of new games also hampered the system, due in part to a lack of funding, with Atari continuing to develop most of its games for the saturated 2600 market. Many of the 5200's games appeared simply as updated versions of 2600 titles, which failed to excite consumers. Its pack-in game, "Super Breakout", was criticized for not doing enough to demonstrate the system's capabilities, and this gave the ColecoVision a significant advantage when its pack-in, "Donkey Kong", delivered a more authentic arcade experience than any previous game cartridge. In its list of the top 25 game consoles of all time, IGN claimed that the main reason for the 5200's market failure was the technological superiority of its competitor, while other sources maintain that the two consoles are roughly equivalent in power.

The 5200 received much criticism for the "sloppy" design of its non-centering analog controllers. Anderson described the controllers as "absolutely atrocious". David H. Ahl of "Creative Computing Video & Arcade Games" said in 1983 that the "Atari 5200 is, dare I say it, Atari's answer to Intellivision, Colecovision, and the Astrocade", describing the console as a "true mass market" version of the Atari 8-bit computers despite the software incompatibility. He criticized the joystick's imprecise control but said that "it is at least as good as many other controllers", and wondered why "Super Breakout" was the pack-in game when it did not use the 5200's improved graphics.

At one point following the 5200's release, Atari planned a smaller, cost-reduced version that would have removed the controller storage bin. Code-named the "Atari 5100" (a.k.a. the "Atari 5200 Jr."), it never went beyond the prototype stage; only a few fully working 5100s were made before the project was canceled.
https://en.wikipedia.org/wiki?curid=2780
Atari 7800

The Atari 7800 ProSystem, or simply the Atari 7800, is a home video game console officially released by Atari Corporation in 1986. It is almost fully backward-compatible with the Atari 2600, making it the first console with backward compatibility that did not require additional modules. It was considered affordable at its price of $79.95. The 7800 has significantly improved graphics hardware over the 2600 but uses the same audio chip. It also shipped with a different model of joystick from the 2600-standard CX40. The 1986 launch is sometimes referred to as a "re-release" or "relaunch" because the Atari 7800 had originally been announced on May 21, 1984, as a replacement for Atari, Inc.'s Atari 5200, but a general release was shelved due to the sale of the company.

Atari had been facing pressure from Coleco and its ColecoVision console, which supported graphics that more closely mirrored arcade games of the time than either the Atari 2600 or 5200. The Atari 5200 (released as a successor to the Atari 2600) was criticized for not being able to play 2600 games without an adapter. The Atari 7800 ProSystem was the first console from Atari, Inc. designed by an outside company, General Computer Corporation (GCC). It was designed in 1983–1984 with an intended mass-market rollout in June 1984, but was canceled shortly thereafter due to the sale of the company to Tramel Technology Ltd on July 2, 1984. The project was originally called the Atari 3600. With a background in creating arcade games such as "Food Fight", GCC designed the new system with a graphics architecture similar to arcade machines of the time. Powering the system is a slightly customized 6502 processor, the Atari SALLY (sometimes described as a "6502C"), running at 1.79 MHz. By some measures the 7800 is more powerful than Nintendo's 1983 NES, and by others less. It uses the 2600's Television Interface Adaptor chip, with the same restrictions, to generate two channels of audio.

The 7800 was initially released in southern California in June 1984, following an announcement on May 21, 1984, at the Summer Consumer Electronics Show. Thirteen games were announced for the system's launch: "Ms. Pac-Man", "Pole Position II", "Centipede", "Joust", "Dig Dug", "Desert Falcon", "", "Galaga", "Food Fight", "Ballblazer", "Rescue on Fractalus!", "Track & Field", and "Xevious". On July 2, 1984, Warner Communications sold Atari's Consumer Division to Jack Tramiel, and all projects were halted during an initial evaluation period. Modern publications have often incorrectly asserted that Jack Tramiel mothballed the Atari 7800 because he felt that video games were a passing fad, and that he dusted it off once the Nintendo Entertainment System became successful. In reality, a contractual issue arose: GCC had not been paid for its development of the 7800. Warner and Tramiel battled back and forth over who was accountable, with Tramiel believing that the 7800 should have been covered as part of his acquisition deal. In May 1985, Tramiel relented and paid GCC the overdue payment. This led to additional negotiations regarding the initial launch titles GCC had developed, and then to an effort to find someone to lead the new video game division, which was completed in November 1985. The original production run of the Atari 7800 languished on warehouse shelves until it was re-introduced in January 1986, after strong 2600 sales the previous Christmas. The console was released nationwide in May 1986 for $79.95.
The console's launch under Tramiel was more subdued than Warner had planned for the system in 1984, with a marketing budget of just $300,000. The keyboard and high score cartridge were canceled, the expansion port was removed from later production runs, and, in lieu of new titles, the system was launched with games intended for the 7800's debut in 1984. By the end of 1986, "Computer Entertainer" claimed the Atari 7800 had sold 100,000 consoles in the United States, fewer than the Master System's 125,000 and the NES's 1.1 million. According to Atari, due to manufacturing problems, it only managed to produce and sell 100,000 units by 1986, including units that had been in a warehouse since 1984. A common complaint in 1986 was a lack of games, including a gap of months between new releases ("Galaga" in August was followed by "Xevious" in November). By the end of 1986, the 7800 had 10 games, compared to Sega's 20 and Nintendo's 36. Nine of the NES games were third-party, whereas the 7800 and Master System had no third-party games. Atari's lineup for the 7800 emphasized high-quality versions of popular arcade games like "Joust" and "Asteroids", which at the time of the 1986 launch were four and seven years old, respectively.

Eleven titles were developed and sold under their own labels by three third-party companies (Absolute Entertainment, Activision, and Froggo), with the rest published by Atari itself, though most Atari development was contracted out. Some NES titles were developed by companies who had licensed their title from a different arcade manufacturer. While the creator of the NES version would be restricted from making a competitive version of an NES game, the original arcade copyright holder was not precluded from licensing out rights for a home version of an arcade game to multiple systems. Through this loophole, Atari 7800 conversions of "Mario Bros.", "Double Dragon", "Commando", "Rampage", "Xenophobe", "Ikari Warriors", and "Kung-Fu Master" were licensed and developed.

The Atari 7800 remained officially active in the United States between 1986 and 1991 and in Europe between 1989 and 1991. On January 1, 1992, Atari Corp. formally announced that production of the Atari 7800, the Atari 2600, the Atari 8-bit computer line, and the Atari XE Game System would cease. (It has since been discovered that Atari Corp. continued to develop games such as "Toki" for the Atari 7800 until all development was shut down in May 1993.) By the time of the cancellation, Nintendo's NES dominated the North American market, controlling 80% while Atari Corp. controlled just 12%. Despite trailing the Nintendo Entertainment System in units sold, the 7800 was a profitable enterprise for Atari Corp., benefiting largely from Atari's name and the system's 2600 compatibility; profits were strong owing to low investment in game development and marketing. "Retro Gamer" magazine issue 132 reported that, according to Atari UK Marketing Manager Darryl Still, "it was very well stocked by European retail" and, "Although it never got the consumer traction that the 2600 did, I remember we used to sell a lot of units through mail order catalogues and in the less affluent areas".

The graphics are generated by a custom chip called MARIA, which uses an approach common in contemporary arcade system boards and is different from other second- and third-generation consoles. Instead of a limited number of hardware sprites, MARIA treats everything as a sprite described in a series of display lists. Each display list contains pointers to graphics data along with color and positioning information. MARIA supports a palette of 256 colors and graphics modes that are either 160 or 320 pixels wide. While the 320-pixel modes theoretically enable the 7800 to create games at higher resolution than the 256-pixel-wide graphics of the Nintendo Entertainment System and Master System, the processing demands of MARIA result in most games using the 160-pixel mode. Depending on various parameters, each individual sprite can use from 1 to 12 colors, with 3 colors (plus a fourth "transparency" color) being the most common. In this format, the sprite references one of 8 palettes, each holding 3 assignable colors. There is also an assignable background color, which is visible wherever no other object covers it. In total the system can reference a 25-color palette on a scanline (eight palettes of three colors each, plus the background color). The graphics resolution, color palette assignments, and background color can be adjusted between scanlines, a technique documented in the original 1983 "Atari 3600 Software Guide". This could be used, for example, to render high-resolution text in one area of the screen while displaying more colorful graphics at lower resolution in the gameplay area.
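The display-list scheme just described can be pictured as per-zone lists of small sprite headers that MARIA walks on every scanline. The following C sketch is purely illustrative: the field widths are simplified, and the real MARIA 4- and 5-byte header encodings pack these fields more tightly.

```c
#include <stdint.h>

/* Illustrative model of a MARIA-style display-list entry; the real
   4- and 5-byte header formats pack these fields differently. */
struct dl_header {
    uint16_t data_addr; /* pointer to the sprite's graphics data   */
    uint8_t  palette;   /* which of the 8 palettes to use (0-7)    */
    uint8_t  width;     /* sprite width, in bytes of graphics data */
    uint8_t  hpos;      /* horizontal position on the line         */
};

/* Each palette holds 3 assignable colors, with a shared background
   color underneath: 8 palettes * 3 colors + 1 background = the 25
   colors referencable on one scanline, drawn from the 256-color
   master palette. MARIA walks the current zone's headers each
   scanline and composites every listed sprite into a line buffer. */
```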
MARIA's approach has advantages and disadvantages when it comes to generating graphics. It excels at moving large numbers of sprites on a static screen. Its flexible design enables games with a pseudo-3D appearance, such as "Ballblazer" (1987) and "F-18 Hornet" (1988). While side-scrolling games in the vein of "Super Mario Bros." are possible on the system (such as 1990's "Scrapyard Dog"), such a title is harder to develop than on a tile-based system such as the NES.

A common criticism of the 7800 regards its use of the TIA to provide 2-channel sound effects and music, resulting in sound quality that is virtually identical to the Atari 2600 VCS from 1977. While the inclusion of 2600 hardware is required to maintain compatibility with the older system, it drove up production costs and reduced available space on the 7800's motherboard. As a result, the 7800 does not include dedicated new hardware for generating sound as it does for graphics, and the sound hardware is considered the weakest part of the system. To compensate, GCC's engineers allowed games to include a POKEY audio chip in the cartridge, which substantially improved audio quality. To ensure software developers had an economical means of producing better sound than the TIA, GCC had also planned a low-cost, high-performance sound chip, GUMBY, which could likewise be placed in 7800 cartridges to enhance the system's sound capabilities further; this project was cancelled when Atari was sold to Jack Tramiel. Despite this capability, almost no 7800 cartridges feature POKEY hardware for enhanced sound. "Ballblazer", released in 1987, uses the POKEY to generate all music and sound effects. Similarly, "Commando", released in 1989, uses a POKEY to generate in-game music while the TIA generates the game's sound effects, for a total of 6 channels of sound.

Following the debate over "Custer's Revenge", an Atari 2600 VCS title with adult themes, Atari was concerned that similar adult titles might find their way onto the 7800 and display adult content using the significantly improved graphics capabilities of the MARIA chip. To combat this, it included a digital signature protection method that prevented unauthorized 7800 games from being played on the system. When a cartridge was inserted, BIOS code generated a digital signature of the cartridge ROM and compared it to the signature stored on the cartridge. If a correct signature was found, the 7800 operated in 7800 mode, granting the game access to MARIA and other features; if not, the 7800 remained in 2600 mode and MARIA was unavailable.
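A minimal sketch of that boot-time verification flow is below. This is a generic illustration only: the checksum stand-in, the signature length, and the assumption that the signature sits at the end of the ROM are all placeholders, not Atari's actual proprietary signing algorithm or cartridge layout.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define SIG_LEN 8  /* toy length; the real signature is far longer */

/* Toy stand-in for Atari's proprietary signing algorithm (not shown). */
static void compute_signature(const uint8_t *rom, size_t len,
                              uint8_t out[SIG_LEN]) {
    memset(out, 0, SIG_LEN);
    for (size_t i = 0; i < len; i++)
        out[i % SIG_LEN] ^= (uint8_t)(rom[i] + (uint8_t)i);
}

/* Assume, for illustration, that the cartridge ships its signature
   in its last SIG_LEN bytes. */
static bool verify_cartridge(const uint8_t *rom, size_t len) {
    if (len < SIG_LEN)
        return false;
    uint8_t computed[SIG_LEN];
    compute_signature(rom, len - SIG_LEN, computed);
    return memcmp(computed, rom + len - SIG_LEN, SIG_LEN) == 0;
}

/* Boot flow: a verified cartridge runs in "7800 mode" with MARIA
   enabled; anything else falls back to TIA-only "2600 mode". */
```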
All 7800 games released in North America had to be digitally signed by Atari. This digital signature code is not present in PAL 7800s, which instead use various heuristics to detect 2600 cartridges, due to export restrictions. The signing utility was found and released by Classic Gaming Expo in 2001.

The Atari 7800 differs from the 2600 in several key areas. It features a full Atari SALLY 6502 processor, whereas the 2600 VCS has a stripped-down 6507 processor running at a slower speed. It has additional RAM and the ability to access more cartridge data at one time than the 2600. The most substantial difference, however, is a graphics architecture that differs markedly from both the Atari 2600 VCS and Atari's 8-bit computer line. The 7800's compatibility with the Atari 2600 is made possible by including many of the same chips used in the Atari 2600. When operating in "2600" mode to play Atari 2600 titles, the 7800 uses a Television Interface Adapter (TIA) chip to generate graphics and sound. The processor is slowed to 1.19 MHz, enabling the 7800 to mirror the performance of the 2600's 6507 processor. RAM is limited to the 128 bytes found in the RIOT, and game data is accessed in 4K blocks. When in "7800" mode (signified by the appearance of the full-screen Atari logo), the graphics are generated entirely by the MARIA graphics processing unit, all system RAM is available, and game data is accessed in larger 48K blocks. The system's SALLY 6502 runs at its normal 1.79 MHz instead of the reduced speed of 2600 mode. The 2600 chips are used in 7800 mode to generate sound and to provide the interfaces to the controllers and console switches. The Atari 7800 does not support Atari 5200 games or accessories.

The Atari 7800 came bundled with the Atari Pro-Line Joystick, a two-button controller with a joystick for movement. The Pro-Line was originally developed for the 2600 and was advertised in 1983, but delayed until Atari proceeded with the 7800. The right fire button only works as a separate fire button in the 7800 games that utilize it; otherwise, it duplicates the left fire button, allowing either button to be used for 2600 games. While physically compatible, the 7800's controllers were incompatible with the Sega Master System, and Sega's controllers were unable to use the 7800's two-button mode. In response to criticism over ergonomic issues with the Pro-Line controllers, Atari later released a joypad controller with the European 7800, similar in style to the controllers found on Nintendo and Sega systems; the joypad was not available in the United States.

There were few add-on peripherals for the 7800, though its backwards compatibility allowed it to use most Atari 2600 peripherals. The Atari XG-1 light gun, which came bundled with the Atari XEGS, was sold separately for other Atari systems and was compatible with the 7800.
Atari released five 7800 light gun games: "Alien Brigade", "Barnyard Blaster", "Crossbow", "Meltdown", and "Sentinel". After the acquisition of the Atari Consumer Division by Jack Tramiel in 1984, a number of planned peripherals for the system were canceled. While the 7800 can actually play hundreds of titles due to its compatibility with the Atari 2600, there was limited third-party support for the 7800 itself, and fewer than 100 titles were specifically designed for it.

In 2004, Atari (by then owned by Infogrames) released the first Atari Flashback console. This system resembled a miniature Atari 7800 with matching joysticks and had 20 built-in games (five 7800 and fifteen 2600 titles). While the unit sold well, it was controversial among Atari fans. Atari had given the engineering firm, Legacy Engineering, extremely limited development timelines, forcing it to build the Flashback on NES-on-a-chip hardware instead of recreating the Atari 7800 hardware. As a result, the Flashback has been criticized for failing to properly replicate the actual Atari gaming experience. Legacy Engineering was later commissioned to create another 7800 project, which was cancelled after prototypes were made.

When emulators of 1980s video game consoles began to appear on home computers in the late 1990s, the Atari 7800 was one of the last to be emulated. The lack of awareness of the system, the limited understanding of its hardware, and fears about the digital signature lockout initially caused concerns. Since then, however, the 7800 has been emulated successfully and is now common on emulation sites. One such program is ProSystem, written in C/C++ for the Microsoft Windows operating system; it uses the Windows API and DirectX to display what it emulates in both PAL and NTSC.

The digital signature long prevented homebrew games from being developed, until the original signature-generating software was discovered and turned over to the Atari community, after which development of new Atari 7800 titles began. In addition, the community has slowly uncovered the original 7800 development tools and released them into the public domain. New tools, documentation, source code, and utilities for development have since been created, which has spurred additional homebrew development. Several new commercial Atari 7800 titles, such as "Beef Drop", "B*nQ", "Pac-Man Collection", "Combat 1990", "Santa Simon", and "Space War", have been released. The source code for 13 games, as well as the OS and development tools (for the Atari ST computer system), was discovered in a dumpster behind the Atari building in Sunnyvale, California. Commented assembly language source code was made available for "Centipede", "Commando", "Crossbow", "Desert Falcon", "Dig Dug", "Food Fight", "Galaga", "Hat Trick", "Joust", "Ms. Pac-Man", "Super Stunt Cycle", "", and "Xevious".
https://en.wikipedia.org/wiki?curid=2781
Atari Jaguar

The Atari Jaguar is a home video game console developed by Atari Corporation and first released in North America in November 1993. A successor to the 7800, the Jaguar was part of the fifth generation of video game consoles and was, controversially, marketed by Atari as the world's first "64-bit" video game system, while competing with the existing 16-bit consoles (Sega Genesis and Super Nintendo Entertainment System) and the 32-bit 3DO Interactive Multiplayer (which launched the same year). The Jaguar shipped with "Cybermorph" as the pack-in game.

Development of the Atari Jaguar was started in the early 1990s by Flare Technology. The multi-chip architecture, hardware bugs, and poor developer support tools made game development difficult, and underwhelming sales further contributed to the console's lack of third-party support. This, combined with the lack of internal development at Atari, led to a games library comprising only 50 licensed titles, plus another 13 games on the Jaguar CD. Atari attempted to extend the lifespan of the system with the Atari Jaguar CD add-on and by marketing the Jaguar as the low-cost next-generation console, with a price tag over $100 less than any of its competitors. However, with the release of the Sega Saturn and Sony PlayStation in 1995, sales of the Jaguar continued to fall, ultimately totaling no more than 250,000 units before it was discontinued in 1996. The commercial failure of the Jaguar prompted Atari to leave the video game console market. After Hasbro Interactive acquired all Atari properties in 1998, the rights to the Jaguar were released into the public domain, with the console declared an open platform. Since its discontinuation, the Jaguar has gained a cult following, with a developer base that produces homebrew games for the console.

The Jaguar was developed by the members of Flare Technology, a company formed by Martin Brennan and John Mathieson. The team claimed that they could not only make a console superior to the Genesis or the Super NES, but also do so cost-effectively. Impressed by their work on the Konix Multisystem, Atari persuaded them to close Flare and form a new company called Flare II, with Atari providing the funding. Flare II initially set to work designing two consoles for Atari: one with a 32-bit architecture (codenamed "Panther") and one with a 64-bit architecture (codenamed "Jaguar"). Work on the Jaguar design progressed faster than expected, so Atari canceled the Panther project to focus on the more promising Jaguar.

The Jaguar was unveiled in August 1993 at the Chicago Consumer Entertainment Show and launched on November 23, 1993, at a price of $249.99, under a $500 million manufacturing deal with IBM; to prepare for this, all of Atari's other products (the 2600, 7800, XEGS, and the ST computer line) were to be discontinued by that time. The system was initially available only in the test markets of New York City and San Francisco, under the slogan "Do the Math", claiming superiority over competing 16-bit and 32-bit systems. A U.S.-wide release followed in early 1994. "Computer Gaming World" wrote in January 1994 that the Jaguar was "a great machine in search of a developer/customer base", as Atari had to "overcome the stigma of its name (lack of marketing and customer support, as well as poor developer relations in the past)".
The company "ventured late into third party software support" while competing console 3DO's "18 month public relations blitz" would result in "an avalanche of software support", the magazine reported. The Jaguar struggled to attain a substantial user base. Atari reported that it had shipped 17,000 units as part of the system's initial test market in 1993. By the end of 1994, it reported that it had sold approximately 100,000 units. In early 1995, Atari announced that they had dropped the price of the Jaguar to $149.99 in order to improve its competitive nature. Atari ran early morning infomercials, with enthusiastic salesmen touting the powerful game system. These infomercials would run for most of 1995, but did not significantly sell the remaining stock of Jaguar systems. In a 1995 interview with "Next Generation", then-CEO Sam Tramiel declared that the Jaguar was as powerful, if not more powerful, than the newly launched Sega Saturn, and slightly weaker than the upcoming PlayStation. "Next Generation" received a deluge of letters in response to Tramiel's comments, particularly his threat to bring Sony to court for price dumping if the PlayStation entered the U.S. market at a retail price below $300 and his remark that the small number of third party Jaguar games was good for Atari's profitability, which angered Jaguar owners who were already frustrated at how few games were coming out for the system. In Atari's 1995 annual report, it noted: In addition, Atari had severely limited financial resources, and so could not create the level of marketing which has historically backed successful gaming consoles. By November 1995, mass layoffs and insider statements were fueling journalistic speculation that Atari had ceased both development and manufacturing for the Jaguar and was simply trying to sell off existing stock before exiting the video game industry. Although Atari continued to deny these theories going into 1996, core Jaguar developers such as High Voltage Software and Beyond Games stated that they were no longer receiving communications from Atari regarding future Jaguar projects. In its 10-K405 SEC Filing, filed April 12, 1996, Atari informed their stockholders that its revenues had declined by more than half, from $38.7 million in 1994 to $14.6 million in 1995, then gave them the news on the truly dire nature of the Jaguar: The filing also essentially confirmed all theories that Atari had given up on the Jaguar beginning in November of 1995, and in the subsequent months were concerned chiefly with liquidating its inventory of Jaguar products. On April 8, 1996, Atari Corporation agreed to merge with JTS, Inc. in a reverse takeover, thus forming JTS Corporation. The merger was finalized on July 30. After the Atari/JTS merger, the bulk of the remaining Jaguar inventory remained unsold; these would be finally moved out to Tiger Software, a private liquidator, on December 23, 1996. On March 13, 1998, JTS sold the Atari name and all of the Atari properties to Hasbro Interactive. The Jaguar's underlying hardware was crippled by a flaw in the CPU's memory controller, which prevented code execution out of system RAM. Less severe defects included a buggy UART. The memory controller flaw could have been mitigated by a mature code-development environment, to unburden the programmer from having to micromanage small chunks of code. Jaguar's development tools left much to the programmer's own implementation, as documentation was incomplete. 
Design specs for the console allude to the GPU or DSP being capable of acting as the CPU, leaving the Motorola 68000 to read controller inputs, and Atari's Leonard Tramiel specifically suggested that developers not rely on the 68000. In practice, however, many developers used the Motorola 68000 to drive gameplay logic, owing to greater developer familiarity with the 68000, bugs that made the custom chips difficult to use, poor developer support tools (particularly early on), and the adequacy of the 68000 for certain types of games.

Atari tried to play down competing consoles by proclaiming the Jaguar the only "64-bit" system. Some question this claim, because the Motorola 68000 CPU and the "Tom" GPU execute 32-bit instruction sets, and merely send control signals to the 64-bit graphics co-processors. Atari's reasoning that the 32-bit "Tom" and "Jerry" chips work in tandem to add up to a 64-bit system was ridiculed in a mini-editorial by "Electronic Gaming Monthly", which commented that "If Sega did the math for the Sega Saturn the way Atari did the math for their 64-bit Jaguar system, the Sega Saturn would be a 112-bit monster of a machine." "Next Generation", while giving a mostly negative review of the Jaguar, maintained that it is a true 64-bit system, since the data path from the DRAM to the CPU and the Tom and Jerry chips is 64 bits wide. The Jaguar Software Reference manual states on page 1: "Jaguar is a custom chip set primarily intended to be the heart of a very high-performance games/leisure computer. It may also be used as a graphics accelerator in more complex systems, and applied to workstation and business uses. As well as a general purpose CPU, Jaguar contains four processing units. These are the Object Processor, Graphics Processor, Blitter, and Digital Sound Processor. Jaguar provides these blocks with a 64-bit data path to external memory devices, and is capable of a very high data transfer rate into external dynamic RAM."

Atari Games licensed the Atari Jaguar's chipset for use in its arcade games. The system, named COJAG (for "Coin-Op Jaguar"), replaced the 68000 with a 68020 or a MIPS R3000-based CPU (depending on the board version), added more RAM and a full 64-bit-wide ROM bus (the Jaguar's ROM bus is 32-bit), and optionally a hard drive (some titles, such as "Freeze", are ROM-only). It ran the light gun games "Area 51" and "Maximum Force", which were released by Atari as dedicated cabinets or as the Area 51/Maximum Force combo machine. Other games ("3 On 3 Basketball", "Fishin' Frenzy", "Freeze", and "Vicious Circle") were developed but never released.

The Atari Jaguar Duo was a proposed console, similar to the TurboDuo and Genesis CDX, that would have combined the Atari Jaguar and Atari Jaguar CD into a single unit. A prototype model, described by journalists as resembling a bathroom scale, was unveiled at the 1995 Winter Consumer Electronics Show, but the console was cancelled before production could begin.

Prior to the console's launch in November 1993, Atari had announced a variety of peripherals and add-ons to be released over the Jaguar's lifespan, including a CD-ROM-based add-on console, a dial-up internet link with support for online gaming, a virtual reality headset, and an MPEG-2 video card. However, due to the Jaguar's poor sales and eventual commercial failure, most of the peripherals in development were scrapped.
The only peripherals and add-ons Atari released for the Jaguar were a redesigned controller, an adapter for four players, a CD console add-on, and a link cable for local area network (LAN) gaming.

The redesigned second controller, the ProController by Atari, added three more face buttons and two triggers. It was created in response to criticism of the original bundled controller, which was said not to have enough buttons for fighting games in particular. The ProController was sold separately and was never bundled with the system. A peripheral that allowed four controllers to be plugged into the console, dubbed the "Team Tap", was released both independently and as a bundle with "White Men Can't Jump"; however, it was compatible only with "White Men Can't Jump" and "NBA Jam Tournament Edition". Eight-player gameplay is also possible if a second Team Tap is plugged into the console's second controller port, but neither of the compatible games supports eight players.

Local area network multiplayer was achieved through the Jaglink Interface, which allowed two Jaguar consoles to be linked together through a modular extension and a UTP phone cable. The Jaglink was compatible with three games: "AirCars", "BattleSphere", and "Doom". At CES in 1994, Atari announced that it had partnered with Phylon, Inc. to create the Jaguar Voice/Data Communicator. The unit was delayed, and mass production was eventually canceled in 1995, but not before an estimated 100 units were produced. The Jaguar Voice Modem, or JVM, as it became known, used a 19.9 kbit/s dial-up modem and could answer incoming phone calls and store up to 18 phone numbers. Players were required to dial each other directly for online play. The only Jaguar game that supports the JVM is "Ultra Vortek"; the modem is initialized at the "Ultra Vortek" start-up screen by entering 911 on the keypad.

The Atari Jaguar CD is an add-on that made use of CD-ROMs to distribute games. It was released in September 1995, two years after the Jaguar's launch. Twelve games were released for it during its manufacturing lifetime, with more made later by homebrew developers. Each Jaguar CD unit came with the Virtual Light Machine, which displayed light patterns corresponding to music if the user inserted an audio CD into the console. It was developed by Jeff Minter, who had created the program after experimenting with graphics during the development of "Tempest 2000"; it was deemed a spiritual successor to the Atari Video Music, a system released in 1976 that served a similar purpose. An additional accessory for the Jaguar CD, known as the Memory Track, allowed Jaguar CD games to save persistent data such as preferences and saved games. It was a cartridge containing a 128 K EEPROM, inserted into the cartridge slot on the Jaguar CD while the user played a Jaguar CD game. The program manager for the Memory Track is accessed by pushing the Option button while the system is starting, and exited by pushing the * and # keys simultaneously.

There were plans to make a second model of the Jaguar console that combined the Jaguar and the Jaguar CD into one unit, à la the TurboDuo. Originally codenamed the Jaguar III, and later the Jaguar Duo, the proposed model was scrapped after the discontinuation of the Jaguar.
A virtual reality headset compatible with the console, tentatively titled the Jaguar VR, was unveiled by Atari at the 1995 Winter Consumer Electronics Show. The peripheral was developed in response to Nintendo's virtual reality console, the Virtual Boy, which had been announced the previous year. The headset was developed in cooperation with Virtuality, which had previously created many virtual reality arcade systems and was already developing a similar headset for practical purposes, named Project Elysium, for IBM. The peripheral was targeted for a commercial release before Christmas 1995, but the deal with Virtuality was abandoned in October 1995. After Atari's merger with JTS in 1996, all prototypes of the headset were allegedly destroyed. However, two working units, a low-resolution prototype with red and grey graphics and a high-resolution prototype with blue and grey graphics, have since been recovered and are regularly showcased at retrogaming conventions and festivals. Only one game was developed for the Jaguar VR prototype: a 3D-rendered version of the 1980 arcade game "Missile Command", entitled "Missile Command 3D", though a demo of Virtuality's "Zone Hunter" was also created for Jaguar VR demonstrations.

An unofficial expansion peripheral for the Atari Jaguar, dubbed the CatBox, was released by ICD, a company based in Rockford, Illinois. It was originally slated to be released early in the Jaguar's life, in the second quarter of 1994, but did not actually ship until mid-1995. The ICD CatBox plugs directly into the AV/DSP connectors at the rear of the Jaguar console and provides three main functions: audio, video, and communications. It features six output formats, three for audio (line-level stereo, RGB monitor, and a headphone jack with volume control) and three for video (composite, S-Video, and RGB analog component video), making the Jaguar compatible with multiple high-quality monitor systems and multiple monitors at the same time. For communications it supports CatNet and RS-232 as well as DSP pass-through, allowing the user to connect two or more Jaguars together for multiplayer games either directly or with modems. The ICD CatBox features a polished stainless steel casing and red LEDs in the eyes of the jaguar logo that indicate communications activity. An IBM AT-type null modem cable may be used to connect two Jaguars together, and the CatBox is also compatible with Atari's Jaglink Interface. An adaptor for the Jaguar that allows for WebTV access was revealed in 1998; one prototype is known to exist.

Reviewing the Jaguar just a few weeks before its launch, "GamePro" gave it a "thumbs sideways". They praised the power of the hardware but criticized the controller, and were dubious of how the software lineup would turn out, commenting that Atari's failure to secure support from key third-party publishers such as Capcom was a bad sign. They concluded that "Like the 3DO, the Jaguar is a risky investment – just not quite as expensive." The Jaguar won "GameFan"'s "Best New System" award for 1993.

The small size and poor quality of the Jaguar's game library became the most commonly cited reason for its failure in the marketplace. The pack-in game "Cybermorph" was one of the first polygon-based games for consoles, but was criticized for design flaws and a weak color palette, and compared unfavorably with the SNES's "Star Fox".
Other early releases, such as "Trevor McFur in the Crescent Galaxy", "Raiden", and "", also received poor reviews, the latter two for failing to take full advantage of the Jaguar's hardware. The Jaguar did eventually earn praise with titles such as "Tempest 2000", "Doom", and "Wolfenstein 3D". The most successful title during the Jaguar's first year was "Alien vs. Predator"; both it and "Tempest 2000" were named among the system's defining titles by "GamePro" in 2007. However, these occasional successes were seen as insufficient while the Jaguar's competitors were receiving a continual stream of critically acclaimed software. "GamePro" concluded their rave review of "Alien vs. Predator" by remarking, "If Atari can turn out a dozen more games like AvP, Jaguar owners could truly rest easy and enjoy their purchase."

In late 1995 reviews of the Jaguar, "Game Players" remarked, "The Jaguar suffers from several problems, most importantly the lack of good software." "Next Generation" likewise commented that "thus far, Atari has spectacularly failed to deliver on the software side, leaving many to question the actual quality and capability of the hardware. With only one or two exceptions – "Tempest 2000" is cited most frequently – there have just been no truly great games for the Jaguar up to now." They further noted that while Atari is well known by older gamers, the company had much less overall brand recognition than Sega, Sony, Nintendo, or even The 3DO Company; however, they argued that with its low price point, the Jaguar might still compete if Atari could improve the software situation. They gave the system two out of five stars. "Game Players" also stated that despite being 64-bit, the Jaguar is much less powerful than the 3DO, Saturn, and PlayStation, even when supplemented with the Jaguar CD. With such a small library of games to challenge the incumbent 16-bit consoles, the Jaguar's appeal never grew beyond a small gaming audience. Digital Spy commented: "Like many failed hardware ventures, it still maintains something of a cult following but can only be considered a misstep for Atari."

In 2006, IGN editor Craig Harris rated the standard Jaguar controller as the worst game controller ever, criticizing the unwarranted recycling of the 1980s "phone keypad" format and the small number of action buttons, which he found particularly unwise given that Atari was actively trying to court fighting game fans. Ed Semrad of "Electronic Gaming Monthly" commented that many Jaguar games gratuitously used all of the controller's keypad buttons, making the controls much more difficult than they needed to be. "GamePro"'s The Watch Dog remarked, "The controller usually doesn't use the keypad, and for games that use the keypad extensively ("Alien vs. Predator", "Doom"), a keypad overlay is used to minimize confusion. But yes, it is a lot of buttons for nuttin'." Atari added more action buttons to its ProController to improve performance in fighting games in particular.

Telegames continued to publish games for the Jaguar after it was discontinued, and for a time was the only company to do so. On May 14, 1999, Hasbro Interactive announced that it had released all rights to the Jaguar, declaring it an open platform; this opened the doors for extensive homebrew development.
Following Hasbro Interactive's announcement, Songbird Productions joined Telegames in releasing previously unfinished material from the Jaguar's past as well as several brand-new titles to satisfy the system's cult following. Hasbro Interactive, along with all the Atari properties, was sold to Infogrames on January 29, 2001. In the United Kingdom in 2001, a deal was struck between Telegames and the retailer Game to bring the Jaguar to Game's retail outlets. The machine was initially sold for £29.99 brand new, with software prices ranging from £9.99 for more common games such as "Doom" and "Ruiner Pinball" to £39.99 for more sought-after releases such as "Defender 2000" and "Checkered Flag". The machine had a presence in the stores until 2007, when the remaining consoles were sold off for £9.99 and games for as little as 97p.

The molds for the Jaguar's console and cartridge casings eventually changed hands several times. In 1997, Imagin Systems, a manufacturer of dental imaging equipment, purchased them from JTS. With minor modification, the console molds were found to be the right size for housing Imagin's HotRod camera, and the cartridge molds were reused to create an optional memory expansion card. In December 2014, the molds were purchased from Imagin Systems by Mike Kennedy, owner of the Kickstarter-funded "Retro Videogame Magazine", to propose a new crowdfunded video game console called the "Retro VGS", later rebranded the "Coleco Chameleon" after entering a licensing agreement with Coleco. Purchasing the molds from Imagin Systems was far cheaper than designing and manufacturing entirely new molds, and Kennedy described their acquisition as "the entire reason [the Retro VGS] is possible". However, the project was terminated in March 2016 following criticism of Kennedy and doubts regarding demand for the proposed console; two "prototypes" were discovered to be fakes, and Coleco withdrew from the project. After the project's termination, the molds were sold to Albert Yarusso, the founder of the AtariAge website.

The 32X add-on for the Sega Genesis was developed in response to the Atari Jaguar. Concerned that the Saturn would not make it to market by the end of 1994, Sega conceived the product as an entirely new console; at the suggestion of Sega of America executive Joe Miller and his team, who stressed the importance of a quick response to the Jaguar, it was converted into an add-on to the existing Genesis and made more powerful. Unveiled by Sega at the June 1994 Consumer Electronics Show, the 32X was presented as a low-cost option for consumers looking to play 32-bit games. Its origin as a response to the Jaguar was emphasized when it launched with "Doom" in its lineup in the same month that the Jaguar version of "Doom" was released. Sega of America president Tom Kalinske touted the 32X version against the Jaguar version in an interview: "The [32X game] I like best is "Doom", that's personally one of my favorite games. And I think it's fantastic that it's the complete game. As I'm sure you know, on one of the other systems, there's no sound. I can't imagine a game without sound." Atari in turn mocked the 32X version by including a "32X Doom Mode" in "NBA Jam T.E.", in which players have only front-facing sprites, reflecting the fact that the 32X version of "Doom" has only front-facing sprites.
Sega's efforts to rush the 32X to market cut into the time available for game development, resulting in a weak library of forty titles, including Genesis ports, that could not take full advantage of the add-on's hardware. Discontinued in 1996 as Sega turned its focus to the Saturn, the 32X, like the Jaguar, is considered a commercial failure.
https://en.wikipedia.org/wiki?curid=2782
Atari Lynx

The Atari Lynx is an 8/16-bit handheld game console released by Atari Corporation in September 1989 in North America and in 1990 in Europe and Japan. It was the world's first handheld game console with a color LCD, and was also notable for its advanced graphics and ambidextrous layout. The Lynx competed with the Game Boy (released two months earlier), as well as the Game Gear and TurboExpress, both released the following year. It was discontinued in 1995.

The Lynx system was originally developed by Epyx as the Handy Game. In 1986, two former Amiga designers, R. J. Mical and Dave Needle, were asked by their former manager at Amiga, David Morse, if they could come up with a design for a portable gaming system. Morse by then worked at Epyx, a game software company with a recent string of hit games. Morse's son had asked him if he could make a portable gaming system, prompting a meeting with Mical and Needle to discuss the idea. Morse convinced Mical and Needle to develop the idea, and they were hired by Epyx to be part of the design team. Planning and design of the console began in 1986 and was completed in 1987. Epyx first showed the Handy system at the Winter Consumer Electronics Show (CES) in January 1989.

Facing financial difficulties, Epyx sought out partners. Nintendo, Sega, and other companies declined, but Atari and Epyx eventually agreed that Atari would handle production and marketing while Epyx handled software development. Epyx declared bankruptcy by the end of the year, and Atari essentially owned the entire project; both Atari and others, however, had to purchase Amigas from Atari archrival Commodore to develop Lynx software.

The Handy was designed to run games from cartridges, with the game data copied from ROM to RAM before use; as a result, less RAM is available to the game and each game's initial load is slow. There are trace remnants of a cassette tape interface that could physically have been programmed to read a tape. Lynx developers have noted that "there is still reference of the tape and some hardware addresses", and an updated vintage Epyx manual describes the bare existence of what could have been used for tape support. A 2009 retrospective interview clarifies that although some early reports claimed that games were loaded from tape, Mical says there was no truth in them: "We did think about hard disk a little."

Atari changed the internal speaker and removed the thumb-stick on the control pad before releasing the console as the Lynx. Atari showed the Lynx to the press at the Summer 1989 CES as the "Portable Color Entertainment System"; the name was changed to "Lynx" when actual consoles were distributed to resellers.

The Lynx started off successfully. Atari reported that it had sold 90% of the 50,000 units shipped in its U.S. launch month, a limited launch in New York. US sales in 1990 were approximately 500,000 units, according to the Associated Press. In late 1991, it was reported that Atari's sales estimates were about 800,000 units, which Atari claimed was within its expected projections. Lifetime sales by 1995 amounted to fewer than 7 million units when combined with the Game Gear. In comparison, the Game Boy sold 16 million units by 1995 because it was more rugged, cost half as much, had much longer battery life, was bundled with "Tetris", and had a superior software library.
As with the console units themselves, the game cartridges evolved over the first year of the console's release. The first generation of cartridges were flat, designed to be stackable for ease of storage. However, this design proved very difficult to remove from the console and was replaced by a second design. This style, called "tabbed" or "ridged", used the same basic design with the addition of two small tabs on the cartridge's underside to aid removal. The original flat cartridges could be stacked on top of the newer ones, but the newer cartridges could not easily be stacked on each other, nor were they stored easily, so a third style, the "curved lip" design, was produced; all official and third-party cartridges for the rest of the console's lifespan were released (or re-released) in this style.

In May 1991, Sega launched its Game Gear portable gaming handheld. Also a color handheld, it had a higher cost and shorter battery life than the Lynx (3–4 hours as opposed to 4–5), but it was slightly smaller and was backed by significantly more games. Retailers such as Game and Toys "R" Us continued to sell the Lynx well into the mid-1990s on the back of the Atari Jaguar launch, helped by magazines such as "Ultimate Future Games" which continued to cover the Lynx alongside the new generation of 32-bit and 64-bit consoles.

During 1990, the Lynx had moderate sales. In July 1991, Atari introduced the Lynx II with a new marketing campaign, new packaging, slightly improved hardware, better battery life, and a sleeker look. The new system (referred to within Atari as the "Lynx II") featured rubber hand grips and a clearer backlit color screen with a power-save option (which turned off the LCD panel's backlighting). It also replaced the monaural headphone jack of the original Lynx with one wired for stereo. The new packaging made the Lynx available without any accessories, dropping the price to $99. Although sales improved, Nintendo still dominated the handheld market. In 1993, Atari started shifting its focus away from the Lynx to prepare for the launch of the Jaguar, though a few games were released during this time, including "Battlezone 2000". Support for the Lynx was not formally discontinued until 1995. After the launches of the Sega Saturn and Sony PlayStation contributed to the commercial failure of the Jaguar that year, Atari terminated all internal game and hardware development and, in 1996, agreed to a reverse merger with JTS, Inc.; the Atari brand thus left the gaming market, and the Atari properties were sold to Hasbro Interactive in 1998.

The Atari Lynx's innovative features include being the first color handheld, with a backlit display, a switchable right-handed/left-handed (upside-down) configuration, and the ability to network with up to 15 other units via its Comlynx system (though most games networked eight or fewer players). Comlynx was originally developed to run over infrared links (and was codenamed RedEye), but this was changed to a cable-based networking system before release; according to Peter Engelbrite, the infrared link would be interrupted whenever players walked through the beam. The maximum stable connection allowed was eight players. Each Lynx needed a copy of the game, and one cable could connect two machines; the cables could be connected into a chain. Engelbrite developed the first recordable eight-player co-op game, and the only eight-player game for the Atari Lynx, "Todd's Adventures in Slime World", using the Comlynx system.

The Lynx was cited as the "first gaming console with hardware support for zooming and distortion of sprites". Featuring a 4096-color palette and integrated math and graphics co-processors (including a blitter unit), its pseudo-3D color graphics display was said to be the key defining feature in the system's competition against Nintendo's monochromatic Game Boy. The fast pseudo-3D graphics were made possible on minimal hardware by co-designer Dave Needle having "invented the technique for planar expansion/shrinking capability" and by using stretched, textured triangles instead of full polygons.
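Sprite zooming and shrinking of this kind is conventionally implemented by stepping through a sprite's source pixels with a fixed-point increment, so each output line is stretched or squeezed as it is drawn. The sketch below is a generic software illustration of that idea, not the Lynx hardware's actual register-level interface.

```c
#include <stdint.h>

/* Scale one horizontal line of a sprite by fixed-point stepping.
   An 8.8 fixed-point accumulator walks the source line: a step of
   0x080 doubles the sprite's width, 0x200 halves it. */
void scale_line(const uint8_t *src, int src_w,
                uint8_t *dst, int dst_w)
{
    uint32_t step = ((uint32_t)src_w << 8) / (uint32_t)dst_w;
    uint32_t acc = 0;
    for (int x = 0; x < dst_w; x++) {
        dst[x] = src[acc >> 8];  /* integer part selects the source pixel */
        acc += step;             /* fractional part carries the remainder */
    }
}
```

Repeating the same stepping vertically gives full 2D zoom, and varying the step per line is what yields the stretched, textured-triangle pseudo-3D effects described above.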
Each Lynx needed a copy of the game, and one cable could connect two machines. The cables could be connected into a chain. The Lynx was cited as the "first gaming console with hardware support for zooming and distortion of sprites". Featuring a 4096-color palette and integrated math and graphics co-processors (including a blitter unit), its pseudo-3D color graphics display was said to be the key defining feature in the system's competition against Nintendo's monochromatic Game Boy. The fast pseudo-3D graphics features were made possible on a minimal hardware system by co-designer Dave Needle having "invented the technique for planar expansion/shrinking capability" and by using stretched, textured triangles instead of full polygons. The game system was reviewed in 1990 in "Dragon", which gave the Lynx 5 out of 5 stars. The review states that the Lynx "throws the Game Boy into the prehistoric age", and praises the built-in object-scaling capabilities, the multiplayer feature of the ComLynx cable, and the strong set of launch games. The infrequency of Lynx software releases and the system's minimal marketing budget have been cited as the main factors in its commercial failure. Telegames released a number of games in the second half of the 1990s, including a port of "Raiden" and a platformer called "Fat Bobby" in 1997, as well as an action sports game called "Hyperdrome" in 1999. In 1999, Hasbro, which would hold the Atari properties until 2000, released all development rights on the Lynx, as well as all patents relating to the Jaguar, into the public domain, and thus the two platforms were declared open to everyone. In 2008, Atari was honored at the 59th Annual Technology & Engineering Emmy Awards for pioneering the development of handheld games with its Lynx game unit. Lynx games by independent developers include "T-Tris" (the first Lynx game with a save-game feature), "Alpine Games", and "Zaku".
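The sprite "zooming and distortion" described above can be pictured in software. The following is a minimal sketch, not a description of the Lynx's actual blitter: the function name, sprite data, and scale factors are invented for the example, and the nearest-neighbour sampling only mimics the visual effect that the console's hardware achieved.

```python
# Illustrative only: software nearest-neighbour sprite scaling, mimicking
# the kind of zoom/stretch effect the Lynx performed in hardware.
# Nothing here reproduces the Lynx blitter's real registers or data paths.

def scale_sprite(sprite, sx, sy):
    """Return a new pixel grid, stretching `sprite` by factors sx (width) and sy (height)."""
    src_h, src_w = len(sprite), len(sprite[0])
    dst_h = max(1, round(src_h * sy))
    dst_w = max(1, round(src_w * sx))
    # Nearest-neighbour sampling: each destination pixel maps back to the
    # closest source pixel, so enlarging duplicates pixels and shrinking skips them.
    return [
        [sprite[y * src_h // dst_h][x * src_w // dst_w] for x in range(dst_w)]
        for y in range(dst_h)
    ]

if __name__ == "__main__":
    # A tiny 4x4 two-colour sprite ('.' = transparent, '#' = solid).
    sprite = [list(row) for row in ("..#.", ".##.", "####", "..#.")]
    for row in scale_sprite(sprite, 2.0, 1.5):  # zoom to 2x width, 1.5x height
        print("".join(row))
```

Varying the horizontal factor per scanline is what produces the "distortion" effects the hardware was praised for.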
https://en.wikipedia.org/wiki?curid=2783
Bayonne Bayonne (Gascon: "Baiona") is a city and commune and one of the two sub-prefectures of the department of Pyrénées-Atlantiques, in the Nouvelle-Aquitaine region of southwestern France. It is located at the confluence of the Nive and Adour rivers in the northern part of the cultural region of the Basque Country. This is also the southern part of Gascony, where the Aquitaine basin joins the beginning of the Pre-Pyrenees. Together with nearby Anglet, Biarritz, Saint-Jean-de-Luz, and several smaller communes, Bayonne forms an urban area with 288,359 inhabitants at the 2012 census; 45,855 residents lived in the city of Bayonne proper. The site on the left bank of the Nive and the Adour was probably occupied before ancient times; a fortified enclosure was attested in the 1st century, at the time when the Tarbelli occupied the territory. Archaeological studies have confirmed the presence of a Roman castrum, a stronghold in Novempopulania, at the end of the 4th century, before the city was populated by the Vascones. In 1023 Bayonne was the capital of Labourd. In the 12th century, it extended to and beyond the confluence of the Nive. At that time the first bridge was built over the Adour. The city came under the domination of the English in 1152 through the marriage of Eleanor of Aquitaine: it became militarily and, above all, commercially important thanks to maritime trade. In 1177 Richard the Lionheart of England took control of it, separating it from the Viscounty of Labourd. In 1451 the city was taken by the Crown of France at the end of the Hundred Years' War. The loss of trade with the English was followed by the river gradually filling with silt and becoming impassable to ships. As the city developed to the north, its position was weakened compared to earlier times. The district of Saint-Esprit developed initially from settlement by Sephardic Jewish refugees fleeing the Spanish expulsions dictated by the Alhambra Decree. This community brought skill in chocolate making, and Bayonne gained a reputation for chocolate. The course of the Adour was changed in 1578 by dredging under the direction of Louis de Foix, and the river returned to its former mouth. Bayonne flourished after regaining the maritime trade that it had lost for more than a hundred years. In the 17th century the city was fortified by Vauban, whose works were followed as models of defense for 100 years. In 1814 Bayonne and its surroundings were the scene of fighting between the Napoleonic troops and the Spanish-Anglo-Portuguese coalition led by the Duke of Wellington; it was the last time the city was under siege. In 1951 the Lacq gas field was discovered in the region; its extracted sulphur and associated oil are shipped from the port of Bayonne. During the second half of the 20th century, many housing estates were built, forming new districts on the periphery. The city developed to form a conurbation with Anglet and Biarritz; this agglomeration became the heart of a vast Basque-Landes urban area. In 2014 Bayonne was a commune with more than 45,000 inhabitants, the heart of the urban area of Bayonne and of the "Agglomeration Côte Basque-Adour", which includes Anglet and Biarritz. It is an important part of the Basque "Bayonne-San Sebastián Eurocity" and plays the role of economic capital of the Adour basin. Modern industry—metallurgy and chemicals—has been established to take advantage of procurement opportunities and sea shipments through the harbour.
Business services today represent the largest source of employment. Bayonne is also a cultural capital, a city with strong Basque and Gascon influences, and a rich historical past. Its heritage is expressed in its architecture, the diversity of collections in museums, its gastronomic specialties, and traditional events such as the noted Fêtes de Bayonne. The inhabitants of the commune are known as "Bayonnais" or "Bayonnaises". Bayonne is located in the south-west of France on the western border between the Basque Country and Gascony. It developed at the confluence of the Adour and its left-bank tributary, the Nive, 6 km from the Atlantic coast. The commune was part of the Basque province of Labourd. Bayonne occupies a territory characterized by flat relief to the west and to the north towards the Landes forest, tending to rise slightly towards the south and east. The city developed at the confluence of the Adour and Nive upstream from the ocean. The meeting point of the two rivers coincides with a narrowing of the Adour valley. Above this the alluvial plain extends towards both Tercis-les-Bains and Peyrehorade and is characterized by swampy meadows called "barthes", which are influenced by floods and high tides. Downstream from this point, the river has shaped a large, wide bed in the sand dunes, creating a significant bottleneck at the confluence. The occupation of the hill that dominates this narrowing of the valley developed through a gradual spread across the lowlands. Occupants built embankments and made use of the aggradation from flood soil. The Nive has played a leading role in the development of the Bayonne river system in recent geological time by the formation of alluvial terraces; these form the sub-soil of Bayonne beneath the surface accumulations of silt and aeolian sands. The drainage network of the western Pre-Pyrenees evolved mostly from the Quaternary, from south-east to north-west, oriented east–west. The Adour was captured by the gaves, and this system, together with the Nive, led to the emergence of a new alignment of the lower Adour and the Adour-Nive confluence. This capture has been dated to the early Quaternary (80,000 years ago). Before this capture, the Nive had deposited medium to large pebbles from the Mindel glaciation; these slowed erosion of the hills, causing the bottleneck at Bayonne. After the deposit of the lowest alluvial terrace (at Grand Bayonne), the course of the Adour became fixed in its lower reaches. Subsequent to these deposits, there was a rise in sea level in the Holocene period (from 15,000 to 5,000 years ago). This explains the invasion of the lower valleys by fine sand, peat, and mud, deposited to a considerable thickness below the current bed of the Adour and the Nive in Bayonne. These same deposits are spread across the barthes. In the late Quaternary, the current topographic physiognomy was formed—i.e. a set of hills overlooking a swampy lowland. The promontory of Bassussarry–Marracq ultimately extended to the Labourdin foothills; the Grand Bayonne hill is an example. Similarly, on the right bank of the Nive, the heights of Château-Neuf (Mocoron Hill) met the latest advance of the plateau of Saint-Pierre-d'Irube. On the right bank of the Adour, the heights of Castelnau (today the citadel) and Fort (today Saint-Esprit) rise above the barthes of the Adour, the Nive, Bourgneuf, Saint-Frédéric, Sainte-Croix, Aritxague, and Pontots.
The city developed along the Adour. The river is part of the Natura 2000 network from its source at Bagnères-de-Bigorre to its exit to the Atlantic Ocean after Bayonne, between Tarnos (Landes) on the right bank and Anglet (Pyrénées-Atlantiques) on the left bank. Apart from the Nive, which joins the left bank of the Adour after a sometimes tumultuous course, two tributaries join the Adour in Bayonne commune: the "Ruisseau de Portou" and the "Ruisseau du Moulin Esbouc". Tributaries of the Nive are the "Ruisseau de Hillans" and the "Ruisseau d'Urdaintz", which both rise in the commune. The nearest weather station is that of Biarritz-Anglet. The climate of Bayonne is broadly similar to that of its neighbour Biarritz, with fairly heavy rainfall; the oceanic climate is due to the proximity of the Atlantic Ocean. The average winter temperature is around 8 °C and the summer average is around 20 °C. The lowest temperature recorded was −12.7 °C on 16 January 1985 and the highest 40.6 °C on 4 August 2003. Rains on the Basque coast are rarely persistent except during winter storms; they often take the form of intense thunderstorms of short duration. Bayonne is located at the intersection of the A63 autoroute (Bordeaux–Spain) and the D1 extension of the A64 autoroute (towards Toulouse). The city is served by three interchanges, two of them on the A63: the Bayonne Nord exit serves the northern districts of Bayonne and allows quick access to the centre, while the Bayonne Sud exit provides access to the south and also serves Anglet. The third is the D1 / A64 Mousserolles interchange (exit Bayonne Mousserolles), which links the district of the same name and also serves the neighbouring communes of Mouguerre and Saint-Pierre-d'Irube. Bayonne was traversed by Route nationale 10 connecting Paris to Hendaye, but this has been downgraded to the departmental road D810. Route nationale 117, linking Bayonne to Toulouse, has been downgraded to the departmental road D817. There are several bridges over both the Nive and the Adour linking the various districts. Coming from upstream on the Adour there is the A63 bridge, then the Saint-Frédéric bridge which carries the D810, then the railway bridge that replaced the old Eiffel iron bridge, the Saint-Esprit bridge, and finally the Grenet bridge. The Saint-Esprit bridge connects the Saint-Esprit district to the Amiral-Bergeret dock just upstream of the confluence with the Nive. In 1845 the old bridge, originally made of wood, was rebuilt in masonry with seven arches supporting the deck. It was then called the Nemours Bridge in honour of Louis of Orleans, sixth Duke of Nemours, who laid the first stone. The bridge was finally called Saint-Esprit. Until 1868 the bridge had a moving span near the left bank. It was expanded in 1912 to facilitate the movement of horse-drawn carriages and motor vehicles. On the Nive, coming from upstream to downstream, there is the A63 bridge, then the "Pont Blanc" (White Bridge) railway bridge, then the D810 bridge, the Génie bridge (or "Pont Militaire"), the Pannecau bridge, the Marengo bridge leading to the covered markets, and the Mayou bridge. The Pannecau bridge was long named "Bertaco bridge" and was rebuilt in masonry under Napoleon III. According to François Lafitte Houssat, "[...]
a municipal ordinance of 1327 provided for the imprisonment of any quarrelsome woman of bad character in an iron cage dropped into the waters of the Nive from the bridge. The practice lasted until 1780 [...]" This punishment bore the evocative name of "cubainhade". The commune is traversed by the "Vélodyssée". Bicycle paths are located along the left bank of the Adour, a large part of the left bank of the Nive, and along various axes of the city, where there are some bicycle lanes. The city offers free bicycles on loan. Most of the lines of the "Chronoplus" bus network, operated by the "Transdev agglomeration of Bayonne", link Bayonne to other communes in the urban transport perimeter: Anglet, Biarritz, Bidart, Boucau, Saint-Pierre-d'Irube, and Tarnos. The free Bayonne shuttle serves the city centre (Grand and Petit Bayonne) by connecting several parking stations; other free shuttles perform other short trips within the commune. Bayonne is connected to many cities in the western half of the department, such as Saint-Jean-de-Luz and Saint-Palais, by the Pyrénées-Atlantiques long-distance coach network "Transport 64" managed by the General Council. Since the network restructuring in the summer of 2013, the lines converge on Bayonne. Bayonne is also served by services from the Landes departmental network, "XL'R". The Gare de Bayonne is located in the Saint-Esprit district and is an important station on the Bordeaux–Irun railway. It is also the terminus of lines leading from Toulouse to Bayonne and from Bayonne to Saint-Jean-Pied-de-Port. It is served by TGV, Intercités, Lunéa, and TER Aquitaine trains (to Hendaye, Saint-Jean-Pied-de-Port, Dax, Bordeaux, Pau, and Tarbes). Bayonne is served by the Biarritz – Anglet – Bayonne Airport (IATA code: BIQ • ICAO code: LFBZ), located on the communal territories of Anglet and Biarritz. The airport was returned to service in 1954 after repair of damage from bombing during the Second World War. Airport management is carried out by the joint association for the development and operation of the airport of Biarritz-Anglet-Bayonne, which includes the Chamber of Commerce and Industry of Bayonne Basque Country, the agglomeration of Côte Basque-Adour, the departments of Pyrénées-Atlantiques and Landes, and the commune of Saint-Jean-de-Luz. The airport had nearly 1.1 million passengers in 2013. It has regular connections to Paris-Orly, Paris-CDG, Lyon, Nice, Geneva, and London Stansted, and from March to October 2014 had connections with Marseille, Strasbourg, Lille, Brussels South Charleroi Airport, Dublin, Stockholm-Skavsta, Stockholm-Arlanda, London-Gatwick, Copenhagen, Oslo, and Helsinki. Airline companies serving the airport at 1 November 2014 were Air France, Etihad Regional, EasyJet, Finnair, Hop!, Ryanair, SAS, Twin Jet, and Volotea. While the modern Basque spelling is "Baiona", and the same in Gascon Occitan, "the name "Bayonne" poses a number of problems both historical and linguistic which have still not been clarified". There are different interpretations of its meaning. The termination "-onne" in "Bayonne" can come from many hydronyms in "-onne", or from toponyms derived from them. In certain cases the element "-onne" follows an Indo-European theme: "*ud-r/n" (Greek "húdōr", giving hydro-; Gothic "watō" meaning "water"), hence "*udnā" meaning "water", giving "unna" then "onno" in the glossary of Vienne. "Unna" therefore would refer to the Adour. This toponymic type, evoking a river traversing a locality, is common.
The appellative "unna" seems to be found in the name of the Garonne ("Garunna" 1st century; "Garonna" 4th century). However it is possible to see a pre-Celtic suffix "-ona" in the name of the Charente ("Karantona" in 875) or the Charentonne ("Carentona" in 1050). It could also be an augmentative Gascon from the original Latin radical "Baia-" with the suffix "-ona" in the sense of "vast expanse of water" or a name derived from the Basque "bai" meaning "river" and "ona" meaning "good", hence "good river". The proposal by Eugene Goyheneche repeated by Manex Goyhenetche and supported by Jean-Baptiste Orpustan is "bai una", "the place of the river" or "bai ona" "hill by the river"—"Ibai" means "river" in Basque and "muinoa" means "hill". "It has perhaps been lost from sight that many urban place names in France, from north to south, came from the element "Bay-" or "Bayon-" such as: Bayons, Bayonville, Bayonvillers and pose the unusual problem of whether they are Basque or Gascon" adds Pierre Hourmat. However, the most ancient form of Bayonne: "Baiona", clearly indicates a feminine or a theme of "-a" whereas this is not the case for Béon or Bayon. In addition, the "Bayon-" in Bayonville or Bayonvillers in northern France is clearly the personal Germanic name "Baio". The names of the Basque province of Labourd and the locality of Bayonne have been attested from an early period with the place name "Bayonne" appearing in the Latin form "Lapurdum" after a period during which the two names could in turn designate a Viscounty or Bishopric. "Labourd" and "Bayonne" were synonymous and used interchangeably until the 12th century before being differentiated: Labord for the province and Bayonne for the city. The attribution of Bayonne as "Civitas Boatium", a place mentioned in the Antonine Itinerary and by Paul Raymond in his 1863 dictionary, has been abandoned. The city of the "Boïates" may possibly be La Teste-de-Buch but is certainly not Bayonne. The following table details the origins of Labord, Bayonne, and other names in the commune. Sources: Origins: In the absence of accurate objective data there is some credence to the probable existence of a fishing village on the site in a period prior to ancient times. Numerous traces of human occupation have been found in the Bayonne region from the Middle Paleolithic especially in the discoveries at Saint-Pierre-d'Irube, a neighbouring locality. On the other hand, the presence of a mound about high has been detected in the current Cathedral Quarter overlooking the Nive which formed a natural protection and a usable port on the left bank of the Nive. At the time the mound was surrounded north and west by the Adour swamps. At its foot lies the famous "Bayonne Sea"—the junction of the two rivers—which may have been about wide between Saint-Esprit and the Grand Bayonne and totally covered the current location of Bourg-Neuf (in the district of Petit Bayonne). To the south the last bend of the Nive widens near the Saint-Léon hills. Despite this, the narrowing of the Adour valley allows easier crossing than anywhere else along the entire length of the estuary. In conclusion, the strategic importance of this height was so obvious it must be presumed that it has always been inhabited. The oldest documented human occupation site is located on a hill overlooking the Nive and its confluence with the Adour. 
In the 1st century AD, during the Roman occupation, Bayonne already seems to have been of some importance, since the Romans surrounded the city with a wall to keep out the Tarbelli, Aquitani, or proto-Basque, who then occupied a territory that extended south of modern-day Landes to the modern French Basque country, the Chalosse, the valleys of the Adour, the mountain streams of Pau, Pyrénées-Atlantiques, and the Gave d'Oloron. The archaeological discoveries of October and November 1995 provided a shred of evidence to support this projection. In the four layers of sub-soil along the foundation of the Gothic cathedral (in the "apse of the cathedral" area), objects from the end of the 1st century were found at a depth of 2 metres—in particular Gallic sigillated ceramics from Montans imitating Italian styles, thin-walled bowls, and fragments of amphorae. In the "southern sector" near the cloister door there were objects from the second half of the 1st century as well as coins from the first half of the 3rd century. A very high probability of human presence, not solely military, seems to provisionally confirm the occupation of the site at least from around the 3rd century. A Roman castrum dating to the end of the 4th century has been proven as a fortified place of Novempopulania. It was named "Lapurdum", a name which later became that of the province of Labourd. According to Eugene Goyheneche the name "Baiona" designated the city, the port, and the cathedral, while that of "Lapurdum" was only a territorial designation. This Roman settlement was strategic as it allowed the monitoring of the trans-Pyrenean roads and of local peoples rebellious against Roman power. The construction covered 6 to 10 hectares according to several authors. The geographical location of the locality, at the crossroads of a river system oriented from east to west and the road network connecting Europe to the Iberian Peninsula from north to south, predisposed the site to the double role of fortress and port. The city, after being Roman, alternated between the Vascones and the English for three centuries, from the 12th to the 15th century. The Romans left the city in the 4th century and the Basques, who had always been present, dominated the former province of Novempopulania between the Garonne, the ocean, and the Pyrenees. Novempopulania was renamed Vasconia and then Gascony after a Germanic deformation (resulting from the Visigoth and Frankish invasions). Basquisation of the plains region was too weak against the advance of romanization, and from the mixture of the Basque and Latin languages Gascon was created. Documentation on Bayonne for the period of the High Middle Ages is virtually nonexistent, with the exception of two Norman intrusions: one questionable in 844 and a second attested in 892. When Labourd was created in 1023, Bayonne was the capital and the Viscount resided there. The history of Bayonne proper started in 1056, when Raymond II the Younger, Bishop of Bazas, was given the mission to build the Church of Bayonne. The construction was carried out under the authority of Raymond III of Martres, Bishop of Bayonne from 1122 to 1125, together with Viscount Bertrand, for the Romanesque cathedral, the rear of which can still be seen today, and the first wooden bridge across the Adour extending the Mayou bridge over the Nive, which inaugurated the heyday of Bayonne. From 1120 new districts were created under population pressure.
The development of the areas between the old Roman city of Grand Bayonne and the Nive also took place during this period, then between the Nive and the Adour at the place that became Petit Bayonne. A Jacobin convent was located there in 1225, followed by that of the Cordeliers in 1247. Construction of and modifications to the defences of the city also developed to protect the new districts. In 1130 the King of Aragon, Alfonso the Battler, besieged the city without success. Bayonne came under English rule when Eleanor of Aquitaine married Henry II of England in 1152. This alliance gave Bayonne many commercial privileges. The Bayonnais became carriers of Bordeaux wines and other south-western products like resin, ham, and woad to England. Bayonne was then an important military base. In 1177 King Richard separated the Viscounty of Labourd, whose capital then became Ustaritz. Like many cities at the time, Bayonne obtained the award of a municipal charter in 1215 and was emancipated from feudal powers. A Coutume unique to the city was officially published in 1273 and remained in force for five centuries, until the separation of Bayonne from Labourd. Bayonnaise industry at that time was dominated by shipbuilding, wood (oak, beech, and chestnut from the Pyrenees, and pine from Landes) being overabundant. There was also maritime activity in providing crews for whaling, the merchant marine or, as was often the case at a time when any merchant ship could easily be turned into a warship, the English Royal Navy. Jean de Dunois—a former companion at arms of Joan of Arc—captured the city on 20 August 1451 and annexed it to the Crown "without making too many victims", but at the cost of a war indemnity of 40,000 gold écus payable in a year—thanks to the opportunism of the bishop, who claimed to have seen "a large white cross surmounted by a crown which turns into a fleur-de-lis in the sky" to dissuade Bayonne from fighting against the royal troops. The city continued to be fortified by the kings of France to protect it from danger from the Spanish border. In 1454 Charles VII created a separate judicial district: the "Seneschal of Lannes", a "single subdivision of Guyenne during the English period", which had jurisdiction over a wide area including Bayonne, Dax, and Saint-Sever, and which exercised civil justice and such criminal jurisdiction as fell within the competence of the district councilors. Over time, the "Seneschal of the Sword" at Dax lost any role other than protocol, and Bayonne, along with Dax and Saint-Sever, became the de facto seat of a separate seneschal under the authority of a "lieutenant-general of the Seneschal". In May 1462 King Louis XI, after signing the Treaty of Bayonne, authorized the holding of two annual fairs by letters patent; this was confirmed in the coutumes of the inhabitants in July 1472, following the death of Charles de Valois, Duke de Berry, the king's brother. When the Spanish Inquisition raged in the Iberian Peninsula, Spanish and, later, Portuguese Jews fled and settled in southern France, including in Saint-Esprit (Pyrénées-Atlantiques), a northern district of Bayonne located along the northern bank of the Adour. They brought with them chocolate and the recipe for its preparation. In 1750, the Jewish population in Saint-Esprit is estimated to have reached about 3,500 people.
The golden age of the city ended in the 15th century with the loss of trade with England and the silting of the port of Bayonne created by the movement of the course of the Adour to the north. At the beginning of the 16th century Labourd suffered the emergence of the plague, whose path can be tracked by reading the "Registers". In July 1515 the city of Bayonne was "prohibited to welcome people from plague-stricken places" and on 21 October, "we inhibit and prohibit all peasants and residents of this city [...] to go to the Parish of Bidart [...] because of the contagion of the plague". On 11 April 1518 the plague raged in Saint-Jean-de-Luz, and the city of Bayonne "inhibited and prohibited for all peasants and city inhabitants and other foreigners to maintain relationships at the location and Parish of Saint-Jean-de-Luz where people have died of the plague". On 11 November 1518 plague was present in Bayonne, to the point that in 1519 the city council moved to the district of Brindos (Berindos at the time) in Anglet. In 1523 Marshal Odet of Foix, Viscount of Lautrec, resisted the Spaniards under Philibert of Chalon, in the service of Charles V, and lifted the siege of Bayonne. It was at Château-Vieux that the ransom demanded for the release of Francis I, taken prisoner after his defeat at the Battle of Pavia, was gathered. The meeting in 1565 between Catherine de Medici and the envoy of Philip II, the Duke of Alba, is known as the "Interview of Bayonne". At a time when Catholics and Protestants were tearing each other apart in parts of the kingdom of France, Bayonne seemed relatively untouched by these troubles. An iron fist from the city's leaders was not unknown, however: they never hesitated to use violence and criminal sanctions to keep order in the name of the "public good". Two brothers, Saubat and Johannes Sorhaindo, who were both lieutenants of the mayor of Bayonne in the second half of the 16th century, perfectly embodied this period: they often wavered between Catholicism and Protestantism but always sought to ensure the unity and prestige of the city. In the 16th century the king's engineers, under the direction of Louis de Foix, were dispatched to rearrange the course of the Adour by creating an estuary to maintain the river bed. The river discharged to the ocean at the intended place on 28 October 1578, and the port of Bayonne then attained a greater level of activity. Fishing for cod and whale ensured the wealth of fishermen and shipowners. From 1611 to 1612 the college principal of Bayonne was a 26-year-old man with a future: Cornelius Jansen, known as "Jansénius", the future Bishop of Ypres. Bayonne became the birthplace of Jansenism, an austere doctrine which strongly disrupted the monarchy of Louis XIV. During the sporadic conflicts that troubled the French countryside from the mid-17th century, Bayonne peasants who were short of powder and projectiles attached their long hunting knives to the barrels of their muskets, fashioning makeshift spears later called "bayonets". In that same century, Vauban was charged by Louis XIV to fortify the city. He added a citadel built on a hill overlooking the district of "San Espirit Cap deou do Punt". Activity in Bayonne peaked in the 18th century. The Chamber of Commerce was founded in 1726. Trade with Spain, the Netherlands, and the Antilles, the cod fishery off the shores of Newfoundland, and construction sites maintained a high level of activity in the port.
In 1792 the district of Saint-Esprit, located on the right bank of the Adour, was separated from the city (which revolutionaries had renamed "Port-de-la-Montagne") and renamed "Jean-Jacques Rousseau". It was reunited with Bayonne on 1 June 1857; for 65 years the autonomous commune was part of the department of Landes. In 1808, at the Château of Marracq, the act of abdication of the Spanish king Charles IV in favour of Napoleon was signed under the "friendly pressure" of the Emperor. In the process the Bayonne Statute was initialed as the first Spanish constitution. Also in 1808 the French Empire imposed on the Duchy of Warsaw the Convention of Bayonne, under which the duchy bought from France the debts owed to France by Prussia. The debt, amounting to more than 43 million francs in gold, was bought at a discounted rate of 21 million francs. However, although the duchy made its payments in installments to France over a four-year period, Prussia was unable to pay the duchy (due to a very large indemnity it owed to France resulting from the Treaties of Tilsit), causing the Polish economy to suffer heavily. Trade was the wealth of the city in the 18th century, but it suffered greatly in the 19th century, severely sanctioned by conflict with Spain, its historic trading partner in the region. The Siege of Bayonne marked the end of the period, with the surrender of the Napoleonic troops of Marshal Jean-de-Dieu Soult, who were defeated by the coalition led by Wellington, on 5 May 1814. In 1854 the railway arrived from Paris, bringing many tourists eager to enjoy the beaches of Biarritz. Bayonne turned instead to the steel industry with the forges of the Adour. The port took on an industrial look, but its slow decline seemed inexorable in the 19th century; the discovery of the Lacq gas field later restored a certain dynamism. The Treaty of Bayonne was concluded on 2 December 1856. It settled the disputes over the fixing of the Franco-Spanish border in the area extending from the mouth of the Bidassoa to the border between Navarre and Aragon. The city built three light railway lines to connect to Biarritz at the beginning of the 20th century. The most direct line, that of the "Tramway Bayonne-Lycée–Biarritz", was operated from 1888 to 1948. A line further north served Anglet, operated by the "Chemin de fer Bayonne-Anglet-Biarritz" company from 1877 to 1953. Finally, a line following the Adour to its mouth and the Atlantic Ocean by the bar at Anglet was operated by "VFDM réseau basque" from 1919 to 1948. On the morning of 23 December 1933, sub-prefect Anthelme received Gustave Tissier, the director of the "Crédit Municipal de Bayonne", and listened with some astonishment as the man unpacked what became the scam of the century. "Tissier, director of the "Crédit Municipal", was arrested and imprisoned under suspicion of forgery and misappropriation of public funds. He had issued thousands of false bonds in the name of "Crédit Municipal" [...]" This was the beginning of the Stavisky Affair which, together with other scandals and political crises, led to the Paris riots of 6 February 1934. The 249th Infantry Regiment, created from the 49th Infantry Regiment, was engaged in operations in the First World War, including action at the Chemin des Dames, especially on the plateau of Craonne; 700 Bayonnais perished in the conflict. A centre for the engagement of foreign volunteers was established in Bayonne in August 1914.
Many nationalities were represented, particularly the Spanish, the Portuguese, the Czechs, and the Poles. During the Second World War, Bayonne was occupied by the 3rd SS Panzer Division Totenkopf from 27 June 1940 to 23 August 1944. On 5 April 1942 the Allies made a landing attempt at Bayonne, but after a barge penetrated the Adour with great difficulty, the operation was canceled. On 21 August 1944, after blowing up twenty ships in port, German troops withdrew. On the 22nd a final convoy of five vehicles passed through the city, transporting Gestapo and customs agents and some elements of the "Feldgendarmerie"; one or more Germans opened fire with machine guns, killing three people. On the 23rd there was an informal and immediate installation of a "special municipal delegation" by the young deputy prefect Guy Lamassoure, representing the Provisional Government of the French Republic, which had been established in Algiers since 27 June. The Gramont family provided captains and governors in Bayonne from 1472 to 1789 as well as mayors, a post which became hereditary from 28 January 1590 by concession of Henry IV to Antoine II of Gramont. From the 15th century they resided in the Château Neuf, then in the Château-Vieux from the end of the 16th century. As per the Decree of 22 December 1789, Bayonne was part of two cantons: Bayonne North-east, which included part of Bayonne commune plus Boucau, Saint-Pierre-d'Irube, Lahonce, Mouguerre, and Urcuit; and Bayonne North-west, which consisted of the rest of Bayonne commune plus Anglet, Arcangues, and Bassussarry. In a first revision of cantons in 1973, three cantons were created from the same total geographic area: Bayonne North, Bayonne East, and Bayonne West. A further reconfiguration in 1982 focused primarily on Bayonne; apart from Bayonne North canton, which also includes Boucau, the cantons of Bayonne East and Bayonne West did not change. Starting from the 2015 French departmental elections, which took place on 22 and 29 March, a new division took effect following the decree of 25 February 2014. Once again three cantons centred on Bayonne were defined: Bayonne-1, with part of Anglet; Bayonne-2, which includes Boucau; and Bayonne-3. These now define the cantonal territorial division of the area. Bayonne is the seat of many courts for the region. It falls under the jurisdiction of the "Tribunal d'instance" (District Court) of Bayonne, the "Tribunal de grande instance" (High Court) of Bayonne, the "Cour d'appel" (Court of Appeal) of Pau, the "Tribunal pour enfants" (Juvenile Court) of Bayonne, the "Conseil de prud'hommes" (Labour Court) of Bayonne, the "Tribunal de commerce" (Commercial Court) of Bayonne, the "Tribunal administratif" (Administrative Tribunal) of Pau, and the "Cour administrative d'appel" (Administrative Court of Appeal) of Bordeaux. The commune has a police station, a departmental gendarmerie, an autonomous territorial brigade of the district gendarmerie, squadron 24/2 of the mobile gendarmerie, and a tax collection office. The commune is part of twelve inter-communal structures, of which eleven are based in the commune. The city of Bayonne is part of the "Agglomeration Côte Basque-Adour", which also includes Anglet, Biarritz, Bidart, and Boucau.
The statutory powers of the structure extend to economic development (including higher education and research), housing and urban planning, public transport (through Transdev), waste collection and recovery, management of rain and coastal waters, sustainable development, and interregional cooperation. In addition, Bayonne is part of the Basque Bayonne-San Sebastián Eurocity, a European economic interest grouping (EEIG) established in 1993 and based in San Sebastián. Bayonne also maintains town-twinning associations with several cities. In 2012 the commune had 45,855 inhabitants. The evolution of the number of inhabitants is known from the population censuses conducted in the commune since 1793. Since the 21st century, communes with fewer than 10,000 inhabitants have held a census every five years, unlike larger communes, which have a sample survey every year. Bayonne commune is attached to the Academy of Bordeaux and has an information and guidance centre (CIO). On 14 December 2015 Bayonne had 10 kindergartens and 22 elementary or primary schools (12 public and 10 private, including two ikastolas). There are 2 public colleges (the Albert Camus and Marracq colleges) and 5 private colleges (La Salle Saint-Bernard, Saint Joseph, Saint-Amand, Notre-Dame, and Largenté) which cover the first cycle of second-degree studies. For the second cycle, Bayonne has 3 public high schools (the René-Cassin school (general education), the Louis de Foix school (general, technological, and vocational education), and the Paul Bert vocational school) and 4 private high schools (Saint-Louis Villa Pia (general education), Largenté, Bernat Etxepare (general and technological), and the Le Guichot vocational school). There are also the Maurice Ravel Conservatory of Music, Dance, and Dramatic Art and the art school of the urban community of Bayonne-Anglet-Biarritz. For 550 years, every Holy Thursday, Friday, and Saturday, the "Foire au Jambon" (Ham Fair) has been held to mark the beginning of the season. An annual five-day summer festival has been held in the commune since 1932, organized around parades, bull races, fireworks, and music in the Basque and Gascon tradition. These festivals have become the most important festive events in France in terms of attendance. Bayonne has the oldest French bullfighting tradition: a bylaw regulating the "encierro" is dated 1283, and cows, oxen, and bulls are released each year in the streets of Petit Bayonne during the summer festivals. The current arena, opened in 1893, is the largest in south-west France with more than 10,000 seats. A dozen bullfights are held each year, attracting the biggest names in bullfighting, and throughout summer several "novilladas" also take place. The city is a member of the "Union of French bullfighting cities". Bayonne is the focus of much of the hospital services for the agglomeration of Bayonne and the southern Landes. In this area all inhabitants are less than 35 km from a hospital offering medical, obstetrical, surgical, or psychiatric care. The hospitals for the whole Basque Coast are mainly established in Bayonne (the main site of Saint-Léon and Cam-de-Prats) and also in Saint-Jean-de-Luz, which has several clinics. Bayonne is in the Diocese of Bayonne, Lescar and Oloron, suffragan since 2002 of the Archdiocese of Bordeaux. Monseigneur Marc Aillet has been the bishop of this diocese since 15 October 2008. The seat of the diocese is in Bayonne, in the Place Monseigneur-Vansteenberghe.
Besides Bayonne Cathedral in Grand Bayonne, Bayonne has the Saint-Esprit, Saint Andrew (Rue des Lisses), Arènes (Avenue of the Czech Legion), Saint-Étienne, and Saint-Amand (Avenue Maréchal Soult) churches. The "Carmel of Bayonne", located in the Marracq district, has had a community of Carmelite nuns since 1858. The "Way of Baztan" (also "ruta del Baztan" or "camino Baztanés") is a route of the Camino de Santiago pilgrimage which crosses the Pyrenees further west by the lowest pass (the "Col de Belate", 847 m). It is the ancient road used by pilgrims who had come down to Bayonne, either along the coast on the "Way of Soulac" or because they had landed there from England, for example, and who wanted to join the French Way as soon as possible, at Pamplona. The "Way of Bayonne" joins the French Way further on, at Burgos. The Protestant church is located at the corner of Rue Albert-Ier and Rue du Temple. A gospel church is located in the Saint-Esprit district, where there is also a church belonging to the Gypsy Evangelical Church of the Protestant Federation of France. The synagogue was built in 1837 in the Saint-Esprit district north of the town. The Jewish community of Bayonne is old—it consists of different groups of fugitives from Navarre and Portugal who settled at Saint-Esprit-lès-Bayonne after the expulsion of Jews from Spain in 1492 and Portugal in 1496. In 1846 the Central Consistory moved to Saint-Esprit, which was integrated with Bayonne in 1857. The mosque is located in Rue Joseph-Latxague and is the seat of the cultural association of Muslims of the Basque Coast. In 2011, the median fiscal income per household was €22,605, placing Bayonne 28,406th among the 31,886 communes with more than 49 households in metropolitan France; 47.8% of households were not taxable. In 2011 the population aged 15 to 64 years numbered 29,007 persons, of whom 70.8% were employable, 60.3% in employment and 10.5% unemployed. There were 30,012 jobs in the employment zone in 2011, against 29,220 in 2006, while the number of employed workers residing in the zone was 17,667; the indicator of job concentration is therefore 169.9%, which means that the zone offers nearly two jobs for every employed resident (see the short calculation below). Bayonne is the economic capital of the agglomeration of Bayonne and southern Landes. In 2013, 549 new establishments were created in Bayonne, including 406 sole proprietorships. Bayonne itself has few industries: there is "Plastitube", specializing in plastic packaging (190 employees), and the Izarra liqueur company, which set up a distillery in 1912 at Quai Amiral-Bergeret and has long symbolized the economic wealth of Bayonne. Industrial activities are concentrated in the neighbouring communes of Boucau, Tarnos (Turbomeca), Mouguerre, and Anglet. Bayonne is known for its fine chocolates, produced in the town for 500 years, and Bayonne ham, a cured ham seasoned with peppers from nearby Espelette. Izarra, a liqueur made in bright green or yellow colours, is distilled locally. It is said by some that Bayonne is the birthplace of mayonnaise, supposedly a corruption of "Bayonnaise", the French adjective describing the city's people and produce; today "bayonnaise" can refer to a particular mayonnaise flavoured with Espelette chillis.
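As a check on the job-concentration figure quoted above, the arithmetic can be written out, assuming the standard definition of the indicator as jobs located in the zone divided by employed residents of the zone:

\[
\text{job concentration} \;=\; \frac{\text{jobs in the zone}}{\text{employed residents}} \;=\; \frac{30\,012}{17\,667} \;\approx\; 1.699 \;=\; 169.9\%,
\]

i.e. roughly 1.7 jobs per employed resident, hence "nearly two jobs for every employed resident".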
Bayonne is now the centre of certain craft industries that were once widespread, including the manufacture of "makilas", traditional Basque walking-sticks. The Fabrique Alza, just outside the city, is known for its "palas", the bats used in "pelota", the traditional Basque sport. The active tertiary sector includes some large retail chains, such as those detailed by geographer Roger Brunet: BUT (240 staff), Carrefour (150 staff), E.Leclerc (150 staff), Leroy Merlin (130 staff), and Galeries Lafayette (120 employees). Banks, cleaning companies (Onet, 170 employees), and security (Brink's, 100 employees) are also major employers in the commune, as is urban transport, which employs nearly 200 staff. Five health clinics, providing a total of more than 500 beds, each employ 120 to 170 staff. The port of Bayonne is located at the mouth of the Adour, downstream of the city. It also occupies parts of the communes of Anglet and Boucau in Pyrénées-Atlantiques and Tarnos in Landes. It benefits greatly from the natural gas field of Lacq, to which it is connected by pipeline. It is the 9th largest French port for trade, with an annual traffic of about 4.2 million tonnes, of which 2.8 million is export. It is also the largest French port for the export of maize. It is the property of the Aquitaine region, which manages and controls the site. Movements of metallurgical products amount to more than one million tonnes per year, and maize exports to Spain vary between 800,000 tonnes and 1 million tonnes. The port also receives refined oil products from the Total oil refinery at Donges (800,000 tonnes per year). Fertilizer traffic amounts to 500,000 tonnes per year, and sulphur from Lacq, albeit in sharp decline, to 400,000 tonnes. The port also receives Ford and General Motors vehicles from Spain and Portugal, and wood both tropical and from Landes. Due to its proximity to the ocean and the foothills of the Pyrenees, as well as its historic heritage, Bayonne has developed important activities related to tourism. On 31 December 2012 there were 15 hotels in the city offering more than 800 rooms to visitors, but there were no camp sites. The tourist infrastructure in the surrounding urban area complements the local supply, with around 5,800 rooms spread over nearly 200 hotels and 86 campsites offering over 14,000 beds. The Nive divides Bayonne into Grand Bayonne and Petit Bayonne, with five bridges between the two, both quarters still being backed by Vauban's walls. The houses lining the Nive are examples of Basque architecture, with half-timbering and shutters in the national colours of red and green. The much wider Adour is to the north. The Pont Saint-Esprit connects Petit Bayonne with the Quartier Saint-Esprit across the Adour, where the massive Citadelle and the railway station are located. Grand Bayonne is the commercial and civic hub, with small pedestrianised streets packed with shops, plus the cathedral and the Hôtel de Ville. The Cathédrale Sainte-Marie is an imposing, elegant Gothic building, rising over the houses, glimpsed along the narrow streets. It was constructed in the 12th and 13th centuries. The south tower was completed in the 16th century, but the cathedral itself was only completed in the 19th century with the north tower. The cathedral is noted for its charming cloisters. There are other details and sculptures of note, although much was destroyed in the Revolution. Nearby is the Château Vieux, some of which dates back to the 12th century, where the governors of the city were based, including the English Black Prince.
The Musée Basque is the finest ethnographic museum of the entire Basque Country. It opened in 1922 but was recently closed for a decade for refurbishment. It now has special exhibitions on Basque agriculture, seafaring and "pelota", handicrafts and Basque history and way of life. The Musée Bonnat began with a large collection bequeathed by the local-born painter Léon Bonnat. The museum is one of the best galleries in south-west France and has paintings by Edgar Degas, El Greco, Sandro Botticelli, and Francisco Goya, among others. At the back of Petit Bayonne is the Château Neuf, among the ramparts. Now an exhibition space, it was started by the newly arrived French in 1460 to control the city. The walls nearby have been opened to visitors. They are now important for plant life, and Bayonne's botanic gardens adjoin the walls on both sides of the Nive. The area across the Adour is largely residential and industrial, with much demolished to make way for the railway. The Saint-Esprit church was part of a bigger complex built by Louis XI to care for pilgrims to Santiago de Compostela. It is home to a wooden "Flight into Egypt" sculpture. Overlooking the quarter is Vauban's 1680 Citadelle. The soldiers of Wellington's army who died besieging the citadelle in 1813 are buried in the nearby English Cemetery, visited by Queen Victoria and other British dignitaries when staying in Biarritz. The distillery of the famous local liqueur Izarra is located on the northern bank of the Adour and is open to visitors.
https://en.wikipedia.org/wiki?curid=4741
Bubblegum Crisis Bubblegum Crisis is a Japanese cyberpunk original video animation (OVA) series first released in 1987. The series involves the adventures of the Knight Sabers, an all-female group of mercenaries who don powered exoskeletons and fight numerous problems, most frequently rogue robots. The success of the series spawned several sequel series. The series begins in late 2032, seven years after the Second Great Kanto earthquake has split Tokyo geographically and culturally in two. During the first episode, disparities in wealth are shown to be more pronounced than in previous periods in post-war Japan. The main adversary is Genom, a megacorporation with immense power and global influence. Its main products are Boomers—artificial cybernetic life forms, usually human in appearance but with most of their bodies being machine, also known as "cyberoids". While Boomers are intended to serve mankind, they become deadly instruments in the hands of ruthless individuals. The AD Police are tasked with dealing with Boomer-related crimes. One of the series' themes is the inability of the department to deal with threats due to political infighting, red tape, and an insufficient budget. The setting displays strong influences from the movies "Blade Runner" and "Streets of Fire"; the opening sequence of episode 1 is modeled on the opening sequence of "Streets of Fire". The humanoid robots known as Boomers in the series also resemble the Terminator cyborgs of the "Terminator" films. Suzuki explained in a 1993 "Animerica" interview the meaning behind the cryptic title: "We originally named the series 'bubblegum' to reflect a world in crisis, like a chewing-gum bubble that's about to burst." The series started with Toshimichi Suzuki's intention to remake the 1982 film "Techno Police 21C". However, he met Junji Fujita, the two discussed ideas, and they decided to collaborate on what later became "Bubblegum Crisis". Kenichi Sonoda acted as character designer and designed the four female leads. Masami Ōbari created the mechanical designs and would also go on to direct episodes 5 and 6. The OVA series is eight episodes long but was originally slated to run for 13 episodes; due to legal problems between Artmic and Youmex, who jointly held the rights to the series, it was discontinued prematurely. In North America, AnimEigo first released "Bubblegum Crisis" to VHS and Laserdisc in 1991 in Japanese with English subtitles. The series is notable in that it was one of the few early anime series brought over from Japan unedited and subtitled in English. While anime has become much more popular in the years since, in 1991 it was still mostly unknown as a storytelling medium in North America. "Bubblegum Crisis" first aired in the US on PBS affiliate Superstation KTEH in the 1990s, and on STARZ!'s Action Channel in 2000. An English dub of the series was produced beginning in 1994 by AnimEigo through Southwynde Studios in Wilmington, NC, and released to VHS and Laserdisc beginning that year. A digitally remastered compilation, featuring bilingual audio tracks and production extras, was released on DVD in 2004 by AnimEigo. The company later successfully crowdfunded a collector's edition Blu-ray release through Kickstarter in November 2013, and the series was released on a regular edition Blu-ray on September 25, 2018. The series is currently available for streaming on Night Flight Plus. There are eight soundtrack releases (one per OVA), as well as numerous "vocal" albums featuring songs "inspired by" the series alongside many drawn directly from it.
Masaki Kajishima and Hiroki Hayashi, who both worked on the "Bubblegum Crisis" OVAs, cite the show as the inspiration for their harem series "Tenchi Muyo! Ryo-Ohki". In an interview with AIC, Hayashi described "Bubblegum Crisis" as "a pretty gloomy anime. Serious fighting, complicated human relationships, and dark Mega Tokyo." They thought it would be fun to create some comedy episodes with ideas like the girls going to the hot springs, but this was rejected by the sponsors. He also said that there was a trend to have a group of characters of one gender and a single character of the other, and asked what would happen if Mackey (Sylia's brother) were a main character, reversing the "Bubblegum" scenario. This idea then became the basis for Tenchi; Hayashi said that Mackey is "sort of" the original model for Tenchi. Kevin Siembieda, on becoming aware that "Boomers" was already in use in this series, changed his planned name for the "Boom Gun"–wielding power armor in the "Rifts" RPG, which was renamed the "Glitter Boy". The success of the series spawned several sequel series. The first of them was a three-part OVA. After the split between Artmic and Youmex, Artmic proceeded to make a sequel on their own, "Bubblegum Crash", which ran three OVA episodes and is conjectured to be a shortened version of how "Crisis" was to end. Youmex promptly sued Artmic, cutting "Crash" short and tying the entire franchise up in legal issues for the next several years. It is set in 2034, and the Knight Sabers seem to be finished; each of the members except Nene has seemingly drifted off to pursue her own goals. But at the same time, parts of a unique artificial intelligence are stolen by several villains acting under the orders of a mysterious voice. Unexpectedly, Sylia resurfaces and prepares her teammates for battle, and as a gigantic machine drills its way to Mega Tokyo's main nuclear power plant, they meet again with an old and deadly enemy. In 1993, the franchise appeared in "Scramble Wars", a crossover event between "Bubblegum Crisis", "Gall Force", "Genesis Survivor Gaiarth", "AD Police" and "Riding Bean". The series' creator Toshimichi Suzuki wrote two novels. In Japan, a number of comic books were produced that featured characters and storylines based in the same universe. Some were very much thematically linked to the OVA series, while others were one-shots or comedy features. A number of artists participated in the creation of these comics, including Kenichi Sonoda, who had produced the original Knight Saber character designs. A North American comic based in the Bubblegum Crisis universe was published in English by Dark Horse Comics. In May 2009 it was announced that a live-action movie of "Bubblegum Crisis" was in the early stages of production, and a production agreement was signed at the 2009 Cannes Film Festival. The film was expected to be released in late 2012 with a budget of US$30 million. The production staff was said to have consulted with the original anime's staff members, Shinji Aramaki and Kenichi Sonoda, to help maintain consistency with the world of the original. However, no further developments have been announced.
https://en.wikipedia.org/wiki?curid=4742
Black people Black people is a skin color-based classification for specific people with a mid- to dark-brown complexion. Not all black people have dark skin; in certain countries, often in socially based systems of racial classification in the Western world, the term "black" is used to describe persons who are perceived as dark-skinned compared to other populations. It is mostly used for people of Sub-Saharan African descent. Indigenous African societies do not use the term "black" as a racial identity outside of influences brought by Western cultures. The term "black" may be capitalized but is more commonly written in lower case. The AP Stylebook, a highly influential source used by many news organizations, government, and public relations agencies, changed its guide to capitalize the "b" in "Black" in June 2020. Different societies apply different criteria regarding who is classified as "black", and these social constructs have changed over time. In a number of countries, societal variables affect classification as much as skin color, and the social criteria for "blackness" vary. In the United Kingdom, "black" was historically equivalent to "person of color", a general term for non-European peoples. In other regions, such as Australasia, settlers applied the term "black" or it was used by local populations with different histories and ancestral backgrounds. For many other individuals, communities, and countries, "black" is perceived as a derogatory, outdated, reductive or otherwise unrepresentative label, and as a result is neither used nor defined, especially in African countries with little to no history of colonial racial segregation. Some have pointed out that labeling people "black" is erroneous, as the people described as "black" have a brown skin color. Numerous communities of dark-skinned peoples are present in North Africa, some dating from prehistoric communities. Others descend from immigrants via the historical trans-Saharan trade or, after the Arab invasions of North Africa in the 7th century, from slaves of the Arab slave trade in North Africa. In the 18th century, the Moroccan Sultan Moulay Ismail "the Warrior King" (1672–1727) raised a corps of 150,000 black soldiers, called his Black Guard. According to Carlos Moore, resident scholar at Brazil's University of the State of Bahia, in the 21st century Afro-multiracials in the Arab world, including Arabs in North Africa, self-identify in ways that resemble multiracials in Latin America. He claims that darker-toned Arabs, much like darker-toned Latin Americans, consider themselves white because they have some distant white ancestry. Egyptian President Anwar Sadat had a mother who was a dark-skinned Nubian Sudanese (Sudanese Arab) woman and a father who was a lighter-skinned Egyptian. As a young man, he said in response to an advertisement for an acting position, "I am not white but I am not exactly black either. My blackness is tending to reddish". Due to the patriarchal nature of Arab society, Arab men, including during the slave trade in North Africa, enslaved more African women than men. The female slaves were often put to work in domestic service and agriculture. The men interpreted the Quran to permit sexual relations between a male master and his enslaved females outside of marriage (see Ma malakat aymanukum and sex), leading to many mixed-race children. When an enslaved woman became pregnant with her Arab master's child, she was considered "umm walad" or "mother of a child", a status that granted her privileged rights.
The child was given rights of inheritance to the father's property, so mixed-race children could share in any wealth of the father. Because the society was patrilineal, the children took their fathers' social status at birth and were born free. Some succeeded their fathers as rulers, such as Sultan Ahmad al-Mansur, who ruled Morocco from 1578 to 1608. He was not technically considered a mixed-race child of a slave; his mother was Fulani and a concubine of his father. In early 1991, non-Arabs of the Zaghawa people of Sudan attested that they were victims of an intensifying Arab apartheid campaign, segregating Arabs and non-Arabs (specifically, people of Nilotic ancestry). Sudanese Arabs, who controlled the government, were widely referred to as practicing apartheid against Sudan's non-Arab citizens. The government was accused of "deftly manipulat(ing) Arab solidarity" to carry out policies of apartheid and ethnic cleansing. American University economist George Ayittey accused the Arab government of Sudan of practicing acts of racism against black citizens. According to Ayittey, "In Sudan... the Arabs monopolized power and excluded blacks – Arab apartheid." Many African commentators joined Ayittey in accusing Sudan of practicing Arab apartheid. In the Sahara, the native Tuareg Berber populations kept "negro" slaves. Most of these captives were of Nilotic extraction, and were either purchased by the Tuareg nobles from slave markets in the Western Sudan or taken during raids. Their origin is denoted via the Ahaggar Berber word "Ibenheren" (sing. "Ébenher"), which alludes to slaves that only speak a Nilo-Saharan language. These slaves were also sometimes known by the borrowed Songhay term "Bella". Similarly, the Sahrawi indigenous peoples of the Western Sahara observed a class system consisting of high castes and low castes. Outside of these traditional tribal boundaries were "Negro" slaves, who were drawn from the surrounding areas. In Ethiopia and Somalia, the slave classes mainly consisted of peoples captured from around the Sudanese-Ethiopian and Kenyan-Somali borders or other surrounding areas, of Nilotic and Bantu origin, who were collectively known as "Shanqella" and "Adone" (both analogous to "negro" in English-speaking contexts). Some of these slaves were captured during territorial conflicts in the Horn of Africa and then sold off to slave merchants. The earliest representation of this tradition dates from a seventh- or eighth-century BC inscription belonging to the Kingdom of Damat. These captives and others of analogous morphology were distinguished as "tsalim barya" (dark-skinned slaves), in contrast with the Afroasiatic-speaking nobles, or "saba qayh" ("red men"), and light-skinned slaves. Western racial categories, on the other hand, do not differentiate between "saba qayh" ("red men", light-skinned) and "saba tiqur" ("black men", dark-skinned) Horn Africans (whether of Afroasiatic-speaking, Nilotic-speaking or Bantu origin), considering all of them "Black people" (and in some cases "negro") according to Western society's notion of race. In South Africa, the period of colonization resulted in many unions and marriages between European men and Bantu and Khoisan women from various tribes, resulting in mixed-race children. As the European settlers acquired control of territory, they generally pushed the mixed-race and Bantu and Khoisan populations into second-class status. 
During the first half of the 20th century, the Afrikaner-dominated government classified the population according to four main racial groups: "Black", "White", "Asian" (mostly Indian), and "Coloured". The Coloured group included people of mixed Bantu, Khoisan, and European descent (with some Malay ancestry, especially in the Western Cape). The Coloured definition occupied an intermediary political position between the Black and White definitions in South Africa. The government imposed a system of legal racial segregation, a complex of laws known as apartheid. The apartheid bureaucracy devised complex (and often arbitrary) criteria in the Population Registration Act of 1950 to determine who belonged in which group. Minor officials administered tests to enforce the classifications. When it was unclear from a person's physical appearance whether the individual should be considered Coloured or Black, the "pencil test" was used. A pencil was inserted into a person's hair to determine if the hair was kinky enough to hold the pencil, rather than having it pass through, as it would with smoother hair. If so, the person was classified as Black. Such classifications sometimes divided families. Sandra Laing is a South African woman who was classified as Coloured by authorities during the apartheid era, due to her skin colour and hair texture, although her parents could prove at least three generations of European ancestry. At age 10, she was expelled from her all-white school. The officials' decisions based on her anomalous appearance disrupted her family and adult life. She was the subject of the 2008 biographical dramatic film "Skin", which won numerous awards. During the apartheid era, those classed as "Coloured" were oppressed and discriminated against. However, they had limited rights and overall slightly better socioeconomic conditions than those classed as "Black". The government required that Blacks and Coloureds live in areas separate from Whites, creating large townships located away from the cities as areas for Blacks. In the post-apartheid era, the Constitution of South Africa has declared the country to be a "non-racial democracy". In an effort to redress past injustices, the ANC government has introduced laws in support of affirmative action policies for Blacks; under these they define "Black" people to include "Africans", "Coloureds" and "Asians". Some affirmative action policies favor "Africans" over "Coloureds" in terms of qualifying for certain benefits. Some South Africans categorized as "African Black" say that "Coloureds" did not suffer as much as they did during apartheid. "Coloured" South Africans are known to discuss their dilemma by saying, "we were not white enough under apartheid, and we are not black enough under the ANC (African National Congress)". In 2008, the High Court in South Africa ruled that Chinese South Africans who were residents during the apartheid era (and their descendants) are to be reclassified as "Black people", solely for the purposes of accessing affirmative action benefits, because they were also "disadvantaged" by racial discrimination. Chinese people who arrived in the country after the end of apartheid do not qualify for such benefits. Other than by appearance, "Coloureds" can usually be distinguished from "Blacks" by language. Most speak Afrikaans or English as a first language, as opposed to Bantu languages such as Zulu or Xhosa. They also tend to have more European-sounding names than Bantu names. 
"Afro-Asians" or "African-Asians" (also known as "Black Asians" or "Blasian"), are persons of mixed African and Asian ancestry. Historically, Afro-Asian populations have been marginalized as a result of human migration and social conflict. The term "Black Asian" may also be used to describe Negritos (blacks indigenous to Asia); therefore "Afro-Asian" is a more scientifically proper term to describe Black Asians with direct African background or ancestry. Historians estimate that between the advent of Islam in 650 CE and the abolition of slavery in the Arabian Peninsula in the mid-20th century, 10 to 18 million Black Africans (known as the Zanj) were enslaved by Arab slave traders and transported to the Arabian Peninsula and neighboring countries. This number far exceeded the number of slaves who were taken to the Americas. Several factors affected the visibility of descendants of this diaspora in 21st-century Arab societies: The traders shipped more female slaves than males, as there was a demand for them to serve as concubines in harems in the Arabian Peninsula and neighboring countries. Male slaves were castrated in order to serve as harem guards. The death toll of Black African slaves from forced labor was high. The mixed-race children of female slaves and Arab owners were assimilated into the Arab owners' families under the patrilineal kinship system. As a result, few distinctive Afro-Arab communities have survived in the Arabian Peninsula and neighboring countries. Distinctive and self-identified black communities have been reported in countries such as Iraq, with a reported 1.2 million black people, and they attest to a history of discrimination. These descendants of the Zanj have sought minority status from the government, which would reserve some seats in Parliament for representatives of their population. According to Alamin M. Mazrui et al., generally in the Arabian Peninsula and neighboring countries, most of these communities identify as both black and Arab. Afro-Iranians are people of black African ancestry residing in Iran. During the Qajar dynasty, many wealthy households imported black African women and children as slaves to perform domestic work. This slave labor was drawn exclusively from the Zanj, who were Bantu-speaking peoples that lived along the African Great Lakes, in an area roughly comprising modern-day Tanzania, Mozambique and Malawi. About 150,000 East African and black people live in Israel, amounting to just over 2% of the nation's population. The vast majority of these, some 120,000, are Beta Israel, most of whom are recent immigrants who came during the 1980s and 1990s from Ethiopia. In addition, Israel is home to over 5,000 members of the African Hebrew Israelites of Jerusalem movement that are ancestry of African Americans who emigrated to Israel in the 20th century, and who reside mainly in a distinct neighborhood in the Negev town of Dimona. Unknown numbers of black converts to Judaism reside in Israel, most of them converts from the United Kingdom, Canada, and the United States. Additionally, there are around 60,000 non-Jewish African immigrants in Israel, some of whom have sought asylum. Most of the migrants are from communities in Sudan and Eritrea, particularly the Niger-Congo-speaking Nuba groups of the southern Nuba Mountains; some are illegal immigrants. 
Beginning several centuries ago, during the period of the Ottoman Empire, tens of thousands of Zanj captives were brought by slave traders to plantations and agricultural areas situated between Antalya and Istanbul in present-day Turkey. Some of their descendants remained "in situ", and many migrated to larger cities and towns. Other black slaves were transported to Crete, from where they or their descendants later reached the İzmir area through the population exchange between Greece and Turkey in 1923, or indirectly from Ayvalık in pursuit of work. The Siddi are an ethnic group inhabiting India and Pakistan whose members are descended from Bantu peoples. In the Makran strip of the Sindh and Balochistan provinces in southwestern Pakistan, these Bantu descendants are known as the Makrani. There was a brief "Black Power" movement in Sindh in the 1960s, and many Siddi are proud of and celebrate their African ancestry. Negritos are believed to have been the first inhabitants of Southeast Asia. Once inhabiting Taiwan, Vietnam, and various other parts of Asia, they are now confined primarily to Thailand, the Malay Archipelago, and the Andaman and Nicobar Islands of India. "Negrito" is the Spanish diminutive of "negro", i.e., "little black person"; it is what the Spaniards called the aborigines they encountered in the Philippines. The term "Negrito" itself has come under criticism in countries like Malaysia, where it is now interchangeable with the more acceptable Semang, although this term actually refers to a specific group. Negritos in the Philippines, and in Southeast Asia in general, face much discrimination. Usually, they are marginalized, live in poverty, and are unable to find employment. While census collection of ethnic background is illegal in France, it is estimated that there are about 2.5–5 million black people residing there. As of 2013, there were approximately 800,000 Afro-Germans (not including those of mixed ethnicity). This number is difficult to estimate because the German census does not use race as a category. Afro-Dutch are residents of the Netherlands who are of Black African or Afro-Caribbean ancestry. They tend to be from the former and present Dutch overseas territories of Aruba, Bonaire, Curaçao, Sint Maarten and Suriname. The Netherlands also has sizable Cape Verdean and other African communities. The term "Moors" has been used in Europe in a broader, somewhat derogatory sense to refer to Muslims, especially those of Arab or Berber ancestry, whether living in North Africa or Iberia. Moors were not a distinct or self-defined people. Medieval and early modern Europeans applied the name to Muslim Arabs, Berbers, Black Africans and Europeans alike. Isidore of Seville, writing in the 7th century, claimed that the Latin word Maurus was derived from the Greek "mauron", μαύρον, the Greek word for black. Indeed, by the time Isidore of Seville came to write his "Etymologies", the word Maurus or "Moor" had become an adjective in Latin, "for the Greeks call black, mauron". "In Isidore's day, Moors were black by definition..." Afro-Spaniards are Spanish nationals of West/Central African ancestry. Today they mainly come from Cameroon, Equatorial Guinea, Ghana, Gambia, Mali, Nigeria and Senegal. Additionally, many Afro-Spaniards born in Spain trace their origins to the former Spanish colony of Equatorial Guinea. Today, there are an estimated 683,000 Afro-Spaniards in Spain. 
According to the Office for National Statistics, at the 2001 census there were over a million black people in the United Kingdom; 1% of the total population described themselves as "Black Caribbean", 0.8% as "Black African", and 0.2% as "Black other". Britain encouraged the immigration of workers from the Caribbean after World War II; the first symbolic arrivals came on the ship "Empire Windrush", and hence those who migrated between 1948 and 1970 are known as the Windrush generation. The preferred official umbrella term is "black and minority ethnic" (BAME), but sometimes the term "black" is used on its own, to express unified opposition to racism, as in the Southall Black Sisters, which started with a mainly British Asian constituency, and the National Black Police Association, which has a membership of "African, African-Caribbean and Asian origin". As African states became independent in the 1960s, the Soviet Union offered many of their citizens the chance to study in Russia. Over a period of 40 years, about 400,000 African students from various countries moved to Russia to pursue higher studies, including many Black Africans. This extended beyond the Soviet Union to many countries of the Eastern bloc. Due to the slave trade in the Ottoman Empire that had flourished in the Balkans, the coastal town of Ulcinj in Montenegro had its own black community. As a consequence of the slave trade and privateer activity, some 100 black people are recorded as living in Ulcinj as late as 1878. The Ottoman Army also deployed an estimated 30,000 Black African troops and cavalrymen to its expedition in Hungary during the Austro-Turkish War of 1716–18. Indigenous Australians have been referred to as "black people" in Australia since the early days of European settlement. While originally related to skin colour, the term is used today to indicate Aboriginal or Torres Strait Islander ancestry in general and can refer to people of any skin pigmentation. Being identified as either "black" or "white" in Australia during the 19th and early 20th centuries was critical to one's employment and social prospects. Various state-based Aboriginal Protection Boards were established which had virtually complete control over the lives of Indigenous Australians – where they lived, their employment, marriage, and education – and included the power to separate children from their parents. Aborigines were not allowed to vote and were often confined to reserves and forced into low-paid or effectively slave labour. The social position of mixed-race or "half-caste" individuals varied over time. A 1913 report by Baldwin Spencer states that: After the First World War, however, it became apparent that the number of mixed-race people was growing at a faster rate than the white population, and by 1930 fear of the "half-caste menace" undermining the White Australia ideal from within was being taken as a serious concern. Cecil Cook, the Northern Territory Protector of Natives, noted that: The official policy became one of biological and cultural assimilation: "Eliminate the full-blood and permit the white admixture to half-castes and eventually the race will become white". This led to different treatment for "black" and "half-caste" individuals, with lighter-skinned individuals targeted for removal from their families to be raised as "white" people, restricted from speaking their native language and practising traditional customs, a process now known as the Stolen Generation. 
The second half of the 20th century to the present has seen a gradual shift towards improved human rights for Aboriginal people. In a 1967 referendum, over 90% of the Australian population voted to end constitutional discrimination and to include Aborigines in the national census. During this period many Aboriginal activists began to embrace the term "black" and use their ancestry as a source of pride. Activist Bob Maza said: In 1978 Aboriginal writer Kevin Gilbert received the National Book Council award for his book "Living Black: Blacks Talk to Kevin Gilbert", a collection of Aboriginal people's stories, and in 1998 was awarded (but refused to accept) the Human Rights Award for Literature for "Inside Black Australia", a poetry anthology and exhibition of Aboriginal photography. In contrast to previous definitions based solely on the degree of Aboriginal ancestry, in 1990 the Government changed the legal definition of Aboriginal to include any: This nationwide acceptance and recognition of Aboriginal people led to a significant increase in the number of people self-identifying as Aboriginal or Torres Strait Islander. The reappropriation of the term "black" with a positive and more inclusive meaning has resulted in its widespread use in mainstream Australian culture, including public media outlets, government agencies, and private companies. In 2012, a number of high-profile cases highlighted the legal and community attitude that identifying as Aboriginal or Torres Strait Islander is not dependent on skin colour, with well-known boxer Anthony Mundine widely criticised for questioning the "blackness" of another boxer, and journalist Andrew Bolt successfully sued for publishing discriminatory comments about Aboriginals with light skin. John Caesar, nicknamed "Black Caesar", a convict and bushranger whose parents were born in an unknown area of Africa, was one of the first people of recent Black African ancestry to arrive in Australia. At the 2006 Census, 248,605 residents declared that they were born in Africa. This figure pertains to all immigrants to Australia who were born in nations in Africa, regardless of race, and includes White Africans. Black Canadians is a designation used for people of Black African ancestry who are citizens or permanent residents of Canada. The majority of Black Canadians are of Caribbean origin, though the population also consists of African American immigrants and their descendants (including Black Nova Scotians), as well as many African immigrants. Black Canadians often draw a distinction between those of Afro-Caribbean ancestry and those of other African roots. The term "African Canadian" is occasionally used by some Black Canadians who trace their heritage to the first slaves brought by British and French colonists to the North American mainland. Promised freedom by the British during the American Revolutionary War, thousands of Black Loyalists, such as Thomas Peters, were resettled by the Crown in Canada afterward. In addition, an estimated ten to thirty thousand fugitive slaves reached freedom in Canada from the Southern United States during the antebellum years, aided by people along the Underground Railroad. Many Black people of Caribbean origin in Canada reject the term African Canadian as an elision of the uniquely Caribbean aspects of their heritage, and instead identify as "Caribbean Canadian". 
Unlike in the United States, where African American has become a widely used term, in Canada controversies associated with distinguishing African or Caribbean heritage have resulted in the term "Black Canadian" being widely accepted. There were eight principal areas used by Europeans to buy and ship slaves to the Western Hemisphere. The number of enslaved people sold to the New World varied throughout the slave trade. As for the distribution of slaves from regions of activity, certain areas produced far more enslaved people than others. Between 1650 and 1900, 10.24 million enslaved West Africans arrived in the Americas from the following regions in the following proportions: The variants "neger" and "negar" derive from the Spanish and Portuguese word "negro" (black), and from the now-pejorative French "nègre" (negro). Etymologically, "negro", "noir", "nègre", and "nigger" ultimately derive from "nigrum", the stem of the Latin "niger" (black), which in every other grammatical case, grammatical gender, and grammatical number besides nominative masculine singular is "nigr-" (the "r" is trilled). By the 1900s, "nigger" had become a pejorative word in the United States. In its stead, the term "colored" became the mainstream alternative to "negro" and its derived terms. After the civil rights movement, the terms "colored" and "negro" gave way to "black". "Negro" had superseded "colored" as the most polite word for African Americans at a time when "black" was considered more offensive. The term was accepted as normal, including by people classified as Negroes, until the civil rights movement of the late 1960s. One well-known example is the Reverend Martin Luther King, Jr.'s identification of his own race as "Negro" in his famous 1963 speech "I Have a Dream". During the American civil rights movement of the 1950s and 1960s, some African-American leaders in the United States, notably Malcolm X, objected to the word "Negro" because they associated it with the long history of slavery, segregation, and discrimination that treated African Americans as second-class citizens, or worse. Malcolm X preferred "Black" to "Negro", but later gradually abandoned that as well for "Afro-American" after leaving the Nation of Islam. Since the late 1960s, various other terms for African Americans have been more widespread in popular usage. Aside from "Black American", these include "Afro-American" (in use from the late 1960s to 1990) and "African American" (used in the United States to refer to Black Americans, people often referred to in the past as "American Negroes"). In the first 200 years that black people were in the United States, they primarily identified themselves by their specific ethnic group (closely allied to language) and not by skin color. Individuals identified themselves, for example, as Ashanti, Igbo, Bakongo, or Wolof. However, when the first captives were brought to the Americas, they were often combined with other groups from West Africa, and individual ethnic affiliations were not generally acknowledged by English colonists. In areas of the Upper South, different ethnic groups were brought together. This is significant as the captives came from a vast geographic region: the West African coastline stretching from Senegal to Angola and in some cases from the south-east coast, such as Mozambique. 
A new "African-American" identity and culture was born that incorporated elements of the various ethnic groups and of European cultural heritage, resulting in fusions such as the Black church and African-American English. This new identity was based on provenance and slave status rather than membership in any one ethnic group. By contrast, slave records from Louisiana show that the French and Spanish colonists recorded more complete identities of the West Africans, including ethnicities and given tribal names. The US racial or ethnic classification "black" refers to people with all possible kinds of skin pigmentation, from the darkest through to the very lightest skin colors, including albinos, if they are believed by others to have African ancestry (in any discernible percentage). There are also certain cultural traits associated with being "African American", a term used effectively as a synonym for "black person" within the United States. In March 1807, Great Britain, which largely controlled the Atlantic, declared the transatlantic slave trade illegal, as did the United States. (The latter prohibition took effect 1 January 1808, the earliest date on which Congress had the power to do so after protecting the slave trade under of the United States Constitution.) By that time, the majority of black people in the United States were native-born, so the use of the term "African" became problematic. Though initially a source of pride, many blacks feared that the use of African as an identity would be a hindrance to their fight for full citizenship in the US. They also felt that it would give ammunition to those who were advocating repatriating black people back to Africa. In 1835, black leaders called upon Black Americans to remove the title of "African" from their institutions and replace it with "Negro" or "Colored American". A few institutions chose to keep their historic names, such as the African Methodist Episcopal Church. African Americans popularly used the terms "Negro" or "colored" for themselves until the late 1960s. The term "black" was used throughout but not frequently since it carried a certain stigma. In his 1963 "I Have a Dream" speech, Martin Luther King, Jr. uses the terms "negro" fifteen times and "black" four times. Each time he uses "black" it is in parallel construction with "white"; for example, "black men and white men". With the successes of the civil rights movement, a new term was needed to break from the past and help shed the reminders of legalized discrimination. In place of "Negro", activists promoted the use of "black" as standing for racial pride, militancy, and power. Some of the turning points included the use of the term "Black Power" by Kwame Toure (Stokely Carmichael) and the popular singer James Brown's song "Say It Loud – I'm Black and I'm Proud". In 1988, the civil rights leader Jesse Jackson urged Americans to use instead the term "African American" because it had a historical cultural base and was a construction similar to terms used by European descendants, such as German American, Italian American, etc. Since then, African American and black have often had parallel status. However, controversy continues over which if any of the two terms is more appropriate. Maulana Karenga argues that the term African-American is more appropriate because it accurately articulates their geographical and historical origin. 
Others have argued that "black" is a better term because "African" suggests foreignness, although Black Americans helped found the United States. Still others believe that the term black is inaccurate because African Americans have a variety of skin tones. Some surveys suggest that the majority of Black Americans have no preference for "African American" or "Black", although they have a slight preference for "black" in personal settings and "African American" in more formal settings. In the U.S. census race definitions, Black and African Americans are citizens and residents of the United States with origins in Sub-Saharan Africa. According to the Office of Management and Budget, the grouping includes individuals who self-identify as African American, as well as persons who emigrated from nations in the Caribbean and Sub-Saharan Africa. The grouping is thus based on geography, and may contradict or misrepresent an individual's self-identification since not all immigrants from Sub-Saharan Africa are "Black". The Census Bureau also notes that these classifications are socio-political constructs and should not be interpreted as scientific or anthropological. According to US Census Bureau data, African immigrants generally do not self-identify as African American. The overwhelming majority of African immigrants identify instead with their own respective ethnicities (~95%). Immigrants from some Caribbean, Central American and South American nations and their descendants may or may not also self-identify with the term. Recent surveys of African Americans using a genetic testing service have found varied ancestries which show different tendencies by region and sex of ancestors. These studies found that on average, African Americans have 73.2–80.9% West African, 18–24% European, and 0.8–0.9% Native American genetic heritage, with large variation between individuals. From the late 19th century, the South used a colloquial term, the "one-drop rule", to classify as black a person of any known African ancestry. This practice of hypodescent was not put into law until the early 20th century. Legally the definition varied from state to state. Racial definition was more flexible in the 18th and 19th centuries before the American Civil War. For instance, President Thomas Jefferson held persons who were legally white (less than 25% black) according to Virginia law at the time, but, because they were born to slave mothers, they were born into slavery, according to the principle of "partus sequitur ventrem", which Virginia adopted into law in 1662. Outside the U.S., some other countries have adopted the one-drop rule, but the definition of who is black and the extent to which the one-drop "rule" applies varies greatly from country to country. The one-drop rule may have originated as a means of increasing the number of black slaves and was maintained as an attempt to keep the white race pure. One of the results of the one-drop rule was the uniting of the African-American community. Some of the most prominent abolitionists and civil-rights activists of the 19th century were multiracial, such as Frederick Douglass, Robert Purvis and James Mercer Langston. They advocated equality for all. The concept of blackness in the United States has been described as the degree to which one associates themselves with mainstream African-American culture, politics, and values. To a certain extent, this concept is not so much about race but more about political orientation, culture and behavior. 
Blackness can be contrasted with "acting white", where black Americans are said to behave with assumed characteristics of stereotypical white Americans with regard to fashion, dialect, taste in music, and possibly, from the perspective of a significant number of black youth, academic achievement. Due to the often political and cultural contours of blackness in the United States, the notion of blackness can also be extended to non-black people. Toni Morrison once described Bill Clinton as the first black President of the United States, because, as she put it, he displayed "almost every trope of blackness". Clinton welcomed the label. Scholar Martin Halpern writes that African American leaders "pleased with Clinton's regular appearances in African American settings and frequent socializing with black friends, his travel to Africa, his decision to defend affirmative action, and his 1997 Initiative on Race designed to promote racial understanding," and also defended Clinton during his impeachment. Halpern wrote that although some black leaders, especially on the left, "were critical of Clinton's deal with Republicans on welfare reform, aspects of his anticrime legislation, and the neglect of the poor in the booming economy, his sensitivity in other respects to black concerns kept civil rights leaders from any serious break" with Clinton, who enjoyed very high approval ratings among African Americans. The question of blackness also arose in the Democrat Barack Obama's 2008 presidential campaign. Commentators have questioned whether Obama, who was elected the first president with black ancestry, is "black enough", contending that his background is not typical because his mother was white American, and his father was a black Kenyan immigrant. Obama chose to identify as black and African-American. In July 2012, Ancestry.com reported on historic and DNA research by its staff that discovered that Obama is likely a descendant through his mother of John Punch, considered by some historians to be the first African slave in the Virginia colony. An indentured servant, he was "bound for life" in 1640 after trying to escape. The story of him and his descendants is that of multi-racial America since it appeared he and his sons married or had unions with white women, likely indentured servants and working-class like them. Their multi-racial children were free because they were born to free English women. Over time, Obama's line of the Bunch family (as they became known) were property owners and continued to "marry white"; they became part of white society, likely by the early to mid-18th century. Race mixing in the United States took place between free colored people, Native Americans and European Americans. In Virginia, negro former slaves who had maintained their freedom married Native American women. 200,000 slaves were shipped from West Africa to Mexico by Spanish conquistadors. The history of these Afro-Mexicans was hidden until 2016. Racism against them and dark skin generally in Mexico and most of Latin America is prevalent. The first Afro-Dominican slaves were shipped to the Dominican Republic by Spanish conquistadors during the Transatlantic slave trade. Spanish conquistadors shipped slaves from West Africa to Puerto Rico. Afro-Puerto Ricans in part trace descent to this colonization of the island. Approximately 12 million people were shipped from Africa to the Americas during the Atlantic slave trade from 1492 to 1888. Of these, 11.5 million of those shipped to South America and the Caribbean. 
Brazil was the largest importer in the Americas, with 5.5 million African slaves imported, followed by the British Caribbean with 2.76 million, the Spanish Caribbean and Spanish Mainland with 1.59 million Africans, and the French Caribbean with 1.32 million. Today their descendants number approximately 150 million in South America and the Caribbean. In addition to skin color, other physical characteristics such as facial features and hair texture are often variously used in classifying peoples as black in South America and the Caribbean. In South America and the Caribbean, classification as black is also closely tied to social status and socioeconomic variables, especially in light of social conceptions of "blanqueamiento" (racial whitening) and related concepts. The concept of race in Brazil is complex. A Brazilian child was never automatically identified with the racial type of one or both of his or her parents, nor were there only two categories to choose from. Between an individual of unmixed West African ancestry and a very light mulatto individual, more than a dozen racial categories were acknowledged, based on various combinations of hair color, hair texture, eye color, and skin color. These types grade into each other like the colors of the spectrum, and no one category stands significantly isolated from the rest. In Brazil, people are classified by appearance, not heredity. Scholars disagree over the effects of social status on racial classifications in Brazil. It is generally believed that achieving upward mobility and education results in individuals being reclassified into a category of lighter skin. The popular claim is that in Brazil, poor whites are considered black and wealthy blacks are considered white. Some scholars disagree, arguing that "whitening" of one's social status may be open to people of mixed race, a large part of the population known as "pardo", but a person perceived as "preto" (black) will continue to be classified as black regardless of wealth or social status. From 1500 to 1850, an estimated 3.5 million captives were forcibly shipped from West/Central Africa to Brazil. The territory received the highest number of slaves of any country in the Americas. Scholars estimate that more than half of the Brazilian population is at least in part descended from these individuals. Brazil has the largest population of African ancestry outside Africa. In contrast to the US, during the slavery period and after, the Portuguese colonial government in Brazil and the later Brazilian government did not pass formal anti-miscegenation or segregation laws. As in other Latin American countries, intermarriage was prevalent during the colonial period and continued afterward. In addition, people of mixed race ("pardo") often tended to marry white spouses, and their descendants became accepted as white. As a result, some of the European-descended population also has West African or Amerindian blood. According to the last census of the 20th century, in which Brazilians could choose from five color/ethnic categories with which they identified, 54% of individuals identified as white, 6.2% identified as black, and 39.5% identified as pardo (brown) — a broad multi-racial category, including tri-racial persons. In the 19th century, a philosophy of racial whitening emerged in Brazil, related to the assimilation of mixed-race people into the white population through intermarriage. Until recently, the government did not keep data on race. 
However, statisticians estimate that in 1835, roughly 50% of the population was "preto" (black; most were enslaved), a further 20% was "pardo" (brown), and 25% white, with the remainder Amerindian. Some classified as pardo were tri-racial. By the 2000 census, demographic changes, including the end of slavery, immigration from Europe and Asia, assimilation of multiracial persons, and other factors, resulted in a population in which 6.2% of the population identified as black, 40% as pardo, and 55% as white. Essentially, most of the black population was absorbed into the multi-racial category by intermixing. A 2007 genetic study found that at least 29% of the middle-class, white Brazilian population had some recent (since 1822 and the end of the colonial period) African ancestry. Because of the acceptance of miscegenation, Brazil has avoided the binary polarization of society into black and white. In addition, it abolished slavery without a civil war. The bitter and sometimes violent racial tensions that have divided the US are notably absent in Brazil. According to the 2010 census, 6.7% of Brazilians said they were black, compared with 6.2% in 2000, and 43.1% said they were racially mixed, up from 38.5%. In 2010, Elio Ferreira de Araujo, Brazil's minister for racial equality, attributed the increases to growing pride among his country's black and indigenous communities. The philosophy of racial democracy in Brazil has drawn some criticism, based on economic issues. Brazil has one of the largest gaps in income distribution in the world. The richest 10% of the population earn 28 times the average income of the bottom 40%. The richest 10 percent is almost exclusively white or predominantly European in ancestry. One-third of the population lives under the poverty line, with blacks and other people of color accounting for 70 percent of the poor. In the United States in 2015, African Americans, including multiracial people, earned 76.8% as much as white people. By contrast, black and mixed-race Brazilians earned on average 58% as much as whites in 2014. The gap in income between blacks and other non-whites is relatively small compared to the large gap between whites and all people of color. Other social factors, such as illiteracy and education levels, show the same patterns of disadvantage for people of color. Some commentators observe that the United States practice of segregation and white supremacy in the South, and discrimination in many areas outside that region, forced many African Americans to unite in the civil rights struggle, whereas the fluid nature of race in Brazil has divided individuals of African descent between those with more or less ancestry and helped sustain an image of the country as an example of post-colonial harmony. This has hindered the development of a common identity among black Brazilians. Though Brazilians of at least partial African heritage make up a large percentage of the population, few blacks have been elected as politicians. The city of Salvador, Bahia, for instance, is 80% people of color, but voters have not elected a mayor of color. Journalists like to say that US cities with black majorities, such as Detroit and New Orleans, have not elected white mayors since the civil rights movement, when the Voting Rights Act of 1965 protected the franchise for minorities, and blacks in the South regained the power to vote for the first time since the turn of the 20th century. New Orleans elected its first black mayor in the 1970s. 
New Orleans elected a white mayor after the widespread disruption and damage of Hurricane Katrina in 2005. Patterns of discrimination against non-whites have led some academics and other activists to advocate for use of the Portuguese term "negro" to encompass all African-descended people, in order to stimulate a "black" consciousness and identity. Afro-Colombians are the second largest African diaspora population in Latin America, after Afro-Brazilians. Most Black Venezuelans descend from people brought directly from Africa as slaves during colonization; others are descendants of immigrants from the Antilles and Colombia. Black Venezuelans took part in the independence movement, and several became heroes. There is a deep-rooted heritage of African culture within Venezuelan culture, as demonstrated in many traditional Venezuelan music and dance forms, such as the Tambor, a musical genre inherited from the blacks of the colony, or Llanera music and the Gaita zuliana, both of which are fusions of the three major peoples that contribute to the cultural heritage. African heritage is also present in the gastronomy. There are entire communities of blacks in the Barlovento zone, as well as in part of Bolívar state and in other small towns; they also live peaceably among the general population in the rest of Venezuela. Currently, blacks represent a relative majority of the Venezuelan population, although many are of mixed ancestry.
https://en.wikipedia.org/wiki?curid=4745
Plague (disease) Plague is an infectious disease caused by the bacterium "Yersinia pestis". Symptoms include fever, weakness and headache, and usually begin one to seven days after exposure. In the bubonic form there is also swelling of lymph nodes, while in the septicemic form tissues may turn black and die, and in the pneumonic form shortness of breath, cough and chest pain may occur. Bubonic and septicemic plague are generally spread by flea bites or handling an infected animal. The pneumonic form is generally spread between people through the air via infectious droplets. Diagnosis is typically by finding the bacterium in fluid from a lymph node, blood or sputum. Those at high risk may be vaccinated. Those exposed to a case of pneumonic plague may be treated with preventive medication. If infected, treatment is with antibiotics and supportive care. Typically antibiotics include a combination of gentamicin and a fluoroquinolone. The risk of death with treatment is about 10%, while without it is about 70%. Globally, about 600 cases are reported a year. In 2017, the countries with the most cases included the Democratic Republic of the Congo, Madagascar and Peru. In the United States, infections occasionally occur in rural areas, where the bacteria are believed to circulate among rodents. Plague has historically occurred in large outbreaks, with the best known being the Black Death in the 14th century, which resulted in more than 50 million deaths. When a flea bites a human and contaminates the wound with regurgitated blood, the plague-causing bacteria are passed into the tissue. "Y. pestis" can reproduce inside cells, so even if phagocytosed, the bacteria can still survive. Once in the body, the bacteria can enter the lymphatic system, which drains interstitial fluid. Plague bacteria secrete several toxins, one of which is known to cause beta-adrenergic blockade. "Y. pestis" spreads through the lymphatic vessels of the infected human until it reaches a lymph node, where it causes acute lymphadenitis. The swollen lymph nodes form the characteristic buboes associated with the disease, and autopsies of these buboes have revealed them to be mostly hemorrhagic or necrotic. If the lymph node is overwhelmed, the infection can pass into the bloodstream, causing "secondary septicemic plague", and if the lungs are seeded, it can cause "secondary pneumonic plague". Lymphatics ultimately drain into the bloodstream, so the plague bacteria may enter the blood and travel to almost any part of the body. In septicemic plague, bacterial endotoxins cause disseminated intravascular coagulation (DIC), causing tiny clots throughout the body and possibly ischemic necrosis (tissue death due to lack of circulation/perfusion to that tissue) from the clots. DIC results in depletion of the body's clotting resources, so that it can no longer control bleeding. Consequently, there is bleeding into the skin and other organs, which can cause a red and/or black patchy rash and hemoptysis/hematemesis (coughing up or vomiting of blood). There are bumps on the skin that look somewhat like insect bites; these are usually red, and sometimes white in the center. Untreated, septicemic plague is usually fatal. Early treatment with antibiotics reduces the mortality rate to between 4 and 15 percent. People who die from this form of plague often die on the same day symptoms first appear. The pneumonic form of plague arises from infection of the lungs. 
It causes coughing and thereby produces airborne droplets that contain bacterial cells and are likely to infect anyone inhaling them. The incubation period for pneumonic plague is short, usually two to four days, but sometimes just a few hours. The initial signs are indistinguishable from those of several other respiratory illnesses; they include headache, weakness and spitting or vomiting of blood. The course of the disease is rapid; unless diagnosed and treated soon enough, typically within a few hours, death may follow in one to six days; in untreated cases mortality is nearly 100%. Transmission of "Y. pestis" to an uninfected individual is possible by any of several means. "Yersinia pestis" circulates in animal reservoirs, particularly in rodents, in the natural foci of infection found on all continents except Australia. The natural foci of plague are situated in a broad belt in the tropical and sub-tropical latitudes and the warmer parts of the temperate latitudes around the globe, between the parallels 55 degrees North and 40 degrees South. Contrary to popular belief, rats did not directly start the spread of the bubonic plague. It is mainly a disease of the fleas ("Xenopsylla cheopis") that infested the rats, making the rats themselves the first victims of the plague. Rodent-borne infection in a human occurs when a person is bitten by a flea that has been infected by biting a rodent that itself has been infected by the bite of a flea carrying the disease. The bacteria multiply inside the flea, sticking together to form a plug that blocks its stomach and causes it to starve. The flea then bites a host and continues to feed, even though it cannot quell its hunger, and consequently the flea vomits blood tainted with the bacteria back into the bite wound. The bubonic plague bacterium then infects a new person, and the flea eventually dies from starvation. Serious outbreaks of plague are usually started by other disease outbreaks in rodents, or a rise in the rodent population. A study of a 1665 outbreak of plague in the village of Eyam in England's Derbyshire Dales—which isolated itself during the outbreak, facilitating modern study—found that three-quarters of cases are likely to have been due to human-to-human transmission, especially within families, a much bigger proportion than previously thought. Since human plague is rare in most parts of the world as of 2020, routine vaccination is not needed except for those at particularly high risk of exposure; nor is it indicated merely for people living in areas with enzootic plague, meaning areas where it occurs at regular, predictable rates in populations, such as the western United States. It is not even indicated for most travellers to countries with known recent reported cases, particularly if their travel is limited to urban areas with modern hotels. The CDC thus only recommends vaccination for: (1) all laboratory and field personnel who are working with "Y. pestis" organisms resistant to antimicrobials; (2) people engaged in aerosol experiments with "Y. pestis"; and (3) people engaged in field operations in areas with enzootic plague where preventing exposure is not possible (such as some disaster areas). A systematic review by the Cochrane Collaboration found no studies of sufficient quality to make any statement on the efficacy of the vaccine. If diagnosed in time, the various forms of plague are usually highly responsive to antibiotic therapy. The antibiotics often used are streptomycin, chloramphenicol and tetracycline. 
Amongst the newer generation of antibiotics, gentamicin and doxycycline have proven effective in monotherapeutic treatment of plague. The plague bacterium could develop drug resistance and again become a major health threat. One case of a drug-resistant form of the bacterium was found in Madagascar in 1995. Further outbreaks in Madagascar were reported in November 2014 and October 2017. Globally, about 600 cases are reported a year. In 2017, the countries with the most cases included the Democratic Republic of the Congo, Madagascar and Peru. Plague has historically occurred in large outbreaks, with the best known being the Black Death in the 14th century, which resulted in more than 50 million deaths. Plague has a long history as a biological weapon. Historical accounts from ancient China and medieval Europe detail the use of infected animal carcasses, such as cows or horses, and human carcasses, by the Xiongnu/Huns, Mongols, Turks and other groups, to contaminate enemy water supplies. Han Dynasty General Huo Qubing is recorded to have died of such a contamination while engaging in warfare against the Xiongnu. Plague victims were also reported to have been tossed by catapult into cities under siege. In 1347, the Genoese possession of Caffa, a great trade emporium on the Crimean peninsula, came under siege by an army of Mongol warriors of the Golden Horde under the command of Janibeg. After a protracted siege during which the Mongol army was reportedly withering from the disease, they decided to use the infected corpses as a biological weapon. The corpses were catapulted over the city walls, infecting the inhabitants. This event might have led to the transfer of the plague (Black Death) via their ships into the south of Europe, possibly explaining its rapid spread. During World War II, the Japanese Army developed weaponised plague, based on the breeding and release of large numbers of fleas. During the Japanese occupation of Manchuria, Unit 731 deliberately infected Chinese, Korean and Manchurian civilians and prisoners of war with the plague bacterium. These subjects, termed "maruta" or "logs", were then studied, some by dissection, others by vivisection while still conscious. Members of the unit such as Shiro Ishii were exonerated at the Tokyo tribunal by Douglas MacArthur, but 12 of them were prosecuted in the Khabarovsk War Crime Trials in 1949, during which some admitted to having spread bubonic plague within a radius around the city of Changde. Ishii developed bombs containing live mice and fleas, with very small explosive loads, to deliver the weaponized microbes; the use of a ceramic, rather than metal, casing for the warhead overcame the problem of the explosive killing the infected animals and insects. While no records survive of the actual usage of the ceramic shells, prototypes exist and are believed to have been used in experiments during WWII. After World War II, both the United States and the Soviet Union developed means of weaponising pneumonic plague. Experiments included various delivery methods, vacuum drying, sizing the bacterium, developing strains resistant to antibiotics, combining the bacterium with other diseases (such as diphtheria), and genetic engineering. Scientists who worked in USSR bio-weapons programs have stated that the Soviet effort was formidable and that large stocks of weaponised plague bacteria were produced. Information on many of the Soviet and US projects is largely unavailable. Aerosolized pneumonic plague remains the most significant threat. 
The plague can be treated easily with antibiotics, and some countries, such as the United States, keep large supplies on hand in case such an attack should occur, making the threat less severe.
https://en.wikipedia.org/wiki?curid=4746
Baudot code The Baudot code is an early character encoding for telegraphy invented by Émile Baudot in the 1870s. It was the predecessor to the International Telegraph Alphabet No. 2 (ITA2), the most common teleprinter code in use until the advent of ASCII. Each character in the alphabet is represented by a series of five bits, sent over a communication channel such as a telegraph wire or a radio signal. The unit of symbol rate, the baud, is derived from Baudot's name. Baudot invented his original code in 1870 and patented it in 1874. It was a five-bit code, with equal on and off intervals, which allowed for transmission of the Roman alphabet and included punctuation and control signals. It was based on an earlier code developed by Carl Friedrich Gauss and Wilhelm Weber in 1834. It was a Gray code (when vowels and consonants are sorted in their alphabetical order); however, the code by itself was not patented (only the machine was), because French patent law does not allow concepts to be patented. Baudot's original code was adapted to be sent from a manual keyboard, and no teleprinter equipment was ever constructed that used it in its original form. The code was entered on a keyboard which had just five piano-type keys and was operated using two fingers of the left hand and three fingers of the right hand. Once the keys had been pressed, they were locked down until mechanical contacts in a distributor unit passed over the sector connected to that particular keyboard, at which point the keyboard was unlocked, ready for the next character to be entered, with an audible click (known as the "cadence signal") to warn the operator. Operators had to maintain a steady rhythm, and the usual speed of operation was 30 words per minute. The table "shows the allocation of the Baudot code which was employed in the British Post Office for continental and inland services. A number of characters in the continental code are replaced by fractionals in the inland code. Code elements 1, 2 and 3 are transmitted by keys 1, 2 and 3, and these are operated by the first three fingers of the right hand. Code elements 4 and 5 are transmitted by keys 4 and 5, and these are operated by the first two fingers of the left hand." Baudot's code became known as the International Telegraph Alphabet No. 1 (ITA1). It is no longer used. In 1901, Baudot's code was modified by Donald Murray (1865–1945), prompted by his development of a typewriter-like keyboard. The Murray system employed an intermediate step: a keyboard perforator, which allowed an operator to punch a paper tape, and a tape transmitter for sending the message from the punched tape. At the receiving end of the line, a printing mechanism would print on a paper tape, and/or a reperforator could be used to make a perforated copy of the message. As there was no longer a connection between the operator's hand movement and the bits transmitted, there was no concern about arranging the code to minimize operator fatigue; instead, Murray designed the code to minimize wear on the machinery, assigning the code combinations with the fewest punched holes to the most frequently used characters. For example, the one-hole letters are E and T. The ten two-hole letters are AOINSHRDLZ, very similar to the "Etaoin shrdlu" order used in Linotype machines. Ten more letters, BCGFJMPUWY, have three holes each, and the four-hole letters are VXKQ. 
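Murray's wear-minimizing assignment can be checked directly. The following minimal sketch (in Python; the code values are the standard ITA2 letters-case assignments, which descend from Murray's code) groups the letters by their number of punched holes, that is, set bits:

```python
# Standard ITA2 letters-case codes, five bits per letter. Only the number
# of set bits (punched holes) matters here, so the bit order is irrelevant.
ITA2_LETTERS = {
    'A': 0b00011, 'B': 0b11001, 'C': 0b01110, 'D': 0b01001, 'E': 0b00001,
    'F': 0b01101, 'G': 0b11010, 'H': 0b10100, 'I': 0b00110, 'J': 0b01011,
    'K': 0b01111, 'L': 0b10010, 'M': 0b11100, 'N': 0b01100, 'O': 0b11000,
    'P': 0b10110, 'Q': 0b10111, 'R': 0b01010, 'S': 0b00101, 'T': 0b10000,
    'U': 0b00111, 'V': 0b11110, 'W': 0b10011, 'X': 0b11101, 'Y': 0b10101,
    'Z': 0b10001,
}

# Group letters by hole (set-bit) count.
by_holes = {}
for letter, code in ITA2_LETTERS.items():
    by_holes.setdefault(bin(code).count('1'), []).append(letter)

for holes in sorted(by_holes):
    print(holes, ''.join(sorted(by_holes[holes])))
# Prints (alphabetically within each group):
# 1 ET
# 2 ADHILNORSZ
# 3 BCFGJMPUWY
# 4 KQVX
```

The output matches the groups described above: E and T with one hole, the AOINSHRDLZ set with two, the BCGFJMPUWY set with three, and VXKQ with four.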
The Murray code also introduced what became known as "format effectors" or "control characters": the CR (Carriage Return) and LF (Line Feed) codes. A few of Baudot's codes moved to the positions where they have stayed ever since: the NULL or BLANK and the DEL code. NULL/BLANK was used as an idle code for when no messages were being sent, but the same code was used to encode the space separation between words. Sequences of DEL codes (fully punched columns) were used at the start or end of messages or between them, allowing easy separation of distinct messages. (BELL codes could be inserted in those sequences to signal to the remote operator that a new message was coming or that transmission of a message was terminated.) Early British Creed machines also used the Murray system.

Murray's code was adopted by Western Union, which used it until the 1950s, with a few changes that consisted of omitting some characters and adding more control codes. An explicit SPC (space) character was introduced in place of the BLANK/NULL, and a new BEL code rang a bell or otherwise produced an audible signal at the receiver. Additionally, the WRU or "Who aRe yoU?" code was introduced, which caused a receiving machine to send an identification stream back to the sender.

In 1924, the CCITT introduced the International Telegraph Alphabet No. 2 (ITA2) code as an international standard, which was based on the Western Union code with some minor changes. The US standardized on a version of ITA2 called the American Teletypewriter code (US TTY), which was the basis for 5-bit teletypewriter codes until the debut of 7-bit ASCII in 1963. Some code points (marked blue in the table) were reserved for national-specific usage.

The code position assigned to Null was in fact used only for the idle state of teleprinters. During long periods of idle time, the impulse rate was not synchronized between the two devices (which could even be powered off or not permanently interconnected on commuted phone lines). To start a message it was first necessary to calibrate the impulse rate: a sequence of regularly timed "mark" pulses (1), sent in groups of five, which could also be detected by simple passive electronic devices to turn on the teleprinter. This sequence of pulses generated a series of Erasure/Delete characters while also initializing the state of the receiver to the Letters shift mode. However, the first pulse could be lost, so this power-on procedure could then be terminated by a single Null immediately followed by an Erasure/Delete character.

To preserve the synchronization between devices, the Null code could not be used arbitrarily in the middle of messages (this was an improvement over the initial Baudot system, where spaces were not explicitly differentiated, so it was difficult to maintain the pulse counters for repeating spaces on teleprinters). It was, however, possible to resynchronize devices at any time by sending a Null in the middle of a message (immediately followed by an Erasure/Delete/LS control if followed by a letter, or by an FS control if followed by a figure). Sending Null controls also did not cause the paper band to advance to the next row (as nothing was punched), so this saved precious lengths of punchable paper band. On the other hand, the Erasure/Delete/LS control code was always punched and always shifted to the (initial) letters mode. According to some sources, the Null code point was reserved for country-internal usage only.
The Shift to Letters code (LS) is also usable as a way to cancel/delete text from a punched tape after it has been read, allowing the safe destruction of a message before recycling the punched band. Functionally, it can play the same filler role as the Delete code in ASCII (or other 7-bit and 8-bit encodings, including EBCDIC for punched cards): after codes in a fragment of text have been replaced by an arbitrary number of LS codes, what follows is still preserved and decodable. It can also be used as an initiator, to make sure that the decoding of the first code will not give a digit or another symbol from the figures page (because the Null code can be arbitrarily inserted near the end or beginning of a punch band and has to be ignored, whereas the Space code is significant in text).

The cells marked as reserved for extensions (which use the LS code a second time, just after the first LS code, to shift from the figures page to the letters page) have been defined to shift into a new mode. In this new mode, the letters page contains only lowercase letters, but retains access to a third code page for uppercase letters, either by encoding a single letter (by sending LS before that letter) or by locking (with FS+LS) for an unlimited number of capital letters or digits before unlocking (with a single LS) to return to lowercase mode. The cell marked as "Reserved" is also usable (using the FS code from the figures shift page) to switch the page of figures (which normally contains digits and "national" lowercase letters or symbols) to a fourth page (where national letters are uppercased and other symbols may be encoded).

ITA2 is still used in telecommunications devices for the deaf (TDD), telex, and some amateur radio applications, such as radioteletype ("RTTY"). ITA2 is also used in Enhanced Broadcast Solution (an early 21st-century financial protocol specified by Deutsche Börse) to reduce the character encoding footprint. Nearly all 20th-century teleprinter equipment used Western Union's code, ITA2, or variants thereof. Radio amateurs casually, but incorrectly, call ITA2 and its variants "Baudot", and even the American Radio Relay League's Amateur Radio Handbook does so, though more recent editions correctly identify the tabulated code as ITA2. The values shown in each cell are the Unicode codepoints, given for comparison. Meteorologists used a variant of ITA2 with the figures-case symbols, except for the ten digits, BEL and a few other characters, replaced by weather symbols.

NOTE: This table presumes the space called "1" by Baudot and Murray is rightmost and least significant. The way the transmitted bits were packed into larger codes varied by manufacturer; the most common solution allocates the bits from the least significant bit towards the most significant bit (leaving the three most significant bits of a byte unused).

In ITA2, characters are expressed using five bits. ITA2 uses two code sub-sets, the "letter shift" (LTRS) and the "figure shift" (FIGS). The FIGS character (11011) signals that the following characters are to be interpreted as being in the FIGS set, until this is reset by the LTRS (11111) character. In use, the LTRS or FIGS shift key is pressed and released, transmitting the corresponding shift character to the other machine. The desired letters or figures characters are then typed. Unlike on a typewriter or modern computer keyboard, the shift key isn't kept depressed whilst the corresponding characters are typed.
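Because the same five bits name different characters on the two pages, any ITA2 decoder is necessarily a small state machine. Below is a minimal sketch, not a faithful implementation of any particular teleprinter: the letters page is standard ITA2, the figures page is abbreviated to the digits plus the shift-invariant codes (punctuation assignments varied between national variants), and edge cases such as "unshift on space" are ignored.

```python
# Minimal ITA2 decoder sketch: a state machine tracking the current page.
FIGS, LTRS = 0b11011, 0b11111

# Letters page, indexed by 5-bit code value (standard ITA2). The \0
# placeholders sit at NUL, FIGS and LTRS, which never print a character.
LTRS_PAGE = "\0E\rA SIU\nDRJNFCKTZLWHYPQOBG\0MXV\0"

# Figures page, abbreviated to the digits plus the invariant codes
# (space, CR, LF); punctuation differed between national variants.
FIGS_PAGE = {1: '3', 6: '8', 7: '7', 10: '4', 16: '5', 19: '2',
             21: '6', 22: '0', 23: '1', 24: '9',
             2: '\r', 4: ' ', 8: '\n'}

def decode_ita2(codes):
    """Decode a sequence of 5-bit ITA2 code values to text."""
    page, out = 'ltrs', []
    for code in codes:
        if code == FIGS:
            page = 'figs'            # shift characters print nothing
        elif code == LTRS:
            page = 'ltrs'            # also serves as erase/delete
        elif page == 'ltrs':
            out.append(LTRS_PAGE[code])
        else:
            out.append(FIGS_PAGE.get(code, '?'))
    return ''.join(out)

# N, R, space, FIGS, then the R-position code now reads as the digit 4.
print(decode_ita2([0b01100, 0b01010, 0b00100, FIGS, 0b01010]))  # NR 4
```

Note how the final code value 01010 is printed as "R" before the FIGS character and as "4" after it; an encoder has to insert the FIGS and LTRS shifts in exactly the same way.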
"ENQuiry" will trigger the other machine's answerback. It means "Who are you?" CR is carriage return, LF is line feed, BEL is the bell character which rang a small bell (often used to alert operators to an incoming message), SP is space, and NUL is the null character (blank tape). Note: the binary conversions of the codepoints are often shown in reverse order, depending on (presumably) from which side one views the paper tape. Note further that the "control" characters were chosen so that they were either symmetric or in useful pairs so that inserting a tape "upside down" did not result in problems for the equipment and the resulting printout could be deciphered. Thus FIGS (11011), LTRS (11111) and space (00100) are invariant, while CR (00010) and LF (01000), generally used as a pair, are treated the same regardless of order by page printers. LTRS could also be used to overpunch characters to be deleted on a paper tape (much like DEL in 7-bit ASCII). The sequence "RYRYRY..." is often used in test messages, and at the start of every transmission. Since R is 01010 and Y is 10101, the sequence exercises much of a teleprinter's mechanical components at maximum stress. Also, at one time, fine-tuning of the receiver was done using two coloured lights (one for each tone). 'RYRYRY...' produced 0101010101..., which made the lights glow with equal brightness when the tuning was correct. This tuning sequence is only useful when ITA2 is used with two-tone FSK modulation, such as is commonly seen in radioteletype (RTTY) usage. US implementations of Baudot code may differ in the addition of a few characters, such as #, & on the FIGS layer. The Russian version of Baudot code (MTK-2) used three shift modes; the Cyrillic letter mode was activated by the character (00000). Because of the larger number of characters in the Cyrillic alphabet, the characters !, &, £ were omitted and replaced by Cyrillics, and BEL has the same code as Cyrillic letter Ю. The Cyrillic letters Ъ and Ё are omitted, and Ч is merged with the numeral 4.
https://en.wikipedia.org/wiki?curid=4748
Blu Tack Blu Tack is a reusable putty-like pressure-sensitive adhesive produced by Bostik, commonly used to attach lightweight objects (such as posters or sheets of paper) to walls, doors or other dry surfaces. Traditionally blue, it is also available in other colours. Generic versions of the product are also available from other manufacturers. The spelling now used is without a hyphen.

The composition is described as a synthetic rubber compound without hazardous properties under normal conditions. It can be swallowed without harm and is noncarcinogenic. It is insoluble in water and denser than water. The material is not flammable, but emits carbon dioxide and carbon monoxide when exposed to fire or high temperatures. As of 2015, Bostik was manufacturing around 100 tonnes of Blu Tack weekly at its Leicester factory.

Blu Tack was originally developed in 1969 as an accidental by-product of an attempt to develop a new sealant using chalk powder, rubber and oil. The name of the inventor is unknown. Originally Blu Tack was white, but consumer research showed fears that children might mistake it for chewing gum, and a blue colouring was added. In the United Kingdom in March 2008, 20,000 numbered packs of pink Blu Tack were made available to help raise money for Breast Cancer Campaign, with 10 pence from each pack going to the charity. The formulation was slightly altered to retain complete consistency with its blue counterpart. Since then, many coloured variations have been made, including red and white, yellow and a green Halloween pack.

Similar products of various colours are made by many manufacturers, including Faber-Castell's "Tack-it", Henkel's "Fun-Tak", UHU's "Poster Putty", "Sticky Tack" and "Gummy Sticker", Pritt's "Sticky Stuff", Bostik's "Prestik" and Elmer's "Poster Tack". Plasti-Tak by Brooks Manufacturing Company appears to pre-date Blu Tack, with a trademark registration in 1964. Versions of the product are also sold under the generic names "adhesive putty" and "mounting putty". The generic trademark or common name for mounting putty varies by region: it is known as "Patafix" in France, Italy, and Portugal; by a name meaning "teacher's chewing gum" in Iceland; by names meaning "attachment paste" in Sweden; and in South Africa by an Afrikaans word literally translated as "wonder glue".

Like all poster putties, Blu Tack provides an alternative to the artist's traditional kneaded eraser, having a superior grip and plasticity. Blu Tack can be finely shaped and worked into even very small areas. Like kneaded erasers, it can be stretched and kneaded to freshen its working surfaces. Blu Tack is also used for sculpture. In 2007, artist Elizabeth Thompson created a sculpture of a house spider using Blu Tack over a wire frame; it took around 4,000 packs and was exhibited at London Zoo. Other artists have created works from the material, including stop-motion animation.

Blu Tack can be used as a damping agent for sound and vibration applications, due to its low amplitude response properties. A small amount of Blu Tack can be placed on the head of a screw to hold it onto a screwdriver. Blu Tack can also be used to clean in-ear headphones, by gently pressing the putty into the mesh of the earpieces. Blu Tack can be regarded as a comfortable alternative to over-the-counter ear plugs for the attenuation of everyday sound.
https://en.wikipedia.org/wiki?curid=4749
Bacillus Bacillus (Latin "stick") is a genus of Gram-positive, rod-shaped bacteria, a member of the phylum "Firmicutes", with 266 named species. The term is also used to describe the shape (rod) of certain bacteria, and the plural "Bacilli" is the name of the class of bacteria to which this genus belongs. "Bacillus" species can be either obligate aerobes, which depend on oxygen, or facultative anaerobes, which can grow in its absence. Cultured "Bacillus" species test positive for the enzyme catalase if oxygen has been used or is present.

"Bacillus" species can form oval endospores and can remain in this dormant state for years. The endospore of one species from Morocco is reported to have survived being heated to 420 °C. Endospore formation is usually triggered by a lack of nutrients: the bacterium divides within its cell wall, and one side then engulfs the other. Endospores are not true spores (i.e., not an offspring), and only one endospore is formed per cell. Endospore formation originally defined the genus, but not all such species are closely related, and many species have been moved to other genera of the "Firmicutes". The spores are resistant to heat, cold, radiation, desiccation, and disinfectants; because of this, they are difficult to eliminate from medical and pharmaceutical materials and are a frequent cause of contamination. "Bacillus anthracis" needs oxygen to sporulate; this constraint has important consequences for epidemiology and control. In vivo, "B. anthracis" produces a polypeptide (polyglutamic acid) capsule that protects it from phagocytosis.

The genera "Bacillus" and "Clostridium" constitute the family "Bacillaceae". Species are identified by using morphologic and biochemical criteria. "Bacillus" species are well known in the food industries as troublesome spoilage organisms. Ubiquitous in nature, "Bacillus" includes both free-living (nonparasitic) species and two parasitic pathogenic species, which are medically significant: "B. anthracis" causes anthrax, and "B. cereus" causes food poisoning. Many species of "Bacillus" can produce copious amounts of enzymes, which are used in various industries, such as in the production of alpha amylase used in starch hydrolysis and the protease subtilisin used in detergents. "B. subtilis" is a valuable model for bacterial research. Some "Bacillus" species can synthesize and secrete lipopeptides, in particular surfactins and mycosubtilins.

The cell wall of "Bacillus" is a structure on the outside of the cell that forms the second barrier between the bacterium and the environment, and at the same time maintains the rod shape and withstands the pressure generated by the cell's turgor. The cell wall is made of teichoic and teichuronic acids. "B. subtilis" is the first bacterium for which the role of an actin-like cytoskeleton in cell shape determination and peptidoglycan synthesis was identified, and for which the entire set of peptidoglycan-synthesizing enzymes was localized. The role of the cytoskeleton in shape generation and maintenance is important.

"Bacillus" species are rod-shaped, endospore-forming, aerobic or facultatively anaerobic, Gram-positive bacteria; in some species, cultures may turn Gram-negative with age. The many species of the genus exhibit a wide range of physiologic abilities that allow them to live in every natural environment.
The genus "Bacillus" was named in 1835 by Christian Gottfried Ehrenberg, to contain rod-shaped (bacillus) bacteria; seven years earlier he had named the genus "Bacterium". "Bacillus" was later amended by Ferdinand Cohn to further describe them as spore-forming, Gram-positive, aerobic or facultatively anaerobic bacteria. Like other genera associated with the early history of microbiology, such as "Pseudomonas" and "Vibrio", the 266 species of "Bacillus" are ubiquitous. The genus has a very large ribosomal 16S diversity.

An easy way to isolate "Bacillus" species is by placing nonsterile soil in a test tube with water, shaking, placing in melted mannitol salt agar, and incubating at room temperature for at least a day. Cultured colonies are usually large, spreading, and irregularly shaped. Under the microscope, the "Bacillus" cells appear as rods, and a substantial portion of the cells usually contain oval endospores at one end, making them bulge.

Three proposals have been presented as representing the phylogeny of the genus "Bacillus". The first proposal, presented in 2003, is a "Bacillus"-specific study, with the most diversity covered using 16S and the ITS regions. It divides the genus into 10 groups, including the nested genera "Paenibacillus", "Brevibacillus", "Geobacillus", "Marinibacillus" and "Virgibacillus". The second proposal, presented in 2008, constructed a 16S (and 23S, if available) tree of all validated species. The genus "Bacillus" contains a very large number of nested taxa and, in both the 16S and 23S trees, is paraphyletic to the Lactobacillales ("Lactobacillus", "Streptococcus", "Staphylococcus", "Listeria", etc.), due to "Bacillus coahuilensis" and others. A third proposal, presented in 2010, was a gene concatenation study; it found results similar to the 2008 proposal, but with a much more limited number of species in terms of groups. (This scheme used "Listeria" as an outgroup, so in light of the ARB tree, it may be "inside-out".) One clade, formed by "Bacillus anthracis", "Bacillus cereus", "Bacillus mycoides", "Bacillus pseudomycoides", "Bacillus thuringiensis", and "Bacillus weihenstephanensis" under the 2011 classification standards, should be a single species (within 97% 16S identity), but due to medical reasons, they are considered separate species (an issue also present for four species of "Shigella" and "Escherichia coli").

"Bacillus" species are ubiquitous in nature, e.g. in soil. They can occur in extreme environments such as high pH ("B. alcalophilus"), high temperature ("B. thermophilus"), and high salt concentrations ("B. halodurans"). "B. thuringiensis" produces a toxin that can kill insects and thus has been used as an insecticide. "B. siamensis" produces antimicrobial compounds that inhibit plant pathogens, such as the fungi "Rhizoctonia solani" and "Botrytis cinerea", and promotes plant growth through volatile emissions. Some species of "Bacillus" are naturally competent for DNA uptake by transformation. Many "Bacillus" species are able to secrete large quantities of enzymes. "Bacillus amyloliquefaciens" is the source of the natural antibiotic protein barnase (a ribonuclease), of alpha amylase used in starch hydrolysis, of the protease subtilisin used with detergents, and of the BamH1 restriction enzyme used in DNA research. A portion of the "Bacillus thuringiensis" genome was incorporated into corn (and cotton) crops; the resulting GMOs are resistant to some insect pests.
"Bacillus" species continue to be dominant bacterial workhorses in microbial fermentations. "Bacillus subtilis" (natto) is the key microbial participant in the ongoing production of the soya-based traditional natto fermentation, and some "Bacillus" species are on the Food and Drug Administration's GRAS (generally regarded as safe) list. The capacity of selected "Bacillus" strains to produce and secrete large quantities (20-25 g/L) of extracellular enzymes has placed them among the most important industrial enzyme producers. The ability of different species to ferment in the acid, neutral, and alkaline pH ranges, combined with the presence of thermophiles in the genus, has led to the development of a variety of new commercial enzyme products with the desired temperature, pH activity, and stability properties to address a variety of specific applications. Classical mutation and (or) selection techniques, together with advanced cloning and protein engineering strategies, have been exploited to develop these products. Efforts to produce and secrete high yields of foreign recombinant proteins in "Bacillus" hosts initially appeared to be hampered by the degradation of the products by the host proteases. Recent studies have revealed that the slow folding of heterologous proteins at the membrane-cell wall interface of Gram-positive bacteria renders them vulnerable to attack by wall-associated proteases. In addition, the presence of thiol-disulphide oxidoreductases in "B. subtilis" may be beneficial in the secretion of disulphide-bond-containing proteins. Such developments from our understanding of the complex protein translocation machinery of Gram-positive bacteria should allow the resolution of current secretion challenges and make "Bacillus" species preeminent hosts for heterologous protein production. "Bacillus" strains have also been developed and engineered as industrial producers of nucleotides, the vitamin riboflavin, the flavor agent ribose, and the supplement poly-gamma-glutamic acid. With the recent characterization of the genome of "B. subtilis" 168 and of some related strains, "Bacillus" species are poised to become the preferred hosts for the production of many new and improved products as we move through the genomic and proteomic era. "Bacillus subtilis" is one of the best understood prokaryotes, in terms of molecular and cellular biology. Its superb genetic amenability and relatively large size have provided the powerful tools required to investigate a bacterium from all possible aspects. Recent improvements in fluorescent microscopy techniques have provided novel insight into the dynamic structure of a single cell organism. Research on "B. subtilis" has been at the forefront of bacterial molecular biology and cytology, and the organism is a model for differentiation, gene/protein regulation, and cell cycle events in bacteria.
https://en.wikipedia.org/wiki?curid=4751
Brasília Brasília is the federal capital of Brazil and seat of government of the Federal District. The city is located atop the Brazilian highlands in the country's center-western region. It was founded on April 21, 1960, to serve as the new national capital. Brasília is estimated to be Brazil's third-most populous city. Among major Latin American cities, it has the highest GDP per capita.

Brasília was planned and developed by Lúcio Costa, Oscar Niemeyer and Joaquim Cardozo in 1956, in a scheme to move the capital from Rio de Janeiro to a more central location. The landscape architect was Roberto Burle Marx. The city's design divides it into numbered blocks as well as sectors for specified activities, such as the Hotel Sector, the Banking Sector, and the Embassy Sector. Brasília was chosen as a UNESCO World Heritage Site due to its modernist architecture and uniquely artistic urban planning. It was named "City of Design" by UNESCO in October 2017 and has been part of the Creative Cities Network since then.

All three branches of Brazil's federal government are centered in the city: executive, legislative and judiciary. Brasília also hosts 124 foreign embassies. The city's international airport connects it to all other major Brazilian cities and some international destinations, and is the third-busiest airport in Brazil. Brasília is the second most populous Portuguese-speaking capital city. It was one of the main host cities of the 2014 FIFA World Cup and hosted some of the football matches during the 2016 Summer Olympics; it also hosted the 2013 FIFA Confederations Cup.

The city has a unique status in Brazil, as it is an administrative division rather than a legal municipality like other cities in Brazil. Although Brasília is used as a synonym for the Federal District through synecdoche, the Federal District is composed of 31 administrative regions, only one of which is the area of the originally planned city, also called Plano Piloto. The rest of the Federal District is considered by IBGE to make up Brasília's metro area.

Brazil's first capital was Salvador; in 1763 the capital was moved to Rio de Janeiro, where it remained until 1960. During this period, resources tended to be centered in Brazil's southeast region, and most of the country's population was concentrated near its Atlantic coast. Brasília's geographically central location fostered a more regionally neutral federal capital. An article of the country's first republican constitution, dated 1891, states that the capital should be moved from Rio de Janeiro to a place close to the country's center. The plan was conceived in 1827 by José Bonifácio, an advisor to Emperor Pedro I. He presented a plan to the General Assembly of Brazil for a new city called Brasília, with the idea of moving the capital westward from the heavily populated southeastern corridor. The bill was not enacted because Pedro I dissolved the Assembly.

According to legend, the Italian saint Don Bosco in 1883 had a dream in which he described a futuristic city that roughly fitted Brasília's location. In Brasília today, many references to Bosco, who founded the Salesian order, are found throughout the city, and one church parish in the city bears his name. Juscelino Kubitschek was elected President of Brazil in 1955. Upon taking office in January 1956, in fulfilment of his campaign pledge, he initiated the planning and construction of the new capital. The following year, an international jury selected Lúcio Costa's plan to guide the construction of Brazil's new capital, Brasília.
Costa was a student of the famous modernist architect Le Corbusier, and some of modernism's architectural features can be found in his plan. Costa's plan was not as detailed as some of the plans presented by other architects and city planners; it did not include land use schedules, models, population charts or mechanical drawings. However, it was chosen by five out of six jurors because it had the features required to align the growth of a capital city. Even though the initial plan was transformed over time, it oriented much of the construction and most of its features survived.

Brasília's accession as the new capital and its designation for the development of an extensive interior region inspired the symbolism of the plan. Costa used a cross-axial design indicating the possession and conquest of this new place with a cross, often likened to a dragonfly, an airplane or a bird. Costa's plan included two principal components, the Monumental Axis (east to west) and the Residential Axis (north to south). The Monumental Axis was assigned political and administrative activities and is considered the body of the city, with the style and simplicity of its buildings, oversized scales, and broad vistas and heights producing the idea of monumentality. This axis includes the various ministries, the national congress, the presidential palace, the supreme court building and the television and radio tower.

The Residential Axis was intended to contain areas with an intimate character and is considered the most important achievement of the plan. It was designed for housing and associated functions such as local commerce, schooling, recreation and churches, and consists of 96 superblocks limited to six-story buildings and 12 additional superblocks limited to three-story buildings. Costa's intention with the superblocks was to have small, self-contained and self-sufficient neighborhoods and uniform buildings with apartments of two or three different categories, where he envisioned the integration of upper and middle classes sharing the same residential area. However, he did not foresee the city's population growth; the capacity limit of his plan later contributed to the formation of many favelas: poorer, more densely populated satellite cities around Brasília, peopled by migrants from other places in the country.

The urban design of the communal apartment blocks was based on Le Corbusier's Ville Radieuse of 1935, and the superblocks on the North American Radburn layout from 1929. Visually, the blocks were intended to appear absorbed by the landscape because they were isolated by a belt of tall trees and lower vegetation. In an attempt to make Brazil more equitable, Costa also designed housing for the working classes that was separated from the upper- and middle-class housing and was visually different, with the intention of avoiding slums ("favelas") in the urban periphery. The city has nonetheless been accused of being a space where individuals are oppressed and alienated by a form of spatial segregation.

One of the main objectives of the plan was to allow the free flow of automobile traffic. The plan included lanes of traffic in a north–south direction (seven for each direction) for the Monumental Axis and three arterials (the W3, the Eixo and the L2) for the Residential Axis; the cul-de-sac access roads of the superblocks were planned to be the end of the main flow of traffic. The heavy emphasis on automobile traffic reflected the architect's desire to express the concept of modernity at every level.
Though automobiles were invented prior to the 20th century, their mass production in the early 20th century made them widely available, and they became a symbol of modernity. The two small axes around the Monumental Axis provide loops and exits for cars to enter small roads. Some argue that the plan's emphasis on automobiles lengthened the distances between centers and attended only to the needs of the small segment of the population who owned cars. However, one cannot ignore the city's bus transportation system. The bus routes inside the city operate heavily on the W3 and L2. Almost anywhere, including the satellite cities, can be reached just by taking the bus, and most of the Plano Piloto can be reached without transferring between buses.

Later, when overpopulation turned Brasília into a dystopia, the transportation system also played an important role in mediating the relationship between the Plano Piloto and the satellite cities. Because of overpopulation, the Monumental Axis now has to have traffic lights on it, which violates the concept of modernity and advancement the architect first employed. Additionally, the metro system in Brasília was mainly built for inhabitants of the satellite cities. Though overpopulation has meant that Brasília is no longer a pure utopia of incomparable modernity, the later development of traffic lights, bus routes to the satellite cities, and the metro system all served as a remedy to the dystopia, enabling the citizens to enjoy a kind of modernity that was not carefully planned.

At the intersection of the Monumental and Residential Axes, Costa planned the city center with the transportation center (Rodoviária), the banking sector and the hotel sector; near the city center, he proposed an amusement center with theatres, cinemas and restaurants. Costa's plan is seen as a plan with a sectoral tendency, segregating all the banks, the office buildings, and the amusement center. One of the main features of Costa's plan was that he presented a new city with its future shape and patterns evident from the beginning. This meant that the original plan included paving streets that were not immediately put into use. The advantage of this was that the original plan is hard to undo, because he provided for an entire street network; on the other hand, it is difficult to adapt and mold to other circumstances in the future. In addition, there has been controversy over the monumental aspect of Lúcio Costa's plan, because it appeared to some as 19th-century city planning, not modern 20th-century urbanism.

An interesting analysis can be made of Brasília within the context of Cold War politics and the association of Lúcio Costa's plan with the symbolism of aviation. From an architectural perspective, the airplane-shaped plan was certainly an homage to Le Corbusier and his enchantment with the aircraft as an architectural masterpiece. However, it is also important to note that Brasília was constructed soon after the end of World War II. Despite Brazil's minor participation in the conflict, the airplane shape of the city was key in envisioning the country as part of the newly globalized world, together with the victorious Allies. Furthermore, Brasília is a unique example of modernism, both as a guideline for architectural design and as a principle for organizing society. Modernism in Brasília is explored in James Holston's book, "The Modernist City".
Juscelino Kubitschek, president of Brazil from 1956 to 1961, ordered Brasília's construction, fulfilling the promise of the Constitution and his own political campaign promise. Building Brasília was part of Juscelino's "fifty years of prosperity in five" plan. Already in 1892, the astronomer Louis Cruls, in the service of the Brazilian government, had investigated the site for the future capital. Lúcio Costa won the 1957 contest, with 5,550 entrants competing, and was the main urban planner; Oscar Niemeyer was the chief architect of most public buildings, Joaquim Cardozo was the structural engineer, and Roberto Burle Marx was the landscape designer. Brasília was built in 41 months, from 1956 to April 21, 1960, when it was officially inaugurated.

The city is located at the top of the Brazilian highlands in the country's center-western region. Paranoá Lake, a large artificial lake, was built to increase the amount of water available and to maintain the region's humidity. It has a marina and hosts wakeboarders and windsurfers. Diving can also be practiced there, and one of the main attractions is Vila Amaury, an old village submerged in the lake, where the first construction workers of Brasília used to live.

Brasília has a tropical savanna climate ("Aw", according to the Köppen climate classification), milder due to the elevation and with two distinct seasons: the rainy season, from October to April, and the dry season, from May to September. September, at the end of the dry season, has the highest average maximum temperature, while July has the lowest average maximum and minimum temperatures. December is the month with the highest rainfall of the year, while June is the driest. According to the Brazilian National Institute of Meteorology (INMET), the record low temperature was recorded on July 18, 1975, the record high on October 28, 2008, and the highest accumulated rainfall in 24 hours on November 15, 1963.

According to the 2010 IBGE Census, 2,469,489 people resided in Brasília and its metropolitan area, of whom 1,239,882 were Pardo (multiracial) (48.2%), 1,084,418 White (42.2%), 198,072 Black (7.7%), 41,522 Asian (1.6%), and 6,128 Amerindian (0.2%). In 2010, Brasília was ranked the fourth-most populous city in Brazil after São Paulo, Rio de Janeiro, and Salvador. That year, the city had 474,871 opposite-sex couples and 1,241 same-sex couples, and the population was 52.2% female and 47.8% male.

In the 1960 census, there were almost 140,000 residents in the new Federal District. By 1970 this figure had grown to 537,000, and by 2010 the population of the Federal District had surpassed 2.5 million. The city of Brasília proper, the Plano Piloto, was planned for about 500,000 inhabitants, a figure it never surpassed, with a current population of only 214,529; but its metropolitan area within the Federal District has grown past this figure. From the beginning, the growth of Brasília was greater than original estimates. According to the original plans, Brasília would be a city for government authorities and staff. However, during its construction, Brazilians from all over the country migrated to the satellite cities of Brasília, seeking public and private employment. At the close of the 20th century, Brasília held the distinction of being the largest city in the world which had not existed at the beginning of the century.
Brasília has one of the highest population growth rates in Brazil, with annual growth of 2.82%, mostly due to internal migration. Brasília's inhabitants include a foreign population of mostly embassy workers as well as large numbers of Brazilian internal migrants. Today, the city has important communities of immigrants and refugees. The city's Human Development Index was 0.936 in 2000 (developed level), and the city's literacy rate was around 95.65%. Christianity, in general, is by far the most prevalent religion in Brazil, with Roman Catholicism being the largest denomination (source: IBGE 2010).

Brasília does not have a mayor or councillors, because Article 32 of the 1988 Brazilian Constitution expressly prohibits the Federal District from being divided into municipalities. The Federal District is a legal entity of internal public law, part of the political-administrative structure of Brazil, of a "sui generis" nature: it is neither a state nor a municipality, but rather a special entity that accumulates the legislative powers reserved to the states and the municipalities, as provided in Article 32, § 1º of the Constitution, which gives it a hybrid nature, both state and municipality. The executive power of the Federal District was represented by the mayor of the Federal District until 1969, when the position was transformed into governor of the Federal District. The legislative power of the Federal District is represented by the Legislative Chamber of the Federal District, whose name mixes those of a legislative assembly (the legislature of the other units of the federation) and a municipal chamber (the legislature of the municipalities). The Legislative Chamber is made up of 24 district deputies. The judicial power that serves the Federal District would also serve any federal territories, but Brazil currently has no territories; therefore, the Court of Justice of the Federal District and of the Territories serves only the Federal District.

Part of the budget of the Federal District Government comes from the Constitutional Fund of the Federal District. In 2012, the fund totaled 9.6 billion reais. By 2015, the forecast was 12.4 billion reais, of which more than half (6.4 billion) was allocated to public security spending.

Brasília is twinned with a number of cities; of these, Abuja, Canberra, and Washington were likewise cities specifically planned as the seat of government of their respective countries.

The major roles of construction and of services (government, communications, banking and finance, food production, entertainment, and legal services) in Brasília's economy reflect the city's status as a governmental rather than an industrial center. Industries connected with construction, food processing, and furnishings are important, as are those associated with publishing, printing, and computer software. The gross domestic product (GDP) is divided into public administration (54.8%), services (28.7%), industry (10.2%), commerce (6.1%) and agribusiness (0.2%). Besides being the political center, Brasília is an important economic center, with the highest GDP among Brazilian cities: 99.5 billion reais, representing 3.76% of the total Brazilian GDP. Most economic activity in the federal capital results from its administrative function. Its industrial planning is studied carefully by the Government of the Federal District.
Because the city is registered by UNESCO, the government in Brasília has opted to encourage the development of non-polluting industries such as software, film, video, and gemology, among others, with emphasis on environmental preservation and maintaining ecological balance, preserving the city property. According to Mercer's city rankings of cost of living for expatriate employees, Brasília ranked 45th among the most expensive cities in the world in 2012, up from the 70th position in 2010, behind São Paulo (12th) and Rio de Janeiro (13th).

The service sector accounts for 91% of local GDP, according to the IBGE. Industries in the city include construction (Paulo Octavio, Via Construções, and Irmãos Gravia, among others); food processing (Perdigão, Sadia); furniture making; recycling (Novo Rio, Rexam, Latasa and others); pharmaceuticals (União Química); and graphic industries. The main agricultural products produced in the city are coffee, guavas, strawberries, oranges, lemons, papayas, soybeans, and mangoes. It has over 110,000 cows and exports wood products worldwide. The Federal District, where Brasília is located, has a GDP of R$133.4 billion (about US$64.1 billion), about the same as Belarus according to The Economist. Its share of the total Brazilian GDP is about 3.8%. The Federal District has the largest GDP per capita in Brazil, US$25,062, slightly higher than that of Belarus.

The city's planned design included specific areas for almost everything, including accommodation: the Hotel Sectors North and South. New hotel facilities are being developed elsewhere, such as the Hotels and Tourism Sector North, located on the shores of Lake Paranoá. Brasília has a range of tourist accommodation, from inns, pensions and hostels to larger international chain hotels. The city's restaurants cater to a wide range of foods, from local and regional Brazilian dishes to international cuisine.

As a venue for political events, music performances and movie festivals, Brasília is a cosmopolitan city, with around 124 embassies, a wide range of restaurants and complete infrastructure ready to host any kind of event. Not surprisingly, the city stands out as an important business and tourism destination, an important part of the local economy, with dozens of hotels spread around the federal capital. Traditional parties take place throughout the year. In June, large festivals known as "festas juninas" are held celebrating Catholic saints such as Saint Anthony of Padua, Saint John the Baptist, and Saint Peter. On September 7, the traditional Independence Day parade is held on the Ministries Esplanade. Throughout the year, local, national, and international events are held throughout the city. Christmas is widely celebrated, and major New Year's Eve celebrations are hosted in the city.

The city also hosts a varied assortment of art works by artists such as Bruno Giorgi, Alfredo Ceschiatti, Athos Bulcão, Marianne Peretti, Alfredo Volpi, Di Cavalcanti, Dyllan Taxman, Victor Brecheret and Burle Marx, whose works have been integrated into the city's architecture, making it a unique landscape. The cuisine in the city is very diverse, and many of the best restaurants in the city can be found in the Asa Sul district. The city is also the birthplace of Brazilian rock and the place of origin of many bands, such as Legião Urbana, Capital Inicial, Aborto Elétrico, Plebe Rude and Raimundos. Brasília currently hosts the Rock Basement Festival, which attempts to bring new bands to the national scene.
The festival is held in the parking lot of the Brasília National Stadium Mané Garrincha. Since 1965, the annual Brasília Festival of Brazilian Cinema has been one of the most traditional cinema festivals in Brazil, compared only to the Brazilian Cinema Festival of Gramado, in Rio Grande do Sul. The difference between the two is that the festival in Brasília still preserves the tradition of accepting and rewarding only Brazilian movies. The International Dance Seminar in Brasília has brought top-notch dance to the federal capital since 1991. International teachers, shows with choreographers and guest groups, and scholarships abroad are some of the hallmarks of the event. The Seminar is the central axis of the DANCE BRAZIL program and is promoted by the DF State Department of Culture in partnership with the Cultural Association Claudio Santoro.

Brasília has also been the focus of modern-day literature. Published in 2008, "The World In Grey: Dom Bosco's Prophecy", by author Ryan J. Lucero, tells an apocalyptic story based on the famous prophecy from the late 19th century by the Italian saint Don Bosco. According to Don Bosco's prophecy: "Between parallels 15 and 20, around a lake which shall be formed; A great civilization will thrive, and that will be the Promised Land". Brasília lies between the parallels 15° S and 20° S, where an artificial lake (Paranoá Lake) was formed. Don Bosco is Brasília's patron saint. "American Flagg!", the First Comics comic book series created by Howard Chaykin, portrays Brasília as a cosmopolitan world capital of culture and exotic romance; in the series, it is a top vacation and party destination. The 2015 Rede Globo series "Felizes para Sempre?" was set in Brasília.

At the northwestern end of the Monumental Axis are federal district and municipal buildings, while at the southeastern end, near the middle shore of Lake Paranoá, stand the executive, legislative, and judiciary buildings around the Square of Three Powers, the conceptual heart of the city. These and other major structures were designed by Brazilian architect Oscar Niemeyer, with structural designs by Brazilian engineer Joaquim Cardozo, in the style of modern Brazilian architecture. In the Square of Three Powers, he created as a focal point the dramatic Congressional Palace, which is composed of five parts: twin administrative towers flanked by a large, white concrete dome (the meeting place of the Senate) and by an equally massive concrete bowl (the Chamber of Deputies), which is joined to the dome by an underlying, flat-roofed building. The Congress also occupies various other surrounding buildings, some connected by tunnels, and a series of low-lying annexes (largely hidden) surround both ends. The National Congress building is located in the middle of the Eixo Monumental, the city's main avenue. In front lie a large lawn and a reflecting pool. The building faces the Praça dos Três Poderes, where the Palácio do Planalto and the Supreme Federal Court are located. Farther east, on a triangle of land jutting into the lake, is the Palace of the Dawn (Palácio da Alvorada), the presidential residence.

Between the federal and civic buildings on the Monumental Axis is the Cathedral of Brasília, considered by many to be Niemeyer's and Cardozo's finest achievement. The parabolically shaped structure is characterized by its 16 gracefully curving supports, which join in a circle 115 feet (35 meters) above the floor of the nave; stretched between the supports are translucent walls of tinted glass.
The nave is entered via a subterranean passage rather than conventional doorways. Other notable buildings are Buriti Palace, Itamaraty Palace, the National Theater, and several foreign embassies that creatively embody features of their national architecture. The Brazilian landscape architect Roberto Burle Marx designed landmark modernist gardens for some of the principal buildings.

Both low-cost and luxury buildings were built by the government in Brasília. The residential zones of the inner city are arranged into "superquadras" ("superblocks"): groups of apartment buildings inspired by French modernist and Bauhaus design and constructed with a prescribed number and type of schools, retail stores, and open spaces. In an unplanned area at the northern end of Lake Paranoá, separated from the inner city, is a peninsula with many varied homes, and a similar settlement exists on the southern lakeshore. Originally, the city planners envisioned extensive public areas along the shores of the artificial lake, but during early development private clubs, hotels, and upscale residences and restaurants gained footholds around the water. Set well apart from the city are satellite cities, including Gama, Ceilândia, Taguatinga, Núcleo Bandeirante, Sobradinho, and Planaltina. These cities, with the exception of Gama and Sobradinho, were not planned.

The city has been both acclaimed and criticized for its use of modernist architecture on a grand scale and for its somewhat utopian city plan. After a visit to Brasília, the French writer Simone de Beauvoir complained that all of its superblocks exuded "the same air of elegant monotony", and other observers have equated the city's large open lawns, plazas, and fields to wastelands. As the city has matured, some of these spaces have gained adornments, and many have been improved by landscaping, giving some observers a sense of "humanized" spaciousness. Although not fully accomplished, the "Brasília utopia" has produced a city of relatively high quality of life, in which the citizens live in forested areas with sporting and leisure amenities, surrounded by small commercial areas, bookstores and cafés; the city is famous for its cuisine and the efficiency of its transit. Even these positive features have sparked controversy, expressed in the nickname "ilha da fantasia" ("fantasy island"), indicating the sharp contrast between the city and the surrounding regions, marked by poverty and disorganization, in the states of Goiás and Minas Gerais around Brasília. Critics of Brasília's grand scale have characterized it as a modernist Bauhaus platonic fantasy about the future.

The Cathedral of Brasília, in the capital of the Federative Republic of Brazil, is an expression of the atheist architect Oscar Niemeyer and the structural engineer Joaquim Cardozo. This concrete-framed hyperboloid structure, with its glass roof, seems to reach up, open, to the heavens. On May 31, 1970, the cathedral's structure was finished, with only the diameter of the circular area visible. Niemeyer's and Cardozo's design for the Cathedral of Brasília is based on a hyperboloid of revolution with asymmetric sections. The hyperboloid structure itself is the result of 16 identical assembled concrete columns. There is controversy as to what these columns, with their hyperbolic sections and weighing 90 t each, represent: some say they are two hands moving upwards to heaven, others associate them with the chalice Jesus used in the Last Supper, and some claim they represent his crown of thorns. The cathedral was dedicated on May 31, 1970.
At the end of the "Eixo Monumental" ("Monumental Axis") lies the "Esplanada dos Ministérios" ("Ministries Esplanade"), an open area in downtown Brasília. The rectangular lawn is surrounded by two eight-lane avenues where many government buildings, monuments and memorials are located. On Sundays and holidays, the Eixo Monumental is closed to cars so that locals may use it as a place to walk, bike, and have picnics under the trees.

"Praça dos Três Poderes" (Portuguese for "Square of the Three Powers") is a plaza in Brasília. The name derives from the meeting of the three federal branches around the plaza: the Executive, represented by the Palácio do Planalto (presidential office); the Legislative, represented by the National Congress (Congresso Nacional); and the Judiciary, represented by the Supreme Federal Court (Supremo Tribunal Federal). It is a tourist attraction in Brasília, designed by Lúcio Costa and Oscar Niemeyer as a place where the three branches would meet harmoniously.

The Palácio da Alvorada is the official residence of the president of Brazil. The palace was designed, along with the rest of the city of Brasília, by Oscar Niemeyer and inaugurated in 1958. One of the first structures built in the republic's new capital city, the "Alvorada" lies on a peninsula at the shore of Lake Paranoá. The principles of simplicity and modernity that in the past characterized the great works of architecture motivated Niemeyer. The viewer has an impression of looking at a glass box, softly landing on the ground with the support of thin external columns. The building has an area of 7,000 m2 with three floors: the basement, landing, and second floor. The auditorium, kitchen, laundry, medical center, and administration offices are at basement level. The rooms used by the presidency for official receptions are on the landing. The second floor has four suites, two apartments, and various private rooms which make up the residential part of the palace. The building also has a library, a heated Olympic-sized swimming pool, a music room, two dining rooms and various meeting rooms. A chapel and heliport are in adjacent buildings.

The Palácio do Planalto is the official workplace of the president of Brazil. It is located at the Praça dos Três Poderes in Brasília. As the seat of government, the term "Planalto" is often used as a metonym for the executive branch of government. The main working office of the President of the Republic is in the Palácio do Planalto; the President and his or her family do not live in it, but rather in the official residence, the Palácio da Alvorada. Besides the President, senior advisors also have offices in the "Planalto", including the Vice-President of Brazil and the Chief of Staff; the other ministries are along the Esplanada dos Ministérios. The architect of the Palácio do Planalto was Oscar Niemeyer, creator of most of the important buildings in Brasília. The idea was to project an image of simplicity and modernity using fine lines and waves to compose the columns and exterior structures. The Palace is four stories high and has an area of 36,000 m2. Four other adjacent buildings are also part of the complex.

The city has six international schools: American School of Brasília, Brasília International School (BIS), Escola das Nações, Swiss International School (SIS), Lycée français François-Mitterrand (LfFM) and Maple Bear Canadian School. A new international school, The British School of Brasilia, was scheduled to open in August 2016.
Brasília has two universities, three university centers, and many private colleges. The main tertiary educational institutions are: Universidade de Brasília – University of Brasília (UnB) (public); Universidade Católica de Brasília – Catholic University of Brasília (UCB); Centro Universitário de Brasília (UniCEUB); Centro Universitário Euroamericano (Unieuro); (UDF); (UNIP); and Instituto de Educação Superior de Brasília (IESB).

Brasília–Presidente Juscelino Kubitschek International Airport serves the metropolitan area with major domestic and international flights. It is the third-busiest Brazilian airport by passengers and aircraft movements. Because of its strategic location, it is a civil aviation hub for the rest of the country. This makes for a large number of takeoffs and landings, and it is not unusual for flights to be delayed in the holding pattern before landing. Following the airport's master plan, Infraero built a second runway, which was finished in 2006. In 2007, the airport handled 11,119,872 passengers. The main building's third floor, with 12,000 square meters, has a panoramic deck, a food court, shops, four movie theatres with a total capacity of 500 people, and space for exhibitions. Brasília Airport has 136 vendor spaces. The airport is located outside the metro system, not far from the central area of Brasília. The area outside the airport's main gate is lined with taxis as well as several bus line services that connect the airport to Brasília's central district. The parking lot accommodates 1,200 cars. The airport is serviced by domestic and regional airlines (TAM, GOL, Azul, WebJET, Trip and Avianca), in addition to a number of international carriers.

In 2012, the concession to operate Brasília's international airport was won by the Inframerica consortium, formed by the Brazilian engineering company ENGEVIX and the Argentine holding company Corporación América, with a 50% stake each. During the 25-year concession, the airport may be expanded to up to 40 million passengers a year. In 2014, the airport received 15 new boarding bridges, for a total of 28. This was the main requirement made by the federal government, which transferred the operation of the terminal to the Inframerica Group after an auction. The group invested R$750 million in the project. In the same year, the number of parking spaces doubled, reaching three thousand. The airport's entrance received a new rooftop cover and a new access road, and a VIP room was created on Terminal 1's third floor. The investments increased the capacity of Brasília's airport from approximately 15 million passengers per year to 21 million by 2014. Brasília has direct flights to all states of Brazil and direct international flights to Buenos Aires, Lisbon, Miami, Orlando, Panama City, Lima, Santiago de Chile, Asunción and Cancún.

Like most Brazilian cities, Brasília has a good network of taxi companies. Taxis from the airport are available immediately outside the terminal, but at times there can be quite a queue of people. Although the airport is not far from the downtown area, taxi prices do seem to be higher than in other Brazilian cities. Booking in advance can be advantageous, particularly if time is limited, and local companies should be able to assist with airport transfers or other transport requirements.

The Juscelino Kubitschek bridge, also known as the 'President JK Bridge' or the 'JK Bridge', crosses Lake Paranoá in Brasília. It is named after Juscelino Kubitschek de Oliveira, former president of Brazil.
It was designed by architect Alexandre Chan and structural engineer Mário Vila Verde. Chan won the Gustav Lindenthal Medal for this project at the 2003 International Bridge Conference in Pittsburgh for "...outstanding achievement demonstrating harmony with the environment, aesthetic merit and successful community participation". It consists of three tall asymmetrical steel arches that crisscross diagonally. With a length of 1,200 m (0.75 miles), it was completed in 2002 at a cost of US$56.8 million. The bridge has a pedestrian walkway and is accessible to bicyclists and skaters. The Brasília Metro is Brasília's underground metro system. The system has 24 stations on two lines, the Orange and Green lines, in a network covering part of the metropolitan area. Both lines begin at the Central Station and run parallel until the Águas Claras Station. The Brasília metro is not comprehensive, so buses may provide better access to the center. The metro leaves the Rodoviária (bus station) and goes south, avoiding most of the political and tourist areas. Its main purpose is to serve satellite cities such as Samambaia, Taguatinga and Ceilândia, as well as Guará and Águas Claras. The satellite cities served are more populous in total than the Plano Piloto itself (the census of 2000 indicated that Ceilândia had 344,039 inhabitants, Taguatinga had 243,575, and the Plano Piloto had approximately 400,000 inhabitants), and most residents of the satellite cities depend on public transportation. A high-speed railway was planned between Brasília and Goiânia, the capital of the state of Goiás, but it will probably be turned into a regional service linking the capital cities and cities in between, such as Anápolis and Alexânia. The main bus hub in Brasília is the Central Bus Station, located at the crossing of the Eixo Monumental and the Eixão, not far from the Three Powers Plaza. The original plan was to have a bus station as near as possible to every corner of Brasília. Today, the bus station is the hub of urban buses only, some running within Brasília and others connecting Brasília to the satellite cities. In the original city plan, the interstate buses would also stop at the Central Station. Because of the growth of Brasília (and the corresponding growth in the bus fleet), the interstate buses now leave from the older interstate station (called Rodoferroviária), located at the western end of the Eixo Monumental. The Central Bus Station also contains a main metro station. A new bus station was opened in July 2010. It is on the Saída Sul (South Exit) near the ParkShopping mall and its metro station, and is also an interstate bus station, used only for travel out of the Federal District. There is currently no passenger rail service in Brasília, but the Expresso Pequi rail line is planned to link Brasília and Goiânia. A 22 km light rail line is also planned, estimated to cost between 1 billion reais (US$258 million) and 1.5 billion reais, with capacity to transport around 200,000 passengers per day. The average weekday commute on public transit in Brasília, for example to and from work, is 96 minutes; 31% of public transit riders ride for more than two hours every day. The average wait at a stop or station is 28 minutes, while 61% of riders wait over 20 minutes on average every day. Single trips also tend to be long, with half of all riders travelling a considerable distance in a single direction.
The main stadiums are the Brasília National Stadium Mané Garrincha (reinaugurated on May 18, 2013), the Serejão Stadium (home of Brasiliense) and the Bezerrão Stadium (home of Gama). Brasília was one of the host cities of the 2014 FIFA World Cup and the 2013 FIFA Confederations Cup, both hosted by Brazil. Brasília hosted the opening match of the Confederations Cup and seven World Cup matches. Brasília also hosted football tournament matches during the 2016 Summer Olympics held in Rio de Janeiro. Brasília is known as a departure point for unpowered air sports, practiced with hang gliding or paragliding wings. Practitioners report that, because of the city's dry weather, it offers strong thermal winds and great "cloud streets", which is also the name of a manoeuvre much appreciated by practitioners. In 2003, Brasília hosted the 14th Hang Gliding World Championship, one of the categories of free flying. In August 2005, the city hosted the second stage of the Brazilian Hang Gliding Championship. Brasília is the site of the Autódromo Internacional Nelson Piquet, which hosted a non-championship round during the 1974 Formula One season. An IndyCar race was cancelled at the last minute in 2015. The city is also home to UniCEUB BRB, one of Brazil's best basketball clubs and NBB champion in 2010, 2011 and 2012. The club hosts some of its games at the 16,000-seat Nilson Nelson Gymnasium.
https://en.wikipedia.org/wiki?curid=4752
Blue Streak (missile) The de Havilland Propellers Blue Streak was a British intermediate-range ballistic missile (IRBM), and later the first stage of the Europa satellite launch vehicle. Blue Streak was cancelled without entering full production. The project was intended to maintain an independent British nuclear deterrent, replacing the V bomber fleet, which would become obsolete by 1965. The operational requirement for the missile was issued in 1955 and the design was complete by 1957. During development, it became clear that the missile system was too expensive and too vulnerable to a pre-emptive strike. The missile project was cancelled in 1960, with the US-led Skybolt programme the preferred replacement. Partly to avoid political embarrassment from the cancellation, the UK government proposed that the rocket be used as the first stage of a civilian satellite launcher called Black Prince. As the cost was thought to be too great for the UK alone, international collaboration was sought. This led to the formation of the European Launcher Development Organisation (ELDO), with Blue Streak used as the first stage of a carrier rocket named Europa. Europa was tested at Woomera Test Range, Australia, and later at Kourou in French Guiana. Following launch failures, the ELDO project was cancelled in 1972 and development of Blue Streak was halted. Post-war Britain's nuclear armament was initially based on free-fall bombs delivered by the V bomber force. It soon became clear that if Britain wanted a credible nuclear deterrent threat, a ballistic missile was essential. There was a political need for an independent deterrent, so that Britain could remain a major world power. Britain was unable to purchase American weapons wholesale due to the restrictions of the Atomic Energy Act of 1946. In April 1954 the Americans proposed a joint development programme for ballistic missiles. The United States would develop the long-range intercontinental ballistic missile (ICBM) that became the SM-65 Atlas, while the United Kingdom, with United States support, would develop a shorter-range intermediate-range ballistic missile (IRBM). The proposal was accepted as part of the Wilson-Sandys Agreement of August 1954, which provided for collaboration, exchange of information, and mutual planning of development programmes. The decision to develop the missile was influenced by what could be learnt about missile design and development in the US. Initial requirements for the booster were made by the Royal Aircraft Establishment at Farnborough, with input on the rocket engine design from the Rocket Propulsion Establishment at Westcott. British Operational Requirement 1139 demanded a rocket of at least 1,500 nautical miles' range, and the initially proposed rocket would only just have reached that threshold. The de Havilland Propellers company won the contract to build the missile, which was to be powered by an uprated version of the liquid-fuelled Rocketdyne S-3D engine, developed by Rolls-Royce as the RZ.2. Two variants of this engine were developed; the second, intended for the three-stage satellite launch vehicle, provided greater static thrust than the first. The engines could be vectored by seven degrees in flight and were used to guide the missile. This configuration, however, put considerable pressure on the autopilot, which had to cope with a vehicle whose weight was diminishing rapidly and which was steered by large engines whose thrust remained more or less constant.
Vibration was also a problem, particularly at engine cut-off, and the later development of the autopilot for the satellite launcher was, in itself, a considerable achievement. Subcontractors included the Sperry Gyroscope Company, which produced the missile guidance system, whilst the nuclear warhead was designed by the Atomic Weapons Research Establishment at Aldermaston. The missiles used liquid oxygen and kerosene propellants. Whilst the vehicle could be left fully laden with over 20 tonnes of kerosene, the 60 tonnes of liquid oxygen had to be loaded immediately before launch or icing became a problem. Because of this, fuelling the rocket took 4.5 minutes, which would have made it useless as a rapid response to an attack. The missile was vulnerable to a pre-emptive nuclear strike, launched without warning or in the absence of any heightening of tension sufficient to warrant readying the missile. To negate this problem, de Havilland created a stand-by feature: a missile could be held at 30 seconds' notice to launch for ten hours. As the missiles were to be deployed in pairs, and it took ten hours to prepare one missile for stand-by, one of the two missiles could always be ready for rapid launch (the scheduling sketch after this paragraph illustrates the arithmetic). To protect the missiles against a pre-emptive strike while being fuelled, the idea of siting the missiles in underground launchers was developed. These would have been designed to withstand a one-megaton blast at a distance of half a mile (800 m) and were a British innovation, subsequently exported to the United States. Finding sites for these silos proved extremely difficult. RAF Spadeadam in Cumberland (now Cumbria) was the only site where construction of a full-scale underground launcher was started, although test borings were undertaken at a number of other locations. The remains of this test silo, known as U1, were rediscovered during tree felling at Spadeadam. This was also the site where the RZ.2 rocket engines and the complete Blue Streak missile were tested. The best sites for silo construction were the more stable rock strata in parts of southern and north-east England and eastern Scotland, but the construction of many underground silos in the countryside carried enormous economic, social, and political costs. Development of the underground launchers presented a major technical challenge: 1/60- and 1/6-scale models based on a 'U'-shaped design were constructed and tested at RPE Westcott. Three alternative designs were drawn up, with one chosen as the prototype, designated K11. RAF Upavon would appear to have been the preferred location for the prototype operational launcher, with the former RNAS at Crail as the likely first operational site. In 1955–1956, the rocket motors were test-fired at The Needles Batteries on the Isle of Wight. As no site in Britain provided enough space for test flights, a test site was established at Woomera, South Australia. Doubts arose as the cost escalated from the first tentative figure of £50 million submitted to the Treasury in early 1955 to £300 million in late 1959. Its detractors in the civil service claimed that the programme was crawling along compared with the speed of development in the US and the Soviet Union. Estimates within the Civil Service for completion of the project ranged from a total spend of £550 million to £1.3 billion, as different ministers were set on either abandoning or continuing the project. The project was unexpectedly cancelled in April 1960.
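The stand-by arithmetic above can be checked with a small scheduling sketch. This is a minimal illustration only: the ten-hour preparation and ten-hour hold figures come from the text, while the code structure and all names are invented for this example.

# Illustrative check of the Blue Streak two-missile stand-by rotation.
# Assumptions from the text: preparing a missile for stand-by takes
# 10 hours, and once on stand-by it can hold 30-second readiness for
# 10 hours. All names here are invented for this sketch.

PREP_HOURS = 10  # time to bring a missile up to stand-by
HOLD_HOURS = 10  # time a missile can remain at 30-second readiness

def is_ready(hour, offset):
    # A missile cycles endlessly: 10 hours preparing, then 10 hours ready.
    phase = (hour - offset) % (PREP_HOURS + HOLD_HOURS)
    return phase >= PREP_HOURS

# Stagger the two missiles by 10 hours so their ready windows alternate.
for hour in range(48):
    assert is_ready(hour, 0) or is_ready(hour, 10)  # never fails: no gap
print("With staggered ten-hour cycles, one missile is always ready.")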
Whitehall opposition grew, and the missile was cancelled on the ostensible grounds that it would be too vulnerable to a first-strike attack. Admiral of the Fleet Lord Mountbatten had spent considerable effort arguing that the project should be cancelled at once in favour of the Navy being armed with nuclear weapons capable of a pre-emptive strike. Some considered the cancellation of Blue Streak to be a blow not only to British military-industrial efforts, but also to Commonwealth ally Australia, which had its own vested interest in the project. The British military transferred its hopes for a strategic nuclear delivery system to the Anglo-American Skybolt missile, before that project's cancellation by the United States as its ICBM programme reached maturity. The British instead purchased the Polaris system from the Americans, carried in British-built submarines. After the cancellation of the military project, there was reluctance to abandon the rocket entirely because of the huge cost already incurred. Blue Streak would have become the first stage of a projected all-British satellite launcher known as "Black Prince": the second stage was derived from the "Black Knight" test vehicle, and the orbital injection stage was a small hydrogen peroxide/kerosene motor. Black Prince proved too expensive for the UK, and the European Launcher Development Organisation (ELDO) was set up instead. This used Blue Streak as the first stage, with French and German second and third stages. The Blue Streak first stage was successfully tested three times at the Woomera test range in Australia as part of the ELDO programme. In 1959, a year before the cancellation of Blue Streak as a missile, the government requested that the RAE and Saunders-Roe design a carrier rocket based on Blue Streak and Black Knight. This design used Blue Streak as a first stage and a 54-inch (137 cm) second stage based on the Black Knight. Several different third stages would be available, depending on the required payload and orbit. The cost of developing Black Prince was estimated at £35 million. It was planned that Black Prince would be a Commonwealth project. The government of John Diefenbaker in Canada was already spending more money than publicly acknowledged on Alouette, and Australia was not interested in the project, so these two countries were unwilling to contribute. South Africa was no longer a member of the Commonwealth, and New Zealand was only likely to make "modest" contributions. The UK instead proposed a collaboration with other European countries to build a three-stage launcher capable of placing a one-ton payload into low Earth orbit. The European Launcher Development Organisation consisted of Belgium, Britain, France, West Germany, Italy and the Netherlands, with Australia as an associate member. Preliminary work began in 1962 and ELDO was formally signed into existence in 1964. Blue Streak, provided by the UK, became the first stage of the European launch vehicle, with France providing the Coralie second stage and Germany the third. Italy worked on the satellite project, the Netherlands and Belgium concentrated on tracking and telemetry systems, and Australia supplied the launch site. The combined launcher was named Europa. After ten test launches, it was concluded that the Woomera launch site was not suitable for putting satellites into geosynchronous orbit, and in 1966 it was decided to move to the French site of Kourou in South America. F11 was fired from there in November 1971, but the failure of the autopilot caused the vehicle to break up.
The launch of F12 was postponed whilst a project review was carried out, which led to the decision to abandon the Europa design. ELDO was merged with the European Space Research Organisation to form the European Space Agency. Aside from Black Prince, a range of other proposals was made between 1959 and 1972 for a carrier rocket based on Blue Streak, but none was ever built in full; they survive only as designs. In 1959 de Havilland suggested solving the problem of the Blue Streak/Black Knight geometry by compressing the 10-by-1-metre (about 33 by 3 ft) Black Knight into a sphere. Although this seemed logical, the development costs proved too high for the limited budget of the programme. Following its merger with Saunders-Roe, Westland Helicopters developed the three-stage Black Arrow satellite carrier rocket, derived from the Black Knight test vehicle. The first stage of Black Arrow was given the same diameter as the French Coralie (the second stage of Europa) in order to make it compatible with Blue Streak; using Blue Streak as an additional stage would have increased Black Arrow's payload capacity. To maintain this compatibility, the first-stage diameter was given in metres, although the rest of the rocket was defined in imperial units. Black Arrow carried out four test launches (without an additional Blue Streak stage) from Woomera between 1969 and 1971, the final launch carrying the satellite Prospero X-3 into orbit. The United Kingdom remains the only country to have successfully developed and then abandoned a satellite launch capability. In 1972, Hawker Siddeley Dynamics (HSD) produced a brochure for a design using Blue Streak as the first stage of a two-stage-to-orbit rocket with an American Centaur upper stage. The Centaur second stage would have either been built in the UK under licence or imported directly from the USA. Both the Centaur and Blue Streak had proved to be very reliable up to this point, and since both were already designed, development costs would have been low. Furthermore, the design would have had a payload of 870–920 kg to geosynchronous orbit with, and 650–700 kg without, the use of additional booster rockets. Following the cancellation of the Blue Streak project, some of the remaining rockets and components were preserved. A section of the propulsion bay, engines and equipment can be found at the Solway Aviation Museum, Carlisle Lake District Airport. Only a few miles from the Spadeadam testing site, the museum carries many exhibits, photographs and models of the Blue Streak programme, having inherited the original Spadeadam collection that used to be displayed on site. RZ.2 engines are on display at the National Space Centre (a pair on cradles alongside the Blue Streak rocket), at the Armagh Planetarium in Northern Ireland, and at the Euro Space Center in Redu, Belgium. Footage from a Blue Streak launch was briefly incorporated into the final episode of "The Prisoner", "Fall Out". A part of the Blue Streak rocket launched on 5 June 1964 from Woomera, Australia, found in 1980 some 50 km south-east of Giles (about 1,000 km from the launch site), is on display at Giles Weather Station. Another piece was located in 2006, but its exact location has been kept secret by the finders. The titanium structure of a German third stage was, for some time, sited on the edge of a gravel pit in Gloucestershire. Images of the Blue Streak 1 are incorporated in the 1997 film "Contact".
https://en.wikipedia.org/wiki?curid=4754
Bakassi Bakassi is a peninsula on the Gulf of Guinea. It lies between the Cross River estuary, near the city of Calabar in the west of the Bight of Biafra, and the Rio del Rey estuary on the east. It is governed by Cameroon, following the transfer of sovereignty from neighbouring Nigeria as a result of a judgment by the International Court of Justice. On 22 November 2007, the Nigerian Senate rejected the transfer, arguing that the Greentree Agreement ceding the area to Cameroon was contrary to Section 12(1) of the 1999 Constitution. Regardless, the territory was transferred to Cameroon on 14 August 2008. The peninsula lies between latitudes 4°25′ and 5°10′N and longitudes 8°20′ and 9°08′E. It consists of a number of low-lying, largely mangrove-covered islands covering an area of around 665 km² (257 sq mi). The population of Bakassi is the subject of some dispute, but is generally put at between 150,000 and 300,000 people. Bakassi is situated at the extreme eastern end of the Gulf of Guinea, where the warm east-flowing Guinea Current (called Aya Efiat in Efik) meets the cold north-flowing Benguela Current (called Aya Ubenekang in Efik). These two ocean currents interact, creating huge foamy breakers which constantly advance towards the shore, and building submarine shoals rich in fish, shrimps, and a wide variety of other marine life forms. This makes the Bakassi area a very fertile fishing ground, comparable only to Newfoundland in North America and Scandinavia in Western Europe. Most of the population make their living through fishing. The peninsula is commonly described as "oil-rich", though in fact no commercially viable deposits of oil have been discovered. However, the area has aroused considerable interest from oil companies in the light of the discovery of rich reserves of high-grade crude oil in Nigeria. At least eight multinational oil companies have participated in the exploration of the peninsula and its offshore waters. In October 2012, China Petroleum & Chemical Corporation announced it had discovered new oil and gas resources in the Bakassi region. During the Scramble for Africa, Queen Victoria signed a Treaty of Protection with the King and Chiefs of Akwa Akpa, known to Europeans as Old Calabar, on 10 September 1884. This enabled the British Empire to exercise control over the entire territory around Calabar, including Bakassi. The territory subsequently became "de facto" part of Nigeria, although the border was never permanently delineated. However, documents released by the Cameroonians, consistent with those of the British and Germans, clearly place Bakassi in Cameroonian territory as a consequence of colonial-era Anglo-German agreements. After Southern Cameroons voted in 1961 to leave Nigeria and become part of Cameroon, Bakassi remained under Calabar administration in Nigeria until the ICJ judgment of 2002. Bakassi's inhabitants are mainly the Oron people, the people of Cross River State and Akwa Ibom State of Nigeria. Nigeria and Cameroon have disputed the possession of Bakassi for some years, leading to considerable tension between the two countries. In 1981 the two countries went to the brink of war over Bakassi and another area around Lake Chad, at the other end of the two countries' common border. More armed clashes broke out in the early 1990s. In response, Cameroon took the matter to the International Court of Justice (ICJ) on 29 March 1994. The case was extremely complex, requiring the court to review diplomatic exchanges dating back over 100 years.
Nigeria relied largely on Anglo-German correspondence dating from 1885, as well as treaties between the colonial powers and the indigenous rulers in the area, particularly the 1884 Treaty of Protection. Cameroon pointed to the Anglo-German treaty of 1913, which defined spheres of control in the region, as well as two agreements signed in the 1970s between Cameroon and Nigeria. These were the Yaoundé II Declaration of 4 April 1971 and the Maroua Declaration of 1 June 1975, which were devised to outline maritime boundaries between the two countries following their independence. The line was drawn through the Cross River estuary to the west of the peninsula, thereby implying Cameroonian ownership of Bakassi. However, Nigeria never ratified the agreement, while Cameroon regarded it as being in force. The ICJ delivered its judgment on 10 October 2002, finding (based principally on the Anglo-German agreements) that sovereignty over Bakassi did indeed rest with Cameroon. It instructed Nigeria to transfer possession of the peninsula, but did not require the inhabitants to move or to change their nationality. Cameroon was thus given a substantial Nigerian population and was required to protect their rights, infrastructure and welfare. The verdict caused consternation in Nigeria and aroused vitriolic comments from Nigerian officials and the Nigerian media alike. Chief Richard Akinjide, a former Nigerian Attorney-General and Minister of Justice who had been a leading member of Nigeria's legal team, described the decision as "50% international law and 50% international politics", "blatantly biased and unfair", "a total disaster", and a "complete fraud". The Nigerian newspaper "The Guardian" went further, declaring that the judgment was "a rape and unforeseen potential international conspiracy against Nigerian territorial integrity and sovereignty" and "part of a Western ploy to foment and perpetuate trouble in Africa". The outcome of the controversy was a "de facto" Nigerian refusal to withdraw its troops from Bakassi and transfer sovereignty. The Nigerian government did not, however, openly reject the judgment, but instead called for an agreement that would provide "peace with honour, with the interest and welfare of our people." The ICJ judgment was backed by the United Nations, whose charter potentially allowed sanctions or even the use of force to enforce the court's ruling. Secretary-General Kofi Annan stepped in as a mediator and chaired a tripartite summit with the two countries' presidents on 15 November 2002, which established a commission to facilitate the peaceful implementation of the ICJ's judgment. A further summit was held on 31 January 2004. The process made significant progress, but was complicated by the opposition of Bakassi's inhabitants to being transferred to Cameroon. Bakassian leaders threatened to seek independence if Nigeria renounced sovereignty, and this secession was announced on 9 July 2006 as the "Democratic Republic of Bakassi". The decision was reportedly made at a meeting on 2 July 2006, as reported by the Nigerian newspaper "The Vanguard", by groups of militants including the Southern Cameroons Peoples Organisation (SCAPO), the Bakassi Movement for Self-Determination (BAMOSD), and the Movement for the Emancipation of the Niger Delta (MEND). Meanwhile, the Biafran secessionist organization Biafra Nations Youth League (BNYL), led by Princewill Chimezie Richard, alias
Prince Obuka, and Ebuta Ogar Takon, moved its national presence to the region after a series of warnings to the Nigerian government over the plight of the internally displaced natives and the reported killing of those remaining on the peninsula by Cameroonian soldiers. This came amid clashes between Nigerian troops and the Bakassi Strike Force, a militant group that rose up over the plight of the displaced people. BNYL leaders were apprehended in the Ikang border area with Cameroon on 9 November 2016 by Nigerian troops, according to the Nigerian newspaper "The Nation". Reports linked the Biafra group to the militant clashes, but this did not deter the group's activities in Bakassi. On 13 June 2006, President Olusegun Obasanjo of Nigeria and President Paul Biya of Cameroon resolved the dispute in talks led by UN Secretary-General Kofi Annan in New York City. Obasanjo agreed to withdraw Nigerian troops within 60 days and to leave the territory completely in Cameroonian control within the next two years. Annan said, "With today's agreement on the Bakassi peninsula, a comprehensive resolution of the dispute is within our grasp. The momentum achieved must be sustained." Nigeria began to withdraw its forces, comprising some 3,000 troops, on 1 August 2006, and a ceremony on 14 August marked the formal handover of the northern part of the peninsula. The remainder stayed under Nigerian civil authority for two more years. On 22 November 2007, the Nigerian Senate passed a resolution declaring that the withdrawal from the Bakassi Peninsula was illegal. The government took no action, and handed the final parts of Bakassi over to Cameroon on 14 August 2008 as planned, although a Federal High Court had stated that the handover should be delayed until all accommodations for resettled Bakassians had been completed; the government did not appear to plan to heed this court order, and set the necessary mechanisms in motion to override it. Fishermen displaced from Bakassi were first settled in a landlocked area called New Bakassi, which they claimed was already inhabited and suitable only for farmers, not for fishermen like them. The displaced people were then moved to Akpabuyo, and eventually established a new community of Dayspring. Despite the formal handover of Bakassi by Nigeria to Cameroon, the territory of Bakassi is still reflected as one of the 774 local governments of Nigeria embodied in the First Schedule, Part I, of the 1999 Constitution of the Federal Republic of Nigeria. After Nigeria's 2015 general elections, the 8th National Assembly still accommodated the Calabar-South/Akpabuyo/Bakassi Federal Constituency, represented by Hon. Essien Ekpeyong Ayi of the People's Democratic Party.
https://en.wikipedia.org/wiki?curid=4756
Bestiary A bestiary, or bestiarum vocabulum, is a compendium of beasts. Originating in the ancient world, bestiaries were made popular in the Middle Ages in illustrated volumes that described various animals and even rocks. The natural history and illustration of each beast was usually accompanied by a moral lesson. This reflected the belief that the world itself was the Word of God, and that every living thing had its own special meaning. For example, the pelican, which was believed to tear open its breast to bring its young to life with its own blood, was a living representation of Jesus. The bestiary, then, is also a reference to the symbolic language of animals in Western Christian art and literature. The earliest bestiary in the form in which it was later popularized was an anonymous 2nd-century Greek volume called the "Physiologus", which itself summarized ancient knowledge and wisdom about animals from the writings of classical authors, such as Aristotle's "Historia Animalium" and various works by Herodotus, Pliny the Elder, Solinus, Aelian and other naturalists. Following the "Physiologus", Saint Isidore of Seville (in Book XII of the "Etymologiae") and Saint Ambrose expanded the religious message with reference to passages from the Bible and the Septuagint. They and other authors freely expanded or modified pre-existing models, constantly refining the moral content while showing little interest in, or having little access to, further factual detail. Nevertheless, the often fanciful accounts of these beasts were widely read and generally believed to be true. A few observations found in bestiaries, such as the migration of birds, were discounted by the natural philosophers of later centuries, only to be rediscovered in the modern scientific era. Medieval bestiaries are remarkably similar in the sequence of the animals they treat. Bestiaries were particularly popular in England and France around the 12th century and were mainly compilations of earlier texts. The Aberdeen Bestiary is one of the best known of the over 50 manuscript bestiaries surviving today. Bestiaries influenced early heraldry in the Middle Ages, giving ideas for charges and for the artistic form, and they continue to inspire coats of arms created in our own time. Two illuminated Psalters, the Queen Mary Psalter (British Library Ms. Royal 2B, vii) and the Isabella Psalter (State Library, Munich), contain full bestiary cycles. The bestiary in the Queen Mary Psalter is found in the "marginal" decorations that occupy about the bottom quarter of the page, and are unusually extensive and coherent in this work. In fact, the bestiary has been expanded to ninety animals, beyond its source in the Norman bestiary of Guillaume le Clerc. Some are placed in the text to make correspondences with the psalm they are illustrating. The Italian artist Leonardo da Vinci also made his own bestiary. A "volucrary" is a similar collection of the symbols of birds that is sometimes found in conjunction with bestiaries. The most widely known volucrary in the Renaissance was Johannes de Cuba's "Gart der Gesundheit", which describes 122 birds and which was printed in 1485. The contents of medieval bestiaries were often compiled from older textual sources and accounts of animals, such as the "Physiologus". Medieval bestiaries contained detailed descriptions and illustrations of species native to Western Europe, exotic animals, and what in modern times are considered to be imaginary animals.
Descriptions of the animals included the physical characteristics associated with the creature, although these were often physiologically incorrect, along with the Christian morals that the animal represented. The description was then often accompanied by an artistic illustration of the animal as described in the bestiary. Bestiaries were organized in different ways based upon the sources they drew upon. The descriptions could be organized by animal groupings, such as terrestrial and marine creatures, or presented in alphabetical order. However, the texts made no distinction between existing and imaginary animals. Descriptions of creatures such as dragons, unicorns, basilisks, griffins and the caladrius were common in such works and found intermingled amongst accounts of bears, boars, deer, lions, and elephants. This lack of separation has often been associated with the assumption that people of the time believed in what the modern period classifies as nonexistent or "imaginary" creatures. However, this assumption is currently under debate, with various explanations being offered. Some scholars, such as Pamela Gravestock, have written on the theory that medieval people did not actually think such creatures existed but instead focused on the belief in the importance of the Christian morals these creatures represented, and that the importance of the moral did not change regardless of whether the animal existed or not. The historian of science David C. Lindberg pointed out that medieval bestiaries were rich in symbolism and allegory, so as to teach moral lessons and entertain, rather than to convey knowledge of the natural world. In modern times, artists such as Henri de Toulouse-Lautrec and Saul Steinberg have produced their own bestiaries. Jorge Luis Borges wrote a contemporary bestiary of sorts, the "Book of Imaginary Beings", which collects imaginary beasts from bestiaries and fiction. Nicholas Christopher wrote a literary novel called "The Bestiary" (Dial, 2007) that describes a lonely young man's efforts to track down the world's most complete bestiary. John Henry Fleming's "Fearsome Creatures of Florida" (Pocol Press, 2009) borrows from the medieval bestiary tradition to impart moral lessons about the environment. Caspar Henderson's "The Book of Barely Imagined Beings" (Granta 2012, Chicago University Press 2013), subtitled "A 21st Century Bestiary", explores how humans imagine animals in a time of rapid environmental change. In July 2014, Jonathan Scott wrote "The Blessed Book of Beasts" (Eastern Christian Publications), featuring 101 animals from the various translations of the Bible, in keeping with the tradition of the bestiary found in the writings of the Saints, including Saint John Chrysostom.
https://en.wikipedia.org/wiki?curid=4757
Baroque dance Baroque dance is dance of the Baroque era (roughly 1600–1750), closely linked with Baroque music, theatre, and opera. The majority of surviving choreographies from the period are English country dances, such as those in the many editions of Playford's "The Dancing Master". Playford gives only the floor patterns of the dances, with no indication of the steps. However, other sources of the period, such as the writings of the French dancing-masters Feuillet and Lorin, indicate that steps more complicated than simple walking were used at least some of the time. English country dance survived well beyond the Baroque era and eventually spread in various forms across Europe and its colonies, and to all levels of society. The great innovations in dance in the 17th century originated at the French court under Louis XIV, and it is here that we see the first clear stylistic ancestor of classical ballet. The same basic technique was used both at social events and as theatrical dance in court ballets and at public theatres. The style of dance is commonly known to modern scholars as the "French noble style" or "belle danse" (French, literally "beautiful dance"); however, it is often referred to casually as "baroque dance", in spite of the existence of other theatrical and social dance styles during the baroque era. Primary sources include more than three hundred choreographies in Beauchamp–Feuillet notation, as well as manuals by Raoul Auger Feuillet and Pierre Rameau in France, Kellom Tomlinson and John Weaver in England, and Gottfried Taubert in Germany (in Leipzig, Saxony). This wealth of evidence has allowed modern scholars and dancers to recreate the style, although areas of controversy still exist. The standard modern introduction is Hilton. French dance types include the bourrée, courante, gavotte, gigue, menuet, passepied, rigaudon and sarabande, among others; the English, working in the French style, added their own hornpipe to this list. Many of these dance types are familiar from baroque music, perhaps most spectacularly in the stylized suites of J. S. Bach. Note, however, that the allemandes that occur in these suites do not correspond to a French dance of the same period. The French noble style was danced both at social events and by professional dancers in theatrical productions such as opera-ballets and court entertainments. However, 18th-century theatrical dance had at least two other styles: comic or grotesque, and semi-serious. Other dance styles, such as the Italian and Spanish dances of the period, are much less well studied than either English country dance or the French style. The general picture seems to be that during most of the 17th century a style of late Renaissance dance was widespread, but as time progressed, French ballroom dances such as the minuet were widely adopted at fashionable courts. Beyond this, the evolution and cross-fertilisation of dance styles is an area of ongoing research. The revival of baroque music in the 1960s and '70s sparked renewed interest in 17th- and 18th-century dance styles. While some 300 of these dances had been preserved in Beauchamp–Feuillet notation, it was not until the mid-20th century that serious scholarship commenced in deciphering the notation and reconstructing the dances. Perhaps best known among these pioneers was Britain's Melusine Wood, who published several books on historical dancing in the 1950s. Wood passed her research on to her student Belinda Quirey, and also to Pavlova Company ballerina and choreographer Mary Skeaping (1902–1984).
The latter became well known for her reconstructions of baroque ballets for London's "Ballet for All" company in the 1960s. The leading figures of the second generation of historical dance research include Shirley Wynne and her Baroque Dance Ensemble, founded at Ohio State University in the early 1970s, and Wendy Hilton (1931–2002), a student of Belinda Quirey who supplemented the work of Melusine Wood with her own research into original sources. A native of Britain, Hilton arrived in the U.S. in 1969, joining the faculty of the Juilliard School in 1972 and establishing her own baroque dance workshop at Stanford University in 1974, which endured for more than 25 years. Catherine Turocy (b. circa 1950) began her studies in baroque dance in 1971 as a student of dance historian Shirley Wynne. She founded The New York Baroque Dance Company in 1976 with Ann Jacoby, and the company has since toured internationally. In 1982–83, as part of the French national celebration of Jean-Philippe Rameau's 300th birthday, Turocy choreographed the first production of Rameau's "Les Boréades", which had never been performed during the composer's lifetime. This French-supported production, with John Eliot Gardiner conducting his orchestra, was directed by Jean-Louis Martinoty. Turocy has been decorated as a Chevalier in the Ordre des Arts et des Lettres by the French government. In 1973, French dance historian Francine Lancelot (1929–2003) began her formal studies in ethnomusicology, which later led her to research French traditional dance forms and eventually Renaissance and Baroque dances. In 1980, at the invitation of the French Minister of Culture, she founded the baroque dance company "Ris et Danceries". Her choreography for the landmark 1986 production of Lully's 1676 tragédie lyrique "Atys" was part of the national celebration of the 300th anniversary of Lully's death. This production propelled the career of William Christie and his ensemble Les Arts Florissants. Since the Ris et Danceries company was disbanded around 1993, choreographers from the company have continued with their own work. Béatrice Massin, with her "Compagnie Fêtes Galantes", and Marie-Geneviève Massé, with her company "L'Eventail", are among the most prominent. In 1995 Francine Lancelot's catalogue raisonné of baroque dance, entitled "La Belle Dance", was published.
https://en.wikipedia.org/wiki?curid=4763
Borzoi The borzoi (Russian for "fast"), also called the Russian wolfhound, is a breed of domestic dog. Descended from dogs brought to Russia from central Asian countries, it is similar in shape to a greyhound and is also a member of the sighthound family. The system by which Russians over the ages named their sighthounds was a series of descriptive terms rather than actual names. "Borzói" is the masculine singular form of an archaic Russian adjective meaning "fast"; "borzaya sobaka" ("fast dog") is the basic term used by Russians, though "sobaka" is usually dropped. The name "psovaya" derives from the word "psovina", meaning "wavy, silky coat", just as "hortaya" (as in hortaya borzaya) means shorthaired. In modern Russian, the breed commonly called the borzoi is officially known as "russkaya psovaya borzaya". Other Russian sighthound breeds are the "stepnaya borzaya" (from the steppe) and the "krymskaya borzaya" (from the Crimea). The most commonly used plural form is the regular formation "borzois", which is the only plural cited in most dictionaries; however, the Borzoi Club of America and the Borzoi Club UK both prefer "borzoi" for both singular and plural. Borzois are large Russian sighthounds that resemble some central Asian breeds such as the Afghan hound, the Saluki, and the Kyrgyz Taigan. Borzois can generally be described as "long-haired greyhounds". Borzois come in virtually any colour. The borzoi coat is silky and flat, often wavy or slightly curly. The long top-coat is quite flat, with varying degrees of waviness or curling. The soft undercoat thickens during winter or in cold climates, but is shed in hot weather to prevent overheating. In its texture and distribution over the body, the borzoi coat is unique. There should be a frill on the neck, as well as feathering on the hindquarters and tail. Borzoi males are usually noticeably larger and heavier than females. Despite their size, the overall impression is of streamlining and grace, with a curvy shapeliness and compact strength. The borzoi is an athletic and independent breed of dog. Most borzois are fairly quiet, barking only on rare occasions. They do not have strong territorial drives and cannot be relied on to raise the alarm upon sighting a human intruder. The borzoi requires patient, experienced handling. They are gentle and highly sensitive dogs with a natural respect for humans, and as adults they are decorative couch potatoes with remarkably gracious house manners. Borzois do not generally display dominance or aggression towards people, but can turn aggressive if handled roughly. Typically, they are rather reserved with strangers but affectionate with people they know well. Their sensitivity to invasion of their personal space can make them nervous around children unless they are brought up with them. Borzois adapt well to suburban life, provided they have a spacious yard and regular opportunities for free exercise. A common misunderstanding about the intelligence of breeds in the Hound group stems from their independent nature, and from the frequent confusion between the concepts of "intelligence" and "obedience" in discussions of canine brainpower. Stanley Coren's survey of canine obedience trainers, published in "The Intelligence of Dogs", reported that borzois obeyed the first command less than 25% of the time. Coren's test, however, was by his own admission heavily weighted towards the "obedience" interpretation of intelligence and based on a better understanding of "working" breeds than of hounds.
Unfortunately, the publicity given to this report has led to unfair denigration of breeds which are under-represented in obedience clubs and poorly understood by the average obedience trainer. "Work" for hound breeds is done out of hearing and often out of sight of the human companion; it is an activity for which the dogs are "released", rather than one which is "commanded". In terms of obedience, borzois are selective learners who quickly become bored with repetitive, apparently pointless activity, and they can be very stubborn when not properly motivated. For example, food rewards, or "baiting", may work well for some individuals but not at all for others. Nevertheless, borzois are definitely capable of enjoying and performing well in competitive obedience and agility trials with the right kind of training. Like other sighthounds, they are very sensitive and do not cope well with harsh treatment or training based on punishment, and will be extremely unhappy if raised voices and threats are a part of their daily life. However, like any intelligent dog, borzois respond extremely well to the guidance, support, and clear communication of benevolent human leadership. Borzois were bred to pursue, or "course", game and have a powerful instinct to chase things that run from them, including cats and small dogs. Built for speed and endurance, they can cover long distances in a very short time. A fully fenced yard is a necessity for maintaining any sighthound. They are highly independent and will range far and wide without containment, with little regard for road traffic. For off-leash exercise, a borzoi needs a very large field or park, either fully fenced or well away from any roads, to ensure its safety. Borzois are born with specialized coursing skills, but these are quite different from the dog-fighting instincts seen in some breeds. It is quite common for borzois at play to course (i.e., run down) another dog, seize it by the neck and hold it immobile. Young pups do this with their littermates, trading off as to who is the prey. It is a specific hunting behaviour, not a fighting or territorial-domination behaviour. Borzois can be raised very successfully to live with cats and other small animals, provided they are introduced to them as puppies. Some, however, possess the hunting instinct to such a degree that they find it impossible not to chase a cat that is moving quickly; the instinct is triggered by movement, and much depends on how the cat behaves. Stated life expectancy is 10 to 12 years. The median lifespan, based on a UK Kennel Club survey, is 9 years 1 month. One in five dogs died of old age, at an average of 10 to 11.5 years, and the longest-lived dog in the survey reached 14 years 3 months. Dogs that are physically fit and vigorous in their youth through middle age are more vigorous and healthy as elderly dogs, all other factors being equal. In the UK, cancer and cardiac problems seem to be the most frequent causes of premature death. Like its native relative the Hortaya Borzaya, the borzoi is a relatively sound breed. Osteochondritis dissecans (OCD) and hip and elbow dysplasia have remained almost unknown, as were congenital eye and heart diseases before the 1970s. However, in some countries modern breeding practices have introduced a few problems. As with other very deep-chested breeds, gastric dilatation volvulus (also known as bloat) is the most common serious health problem in the borzoi. This life-threatening condition is believed to be anatomical rather than strictly genetic in origin.
One common recommendation in the past was to raise the dog's food bowl when it eats; however, studies have shown that this may actually increase the risk of bloat. Less common are cardiac problems, including cardiomyopathy and cardiac arrhythmia disorders. A controversy exists as to the presence of progressive retinal atrophy in the breed. A condition identified as borzoi retinopathy is seen in some individuals, usually active dogs, which differs from progressive retinal atrophy in several ways. First, it is unilateral and rarely seen in animals less than three years of age; second, a clear-cut pattern of inheritance has not been demonstrated; and finally, most affected individuals do not go blind. Correct nutrition during puppyhood is also debated for borzois. These dogs naturally experience enormous growth surges in the first year or two of their lives. It is now widely accepted that forcing even faster growth by feeding a highly concentrated, high-energy diet is dangerous for skeletal development, causing unsoundness and an increased tendency to joint problems and injury. Being built primarily for speed, borzois do not carry large amounts of body fat or muscle, and therefore have a rather different physiology from other dogs of similar size (such as the Newfoundland, St. Bernard, or Alaskan Malamute). Laboratory-formulated diets designed for a generic "large" or "giant" breed are unlikely to take the needs of the big sighthounds into account. The issues involved in raw feeding may be particularly relevant to tall, streamlined breeds such as the borzoi. The Hortaya Borzaya, a very close relative, is traditionally raised on a meagre diet of oats and table scraps, and is also said to be intolerant of highly concentrated kibble feeds. A lean body weight is in itself nothing to be concerned about, and force-feeding of healthy young borzois is definitely not recommended. The borzoi originated in 17th-century Russia from crosses of Arabian greyhounds with a thick-coated breed. The more modern Psovaya Borzaya was founded on the Stepnaya, the Hortaya and the Ukrainian-Polish version of the old Hort. There were also imports of Western sighthound breeds to add height and weight. The breed was also crossed with the Russian Laika, specifically to add resistance to northern cold and a longer, thicker coat than the southern sighthounds possessed. All of these foundation types—Tazi, Hortaya, Stepnaya, Krimskaya, and Hort—already possessed the instincts and agility necessary for hunting and bringing down wolves. The Psovoi was popular with the Tsars before the 1917 revolution; for centuries, Psovoi could not be purchased but only given as gifts from the Tsar. Grand Duke Nicholas Nicolaievich of Russia bred countless Psovoi at Perchino, his private estate. The Russian concept of hunting trials was instituted during the era of the Tsars. As well as providing exciting sport, the tests were used for selecting borzoi breeding stock; only the quickest and most intelligent hunting dogs went on to produce progeny. For the aristocracy these trials were a well-organized ceremony, sometimes going on for days, with the borzois accompanied by mounted hunters and Foxhounds on the Russian steppe. Hares and other small game were by far the most numerous kills, but the hunters especially loved to test their dogs on wolves. If a wolf was sighted, the hunter would release a team of two or three borzois.
The dogs would pursue the wolf, attack its neck from both sides, and hold it until the hunter arrived; the classic kill was made by the human hunter with a knife. Wolf trials are still a regular part of the hunting diploma for all Russian sighthound breeds of the relevant type, either singly or in pairs or trios, in their native country. After the 1917 Revolution, wolf hunting with sighthounds soon went out of fashion as an "aristocratic" and costly, time-consuming way of hunting. There was no longer any need for a wolf-catching sighthound: alongside the old, proven technique of battue hunting with baits and flags, new and far more effective methods appeared, such as hunting from airplanes and propeller sleighs and with electronic lure whistles. For decades, the few remaining generations of sighthounds were regarded as suited to hunting only if they showed enough attacking initiative for fox hunting. Rumours of the persecution of sighthounds in post-revolutionary Russia are a modern legend, possibly based on similar incidents in Maoist China. In the late 1940s, a Soviet soldier named Constantin Esmont made detailed records of the various types of borzoi he found in Cossack villages; his illustrations have since been published. Esmont was concerned that the distinct types of borzaya were in danger of degenerating without a controlled system of breeding. He convinced the Soviet government that borzois were a valuable asset to the hunters who supported the fur industry, and henceforth their breeding was officially regulated. To this day, the short-haired Hortaya Borzaya is a highly valued hunting dog on the steppes, while the long-haired Psovaya Borzaya is going through a difficult period of restoring its working qualities after decades of a largely show-ring existence. Exports of borzois to other countries were extremely rare during the Soviet era. However, enough had been taken to England, Scandinavia, Western Europe, and America in the late 19th century for the breed to establish itself outside its native country. In 2004, the UK Kennel Club held its fourth temporary exhibition, "The Borzoi in Art", which offered unique insights into the borzoi and how the breed has been depicted in art throughout the 19th and 20th centuries. The exhibition included paintings, bronzes, and porcelain which had previously not been available to the public, and ran from 27 September to 3 December. The borzoi is frequently found in works of the art deco period.
https://en.wikipedia.org/wiki?curid=4764
Basenji The Basenji is a breed of hunting dog bred from stock that originated in central Africa. Most of the major kennel clubs in the English-speaking world place the breed in the hound group, more specifically in the sighthound type. The Fédération Cynologique Internationale places the breed in its group five, and the United Kennel Club places it in its Sighthound and pariah group. The Basenji produces an unusual yodel-like sound, due to its unusually shaped larynx; this trait gives the Basenji the nickname "barkless dog". Basenjis share many unique traits with pariah dog types. Like dingoes, New Guinea singing dogs and some other breeds such as the Tibetan mastiff, Basenjis come into estrus only once annually, whereas other dog breeds may have two or more breeding seasons every year. Both dingoes and Basenjis lack a distinctive odor, and are prone to howls, yodels, and other vocalizations rather than the characteristic bark of modern dog breeds. One theory holds that the latter trait is the result of selection against dogs that frequently bark, because barking could lead enemies to humans' forest encampments. While dogs that resemble the Basenji in some respects are commonplace over much of Africa, the breed's original foundation stock came from the old-growth forest regions of the Congo Basin, where its structure and type were fixed by adaptation to its habitat, as well as by use. The Basenji has been identified as a basal breed that predates the emergence of the modern breeds in the 19th century. Recent DNA studies based on whole-genome sequences indicate that the domestic dog is a genetically divergent subspecies of the gray wolf and was derived from a now-extinct ghost population of Late Pleistocene wolves, and the Basenji and the dingo are both considered to be basal members of the domestic dog clade. The Azande and Mangbetu people of the northeastern Congo region know Basenjis by a local Lingala name that translates as 'dogs of the savages', or 'dogs of the villagers'. In the Congo, the Basenji is also known as the "dog of the bush". The Azande of southern Sudan have their own plural name for the dogs, and a name in Swahili, another Bantu language from East Africa, translates to 'wild dog'. Another local name is "m'bwa m'kube m'bwa wamwitu", or 'dog that jumps up and down', a reference to their tendency to jump straight up to spot their quarry. Basenji-like dogs are depicted in drawings and models dating back to the Twelfth Dynasty of Egypt. Dogs of this type were originally kept for hunting small game by tracking and driving the game into nets. Europeans first described the type of dog from which the Basenji breed derives in 1895, in the Congo. These local dogs, which Europeans identified as a unique breed and called "basenji", were prized by locals for their intelligence, courage, speed, and silence. Stanley Coren's book "The Intelligence of Dogs" questions this reputation for intelligence, ranking the breed 78th out of 79, the second-lowest rank. Some consider this an unreliable list, as it focuses only on the ability to obey a first command; others consider independent dogs such as Basenjis and Afghan Hounds more intelligent than obedient breeds because of their ability to recognize which actions benefit them and which simply please another. Several attempts were made to bring the breed to England, but the earliest imports succumbed to disease.
In 1923, for example, Lady Helen Nutting brought six Basenjis with her from Sudan, but all six died from distemper shots they received in quarantine. It was not until the 1930s that foundation stock was successfully established in England, and then in the United States by animal importer Henry Trefflich. It is likely that nearly all the Basenjis in the Western world are descended from these few original imports. The breed was officially accepted into the American Kennel Club (AKC) in 1943. In 1990, the AKC stud book was reopened to 14 new imports at the request of the Basenji Club of America. The stud book was reopened again to selected imported dogs from 1 January 2009 to 31 December 2013. In 2010, an American-led expedition collected breeding stock in villages in the Basankusu area of the Democratic Republic of the Congo. Basenjis are also registered with the United Kennel Club. The popularity of the Basenji in the United States, according to the AKC, has declined over the past decade, with the breed ranked 71st in 1999, decreasing to 84th in 2006, and to 93rd in 2011. Basenjis are small, short-haired dogs with erect ears, tightly curled tails and graceful necks. A Basenji's forehead is wrinkled, even more so when the dog is young or extremely excited. A Basenji's eyes are typically almond-shaped. Basenjis typically weigh about and stand at the shoulder. They are a square breed, which means they are as long as they are tall, with males usually larger than females. Basenjis are athletic dogs, and deceptively powerful for their size. They have a graceful, confident gait like a trotting horse, and skim the ground in a double suspension gallop, with their characteristic curled tail straightened out for greater balance when running at top speed. Basenjis come in a few different colorations: red, black, tricolor, and brindle, and all have white feet, chests and tail tips. They can also come in "trindle", a rare combination of tricolor with brindle points. The Basenji is alert, energetic, curious and reserved with strangers, and tends to become emotionally attached to a single human. Basenjis may not get along with non-canine pets. Much like cats, Basenjis dislike wet weather and will often refuse to go outside in any sort of damp conditions. They like to climb, and can easily scale chain-link fences. Basenjis often stand on their hind legs, somewhat like a meerkat, by themselves or leaning on something; this behavior is often observed when the dog is curious about something. Basenjis have a strong prey drive and will go after cats and other small animals. According to the book "The Intelligence of Dogs", they are the second least trainable dog when required to follow human commands (behind only the Afghan Hound); their real intelligence manifests when they are required to actually "think". The only completed health survey of dog breeds that includes the Basenji was conducted by the UK Kennel Club in 2004. The survey indicated that the most prevalent health issues among Basenjis were dermatitis (9% of responses), incontinence and bladder infection (5%), hypothyroidism (4%), and pyometra and infertility (4%). Basenjis in the 2004 UK Kennel Club survey had a median lifespan of 13.6 years (sample size of 46 deceased dogs), which is 1–2 years longer than the median lifespan of other breeds of similar size. The oldest dog in the survey was 17.5 years.
The most common causes of death were old age (30%), urologic conditions (incontinence, Fanconi syndrome, chronic kidney failure; 13%), behavior ("unspecified" and aggression; 9%), and cancer (9%). Fanconi syndrome, an inheritable disorder in which the renal (kidney) tubes fail to reabsorb electrolytes and nutrients, is unusually common in Basenjis. Symptoms include excessive drinking, excessive urination, and glucose in the urine, which may lead to a misdiagnosis of diabetes. Fanconi syndrome usually presents between 4 and 8 years of age, but sometimes as early as 3 years or as late as 10 years. Fanconi syndrome is treatable, and organ damage is reduced if treatment begins early. Basenji owners are advised to test their dog's urine for glucose once a month beginning at the age of 3 years. Glucose testing strips designed for human diabetics are inexpensive and available at most pharmacies. A Fanconi disease management protocol has been developed that can be used by veterinarians to treat Fanconi-afflicted dogs. In 2007, the first linked-marker DNA test was released for predicting Fanconi syndrome in Basenjis. With this test, it is possible to more accurately determine the probability of a dog's carrying the gene for Fanconi syndrome; dogs tested with it are assigned a probable carrier status. The linkage test is provided as a tool to assist breeders while research continues towards the development of a direct Fanconi test. Basenjis sometimes carry a simple recessive gene that, when homozygous for the defect, causes genetic hemolytic anemia (HA). Most 21st-century Basenjis are descended from ancestors that have tested clean. When lineage from a fully tested line (set of ancestors) cannot be completely verified, the dog should be tested before breeding. As this is a non-invasive DNA test, a Basenji can be tested for HA at any time. Basenjis sometimes suffer from hip dysplasia, resulting in loss of mobility and arthritis-like symptoms. All dogs should be tested by either the OFA or PennHIP prior to breeding. Malabsorption, or immunoproliferative enteropathy, is an autoimmune intestinal disease that leads to anorexia, chronic diarrhea, and even death. A special diet can improve the quality of life for afflicted dogs. The breed can also fall victim to progressive retinal atrophy (a degeneration of the retina causing blindness) and several less serious hereditary eye problems such as coloboma (a hole in the eye structure) and persistent pupillary membrane (tiny threads across the pupil).
https://en.wikipedia.org/wiki?curid=4765
Brit milah The brit milah ("covenant of circumcision"; Yiddish: "bris") is a Jewish religious male circumcision ceremony performed by a mohel ("circumciser") on the eighth day of the infant's life. The "brit milah" is followed by a celebratory meal ("seudat mitzvah"). According to the Hebrew Bible, God commanded the Biblical patriarch Abraham to be circumcised, an act to be followed by his descendants. Leviticus 12:3 likewise provides: "And in the eighth day the flesh of his foreskin shall be circumcised." According to the Hebrew Bible, it was "a reproach" for an Israelite to be uncircumcised (Joshua 5:9). The term "arelim" ("uncircumcised" [plural]) is used opprobriously, denoting the Philistines and other non-Israelites (I Samuel 14:6, 31:4; II Samuel 1:20) and used in conjunction with "tameh" (unpure) for heathen (Isaiah 52:1). The word "arel" ("uncircumcised" [singular]) is also employed for "impermeable" (Leviticus 26:41, "their uncircumcised hearts"; compare Jeremiah 9:25; Ezekiel 44:7, 9); it is also applied to the first three years' fruit of a tree, which is forbidden (Leviticus 19:23). However, the Israelites born in the wilderness after the Exodus from Egypt were not circumcised. Joshua 5:2–9 explains that "all the people that came out" of Egypt were circumcised, but those "born in the wilderness" were not. Therefore, Joshua, before the celebration of the Passover, had them circumcised at Gilgal, specifically before they entered Canaan. Abraham, too, was circumcised when he moved into Canaan. The prophetic tradition emphasizes that God expects people to be good as well as pious, and that non-Jews will be judged based on their ethical behavior (see Noahide Law). Thus, Jeremiah 9:25–26 says that circumcised and uncircumcised will be punished alike by the Lord; for "all the nations are uncircumcised, and all the house of Israel are uncircumcised in heart." The penalty of non-observance is "kareth" (spiritual excision from the Jewish nation). Conversion to Judaism for non-Israelites in Biblical times necessitated circumcision; otherwise one could not partake in the Passover offering. Today, as in the time of Abraham, it is required of converts in Orthodox and Conservative Judaism. As found in Genesis 17:1–14, "brit milah" is considered to be so important that should the eighth day fall on the Sabbath, actions that would normally be forbidden because of the sanctity of the day are permitted in order to fulfill the requirement to circumcise. The Talmud, when discussing the importance of milah, compares it to being equal to all other mitzvot (commandments), based on the gematria for "brit" of 612 (the letters bet, resh, yud, and tav carry the values 2 + 200 + 10 + 400 = 612; Tractate Nedarim 32a). Covenants in ancient times were sometimes sealed by severing an animal, with the implication that the party who breaks the covenant will suffer a similar fate. In Hebrew, the verb meaning "to seal a covenant" translates literally as "to cut". It is presumed by Jewish scholars that the removal of the foreskin symbolically represents such a sealing of the covenant. Because Jesus underwent this ceremony as a Jewish child, as recorded in the Gospel of Luke, memory of the tradition has been preserved in traditional Christian churches. The Feast of the Circumcision of Christ is kept as a feast eight days after the Nativity in a number of churches, including the Eastern Orthodox Church, the Catholic Church, the Lutheran churches and some Anglican Communion churches.
In Orthodox Christian tradition, children are officially named on the eighth day after birth with special naming prayers. Significantly, the tradition of baptism universally replaced circumcision among Christians as the primary rite of passage, as found in Paul's Epistle to the Colossians and in the Acts of the Apostles. A mohel is a Jew trained in the practice of "brit milah", the "covenant of circumcision". According to traditional Jewish law, in the absence of a grown free Jewish male expert, anyone who has the required skills is also authorized to perform the circumcision, provided that they are Jewish. However, most streams of non-Orthodox Judaism allow female mohels, called "mohalot" (plural of "mohelet", the feminine of "mohel"), without restriction. In 1984, Deborah Cohen became the first certified Reform mohelet; she was certified by the Berit Mila program of Reform Judaism. It is customary for the brit to be held in a synagogue, but it can also be held at home or any other suitable location. The brit is performed on the eighth day from the baby's birth, taking into consideration that according to the Jewish calendar, the day begins at the sunset of the day before. If the baby is born on Sunday before sunset, the brit will be held the following Sunday; however, if the baby is born on Sunday night after sunset, the brit is on the following Monday (this counting rule is illustrated in the sketch at the end of this section). The brit takes place on the eighth day following birth even if that day is Shabbat or a holiday. A brit is traditionally performed in the morning, but it may be performed any time during daylight hours. The Talmud explicitly instructs that a boy must not be circumcised if he had two brothers who died due to complications arising from their circumcisions, and Maimonides says that this excludes paternal half-brothers. This may be due to a concern about hemophilia. An Israeli study found a high rate of urinary tract infections if the bandage is left on too long. If the child is born prematurely or has other serious medical problems, the brit milah will be postponed until the doctors and mohel deem the child strong enough for his foreskin to be surgically removed. In recent years, the circumcision of adult Jews who were not circumcised as infants has become more common. In such cases, the brit milah will be done at the earliest date that can be arranged. The actual circumcision will be private, and other elements of the ceremony (e.g., the celebratory meal) may be modified to accommodate the desires of the one being circumcised. Most prominent "acharonim" rule that the "mitzvah" of brit milah lies in the pain it causes, and anesthetic, sedation, or ointment should generally not be used. However, it is traditionally common to feed the infant a drop of wine or other sweet liquid to soothe him. Eliezer Waldenberg, Yechiel Yaakov Weinberg, Shmuel Wosner, Moshe Feinstein and others agree that the child should not be sedated, although pain-relieving ointment may be used under certain conditions; Shmuel Wosner particularly asserts that the act ought to be painful, per Psalms 44:23. Regarding an adult circumcision, pain is ideal, but not mandatory. In a letter to the editor published in "The New York Times" on January 3, 1998, Rabbi Moshe David Tendler disagrees with the above and writes, "It is a biblical prohibition to cause anyone unnecessary pain". Rabbi Tendler recommends the use of an analgesic cream. Lidocaine should not be used, however, because it has been linked to several pediatric near-death episodes.
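The eighth-day counting rule described above can be expressed procedurally. The following Python fragment is a minimal sketch of that rule only, not a substitute for rabbinic guidance: it assumes a fixed, hypothetical sunset time, whereas real sunset varies by date and location, and it ignores the postponements for prematurity or illness discussed above.

from datetime import date, datetime, time, timedelta

def brit_date(birth: datetime, sunset: time = time(18, 0)) -> date:
    """Return the civil date of the brit under the eighth-day rule.

    A birth after sunset belongs to the next Jewish day, so counting
    starts from the following civil date. 'sunset' is a hypothetical
    fixed parameter; real calculations use location-specific times.
    """
    day_one = birth.date()
    if birth.time() >= sunset:
        day_one += timedelta(days=1)  # after sunset: next Jewish day
    # The day of birth counts as day one, so the brit is 7 days later.
    return day_one + timedelta(days=7)

# Born Sunday 2023-01-01 at 20:30 (after the assumed sunset): day one
# is Monday, so the brit falls on the following Monday, 2023-01-09.
print(brit_date(datetime(2023, 1, 1, 20, 30)))

For a birth on the same Sunday before sunset, the function returns the following Sunday, matching the rule stated above.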
The title of "kvater" among Ashkenazi Jews is for the person who carries the baby from the mother to the father, who in turn carries him to the "mohel". This honor is usually given to a couple without children, as a merit or "segula" (efficacious remedy) that they should have children of their own. The origin of the term is Middle High German "gevater(e)" ("godfather"). After the ceremony, a celebratory meal takes place. At the "birkat hamazon", additional introductory lines, known as "Nodeh Leshimcha", are added. These lines praise God and request the permission of God, the Torah, Kohanim and distinguished people present to proceed with the grace. When the four main blessings are concluded, special "ha-Rachaman" prayers are recited, requesting various blessings from God. At the neonatal stage, the inner preputial epithelium is still linked with the surface of the glans. The "mitzvah" is executed only when this epithelium is either removed, or permanently peeled back to uncover the glans. In medical circumcisions performed by surgeons, the epithelium is removed along with the foreskin, to prevent postoperative penile adhesion and its complications. However, in ritual circumcisions performed by a mohel, the epithelium is most commonly peeled off only after the foreskin has been amputated. This procedure is called "priah", which means 'uncovering'. The main goal of "priah" (also known as "bris periah") is to remove as much of the inner layer of the foreskin as possible and prevent the movement of the shaft skin, which creates the look and function of what is known as a "low and tight" circumcision. According to Rabbinic interpretation of traditional Jewish sources, "priah" has been performed as part of the Jewish circumcision since the Israelites first inhabited the Land of Israel. However, the "Oxford Dictionary of the Jewish Religion" states that many Hellenistic Jews attempted to restore their foreskins, and that similar action was taken during the Hadrianic persecution, a period in which a prohibition against circumcision was issued. Thus, the writers of the dictionary hypothesize that the more severe method practiced today was probably begun in order to prevent the possibility of restoring the foreskin after circumcision, and therefore the rabbis added the requirement of cutting the foreskin in "periah". The frenulum may also be cut away at the same time, in a procedure called frenectomy. According to Shaye J. D. Cohen, in "Why Aren't Jewish Women Circumcised?: Gender and Covenant in Judaism" (p. 25), the Torah only commands circumcision (milah). David Gollaher has written that the rabbis added the procedure of "priah" to discourage men from trying to restore their foreskins: "Once established, priah was deemed essential to circumcision; if the mohel failed to cut away enough tissue, the operation was deemed insufficient to comply with God's covenant", and "Depending on the strictness of individual rabbis, boys (or men thought to have been inadequately cut) were subjected to additional operations." During the circumcision itself, a guard is slid over the foreskin as close to the glans as possible, to allow for maximum removal of the foreskin without any injury to the glans; a scalpel is used to detach the foreskin. A tube may be used for "metzitzah". In addition to milah (the actual circumcision) and priah, mentioned above, the Talmud (Mishnah Shabbat 19:2) mentions a third step, "metzitzah", translated as suction, as one of the steps involved in the circumcision rite.
The Talmud writes that a "Mohel (Circumciser) who does not suck creates a danger, and should be dismissed from practice". Rashi, on that Talmudic passage, explains that this step is in order to draw some blood from deep inside the wound to prevent danger to the baby. There are other modern antiseptic and antibiotic techniques, all used as part of the "brit milah" today, which many say accomplish the intended purpose of "metzitzah"; however, since "metzitzah" is one of the four steps to fulfill the mitzvah, it continues to be practiced by a minority of Orthodox and Hassidic Jews. The ancient method of performing "metzitzah b'peh", or oral suction, has become controversial. The process has the "mohel" place his mouth directly on the circumcision wound to draw blood away from the cut. The majority of Jewish circumcision ceremonies do not use "metzitzah b'peh", but some Haredi Jews use it. It has been documented that the practice poses a serious risk of spreading herpes to the infant. Proponents maintain that there is no conclusive evidence that links herpes to "metzitza", and that attempts to limit this practice infringe on religious freedom. The practice has become a controversy in both secular and Jewish medical ethics. The ritual of "metzitzah" is found in Mishnah Shabbat 19:2, which lists it as one of the four steps involved in the circumcision rite. Rabbi Moses Sofer (1762–1839) observed that the Talmud states that the rationale for this part of the ritual was hygienic, i.e., to protect the health of the child. In a letter to his student, Rabbi Lazar Horowitz of Vienna, the Chasam Sofer issued a leniency (heter), which some consider to have been conditional, permitting "metzitzah" to be performed with a sponge instead of oral suction. This letter was never published among Rabbi Sofer's responsa, but rather in the secular journal "Kochvei Yitzchok", along with letters from Dr. Wertheimer, the chief doctor of the Viennese General Hospital. It relates the story that a mohel (who was suspected of transmitting herpes via "metzizah" to infants) was checked several times and never found to have signs of the disease, and that a ban was requested because of the "possibility of future infections". Moshe Schick (1807–1879), a student of Moses Sofer, states in his book of responsa, "She'eilos u'teshuvos Maharam Schick" (Orach Chaim 152), that Moses Sofer gave the ruling in that specific instance only because the mohel refused to step down and had secular government connections that prevented his removal in favor of another mohel, and that the heter may not be applied elsewhere. He also states ("Yoreh Deah" 244) that the practice is possibly a Sinaitic tradition, i.e., Halacha l'Moshe m'Sinai. Other sources contradict this claim, with copies of Moses Sofer's responsa making no mention of the legal case or of his ruling applying in only one situation. Rather, that responsum makes quite clear that "metzizah" was a health measure and should never be employed where there is a health risk to the infant. Chaim Hezekiah Medini, after corresponding with the greatest Jewish sages of the generation, concluded the practice to be Halacha l'Moshe m'Sinai and elaborated on what prompted Moses Sofer to give the above ruling. He tells the story that a student of Moses Sofer, Lazar Horowitz, Chief Rabbi of Vienna at the time and author of the responsa "Yad Elazer", needed the ruling because of a governmental attempt to ban circumcision completely if it included "metztitzah b'peh".
He therefore asked Sofer to give him permission to do "brit milah" without "metzitzah b'peh". When he presented the defense in secular court, his testimony was erroneously recorded to mean that Sofer stated it as a general ruling. The Rabbinical Council of America (RCA), which claims to be the largest American organization of Orthodox rabbis, published an article by mohel Dr. Yehudi Pesach Shields in the summer 1972 issue of its journal "Tradition", calling for the abandonment of "metzitzah b'peh". Since then, the RCA has issued an opinion that advocates methods that do not involve contact between the mohel's mouth and the open wound, such as the use of a sterile syringe, thereby eliminating the risk of infection. According to the Chief Rabbinate of Israel and the Edah HaChareidis, "metzitzah b'peh" should still be performed. The practice of "metzitzah b'peh" was alleged to pose a serious risk in the transfer of herpes from mohelim to eight Israeli infants, one of whom suffered brain damage. When three New York City infants contracted herpes after "metzizah b'peh" by one "mohel" and one of them died, New York authorities took out a restraining order against the "mohel" requiring use of a sterile glass tube, or pipette. The mohel's attorney argued that the New York Department of Health had not supplied conclusive medical evidence linking his client with the disease. In September 2005, the city withdrew the restraining order and turned the matter over to a rabbinical court. Dr. Thomas Frieden, the Health Commissioner of New York City, wrote, "There exists no reasonable doubt that 'metzitzah b'peh' can and has caused neonatal herpes infection...The Health Department recommends that infants being circumcised not undergo metzitzah b'peh." In May 2006, the Department of Health for New York State issued a protocol for the performance of "metzitzah b'peh". Dr. Antonia C. Novello, Commissioner of Health for New York State, together with a board of rabbis and doctors, worked, she said, to "allow the practice of metzizah b'peh to continue while still meeting the Department of Health's responsibility to protect the public health." In 2012, a two-week-old baby in New York City died of herpes because of "metzitzah b'peh". In three medical papers published in Israel, Canada, and the US, oral suction following circumcision was suggested as a cause in 11 cases of neonatal herpes. Researchers noted that prior to 1997, neonatal herpes reports in Israel were rare, and that the later incidences were correlated with whether the mothers themselves carried the virus. Rabbi Doctor Mordechai Halperin implicates the "better hygiene and living conditions that prevail among the younger generation", which lowered to 60% the rate of young Israeli Chareidi mothers who carry the virus. He explains that an "absence of antibodies in the mothers' blood means that their newborn sons received no such antibodies through the placenta, and therefore are vulnerable to infection by HSV-1." Because of the risk of infection, some rabbinical authorities have ruled that the traditional practice of direct contact should be replaced by using a sterile tube between the wound and the mohel's mouth, so there is no direct oral contact. The Rabbinical Council of America, the largest group of Modern Orthodox rabbis, endorses this method.
The RCA paper states: "Rabbi Schachter even reports that Rav Yosef Dov Soloveitchik reports that his father, Rav Moshe Soloveitchik, would not permit a mohel to perform metzitza be'peh with direct oral contact, and that his grandfather, Rav Chaim Soloveitchik, instructed mohelim in Brisk not to do metzitza be'peh with direct oral contact. However, although Rav Yosef Dov Soloveitchik also generally prohibited metzitza be'peh with direct oral contact, he did not ban it by those who insisted upon it...". The sefer "Mitzvas Hametzitzah" by Rabbi Sinai Schiffer of Baden, Germany, states that he is in possession of letters from 36 major Russian (Lithuanian) rabbis that categorically prohibit "metzitzah" with a sponge and require it to be done orally. Among them is Rabbi Chaim Halevi Soloveitchik of Brisk. In September 2012, the New York Department of Health unanimously ruled that the practice of "metzitzah b'peh" should require informed consent from the parent or guardian of the child undergoing the ritual. Prior to the ruling, several hundred rabbis, including Rabbi David Neiderman, the executive director of the United Jewish Organization of Williamsburg, signed a declaration stating that they would not inform parents of the potential dangers that came with "metzitzah b'peh", even if informed consent became law. In a motion for preliminary injunction with intent to sue, filed against the New York City Department of Health & Mental Hygiene, affidavits by Awi Federgruen, Brenda Breuer, and Daniel S. Berman argued that the study on which the department based its conclusions was flawed. The "informed consent" regulation was challenged in court. In January 2013, the U.S. District Court ruled that the law did not specifically target religion and therefore did not need to pass strict scrutiny. The ruling was appealed to the Court of Appeals. On August 15, 2014, the Second Circuit Court of Appeals reversed the decision by the lower court, and ruled that the regulation does have to be reviewed under strict scrutiny to determine whether it infringes on Orthodox Jews' freedom of religion. On September 9, 2015, after coming to an agreement with the community, the New York City Board of Health voted to repeal the informed consent regulation. A brit milah is more than circumcision; it is a sacred ritual in Judaism, as distinguished from its non-ritual requirement in Islam. One ramification is that the brit is not considered complete unless a drop of blood is actually drawn. The standard medical methods of circumcision through constriction do not meet the requirements of the halakhah for brit milah, because they cause hemostasis, i.e., they stop the flow of blood. Moreover, circumcision alone, in the absence of the brit milah ceremony, does not fulfill the requirements of the mitzvah. Therefore, in the case of a Jew who was circumcised outside of a brit milah, an already-circumcised convert, or an aposthetic (born without a foreskin) individual, the mohel draws a symbolic drop of blood ("hatafat dam brit") from the penis at the point where the foreskin would have been or was attached. A "Milah L'shem giur" is a "circumcision for the purpose of conversion". In Orthodox Judaism, this procedure is usually done by adoptive parents for adopted boys who are being converted as part of the adoption, or by families with young children converting together. It is also required for adult converts who were not previously circumcised, e.g. those born in countries where circumcision at birth is not common.
The conversion of a minor is valid in both Orthodox and Conservative Judaism until the child reaches the age of majority (13 for a boy, 12 for a girl); at that time the child has the option of renouncing his conversion and Judaism, and the conversion will then be considered retroactively invalid. He must be informed of his right to renounce his conversion if he wishes. If he does not make such a statement, it is accepted that the boy is halakhically Jewish. Orthodox rabbis will generally not convert a non-Jewish child raised by a mother who has not converted to Judaism. The laws of conversion and conversion-related circumcision in Orthodox Judaism have numerous complications, and authorities recommend that a rabbi be consulted well in advance. In Conservative Judaism, the "Milah l'shem giur" procedure is also performed for a boy whose mother has not converted, but with the intention that the child be raised Jewish. This conversion of a child to Judaism without the conversion of the mother is allowed by Conservative interpretations of halakha. Conservative rabbis will authorize it only under the condition that the child be raised as a Jew in a single-faith household. Should the mother convert, and if the boy has not yet reached his third birthday, the child may be immersed in the mikveh with the mother, after the mother has already immersed, to become Jewish. If the mother does not convert, the child may be immersed in a mikveh, or body of natural waters, to complete the child's conversion to Judaism. This can be done before the child is even one year old. If the child did not immerse in the mikveh, or the boy was too old, then the child may choose of his own accord to become Jewish at age 13 as a Bar Mitzvah, and complete the conversion then. Where the procedure was performed but not followed by immersion or other requirements of the conversion procedure (e.g., in Conservative Judaism, where the mother has not converted), if the boy chooses to complete the conversion at Bar Mitzvah, a "Milah l'shem giur" performed when the boy was an infant removes the obligation to undergo either a full brit milah or "hatafat dam brit". In "Of the Special Laws, Book 1", the Jewish philosopher Philo (20 BCE – 50 CE) gives six reasons for the practice of circumcision, attributing four of them to "men of divine spirit and wisdom" and adding two of his own. Rabbi Saadia Gaon considers something to be "complete" if it lacks nothing, but also has nothing that is unneeded. He regards the foreskin as an unneeded organ that God created in man, and so by amputating it, the man is completed. Maimonides (Moses ben Maimon, "Rambam", 1135–1204 CE), who apart from being a great Torah scholar was also a physician and philosopher, argued that circumcision serves as a common bodily sign to members of the same faith. He also asserted that the main purpose of the act is to repress sexual pleasure, with the strongest reason being that it is difficult for a woman to separate from an uncircumcised man with whom she has had sex. The author of the Sefer ha-Chinuch provides three further reasons for the practice of circumcision. Talmud professor Daniel Boyarin offered two explanations for circumcision. One is that it is a literal inscription on the Jewish body of the name of God in the form of the letter "yud" (from "yesod").
The second is that the act of bleeding represents a feminization of Jewish men, significant in the sense that the covenant represents a marriage between Jews and (a symbolically male) God. The Reform societies established in Frankfurt and Berlin regarded circumcision as barbaric and wished to abolish it. However, while prominent rabbis such as Abraham Geiger believed the ritual to be barbaric and outdated, they refrained from instituting any change in this matter. In 1843, when a father in Frankfurt refused to circumcise his son, rabbis of all shades in Germany stated it was mandated by Jewish law; even Samuel Holdheim affirmed this. By 1871, Reform rabbinic leadership in Germany reasserted "the supreme importance of circumcision in Judaism", while affirming the traditional viewpoint that the non-circumcised are Jews nonetheless. Although the issue of circumcision of converts continues to be debated, the necessity of brit milah for Jewish infant boys has been stressed in every subsequent Reform rabbi's manual or guide. Since 1984, Reform Judaism has trained and certified over 300 of its own practicing "mohalim" in this ritual. A growing number of contemporary Jews and intactivist Jewish groups in the United States, United Kingdom, and Israel, both religious and secular, choose not to circumcise their sons. Among the reasons for their choice are the claims that circumcision is a form of child abuse involving genital mutilation forced on men and violence against helpless infants, that it violates children's rights, and that it is a dangerous, unnecessary, painful, traumatic and stressful event for the child, which can cause further psychophysical complications down the road, including serious disability and even death. They are assisted by a small number of Reform, Liberal, and Reconstructionist rabbis, and have developed a welcoming ceremony that they call the "Brit shalom" ("Covenant [of] Peace") for such children, also accepted by Humanistic Judaism. The ceremony of "Brit shalom" is not officially approved by the Reform or Reconstructionist rabbinical organizations, which recommend that male infants be circumcised, though the issue of converts remains controversial and circumcision of converts is not mandatory in either movement. The connection of the Reform movement to an anti-circumcision, pro-symbolic stance is a historical one. From the early days of the movement in Germany and Eastern Europe, some classical Reformers hoped to replace ritual circumcision "with a symbolic act, as has been done for other bloody practices, such as the sacrifices". In the US, an official Reform resolution in 1893 announced that converts are no longer mandated to undergo the ritual, and this ambivalence towards the practice has carried over to classical-minded Reform Jews today. In her essay "A Plea for Inclusion", Elyse Wechterman argues that, even in the absence of circumcision, committed Jews should never be turned away, especially by a movement "where no other ritual observance is mandated". She goes on to advocate an alternate covenant ceremony, "brit atifah", for both boys and girls as a welcoming ritual into Judaism. With negativity towards circumcision still present within a minority of modern-day Reform Jews, Judaic scholar Jon Levenson has warned that if they "continue to judge "brit milah" to be not only medically unnecessary but also brutalizing and mutilating ...
the abhorrence of it expressed by some early Reform leaders will return with a vengeance", proclaiming that circumcision will be "the latest front in the battle over the Jewish future in America". Many European Jewish fathers during the nineteenth century chose not to circumcise their sons, including Theodor Herzl. However, unlike many other forms of religious observance, it remained one of the last rituals Jewish communities could enforce. In most of Europe, both the government and the unlearned Jewish masses believed circumcision to be a rite akin to baptism, and the law allowed communities not to register uncircumcised children as Jewish. This legal maneuver spurred several debates addressing the advisability of its use, since many parents later chose to convert to Christianity. In early 20th-century Russia, Chaim Soloveitchik advised his colleagues to reject this measure, stating that uncircumcised Jewish males are no less Jewish than Jews who violate other commandments. Islamic male circumcision is analogous but not identical to Jewish circumcision. Jewish circumcision is tightly bound by ritual timing and tradition; in Islam, by contrast, there is no fixed age for circumcision. According to some traditions, Muhammad was circumcised by his grandfather Abd-al-Muttalib when he was seven days old.
https://en.wikipedia.org/wiki?curid=4768
Business ethics Business ethics (also known as corporate ethics) is a form of applied ethics or professional ethics that examines ethical principles and moral or ethical problems that can arise in a business environment. It applies to all aspects of business conduct and is relevant to the conduct of individuals and entire organizations. These ethics originate from individuals, organizational statements or the legal system. These norms, values, and ethical and unethical practices are the principles that guide a business; they help businesses maintain a better connection with their stakeholders. Business ethics refers to contemporary organizational standards, principles, sets of values and norms that govern the actions and behavior of an individual in the business organization. Business ethics has two dimensions: normative business ethics and descriptive business ethics. As a corporate practice and a career specialization, the field is primarily normative. Academics attempting to understand business behavior employ descriptive methods. The range and quantity of business ethical issues reflect the interaction of profit-maximizing behavior with non-economic concerns. Interest in business ethics accelerated dramatically during the 1980s and 1990s, both within major corporations and within academia. For example, most major corporations today promote their commitment to non-economic values under headings such as ethics codes and social responsibility charters. Adam Smith said in 1776, "People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public, or in some contrivance to raise prices." Governments use laws and regulations to point business behavior in what they perceive to be beneficial directions. Ethics implicitly regulates areas and details of behavior that lie beyond governmental control. The emergence of large corporations with limited relationships and sensitivity to the communities in which they operate accelerated the development of formal ethics regimes. Maintaining an ethical status is the responsibility of the manager of the business. According to a 1990 article in the "Journal of Business Ethics", "Managing ethical behavior is one of the most pervasive and complex problems facing business organizations today." Business ethics reflects the norms of each historical period. As time passes, norms evolve, causing accepted behaviors to become objectionable. Business ethics and the resulting behavior evolved as well. Business was involved in slavery, colonialism, and the Cold War. The term 'business ethics' came into common use in the United States in the early 1970s. By the mid-1980s at least 500 courses in business ethics reached 40,000 students, using some twenty textbooks and at least ten casebooks, supported by professional societies, centers and journals of business ethics. The Society for Business Ethics was founded in 1980. European business schools adopted business ethics after 1987, commencing with the European Business Ethics Network. In 1982 the first single-authored books in the field appeared. Firms began highlighting their ethical stature in the late 1980s and early 1990s, possibly in an attempt to distance themselves from the business scandals of the day, such as the savings and loan crisis. The concept of business ethics caught the attention of academics, media and business firms by the end of the Cold War.
However, criticism of business practices was attacked for infringing the freedom of entrepreneurs, and critics were accused of supporting communists. This scuttled the discourse of business ethics both in media and academia. The Defense Industry Initiative on Business Ethics and Conduct (DII) was created to support corporate ethical conduct. This era saw the rise of belief in and support for self-regulation and free trade, which lifted tariffs and barriers and allowed businesses to merge and divest in an increasingly global atmosphere. One of the earliest written treatments of business ethics is found in the "Tirukkuṛaḷ", a Tamil book dated variously from 300 BCE to the 7th century CE and attributed to Thiruvalluvar. Many verses discuss business ethics: verse 113 in particular, adapting to a changing environment in verses 474, 426, and 140, and learning the intricacies of different tasks in verses 462 and 677. Business ethics reflects the philosophy of business, of which one aim is to determine the fundamental purposes of a company. If a company's purpose is to maximize shareholder returns, then sacrificing profits for other concerns is a violation of its fiduciary responsibility. Corporate entities are legal persons, but this does not mean they are legally entitled to all of the same rights and liabilities as natural persons. Ethics are the rules or standards that govern our decisions on a daily basis. Many equate "ethics" with conscience or a simplistic sense of "right" and "wrong". Others would say that ethics is an internal code that governs an individual's conduct, ingrained into each person by family, faith, tradition, community, laws, and personal mores. Corporations and professional organizations, particularly licensing boards, generally will have a written code of ethics that governs standards of professional conduct expected of all in the field. It is important to note that "law" and "ethics" are not synonymous, nor are the "legal" and "ethical" courses of action in a given situation necessarily the same. Statutes and regulations passed by legislative bodies and administrative boards set forth the "law". Slavery once was legal in the US, but one certainly wouldn't say enslaving another was an "ethical" act. Economist Milton Friedman wrote that corporate executives' "responsibility ... generally will be to make as much money as possible while conforming to the basic rules of the society, both those embodied in law and those embodied in ethical custom". Friedman also said, "the only entities who can have responsibilities are individuals ... A business cannot have responsibilities. So the question is, do corporate executives, provided they stay within the law, have responsibilities in their business activities other than to make as much money for their stockholders as possible? And my answer to that is, no, they do not." This view is known as the Friedman doctrine. A multi-country 2011 survey found support for this view among the "informed public" ranging from 30 to 80%. Ronald Duska and Jacques Cory have described Friedman's argument as consequentialist or utilitarian rather than pragmatic: Friedman's argument implies that unrestrained corporate freedom would benefit the most people in the long term.
Duska argued that Friedman failed to differentiate two very different aspects of business: (1) the "motive" of individuals, who are generally motivated by profit to participate in business, and (2) the socially sanctioned "purpose" of business, or the reason why people allow businesses to exist, which is to provide goods and services to people. So Friedman was wrong that making a profit is the only concern of business, Duska argued. Peter Drucker once said, "There is neither a separate ethics of business nor is one needed", implying that standards of personal ethics cover all business situations. However, Drucker in another instance said that the ultimate responsibility of company directors is not to harm: "primum non nocere". Another view of business is that it must exhibit corporate social responsibility (CSR): an umbrella term indicating that an ethical business must act as a responsible citizen of the communities in which it operates, even at the cost of profits or other goals. In the US and most other nations, corporate entities are legally treated as persons in some respects. For example, they can hold title to property, sue and be sued, and are subject to taxation, although their free speech rights are limited. This can be interpreted to imply that they have independent ethical responsibilities. Duska argued that stakeholders expect a business to be ethical and that violating that expectation is counterproductive for the business. Ethical issues include the rights and duties between a company and its employees, suppliers, customers and neighbors, and its fiduciary responsibility to its shareholders. Issues concerning relations between different companies include hostile takeovers and industrial espionage. Related issues include corporate governance; corporate social entrepreneurship; political contributions; legal issues such as the ethical debate over introducing a crime of corporate manslaughter; and the marketing of corporations' ethics policies. According to research published by the Institute of Business Ethics and Ipsos MORI in late 2012, the three major areas of public concern regarding business ethics in Britain are executive pay, corporate tax avoidance, and bribery and corruption. The ethical standards of an entire organization can be damaged if a corporate psychopath is in charge, harming not only the company and its outcomes but also the employees who work under such a person. Corporate psychopaths rise in a company through manipulation, scheming, and bullying, in a way that hides their true character and intentions. Fundamentally, finance is a social science discipline. The discipline borders behavioral economics, sociology, economics, accounting and management. It concerns technical issues such as the mix of debt and equity, dividend policy, the evaluation of alternative investment projects, options, futures, swaps, and other derivatives, portfolio diversification and many others. Finance is often mistakenly regarded as a discipline free from ethical burdens. The 2008 financial crisis caused critics to challenge the ethics of the executives in charge of U.S. and European financial institutions and financial regulatory bodies. Finance ethics is overlooked for another reason: issues in finance are often addressed as matters of law rather than ethics. Aristotle said, "the end and purpose of the polis is the good life".
Adam Smith characterized the good life in terms of material goods and intellectual and moral excellences of character. Smith, in his "The Wealth of Nations", commented, "All for ourselves, and nothing for other people, seems, in every age of the world, to have been the vile maxim of the masters of mankind." However, a section of economists influenced by the ideology of neoliberalism interpreted the objective of economics to be the maximization of economic growth through accelerated consumption and production of goods and services. Neoliberal ideology promoted finance from its position as a component of economics to its core. Proponents of the ideology hold that unrestricted financial flows, if redeemed from the shackles of "financial repressions", best help impoverished nations to grow. The theory holds that open financial systems accelerate economic growth by encouraging foreign capital inflows, thereby enabling higher levels of savings, investment, employment, productivity and "welfare", along with containing corruption. Neoliberals recommended that governments open their financial systems to the global market with minimal regulation over capital flows. The recommendations, however, met with criticism from various schools of ethical philosophy. Some pragmatic ethicists found these claims to be unfalsifiable and a priori, although neither of these makes the recommendations false or unethical per se. Raising economic growth to the highest value necessarily means that welfare is subordinate, although advocates dispute this, saying that economic growth provides more welfare than known alternatives. Since history shows that neither regulated nor unregulated firms always behave ethically, neither regime offers an ethical panacea. Neoliberal recommendations to developing countries to unconditionally open up their economies to transnational finance corporations were fiercely contested by some ethicists. The claim that deregulation and the opening up of economies would reduce corruption was also contested. Dobson observes, "a rational agent is simply one who pursues personal material advantage ad infinitum. In essence, to be rational in finance is to be individualistic, materialistic, and competitive. Business is a game played by individuals, as with all games the object is to win, and winning is measured in terms solely of material wealth. Within the discipline this rationality concept is never questioned, and has indeed become the theory-of-the-firm's sine qua non". Financial ethics is in this view a mathematical function of shareholder wealth. Such simplifying assumptions were once necessary for the construction of mathematically robust models. However, signalling theory and agency theory extended the paradigm to greater realism. Fairness in trading practices, trading conditions, financial contracting, sales practices, consultancy services, tax payments, internal audit, external audit and executive compensation also falls under the umbrella of finance and accounting. Particular corporate ethical/legal abuses include creative accounting, earnings management, misleading financial analysis, insider trading, securities fraud, bribery/kickbacks and facilitation payments. Outside of corporations, bucket shops and forex scams are criminal manipulations of financial markets. Cases include the accounting scandals at Enron, WorldCom and Satyam.
Human resource management occupies the sphere of activity of recruitment, selection, orientation, performance appraisal, training and development, industrial relations, and health and safety issues. Business ethicists differ in their orientation towards labor ethics. Some assess human resource policies according to whether they support an egalitarian workplace and the dignity of labor. Issues including employment itself, privacy, compensation in accord with comparable worth, and collective bargaining (and/or its opposite) can be seen either as inalienable rights or as negotiable. Discrimination issues include discrimination on the basis of age (preferring the young or the old), gender, race, religion, disability, weight and attractiveness, as well as sexual harassment. A common approach to remedying discrimination is affirmative action. Once hired, employees have the right to occasional cost-of-living increases, as well as raises based on merit. Promotions, however, are not a right, and there are often fewer openings than qualified applicants. It may seem unfair if an employee who has been with a company longer is passed over for a promotion, but it is not unethical. It is only unethical if the employer did not give the employee proper consideration or used improper criteria for the promotion. Each employer should know the distinction between what is unethical and what is illegal: an illegal action breaks the law, whereas an unethical action violates moral standards. In the workplace, what is unethical is not necessarily illegal, and conduct should follow the guidelines put in place by OSHA, the EEOC, and other regulatory bodies. Potential employees have ethical obligations to employers, involving intellectual property protection and whistle-blowing. Employers must consider workplace safety, which may involve modifying the workplace, or providing appropriate training or hazard disclosure. Requirements differ with the location and type of work taking place, and employers may need to comply with standards that protect both employees and non-employees under workplace safety rules. Larger economic issues such as immigration, trade policy, globalization and trade unionism affect workplaces and have an ethical dimension, but are often beyond the purview of individual companies. Trade unions, for example, may push employers to establish due process for workers, but may also cause job loss by demanding unsustainable compensation and work rules. Unionized workplaces may confront union busting and strike breaking and face the ethical implications of work rules that advantage some workers over others. Among the many people management strategies that companies employ are a "soft" approach that regards employees as a source of creative energy and participants in workplace decision making, a "hard" version explicitly focused on control, and Theory Z, which emphasizes philosophy, culture and consensus. None ensures ethical behavior. Some studies claim that sustainable success requires a humanely treated and satisfied workforce. Marketing ethics came of age only as late as the 1990s. Marketing ethics was approached from ethical perspectives of virtue or virtue ethics, deontology, consequentialism, pragmatism and relativism. Ethics in marketing deals with the principles, values and/or ideas by which marketers (and marketing institutions) ought to act. Marketing ethics is also contested terrain, beyond the previously described issue of potential conflicts between profitability and other concerns.
Ethical marketing issues include the marketing of redundant or dangerous products/services; transparency about environmental risks; transparency about product ingredients, such as genetically modified organisms, and their possible health, financial, or security risks; respect for consumer privacy and autonomy; truthfulness in advertising; and fairness in pricing and distribution. According to Borgerson and Schroeder (2008), marketing can influence individuals' perceptions of and interactions with other people, implying an ethical responsibility to avoid distorting those perceptions and interactions. Marketing ethics involves pricing practices, including illegal actions such as price fixing and legal actions including price discrimination and price skimming. Certain promotional activities have drawn fire, including greenwashing, bait and switch, shilling, viral marketing, spam (electronic), pyramid schemes and multi-level marketing. Advertising has raised objections about attack ads, subliminal messages, sex in advertising and marketing in schools. Stakeholders, as the most important element of a business, are mainly concerned with determining whether the business is behaving ethically or unethically. A business's actions and decisions should be ethical in the first instance, before they grow into an ethical or even legal issue: "In the case of the government, community, and society what was merely an ethical issue can become a legal debate and eventually law." This area of business ethics usually deals with the duties of a company to ensure that products and production processes do not needlessly cause harm. Since few goods and services can be produced and consumed with zero risk, determining the ethical course can be problematic. In some cases, consumers demand products that harm them, such as tobacco products. Production may have environmental impacts, including pollution, habitat destruction and urban sprawl. The downstream effects of technologies such as nuclear power, genetically modified food and mobile phones may not be well understood. While the precautionary principle may prohibit introducing new technology whose consequences are not fully understood, that principle would have prohibited most of the new technology introduced since the industrial revolution. Product testing protocols have been attacked for violating the rights of both humans and animals. There are sources that provide information on companies that are environmentally responsible or do not test on animals. The etymological root of property is the Latin "proprius", which refers to 'nature', 'quality', 'one's own', 'special characteristic', 'proper', 'intrinsic', 'inherent', 'regular', 'normal', 'genuine', 'thorough, complete, perfect', etc. The word property is value-loaded, associated with the personal qualities of propriety and respectability, and implies questions relating to ownership. A 'proper' person owns and is true to herself or himself, and is thus genuine, perfect and pure. Modern discourse on property emerged by the turn of the 17th century within theological discussions of that time. For instance, John Locke justified property rights saying that God had made "the earth, and all inferior creatures, [in] common to all men". In 1802, the utilitarian Jeremy Bentham stated, "property and law are born together and die together". One argument for property ownership is that it enhances individual liberty by extending the line of non-interference by the state or others around the person.
Seen from this perspective, the right to property is absolute, and property has a special and distinctive character that precedes its legal protection. Blackstone conceptualized property as the "sole and despotic dominion which one man claims and exercises over the external things of the world, in total exclusion of the right of any other individual in the universe". During the seventeenth and eighteenth centuries, slavery spread to European colonies, including America, where colonial legislatures defined the legal status of slaves as a form of property. During this time settlers began the centuries-long process of dispossessing the natives of America of millions of acres of land. The natives lost vast holdings of land in the Louisiana Territory under the leadership of Thomas Jefferson, who championed property rights. Combined with theological justification, property was taken to be essentially natural, ordained by God. Property, which later gained meaning as ownership, appeared natural to Locke, Jefferson and many of the 18th- and 19th-century intellectuals, whether as land, labor or idea, and property rights over slaves had the same theological and essentialized justification. It was even held that the property in slaves was a sacred right. Wiecek noted, "slavery was more clearly and explicitly established under the Constitution as it had been under the Articles". Accordingly, US Supreme Court Chief Justice Roger B. Taney in his 1857 judgment stated, "The right of property in a slave is distinctly and expressly affirmed in the Constitution". Neoliberals hold that private property rights are a non-negotiable natural right. Davies counters with "property is no different from other legal categories in that it is simply a consequence of the significance attached by law to the relationships between legal persons." Singer claims, "Property is a form of power, and the distribution of power is a political problem of the highest order". Rose finds, "'Property' is only an effect, a construction, of relationships between people, meaning that its objective character is contestable. Persons and things are 'constituted' or 'fabricated' by legal and other normative techniques." Singer observes, "A private property regime is not, after all, a Hobbesian state of nature; it requires a working legal system that can define, allocate, and enforce property rights." Davies claims that common law theory generally favors the view that "property is not essentially a 'right to a thing', but rather a separable bundle of rights subsisting between persons which may vary according to the context and the object which is at stake". In common parlance, property rights involve a bundle of rights including occupancy, use and enjoyment, and the right to sell, devise, give, or lease all or part of these rights. Custodians of property have obligations as well as rights. Michelman writes, "A property regime thus depends on a great deal of cooperation, trustworthiness, and self-restraint among the people who enjoy it." Menon claims that the autonomous individual, responsible for his or her own existence, is a cultural construct moulded by Western culture rather than the truth about the human condition. Penner views property as an "illusion", a "normative phantasm" without substance. In the neoliberal literature, property is part of the private side of a public/private dichotomy and acts as a counterweight to state power. Davies counters that "any space may be subject to plural meanings or appropriations which do not necessarily come into conflict".
Private property has never been a universal doctrine, although since the end of the Cold War it has become nearly so. Some societies, e.g., Native American bands, held land, if not all property, in common. When groups came into conflict, the victor often appropriated the loser's property. The rights paradigm tended to stabilize the distribution of property holdings on the presumption that title had been lawfully acquired. Property does not exist in isolation, and so neither do property rights. Bryan claimed that property rights describe relations among people, not just relations between people and things. Singer holds that the idea that owners have no legal obligations to others wrongly supposes that property rights hardly ever conflict with other legally protected interests. Singer continues, implying that legal realists "did not take the character and structure of social relations as an important independent factor in choosing the rules that govern market life". Ethics of property rights begins with recognizing the vacuous nature of the notion of property.

Intellectual property (IP) encompasses expressions of ideas, thoughts, codes, and information. "Intellectual property rights" (IPR) treat IP as a kind of real property, subject to analogous protections, rather than as a reproducible good or service. Boldrin and Levine argue that "government does not ordinarily enforce monopolies for producers of other goods. This is because it is widely recognized that monopoly creates many social costs. Intellectual monopoly is no different in this respect. The question we address is whether it also creates social benefits commensurate with these social costs." International standards relating to intellectual property rights are enforced through the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS). In the US, IP other than copyrights is regulated by the United States Patent and Trademark Office. The US Constitution included the power to protect intellectual property, empowering the Federal government "to promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries". Boldrin and Levine see no value in such state-enforced monopolies, stating, "we ordinarily think of innovative monopoly as an oxymoron." Further, they comment that 'intellectual property' "is not like ordinary property at all, but constitutes a government grant of a costly and dangerous private monopoly over ideas. We show through theory and example that intellectual monopoly is not necessary for innovation and as a practical matter is damaging to growth, prosperity, and liberty". Steelman defends patent monopolies, writing, "Consider prescription drugs, for instance. Such drugs have benefited millions of people, improving or extending their lives. Patent protection enables drug companies to recoup their development costs because for a specific period of time they have the sole right to manufacture and distribute the products they have invented." The court case brought by 39 pharmaceutical companies against South Africa's 1997 Medicines and Related Substances Control Amendment Act, which was intended to provide affordable HIV medicines, has been cited as a harmful effect of patents. One attack on IPR is moral rather than utilitarian, claiming that inventions are mostly a collective, cumulative, path-dependent, social creation and that therefore no one person or firm should be able to monopolize them even for a limited period.
The opposing argument is that the benefits of innovation arrive sooner when patents encourage innovators and their investors to increase their commitments. The libertarian philosopher Roderick T. Long has argued against intellectual property on similar moral grounds, and Machlup concluded that patents do not have the intended effect of enhancing innovation. The self-declared anarchist Proudhon, in his seminal 1847 work, noted, "Monopoly is the natural opposite of competition," and continued, "Competition is the vital force which animates the collective being: to destroy it, if such a supposition were possible, would be to kill society." Mindeli and Pipiya argued that the knowledge economy is an economy of abundance because it relies on the "infinite potential" of knowledge and ideas rather than on finite natural resources, labor and capital. Allison envisioned an egalitarian distribution of knowledge. Kinsella claimed that IPR create artificial scarcity and reduce equality. Bouckaert wrote, "Natural scarcity is that which follows from the relationship between man and nature. Scarcity is natural when it is possible to conceive of it before any human, institutional, contractual arrangement. Artificial scarcity, on the other hand, is the outcome of such arrangements. Artificial scarcity can hardly serve as a justification for the legal framework that causes that scarcity. Such an argument would be completely circular. On the contrary, artificial scarcity itself needs a justification." Corporations fund much IP creation and can acquire IP they do not create, to which Menon and others have objected. Andersen claims that IPR have increasingly become an instrument in eroding the public domain. Ethical and legal issues include: patent infringement, copyright infringement, trademark infringement, patent and copyright misuse, submarine patents, biological patents, patent, copyright and trademark trolling, employee raiding and monopolizing talent, bioprospecting, biopiracy and industrial espionage, and digital rights management. Notable IP copyright cases include "A&M Records, Inc. v. Napster, Inc.", "Eldred v. Ashcroft", and Disney's lawsuit against the Air Pirates.

While business ethics emerged as a field in the 1970s, international business ethics did not emerge until the late 1990s, looking back on the international developments of that decade. Many new practical issues arose out of the international context of business. Theoretical issues such as the cultural relativity of ethical values receive more emphasis in this field, and other, older issues can be grouped here as well. A prominent issue is dumping: foreign countries often use dumping as a competitive threat, selling products at prices lower than their normal value. This can lead to problems in domestic markets, which find it difficult to compete with the pricing set by foreign markets. In 2009, the International Trade Commission was researching anti-dumping laws. Dumping is often seen as an ethical issue, as larger companies take advantage of other, less economically advanced companies.

Ethical issues often arise in business settings, whether through business transactions or the formation of new business relationships. An ethical issue in a business atmosphere may refer to any situation that requires business associates as individuals, or as a group (for example, a department or firm), to evaluate the morality of specific actions and subsequently make a decision among the choices.
Some ethical issues of particular concern in today's evolving business market include honesty, integrity, professional behavior, environmental issues, harassment, and fraud, to name a few. The 2009 National Business Ethics Survey found that the types of ethical misconduct observed by employees included abusive behavior (at a rate of 22 percent), discrimination (at a rate of 14 percent), improper hiring practices (at a rate of 10 percent), and company resource abuse. The ethical issues associated with honesty are widespread and vary greatly in business, from the misuse of company time or resources to lying with malicious intent, engaging in bribery, or creating conflicts of interest within an organization. Honesty encompasses wholly the truthful speech and actions of an individual. Some cultures and belief systems even consider honesty to be an essential pillar of life, such as Confucianism and Buddhism (referred to as sacca, part of the Four Noble Truths). Many employees lie in order to reach goals or to avoid assignments or negative issues; however, sacrificing honesty in order to gain status or reap rewards poses potential problems for the organization's overall ethical culture and jeopardizes organizational goals in the long run. Using company time or resources for personal purposes is also commonly viewed as unethical because it amounts to stealing from the company. The misuse of resources costs companies billions of dollars each year, averaging about 4.25 hours per week of stolen time alone, and employees' abuse of Internet services is another main concern. Bribery, on the other hand, is not only considered unethical in business practices, but is also illegal. Accordingly, the Foreign Corrupt Practices Act was established in 1977 to deter international businesses from giving or receiving unwarranted payments and gifts intended to influence the decisions of executives and political officials. However, small payments known as facilitation payments are not considered unlawful under the Foreign Corrupt Practices Act if they are used towards regular public governance activities, such as permits or licenses.

Many aspects of the work environment influence an individual's decision-making regarding ethics in the business world. When an individual is on the path of growing a company, many outside influences can pressure them to perform in a certain way. The core of a person's performance in the workplace is rooted in their personal code of behavior. A person's personal code of ethics encompasses many different qualities such as integrity, honesty, communication, respect, compassion, and common goals. In addition, the ethical standards set forth by a person's superiors often translate into their own code of ethics. The company's policy is the 'umbrella' of ethics that plays a major role in the personal development and decision-making processes that people make with respect to ethical behavior. The ethics of a company and its individuals are heavily influenced by the state of their country. If a country is heavily plagued by poverty, large corporations continue to grow, but smaller companies begin to wither and are then forced to adapt and scavenge for any method of survival. As a result, the leadership of the company is often tempted to participate in unethical methods to obtain new business opportunities. Additionally, social media is arguably the most influential factor in ethics.
The immediate access to so much information, and to the opinions of millions, highly influences people's behavior. The desire to conform with what is portrayed as the norm often manipulates our idea of what is morally and ethically sound. Popular trends on social media, and the instant gratification received from participating in them, quickly distort people's ideas and decisions.

Political economy and political philosophy have ethical implications, particularly regarding the distribution of economic benefits. John Rawls and Robert Nozick are both notable contributors. For example, Rawls has been interpreted as offering a critique of offshore outsourcing on social contract grounds.

Laws are the written statutes, codes, and opinions of government organizations by which citizens, businesses, and persons present within a jurisdiction are expected to govern themselves or face legal sanction. Sanctions for violating the law can include (a) civil penalties, such as fines, pecuniary damages, and loss of licenses, property, rights, or privileges; (b) criminal penalties, such as fines, probation, imprisonment, or a combination thereof; or (c) both civil and criminal penalties. Very often it is held that business is not bound by any ethics other than abiding by the law. Milton Friedman pioneered this view: he held that corporations have the obligation to make a profit within the framework of the legal system, nothing more. Friedman made it explicit that the duty of business leaders is "to make as much money as possible while conforming to the basic rules of the society, both those embodied in the law and those embodied in ethical custom". Ethics for Friedman is nothing more than abiding by customs and laws. The reduction of ethics to abidance by laws and customs, however, has drawn serious criticism. Counter to Friedman's logic, it is observed that legal procedures are technocratic, bureaucratic, rigid and obligatory, whereas an ethical act is a conscientious, voluntary choice beyond normativity. Law is also retroactive: crime precedes law, since for a law against a crime to be passed, the crime must already have happened, and laws are blind to crimes they do not define. Further, as per law, "conduct is not criminal unless forbidden by law which gives advance warning that such conduct is criminal". Also, the law presumes the accused is innocent until proven guilty and that the state must establish the guilt of the accused beyond reasonable doubt. Under the liberal laws followed in most democracies, until the government prosecutor proves the firm guilty with the limited resources available to her, the accused is considered innocent. Though the liberal premise of law is necessary to protect individuals from being persecuted by government, it is not a sufficient mechanism to make firms morally accountable.

As part of more comprehensive compliance and ethics programs, many companies have formulated internal policies pertaining to the ethical conduct of employees. These policies can be simple exhortations in broad, highly generalized language (typically called a corporate ethics statement), or they can be more detailed policies containing specific behavioral requirements (typically called corporate ethics codes). They are generally meant to identify the company's expectations of workers and to offer guidance on handling some of the more common ethical problems that might arise in the course of doing business.
It is hoped that having such a policy will lead to greater ethical awareness, consistency in application, and the avoidance of ethical disasters. An increasing number of companies also require employees to attend seminars regarding business conduct, which often include discussion of the company's policies, specific case studies, and legal requirements. Some companies even require their employees to sign agreements stating that they will abide by the company's rules of conduct. Many companies are assessing the environmental factors that can lead employees to engage in unethical conduct: a competitive business environment may call for unethical behavior, and lying has become expected in fields such as trading. An example is the controversy surrounding the unethical actions of the Salomon Brothers.

Not everyone supports corporate policies that govern ethical conduct. Some claim that ethical problems are better dealt with by depending upon employees to use their own judgment. Others believe that corporate ethics policies are primarily rooted in utilitarian concerns and exist mainly to limit the company's legal liability or to curry public favor by giving the appearance of being a good corporate citizen. Ideally, the company will avoid a lawsuit because its employees will follow the rules; should a lawsuit occur, the company can claim that the problem would not have arisen if the employee had only followed the code properly. Some corporations have tried to burnish their ethical image by creating whistle-blower protections, such as anonymity; in the case of Citi, this is called the Ethics Hotline, though it is unclear whether firms such as Citi take offences reported to these hotlines seriously. Sometimes there is a disconnect between a company's code of ethics and its actual practices. Thus, whether or not such conduct is explicitly sanctioned by management, at worst this makes the policy duplicitous, and at best it is merely a marketing tool. Jones and Parker wrote, "Most of what we read under the name business ethics is either sentimental common sense or a set of excuses for being unpleasant." Many manuals are procedural form-filling exercises unconcerned with the real ethical dilemmas; for instance, the US Department of Commerce ethics program treats business ethics as a set of instructions and procedures to be followed by 'ethics officers', and some others claim that being ethical is done merely for the sake of being ethical. Business ethicists may trivialize the subject, offering standard answers that do not reflect the situation's complexity. Richard DeGeorge has written on the importance of maintaining a corporate code in this regard.

Following a series of fraud, corruption, and abuse scandals that affected the United States defense industry in the mid-1980s, the Defense Industry Initiative (DII) was created to promote ethical business practices and ethics management in multiple industries. Subsequent to these scandals, many organizations began appointing ethics officers (also referred to as "compliance" officers). In 1991, the Ethics & Compliance Officer Association—originally the Ethics Officer Association (EOA)—was founded at the Center for Business Ethics at Bentley University as a professional association for ethics and compliance officers. The passing of the Federal Sentencing Guidelines for Organizations in 1991 was another factor in many companies appointing ethics/compliance officers.
These guidelines, intended to assist judges with sentencing, set standards organizations must follow to obtain a reduction in sentence should they be convicted of a federal offense. Following the high-profile corporate scandals of companies like Enron, WorldCom and Tyco between 2001 and 2004, and following the passage of the Sarbanes–Oxley Act, many small and mid-sized companies also began to appoint ethics officers. Often reporting to the chief executive officer, ethics officers focus on uncovering or preventing unethical and illegal actions. This is accomplished by assessing the ethical implications of the company's activities, making recommendations on ethical policies, and disseminating information to employees. The effectiveness of ethics officers is not clear. The establishment of an ethics officer position is likely to be insufficient to drive ethical business practices without a corporate culture that values ethical behavior. These values and behaviors should be consistently and systemically supported by those at the top of the organization. Employees with strong community involvement, loyalty to employers, superiors or owners, smart work practices, and trust among team members help inculcate such a corporate culture.

Many corporate and business strategies now include sustainability. In addition to the traditional environmental 'green' sustainability concerns, business ethics practices have expanded to include social sustainability. Social sustainability focuses on issues related to human capital in the business supply chain, such as workers' rights, working conditions, child labor, and human trafficking. Incorporation of these considerations is increasing, as consumers and procurement officials demand documentation of a business's compliance with national and international initiatives, guidelines, and standards. Many industries have organizations dedicated to verifying the ethical delivery of products from start to finish, such as the Kimberley Process, which aims to stop the flow of conflict diamonds into international markets, or the Fair Wear Foundation, dedicated to sustainability and fairness in the garment industry. As mentioned, initiatives in sustainability encompass "green" topics as well as social sustainability, and there are many different ways in which sustainability initiatives can be implemented in a company. An organization can implement sustainability initiatives by improving its operations and manufacturing processes so as to align them better with environmental, social, and governance (ESG) issues. Johnson & Johnson incorporates policies from the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights and the International Covenant on Economic, Social and Cultural Rights, applying these principles not only to members of its supply chain but also to internal operations. Walmart has committed to doubling its truck fleet efficiency by 2015 by replacing two-thirds of its fleet with more fuel-efficient trucks, including hybrids. Dell has integrated alternative, recycled, and recyclable materials into its products and packaging design, improving energy efficiency and design for end-of-life and recyclability; Dell plans to reduce the energy intensity of its product portfolio by 80% by 2020. The board of a company can decide to lower executive compensation by a given percentage and give that percentage of compensation to a specific cause.
This is an effort which can only be implemented from the top, as it will affect the compensation of all executives in the company. At Alcoa, an aluminum company based in the US, "1/5th of executive cash compensation is tied to safety, diversity, and environmental stewardship, which includes greenhouse gas emission reductions and energy efficiency" (Best Practices). This is not the case for most companies; a board taking such a uniform step on environmental, social, and governance issues is usually seen only at companies directly linked to the utilities, energy, or materials industries, a category into which Alcoa, as an aluminum company, falls. Instead, formal committees focused on environmental, social, and governance issues are more usually found within governance committees and audit committees rather than the board of directors itself. "Research analysis done by Pearl Meyer in support of the NACD 2017 Director Compensation Report shows that among 1,400 public companies reviewed, only slightly more than five percent of boards have a designated committee to address ESG issues" (How compensation can). As with board leadership, when steering committees and other committees specialized for sustainability are created, senior executives are identified and held accountable for meeting and constantly improving sustainability goals. Another approach is introducing bonus schemes that reward executives for meeting non-financial performance goals, including safety targets, greenhouse gas emission reduction targets, and goals for engaging stakeholders to help shape the company's public policy positions; companies such as Exelon have implemented policies like this. Other companies keep sustainability within their strategy and goals, presenting findings at shareholder meetings and actively tracking metrics on sustainability; companies such as PepsiCo, Heineken, and FIFCO take steps in this direction (Best Practices). Companies such as Coca-Cola have actively tried to improve the efficiency of their water usage, hiring third-party auditors to evaluate their water management approach, and FIFCO has also successfully led water-management initiatives. Implementing sustainability projects by appealing directly to employees (typically through the human resources department) is another option for companies: this involves integrating sustainability into the company culture through hiring practices and employee training. General Electric is a company taking the lead in implementing initiatives in this manner, and Bank of America directly engaged employees by implementing LEED (Leadership in Energy and Environmental Design) certified buildings, with a fifth of its buildings meeting these certifications. Companies can also establish requirements not only for internal operations but also for first-tier and second-tier suppliers, to help drive environmental and social expectations further down the supply chain. Companies such as Starbucks, FIFCO and Ford Motor Company have implemented requirements that suppliers must meet to win their business. Starbucks has led efforts in engaging suppliers and the local communities where they operate to accelerate investment in sustainable farming, and set a goal of ethically sourcing 100% of its coffee beans by 2015.
By revealing decision-making data about how sustainability was reached, companies can give away insights that help others across the industry and beyond make more sustainable decisions. Nike launched its "Making" app in 2013, which released data about the sustainability of the materials it was using. This ultimately allows other companies to make more sustainable design decisions and create lower-impact products.

As an academic discipline, business ethics emerged in the 1970s. Since no academic business ethics journals or conferences existed, researchers published in general management journals and attended general conferences. Over time, specialized peer-reviewed journals appeared, and more researchers entered the field. Corporate scandals in the early 2000s increased the field's popularity. As of 2009, sixteen academic journals devoted to various business ethics issues existed, with "Journal of Business Ethics" and "Business Ethics Quarterly" considered the leaders. "Journal of Business Ethics Education" publishes articles specifically about education in business ethics. The International Business Development Institute is a global non-profit organization that represents 217 nations and all 50 United States. It offers a Charter in Business Development that focuses on ethical business practices and standards. The Charter is directed by Harvard, MIT, and Fulbright Scholars, and it includes graduate-level coursework in economics, politics, marketing, management, technology, and legal aspects of business development as it pertains to business ethics. IBDI also oversees the International Business Development Institute of Asia, which provides individuals living in 20 Asian nations the opportunity to earn the Charter.

Sharia law, followed by many Muslims, specifically prohibits charging interest on loans in banking. Traditional Confucian thought discourages profit-seeking. Christianity offers the Golden Rule command, "Therefore all things whatsoever ye would that men should do to you, do ye even so to them: for this is the law and the prophets." According to the article "Theory of the real economy", there is a narrower point of view within the Christian faith on the relationship between ethics and religious traditions. This article stresses how Christianity is capable of establishing reliable boundaries for financial institutions. One criticism comes from Pope Benedict, who described the "damaging effects on the real economy of badly managed and largely speculative financial dealing." It is argued that Christianity has the potential to transform the nature of finance and investment, but only if theologians and ethicists provide more evidence of what is real in economic life. Business ethics receives an extensive treatment in Jewish thought and rabbinic literature, both from an ethical ("Mussar") and a legal ("Halakha") perspective; see the article "Jewish business ethics" for further discussion. According to the article "Indian Philosophy and Business Ethics: A Review" by Chandrani Chattopadyay, Hindus follow "Dharma" as business ethics, and unethical business practices are termed "Adharma". Businessmen are expected to maintain steady-mindedness, self-purification, non-violence, concentration, clarity and control over the senses. Books like the Bhagavad Gita and the Arthashastra contribute much towards the conduct of ethical business.
Business ethics is related to the philosophy of economics, the branch of philosophy that deals with the philosophical, political, and ethical underpinnings of business and economics. Business ethics operates on the premise, for example, that the ethical operation of a private business is possible—those who dispute that premise, such as libertarian socialists (who contend that "business ethics" is an oxymoron), do so by definition outside of the domain of business ethics proper. The philosophy of economics also deals with questions such as what, if any, are the social responsibilities of a business; business management theory; theories of individualism vs. collectivism; free will among participants in the marketplace; the role of self-interest; invisible hand theories; the requirements of social justice; and natural rights, especially property rights, in relation to the business enterprise. Business ethics is also related to political economy, which is economic analysis from political and historical perspectives. Political economy deals with the distributive consequences of economic actions.
https://en.wikipedia.org/wiki?curid=4770
British Standards British Standards (BS) are the standards produced by the BSI Group, which is incorporated under a royal charter and formally designated as the national standards body (NSB) for the UK. The BSI Group produces British Standards under the authority of the charter, which lays down the preparation and promotion of standards as one of the BSI's objectives; the formal definition of British Standards is set out in the 2002 memorandum of understanding between the BSI and the United Kingdom Government. Products and services which BSI certifies as having met the requirements of specific standards within designated schemes are awarded the Kitemark.

The BSI Group as a whole does not produce British Standards, as standards work within the BSI is decentralized. The governing board of BSI establishes a Standards Board. The Standards Board does little apart from setting up sector boards (a sector in BSI parlance being a field of standardization such as ICT, quality, agriculture, manufacturing, or fire). Each sector board, in turn, constitutes several technical committees. It is the technical committees that formally approve a British Standard, which is then presented to the secretary of the supervising sector board for endorsement of the fact that the technical committee has indeed completed a task for which it was constituted. The standards produced are titled British Standard XXXX[-P]:YYYY, where XXXX is the number of the standard, P is the number of the part of the standard (where the standard is split into multiple parts) and YYYY is the year in which the standard came into effect (see the parsing sketch at the end of this article). BSI Group currently has over 27,000 active standards. Products are commonly specified as meeting a particular British Standard, and in general this can be done without any certification or independent testing. The standard simply provides a shorthand way of claiming that certain specifications are met, while encouraging manufacturers to adhere to a common method for such a specification. The Kitemark can be used to indicate certification by BSI, but only where a Kitemark scheme has been set up around a particular standard. It is mainly applicable to safety and quality management standards. There is a common misunderstanding that Kitemarks are necessary to prove compliance with any BS standard, but in general it is neither desirable nor possible that every standard be 'policed' in this way. Following the move towards harmonisation of standards in Europe, some British Standards are gradually being superseded or replaced by the relevant European Standards (EN). Standards are continuously reviewed and developed, and are periodically allocated one or more status keywords.

BSI Group began in 1901 as the "Engineering Standards Committee", led by James Mansergh, to standardize the number and type of steel sections, in order to make British manufacturers more efficient and competitive. Over time the standards developed to cover many aspects of tangible engineering, and then engineering methodologies including quality systems, safety and security. BSI also publishes a series of PAS documents. PAS documents are a flexible and rapid standards development model open to all organizations. A PAS is a sponsored piece of work allowing organizations flexibility in the rapid creation of a standard, while also allowing a greater degree of control over the document's development. A typical development time frame for a PAS is around 6–9 months.
Once published by BSI, a PAS has all the functionality of a British Standard for the purposes of creating schemes such as management systems and product benchmarks, as well as codes of practice. A PAS is a living document: after two years it is reviewed, and a decision is made with the client as to whether or not it should be taken forward to become a formal British Standard. The term PAS was originally an acronym derived from "product approval specification", a name which was subsequently changed to "publicly available specification". However, according to BSI, not all PAS documents are structured as specifications, and the term is now sufficiently well established not to require any further amplification.

Copies of British Standards are sold at the BSI Online Shop or can be accessed via subscription to British Standards Online (BSOL). They can also be ordered via the publishing units of many other national standards bodies (ANSI, DIN, etc.) and from several specialized suppliers of technical specifications. British Standards, including European and international adoptions, are available in many university and public libraries that subscribe to the BSOL platform. Librarians and lecturers at UK-based subscribing universities have full access rights to the collection, while students can copy/paste and print but not download a standard. Up to 10% of the content of a standard can be copied for personal or internal use, and up to 5% of the collection can be made available as a paper or electronic reference collection at the subscribing university. Because of their reference-material status, standards are not available for interlibrary loan. Public library users in the UK may have access to BSOL on a view-only basis if their library service subscribes to the platform. Users may also be able to access the collection remotely if they have a valid library card and the library offers secure access to its resources. The BSI Knowledge Centre in Chiswick can be contacted directly about viewing standards in their Members' Reading Room.
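The "British Standard XXXX[-P]:YYYY" titling convention described above lends itself to mechanical parsing. The following is a minimal sketch, not an official BSI tool; the regular expression, the function name, and the sample titles are illustrative assumptions only.

```python
import re

# Illustrative pattern for "BS <number>[-<part>]:<year>" titles.
BS_TITLE = re.compile(r"^BS\s+(\d+)(?:-(\d+))?:(\d{4})$")

def parse_bs_title(title):
    """Split a BS title into (number, part, year); part may be None."""
    m = BS_TITLE.match(title)
    if m is None:
        return None
    number, part, year = m.groups()
    return int(number), int(part) if part else None, int(year)

# Hypothetical examples of the convention, not a claim about any edition:
print(parse_bs_title("BS 1363-2:2016"))  # -> (1363, 2, 2016)
print(parse_bs_title("BS 5750:1987"))    # -> (5750, None, 1987)
```

Real-world designations can carry extra prefixes, such as "BS EN" for European adoptions, which a fuller parser would need to accommodate.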
https://en.wikipedia.org/wiki?curid=4775
Building society A building society is a financial institution owned by its members as a mutual organization. Building societies offer banking and related financial services, especially savings and mortgage lending. Building societies exist in the United Kingdom and Australia, and used to exist in Ireland and several Commonwealth countries. They are similar to credit unions in organisation, though few enforce a common bond. However, rather than promoting thrift and offering unsecured and business loans, the purpose of a building society is to provide home mortgages to members. Borrowers and depositors are society members, setting policy and appointing directors on a one-member, one-vote basis. Building societies often provide other retail banking services, such as current accounts, credit cards and personal loans. The term "building society" first arose in the 19th century in Great Britain from cooperative savings groups. In the United Kingdom, building societies actively compete with banks for most consumer banking services, especially mortgage lending and savings accounts, and regulations permit up to half of their lending to be funded by debt to non-members, allowing societies to access wholesale bond and money markets to fund mortgages. The world's largest building society is Britain's Nationwide Building Society. In Australia, building societies likewise compete with retail banks and offer the full range of banking services to consumers.

Building societies as an institution began in late-18th-century Birmingham, a town which was undergoing rapid economic and physical expansion driven by a multiplicity of small metalworking firms, whose many highly skilled and prosperous owners readily invested in property. Many of the early building societies were based in taverns or coffeehouses, which had become the focus for a network of clubs and societies for co-operation and the exchange of ideas among Birmingham's highly active citizenry as part of the movement known as the Midlands Enlightenment. The first building society to be established was Ketley's Building Society, founded by Richard Ketley, the landlord of the "Golden Cross" inn, in 1775. Members of Ketley's society paid a monthly subscription to a central pool of funds which was used to finance the building of houses for members; the houses in turn acted as collateral to attract further funding to the society, enabling further construction. By 1781 three more societies had been established in Birmingham, with a fourth in the nearby town of Dudley; and 19 more formed in Birmingham between 1782 and 1795. The first outside the English Midlands was established in Leeds in 1785. Most of the original societies were fully "terminating", meaning they would be dissolved when all members had a house: the last of them, First Salisbury and District Perfect Thrift Building Society, was wound up in March 1980. In the 1830s and 1840s a new development took place with the "permanent building society", where the society continued on a rolling basis, continually taking in new members as earlier ones completed their purchases; an example is Leek United Building Society. The main legislative framework for the building society was the Building Societies Act 1874, with subsequent amending legislation in 1894, 1939 (see Coney Hall), and 1960. In their heyday, there were hundreds of building societies: just about every town in the country had a building society named after that town.
Over succeeding decades the number of societies has decreased, as various societies merged to form larger ones, often renaming in the process, and other societies opted for demutualisation followed by, in the great majority of cases, eventual takeover by a listed bank. Most of the existing larger building societies are the end result of the mergers of many smaller societies. All building societies in the UK are members of the Building Societies Association. At the start of 2008, there were 59 building societies in the UK, with total assets exceeding £360 billion. The number of societies in the UK fell by four during 2008 due to a series of mergers brought about, to a large extent, by the consequences of the financial crisis of 2007–2008. With three further mergers in each of 2009 and 2010, and a demutualisation and a merger in 2011, as of 2020 there are 44 building societies.

In the 1980s, changes to British banking laws allowed building societies to offer banking services equivalent to normal banks. The management of a number of societies still felt that they were unable to compete with the banks, and a new Building Societies Act was passed in 1986 in response to their concerns. This permitted societies to 'demutualise': if more than 75% of members voted in favour, the building society would become a limited company like any other, with members' mutual rights exchanged for shares in this new company. A number of the larger societies made such proposals to their members and all were accepted. Some listed on the London Stock Exchange, while others were acquired by larger financial groups. The process began with the demutualisation of the Abbey National Building Society in 1989. Then, from 1995 to late 1999, eight societies demutualised, accounting for two-thirds of building societies' assets as at 1994. Five of these societies became joint stock banks (plc), one merged with another, and the other four were taken over by plcs (in two cases after the mutual had previously converted to a plc). As Tayler (2003) notes, demutualisation moves succeeded immediately because neither Conservative nor Labour UK governments created a framework that put obstacles in the way of demutualisation. Political acquiescence in demutualisation was clearest in the position taken on 'carpet baggers', that is, those who joined societies by lodging minimum amounts of £100 or so in the hope of profiting from a distribution of surplus after demutualisation. The deregulating Building Societies Act 1986 contained an anti-carpet-bagger provision in the form of a two-year rule, which prescribed a qualifying period of two years before savers could participate in a residual claim. But, before the 1989 Abbey National Building Society demutualisation, the courts found against the two-year rule after legal action brought by Abbey National itself to circumvent the intent of the legislators. After this the legislation did prevent a cash distribution to members of less than two years' standing, but the same result was obtained by permitting the issue of 'free' shares in the acquiring plc, saleable for cash. The Thatcher Conservative government declined to introduce amending legislation to make good the defect in the 'two-year rule'. Building societies, like mutual life insurers, arose as people clubbed together to address a common need; in the case of the building societies, this was housing, and members were originally both savers and borrowers.
But it very quickly became clear that 'outsider' savers were needed, whose motive was profit through interest on deposits. Thus permanent building societies quickly became mortgage banks, and in such institutions there always existed a conflict of interest between borrowers and savers. It was the task of the movement to reconcile that conflict so as to enable savers to conclude that their interests and those of borrowers were to some extent complementary rather than conflicting. The conflict of interest between savers and borrowers was never fully reconciled in the building societies, and upon deregulation that reconciliation became something of a lost cause. The management of building societies apparently could expend considerable time and resources (which belonged to the organisation) planning their effective capture of as much of the assets as they could. If so, this is arguably insider dealing on a grand scale, with the benefit of inside specialist knowledge of the business and resources of the firm not shared with outsiders like politicians and members (and, perhaps, regulators). Once the opportunity to claim was presented by management, the savers in particular could be relied upon to seize it. There were sufficient hard-up borrowers to take the inducement offered them by management, in spite of a few simple sums sufficing to demonstrate that they were probably going to end up effectively paying back the inducement (Tayler 2003). Managements promoting demutualisation also thereby met managerial objectives, because the end of mutuality brought joint stock company (plc) style remuneration-committee pay standards and share options. Share options for the management of converting societies appear to have been a powerful factor in management calculation. Rasmusen (1988) refers to this in the following terms: "... perks do not rise in proportion to [mutual] bank size. If a mutual is large, or is expected to grow if it can raise capital by a conversion, its managers derive more value from a conversion but do not suffer much loss of perks than if the bank were small. Their benefit is in the right to purchase the new stock, which are valuable because the new issues are consistently underpriced [referring to USA mutual bank conversions]. Moreover, by no means are all mutual managers incompetent, and conversion allows the bank to expand more easily and to grant executive stock options that are valuable to skilled managers." Instead of deploying their margin advantage as a defence of mutuality, around 1980 building societies began setting mortgage rates with reference to market clearing levels. In sum, they began behaving more like banks, seeking to maximise profit instead of the advantages of a mutual organisation. Thus, according to the Bank of England's Boxall and Gallagher (1997), "... there was virtually no difference between banks and building society 'listed' interest rates for home finance mortgage lending between 1984 and 1997. This behaviour resulted in a return on assets for building societies which was at least as high as Plc banks and, in the absence of distribution, led to rapid accumulation of reserves". As Boxall and Gallagher (1997) also observe, "... accumulation of reserves in the early-1990s, beyond regulatory and future growth requirements, is difficult to reconcile with conventional theories of mutual behaviour". Llewellyn (1996) draws a rather more direct and cynical conclusion. Some of these managements ended up in dispute with their own members.
Kay (1991) made similar critical observations about the first major conversion, that of the Abbey in 1989. In the end, after a number of large demutualisations, and pressure from carpetbaggers moving from one building society to another to cream off the windfalls, most of the societies whose management wished to keep them mutual modified their rules of membership in the late 1990s. The method usually adopted was membership rules ensuring that anyone newly joining a society would, for the first few years, be unable to get any profit out of a demutualisation. With the chance of a quick profit removed, the wave of demutualisations came to an end in 2000. One academic study (Heffernan, 2003) found that demutualised societies' pricing behaviour on deposits and mortgages was more favourable to shareholders than to customers, with the remaining mutual building societies offering consistently better rates. The Building Societies (Funding) and Mutual Societies (Transfers) Act 2007, known as the Butterfill Act, was passed in 2007, giving building societies greater powers to merge with other companies. These powers were used by the Britannia in 2009 and Kent Reliance in 2011, leading to their demutualisation.

Prior to 31 December 2010, deposits with building societies of up to £50,000 per individual, per institution, were normally protected by the Financial Services Compensation Scheme (FSCS), but Nationwide and Yorkshire Building Societies negotiated a temporary change to the terms of the FSCS to protect members of the societies they acquired in late 2008/early 2009. The amended terms allowed former members of multiple societies which merged into one to maintain multiple entitlements to FSCS protection until 30 September 2009 (later extended to 30 December 2010), so (for example) a member with £50,000 in each of Nationwide, Cheshire and Derbyshire at the time of the respective mergers would retain £150,000 of FSCS protection for their funds in the merged Nationwide. On 31 December 2010 the general FSCS limit for retail deposits was increased to £85,000 for banks and building societies, and the transitional arrangements in respect of building society mergers came to an end.

(Table: remaining UK building societies with total group assets; data from last available annual reports as of December 2016; source: Building Societies Association, updated for subsequent mergers.)

Ten building societies of the United Kingdom demutualised between 1989 and 2000, either becoming a bank or being acquired by a larger bank. By 2008, every building society that floated on the stock market in the wave of demutualisations of the 1980s and 1990s had either been sold to a conventional bank or been nationalised. Many other building societies in the United Kingdom no longer exist independently, having merged with or been taken over by other organisations. They may still have an active presence on the high street (or online) as a trading name or as a distinct brand. This is typically because brands often build up specific reputations and attract certain clientele, and this can continue to be marketed successfully.

In Australia, building societies evolved along British lines. Following the end of World War II, the terminating model was revived to fund returning servicemen's need for new houses. Hundreds were created with government seed capital, whereby the capital was returned to the government and the terminating societies retained the interest accumulated.
Once all the seed funds were loaned, each terminating society could reapply for more seed capital, to the point where it could re-lend its own funds and thus become a permanent society. Terminating loans were still available, and were used inside the permanent businesses by staff up until the 1980s, although their existence was not widely known after the early 1960s. Because of strict regulations on banks, building societies flourished until the deregulation of the Australian financial industry in the 1980s. Eventually many of the smaller building societies disappeared, while some of the largest (such as St. George) officially attained the status of banks. Recent conversions have included Heritage Bank, which converted from building society to bank in 2011; Hume in 2014; Wide Bay Building Society, which became Auswide Bank, and IMB, both in 2015; and Greater Building Society, which became Greater Bank in 2016. Building societies converting to banks are no longer required to demutualise. A particular difference between Australian building societies and those elsewhere is that Australian building societies are required to incorporate as limited companies.

The Republic of Ireland had around 40 building societies at the mid-20th century peak. Many of these were very small and, as the Irish commercial banks began to originate residential mortgages, the small building societies ceased to be competitive. Most merged or dissolved or, in the case of First Active plc, converted into conventional banks. The last remaining building societies, EBS Building Society and Irish Nationwide Building Society, demutualised and were transferred or acquired into bank subsidiaries in 2011 following the effects of the Irish financial crisis. Leeds Building Society Ireland and Nationwide UK (Ireland) were Irish branches of building societies based in the United Kingdom; both have since ceased all Irish operations. In Jamaica, three building societies compete with commercial banks and credit unions for most consumer financial services.

In New Zealand, building societies are registered with the Registrar of Building Societies under the Building Societies Act 1965. Registration as a building society is merely a process of establishing the entity as a corporation. It is largely a formality, and easily achieved, as the capital requirement is minimal (20 members must be issued shares of not less than NZ$1,000 each, for a total minimum foundation share capital of NZ$200,000). As regards prudential supervision, a divide exists between building societies that operate in New Zealand, on the one hand, and those that (although formally registered in New Zealand) operate offshore. Building societies' registration details and filed documents are available in the Register of Building Societies held at the New Zealand Companies Office. Over the years, a number of building societies were established. Some, including Countrywide Building Society and United Building Society, became banks in the 1980s and 1990s. Heartland Building Society (created in 2011 through a merger of Canterbury Building Society, Southern Cross Building Society, and two other financial institutions) became Heartland Bank on 17 December 2012.
In Zimbabwe, Central Africa Building Society (CABS) is the leading building society, offering a diverse range of financial products and services that include transaction and savings accounts, mobile banking, mortgage loans, money market investments, term deposits and pay-roll loans. In other countries there are mutual organisations similar to building societies.

Because most building societies were not direct members of the UK clearing system, it was common for them to use a roll number to identify accounts rather than to allocate a six-digit sort code and eight-digit account number under the BACS standards. More recently, building societies have tended to obtain sort-code and account number allocations within the clearing system, and hence the use of roll numbers has diminished. When using BACS, the roll number is entered in the reference field, while the building society's generic sort code and account number are entered in the standard BACS fields, as in the sketch below.
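To make the convention concrete, here is a minimal sketch of how such a payment might be represented. It is not an actual BACS record format; the class, field names, and all values are invented placeholders, not any real society's details.

```python
from dataclasses import dataclass

@dataclass
class BacsPayment:
    sort_code: str       # society's generic sort code (shared by all members)
    account_number: str  # society's generic account number
    reference: str       # member's roll number, identifying the account
    amount_pence: int    # amount in pence

# Placeholder values only, for illustration of the field usage:
payment = BacsPayment(
    sort_code="000000",
    account_number="00000000",
    reference="A/12345678-XY",  # hypothetical roll number
    amount_pence=25_000,        # £250.00
)
print(payment)
```

The point of the sketch is that the reference field, normally free text, does the work of identifying the member's account, since the sort code and account number identify only the society itself.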
https://en.wikipedia.org/wiki?curid=4776
Blue Steel (missile) The Avro Blue Steel was a British air-launched, rocket-propelled, nuclear-armed standoff missile, built to arm the V bomber force. It allowed the bomber to launch the missile against its target while still outside the range of surface-to-air missiles (SAMs). The missile proceeded to the target at speeds up to Mach 3, and would trigger within 100 m of the pre-defined target point. Blue Steel entered service in 1963, by which point improved SAMs with longer range had greatly eroded the advantages of the design. A longer-range version, Blue Steel II, was considered, but cancelled in favour of the much longer-range GAM-87 Skybolt system from the US. When development of that system was cancelled in 1962, the V-bomber fleet was considered highly vulnerable. Blue Steel remained the primary British nuclear deterrent weapon until the Royal Navy started operating Polaris ballistic missiles from "Resolution"-class submarines.

Blue Steel was the result of a Ministry of Supply memorandum from 5 November 1954 that predicted that by 1960 Soviet air defences would make it impossible for V bombers to attack with nuclear gravity bombs. The answer was a rocket-powered, supersonic missile capable of carrying a large nuclear (or projected thermonuclear) warhead over a long stand-off range. This would keep the bombers out of range of Soviet ground-based defences installed around the target area, allowing the missile to "dash" in at high speed. There would have to be a balance between the size of the warhead, the need for it to be carried by any of the three V-bomber types in use, and the requirement that it be able to reach Mach 3. At the time the only strategic warhead available in the UK was the Green Bamboo, which was very large and so required a large missile fuselage to carry it. The Air Staff issued this requirement for a "stand-off bomb" as OR.1132 in September 1954. The Ministry of Supply selected Avro out of the British manufacturers, although it had no experience in working on guided weapons other than some private-venture work; Handley Page had suggested a missile, but its Elliots gyro-based guidance system was inaccurate at longer ranges. Avro began work proper in 1955, with the assigned Rainbow Code name of "Blue Steel", which the missile would keep in service. With Elliots working on the guidance system, Armstrong Siddeley would develop the liquid-fuel engine. The design period was protracted, with various development problems exacerbated by the fact that the designers lacked information on the actual size and weight of the proposed boosted-fission warhead Green Bamboo, or of its likely thermonuclear successor derived from the Granite series. The large girth of Blue Steel was determined by the implosion-sphere diameter of Green Bamboo. Avro proposed that Blue Steel would evolve over time, with subsequent versions increasing speed (to Mach 4.5) and range. The ultimate Blue Steel would be a long-range weapon that could be launched by the supersonic Avro 730 then under development. Avro were told to limit themselves to the specification of OR.1132. The project was delayed by the need to develop the required stainless-steel fabrication techniques; this experience would have been gained in building the Avro 730, but that aircraft had by then been cancelled. The Elliots guidance system was plagued by accuracy problems, delaying test flights. As it turned out, neither of the originally proposed UK-designed warheads was actually fitted, both being superseded by Red Snow, an Anglicised variant of the U.S. W-28 thermonuclear warhead of 1.1 Mt yield.
Red Snow was smaller and lighter than the earlier warhead proposals. The missile was fitted with a state-of-the-art inertial navigation unit, which allowed it to strike within 100 metres of its designated target. In addition, the pilots of the Avro Vulcan or Handley Page Victor bombers could tie their systems into those of the missile and make use of its guidance system to help plot their own flight plan, since the unit in the missile was more advanced than that in the aircraft. Blue Steel emerged as a pilotless, winged aircraft roughly the size of the experimental Saunders-Roe SR.53 interceptor, with clipped delta wings and small canard foreplanes. It was powered by a two-chamber Armstrong Siddeley Stentor Mark 101 rocket engine, burning a combination of hydrogen peroxide and kerosene. The fuel was a considerable operational problem, because fuelling the missile before launch took nearly half an hour and was quite hazardous: it required the fuelling site to be flooded with water, and (during the trials campaigns) very early morning preparations because of the heat of the Australian summer. Another issue was the very small ground clearance when the missile was attached to the Handley Page Victor, and Victor aircrews were especially aware of the dangers when taking off. (The Vulcan had a much higher ground clearance, and ultimately proved a better platform.) On launch, the rocket engine's first chamber would power the missile along a predetermined course to the target at around Mach 1.5. Once close to the target, the second chamber of the engine (of 6,000 lb thrust) would accelerate the missile to Mach 3. Over the target the engine would cut out and the missile would free-fall before detonating its warhead as an air burst. To speed the trials at Woomera, the test rounds were flown there by Victors and Vulcans in Operation Blue Ranger. The trials began in 1960, about the time the original requirement had expected the weapon to be in service. The missiles were prepared at the Weapons Research Establishment near Salisbury, South Australia, and flown to be launched at the Woomera range from RAAF Edinburgh. A specialist RAF unit, 4 JSTU, was established to carry out preparatory and operational tasks. Blue Steel finally entered service in February 1963, carried by Vulcans and Victors, although its limitations were already apparent. The short range of the missile meant that the V bombers were still vulnerable to enemy surface-to-air missiles. A replacement for Blue Steel, the Mark 2, was planned with increased range and a ramjet engine, but was cancelled in 1960 to minimise delays to the Mk.1. The UK sought to acquire the much longer-ranged United States Air Force AGM-48 Skybolt air-launched ballistic missile, and was greatly frustrated when that weapon was cancelled in late 1962. Blue Steel required up to seven hours of launch preparation and was highly unreliable: the Royal Air Force estimated in 1963 that half the missiles would fail to fire and would have to be dropped over their targets, contradicting their purpose as standoff weapons. Even as it deployed Blue Steel, a high-altitude weapon, the government decided that same year that, because of the increasing effectiveness of anti-aircraft missiles, the V bombers would have to convert from high-altitude to low-altitude attacks.
Trials of the low-altitude attack profile were conducted in 1964 and concluded in 1965. With no effective long-range weapon available, the original Blue Steel served on after a crash programme of minor modifications to permit a low-level launch, even though its usefulness in a hot war was likely limited. A stop-gap weapon (WE.177B) was quickly produced to extend the life of the V-bomber force in the strategic role until the Polaris missile was deployed. This WE.177 laydown weapon supplemented the remaining modified Blue Steel missiles, using a low-level penetration followed by a pop-up manoeuvre to release the weapon. One live operational round was deployed on each of forty-eight Vulcan and Victor bombers, and a further five live rounds were produced as operational spares. An additional four non-nuclear rounds were produced for various RAF requirements, and there were sixteen other unspecified training rounds. Blue Steel was officially retired on 31 December 1970, with the United Kingdom's strategic nuclear capacity passing to the submarine fleet.
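The accuracy quoted above depended on the missile's inertial navigation unit, which tracks position by integrating measured acceleration twice. The sketch below is a generic illustration of that dead-reckoning principle along a single axis, not a model of the actual Elliots system; the read_accelerometer function, time step, and bias value are hypothetical. It shows why even a tiny uncorrected sensor bias grows quadratically with flight time, making 100 m accuracy a demanding goal.

    # Minimal one-axis dead-reckoning sketch: integrate acceleration once to
    # get velocity and again to get position, as an inertial navigator does.
    # Illustrative only; read_accelerometer stands in for a real sensor.
    def dead_reckon(read_accelerometer, duration_s, dt=0.01):
        velocity, position, t = 0.0, 0.0, 0.0
        while t < duration_s:
            a = read_accelerometer(t)   # measured acceleration, m/s^2
            velocity += a * dt          # first integration: velocity
            position += velocity * dt   # second integration: position
            t += dt
        return position

    # A constant bias of just 0.01 m/s^2, with no real motion, accumulates
    # roughly 0.5 * 0.01 * 900^2 = 4,050 m of position error in 15 minutes.
    error = dead_reckon(lambda t: 0.01, duration_s=900.0)
    print(f"position error from bias alone: {error:.0f} m")

The quadratic growth of the bias term is why inertial systems of the period demanded extremely precise gyros and accelerometers, and why the missile's unit could usefully refine the launching aircraft's own navigation.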
Branch Davidians The Branch Davidians (or the General Association of Branch Davidian Seventh-day Adventists) are a religious community founded in 1955 by Benjamin Roden. They are an offshoot of the General Association of Davidian Seventh-Day Adventists, established by Victor Houteff in 1935. Houteff, a Bulgarian immigrant and a Seventh-day Adventist, wrote a series of tracts which were titled the "Shepherd's Rod." The tracts called for the reform of the Seventh-day Adventist Church. In 1935, after his ideas were rejected by Adventist leaders, Houteff and his followers settled on a tract of land on the western outskirts of Waco, Texas, where they built a compound called the Mount Carmel Center and began preparing for the Second Coming. After Houteff's death in 1955, his wife Florence became the leader of the Davidians. That same year, Roden, a former follower of Houteff who called himself "the Branch" (Isaiah 11:1), called for Davidians to come to Mount Carmel Center to hear his message. This was the beginning of the group that would be popularly known as the Branch Davidians. In 1957, Florence Houteff sold the old Mount Carmel Center and purchased 941 acres near Elk, Texas, thirteen miles northeast of Waco, naming the property New Mount Carmel Center. After the failure of Florence's prophecy of apocalyptic events on or near April 22, 1959, she dissolved the General Association of Davidian Seventh-day Adventists in 1962 and sold all but 77.86 acres of the New Mount Carmel property. Roden took possession of New Mount Carmel in 1962 and began his efforts to purchase the remaining 77.86 acres. On February 27, 1973, New Mount Carmel was sold to "Benjamin Roden, Lois Roden, and [their son] George Roden, Trustees for the General Association of Branch Davidian Seventh-day Adventists." From this point on, the property was simply known as Mount Carmel. Upon the death of Roden in 1978, his wife Lois became the next Branch Davidian prophet at the compound. In 1981, a young man named Vernon Howell, later known as David Koresh, came to Mount Carmel and studied biblical prophecy under Lois Roden. By 1984, the core group of Branch Davidians had shifted their allegiance from Lois' son George to Koresh. The Branch Davidians are most associated with the Waco siege of 1993, a 51-day standoff between members of the sect and federal agents. The conflict ended when Mount Carmel was destroyed in a fire. Ten people were killed during the initial raid by Bureau of Alcohol, Tobacco, and Firearms agents on February 28, 1993, and 76 Branch Davidians of all ages died in the fire that was the culmination of an FBI tank and CS gas assault on April 19, 1993. In 1929, Victor Houteff, a Bulgarian immigrant and a Seventh-day Adventist Sabbath School teacher in a local church in Southern California, claimed that he had a new message for the entire church. He presented his views in a book, "The Shepherd's Rod: The 144,000—A Call for Reformation". The Adventist leadership rejected Houteff's views as contrary to the Adventists' basic teachings, and local church congregations disfellowshipped Houteff and his followers. In 1934, Houteff established his headquarters to the west of Waco, Texas, and his group became known as the Davidians. In 1942, he renamed the group the General Association of Davidian Seventh-day Adventists, 'Davidian' indicating the belief in the restoration of the Davidic Kingdom of Israel.
Following Houteff's death in 1955, the segment of the group loyal to Houteff continued as the Davidian Seventh-day Adventists, led by his wife Florence. Convinced of an imminent apocalypse, on a timetable announced by Florence Houteff but not found in the original writings of her husband Victor, Florence and her council gathered hundreds of faithful followers at their Mount Carmel Center near Waco in 1959 for the fulfillment of Ezekiel 9. The anticipated events did not occur, and following this disappointment, Benjamin Roden formed another group called the Branch Davidians and succeeded in taking control of Mount Carmel. This name is an allusion to the anointed 'Branch' (mentioned in Zechariah 3:8; 6:12). When Benjamin Roden died in 1978, he was succeeded by his wife, Lois Roden, though members of the Branch Davidians were torn between allegiance to her and to his son, George. After Lois Roden died, George Roden assumed the right to the presidency, but less than a year later Vernon Howell rose to power and became the leader of those in the group who sympathized with him. Vernon Howell's arrival at the Waco compound in 1981 was well received by nearly everyone at the Davidian commune. Howell had an affair with the then-prophet of the Branch Davidians, Lois Roden, while he was in his late 20s and she was in her late 60s. Howell wanted a child with her, who, according to his understanding, would be the Chosen One. When she died, her son George Roden inherited the positions of prophet and leader of the commune. However, George Roden and Howell began to clash. Howell soon enjoyed the loyalty of the majority of the Branch Davidian community. In an attempt to regain support, George Roden challenged Howell to raise the dead, going so far as to exhume a corpse in order to demonstrate his spiritual supremacy. This illegal act gave Howell an opportunity to attempt to file charges against Roden, but he was told he needed evidence. This led to a raid on the Mount Carmel Center on November 3, 1987, by Howell and seven of his followers, equipped with five .223-caliber semiautomatic rifles, two .22-caliber rifles, two 12-gauge shotguns, and nearly 400 rounds of ammunition. Their objective seemed to be to retake the land that Howell had left three years earlier. Although they claimed to have been trying to obtain evidence of Roden's illegal activity, they did not take a camera. The trial ended with the jury finding the followers of Howell not guilty, but the jurors were unable to agree on a verdict for Howell. After his followers were found not guilty, Howell invited the prosecutors to Mount Carmel for ice cream. It is claimed that Howell was never authorized to name his breakaway sect the "Branch Davidians", and the church which bears that name continues to represent the members of the Branch church who did not follow him. Howell, who acquired the position of spiritual leader from Roden, asserted it by changing his name to David Koresh, suggesting that he had ties to the biblical King David and Cyrus the Great (Koresh is the Hebrew version of the name Cyrus). He wanted to create a new lineage of world leaders, fathering children by a number of women in the group, including other members' wives. This practice later served as the basis for allegations that Koresh was committing child abuse, which contributed to the siege by the ATF. Interpreting Revelation 5:2, Koresh identified himself with the Lamb mentioned therein.
The Lamb is traditionally believed to symbolize Jesus Christ; Koresh, however, suggested that the Lamb would come before Jesus and pave the way for his Second Coming. By the time of the 1993 Waco siege, Koresh had encouraged his followers to think of themselves as "students of the Seven Seals," rather than as "Branch Davidians." During the standoff, one of his followers publicly announced that Koresh wanted them thereafter to be identified by the name "Koreshians". On February 28, 1993, at 4:20 AM, the Bureau of Alcohol, Tobacco, and Firearms attempted to execute a search warrant relating to alleged sexual abuse charges and illegal weapons violations. The ATF attempted to breach the compound for approximately two hours until their ammunition ran low. Four ATF agents (Steve Willis, Robert Williams, Todd McKeehan, and Conway Charles LeBleu) were killed and another 16 agents were wounded during the raid. The five Branch Davidians killed in the 9:45 AM raid were Winston Blake (British), Peter Gent (Australian), Peter Hipsman, Perry Jones, and Jaydean Wendell; two of them were killed by fellow Branch Davidians. Almost six hours after the ceasefire, Michael Schroeder was shot dead by ATF agents, who alleged he fired a pistol at agents as he attempted to re-enter the compound with Woodrow Kendrick and Norman Allison. His wife claimed that he was merely returning from work and had not participated in the day's earlier altercation. Schroeder had been shot once in the eye, once in the heart, and five times in the back. After the raid, ATF agents established contact with Koresh and others inside the compound. The FBI took command after the deaths of the federal agents and managed to facilitate the release of 19 children (without their parents) relatively early in the negotiations. The children were then interviewed by the FBI and the Texas Rangers. Allegedly, the children had been physically and sexually abused long before the raid. On April 19, 1993, the FBI launched a final assault on the compound, using heavy weaponry such as .50-caliber (12.7 mm) rifles and armored combat engineering vehicles (CEVs) to combat the heavily armed Branch Davidians. The FBI attempted to use tear gas to flush out the Branch Davidians. Officially, FBI agents were only permitted to return any incoming fire, not to actively assault the Branch Davidians. When several Branch Davidians opened fire, the FBI's response was to increase the amount of gas being used. Around noon, three fires broke out simultaneously in different parts of the building. The government maintains that the fires were deliberately started by Branch Davidians. Some Branch Davidian survivors maintain that the fires were started either accidentally or deliberately by the assault. Of the 85 Branch Davidians in the compound when the final assault began, 76 died on April 19 in various ways: from falling rubble, from the suffocating effects of the fire, or by gunshot from fellow Branch Davidians. The siege lasted 51 days. In all, four ATF agents were killed, 16 were wounded, and six Branch Davidians died in the initial raid on February 28. Seventy-six more died in the final assault on April 19. The events at Waco spurred criminal prosecution and civil litigation. A federal grand jury indicted 12 of the surviving Branch Davidians, charging them with aiding and abetting the murder of federal officers and with unlawful possession and use of various firearms. Eight Branch Davidians were convicted on firearms charges, five were convicted of voluntary manslaughter, and four were acquitted of all charges.
As of July 2007, all Branch Davidians had been released from prison. Several civil suits were brought against the United States government, federal officials, former governor of Texas Ann Richards, and members of the Texas Army National Guard. The bulk of these claims were dismissed because they were insufficient as a matter of law or because the plaintiffs could advance no material evidence in support of them. One case, "Andrade v. Chojnacki", made it to the Fifth Circuit, which upheld the previous "take-nothing" ruling. One modern incarnation of the Branch Davidians exists under the leadership of Charles Pace, a follower of Ben and Lois Roden who was a member of the Branch Davidians from the mid-1970s. The Branch, The Lord Our Righteousness, is a legally recognized denomination with 1,200 members. Pace claims that Koresh twisted the Bible's teachings by fathering more than a dozen children with members' wives. Pace believes that the Lord "has anointed me and appointed me to be the leader," but he claims that he is "not a prophet" but "a teacher of righteousness". Other Branch Davidians, led by Clive Doyle, continue to believe Koresh was a prophet and await his resurrection, along with the followers who were killed. Both incarnations are still waiting for the end times. The Seventh-day Adventist Church, the main church in the Adventist tradition, rejected Victor Houteff's teachings and revoked his membership in 1930. Houteff went on to found the Davidians (a splinter group, otherwise known as the Shepherd's Rod). The Branch Davidians are considered a splinter group from that dissenting group (the Davidians/Shepherd's Rod) and the product of a schism among the Davidians/Shepherd's Rod initiated by Benjamin Roden. Branch Davidian leaders, while still formally members of the Seventh-day Adventist Church, pushed for a reform of the church. When this was met with opposition (from both mainstream Seventh-day Adventists and the already excluded Davidians/Shepherd's Rod), they decided to leave the denomination while at the same time distancing themselves from the Davidians/Shepherd's Rod, the earlier "parent group" that had likewise been excluded for its own attempts to reform the Seventh-day Adventist Church. The Seventh-day Adventist Church deprived both the Branch Davidians and the Davidians of their membership in the denomination, but in spite of this the Branch Davidians actively continued to "hunt" members of the Seventh-day Adventist Church, encouraging them to leave it and join their group instead. The Seventh-day Adventists were reportedly "apprehensive" about the group's views, as Branch Davidians claimed to be the "only rightful continuation of the Adventist message", based on the idea that Victor Houteff was the divinely selected prophet and successor to Ellen G. White. Both the Davidians/Shepherd's Rod and the Branch Davidians claimed Houteff as their spiritual inspiration, although he was the founder of the Davidians/Shepherd's Rod. The Seventh-day Adventist Church issued warnings about Branch Davidian views to its members on a regular basis. There is documented evidence (FBI negotiation transcripts between Kathryn Schroeder and Steve Schneider, with interjections from Koresh himself) that David Koresh and his followers did not call themselves Branch Davidians. In addition, David Koresh, through forgery, stole the identity of the Branch Davidian Seventh-day Adventists for the purpose of obtaining the Mount Carmel Center property.
The doctrinal beliefs of the Davidians and the Branch Davidians differ on teachings such as the nature of the Holy Spirit and the feast days and their requirements. Both groups have disputed the relevance of the other's spiritual authority based on the proceedings following Victor Houteff's death. From its inception in 1930, the Davidian/Shepherd's Rod group believed themselves to be living in a time when Biblical prophecies of a Last Judgment were coming to pass as a prelude to Christ's Second Coming. In the late 1980s, Koresh and his followers abandoned many Branch Davidian teachings. Koresh became the group's self-proclaimed final prophet. The "Koreshians" were the majority faction resulting from the schism among the Branch Davidians, but some of the Branch Davidians did not join Koresh's group and instead gathered around George Roden or became independent. Following a series of violent shootouts between Roden's and Koresh's groups, the Mount Carmel compound was eventually taken over by the "Koreshians".
Burwash Hall Burwash Hall is the second oldest of the residence buildings at Toronto's Victoria College. Construction began in 1911 and was completed in 1913. It was named after Nathanael Burwash, a former president of Victoria. The building is an extravagant Neo-Gothic work with turrets, gargoyles, and battlements. The architect was Henry Sproatt. The building is divided between the large dining hall in the northwest and the student residence proper. The residence area is divided into two sections. The Upper Houses, built in 1913, consist of four houses: North House, Middle House, Gate House (on which more below), and South House. The Lower Houses were built in 1931 and were originally intended to house theology students at Emmanuel College, whose current building was opened the same year. Ryerson House, Nelles House, Caven House, and Bowles-Gandier House are now mostly home to undergraduate arts and science students. The latter two are mostly reserved for students in the new Vic One Program. The Vic One Program is an academic opportunity for incoming first-year students. The program explores ideas and events from a multi-disciplinary perspective in small classes limited to 25 students. The focus of the Vic One Program is on the development of strong research, writing, and critical-thinking skills that serve as an important foundation for future academic success. Famous residents of Burwash include Vincent Massey, Lester B. Pearson, Don Harron, and Donald Sutherland. The Upper Houses were gutted and renovated in 1995; the Lower Houses have only been partially upgraded. Before the renovations the entire building was all-male, but now every house is co-ed. Each Upper House consists of three floors. The lower floor contains a common room equipped with kitchen facilities, couches, and a television. The upper floors each have their own kitchen and dining area. All except North House have a high ratio of washrooms to residents, with Gate House the best equipped at nine washrooms for its twenty-eight residents. The Upper Houses are divided between double rooms and singles, with about sixty percent of residents in doubles. The Lower Houses each have four floors, but are much narrower, with each level having only four rooms. Each level also has its own kitchen, but these are much smaller than in the Upper Houses. The Lower Houses do have far larger and better-fitted common rooms that are similar to the ones the Upper Houses had before the renovations. The rooms in the Lower Houses are also considered more luxurious, with hardwood floors and larger sizes; they are, however, more expensive. Until 2003 the Lower Houses were restricted to upper-year students, but with the double cohort of graduates from Ontario high schools, many of the rooms were converted into doubles and now hold first-years. To the west the Upper Houses look out on the Vic Quad and the main Victoria College building across it. West of the Lower Houses is the new Lester B. Pearson Garden of Peace and International Understanding and the E.J. Pratt Library beyond it. From the eastern side of the building, the Upper Houses look out at Rowell Jackman Hall, and the Lower Houses see the St. Michael's College residence of Elmsley. The only exception is the view from Gate House's tower, which looks down St. Mary's Street. The dining hall is perhaps the part of the building best known to outsiders. It is the University of Toronto's largest, seating some 250 students at sixteen large tables.
Hanging on the western wall is Queen Victoria's burial flag, given to the college soon after her death. Under the flag is the high table where the professors and college administration lunch. Historically, the Upper Houses each had their own table. Gate sat in the southwest corner, Middle sat in the far northeast, South sat at the table to the west of Middle, while North sat to the west of the southeast corner. The only Lower House to have had a designated table was Caven, in the northwest corner beside the alumni table. (Note that prior to the 1995 renovations, some of these houses, particularly North and Caven, traditionally sat elsewhere.) Gate House is one of the four Upper Houses of the Burwash Hall residence. Until 2007, when the Victoria administration made it co-ed, Gate House was one of the last remaining all-male residence buildings at the University of Toronto. The Gate House emblem is the Phoenix, visible in the bottom-right corner of the Victoria College insignia. Gate House, with the rest of Upper Burwash, opened in 1913 and has held students every year since then except 1995, when it was renovated. As an all-male residence from 1913 to 2007, it held a number of unique traditions. For 20 years Gate House hosted an annual party called Novemberfest in the Burwash dining hall. The Victoria Dean of Students cancelled Novemberfest in 2003, when police discovered widespread underage drinking and over 800 people in the dining hall, in violation of the fire code. Another Gate House tradition that no longer occurs is "stirring the chicken," a dinner and keg party at which house members cooked chicken fajitas for hundreds of guests. Until 2007, Gate House held secretive first-year initiation ceremonies called Traditionals, which involved writing slogans on campus buildings in chalk, singing songs to the all-women's residence (whose residents would then sing back to them), and leading first-years around the house blindfolded. After Novemberfest's cancellation, Gate House continued to have conflict with the administration. In 2004 the Dean evicted three Gate House residents for allegedly "hog-tying" a first-year student. In 2007, after Gate members constructed a 2.5-metre snow penis and placed a cooked pig's head in an Annesley bathroom, President Paul W. Gooch wrote that Gate House had undertaken an "escalating series of actions" that were "defiant" and "disparaging of women". As punishment, Gooch evicted two residents during the fall exam period, relocated the remainder of Gate House to other places in the residence system, and banned all current Gate House students from entering the building in 2008. Since this decision Gate House has been a co-ed residence identical to the other Upper Burwash houses. Notable residents of Gate House include Lester B. Pearson, former Prime Minister of Canada, and Simon Pulsifer, whom "Time" magazine nicknamed "The Duke of Data" for his contributions to Wikipedia. During its 93 years as a men's residence, Gate House developed a distinct character and reputation. Its antics included pranks, toga parties, streaking, caroling to other residences, hazing rituals, "beer bashes", and "incessant pounding" on the Gate House table in the dining hall. Paul Gooch wrote that these traditions gave Gate House an "ethos" that contradicted his vision of residence life. The all-male Gate House was known as a social centre and a spirited, tight-knit community. According to Grayson Lee, who created the snow penis sculpture in 2007, most of its residents were "heartbroken" to leave.
Former Gate House President Dave Ruhl commented that "the Gate House camaraderie is unique" and that living there was "one of the most important parts of the university experience" for many. The Reuters news agency nicknamed Gate House "U of T's Animal House" because Donald Sutherland's memories of its parties are said to have influenced the script of the 1978 movie. The Toronto Star described Gooch's decision to put an end to its traditions, activities, and distinguishing characteristics as "neutering Animal House." Gate House has three floors which house 28 students, as well as a don and the Victoria College Residence Life Coordinator. Above the gate there is a tower that rises three storeys higher and has a turret-style roof. The tower is locked during the school year, and entering it is a Level 4 offence under the Victoria residence agreement, for which the punishment is eviction from residence. The first floor has one double room and one bathroom available to students. About half of the floor is taken up by the apartment of the Residence Life Coordinator; the remainder holds a house common room with a kitchen and two couches. The second floor has three double rooms and seven single rooms. It has three single washrooms and one larger communal one, as well as its own kitchen. This floor is home to the residence don, who has a larger room with a private washroom. The third floor is identical to the second except that in place of the don's room there are two single rooms.
Benzodiazepine Benzodiazepines (BZD, BDZ, BZs), sometimes called "benzos", are a class of psychoactive drugs whose core chemical structure is the fusion of a benzene ring and a diazepine ring. The first such drug, chlordiazepoxide (Librium), was discovered accidentally by Leo Sternbach in 1955, and made available in 1960 by Hoffmann–La Roche, which, since 1963, has also marketed the benzodiazepine diazepam (Valium). In 1977 benzodiazepines were globally the most prescribed medications. They are in the family of drugs commonly known as minor tranquilizers. Benzodiazepines enhance the effect of the neurotransmitter gamma-aminobutyric acid (GABA) at the GABAA receptor, resulting in sedative, hypnotic (sleep-inducing), anxiolytic (anti-anxiety), anticonvulsant, and muscle relaxant properties. High doses of many shorter-acting benzodiazepines may also cause anterograde amnesia and dissociation. These properties make benzodiazepines useful in treating anxiety, insomnia, agitation, seizures, muscle spasms, and alcohol withdrawal, and as a premedication for medical or dental procedures. Benzodiazepines are categorized as short-, intermediate-, or long-acting. Short- and intermediate-acting benzodiazepines are preferred for the treatment of insomnia; longer-acting benzodiazepines are recommended for the treatment of anxiety. Benzodiazepines are generally viewed as safe and effective for short-term use, although cognitive impairment occasionally occurs, and a minority of people have paradoxical reactions such as aggression, behavioral disinhibition, worsened agitation, or panic. Benzodiazepines are also associated with an increased risk of suicide. Long-term use is controversial because of concerns about decreasing effectiveness, physical dependence, withdrawal, and an increased risk of dementia and cancer. In the long term, stopping benzodiazepines often leads to improved physical and mental health. The elderly are at an increased risk of both short- and long-term adverse effects, and as a result, all benzodiazepines are listed in the Beers List of inappropriate medications for older adults. There is controversy concerning the safety of benzodiazepines in pregnancy. While they are not major teratogens, uncertainty remains as to whether they cause cleft palate in a small number of babies and whether neurobehavioural effects occur as a result of prenatal exposure; they are known to cause withdrawal symptoms in the newborn. Taken in overdose, benzodiazepines can cause dangerous deep unconsciousness. However, they are less toxic than their predecessors, the barbiturates, and death rarely results when a benzodiazepine is the only drug taken. When combined with other central nervous system (CNS) depressants such as alcoholic drinks and opioids, the potential for toxicity and fatal overdose increases. Benzodiazepines are commonly misused and taken in combination with other drugs of abuse. Benzodiazepines possess psycholeptic, sedative, hypnotic, anxiolytic, anticonvulsant, muscle relaxant, and amnesic actions, which are useful in a variety of indications such as alcohol dependence, seizures, anxiety disorders, panic, agitation, and insomnia. Most are administered orally; however, they can also be given intravenously, intramuscularly, or rectally. In general, benzodiazepines are well tolerated and are safe and effective drugs in the short term for a wide range of conditions.
Tolerance can develop to their effects and there is also a risk of dependence, and upon discontinuation a withdrawal syndrome may occur. These factors, combined with other possible secondary effects after prolonged use such as psychomotor, cognitive, or memory impairments, limit their long-term applicability. The effects of long-term use or misuse include the tendency to cause or worsen cognitive deficits, depression, and anxiety. The College of Physicians and Surgeons of British Columbia recommends discontinuing the use of benzodiazepines in those on opioids and those who have used them long term. Benzodiazepines can have serious adverse health outcomes, and these findings support clinical and regulatory efforts to reduce usage, especially in combination with non-benzodiazepine receptor agonists. Because of their effectiveness, tolerability, and rapid onset of anxiolytic action, benzodiazepines are frequently used for the treatment of anxiety associated with panic disorder. However, there is disagreement among expert bodies regarding the long-term use of benzodiazepines for panic disorder. Views range from the position that benzodiazepines are not effective long-term and should be reserved for treatment-resistant cases to the position that they are as effective in the long term as selective serotonin reuptake inhibitors. The American Psychiatric Association (APA) guidelines note that, in general, benzodiazepines are well tolerated, and their use for the initial treatment of panic disorder is strongly supported by numerous controlled trials. The APA states that there is insufficient evidence to recommend any of the established panic disorder treatments over another. The choice of treatment between benzodiazepines, SSRIs, serotonin–norepinephrine reuptake inhibitors, tricyclic antidepressants, and psychotherapy should be based on the patient's history, preference, and other individual characteristics. Selective serotonin reuptake inhibitors are likely to be the best choice of pharmacotherapy for many patients with panic disorder, but benzodiazepines are also often used, and some studies suggest that these medications are still used with greater frequency than the SSRIs. One advantage of benzodiazepines is that they alleviate the anxiety symptoms much faster than antidepressants, and therefore may be preferred in patients for whom rapid symptom control is critical. However, this advantage is offset by the possibility of developing benzodiazepine dependence. The APA does not recommend benzodiazepines for persons with depressive symptoms or a recent history of substance abuse. The APA guidelines state that, in general, pharmacotherapy of panic disorder should be continued for at least a year, and that clinical experience supports continuing benzodiazepine treatment to prevent recurrence. Although major concerns about benzodiazepine tolerance and withdrawal have been raised, there is no evidence for significant dose escalation in patients using benzodiazepines long-term. For many such patients, stable doses of benzodiazepines retain their efficacy over several years. The UK-based National Institute for Health and Clinical Excellence (NICE), in preparing its guidelines, carried out a systematic review using different methodology and came to a different conclusion. It questioned the accuracy of studies that were not placebo-controlled.
Based on the findings of placebo-controlled studies, NICE does not recommend use of benzodiazepines beyond two to four weeks, as tolerance and physical dependence develop rapidly, with withdrawal symptoms including rebound anxiety occurring after six weeks or more of use. Nevertheless, benzodiazepines are still prescribed for long-term treatment of anxiety disorders, although specific antidepressants and psychological therapies are recommended as the first-line treatment options, with the anticonvulsant drug pregabalin indicated as a second- or third-line treatment and suitable for long-term use. NICE stated that long-term use of benzodiazepines for panic disorder with or without agoraphobia is an unlicensed indication, does not have long-term efficacy, and is, therefore, not recommended by clinical guidelines. Psychological therapies such as cognitive behavioural therapy are recommended as a first-line therapy for panic disorder; benzodiazepine use has been found to interfere with therapeutic gains from these therapies. Benzodiazepines are usually administered orally; however, very occasionally lorazepam or diazepam may be given intravenously for the treatment of panic attacks. Benzodiazepines have robust efficacy in the short-term management of generalized anxiety disorder (GAD), but have not been shown to be effective in producing long-term improvement overall. According to the National Institute for Health and Clinical Excellence (NICE), benzodiazepines can be used in the immediate management of GAD, if necessary. However, they should not usually be given for longer than 2–4 weeks. The only medications NICE recommends for the longer-term management of GAD are antidepressants. Likewise, the Canadian Psychiatric Association (CPA) recommends the benzodiazepines alprazolam, bromazepam, lorazepam, and diazepam only as a second-line choice, for use if treatment with two different antidepressants has been unsuccessful. Although they are second-line agents, benzodiazepines can be used for a limited time to relieve severe anxiety and agitation. The CPA guidelines note that after 4–6 weeks the effect of benzodiazepines may decrease to the level of placebo, and that benzodiazepines are less effective than antidepressants in alleviating ruminative worry, the core symptom of GAD. However, in some cases, prolonged treatment with benzodiazepines as an add-on to an antidepressant may be justified. A 2015 review found a larger effect with medications than with talk therapy. Medications with benefit include serotonin–noradrenaline reuptake inhibitors, benzodiazepines, and selective serotonin reuptake inhibitors. Benzodiazepines can be useful for the short-term treatment of insomnia. Their use beyond 2 to 4 weeks is not recommended due to the risk of dependence. The Committee on Safety of Medicines report recommended that where long-term use of benzodiazepines for insomnia is indicated, treatment should be intermittent wherever possible. It is preferred that benzodiazepines be taken intermittently and at the lowest effective dose. They improve sleep-related problems by shortening the time spent in bed before falling asleep, prolonging the sleep time, and, in general, reducing wakefulness. However, they worsen sleep quality by increasing light sleep and decreasing deep sleep. Other drawbacks of hypnotics, including benzodiazepines, are possible tolerance to their effects, reduced slow-wave sleep, and a withdrawal period typified by rebound insomnia and a prolonged period of anxiety and agitation.
The list of benzodiazepines approved for the treatment of insomnia is fairly similar among most countries, but which benzodiazepines are officially designated as first-line hypnotics for the treatment of insomnia varies between countries. Longer-acting benzodiazepines such as nitrazepam and diazepam have residual effects that may persist into the next day and are, in general, not recommended. Since the release of nonbenzodiazepines in 1992 in response to safety concerns, individuals with insomnia and other sleep disorders have increasingly been prescribed nonbenzodiazepines (rising from 2.3% of Americans in 1993 to 13.7% in 2010) and less often prescribed benzodiazepines (falling from 23.5% in 1993 to 10.8% in 2010). It is not clear whether the newer nonbenzodiazepine hypnotics (Z-drugs) are better than the short-acting benzodiazepines. The efficacy of these two groups of medications is similar. According to the US Agency for Healthcare Research and Quality, indirect comparison indicates that side-effects from benzodiazepines may be about twice as frequent as from nonbenzodiazepines. Some experts suggest using nonbenzodiazepines preferentially as a first-line long-term treatment of insomnia. However, the UK National Institute for Health and Clinical Excellence did not find any convincing evidence in favor of Z-drugs. The NICE review pointed out that short-acting Z-drugs were inappropriately compared in clinical trials with long-acting benzodiazepines. There have been no trials comparing short-acting Z-drugs with appropriate doses of short-acting benzodiazepines. Based on this, NICE recommended choosing the hypnotic based on cost and the patient's preference. Older adults should not use benzodiazepines to treat insomnia unless other treatments have failed. When benzodiazepines are used, patients, their caretakers, and their physician should discuss the increased risk of harms, including evidence of twice the incidence of traffic collisions among patients who drive, as well as falls and hip fractures among older patients. Prolonged convulsive epileptic seizures are a medical emergency that can usually be dealt with effectively by administering fast-acting benzodiazepines, which are potent anticonvulsants. In a hospital environment, intravenous clonazepam, lorazepam, and diazepam are first-line choices. In the community, intravenous administration is not practical, so rectal diazepam or buccal midazolam are used, with a preference for midazolam as its administration is easier and more socially acceptable. When benzodiazepines were first introduced, they were enthusiastically adopted for treating all forms of epilepsy. However, drowsiness and tolerance become problems with continued use, and none are now considered first-line choices for long-term epilepsy therapy. Clobazam is widely used by specialist epilepsy clinics worldwide and clonazepam is popular in the Netherlands, Belgium, and France. Clobazam was approved for use in the United States in 2011. In the UK, both clobazam and clonazepam are second-line choices for treating many forms of epilepsy. Clobazam also has a useful role for very short-term seizure prophylaxis and in catamenial epilepsy. Discontinuation after long-term use in epilepsy requires additional caution because of the risks of rebound seizures; the dose is therefore slowly tapered over a period of up to six months or longer. Chlordiazepoxide is the most commonly used benzodiazepine for alcohol detoxification, but diazepam may be used as an alternative.
Both are used in the detoxification of individuals who are motivated to stop drinking, and are prescribed for a short period of time to reduce the risks of developing tolerance and dependence to the benzodiazepine medication itself. Benzodiazepines with a longer half-life make detoxification more tolerable, and dangerous (and potentially lethal) alcohol-withdrawal effects are less likely to occur. On the other hand, short-acting benzodiazepines may lead to breakthrough seizures and are, therefore, not recommended for detoxification in an outpatient setting. Oxazepam and lorazepam are often used in patients at risk of drug accumulation, in particular the elderly and those with cirrhosis, because they are metabolized differently from other benzodiazepines, through conjugation. Benzodiazepines are the preferred choice in the management of alcohol withdrawal syndrome, in particular for the prevention and treatment of the dangerous complication of seizures and in subduing severe delirium. Lorazepam is the only benzodiazepine with predictable intramuscular absorption, and it is the most effective in preventing and controlling acute seizures. Benzodiazepines are sometimes used in the treatment of acute anxiety, as they bring about rapid and marked or moderate relief of symptoms in most individuals; however, they are not recommended beyond 2–4 weeks of use due to risks of tolerance and dependence and a lack of long-term effectiveness. As with insomnia, they may also be used on an irregular, as-needed basis, such as when anxiety is at its worst. Compared to other pharmacological treatments, benzodiazepines are twice as likely to lead to a relapse of the underlying condition upon discontinuation. Psychological therapies and other pharmacological therapies are recommended for the long-term treatment of generalized anxiety disorder. Antidepressants have higher remission rates and are, in general, safe and effective in the short and long term. Benzodiazepines are also often prescribed for a wide range of other conditions. Because of their muscle-relaxant action, benzodiazepines may cause respiratory depression in susceptible individuals. For that reason, they are contraindicated in people with myasthenia gravis, sleep apnea, bronchitis, and COPD. Caution is required when benzodiazepines are used in people with personality disorders or intellectual disability because of frequent paradoxical reactions. In major depression, they may precipitate suicidal tendencies and are sometimes used in suicidal overdoses. Individuals with a history of alcohol, opioid, and barbiturate abuse should avoid benzodiazepines, as there is a risk of life-threatening interactions with these drugs. In the United States, the Food and Drug Administration has categorized benzodiazepines into either pregnancy category D or X, meaning that potential for harm to the unborn child has been demonstrated. Exposure to benzodiazepines during pregnancy has been associated with a slightly increased (from 0.06 to 0.07%) risk of cleft palate in newborns, a controversial conclusion as some studies find no association between benzodiazepines and cleft palate. Their use by expectant mothers shortly before delivery may result in floppy infant syndrome, with the newborn suffering from hypotonia, hypothermia, lethargy, and breathing and feeding difficulties. Cases of neonatal withdrawal syndrome have been described in infants chronically exposed to benzodiazepines in utero.
This syndrome may be hard to recognize, as it starts several days after delivery, for example, as late as 21 days for chlordiazepoxide. The symptoms include tremors, hypertonia, hyperreflexia, hyperactivity, and vomiting, and may last for up to three to six months. Tapering down the dose during pregnancy may lessen its severity. If used in pregnancy, those benzodiazepines with a better and longer safety record, such as diazepam or chlordiazepoxide, are recommended over potentially more harmful benzodiazepines, such as temazepam or triazolam. Using the lowest effective dose for the shortest period of time minimizes the risks to the unborn child. The benefits of benzodiazepines are least, and the risks are greatest, in the elderly. They are listed as a potentially inappropriate medication for older adults by the American Geriatrics Society. The elderly are at an increased risk of dependence and are more sensitive to adverse effects such as memory problems, daytime sedation, and impaired motor coordination, and face an increased risk of motor vehicle accidents, falls, and hip fractures. The long-term effects of benzodiazepines and benzodiazepine dependence in the elderly can resemble dementia, depression, or anxiety syndromes, and progressively worsen over time. Adverse effects on cognition can be mistaken for the effects of old age. The benefits of withdrawal include improved cognition, alertness, and mobility, a reduced risk of incontinence, and a reduced risk of falls and fractures. The success of gradual tapering of benzodiazepines is as great in the elderly as in younger people. Benzodiazepines should be prescribed to the elderly only with caution and only for a short period at low doses. Short- to intermediate-acting benzodiazepines, such as oxazepam and temazepam, are preferred in the elderly. The high-potency benzodiazepines alprazolam and triazolam and long-acting benzodiazepines are not recommended in the elderly due to increased adverse effects. Nonbenzodiazepines such as zaleplon and zolpidem and low doses of sedating antidepressants are sometimes used as alternatives to benzodiazepines. Long-term use of benzodiazepines is associated with an increased risk of cognitive impairment and dementia, and reduction in prescribing levels is likely to reduce dementia risk. The association of a past history of benzodiazepine use with cognitive decline is unclear, with some studies reporting a lower risk of cognitive decline in former users, some finding no association, and some indicating an increased risk of cognitive decline. Benzodiazepines are sometimes prescribed to treat behavioral symptoms of dementia. However, like antidepressants, they have little evidence of effectiveness, although antipsychotics have shown some benefit. The cognitive-impairing effects of benzodiazepines, which occur frequently in the elderly, can also worsen dementia. The most common side-effects of benzodiazepines are related to their sedating and muscle-relaxing action. They include drowsiness, dizziness, and decreased alertness and concentration. Lack of coordination may result in falls and injuries, particularly in the elderly. Another result is impairment of driving skills and an increased likelihood of road traffic accidents. Decreased libido and erection problems are a common side effect. Depression and disinhibition may emerge. Hypotension and suppressed breathing (hypoventilation) may be encountered with intravenous use.
Less common side effects include nausea and changes in appetite, blurred vision, confusion, euphoria, depersonalization, and nightmares. Cases of liver toxicity have been described but are very rare. The long-term effects of benzodiazepine use can include cognitive impairment as well as affective and behavioural problems. Feelings of turmoil, difficulty in thinking constructively, loss of sex drive, agoraphobia and social phobia, increasing anxiety and depression, loss of interest in leisure pursuits and interests, and an inability to experience or express feelings can also occur. Not everyone, however, experiences problems with long-term use. Additionally, an altered perception of self, environment, and relationships may occur. Compared to other sedative-hypnotics, visits to the hospital involving benzodiazepines had 66% greater odds of a serious adverse health outcome, which included hospitalization, patient transfer, or death; visits involving a combination of benzodiazepines and non-benzodiazepine receptor agonists had almost four times the odds of a serious health outcome. The short-term use of benzodiazepines adversely affects multiple areas of cognition, the most notable being that it interferes with the formation and consolidation of memories of new material and may induce complete anterograde amnesia. However, researchers hold contrary opinions regarding the effects of long-term administration. One view is that many of the short-term effects continue into the long term and may even worsen, and are not resolved after stopping benzodiazepine usage. Another view maintains that cognitive deficits in chronic benzodiazepine users occur only for a short period after the dose, or that the anxiety disorder is the cause of these deficits. While definitive studies are lacking, the former view received support from a 2004 meta-analysis of 13 small studies. This meta-analysis found that long-term use of benzodiazepines was associated with moderate to large adverse effects on all areas of cognition, with visuospatial memory being the most commonly detected impairment. Other reported impairments included decreases in IQ, visuomotor coordination, information processing, verbal learning, and concentration. The authors of the meta-analysis and a later reviewer noted that the applicability of this meta-analysis is limited because the subjects were taken mostly from withdrawal clinics; coexisting drug use, alcohol use, and psychiatric disorders were not defined; and several of the included studies conducted the cognitive measurements during the withdrawal period. Paradoxical reactions, such as increased seizures in epileptics, aggression, violence, impulsivity, irritability, and suicidal behavior, sometimes occur. These reactions have been explained as consequences of disinhibition and the subsequent loss of control over socially unacceptable behavior. Paradoxical reactions are rare in the general population, with an incidence rate below 1% and similar to placebo. However, they occur with greater frequency in recreational abusers, individuals with borderline personality disorder, children, and patients on high-dosage regimes. In these groups, impulse control problems are perhaps the most important risk factor for disinhibition; learning disabilities and neurological disorders are also significant risks. Most reports of disinhibition involve high doses of high-potency benzodiazepines. Paradoxical effects may also appear after chronic use of benzodiazepines.
While benzodiazepines may have short-term benefits for anxiety, sleep, and agitation in some patients, long-term (i.e., greater than 2–4 weeks) use can result in a worsening of the very symptoms the medications are meant to treat. Potential explanations include exacerbating cognitive problems that are already common in anxiety disorders, causing or worsening depression and suicidality, disrupting sleep architecture by inhibiting deep-stage sleep, withdrawal or rebound symptoms between doses mimicking or exacerbating underlying anxiety or sleep disorders, inhibiting the benefits of psychotherapy by inhibiting memory consolidation and reducing fear extinction, and reducing coping with trauma and stress while increasing vulnerability to future stress. Anxiety, insomnia, and irritability may be temporarily exacerbated during withdrawal, but psychiatric symptoms after discontinuation are usually less severe than while still taking benzodiazepines. Functioning significantly improves within one year of discontinuation. The main problem of the chronic use of benzodiazepines is the development of tolerance and dependence. Tolerance manifests itself as a diminished pharmacological effect and develops relatively quickly to the sedative, hypnotic, anticonvulsant, and muscle relaxant actions of benzodiazepines. Tolerance to the anti-anxiety effects develops more slowly, with little evidence of continued effectiveness beyond four to six months of continued use. In general, tolerance to the amnesic effects does not occur. However, controversy exists as to tolerance to the anxiolytic effects, with some evidence that benzodiazepines retain efficacy, opposing evidence from a systematic review of the literature that tolerance frequently occurs, and some evidence that anxiety may worsen with long-term use. The question of tolerance to the amnesic effects of benzodiazepines is, likewise, unclear. Some evidence suggests that partial tolerance does develop, and that "memory impairment is limited to a narrow window within 90 minutes after each dose". A major disadvantage of benzodiazepines is that tolerance to therapeutic effects develops relatively quickly while many adverse effects persist. Tolerance develops to the hypnotic and myorelaxant effects within days to weeks, and to the anticonvulsant and anxiolytic effects within weeks to months. Therefore, benzodiazepines are unlikely to be effective long-term treatments for sleep and anxiety. While the therapeutic effects of benzodiazepines disappear with tolerance, depression and impulsivity with high suicidal risk commonly persist. Several studies have confirmed that long-term benzodiazepines are not significantly different from placebo for sleep or anxiety. This may explain why patients commonly increase doses over time and many eventually take more than one type of benzodiazepine after the first loses effectiveness. Additionally, because tolerance to the sedating effects of benzodiazepines develops more quickly than tolerance to their brainstem-depressant effects, those taking more benzodiazepines to achieve desired effects may suffer sudden respiratory depression, hypotension, or death. Most patients with anxiety disorders and PTSD have symptoms that persist for at least several months, making tolerance to therapeutic effects a distinct problem for them and underscoring the need for more effective long-term treatments (e.g., psychotherapy, serotonergic antidepressants).
Discontinuation of benzodiazepines or abrupt reduction of the dose, even after a relatively short course of treatment (two to four weeks), may result in two groups of symptoms: rebound and withdrawal. Rebound symptoms are the return of the symptoms for which the patient was treated, but worse than before. Withdrawal symptoms are new symptoms that occur when the benzodiazepine is stopped. They are the main sign of physical dependence. The most frequent symptoms of withdrawal from benzodiazepines are insomnia, gastric problems, tremors, agitation, fearfulness, and muscle spasms. The less frequent effects are irritability, sweating, depersonalization, derealization, hypersensitivity to stimuli, depression, suicidal behavior, psychosis, seizures, and delirium tremens. Severe symptoms usually occur as a result of abrupt or over-rapid withdrawal. Abrupt withdrawal can be dangerous; therefore, a gradual reduction regimen is recommended. Symptoms may also occur during a gradual dosage reduction, but are typically less severe and may persist as part of a protracted withdrawal syndrome for months after cessation of benzodiazepines. Approximately 10% of patients experience a notable protracted withdrawal syndrome, which can persist for many months or, in some cases, a year or longer. Protracted symptoms tend to resemble those seen during the first couple of months of withdrawal but usually are of a sub-acute level of severity. Such symptoms gradually lessen over time, eventually disappearing altogether. Benzodiazepines have a reputation with patients and doctors for causing a severe and traumatic withdrawal; however, this is in large part due to the withdrawal process being poorly managed. Over-rapid withdrawal from benzodiazepines increases the severity of the withdrawal syndrome and increases the failure rate. A slow and gradual withdrawal customised to the individual, with psychological support if indicated, is the most effective way of managing the withdrawal. Opinion as to the time needed to complete withdrawal ranges from four weeks to several years. A goal of less than six months has been suggested, but due to factors such as dosage and type of benzodiazepine, reasons for prescription, lifestyle, personality, environmental stresses, and amount of available support, a year or more may be needed to withdraw. Withdrawal is best managed by transferring the physically dependent patient to an equivalent dose of diazepam, because it has the longest half-life of all of the benzodiazepines, is metabolised into long-acting active metabolites, and is available in low-potency tablets, which can be quartered for smaller doses. A further benefit is that it is available in liquid form, which allows for even smaller reductions. Chlordiazepoxide, which also has a long half-life and long-acting active metabolites, can be used as an alternative. Nonbenzodiazepines are contraindicated during benzodiazepine withdrawal as they are cross-tolerant with benzodiazepines and can induce dependence. Alcohol is also cross-tolerant with benzodiazepines and more toxic, so caution is needed to avoid replacing one dependence with another. During withdrawal, fluoroquinolone-based antibiotics are best avoided if possible; they displace benzodiazepines from their binding site and reduce GABA function and, thus, may aggravate withdrawal symptoms. Antipsychotics are not recommended for benzodiazepine withdrawal (or other CNS-depressant withdrawal states), especially clozapine, olanzapine, or low-potency phenothiazines such as chlorpromazine, as they lower the seizure threshold and can worsen withdrawal effects; if they are used, extreme caution is required.
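The "slow and gradual" taper principle above can be illustrated with simple arithmetic. The sketch below is a purely illustrative Python fragment, not a clinical protocol: it generates a fixed-percentage reduction schedule from a starting diazepam-equivalent dose, and the 10% step size, two-week interval, and 1 mg floor are assumed example values.

    # Illustrative taper-schedule generator: cut the current dose by a fixed
    # fraction at each step until a floor is reached. All numbers here are
    # hypothetical example values, not clinical guidance.
    def taper_schedule(start_dose_mg, step_fraction=0.10, interval_weeks=2, floor_mg=1.0):
        schedule = []
        week, dose = 0, start_dose_mg
        while dose > floor_mg:
            schedule.append((week, round(dose, 2)))
            dose *= 1.0 - step_fraction    # geometric: later cuts are smaller in mg
            week += interval_weeks
        schedule.append((week, round(dose, 2)))
        return schedule

    for week, dose in taper_schedule(20.0):
        print(f"week {week:3d}: {dose} mg diazepam-equivalent")

Because the reduction is geometric, each step removes a constant fraction but an ever-smaller absolute amount; from a hypothetical 20 mg starting dose, this schedule takes about 58 weeks to fall below 1 mg, consistent with the observation above that a year or more may be needed to withdraw.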
Withdrawal from long-term benzodiazepines is beneficial for most individuals. Withdrawal of benzodiazepines from long-term users, in general, leads to improved physical and mental health, particularly in the elderly; although some long-term users report continued benefit from taking benzodiazepines, this may be the result of suppression of withdrawal effects. Beyond the well-established link between benzodiazepines and psychomotor impairment resulting in motor vehicle accidents and falls leading to fracture, research in the 2000s and 2010s has raised associations between benzodiazepines (and Z-drugs) and other, as yet unproven, adverse effects including dementia, cancer, infections, pancreatitis, and respiratory disease exacerbations. A number of studies have drawn an association between long-term benzodiazepine use and neurodegenerative disease, particularly Alzheimer's disease. It has been determined that long-term use of benzodiazepines is associated with increased dementia risk, even after controlling for protopathic bias. Some observational studies have detected significant associations between benzodiazepines and respiratory infections such as pneumonia, while others have not. A large meta-analysis of pre-marketing randomized controlled trials on the pharmacologically related Z-drugs suggests a small increase in infection risk as well. An immunodeficiency effect from the action of benzodiazepines on GABAA receptors has been postulated from animal studies. A meta-analysis of observational studies has determined an association between benzodiazepine use and cancer, though the risk across different agents and different cancers varied significantly. In terms of experimental basic-science evidence, an analysis of carcinogenicity and genotoxicity data for various benzodiazepines has suggested a small possibility of carcinogenesis for a small number of benzodiazepines. The evidence suggesting a link between benzodiazepines (and Z-drugs) and pancreatic inflammation is very sparse and limited to a few observational studies from Taiwan. The same criticism of confounding can be applied to these findings as to the other controversial associations above; further well-designed research in other populations, as well as a biologically plausible mechanism, is required to confirm this association. Although benzodiazepines are much safer in overdose than their predecessors, the barbiturates, they can still cause problems in overdose. Taken alone, they rarely cause severe complications in overdose; statistics in England showed that benzodiazepines were responsible for 3.8% of all deaths by poisoning from a single drug. However, combining these drugs with alcohol, opiates, or tricyclic antidepressants markedly raises the toxicity. The elderly are more sensitive to the side effects of benzodiazepines, and poisoning may even occur from their long-term use. The various benzodiazepines differ in their toxicity; temazepam appears most toxic in overdose and when used with other drugs. The symptoms of a benzodiazepine overdose may include drowsiness, slurred speech, nystagmus, hypotension, ataxia, coma, respiratory depression, and cardiorespiratory arrest. A reversal agent for benzodiazepines exists: flumazenil (Anexate). Its use as an antidote is not routinely recommended because of the high risk of resedation and seizures.
In a double-blind, placebo-controlled trial of 326 people, 4 people had serious adverse events and 61% became resedated following the use of flumazenil. Numerous contraindications to its use exist. It is contraindicated in people with a history of long-term use of benzodiazepines, those having ingested a substance that lowers the seizure threshold or may cause an arrhythmia, and those with abnormal vital signs. One study found that only 10% of the people presenting with a benzodiazepine overdose are suitable candidates for treatment with flumazenil. Individual benzodiazepines may have different interactions with certain drugs. Depending on their metabolic pathway, benzodiazepines can be divided roughly into two groups. The largest group consists of those that are metabolized by cytochrome P450 (CYP450) enzymes and possess significant potential for interactions with other drugs. The other group comprises those that are metabolized through glucuronidation, such as lorazepam, oxazepam, and temazepam, and, in general, have few drug interactions. Many drugs, including oral contraceptives, some antibiotics, antidepressants, and antifungal agents, inhibit cytochrome enzymes in the liver. They reduce the rate of elimination of the benzodiazepines that are metabolized by CYP450, leading to possibly excessive drug accumulation and increased side effects. In contrast, drugs that induce cytochrome P450 enzymes, such as St John's wort, the antibiotic rifampicin, and the anticonvulsants carbamazepine and phenytoin, accelerate elimination of many benzodiazepines and decrease their action. Taking benzodiazepines with alcohol, opioids, and other central nervous system depressants potentiates their action. This often results in increased sedation, impaired motor coordination, suppressed breathing, and other adverse effects that have the potential to be lethal. Antacids can slow down absorption of some benzodiazepines; however, this effect is marginal and inconsistent. Benzodiazepines work by increasing the effectiveness of the endogenous chemical GABA, decreasing the excitability of neurons. This reduces the communication between neurons and, therefore, has a calming effect on many of the functions of the brain. GABA controls the excitability of neurons by binding to the GABA-A receptor. The GABA-A receptor is a protein complex located in the synapses between neurons. All GABA-A receptors contain an ion channel that conducts chloride ions across neuronal cell membranes and two binding sites for the neurotransmitter gamma-aminobutyric acid (GABA), while a subset of GABA-A receptor complexes also contain a single binding site for benzodiazepines. Binding of benzodiazepines to this receptor complex does not alter binding of GABA. Unlike other positive allosteric modulators that increase ligand binding, benzodiazepine binding acts as a positive allosteric modulator by increasing the total conduction of chloride ions across the neuronal cell membrane when GABA is already bound to its receptor. This increased chloride ion influx hyperpolarizes the neuron's membrane potential. As a result, the difference between resting potential and threshold potential is increased and firing is less likely. Different GABA-A receptor subtypes have varying distributions within different regions of the brain and, therefore, control distinct neuronal circuits. Hence, activation of different GABA-A receptor subtypes by benzodiazepines may result in distinct pharmacological actions.
In terms of the mechanism of action of benzodiazepines, their similarities are too great to separate them into individual categories such as anxiolytic or hypnotic. For example, a hypnotic administered in low doses produces anxiety-relieving effects, whereas a benzodiazepine marketed as an anti-anxiety drug at higher doses induces sleep. The subset of GABA-A receptors that also bind benzodiazepines are referred to as benzodiazepine receptors (BzR). The GABA-A receptor is a heteromer composed of five subunits, the most common combination being two αs, two βs, and one γ. For each subunit, many subtypes exist (α1–6, β1–3, and γ1–3). GABA-A receptors made up of different combinations of subunit subtypes have different properties, different distributions in the brain, and different activities relative to pharmacological and clinical effects. Benzodiazepines bind at the interface of the α and γ subunits on the GABA-A receptor. Binding also requires that the alpha subunit contain a histidine amino acid residue (i.e., GABA-A receptors containing α1, α2, α3, or α5 subunits). For this reason, benzodiazepines show no affinity for GABA-A receptors containing α4 and α6 subunits, which have an arginine instead of a histidine residue. Once bound to the benzodiazepine receptor, the benzodiazepine ligand locks the benzodiazepine receptor into a conformation in which it has a greater affinity for the GABA neurotransmitter. This increases the frequency of the opening of the associated chloride ion channel and hyperpolarizes the membrane of the associated neuron. The inhibitory effect of the available GABA is potentiated, leading to sedative and anxiolytic effects. For instance, ligands with high activity at the α1 subunit are associated with stronger hypnotic effects, whereas those with higher affinity for GABA-A receptors containing α2 and/or α3 subunits have good anti-anxiety activity. The benzodiazepine class of drugs also interacts with peripheral benzodiazepine receptors. Peripheral benzodiazepine receptors are present in peripheral nervous system tissues, glial cells, and, to a lesser extent, the central nervous system. These peripheral receptors are not structurally related or coupled to GABA-A receptors. They modulate the immune system and are involved in the body's response to injury. Benzodiazepines also function as weak adenosine reuptake inhibitors. It has been suggested that some of their anticonvulsant, anxiolytic, and muscle relaxant effects may be in part mediated by this action. Benzodiazepines have binding sites in the periphery; however, their effects on muscle tone are not mediated through these peripheral receptors. The peripheral binding sites for benzodiazepines are present in immune cells and the gastrointestinal tract. A benzodiazepine can be placed into one of three groups by its elimination half-life, or the time it takes for the body to eliminate half of the dose. Some benzodiazepines, such as diazepam and chlordiazepoxide, have long-acting active metabolites; both are metabolised into desmethyldiazepam, which has a half-life of 36–200 hours, while flurazepam's main active metabolite, desalkylflurazepam, has a half-life of 40–250 hours. These long-acting metabolites are partial agonists.
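The half-life grouping lends itself to a simple worked equation. Assuming standard first-order (exponential) elimination, which is a modelling assumption of this sketch rather than a claim from the sources above, the fraction of a dose remaining after time $t$ is

$$f(t) = \left(\tfrac{1}{2}\right)^{t/t_{1/2}} = e^{-t \ln 2 / t_{1/2}},$$

so after three days (72 hours) a metabolite at the short end of desmethyldiazepam's quoted 36–200 hour range ($t_{1/2} = 36$ h) is down to $(1/2)^{2} = 25\%$ of the dose, while at the long end ($t_{1/2} = 200$ h) roughly $(1/2)^{72/200} \approx 78\%$ remains, which is why long-acting metabolites accumulate with repeated dosing.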
Benzodiazepines share a similar chemical structure, and their effects in humans are mainly produced by the allosteric modification of a specific kind of neurotransmitter receptor, the GABA-A receptor, which increases the overall conductance of these inhibitory channels; this results in the various therapeutic effects as well as adverse effects of benzodiazepines. Other, less important modes of action are also known. The term benzodiazepine is the chemical name for the heterocyclic ring system formed by the fusion of the benzene and diazepine ring systems. Under Hantzsch–Widman nomenclature, a diazepine is a heterocycle with two nitrogen atoms, five carbon atoms, and the maximum possible number of noncumulative double bonds. The "benzo" prefix indicates the benzene ring fused onto the diazepine ring. Benzodiazepine drugs are substituted 1,4-benzodiazepines, although the chemical term can refer to many other compounds that do not have useful pharmacological properties. Different benzodiazepine drugs have different side groups attached to this central structure. The different side groups affect the binding of the molecule to the GABA-A receptor and so modulate the pharmacological properties. Many of the pharmacologically active "classical" benzodiazepine drugs contain the 5-phenyl-1H-benzo[e][1,4]diazepin-2(3H)-one substructure. Benzodiazepines have been found to structurally mimic protein reverse turns, which enables their biological activity in many cases. Nonbenzodiazepines also bind to the benzodiazepine binding site on the GABA-A receptor and possess similar pharmacological properties. While the nonbenzodiazepines are by definition structurally unrelated to the benzodiazepines, both classes of drugs possess a common pharmacophore, which explains their binding to a common receptor site. The first benzodiazepine, chlordiazepoxide ("Librium"), was synthesized in 1955 by Leo Sternbach while working at Hoffmann–La Roche on the development of tranquilizers. The pharmacological properties of the compounds prepared initially were disappointing, and Sternbach abandoned the project. Two years later, in April 1957, co-worker Earl Reeder noticed a "nicely crystalline" compound left over from the discontinued project while spring-cleaning in the lab. This compound, later named chlordiazepoxide, had not been tested in 1955 because of Sternbach's focus on other issues. Expecting the pharmacology results to be negative, and hoping to publish the chemistry-related findings, researchers submitted it for a standard battery of animal tests. The compound showed very strong sedative, anticonvulsant, and muscle relaxant effects. These impressive findings led to its speedy introduction throughout the world in 1960 under the brand name "Librium". Diazepam followed, marketed by Hoffmann–La Roche under the brand name "Valium" in 1963, and for a while the two were the most commercially successful drugs. The introduction of benzodiazepines led to a decrease in the prescription of barbiturates, and by the 1970s they had largely replaced the older drugs for sedative and hypnotic uses. The new group of drugs was initially greeted with optimism by the medical profession, but gradually concerns arose; in particular, the risk of dependence became evident in the 1980s.
Benzodiazepines have a unique history in that they were responsible for the largest-ever class-action lawsuit against drug manufacturers in the United Kingdom, involving 14,000 patients and 1,800 law firms, which alleged that the manufacturers knew of the dependence potential but intentionally withheld this information from doctors. At the same time, 117 general practitioners and 50 health authorities were sued by patients to recover damages for the harmful effects of dependence and withdrawal. This led some doctors to require a signed consent form from their patients and to recommend that all patients be adequately warned of the risks of dependence and withdrawal before starting treatment with benzodiazepines. The court case against the drug manufacturers never reached a verdict; legal aid had been withdrawn, and there were allegations that the consultant psychiatrists who served as expert witnesses had a conflict of interest. The litigation led to changes in British law, making class-action lawsuits more difficult. Although antidepressants with anxiolytic properties have been introduced, and there is increasing awareness of the adverse effects of benzodiazepines, prescriptions for short-term anxiety relief have not significantly dropped. For treatment of insomnia, benzodiazepines are now less popular than nonbenzodiazepines, which include zolpidem, zaleplon, and eszopiclone. Nonbenzodiazepines are molecularly distinct, but they nonetheless work on the same benzodiazepine receptors and produce similar sedative effects. Benzodiazepines have been detected in plant specimens and in brain samples of animals not exposed to synthetic sources, including a human brain from the 1940s. However, it is unclear whether these compounds are biosynthesized by microbes or by plants and animals themselves. A microbial biosynthetic pathway has been proposed. In the United States, benzodiazepines are Schedule IV drugs under the Federal Controlled Substances Act, even when not on the market (for example, nitrazepam and bromazepam). Flunitrazepam is subject to more stringent regulations in certain states, and temazepam prescriptions require specially coded pads in certain states. In Canada, possession of benzodiazepines is legal for personal use. All benzodiazepines are categorized as Schedule IV substances under the Controlled Drugs and Substances Act. Since 2000, benzodiazepines have been classed as "targeted substances", meaning that additional regulations exist, especially affecting pharmacists' records. Since approximately 2014, Health Canada, the Canadian Medical Association, and provincial Colleges of Physicians and Surgeons have been issuing progressively stricter guidelines for the prescription of benzodiazepines, especially for the elderly (e.g., the College of Physicians and Surgeons of British Columbia). Many of these guidelines are not readily available to the public. In the United Kingdom, benzodiazepines are Class C controlled drugs, carrying a maximum penalty of 7 years' imprisonment, an unlimited fine, or both for possession, and a maximum penalty of 14 years' imprisonment, an unlimited fine, or both for supplying benzodiazepines to others. In the Netherlands, since October 1993, benzodiazepines, including formulations containing less than 20 mg of temazepam, are all placed on List 2 of the Opium Law. A prescription is needed for possession of all benzodiazepines. Temazepam formulations containing 20 mg or more of the drug are placed on List 1, thus requiring doctors to write prescriptions in the List 1 format.
In East Asia and Southeast Asia, temazepam and nimetazepam are often heavily controlled and restricted. In certain countries, triazolam, flunitrazepam, flutoprazepam, and midazolam are also restricted or controlled to certain degrees. In Hong Kong, all benzodiazepines are regulated under Schedule 1 of Hong Kong's Chapter 134 "Dangerous Drugs Ordinance"; previously, only brotizolam, flunitrazepam, and triazolam were classed as dangerous drugs. Internationally, benzodiazepines are categorized by the INCB as Schedule IV controlled drugs, apart from flunitrazepam, which is a Schedule III drug under the Convention on Psychotropic Substances. Benzodiazepines are considered major drugs of abuse, although benzodiazepine abuse is mostly limited to individuals who abuse other drugs, i.e., poly-drug abusers. Some variation in drug scheduling exists in individual countries; for example, in the United Kingdom, midazolam and temazepam are Schedule III controlled drugs. British law requires that temazepam (but "not" midazolam) be stored in safe custody. Safe custody requirements ensure that pharmacists and doctors holding stock of temazepam must store it in securely fixed, double-locked steel safety cabinets. Controlled-drug registers must be bound, contain separate entries for each drug, and be written in ink with no use of correction fluid, although a written register is not required for temazepam in the United Kingdom. Disposal of expired stock must be witnessed by a designated inspector (either a local drug-enforcement police officer or an official from the health authority). Benzodiazepine abuse ranges from occasional binges on large doses to chronic and compulsive abuse of high doses. Benzodiazepines are used recreationally and by problematic drug misusers. Mortality is higher among poly-drug misusers who also use benzodiazepines, and heavy alcohol use further increases mortality among poly-drug users. Dependence on and tolerance to benzodiazepines, often coupled with dosage escalation, can develop rapidly among drug misusers; withdrawal syndrome may appear after as little as three weeks of continuous use. Long-term use has the potential to cause both physical and psychological dependence and severe withdrawal symptoms such as depression, anxiety (often to the point of panic attacks), and agoraphobia. Benzodiazepines, and in particular temazepam, are sometimes used intravenously, which, if done incorrectly or in an unsterile manner, can lead to medical complications including abscesses, cellulitis, thrombophlebitis, arterial puncture, deep vein thrombosis, and gangrene. Sharing syringes and needles for this purpose also carries the risk of transmission of hepatitis, HIV, and other diseases. Benzodiazepines are also misused intranasally, which may have additional health consequences. Once benzodiazepine dependence has been established, a clinician usually converts the patient to an equivalent dose of diazepam before beginning a gradual reduction program. A 1999–2005 Australian police survey of detainees reported preliminary findings that self-reported users of benzodiazepines were less likely than non-user detainees to work full-time and more likely to receive government benefits, use methamphetamine or heroin, and be arrested or imprisoned.
Benzodiazepines are sometimes used for criminal purposes; they serve to incapacitate a victim in cases of drug-assisted rape or robbery. Overall, anecdotal evidence suggests that temazepam may be the most psychologically habit-forming (addictive) benzodiazepine. Temazepam abuse reached epidemic proportions in some parts of the world, in particular in Europe and Australia, and it is a major drug of abuse in many Southeast Asian countries. This led authorities in various countries to place temazepam under a more restrictive legal status; some countries, such as Sweden, banned the drug outright. Temazepam also has certain pharmacokinetic properties of absorption, distribution, elimination, and clearance that make it more prone to abuse than many other benzodiazepines. Benzodiazepines are used in veterinary practice in the treatment of various disorders and conditions. As in humans, they are used in the first-line management of seizures, status epilepticus, and tetanus, and as maintenance therapy in epilepsy (in particular, in cats). They are widely used in small and large animals (including horses, swine, cattle, and exotic and wild animals) for their anxiolytic and sedative effects, as pre-medication before surgery, for induction of anesthesia, and as adjuncts to anesthesia.
https://en.wikipedia.org/wiki?curid=4781
Body mass index Body mass index (BMI) is a value derived from the mass (weight) and height of a person. The BMI is defined as the body mass divided by the square of the body height, and is universally expressed in units of kg/m2, resulting from mass in kilograms and height in metres. The BMI may be determined using a table or chart which displays BMI as a function of mass and height using contour lines or colours for different BMI categories, and which may use other units of measurement (converted to metric units for the calculation). The BMI is a convenient rule of thumb used to broadly categorize a person as "underweight", "normal weight", "overweight", or "obese" based on tissue mass (muscle, fat, and bone) and height. Commonly accepted BMI ranges are underweight (under 18.5 kg/m2), normal weight (18.5 to 25), overweight (25 to 30), and obese (over 30). BMIs under 20 and over 25 have been associated with higher all-cause mortality, with the risk increasing with distance from the 20–25 range. Adolphe Quetelet, a Belgian astronomer, mathematician, statistician, and sociologist, devised the basis of the BMI between 1830 and 1850 as he developed what he called "social physics". The modern term "body mass index" (BMI) for the ratio of human body weight to squared height was coined in a paper published in the July 1972 edition of the "Journal of Chronic Diseases" by Ancel Keys and others. In this paper, Keys argued that what he termed the BMI was "...if not fully satisfactory, at least as good as any other relative weight index as an indicator of relative obesity". The interest in an index that measures body fat came with observed increasing obesity in prosperous Western societies. Keys explicitly judged BMI as appropriate for "population" studies and inappropriate for individual evaluation. Nevertheless, due to its simplicity, it has come to be widely used for preliminary diagnoses. Additional metrics, such as waist circumference, can be more useful. The BMI is universally expressed in kg/m2, resulting from mass in kilograms and height in metres. If pounds and inches are used, a conversion factor of 703 (kg/m2)/(lb/in2) must be applied. When the term BMI is used informally, the units are usually omitted. BMI provides a simple numeric measure of a person's "thickness" or "thinness", allowing health professionals to discuss weight problems more objectively with their patients. BMI was designed to be used as a simple means of classifying average sedentary (physically inactive) populations with an average body composition. For such individuals, the value recommendations are as follows: a BMI from 18.5 up to 25 kg/m2 may indicate optimal weight, a BMI lower than 18.5 suggests the person is underweight, a number from 25 up to 30 may indicate the person is overweight, and a number from 30 upwards suggests the person is obese. Lean male athletes often have a high muscle-to-fat ratio and therefore a BMI that is misleadingly high relative to their body-fat percentage. BMI is proportional to the mass and inversely proportional to the square of the height. So, if all body dimensions double, and mass scales naturally with the cube of the height, then BMI doubles instead of remaining the same. This results in taller people having a reported BMI that is uncharacteristically high compared to their actual body fat levels. In comparison, the Ponderal index is based on the natural scaling of mass with the third power of the height.
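Because the definition is purely arithmetic, it can be captured in a few lines of code. The following Python sketch is illustrative only: the function names are ours, while the 703 conversion factor, the category cut-offs, and the scaling check all come from the text above.

def bmi_metric(mass_kg, height_m):
    # BMI = mass (kg) divided by the square of height (m).
    return mass_kg / height_m ** 2

def bmi_imperial(weight_lb, height_in):
    # With pounds and inches, apply the 703 (kg/m2)/(lb/in2) conversion factor.
    return 703 * weight_lb / height_in ** 2

def category(bmi):
    # Commonly accepted ranges: under 18.5, 18.5 to 25, 25 to 30, over 30.
    if bmi < 18.5:
        return "underweight"
    if bmi < 25:
        return "normal weight"
    if bmi < 30:
        return "overweight"
    return "obese"

# Example: 70 kg at 1.75 m gives a BMI of about 22.9, i.e. "normal weight".
print(round(bmi_metric(70, 1.75), 1), category(bmi_metric(70, 1.75)))

# Scaling check from the text: doubling all body dimensions multiplies mass
# by 8 (the cube) but height-squared by only 4, so BMI doubles.
assert abs(bmi_metric(8 * 70, 2 * 1.75) - 2 * bmi_metric(70, 1.75)) < 1e-9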
However, many taller people are not just "scaled up" short people but tend to have narrower frames in proportion to their height. Carl Lavie has written that "The B.M.I. tables are excellent for identifying obesity and body fat in large populations, but they are far less reliable for determining fatness in individuals." A common use of the BMI is to assess how far an individual's body weight departs from what is normal or desirable for a person's height. The weight excess or deficiency may, in part, be accounted for by body fat (adipose tissue), although other factors such as muscularity also affect BMI significantly (see the discussion below). The WHO regards a BMI of less than 18.5 as underweight, which may indicate malnutrition, an eating disorder, or other health problems, while a BMI equal to or greater than 25 is considered overweight and above 30 is considered obese. These ranges of BMI values are valid only as statistical categories. BMI is used differently for children. It is calculated in the same way as for adults, but then compared to typical values for other children of the same age. Instead of comparison against fixed thresholds for underweight and overweight, the BMI is compared against the percentiles for children of the same sex and age. A BMI that is less than the 5th percentile is considered underweight and above the 95th percentile is considered obese. Children with a BMI between the 85th and 95th percentiles are considered to be overweight. Recent studies in Britain have indicated that females between the ages of 12 and 16 have a higher BMI than males of the same age by 1.0 kg/m2 on average. These recommended distinctions along the linear scale may vary from time to time and country to country, making global, longitudinal surveys problematic. People from different ethnic groups, populations, and descent have different associations between BMI, percentage of body fat, and health risks, with a higher risk of type 2 diabetes mellitus and atherosclerotic cardiovascular disease at BMIs lower than the WHO cut-off point for overweight, 25 kg/m2, although the cut-off for observed risk varies among different populations. The cut-off for observed risk varies based on populations and subpopulations in Europe, Asia, and Africa. The Hospital Authority of Hong Kong recommends its own set of BMI ranges for the local population, and in Japan the criteria for BMI and its stages were determined by a 2000 study from the Japan Society for the Study of Obesity. In Singapore, the BMI cut-off figures were revised in 2005, motivated by studies showing that many Asian populations, including Singaporeans, have a higher proportion of body fat and increased risk for cardiovascular diseases and diabetes mellitus, compared with general BMI recommendations in other countries; the Singapore cut-offs are presented with an emphasis on health risk rather than weight. In 1998, the U.S. National Institutes of Health and the Centers for Disease Control and Prevention brought U.S. definitions in line with World Health Organization guidelines, lowering the normal/overweight cut-off from BMI 27.8 to BMI 25. This had the effect of redefining approximately 29 million Americans, previously "healthy", as "overweight". This can partially explain the increase in the "overweight" diagnosis in the past 20 years, and the increase in sales of weight loss products during the same time.
WHO also recommends lowering the normal/overweight threshold for South East Asian body types to around BMI 23, and expects further revisions to emerge from clinical studies of different body types. The U.S. National Health and Nutrition Examination Survey of 2015–2016 showed that 71.6% of American men and women had BMIs over 25. Obesity, defined as a BMI of 30 or more, was found in 39.8% of US adults. A survey in 2007 showed 63% of Americans were overweight or obese, with 26% in the obese category (a BMI of 30 or more). Separate survey data indicated that 37.7% of adults in the United States were obese: 35.0% of men and 40.4% of women; class 3 obesity (BMI over 40) rates were 7.7% for men and 9.9% for women. The BMI ranges are based on the relationship between body weight and disease and death. Overweight and obese individuals are at increased risk for a number of diseases. Among people who have never smoked, overweight/obesity is associated with a 51% increase in mortality compared with people who have always been a normal weight. The BMI is generally used as a means of correlation between groups related by general mass and can serve as a vague means of estimating adiposity. The duality of the BMI is that, while it is easy to use as a general calculation, it is limited in how accurate and pertinent the data obtained from it can be. Generally, the index is suitable for recognizing trends within sedentary or overweight individuals because there is a smaller margin of error. The BMI has been used by the WHO as the standard for recording obesity statistics since the early 1980s. This general correlation is particularly useful for consensus data regarding obesity or various other conditions, because it can be used to build a semi-accurate representation from which a solution can be stipulated, or the RDA for a group can be calculated. Similarly, it is becoming increasingly pertinent to the growth of children, because most children are sedentary. Cross-sectional studies have indicated that sedentary people can decrease BMI by becoming more physically active. Smaller effects are seen in prospective cohort studies, which lends support to active mobility as a means of preventing a further increase in BMI. BMI categories are generally regarded as a satisfactory tool for measuring whether sedentary individuals are "underweight", "overweight", or "obese", with various exceptions, such as athletes, children, the elderly, and the infirm. Also, the growth of a child is documented against a BMI-measured growth chart. Obesity trends can then be calculated from the difference between the child's BMI and the BMI on the chart. In the United States, BMI is also used as a measure of underweight, owing to advocacy on behalf of those with eating disorders, such as anorexia nervosa and bulimia nervosa. In France, Italy, and Spain, legislation has been introduced banning the use of fashion show models with a BMI below 18, and in Israel models with a BMI below 18.5 are banned. This is done to fight anorexia among models and people interested in fashion. A study published in the "Journal of the American Medical Association" ("JAMA") in 2005 showed that "overweight" people had a death rate similar to "normal" weight people as defined by BMI, while "underweight" and "obese" people had a higher death rate. A study published in "The Lancet" in 2009, involving 900,000 adults, showed that "overweight" and "underweight" people both had a mortality rate higher than "normal" weight people as defined by BMI.
The optimal BMI was found to be in the range of 22.5–25. High BMI is associated with type 2 diabetes only in persons with high serum gamma-glutamyl transpeptidase. In an analysis of 40 studies involving 250,000 people, patients with coronary artery disease with "normal" BMIs were at higher risk of death from cardiovascular disease than people whose BMIs put them in the "overweight" range (BMI 25–29.9). One study found that BMI had a good general correlation with body fat percentage, and noted that obesity has overtaken smoking as the world's number one cause of death. But it also noted that in the study 50% of men and 62% of women were obese according to body-fat-defined obesity, while only 21% of men and 31% of women were obese according to BMI, meaning that BMI was found to underestimate the number of obese subjects. A 2010 study that followed 11,000 subjects for up to eight years concluded that BMI is not a good measure for the risk of heart attack, stroke, or death; a better measure was found to be the waist-to-height ratio. A 2011 study that followed 60,000 participants for up to 13 years found that waist–hip ratio was a better predictor of ischaemic heart disease mortality. The medical establishment and statistical community have both highlighted the limitations of BMI. The exponent in the denominator of the formula for BMI is arbitrary. The BMI depends upon weight and the "square" of height. Since mass increases to the "third power" of linear dimensions, taller individuals with exactly the same body shape and relative composition have a larger BMI. According to mathematician Nick Trefethen, "BMI divides the weight by too large a number for short people and too small a number for tall people. So short people are misled into thinking that they are thinner than they are, and tall people are misled into thinking they are fatter." For US adults, exponent estimates range from 1.92 to 1.96 for males and from 1.45 to 1.95 for females. The BMI overestimates adiposity by roughly 10% for a large (or tall) frame and underestimates it by roughly 10% for a smaller frame (short stature). In other words, persons with small frames may be carrying more fat than optimal while their BMI indicates that they are "normal", whereas large-framed (or tall) individuals may be quite healthy, with a fairly low body fat percentage, but be classified as "overweight" by BMI. For example, a height/weight chart may give an ideal weight (BMI 21.5) for a given man; but if that man has a slender build (small frame), he may be overweight at that weight and should reduce by 10% (to roughly BMI 19.4), while a man with a larger frame and more solid build should increase his weight by 10% (to roughly BMI 23.7). If one teeters on the edge of small/medium or medium/large, common sense should be used in calculating one's ideal weight. However, falling into one's ideal weight range for height and build is still not as accurate in determining health risk factors as waist-to-height ratio and actual body fat percentage. Accurate frame size calculators use several measurements (wrist circumference, elbow width, neck circumference, and others) to determine what category an individual falls into for a given height. The BMI also fails to take into account loss of height through ageing. In this situation, BMI will increase without any corresponding increase in weight.
A new formula for computing body mass index that accounts for the distortions of the traditional BMI formula for shorter and taller individuals has been proposed by Nick Trefethen, Professor of numerical analysis at the University of Oxford: new BMI = 1.3 × weight (kg) / height (m)^2.5. The scaling factor of 1.3 was determined to make the proposed new BMI formula align with the traditional BMI formula for adults of average height, while the exponent of 2.5 is a compromise between the exponent of 2 in the traditional formula and the exponent of 3 that would be expected for the scaling of weight (which at constant density would theoretically scale with volume, i.e., as the cube of the height) with height; in Trefethen's analysis, an exponent of 2.5 was found to fit empirical data more closely, with less distortion, than either 2 or 3. Assumptions about the distribution between muscle mass and fat mass are inexact. BMI generally overestimates adiposity in those with more lean body mass (e.g., athletes) and underestimates excess adiposity in those with less lean body mass. A study in June 2008 by Romero-Corral et al. examined 13,601 subjects from the United States' third National Health and Nutrition Examination Survey (NHANES III) and found that BMI-defined obesity (BMI > 30) was present in 21% of men and 31% of women. Body-fat-defined obesity was found in 50% of men and 62% of women. While BMI-defined obesity showed high specificity (95% for men and 99% for women), BMI showed poor sensitivity (36% for men and 49% for women). In other words, BMI is better at determining that a person is not obese than at determining that a person is obese. Despite this undercounting of obesity by BMI, BMI values in the intermediate range of 20–30 were found to be associated with a wide range of body fat percentages. For men with a BMI of 25, about 20% have a body fat percentage below 20% and about 10% have a body fat percentage above 30%. BMI is particularly inaccurate for people who are very fit or athletic, as their high muscle mass can classify them in the "overweight" category by BMI, even though their body fat percentages frequently fall in the 10–15% range, which is below that of a more sedentary person of average build who has a "normal" BMI number. For example, the BMI of bodybuilder and eight-time Mr. Olympia Ronnie Coleman was 41.8 at his peak physical condition, which would be considered morbidly obese. Body composition for athletes is often better calculated using measures of body fat, as determined by such techniques as skinfold measurements or underwater weighing, and the limitations of manual measurement have also led to new, alternative methods to measure obesity, such as the body volume indicator. It is not clear where on the BMI scale the threshold for "overweight" and "obese" should be set. Because of this, the standards have varied over the past few decades. Between 1980 and 2000 the U.S. Dietary Guidelines defined overweight at a variety of levels ranging from a BMI of 24.9 to 27.1. In 1985 the National Institutes of Health (NIH) consensus conference recommended that the overweight threshold be set at a BMI of 27.8 for men and 27.3 for women. In 1998 an NIH report concluded that a BMI over 25 is overweight and a BMI over 30 is obese, matching the standards the World Health Organization (WHO) had set in the 1990s, when it decided that a BMI of 25 to 30 should be considered overweight and a BMI over 30 obese. This became the definitive guide for determining if someone is overweight.
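As a sketch of how the proposed formula behaves relative to the traditional one (the 1.3 factor and 2.5 exponent are those described above; the function names and sample heights are ours):

def bmi_traditional(mass_kg, height_m):
    return mass_kg / height_m ** 2

def bmi_trefethen(mass_kg, height_m):
    # 1.3 aligns the two formulas at roughly average adult height
    # (1.3 = 1.69 ** 0.5); 2.5 is the compromise exponent described above.
    return 1.3 * mass_kg / height_m ** 2.5

# For a mass chosen so the traditional BMI is exactly 22 at each height,
# the new formula reads higher for a short person and lower for a tall one,
# counteracting the distortion Trefethen describes.
for height in (1.52, 1.69, 1.93):
    mass = 22 * height ** 2
    print(height, round(bmi_trefethen(mass, height), 1))
# prints roughly 23.2 at 1.52 m, 22.0 at 1.69 m, and 20.6 at 1.93 m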
The current WHO and NIH ranges of "normal" weights have been shown to be associated with decreased risk of some diseases, such as type II diabetes; however, using the same BMI range for men and women is considered arbitrary and makes the definition of underweight quite unsuitable for men. One study found that the vast majority of people labelled 'overweight' and 'obese' according to current definitions do not in fact face any meaningful increased risk for early death. In a quantitative analysis of a number of studies, involving more than 600,000 men and women, the lowest mortality rates were found for people with BMIs between 23 and 29; most of the 25–30 range considered 'overweight' was not associated with higher risk. BMI Prime, a modification of the BMI system, is the ratio of actual BMI to the upper-limit optimal BMI (currently defined as 25 kg/m2), i.e., the actual BMI expressed as a proportion of the upper limit. Equivalently, BMI Prime is the ratio of actual body weight to the body weight that would give the upper-limit optimal BMI (25 kg/m2) at the same height. BMI Prime is a dimensionless number independent of units. Individuals with a BMI Prime less than 0.74 are underweight; those between 0.74 and 1.00 have optimal weight; and those at 1.00 or greater are overweight. BMI Prime is useful clinically because it shows by what ratio (e.g., 1.36) or percentage (e.g., 136%, or 36% above) a person deviates from the maximum optimal BMI. For instance, a person with BMI 34 kg/m2 has a BMI Prime of 34/25 = 1.36, and is 36% over their upper mass limit. In South East Asian and South Chinese populations, BMI Prime should be calculated using an upper-limit BMI of 23 in the denominator instead of 25. BMI Prime allows easy comparison between populations whose upper-limit optimal BMI values differ. Waist circumference is a good indicator of visceral fat, which poses more health risks than fat elsewhere. According to the U.S. National Institutes of Health (NIH), a waist circumference in excess of 102 cm (40 in) for men and 88 cm (35 in) for (non-pregnant) women is considered to imply a high risk for type 2 diabetes, dyslipidemia, hypertension, and CVD. Waist circumference can be a better indicator of obesity-related disease risk than BMI; for example, this is the case in populations of Asian descent and in older people. Lower thresholds have also been cited as posing "higher risk", with the NIH figures marking "even higher" risk. Waist-to-hip circumference ratio has also been used, but has been found to be no better than waist circumference alone, and more complicated to measure. A related indicator is waist circumference divided by height. The values indicating increased risk are: greater than 0.5 for people under 40 years of age, 0.5 to 0.6 for people aged 40–50, and greater than 0.6 for people over 50 years of age. The Surface-based Body Shape Index (SBSI) is far more rigorous and is based upon four key measurements: the body surface area (BSA), vertical trunk circumference (VTC), waist circumference (WC), and height (H). Data on 11,808 subjects from the National Health and Nutrition Examination Survey (NHANES) 1999–2004 showed that SBSI outperformed BMI, waist circumference, and A Body Shape Index (ABSI), an alternative to BMI. A simplified, dimensionless form of SBSI, known as SBSI*, has also been developed. Within some medical contexts, such as familial amyloid polyneuropathy, serum albumin is factored in to produce a modified body mass index (mBMI). The mBMI can be obtained by multiplying the BMI by serum albumin, in grams per litre.
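BMI Prime and the modified BMI are both one-line computations; a minimal sketch (function names are ours; 25 kg/m2 is the default upper limit, with 23 for the populations noted above):

def bmi_prime(bmi, upper_limit=25.0):
    # Dimensionless ratio of actual BMI to the upper-limit optimal BMI.
    return bmi / upper_limit

def modified_bmi(bmi, serum_albumin_g_per_l):
    # mBMI, used in contexts such as familial amyloid polyneuropathy:
    # BMI multiplied by serum albumin in grams per litre.
    return bmi * serum_albumin_g_per_l

# From the text: BMI 34 gives 34/25 = 1.36, i.e. 36% over the upper limit.
print(bmi_prime(34))                                # 1.36
# With the lower upper limit of 23 used for some Asian populations:
print(round(bmi_prime(34, upper_limit=23.0), 2))    # 1.48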
https://en.wikipedia.org/wiki?curid=4788
Barry Goldwater Barry Morris Goldwater (January 2, 1909 – May 29, 1998) was an American politician, businessman, and author who was a five-term Senator from Arizona (1953–1965, 1969–1987) and the Republican Party nominee for president of the United States in 1964. Despite losing the 1964 presidential election in a landslide, Goldwater is the politician most often credited with having sparked the resurgence of the American conservative political movement in the 1960s. He also had a substantial impact on the libertarian movement. Goldwater rejected the legacy of the New Deal and fought with the conservative coalition against the New Deal coalition. Although he had voted in favor of the Civil Rights Act of 1957 and the 24th Amendment to the U.S. Constitution, he opposed the Civil Rights Act of 1964, believing it to be an overreach by the federal government. In 1964, Goldwater mobilized a large conservative constituency to win the hard-fought Republican presidential primaries. Although raised as an Episcopalian, Goldwater was the first candidate of ethnically Jewish heritage to be nominated for President by a major American party (his father was Jewish). Goldwater's platform ultimately failed to gain the support of the electorate and he lost the 1964 presidential election to incumbent Democrat Lyndon B. Johnson. Goldwater returned to the Senate in 1969 and specialized in defense and foreign policy. As an elder statesman of the party, Goldwater successfully urged President Richard Nixon to resign in 1974 when evidence of a cover-up in the Watergate scandal became overwhelming and impeachment was imminent. Goldwater's views grew more libertarian as he neared the end of his career (he retired from the Senate in 1987). A significant accomplishment of his career was the passage of the Goldwater–Nichols Act of 1986. He was succeeded by John McCain, who praised his predecessor as the man who "transformed the Republican Party from an Eastern elitist organization to the breeding ground for the election of Ronald Reagan". Goldwater strongly supported the 1980 presidential campaign of Reagan, who had become the standard-bearer of the conservative movement after his "A Time for Choosing" speech. Reagan reflected many of the principles of Goldwater's earlier run in his campaign. "The Washington Post" columnist George Will took note of this, writing: "We [...] who voted for him in 1964 believe he won, it just took 16 years to count the votes". After he left the Senate, Goldwater's views cemented as libertarian. He criticized the "moneymaking ventures by fellows like Pat Robertson and others [in the Republican Party] who are trying to...make a religious organization out of it." He lobbied for homosexuals to be able to serve openly in the military, opposed the Clinton administration's plan for health care reform, and supported abortion rights and the legalization of medicinal marijuana. In 1997, Goldwater was revealed to be in the early stages of Alzheimer's disease. He died one year later at the age of 89. Goldwater was born in Phoenix in what was then the Arizona Territory, the son of Baron M. Goldwater and his wife, Hattie Josephine "JoJo" Williams. His father's family had founded Goldwater's, a leading upscale department store in Phoenix. Goldwater's paternal grandfather, Michel Goldwasser, a Polish Jew, was born in 1821 in Konin, then part of the Russian Empire, whence he emigrated to London following the Revolutions of 1848.
Soon after arriving in London, he anglicized his name from Goldwasser to Goldwater. Michel married Sarah Nathan, a member of an English-Jewish family, in the Great Synagogue of London. Barry's father was Jewish, and his mother, who was Episcopalian, came from a New England family that included the theologian Roger Williams of Rhode Island. Goldwater's parents were married in an Episcopal church in Phoenix; for his entire life, Goldwater was an Episcopalian, though on rare occasions he referred to himself as Jewish. While he did not often attend church, he stated that "If a man acts in a religious way, an ethical way, then he's really a religious man—and it doesn't have a lot to do with how often he gets inside a church." After he did poorly as a freshman in high school, Goldwater's parents sent him to Staunton Military Academy in Virginia, where he competed in varsity football, basketball, track, and swimming, was senior class treasurer, and attained the rank of captain. He graduated from the academy in 1928 and enrolled at the University of Arizona. Goldwater dropped out of college after one year. He is the most recent non-college graduate to be the nominee of a major political party in a presidential election. Goldwater entered the family's business around the time of his father's death in 1930. Six years later, he took over the department store, though he was not particularly enthused about running the business. In 1934, he married Margaret "Peggy" Johnson, daughter of a prominent industrialist from Muncie, Indiana. They had four children: Joanne (born January 18, 1936), Barry (born July 15, 1938), Michael (born March 15, 1940), and Peggy (born July 27, 1944). Goldwater became a widower in 1985, and in 1992 he married Susan Wechsler, a nurse 32 years his junior. Goldwater's son Barry Goldwater Jr. served as a United States House of Representatives member from California from 1969 to 1983. Goldwater's uncle Morris Goldwater (1852–1939) was an Arizona territorial and state legislator, mayor of Prescott, Arizona, and a businessman. Goldwater's grandson Ty Ross, a former Zoli model, is openly gay and HIV positive, and the one who inspired the elder Goldwater "to become an octogenarian proponent of gay civil rights". With the American entry into World War II, Goldwater received a reserve commission in the United States Army Air Forces. He became a pilot assigned to the Ferry Command, a newly formed unit that flew aircraft and supplies to war zones worldwide. He spent most of the war flying between the U.S. and India, via the Azores and North Africa or South America, Nigeria, and Central Africa. He also flew "the hump" over the Himalayas to deliver supplies to the Republic of China. Following World War II, Goldwater was a leading proponent of creating the United States Air Force Academy, and later served on the Academy's Board of Visitors. The visitor center at the Academy is now named in his honor. As a colonel he also founded the Arizona Air National Guard, and he desegregated it two years before the rest of the U.S. military. Goldwater was instrumental in pushing the Pentagon to support desegregation of the armed services. Remaining in the Arizona Air National Guard and Air Force Reserve after the war, he eventually retired as a Command Pilot with the rank of major general. By that time, he had flown 165 different types of aircraft. As an Air Force Reserve major general, he continued piloting aircraft, including the B-52 Stratofortress, until late in his military career. As a U.S.
Senator, Goldwater had a sign in his office that referenced his military career and mindset: "There are old pilots and there are bold pilots, but there are no old, bold pilots." Goldwater ran track and cross country in high school, where he specialized in the 880-yard run. His parents strongly encouraged him to compete in these sports, to his dismay. He often went by the nickname of "Rolling Thunder". In 1940, Goldwater became one of the first people to run the Colorado River recreationally through the Grand Canyon, participating as an oarsman on Norman Nevills' second commercial river trip. Goldwater joined them in Green River, Utah, and rowed his own boat down to Lake Mead. In 1970 the Arizona Historical Foundation published the daily journal Goldwater had maintained on the Grand Canyon journey, including his photographs, in a 209-page volume titled "Delightful Journey". In 1963 he joined the Arizona Society of the Sons of the American Revolution. He was also a lifetime member of the Veterans of Foreign Wars, the American Legion, and Sigma Chi fraternity. He belonged to both the York Rite and Scottish Rite of Freemasonry, and was awarded the 33rd degree in the Scottish Rite. In a heavily Democratic state, Goldwater became a conservative Republican and a friend of Herbert Hoover. He was outspoken against New Deal liberalism, especially its close ties to labor unions. A pilot, amateur radio operator, outdoorsman, and photographer, he criss-crossed Arizona and developed a deep interest in both the natural and the human history of the state. He entered Phoenix politics in 1949, when he was elected to the City Council as part of a nonpartisan team of candidates pledged to clean up widespread prostitution and gambling. The team won every mayoral and council election for the next two decades. Goldwater rebuilt the weak Republican Party and was instrumental in electing Howard Pyle as Governor in 1950. As a Republican he won a seat in the U.S. Senate in 1952, when he upset veteran Democrat and Senate Majority Leader Ernest McFarland. He won largely by defeating McFarland in his native Maricopa County by 12,600 votes, almost double the overall margin of 6,725 votes. As a measure of how Democratic Arizona had been since joining the Union 40 years earlier, Goldwater was only the second Republican ever to represent Arizona in the Senate. In his first year in the Senate he desegregated the Senate cafeteria, insisting that his black legislative assistant, Kathrine Maxwell, be served along with every other Senate employee. He defeated McFarland again in 1958, with a strong showing in his first reelection; he was the first Arizona Republican to win a second term in the Senate. Goldwater's victory was all the more remarkable since it came in a year the Democrats gained 13 seats in the Senate. He did not seek re-election to the Senate in 1964, in favor of his presidential campaign. During his Senate career, Goldwater was regarded as the "Grand Old Man of the Republican Party and one of the nation's most respected exponents of conservatism". Goldwater was outspoken about the Eisenhower administration, calling some of its policies too liberal for a Republican president. "...Democrats delighted in pointing out that the junior senator was so headstrong that he had gone out of his way to criticize the president of his own party."
There was a Democratic majority in Congress for most of Eisenhower's presidency, and Goldwater felt that President Dwight Eisenhower was compromising too much with Democrats in order to get legislation passed. Early in his career as a senator for Arizona, he criticized the $71.8 billion budget that President Eisenhower sent to Congress, stating, "Now, however, I am not so sure. A $71.8 billion budget not only shocks me, but it weakens my faith." Goldwater opposed Eisenhower's pick of Earl Warren for Chief Justice of the Supreme Court. "The day that Eisenhower appointed Governor Earl Warren of California as Chief Justice of the Supreme Court, Goldwater did not hesitate to express his misgivings." Goldwater and the Eisenhower administration supported the integration of schools in the South, but Goldwater felt the states should choose how to integrate and should not be forced by the federal government. "Goldwater criticized the use of federal troops. He accused the Eisenhower administration of violating the Constitution by assuming powers reserved by the states. While he agreed that under the law, every state should have integrated its schools, each state should integrate in its own way." High-ranking government officials, including an Army general, followed Goldwater in his critical stance on the Eisenhower administration. "Fulbright's startling revelation that military personnel were being indoctrinated with the idea that the policies of the Commander in Chief were treasonous dovetailed with the return to the news of the strange case of General Edwin Walker." In 1964, Goldwater fought and won a multi-candidate race for the Republican Party's presidential nomination. His main rival was New York Governor Nelson Rockefeller, whom he defeated by a narrow margin in the California primary. Eisenhower gave his support to Goldwater when he told reporters, "I personally believe that Goldwater is not an extremist as some people have made him, but in any event we're all Republicans." His nomination was opposed by liberal Republicans, who thought Goldwater's demand for active measures to defeat the Soviet Union would foment a nuclear war. He delivered a well-received acceptance speech. The author Lee Edwards says "[Goldwater] devoted more care [to it] than to any other speech in his political career. And with good reason: he would deliver it to the largest and most attentive audience of his life." At the time of Goldwater's presidential candidacy, the Republican Party was split between its conservative wing (based in the West and South) and its moderate/liberal wing, sometimes called Rockefeller Republicans (based in the Northeast). Goldwater alarmed even some of his fellow partisans with his brand of staunch fiscal conservatism and militant anti-communism. He was viewed by many traditional Republicans as being too far on the right wing of the political spectrum to appeal to the mainstream majority necessary to win a national election. As a result, moderate Republicans recruited a series of opponents, including New York Governor Nelson Rockefeller, Henry Cabot Lodge Jr. of Massachusetts, and Pennsylvania Governor William Scranton, to challenge him. Goldwater defeated Rockefeller in the winner-take-all California primary and secured the nomination. He also had solid backing from Southern Republicans. A young Birmingham lawyer, John Grenier, secured commitments from 271 of 279 Southern convention delegates to back Goldwater.
Grenier would serve as executive director of the national GOP during the Goldwater campaign, the number-two position to party chairman Dean Burch of Arizona. Journalist John Adams says of the acceptance speech at the 1964 Republican Convention that it "was bold, reflecting his conservative views, but not irrational. Rather than shrinking from those critics who accuse him of extremism, Goldwater challenged them head-on". In his own words: "I would remind you that extremism in the defense of liberty is no vice. And let me remind you also that moderation in the pursuit of justice is no virtue." His paraphrase of Cicero was included at the suggestion of Harry V. Jaffa, though the speech was primarily written by Karl Hess. Because of President Johnson's popularity, Goldwater refrained from attacking the president directly. He did not mention Johnson by name at all in his convention speech. Former U.S. Senator Prescott Bush, a moderate Republican from Connecticut, was a friend of Goldwater and supported him in the general election campaign. Bush's son, George H. W. Bush (then running for the Senate from Texas against Democrat Ralph Yarborough), was also a strong Goldwater supporter in both the nomination and general election campaigns. Future Chief Justice of the United States and fellow Arizonan William H. Rehnquist also first came to the attention of national Republicans through his work as a legal adviser to Goldwater's presidential campaign. Rehnquist had begun his law practice in 1953 in the firm of Denison Kitchel of Phoenix, Goldwater's national campaign manager and friend of nearly three decades. Goldwater was painted as a dangerous figure by the Johnson campaign, which countered Goldwater's slogan "In your heart, you know he's right" with the lines "In your guts, you know he's nuts," and "In your heart, you know he might" (that is, he might actually use nuclear weapons, as opposed to relying on deterrence alone). Johnson himself did not mention Goldwater in his own acceptance speech at the 1964 Democratic National Convention. Goldwater's provocative advocacy of aggressive tactics to prevent the spread of communism in Asia led to effective counterattacks from Lyndon B. Johnson and his supporters, who claimed that Goldwater's militancy would have dire consequences, possibly even nuclear war. In a May 1964 speech, Goldwater suggested that nuclear weapons should be treated more like conventional weapons and used in Vietnam, specifically that they should have been used at Dien Bien Phu in 1954 to defoliate trees. Regarding Vietnam, Goldwater charged that Johnson's policy was devoid of "goal, course, or purpose," leaving "only sudden death in the jungles and the slow strangulation of freedom". Goldwater's rhetoric on nuclear war was viewed by many as quite uncompromising, a view buttressed by off-hand comments such as "Let's lob one into the men's room at the Kremlin." He also advocated that field commanders in Vietnam and Europe be given the authority to use tactical nuclear weapons (which he called "small conventional nuclear weapons") without presidential confirmation. Goldwater countered the Johnson attacks by criticizing the administration for its perceived ethical lapses, and by stating in a commercial that "we, as a nation, are not far from the kind of moral decay that has brought on the fall of other nations and people... I say it is time to put conscience back in government. And by good example, put it back in all walks of American life." Goldwater campaign commercials included statements of support by actor Raymond Massey and moderate Republican senator Margaret Chase Smith.
Before the 1964 election, "Fact" magazine, published by Ralph Ginzburg, ran a special issue titled "The Unconscious of a Conservative: A Special Issue on the Mind of Barry Goldwater". The two main articles contended that Goldwater was mentally unfit to be president. The magazine supported this claim with the results of a poll of board-certified psychiatrists. "Fact" had mailed questionnaires to 12,356 psychiatrists, receiving responses from 2,417, of whom 1,189 said Goldwater was mentally incapable of holding the office of president. Most of the other respondents declined to diagnose Goldwater because they had not clinically interviewed him, but claimed that, although not psychologically unfit to preside, Goldwater would be negligent and egregious in the role. After the election, Goldwater sued the publisher, the editor, and the magazine for libel in "Goldwater v. Ginzburg". "Although the jury awarded Goldwater only $1.00 in compensatory damages against all three defendants, it went on to award him punitive damages of $25,000 against Ginzburg and $50,000 against "Fact" magazine, Inc." According to Warren Boroson, then-managing editor of "Fact" and later a financial columnist, the main biography of Goldwater in the magazine was written by David Bar-Illan, the Israeli pianist. A Democratic campaign advertisement known as "Daisy" showed a young girl counting daisy petals, from one to ten. Immediately following this scene, a voiceover counted down from ten to one. The child's face was shown as a still photograph, followed by images of nuclear explosions and mushroom clouds. The campaign advertisement ended with a plea to vote for Johnson, implying that Goldwater (though not mentioned by name) would provoke a nuclear war if elected. The advertisement, which featured only a few spoken words and relied on imagery for its emotional impact, was one of the most provocative in American political campaign history, and many analysts credit it as being the birth of the modern style of "negative political ads" on television. The ad aired only once and was immediately pulled, but it was then shown many times by local television stations covering the controversy. Goldwater did not have ties to the Ku Klux Klan (KKK), but he was publicly endorsed by members of the organization. Lyndon B. Johnson exploited this association during the elections, but Goldwater barred the KKK from supporting him and denounced them. Past comments came back to haunt Goldwater throughout the campaign. He had once called the Eisenhower administration "a dime-store New Deal", and the former president never fully forgave him. However, Eisenhower did film a television commercial with Goldwater. Eisenhower qualified his vote for Goldwater in November by remarking that he had voted not specifically for Goldwater, but for the Republican Party. In December 1961, Goldwater had told a news conference that "sometimes I think this country would be better off if we could just saw off the Eastern Seaboard and let it float out to sea." That comment boomeranged on him during the campaign in the form of a Johnson television commercial, as did remarks about making Social Security voluntary and statements in Tennessee about selling the Tennessee Valley Authority, a large local New Deal employer. The Goldwater campaign spotlighted Ronald Reagan, who appeared in a campaign ad. In turn, Reagan gave a stirring, nationally televised speech, "A Time for Choosing", in support of Goldwater.
The speech prompted Reagan to seek the California governorship in 1966 and jump-started his political career. Conservative activist Phyllis Schlafly, later well known for her fight against the Equal Rights Amendment, first became known for writing a pro-Goldwater book, "A Choice, Not an Echo", attacking the moderate Republican establishment. Goldwater lost to President Lyndon Johnson by a landslide, pulling down the Republican Party, which lost many seats in both houses of Congress. Goldwater won only his home state of Arizona and five states in the Deep South. The Southern states, traditionally Democratic up to that time, voted Republican primarily as a statement of opposition to the Civil Rights Act, which had been passed by Johnson and the Northern Democrats, as well as the majority of Republicans in Congress, earlier that year. In the end, Goldwater received 38% of the popular vote and carried just six states: Arizona (with 51% of the popular vote) and the core states of the Deep South: Alabama, Georgia, Louisiana, Mississippi, and South Carolina. In carrying Georgia by a margin of 54–45%, Goldwater became the first Republican nominee to win the state. However, the overall result was the worst showing in terms of popular and electoral college votes for any post-World War II Republican. Indeed, he would not even have carried his own state without a 20,000-vote margin in Maricopa County. In all, Johnson won an overwhelming 486 electoral votes to Goldwater's 52. Goldwater, with his customary bluntness, remarked, "We would have lost even if Abraham Lincoln had come back and campaigned with us." He maintained later in life that he would have won the election if the country had not been in a state of extended grief following the assassination of John F. Kennedy, and that it was simply not ready for a third president in just 14 months. Goldwater's poor showing pulled down many supporters. Of the 57 Republican Congressmen who endorsed Goldwater before the convention, 20 were defeated for reelection, along with many promising young Republicans. On the other hand, the defeat of so many older politicians created openings for young conservatives to move up the ladder. While the loss of moderate Republicans was temporary—they were back by 1966—Goldwater also permanently pulled many conservative Southerners and white ethnics out of the New Deal Coalition. According to Steve Kornacki of "Salon", "Goldwater broke through and won five [Southern] states—the best showing in the region for a GOP candidate since Reconstruction. In Mississippi—where Franklin D. Roosevelt had won nearly 100 percent of the vote 28 years earlier—Goldwater claimed a staggering 87 percent." It has frequently been argued that Goldwater's strong performance in Southern states previously regarded as Democratic strongholds foreshadowed a larger shift in electoral trends in the coming decades that would make the South a Republican bastion (an end to the "Solid South")—first in presidential politics and eventually at the congressional and state levels as well. Goldwater's uncompromising promotion of freedom also marked the start of a continuing shift in American politics from liberalism toward a conservative economic philosophy. Goldwater remained popular in Arizona, and in the 1968 Senate election he was elected to the seat of retiring Senator Carl Hayden. He was subsequently reelected in 1974 and 1980.
The 1974 election saw Goldwater easily reelected over his Democratic opponent, Jonathan Marshall, the publisher of "The Scottsdale Progress". With his fourth Senate term due to end in January 1981, Goldwater seriously considered retiring from the Senate in 1980 before deciding to run for one final term. It proved a surprisingly tough battle for re-election. He was viewed by some as out of touch and vulnerable for several reasons. Chief among them was that Goldwater, because he had planned to retire in 1981, had not visited many areas of Arizona outside of Phoenix and Tucson. Additionally, his challenger, Bill Schulz, a former Republican turned Democrat and a wealthy real estate developer, proved to be a formidable opponent; his campaign slogan was "Energy for the Eighties." In the general election, Goldwater won by a very narrow margin, receiving 49.5% of the vote to Schulz's 48.4%. Arizona's changing population also hurt Goldwater. The state's population had soared, and a huge portion of the electorate had not lived in the state when Goldwater was previously elected; hence, many voters were less familiar with Goldwater's actual beliefs, and he was on the defensive for much of the campaign. Early returns on election night seemed to indicate that Schulz would win. The counting of votes continued through the night and into the next morning. At around daybreak, Goldwater learned that he had been reelected thanks to absentee ballots, which were among the last to be counted. Goldwater's surprisingly close victory in 1980 came despite Reagan's 61% landslide over Jimmy Carter in Arizona. Republicans regained control of the Senate, putting Goldwater in the most powerful position of his Senate career. In October 1983, Goldwater voted against the legislation establishing Martin Luther King Jr. Day as a federal holiday. Goldwater said later that the close result in 1980 convinced him not to run again. He retired in 1987, serving as chair of the Senate Intelligence and Armed Services Committees in his final term. Despite his reputation as a firebrand in the 1960s, by the end of his career he was considered a stabilizing influence in the Senate, one of the most respected members of either major party. Although Goldwater remained staunchly anti-communist and "hawkish" on military issues, he was a key supporter of the fight for ratification of the Panama Canal Treaty in the 1970s, which would give control of the canal zone to the Republic of Panama. His most important legislative achievement may have been the Goldwater–Nichols Act, which reorganized the U.S. military's senior-command structure. Goldwater became most associated with labor-union reform and anti-communism; he was a supporter of the conservative coalition in Congress. His work on labor issues led to Congress passing major anti-corruption reforms in 1957, and to an all-out campaign by the AFL-CIO to defeat his 1958 reelection bid. He voted against the censure of Senator Joseph McCarthy in 1954, but he never actually charged any individual with being a communist or Soviet agent. Goldwater emphasized his strong opposition to the worldwide spread of communism in his 1960 book "The Conscience of a Conservative". The book became an important reference text in conservative political circles. In 1964, Goldwater ran a conservative campaign that emphasized states' rights. Goldwater's 1964 campaign was a magnet for conservatives since he opposed interference by the federal government in state affairs.
Goldwater voted in favor of the Civil Rights Act of 1957 and the 24th Amendment to the U.S. Constitution, but did not vote on the Civil Rights Act of 1960. Though Goldwater had supported the original Senate version of the bill, he voted against the Civil Rights Act of 1964. His stance was based on his view that Title II and Title VII of the Act interfered with the rights of private persons to do or not to do business with whomever they chose, and on his belief that the private employment provisions of the Act would lead to racial quotas. In the segregated city of Phoenix in the 1950s, he had quietly supported civil rights for blacks, but would not let his name be used. All this appealed to white Southern Democrats, and Goldwater was the first Republican to win the electoral votes of all of the Deep South states (South Carolina, Georgia, Alabama, Mississippi and Louisiana) since Reconstruction (although Dwight Eisenhower did carry Louisiana in 1956). However, Goldwater's vote on the Civil Rights Act proved devastating to his campaign everywhere outside the South (besides Dixie, Goldwater won only Arizona, his home state), contributing to his landslide defeat in 1964. While Goldwater had been depicted by his opponents in the Republican primaries as a representative of a conservative philosophy that was extreme and alien, his voting record shows that his positions were in harmony with those of his fellow Republicans in Congress. According to Hans J. Morgenthau, what distinguished him from his predecessors was his firmness of principle and determination, which did not allow him to be content with mere rhetoric. Goldwater fought in 1971 to stop U.S. funding of the United Nations after the People's Republic of China was admitted to the organization. He said: Goldwater was grief-stricken by the assassination of Kennedy and was greatly disappointed that his opponent in 1964 would not be Kennedy but instead his vice president, former Senate Majority Leader Lyndon B. Johnson of Texas. Goldwater disliked Johnson (saying he "used every dirty trick in the bag") and Richard Nixon of California (whom he later called "the most dishonest individual I have ever met in my life"). After Goldwater returned to the Senate, he urged Nixon to resign at the height of the Watergate scandal, warning that fewer than 10 senators would vote against conviction if Nixon were impeached by the House of Representatives. The term "Goldwater moment" has since been used to describe situations in which influential members of Congress disagree so strongly with a president from their own party that they openly oppose him. Although Goldwater was not as important to the American conservative movement as Ronald Reagan after 1965, he shaped and redefined the movement from the late 1950s to 1964. Arizona Senator John McCain, who succeeded Goldwater in the Senate in 1987, summed up Goldwater's legacy: "He transformed the Republican Party from an Eastern elitist organization to the breeding ground for the election of Ronald Reagan." Columnist George Will remarked after the 1980 presidential election that it took 16 years to count the votes from 1964, and Goldwater won. The Republican Party recovered from the 1964 election debacle, gaining 47 seats in the House of Representatives in the 1966 mid-term election. Further Republican successes ensued, including Goldwater's return to the Senate in 1969.
In January of that year, Goldwater wrote an article in the "National Review" "affirming that he [was] not against liberals, that liberals are needed as a counterweight to conservatism, and that he had in mind a fine liberal like Max Lerner". Goldwater was a strong supporter of environmental protection. He explained his position in 1969: Throughout the 1970s, as the conservative wing under Reagan gained control of the party, Goldwater concentrated on his Senate duties, especially in military affairs. He played little part in the election or administration of Richard Nixon, but he helped force Nixon's resignation in 1974. In 1976 he helped block Rockefeller's renomination as vice president. When Reagan challenged Ford for the presidential nomination in 1976, Goldwater endorsed Ford, looking for consensus rather than conservative idealism. As one historian notes, "The Arizonan had lost much of his zest for battle." In 1979, when President Carter normalized relations with Communist China, Goldwater and some other senators sued him in the Supreme Court, arguing that the president could not terminate the Sino-American Mutual Defense Treaty with the Republic of China (Taiwan) without the approval of Congress. The case, "Goldwater v. Carter", 444 U.S. 996, was dismissed by the court as a political question. By the 1980s, with Ronald Reagan as president and the growing involvement of the religious right in conservative politics, Goldwater's libertarian views on personal issues came to the fore; he believed they were an integral part of true conservatism. Goldwater viewed abortion as a matter of personal choice and as such supported abortion rights. As a passionate defender of personal liberty, he saw the religious right's views as an encroachment on personal privacy and individual liberties. In his 1980 Senate reelection campaign, Goldwater won support from religious conservatives, but in his final term he voted consistently to uphold legalized abortion, and in 1981 he gave a speech on how he was angry about the bullying of American politicians by religious organizations and would "fight them every step of the way". Goldwater also disagreed with the Reagan administration on certain aspects of foreign policy (for example, he opposed the decision to mine Nicaraguan harbors). Notwithstanding his prior differences with Dwight D. Eisenhower, Goldwater in a 1986 interview rated him the best of the seven presidents with whom he had worked. He introduced the Cable Communications Policy Act of 1984, which allowed local governments to require the transmission of public, educational, and government access (PEG) channels, barred cable operators from exercising editorial control over the content of programs carried on PEG channels, and absolved them from liability for that content. On May 12, 1986, Goldwater was presented with the Presidential Medal of Freedom by President Ronald Reagan. After his retirement in 1987, Goldwater described Arizona Governor Evan Mecham as "hardheaded" and called on him to resign, and two years later stated that the Republican Party had been taken over by a "bunch of kooks". In 1987 he received the Langley Gold Medal from the Smithsonian Institution. In 1988, Princeton University's American Whig-Cliosophic Society awarded Goldwater the James Madison Award for Distinguished Public Service in recognition of his career.
In a 1994 interview with "The Washington Post", the retired senator said: Goldwater visited the small town of Bowen, Illinois, in 1989 to see where his mother was raised. In response to Moral Majority founder Jerry Falwell's opposition to the nomination of Sandra Day O'Connor to the Supreme Court, of which Falwell had said, "Every good Christian should be concerned", Goldwater retorted: "Every good Christian ought to kick Falwell right in the ass." According to John Dean, Goldwater actually suggested that good Christians ought to kick Falwell in the "nuts", but the news media "changed the anatomical reference". Goldwater also had harsh words for his one-time political protégé, President Reagan, particularly after the Iran–Contra Affair became public in 1986. Journalist Robert MacNeil, a friend of Goldwater's from the 1964 presidential campaign, recalled interviewing him in his office shortly afterward. "He was sitting in his office with his hands on his cane... and he said to me, 'Well, aren't you going to ask me about the Iran arms sales?' It had just been announced that the Reagan administration had sold arms to Iran. And I said, 'Well, if I asked you, what would you say?' He said, 'I'd say it's the god-damned stupidest foreign policy blunder this country's ever made!'" Aside from the Iran–Contra scandal, however, Goldwater thought Reagan was a good president. During the 1988 presidential campaign, he pointedly told vice-presidential nominee Dan Quayle at a campaign event in Arizona, "I want you to go back and tell George Bush to start talking about the issues." Some of Goldwater's statements in the 1990s alienated many social conservatives. He endorsed Democrat Karan English in an Arizona congressional race, urged Republicans to lay off Bill Clinton over the Whitewater scandal, and criticized the military's ban on homosexuals, saying, "Everyone knows that gays have served honorably in the military since at least the time of Julius Caesar" and "You don't need to be 'straight' to fight and die for your country. You just need to shoot straight." A few years before his death, he addressed establishment Republicans by saying, "Do not associate my name with anything you do. You are extremists, and you've hurt the Republican party much more than the Democrats have." In 1996, he told Bob Dole, whose own presidential campaign received lukewarm support from conservative Republicans: "We're the new liberals of the Republican party. Can you imagine that?" That same year, with Senator Dennis DeConcini, Goldwater endorsed an Arizona initiative to legalize medical marijuana over the objections of social conservatives. Goldwater was an avid amateur radio operator from the early 1920s onward, with the call signs 6BPI, K3UIG and K7UGA. The last is now used as a commemorative call sign by an Arizona club honoring him. During the Vietnam War he was a Military Affiliate Radio System (MARS) operator. Goldwater was a prominent spokesman for amateur radio and its enthusiasts. From 1969 until his death, he appeared in numerous educational and promotional films (and later videos) about the hobby, produced for the American Radio Relay League (the United States national society representing the interests of radio amateurs) by such producers as Dave Bell (W6AQ), ARRL Southwest Director John R. Griggs (W6KW), Alan Kaul (W6RCL), Forrest Oden (N6ENV), and the late Roy Neal (K6DUE).
His first appearance was in Dave Bell's "The World of Amateur Radio", in which Goldwater discussed the history of the hobby and demonstrated a live contact with Antarctica. His last on-screen appearance dealing with "ham radio" was in 1994, explaining a then-upcoming, Earth-orbiting ham radio relay satellite. Electronics was a hobby for Goldwater beyond amateur radio. He enjoyed assembling Heathkits, completing more than 100 and often visiting their maker in Benton Harbor, Michigan, to buy more, before the company exited the kit business in 1992. In 1916 Goldwater visited the Hopi Reservation with Phoenix architect John Rinker Kibby and obtained his first kachina doll. Eventually his doll collection included 437 items; it was presented in 1969 to the Heard Museum in Phoenix. Goldwater was an amateur photographer and in his estate left some 15,000 of his images to three Arizona institutions. He was very keen on candid photography. He got started in photography after receiving a camera as a gift from his wife on their first Christmas together. He was known to use a 4×5 Graflex, a Rolleiflex, a 16 mm Bell and Howell motion picture camera, and a 35 mm Nikkormat FT. He was a member of the Royal Photographic Society from 1941, becoming a Life Member in 1948. For decades, he contributed photographs of his home state to "Arizona Highways" and was best known for his Western landscapes and pictures of Native Americans. Three books with his photographs are "People and Places", from 1967; "Barry Goldwater and the Southwest", from 1976; and "Delightful Journey", first published in 1940 and reprinted in 1970. Ansel Adams wrote a foreword to the 1976 book. Goldwater's photography interests occasionally crossed over with his political career. John F. Kennedy, as president, was known to invite former congressional colleagues to the White House for a drink. On one occasion, Goldwater brought his camera and photographed President Kennedy. When Kennedy received the photo, he returned it to Goldwater with the inscription, "For Barry Goldwater—Whom I urge to follow the career for which he has shown such talent—photography!—from his friend – John Kennedy." This quip became a classic of American political humor after it was made famous by humorist Bennett Cerf. The photo itself was prized by Goldwater for the rest of his life and later sold for $17,925 in a Heritage auction. Goldwater's son Michael Prescott Goldwater formed the Goldwater Family Foundation with the goal of making his father's photography available via the internet. The resulting website, "Barry Goldwater Photographs", was launched in September 2006 to coincide with the HBO documentary "Mr. Conservative", produced by granddaughter CC Goldwater. On March 28, 1975, Goldwater wrote to Shlomo Arnon: "The subject of UFOs has interested me for some long time. About ten or twelve years ago I made an effort to find out what was in the building at Wright-Patterson Air Force Base where the information has been stored that has been collected by the Air Force, and I was understandably denied this request. It is still classified above Top Secret." Goldwater further wrote that there were rumors the evidence would be released, and that he was "just as anxious to see this material as you are, and I hope we will not have to wait much longer".
The April 25, 1988 issue of "The New Yorker" carried an interview in which Goldwater said he repeatedly asked his friend, General Curtis LeMay, if there was any truth to the rumors that UFO evidence was stored in a secret room at Wright-Patterson, and if he (Goldwater) might have access to the room. According to Goldwater, an angry LeMay gave him "holy hell" and said, "Not only can't you get into it but don't you ever mention it to me again." In a 1988 interview on Larry King's radio show, Goldwater was asked if he thought the U.S. government was withholding UFO evidence; he replied "Yes, I do." He added: The Barry M. Goldwater Scholarship and Excellence in Education Program was established by Congress in 1986. Its goal is to provide a continuing source of highly qualified scientists, mathematicians, and engineers by awarding scholarships to college students who intend to pursue careers in these fields. The scholarship is widely considered the most prestigious award in the U.S. conferred upon undergraduates studying the sciences. It is awarded to about 300 students (college sophomores and juniors) nationwide in the amount of $7,500 per academic year (for their senior year, or junior and senior years). It honors Goldwater's keen interest in science and technology. Goldwater's public appearances ended in late 1996 after he suffered a massive stroke; family members then disclosed that he was in the early stages of Alzheimer's disease. He died on May 29, 1998, at the age of 89, at his long-time home in Paradise Valley, Arizona, of complications from the stroke. His funeral was co-officiated by a reverend and a rabbi. His ashes were buried at the Episcopal Christ Church of the Ascension in Paradise Valley, Arizona. A memorial statue of Goldwater, set in a small park near his former home and final resting place, has been erected in that town. Among the buildings and monuments named after Barry Goldwater are the Barry M. Goldwater Terminal at Phoenix Sky Harbor International Airport, Goldwater Memorial Park in Paradise Valley, Arizona, the Barry Goldwater Air Force Academy Visitor Center at the United States Air Force Academy, and Barry Goldwater High School in northern Phoenix. In 2010, former Arizona Attorney General Grant Woods, himself a Goldwater scholar and supporter, founded the Goldwater Women's Tennis Classic Tournament, held annually at the Phoenix Country Club in Phoenix. On February 11, 2015, a statue of Goldwater by Deborah Copenhaver Fellows was unveiled by U.S. House and Senate leaders at a dedication ceremony in National Statuary Hall of the U.S. Capitol building in Washington, D.C. Barry Goldwater Peak is the highest peak in the White Tank Mountains. Goldwater's granddaughter, CC Goldwater, co-produced with longtime friend and independent film producer Tani L. Cohen a documentary on Goldwater's life, "Mr. Conservative: Goldwater on Goldwater", first shown on HBO on September 18, 2006. In his song "I Shall Be Free No. 10", Bob Dylan refers to Goldwater: "I'm liberal to a degree, I want everybody to be free. But if you think I'll let Barry Goldwater move in next door and marry my daughter, you must think I'm crazy." Goldwater's son Barry Goldwater Jr. served as a congressman from California from 1969 to 1983; he was the first congressman to serve while his father sat in the Senate. Goldwater's uncle Morris Goldwater served in the Arizona territorial and state legislatures and as mayor of Prescott, Arizona.
Goldwater's nephew Don Goldwater sought the Arizona Republican Party nomination for Governor of Arizona in 2006, but he was defeated by Len Munsil.
https://en.wikipedia.org/wiki?curid=4792
Bob Young (businessman) Robert Young is a serial entrepreneur best known for founding Red Hat Inc., the open source software company. He also owns the franchise for the Hamilton Tiger-Cats of the Canadian Football League and serves as the team's self-styled "Caretaker". He was born in Hamilton, Ontario, Canada. He attended Trinity College School in Port Hope, Ontario, and received a Bachelor of Arts from Victoria College at the University of Toronto. Prior to Red Hat, Young built computer rental and leasing businesses, including Vernon Computer Rentals, which he founded in 1984; successor companies still operate under that name. After leaving Vernon, Young founded ACC Corp Inc. in 1993. Marc Ewing and Young co-founded the open-source software company Red Hat, which was a member of the S&P 500 Index before being purchased by IBM on July 9, 2019. Their partnership started in 1994, when ACC acquired the Red Hat trademarks from Ewing. In early 1995, ACC changed its name to Red Hat Software, which has subsequently been shortened to simply Red Hat, Inc. Young served as Red Hat's CEO until 1999. In 2002, Young founded Lulu.com, a print-on-demand, self-publishing company, and served as its CEO. In 2006, Young established the Lulu Blooker Prize, a prize for books that began as blogs, launched partly as a means to promote Lulu. Young served as CEO of PrecisionHawk, a commercial drone technology company, from 2015 to 2017. Prior to being named PrecisionHawk's CEO, he was an early investor in the company, and he continues to serve as chairman of its board. Young also co-founded Linux Journal in 1994, and in 2003 he purchased the Hamilton Tiger-Cats of the Canadian Football League. Young focuses his philanthropic efforts on access to information and the advancement of knowledge. In 1999, he co-founded the Center for the Public Domain. Young has supported Creative Commons, Public Knowledge.org, the Dictionary of Old English, the Loran Scholarship Foundation, ibiblio.org, and the NCSU eGames, among others.
https://en.wikipedia.org/wiki?curid=4797
Babylon 5 Babylon 5 is an American space opera television series created by writer and producer J. Michael Straczynski, under the Babylonian Productions label, in association with Straczynski's Synthetic Worlds Ltd. and Warner Bros. Domestic Television. After the successful airing of a test pilot movie, "The Gathering", on February 22, 1993, Warner Bros. commissioned the series in May 1993 for production as part of its Prime Time Entertainment Network (PTEN). The show premiered in the US on January 26, 1994, and ran for five seasons. Unusually for the time, "Babylon 5" was conceived as a "novel for television", with a defined beginning, middle, and end; in essence, each episode would be a single "chapter" of this "novel". A coherent five-year story arc unfolds over five 22-episode seasons. Tie-in novels, comic books, and short stories were also developed to play a significant canonical part in the overall story. The series follows the human military staff and alien diplomats stationed on a space station, "Babylon 5", built in the aftermath of several major inter-species wars as a neutral focal point for galactic diplomacy and trade. "Babylon 5" was an early example of a television series featuring story arcs which spanned episodes or whole seasons. Whereas contemporary television shows tended to confine conflicts to individual episodes, maintaining the overall status quo, each season of "Babylon 5" contains plot elements which permanently change the series universe. "Babylon 5" used multiple episodes to address the repercussions of some plot events or character decisions, and episode plots would at times reference or be influenced by events from prior episodes or seasons. Many races of sentient creatures are seen frequenting the station, with most episodes drawing from a core of around a dozen species. Major plotlines included "Babylon 5"'s embroilment in a millennia-long cyclical conflict between ancient, powerful races, inter-race wars and their aftermaths, and intrigue or upheaval within particular races, including the human characters who fight to resist Earth's descent into totalitarianism. Many episodes focus on the effect of wider events on individual characters, with episodes containing themes such as personal change, loss, subjugation, corruption, defiance, and redemption. "Babylon 5", set primarily between the years 2257 and 2262, depicts a future where Earth has a unifying Earth government and has gained the technology for faster-than-light travel. Colonies within the solar system, and beyond, make up the Earth Alliance, which has established contact with other spacefaring species. Ten years before the series is set, Earth was nearly defeated in a war with the spiritual Minbari, only to escape destruction when the Minbari unexpectedly surrendered at the brink of victory; since then, Earth has established peaceful relationships with them. Among the other species are the imperialist Centauri; the Narn, who only recently gained independence from the Centauri empire; and the mysterious, powerful Vorlons. Several dozen less powerful species from the League of Non-Aligned Worlds also have diplomatic contact with the major races, including the Drazi, Brakiri, Vree, Markab, and pak'ma'ra. An ancient and secretive race, the Shadows, unknown to humans but documented in many other races' religious texts, malevolently influence events to bring chaos and war among the known species.
The "Babylon 5" space station is located in the Epsilon Eridani system, at the fifth Lagrangian point between the fictional planet Epsilon III and its moon. It is an O'Neill cylinder long and in diameter. The station is the last of its line; the first three stations were all destroyed during construction, while "Babylon 4" was completed but mysteriously vanished shortly after being made operational. It contains living areas which accommodate various alien species, providing differing atmospheres and gravities. Human visitors to the alien sectors are shown using breathing equipment and other measures to tolerate the conditions. "Babylon 5" featured an ensemble cast which changed over the course of the show's run: In addition, several other actors have filled more than one minor role on the series. Kim Strauss played the Drazi Ambassador in four episodes, as well as nine other characters in ten more episodes. Some actors had difficulty dealing with the application of prosthetics required to play some of the alien characters. The producers therefore used the same group of people (as many as 12) in various mid-level speaking roles, taking full head and body casts from each. The group came to be unofficially known by the production as the "Babylon 5 Alien Rep Group." The five seasons of the series each correspond to one fictional sequential year in the period 2258–2262. Each season shares its title with an episode that is central to that season's plot. Sinclair, the commander of "Babylon 5", is a hero of the Minbari war, troubled by his inability to remember events of the war’s last day. Though supported by Minbari ambassador Delenn, who is secretly a member of the Minbari ruling Grey Council, other Minbari remain distrustful of him. The Narn have recently succeeded in regaining their independence from Centauri, and Centauri ambassador Mollari finds a new ally in the enigmatic Mr. Morden to strike back at the Narn. Xenophobic groups on Earth challenge humanity's contact with aliens, and President Santiago, who has favored such contact, is assassinated. Sinclair is transferred to be ambassador to Minbar, and Sheridan is assigned command of the station. He and Delenn believe Santiago's death is part of a conspiracy as now-president Clark takes steps to isolate Earth and install a totalitarian government. The aging Centauri Emperor dies, and against his wishes, Mollari and his ally Lord Refa replace him with his unstable nephew Cartagia. Aided by Morden's allies, the Shadows, the Centauri initiate a full-scale attack on the Narn homeworld, re-conquering it. Vorlon ambassador Kosh requests Sheridan's help to fight the Shadows. Sheridan and Delenn establish a "conspiracy of light" to fight the influence of the Shadows. When Clark declares martial law, Sheridan declares "Babylon 5"'s independence from Earth government. Mollari realizes his deal with Morden has become dangerous but is unable to end it. Sinclair travels back in time to become Valen, the legendary Minbari religious leader. In retaliation for involving the Vorlons in the war, Kosh is killed by the Shadows. Despite Kosh's warnings, Sheridan goes to Z'ha'dum, the Shadow homeworld, and crashes a spacecraft filled with explosives on it, seemingly dying in the explosion. Sheridan is rescued from Z'ha'dum by the mysterious Lorien. With the Shadows in retreat, the Vorlons further their efforts to destroy any species touched by the Shadows. Mollari overthrows Cartagia with the aid of Narn ambassador G’Kar. 
Sheridan discovers the Vorlons and Shadows have used the younger races in a proxy war, and convinces both sides to stop the war and leave the younger races in peace. Sheridan leads the Minbari, Centauri, and Narn in forming a new Interstellar Alliance. With their help, Sheridan is able to free Earth from the control of President Clark and install a new, more peaceful regime. A group of rogue human telepaths take sanctuary on the station, seeking Sheridan's aid to escape the control of Psi Corps, the autocratic Earth agency that oversees telepaths. Remnants of the Shadows' allies attempt to break up the Alliance by setting the various races against each other, and manipulate events on the Centauri homeworld to install Mollari as Emperor, but under their control. Mollari withdraws the Centauri from the Alliance. Twenty years later, Sheridan has a last reunion with his friends before leaving to join Lorien and the older races "beyond the rim". No longer needed, "Babylon 5" is decommissioned and scuttled.
Following production on "Captain Power and the Soldiers of the Future", Straczynski approached John Copeland and Doug Netter, who had also been involved with "Captain Power", and showed them the bible and pilot script for his show; both were impressed with his ideas. They were able to secure an order for the pilot from Warner Bros., who were at the time looking for programming for a planned broadcast network. Warner Bros. remained skeptical about the show even after greenlighting the pilot. According to Straczynski, Warner Bros. had three main concerns: that American attention spans were too short for a series-long narrative to work, that it would be difficult to sell the show into syndication because syndicated stations would air the episodes out of order, and that no science-fiction television show outside of "Star Trek" had gone more than three seasons before being cancelled. Straczynski was able to show that the syndication fear was unfounded, since syndicated stations told him they aired shows in episode order to track broadcasts for royalties; he could not allay the attention-span or premature-cancellation concerns, but set out to prove Warner Bros. wrong. Straczynski wrote 92 of the 110 episodes of "Babylon 5", including all 44 episodes in the third and fourth seasons, a feat never before accomplished in American television. Other writers to have contributed scripts to the show include Peter David, Neil Gaiman, Kathryn M. Drennan, Lawrence G. DiTillio, D. C. Fontana, and David Gerrold. Harlan Ellison, a creative consultant on the show, received story credits for two episodes. Each writer was informed of the overarching storyline, enabling the show to be produced consistently under budget. The rules of production were strict; scripts were written six episodes in advance, and changes could not be made once production had started. Because not all cast members were hired for every episode of a season, the five-year plot caused some planning difficulties. If a critical scene involving an actor not hired for every episode had to be moved, that actor had to be paid for work on an extra episode. It was sometimes necessary to adjust the plotline to accommodate external influences, an example being the "trap door" that was written for every character: in the event of that actor's unexpected departure from the series, the character could be written out with minimal impact on the storyline. Straczynski stated, "As a writer, doing a long-term story, it'd be dangerous and short-sighted for me to construct the story without trap doors for every single character. ... That was one of the big risks going into a long-term storyline which I considered long in advance;..." This device was eventually used to facilitate the departures of Claudia Christian and Andrea Thompson from the series. Straczynski purposely went light on elements of the five-year narrative during the first season, as he felt the audience would not be ready for the full narrative at that time, but he still managed to drop in some scenes that would be critical to the future narrative. This also made it challenging for the actors to understand their motivations without knowing where their characters were going; Straczynski said, "I didn't want to tell them too much, because that risks having them play the result, rather than the process."
He recalled that Peter Jurasik had asked him about the context of Londo's premonition, shown partially in "Midnight on the Firing Line", of himself and G'Kar choking each other to death, but Straczynski had to be coy about it. The full death scene was shown in context in "War Without End - Part 2" near the end of the third season. During production of the fourth season, the Prime Time Entertainment Network, which Warner Bros. had opted to use for "Babylon 5", was shut down, leaving the planned fifth season in doubt. Unwilling to short-change fans of the show, Straczynski began preparing modifications to the fourth season that would allow him to conclude his overall arc should a fifth season not be greenlit, which ultimately became the direction the fourth season took. Straczynski identified three primary narrative threads which would require resolution: the Shadow war, Earth's slide into a dictatorship, and a series of sub-threads which branched off from those. He estimated these would still take around 27 episodes to resolve without the season feeling rushed; the solution came when the TNT network commissioned two "Babylon 5" television films. Several hours of material were thus moved into the films, including a three-episode arc which would deal with the background to the Earth–Minbari War, and a sub-thread which would have set up the sequel series, "Crusade". Further standalone episodes and plot-threads were dropped from season four, which could be inserted into "Crusade" or the fifth season, were it to be given the greenlight. The intended series finale, "Sleeping in Light", was filmed during season four as a precaution against cancellation. When word came that TNT had picked up "Babylon 5", this was moved to the end of season five and replaced with a newly filmed season four finale, "The Deconstruction of Falling Stars". Ann Bruice Aling was costume designer for the show, after production designer John Iacovelli suggested her for the position, having previously worked with her on a number of film and theatrical productions. With the variety of costumes required, she compared "Babylon 5" to "eclectic theatre", with fewer rules about period, line, shape and textures having to be adhered to. Preferring natural materials whenever possible, such as ostrich leather in the Narn body armor, Bruice combined and layered fabrics as diverse as rayon and silk with brocades from the 1930s and '40s to give the clothing the appearance of having evolved within different cultures. With an interest in costume history, she initially worked closely with Straczynski to get a sense of the historical perspective of the major alien races, "so I knew if they were a peaceful people or a warring people, cold climate etc. and then I would interpret what kind of sensibility that called for." Collaborating with other departments to establish co-ordinated visual themes for each race, a broad palette of colors was developed with Iacovelli, which he referred to as "spicy brights". These warm shades of grey and secondary colors, such as certain blues for the Minbari, would often be included when designing both the costumes and relevant sets. As the main characters evolved, Bruice referred back to Straczynski and producer John Copeland, whom she viewed as "surprisingly more accessible to me as advisors than other producers and directors", so the costumes could reflect these changes.
Ambassador Londo Mollari's purple coat became dark blue and more tailored, while his waistcoats became less patterned and brightly colored, as Bruice felt "Londo has evolved in my mind from a buffoonish character to one who has become more serious and darker." Normally there were three changes of costume for the primary actors: one for the set, one for the stunt double, and one on standby in case of "coffee spills". For human civilians, garments were generally purchased off-the-rack and altered in various ways, such as removing lapels from jackets and shirts while rearranging closures, to suggest future fashions. For some of the main female characters a more couture approach was taken, as in the suits worn by Talia Winters, which Bruice described as being designed and fitted to within "an inch of their life". Costumes for the destitute residents of downbelow would be distressed through a combination of bleaching, sanding, dipping in dye baths and having stage blood added. Like many of the crew on the show, members of the costume department made onscreen cameos. During the season 4 episode "Atonement", the tailors and costume supervisor appeared as the Minbari women fitting Zack Allan for his new uniform as the recently promoted head of security. His complaints, and costume supervisor Kim Holly's subsequent jab at him with a needle, were a light-hearted reference to the previous security uniforms, a design carried over from the pilot movie that was difficult to work with and wear due to the combination of leather and wool. While the original pilot film featured some aliens which were puppets and animatronics, the decision was made early in the show's production to portray most alien species as humanoid in appearance. Barring isolated appearances, fully computer-generated aliens were discounted as an idea due to the "massive rendering power" required. Long-term use of puppets and animatronics was also discounted, as Straczynski believed they would not be able to convey "real emotion" without an actor inside. In anticipation of the emerging HDTV standard, the series was shot in the 16:9 widescreen format rather than the then-usual 4:3, with the image cropped to 4:3 for initial SDTV television transmissions. It was one of the first television shows to use computer technology in creating visual effects, rather than models and miniatures, primarily out of budgetary concerns; Straczynski estimated that each episode cost substantially less to make than an episode of a contemporary "Star Trek" series. The visual effects were achieved using Amiga-based Video Toasters at first, and later Pentium, Macintosh, and Alpha-based systems. The effects sequences were designed to simulate Newtonian physics, with particular emphasis on the effects of inertia on the motion of spacecraft (a simple illustration follows below). Foundation Imaging provided the special effects for the pilot film (for which it won an Emmy) and the first three seasons of the show, led by Ron Thornton. After co-executive producer Douglas Netter and producer John Copeland approached Straczynski with the idea of producing the effects in-house, Straczynski agreed to replace Foundation for seasons 4 and 5 once a new team had been established by Netter Digital and an equal level of quality was assured, using similar technology and a number of former Foundation employees. The Emmy-winning alien make-up was provided by Optic Nerve Studios.
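To make the Newtonian emphasis concrete, the sketch below is a hypothetical illustration (not code from the show's actual effects pipeline): a ship's velocity changes only while thrust is applied, so after a burn it coasts along its new heading rather than banking and straightening out like an aircraft.

```python
# Minimal sketch of Newtonian spacecraft motion (illustrative only; not the
# Foundation Imaging or Netter Digital pipeline). Velocity changes only while
# thrust is applied; with no thrust, the ship coasts indefinitely (inertia).

def step(position, velocity, thrust_accel, dt):
    """Advance one frame using simple Euler integration."""
    velocity = tuple(v + a * dt for v, a in zip(velocity, thrust_accel))
    position = tuple(p + v * dt for p, v in zip(position, velocity))
    return position, velocity

pos, vel = (0.0, 0.0), (10.0, 0.0)      # coasting along +x
for frame in range(24):                 # one second of animation at 24 fps
    # fire a lateral thruster for the first six frames, then cut the engines
    burn = (0.0, 5.0) if frame < 6 else (0.0, 0.0)
    pos, vel = step(pos, vel, burn, dt=1 / 24)

# Once the burn ends, the ship keeps its combined velocity forever, drifting
# along the resulting diagonal heading instead of "straightening out".
print(pos, vel)
```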
Christopher Franke composed and scored the musical soundtrack for all five seasons of the show after Stewart Copeland, who worked on the original telefilm, was unable to return for the first season due to recording and touring commitments. Initially concerned that composing for an episodic television show could become "annoying because of the repetition", Franke found the evolving characters and story of "Babylon 5" afforded him the opportunity to continually take new directions. Given creative freedom by the producers, Franke also orchestrated and mixed all the music, which one reviewer described as having "added another dimension of mystery, suspense, and excitement to the show, with an easily distinguishable character that separates "Babylon 5" from other sci-fi television entries of the era." With his recording studio in the same Hollywood Hills building as his home, Franke would attend creative meetings before scoring the average of 25 minutes of music required for each episode. Drawing on the "acoustic dirt produced by live instruments and the ability to play so well between two semitones" and the "frequency range, dynamics and control" provided by synthesizers, he described his approach "as experimental friendly as possible without leaving the happy marriage between the orchestral and electronic sounds". Using Cubase software through an electronic keyboard, or for more complex pieces a light pen and graphics tablet, he would begin by developing the melodic content, around which the ambient components and transitions were added. Using playbacks with digital samples of the appropriate instruments, such as a group of violins, he would decide which tracks to produce electronically or record acoustically. Scores for the acoustic tracks were emailed to his Berlin scoring stage and would require anywhere from four musicians to the full orchestra, with a maximum of 24 present at any one time. One of three conductors would also be required for any score that involved more than six musicians. Franke would direct recording sessions via six fibre optic digital telephone lines to transmit and receive video, music and the SMPTE timecode. The final edit and mixing of the tracks would take place in his Los Angeles studio. A total of 24 episode and three television film soundtracks were released under Franke's record label, Sonic Images Records, between 1995 and 2001. These contain the musical scores in the same chronological order as they played in the corresponding episodes or television films. Three compilation albums were also produced, containing extensively re-orchestrated and remixed musical passages taken from throughout the series to create more elaborate suites. In 2007 his soundtrack for "Babylon 5: The Lost Tales" was released under the Varèse Sarabande record label. Warner Bros. slotted the show to premiere on its nascent Prime Time Entertainment Network (PTEN). As original content from another studio, it was somewhat anomalous in a stable of syndicated content from Warner Bros., and the cause of some friction between Straczynski's company and Warner Bros. The pilot film, "The Gathering", premiered on February 22, 1993, with strong viewing figures, achieving a 9.7 in the Nielsen national syndication rankings. The regular series initially aired from January 26, 1994 through November 25, 1998, first on PTEN, then in first-run syndication, debuting with a 6.8 rating/10 share.
Figures dipped in its second week, and while it posted a solid 5.0 rating/8 share, with an increase in several major markets, ratings for the first season continued to fall, to a low of 3.4 during reruns. Ratings remained low-to-middling throughout the first four seasons, but "Babylon 5" scored well with the demographics required to attract the leading national sponsors and saved up to $300,000 per episode by shooting off the studio lot, thereby remaining profitable. The fifth season, which aired on the cable network TNT, had ratings about one point lower than seasons two through four. In the United Kingdom, the show aired every week on Channel 4 without a break, with the result that the last four or five episodes of the early seasons screened in the UK before the US. "Babylon 5" was one of the better-rated US television shows on Channel 4, and achieved high audience Appreciation Indexes, with the season 4 episode "Endgame" achieving the rare feat of beating the prime-time soap operas for first position. Straczynski stated that PTEN required only that the show be profitable for the network to remain in production, and said that while this was the case for its first four seasons, on paper it was always losing money; Straczynski estimated in 2019 that the show ultimately ended in the red, and stated that he had never made any profits on "Babylon 5". The entire series cost an estimated $90 million for 110 episodes. "Babylon 5" successfully completed its five-year story arc on November 25, 1998, after five seasons and 109 aired episodes, when TNT aired the 110th (epilogue) episode, "Sleeping in Light," which had been filmed as the season 4 finale when "Babylon 5" was under threat of ending production at that point. After a fifth season was assured, a new season 4 finale was used so that "Sleeping in Light" could remain the series finale. As of 2018, "Babylon 5" reruns have become part of the programming of the Comet Channel. Throughout its run, "Babylon 5" found ways to portray themes relevant to modern and historical social issues. It marked several firsts in television science fiction, such as the exploration of the political and social landscapes of the first human colonies, their interactions with Earth, and the underlying tensions. "Babylon 5" was also one of the first television science fiction shows to refer explicitly to a same-sex relationship. In the show, sexual orientation is as much of an issue as "being left-handed or right-handed". Unrequited love is explored as a source of pain for the characters, though not all the relationships end unhappily. The clash between order and chaos, and the people caught in between, plays an important role in "Babylon 5". The conflict between two unimaginably powerful older races, the Vorlons and the Shadows, is represented as a battle between competing ideologies, each seeking to turn the humans and the other younger races to their beliefs. The Vorlons represent an authoritarian philosophy of unquestioning obedience. Vorlon characters frequently ask, "who are you?", focusing on identity as a catalyst for shaping personal goals; the intention is not to solicit a correct answer, but to "tear down the artifices we construct around ourselves until we're left facing ourselves, not our roles." The Shadows represent another authoritarian philosophy, cloaked in a disguise of evolution through fire, of fomenting conflict in order to promote evolutionary progress. Characters affiliated with the Shadows repeatedly ask, "what do you want?",
emphasizing personal desire and ambition, using it to shape identity, and encouraging conflict between groups who choose to serve their own glory or profit. The representation of order and chaos was informed by the Babylonian myth that the universe was born in the conflict between the two. The climax of this conflict comes when the younger races expose the Vorlons' and the Shadows' "true faces" and reject both philosophies, heralding the dawn of a new age without their interference. The notion that the war was about "killing your parents" is echoed in the portrayal of the civil war between the human colonies and Earth. Deliberately trading in historical and political metaphor, with particular emphasis upon McCarthyism and the HUAC, the show depicts the Earth Alliance becoming increasingly authoritarian, eventually sliding into a dictatorship. The show examines the impositions on civil liberties, under the pretext of greater defense against outside threats, which aid its rise, and the self-delusion of a populace which believes its moral superiority will never allow a dictatorship to come to power, until it is too late. The successful rebellion led by the Babylon 5 station results in the restoration of a democratic government and true autonomy for Mars and the colonies. The "Babylon 5" universe deals with numerous armed conflicts on an interstellar scale. The story begins in the aftermath of a war which brought the human race to the brink of extinction, caused by a misunderstanding during a first contact. Babylon 5 is built to foster peace through diplomacy, described as the "last, best hope for peace" in the opening credits monologue during its first three seasons. Wars between separate alien civilizations are featured. The conflict between the Narn and the Centauri is followed from its beginnings as a minor territorial dispute amplified by historical animosity, through to its end, in which weapons of mass destruction are employed to subjugate and enslave a planet. The war is an attempt to portray a more sobering kind of conflict than usually seen on science fiction television. Informed by the events of the first Gulf War, the Cuban Missile Crisis and the Soviet invasion of Czechoslovakia, the intent was to recreate those moments when "the world held its breath"; the emotional core of the conflict was the disbelief that the situation could have occurred at all, and the desperation to find a way to bring it to an end. By the start of the third season, the opening monologue had changed to say that the hope for peace had "failed" and the Babylon 5 station had become the "last, best hope for victory", indicating that while peace is ostensibly a laudable goal, it can also mean capitulation to an enemy intent on committing horrendous acts, and that "peace is a byproduct of victory against those who do not want peace." The Shadow War also features prominently in the show, wherein the Shadows work to instigate conflict between other races to promote technological and cultural advancement, opposed by the Vorlons, who attempt to impose their own authoritarian philosophy of obedience. The gradual discovery of the scheme and the rebellion against it underpin the first three seasons, and also serve as a wider metaphor for competing forces of order and chaos. In that respect, Straczynski stated he presented Earth's descent into a dictatorship as its own "shadow war".
In ending the Shadow War before the conclusion of the series, the show was able to more fully explore its aftermath, and it is this "war at home" which forms the bulk of the remaining two seasons. The struggle for independence between Mars and Earth culminates in a civil war between the human colonies (led by the Babylon 5 station) and the home planet. Choosing Mars as both the spark for the civil war and the staging ground for its dramatic conclusion enabled the viewer to understand the conflict more fully than if it had involved an anonymous colony orbiting a distant star. The conflict, and the reasons behind it, were informed by Nazism, McCarthyism and the breakup of Yugoslavia, and the destruction of that state also served as partial inspiration for the Minbari civil war. The post-war landscape has its roots in Reconstruction, the attempt to resolve the issues of the American Civil War after the conflict had ended; this struggle for survival in a changed world was also informed by works such as "Alas, Babylon", a novel dealing with the after-effects of a nuclear war on a small American town. The show expresses that the end of these wars is not an end to war itself. Events shown hundreds of years into the show's future tell of wars which will once again bring the human race to the edge of annihilation, demonstrating that humanity will not change, and that the best that can be hoped for after each fall is that humanity climbs a little higher each time, until it can one day "take [its] place among the stars, teaching those who follow."

Many of Earth's contemporary religions are shown to still exist, with the main human characters often having religious convictions. Among those specifically identified are the Roman Catholic branch of Christianity (including the Jesuits), Judaism, and the fictional Foundationism (which developed after first contact with alien races). Alien beliefs in the show range from the Centauri's Bacchanalian-influenced religions, of which there are up to seventy different denominations, to the more pantheistic, as with the Narn and Minbari religions. In the show's third season, a community of Cistercian monks takes up residence on the Babylon 5 station, in order to learn what other races call God, and to come to a better understanding of the different religions through study at close quarters. References to both human and alien religion are often subtle and brief, but can also form the main theme of an episode. The first season episode "The Parliament of Dreams" is a conventional "showcase" for religion, in which each species on the Babylon 5 station has an opportunity to demonstrate its beliefs (humanity's are presented as being numerous and varied), while "Passing Through Gethsemane" focuses on a specific aspect of Roman Catholic belief, as well as concepts of justice, vengeance, and biblical forgiveness. Other treatments have been more contentious, such as the David Gerrold-scripted "Believers", in which alien parents would rather see their son die than undergo a life-saving operation, because their religious beliefs forbid it. When religion is an integral part of an episode, various characters express differing viewpoints. The episode "Soul Hunter" touches on the concept of an immortal soul, and on whether after death it is destroyed, reincarnated, or simply does not exist. The character arguing the latter, Doctor Stephen Franklin, often appears in the more spiritual storylines, as his scientific rationality is used to create dramatic conflict.
Some have seen undercurrents of religions such as Buddhism in various episode scripts, and while identifying himself as an atheist, Straczynski believes that passages of dialog can take on distinct meanings to viewers of differing faiths, and that the show ultimately expresses ideas which cross religious boundaries.

Substance abuse and its impact on human personalities also features in the "Babylon 5" storyline. Garibaldi is a relapsing-remitting alcoholic who practices complete abstinence throughout most of the series, until relapsing in the middle of season five and only recovering at the end of the season. Zack Allan, his eventual replacement as chief of security, was given a second chance by Garibaldi after overcoming his own addiction to an unspecified drug. Dr. Stephen Franklin develops an addiction to injectable stimulant drugs while trying to cope with the chronic stress and work overload in Medlab, and takes a leave of absence from his position to recover. Executive Officer Susan Ivanova mentions that her father became an alcoholic after her mother's suicide. Captain Elizabeth Lochley tells Garibaldi that her father was an alcoholic, and that she is a recovering alcoholic herself.

"Babylon 5" draws upon a number of cultural, historical, political and religious influences to inform and illustrate its characters and storylines. Straczynski has stated that there was no intent to wholly represent any particular period of history or preceding work of fiction, but has acknowledged their influence on the series, inasmuch as it uses similar well-established storytelling structures, such as the Hero's Journey. While the series is replete with elements which some argue are intended to invoke other works of fiction, myth or legend, there are a number of specific literary references. Several episodes take their titles from Shakespearean monologues, and at least one character quotes Shakespeare directly. The Psi-Cop Alfred Bester was named after the science fiction author of the same name, as his work influenced the autocratic Psi Corps organisation the character represents. There are a number of references to the legend of King Arthur, with ships named "Excalibur" appearing in the main series and the "Crusade" spin-off, and a character in "A Late Delivery from Avalon" claiming to possess the sword itself. Straczynski links the incident which sparked the Earth-Minbari war, in which actions are misinterpreted during a tense situation, to a sequence in "Le Morte d'Arthur" in which a standoff between two armies turns violent when innocent actions are misinterpreted as hostile.

The series also references contemporary and ancient history. The Centauri are in part modelled on the Roman empire. Emperor Cartagia believes himself to be a god, a deliberate reference to Caligula. His eventual assassination leads to the ascension of Londo and eventually Vir, both unlikely candidates for the throne, similar to Claudius' improbable ascension after Caligula was assassinated. The series also references the novel "I, Claudius" by Robert Graves when Cartagia jokes that he has cured a man of his cough after having him beheaded, something also done by Caligula. In more recent historical references, in the episode "In the Shadow of Z'ha'dum", Sheridan ponders Winston Churchill's Coventry dilemma: whether or not to act on covertly gathered intelligence during a war. Lives would be saved, but at the risk of revealing to the enemy that their intentions are known, which may be far more damaging in the long term.
The swearing in of Vice President Morgan Clark invokes the assassination of President John F. Kennedy, being deliberately staged to mirror the scene aboard Air Force One when Lyndon Johnson was sworn in as President. Although Straczynski is a professed atheist, "Babylon 5" refers to the Christian faith in a number of places. Several episodes have titles which refer to the Christian faith, such as "Passing Through Gethsemane", "A Voice in the Wilderness", and "And the Rock Cried Out, No Hiding Place", the latter being a line from the gospel song "There's No Hiding Place Down Here". The monks led by Brother Theo who, in the episode "Convictions", take up residence on Babylon 5 belong to the Dominican Order, a Roman Catholic mendicant order.

The show employed Internet marketing to create a buzz among online readers far in advance of the airing of the pilot episode, with Straczynski participating in online communities on USENET (in the rec.arts.sf.tv.babylon5.moderated newsgroup), and the GEnie and CompuServe systems, before the Web came together as it exists today. The station's location, in "grid epsilon" at coordinates 470/18/22, was a reference to GEnie ("grid epsilon" = "GE") and the original forum's address on the system's bulletin boards (page 470, category 18, topic 22). Also during this time, Warner Bros. executive Jim Moloshok created and distributed electronic trading cards to help advertise the series. In 1995, Warner Bros. started the Official "Babylon 5" Website on the now-defunct Pathfinder portal. In September 1995, they hired series fan Troy Rutter to take over the site and move it to its own domain name, and to oversee the "Keyword B5" area on America Online.

In 2004 and 2007, TV Guide ranked "Babylon 5" #13 and #16 on its list of the top cult shows ever.

Straczynski indicated that Paramount Television was aware of his concept as early as 1989, when he attempted to sell the show to the studio, and provided them with the series bible, pilot script, artwork, lengthy character background histories, and plot synopses for 22 or so planned episodes "taken from the overall course of the planned series". Paramount declined to produce "Babylon 5", but announced that "Star Trek: Deep Space Nine" was in development two months after Warner Bros. announced its plans for "Babylon 5". Unlike previous "Star Trek" shows, "Deep Space Nine" was based on a space station and had themes similar to those of "Babylon 5", which drew some to compare the two series. Straczynski stated that, even though he was confident that "Deep Space Nine" producer/creators Rick Berman and Michael Piller had not seen this material, he suspected that Paramount executives used his bible and scripts to steer the development of "Deep Space Nine". Straczynski and Warner did not file suit against Paramount, largely because he believed it would negatively affect both TV series. He argued the same when confronted by claims that the lack of legal action was proof that his allegation was unfounded.

Generally viewed as having "launched the new era of television CGI visual effects", "Babylon 5" received multiple awards during its initial run, including two consecutive Hugo Awards for best dramatic presentation, and continues to feature prominently in various polls and listings highlighting top-rated science fiction series.
"Babylon 5" has been praised for its depth and complexity against a backdrop of contemporary shows which largely lacked long-term consequences, with plots typically being resolved in the course of a single episode, occasionally two. Straczynski was deeply involved in scriptwriting, writing 92 of 110 teleplays, a greater proportion than some of his contemporaries. Reviewers rated the quality of writing on a widely varying scale, identifying both eloquent soliloquies and dialogue that felt "stilted and theatrical." Straczynski has claimed that the multi-year story arc, now a feature of most mainstream televised drama, is the lasting legacy of the series. He stated that both Ronald D. Moore and Damon Lindelof used the 5-year narrative structure of "Babylon 5" as blueprints for their respective shows, "Battlestar Galactica" and "Lost". He also claims "Babylon 5" was the first series to be shot in the , and to use 5.1 channel sound mixes. It was an early example of widespread use of CGI rather than models for space scenes, which allowed for more freedom and larger scale in creating said scenes. While praised at the time, due to budgetary and mastering issues these sequences are considered to have aged poorly. A recurring theme among reviewers is that the series was more than the sum of its parts: while variously criticising the writing, directing, acting and effects, particularly by comparison to current television productions, reviewers praised the consistency of plotting over the series' run, transcending the quality of its individual elements. Many retrospectives, while criticising virtually every individual aspect of the production, have praised the series as a whole for its narrative cohesion and contribution to serialized television. DC began publishing "Babylon 5" comics in 1994, with stories (initially written by Straczynski) that closely tied in with events depicted in the show, with events in the comics eventually being referenced onscreen in the actual television series. The franchise continued to expand into short stories, RPGs, and novels, with the Technomage trilogy of books being the last to be published in 2001, shortly after the spin-off television series, "Crusade", was cancelled. Excepting movie rights, which are retained by Straczynski, all production rights for the franchise are owned by Warner Bros. In November 1994, DC began publishing monthly "Babylon 5" comics. A number of short stories and novels were also produced between 1995 and 2001. Additional books were published by the gaming companies Chameleon Eclectic and Mongoose Publishing, to support their desk-top strategy and role-playing games. Three tv films were released by Turner Network Television (TNT) in 1998, after funding a fifth season of "Babylon 5", following the demise of the Prime Time Entertainment Network the previous year. In addition to "", "", and "", they released a re-edited special edition of the original 1993 tv film, "". In 1999, a fifth tv film was also produced, "", which acted as a pilot movie for the spin-off series "Crusade", which TNT cancelled after 13 episodes had been filmed. Dell Publishing started publication of a series of "Babylon 5" novels in 1995, which were ostensibly considered canon within the TV series' continuity, nominally supervised by Straczynski, with later novels in the line being more directly based upon Straczynski's own notes and story outlines. 
In 1997, Del Rey obtained the publication license from Warner Bros., and proceeded to release a number of original trilogies based directly on outlines by Straczynski, as well as novelizations of three of the TNT telefilms ("In the Beginning", "Thirdspace", and "A Call to Arms"). All of the Del Rey novels are considered completely canonical within the filmic "Babylon 5" universe.

In 2000, the Sci-Fi Channel purchased the rights to rerun the "Babylon 5" series, and in 2002 premiered a new telefilm, "The Legend of the Rangers", which failed to be picked up as a series. In 2007, the first in a planned anthology of straight-to-DVD short stories, "The Lost Tales", was released by Warner Home Video, but no others were produced, due to funding issues. Straczynski announced a "Babylon 5" film at the 2014 San Diego Comic-Con, but stated in 2016 that it had been delayed while he completed other productions. In 2018, Straczynski stated that although he possesses the movie rights, he believed that neither a film nor a television series revival would happen while Warner Bros. retained the intellectual property for the TV series, believing that Warner Bros. would insist on handling production, and that other studios would be hesitant to produce a film without also having the rights to the TV series.

In July 1995, Warner Home Video began distributing "Babylon 5" VHS video tapes under its Beyond Vision label in the UK. Beginning with the original telefilm, "The Gathering", these were PAL tapes, showing video in the same 4:3 aspect ratio as the initial television broadcasts. By the release of Season 2, tapes included closed captioning of dialogue and Dolby Surround sound. Columbia House began distributing NTSC tapes via mail order in 1997, followed by repackaged collector's editions and three-tape boxed sets in 1999, by which time the original pilot telefilm had been replaced by the re-edited TNT special edition. Additional movie and complete-season boxed sets were also released by Warner Bros. until 2000.

Image Entertainment released "Babylon 5" laserdiscs between December 1998 and September 1999. Produced on double-sided 12-inch Pioneer discs, each contained two episodes displayed in the 4:3 broadcast aspect ratio, with Dolby Surround audio and closed captioning for the dialogue. Starting with two TNT telefilms, "In the Beginning" and the re-edited special edition of "The Gathering", Seasons 1 and 5 were released simultaneously over a six-month period. Seasons 2 and 4 followed, but with the decision to halt production due to a drop in sales, precipitated by rumors of a pending DVD release, only the first twelve episodes of Season 2 and the first six episodes of Season 4 were ultimately released.

In November 2001, Warner Home Video began distributing "Babylon 5" DVDs with a two-movie set containing the re-edited TNT special edition of "The Gathering" and "In the Beginning". The telefilms were later individually released in region 2 in April 2002, though some markets received the original version of "The Gathering" in identical packaging. DVD boxed sets of the individual seasons, each containing six discs, began being released in October 2002. Each included a printed booklet containing episode summaries, with each disc containing audio options for German, French, and English, plus subtitles in a wider range of languages, including Arabic and Dutch. Video was digitally remastered from the original broadcast masters and displayed in anamorphic widescreen, with remastered and remixed Dolby Digital 5.1 sound.
Disc 1 of each set contained an introduction to the season by Straczynski, while disc 6 included featurettes containing interviews with various production staff, as well as information on the fictional universe, and a gag reel. Three episodes in each season also contained commentary from Straczynski, members of the main cast, or the episode director. Since its initial release, a number of repackaged DVD boxed sets have been produced for various regional markets. With slightly altered cover art, they included no additional content, but the discs were more securely stored in slimline cases, rather than the early "book" format with hard plastic pages used during the original release of the first three seasons.

While the series was in pre-production, studios were looking at ways for their existing shows to make the transition from the then-standard 4:3 format to the widescreen formats that would accompany the next generation of televisions. After visiting Warner Bros., who were experimenting with stretching the horizontal interval for an episode of another series, producer John Copeland convinced them to allow "Babylon 5" to be shot on Super 35mm film stock. "The idea being that we would telecine to 4:3 for the original broadcast of the series. But what it also gave us was a negative that had been shot for the new 16×9 widescreen-format televisions that we knew were on the horizon." Though the CG scenes, and those containing live action combined with digital elements, could have been created in a suitable widescreen format, a cost-saving decision was taken to produce them in the 4:3 aspect ratio. When those images were prepared for widescreen release, the top and bottom of the images were simply cropped, and the remaining image 'blown up' to match the dimensions of the live-action footage, noticeably reducing the image quality. The scenes containing live action ready to be composited with matte paintings, CG animation, etc., were delivered on tape already telecined to the 4:3 aspect ratio, and contained a high level of grain, which resulted in further image noise being present when enlarged and stretched for widescreen. For the purely live-action scenes, rather than using the film negatives, according to Copeland, "Warners had even forgotten that they had those. They used PAL versions and converted them to NTSC for the US market. They actually didn't go back and retransfer the shows." The resulting aliasing, combined with the progressive-scan transfer of the video to DVD, created a number of visual flaws throughout the widescreen release. In particular, quality has been noted to drop significantly in composite shots.
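The arithmetic behind that quality loss is simple. The following sketch works it through for a hypothetical 640×480 render (the resolution is illustrative only, not a figure from the production): cropping a 4:3 frame to 16:9 keeps the full width but only three quarters of the scan lines, which must then be enlarged by a third to fill the target frame.

#include <cstdio>

// Illustrative arithmetic for cropping a 4:3 CG frame to 16:9 and
// enlarging the remainder ("blowing it up"), as described above.
int main() {
    const int width  = 640;   // hypothetical 4:3 render
    const int height = 480;

    const int croppedHeight = width * 9 / 16;                        // 360 lines survive the crop
    const double discarded  = 1.0 - double(croppedHeight) / height;  // 25% of lines lost
    const double blowUp     = double(height) / croppedHeight;        // ~1.33x enlargement needed

    std::printf("Lines kept: %d of %d (%.0f%% discarded)\n",
                croppedHeight, height, discarded * 100.0);
    std::printf("Required enlargement: %.2fx\n", blowUp);
    return 0;
}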
https://en.wikipedia.org/wiki?curid=4800
BeOS BeOS is an operating system for personal computers, first developed by Be Inc. in 1991. It was initially written to run on BeBox hardware. BeOS was built for digital media work and was written to take advantage of modern hardware facilities such as symmetric multiprocessing by utilizing modular I/O bandwidth, pervasive multithreading, preemptive multitasking and a 64-bit journaling file system known as BFS. The BeOS GUI was developed on the principles of clarity and a clean, uncluttered design. The API was written in C++ for ease of programming. It has partial POSIX compatibility and access to a command-line interface through Bash, although internally it is not a Unix-derived operating system. BeOS used Unicode as the default encoding in the GUI, though support for input methods such as bidirectional text input was never realized.

BeOS was positioned as a multimedia platform that could be used by a substantial population of desktop users, and as a competitor to Classic Mac OS and Microsoft Windows. It was ultimately unable to achieve a significant market share, however, and proved commercially unviable for Be Inc. The company was acquired by Palm Inc., and today BeOS is mainly used and developed by a small population of enthusiasts. The open-source operating system Haiku, a complete reimplementation of BeOS, is designed to pick up where BeOS left off. Beta 1 of Haiku was released in September 2018, six years after Alpha 4; Beta 2 followed in June 2020.

Initially designed to run on AT&T Hobbit-based hardware, BeOS was later modified to run on PowerPC-based processors: first Be's own systems, later Apple Inc.'s PowerPC Reference Platform and Common Hardware Reference Platform, with the hope that Apple would purchase or license BeOS as a replacement for its aging Classic Mac OS. Apple CEO Gil Amelio started negotiations to buy Be Inc., but negotiations stalled when Be CEO Jean-Louis Gassée wanted $300 million; Apple was unwilling to offer more than $125 million. Apple's board of directors decided NeXTSTEP was a better choice and purchased NeXT in 1996 for $429 million, bringing back Apple co-founder Steve Jobs. In 1997, Power Computing began bundling BeOS (on a CD for optional installation) with its line of PowerPC-based Macintosh clones. These systems could dual boot either the Classic Mac OS or BeOS, with a start-up screen offering the choice.

Due to Apple's moves and the mounting debt of Be Inc., BeOS was soon ported to the Intel x86 platform with its R3 release in March 1998. Through the late 1990s, BeOS managed to create a niche of followers, but the company failed to remain viable. Be Inc. also released a stripped-down but free copy of BeOS R5, known as BeOS Personal Edition (BeOS PE). BeOS PE could be started from within Microsoft Windows or Linux, and was intended to nurture consumer interest in its product and give developers something to tinker with. Be Inc. also released a stripped-down version of BeOS for Internet appliances (BeIA), which soon became the company's business focus in place of BeOS. In 2001, Be's copyrights were sold to Palm, Inc. for some $11 million. BeOS R5 is considered the last official version, but BeOS R5.1 "Dano", which was under development before Be's sale to Palm and included the BeOS Networking Environment (BONE) networking stack, was leaked to the public shortly after the company's demise. In 2002, Be Inc.
sued Microsoft, claiming that Hitachi had been dissuaded from selling PCs loaded with BeOS, and that Compaq had been pressured not to market an Internet appliance in partnership with Be. Be also claimed that Microsoft acted to artificially depress Be Inc.'s initial public offering (IPO). The case was eventually settled out of court for $23.25 million, with no admission of liability on Microsoft's part. After the split from Palm, PalmSource used parts of BeOS's multimedia framework for its failed Palm OS Cobalt product. With the takeover of PalmSource, the BeOS rights now belong to Access Co.

In the years that followed the demise of Be Inc., a handful of projects formed to recreate BeOS or key elements of the OS, with the eventual goal of continuing where Be Inc. left off. This was facilitated by the fact that Be Inc. had released some components of BeOS under a free licence. One such project produced Zeta, a commercially available operating system based on the BeOS R5.1 codebase. Originally developed by yellowTAB, the operating system was then distributed by magnussoft. During development by yellowTAB, the company received criticism from the BeOS community for refusing to discuss its legal position with regard to the BeOS codebase (perhaps for contractual reasons). Access Co. (which bought PalmSource, until then the holder of the intellectual property associated with BeOS) has since declared that yellowTAB had no right to distribute a modified version of BeOS, and magnussoft has ceased distribution of the operating system.

BeOS (and now Zeta) continue to be used in media appliances, such as the Edirol DV-7 video editors from Roland Corporation, which run on top of a modified BeOS, and the Tunetracker Radio Automation software, which ran on BeOS and Zeta and was also sold as a "Station-in-a-Box" with the Zeta operating system included. In 2015, Tunetracker released a Haiku distribution on USB flash disk, bundled with its broadcasting software. The Tascam SX-1 digital audio recorder runs a heavily modified version of BeOS that will only launch the recording interface software. iZ Technology Corporation sells the RADAR 24, RADAR V and RADAR Studio, hard disk-based, 24-track professional audio recorders based on BeOS 5, although the newer RADAR 6 is not based on BeOS. Magicbox, a manufacturer of signage and broadcast display machines, uses BeOS to power their Aavelin product line. Final Scratch, a 12-inch vinyl timecode record-driven DJ software/hardware system, was first developed on BeOS. The "ProFS" version was sold to a few dozen DJs prior to the 1.0 release, which instead ran on a Linux virtual partition.
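A sense of the C++ API described above can be had from a minimal application skeleton. This is an illustrative sketch only; the "application/x-vnd.example-hello" signature and the window geometry are hypothetical values, not taken from official Be sample code.

#include <Application.h>
#include <Rect.h>
#include <Window.h>

class HelloWindow : public BWindow {
public:
    HelloWindow()
        : BWindow(BRect(100, 100, 400, 200), "Hello",
                  B_TITLED_WINDOW, 0) {}
    // Closing the only window asks the application to quit.
    virtual bool QuitRequested() {
        be_app->PostMessage(B_QUIT_REQUESTED);
        return true;
    }
};

class HelloApp : public BApplication {
public:
    HelloApp() : BApplication("application/x-vnd.example-hello") {}
    // Called once the message loop starts.
    virtual void ReadyToRun() { (new HelloWindow())->Show(); }
};

int main() {
    HelloApp app;
    app.Run();   // enter the application's message loop
    return 0;
}

Each BWindow runs its message loop in its own thread, a small example of the pervasive multithreading the system was built around.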
https://en.wikipedia.org/wiki?curid=4801
Biome A biome is a community of plants and animals that have common characteristics for the environment they exist in, and a single biome can be found across a range of continents. Biomes are distinct biological communities that have formed in response to a shared physical climate. "Biome" is a broader term than "habitat"; any biome can comprise a variety of habitats. While a biome can cover large areas, a microbiome is a mix of organisms that coexist in a defined space on a much smaller scale. For example, the human microbiome is the collection of bacteria, viruses, and other microorganisms that are present on or in a human body. A "biota" is the total collection of organisms of a geographic region or a time period, from local geographic scales and instantaneous temporal scales all the way up to whole-planet and whole-timescale spatiotemporal scales. The biotas of the Earth make up the biosphere.

The term was suggested in 1916 by Clements, originally as a synonym for the biotic community of Möbius (1877). Later, it gained its current definition, based on earlier concepts of phytophysiognomy, formation and vegetation (used in opposition to flora), with the inclusion of the animal element and the exclusion of the taxonomic element of species composition. In 1935, Tansley added the climatic and soil aspects to the idea, calling it an ecosystem. The International Biological Program (1964–74) projects popularized the concept of the biome.

However, in some contexts, the term biome is used in a different manner. In German literature, particularly in the Walter terminology, the term is used similarly to biotope (a concrete geographical unit), while the biome definition used in this article is used as an international, non-regional terminology: irrespective of the continent in which an area is present, it takes the same biome name, and corresponds to Walter's "zonobiome", "orobiome" and "pedobiome" (biomes determined by climate zone, altitude or soil). In Brazilian literature, the term "biome" is sometimes used as a synonym of "biogeographic province", an area based on species composition (the term "floristic province" being used when plant species are considered), or as a synonym of the "morphoclimatic and phytogeographical domain" of Ab'Sáber, a geographic space with subcontinental dimensions, with the predominance of similar geomorphologic and climatic characteristics, and of a certain vegetation form. Both in fact include many biomes.

Dividing the world into a few ecological zones is difficult, notably because of the small-scale variations that exist everywhere on earth and because of the gradual changeover from one biome to another. Their boundaries must therefore be drawn arbitrarily and their characterization made according to the average conditions that predominate in them.

A 1978 study on North American grasslands found a positive logistic correlation between evapotranspiration in mm/yr and above-ground net primary production in g/m2/yr. The general results from the study were that precipitation and water use led to above-ground primary production, while solar irradiation and temperature led to below-ground primary production (roots), and temperature and water led to cool and warm season growth habit. These findings help explain the categories used in Holdridge's bioclassification scheme (see below), which were later simplified by Whittaker.
The number of classification schemes and the variety of determinants used in those schemes should, however, be taken as strong indicators that biomes do not fit perfectly into the classification schemes created for them.

Holdridge classified climates based on the biological effects of temperature and rainfall on vegetation, under the assumption that these two abiotic factors are the largest determinants of the types of vegetation found in a habitat. Holdridge uses four axes to define 30 so-called "humidity provinces", which are clearly visible in his diagram. While this scheme largely ignores soil and sun exposure, Holdridge acknowledged that these were important. Allee (1949) and Kendeigh (1961) each published their own lists of the principal biomes, or biome-types, of the world.

Whittaker classified biomes using two abiotic factors: precipitation and temperature. His scheme can be seen as a simplification of Holdridge's; more readily accessible, but missing Holdridge's greater specificity. Whittaker based his approach on theoretical assertions and empirical sampling. He was in a unique position to make such a holistic assertion because he had previously compiled a review of biome classifications. Whittaker's distinction between biome and formation can be simplified: formation is used when applied to plant communities only, while biome is used when concerned with both plants and animals. Whittaker's convention of biome-type or formation-type is simply a broader method to categorize similar communities. Whittaker, seeing the need for a simpler way to express the relationship of community structure to the environment, used what he called "gradient analysis" of ecocline patterns to relate communities to climate on a worldwide scale. Whittaker considered four main ecoclines in the terrestrial realm, and along these gradients noted several trends that allowed him to qualitatively establish biome-types. Whittaker summed the effects of the two temperature gradients (latitudinal and altitudinal) to get an overall temperature gradient, and combined this with the moisture gradient to express the above conclusions in what is known as the Whittaker classification scheme. The scheme graphs average annual precipitation (x-axis) versus average annual temperature (y-axis) to classify biome-types.

The multiauthored series "Ecosystems of the World", edited by David W. Goodall, provides comprehensive coverage of the major "ecosystem types or biomes" on earth. The Heinrich Walter classification scheme, named after its creator, considers the seasonality of temperature and precipitation. The system, also assessing precipitation and temperature, finds nine major biome types, characterized by their important climate traits and vegetation types. The boundaries of each biome correlate to the conditions of moisture and cold stress that are strong determinants of plant form, and therefore the vegetation that defines the region. Extreme conditions, such as flooding in a swamp, can create different kinds of communities within the same biome. Schultz (1988) defined nine ecozones (note that his concept of ecozone is more similar to the concept of biome used in this article than to the concept of ecozone of BBC).

Robert G. Bailey developed a biogeographical classification system of ecoregions for the United States in a map published in 1976. He subsequently expanded the system to include the rest of North America in 1981, and the world in 1989.
The Bailey system, based on climate, is divided into four domains (polar, humid temperate, dry, and humid tropical), with further divisions based on other climate characteristics (subarctic, warm temperate, hot temperate, and subtropical; marine and continental; lowland and mountain).

A team of biologists convened by the World Wildlife Fund (WWF) developed a scheme that divided the world's land area into biogeographic realms (called "ecozones" in a BBC scheme), and these into ecoregions (Olson & Dinerstein, 1998, etc.). Each ecoregion is characterized by a main biome (also called a major habitat type). This classification is used to define the Global 200 list of ecoregions identified by the WWF as priorities for conservation. For the terrestrial ecoregions, there is a specific EcoID, with the format XXnnNN (XX is the biogeographic realm, nn is the biome number, NN is the individual ecoregion number). The applicability of the realms scheme above, based on Udvardy (1975), to most freshwater taxa is unresolved. The WWF scheme also classifies a set of freshwater biomes, as well as biomes of the coastal and continental shelf areas (the neritic zone). Marine classifications include the zones or "systems" of Pruvot (1896) and the biomes of Longhurst (1998), along with other marine habitat types not yet covered by the Global 200/WWF scheme.

Humans have altered global patterns of biodiversity and ecosystem processes. As a result, vegetation forms predicted by conventional biome systems can no longer be observed across much of Earth's land surface, as they have been replaced by crop and rangelands or cities. Anthropogenic biomes provide an alternative view of the terrestrial biosphere, based on global patterns of sustained direct human interaction with ecosystems, including agriculture, human settlements, urbanization, forestry and other uses of land. Anthropogenic biomes offer a new way forward in ecology and conservation by recognizing the irreversible coupling of human and ecological systems at global scales, and moving us toward an understanding of how best to live in and manage our biosphere and the anthropogenic biomes we live in. Major anthropogenic biomes reflect these patterns of land use, including settlements, croplands and rangelands.

The endolithic biome, consisting entirely of microscopic life in rock pores and cracks, kilometers beneath the surface, has only recently been discovered, and does not fit well into most classification schemes.
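To make a two-factor scheme like Whittaker's concrete, the sketch below classifies a location from its average annual temperature and precipitation, in the spirit of his precipitation-versus-temperature diagram. The cutoff values are rough illustrative numbers invented for this example, not Whittaker's published boundaries.

#include <cstdio>
#include <string>

// Toy Whittaker-style classifier: average annual temperature (degrees C)
// and precipitation (cm/yr) jointly select a biome-type.
// All thresholds are illustrative only.
std::string classifyBiome(double tempC, double precipCm) {
    if (tempC < -5.0) return "tundra";
    if (tempC < 3.0)  return "boreal forest (taiga)";
    if (tempC < 20.0) {
        if (precipCm < 30.0)  return "temperate grassland / cold desert";
        if (precipCm < 200.0) return "temperate forest";
        return "temperate rain forest";
    }
    if (precipCm < 50.0)  return "subtropical desert";
    if (precipCm < 250.0) return "tropical seasonal forest / savanna";
    return "tropical rain forest";
}

int main() {
    // A hot, very wet location falls in the tropical rain forest region.
    std::printf("%s\n", classifyBiome(26.0, 300.0).c_str());
    return 0;
}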
https://en.wikipedia.org/wiki?curid=4802
Behavior Behavior (American English) or behaviour (British English; see spelling differences) is the actions and mannerisms made by individuals, organisms, systems or artificial entities in conjunction with themselves or their environment, which includes the other systems or organisms around as well as the (inanimate) physical environment. It is the computed response of the system or organism to various stimuli or inputs, whether internal or external, conscious or subconscious, overt or covert, and voluntary or involuntary. From a behavior informatics perspective, a behavior consists of an actor, an operation, interactions, and their properties, and a behavior can be represented as a behavior vector.

Although there is some disagreement as to how to precisely define behavior in a biological context, one common interpretation based on a meta-analysis of scientific literature states that "behavior is the internally coordinated responses (actions or inactions) of whole living organisms (individuals or groups) to internal and/or external stimuli". A broader definition of behavior, applicable to plants and other organisms, is similar to the concept of phenotypic plasticity. It describes behavior as a response to an event or environment change during the course of the lifetime of an individual, differing from other physiological or biochemical changes that occur more rapidly, and excluding changes that are the result of development (ontogeny). Behaviors can be either innate or learned from the environment. Behavior can be regarded as any action of an organism that changes its relationship to its environment. Behavior provides outputs from the organism to the environment.

Human behavior is believed to be influenced by the endocrine system and the nervous system. It is most commonly believed that complexity in the behavior of an organism is correlated to the complexity of its nervous system. Generally, organisms with more complex nervous systems have a greater capacity to learn new responses and thus adjust their behavior. Ethology is the scientific and objective study of animal behavior, usually with a focus on behavior under natural conditions, and viewing behavior as an evolutionarily adaptive trait. Behaviorism is a term that also describes the scientific and objective study of animal behavior, usually referring to measured responses to stimuli or trained behavioral responses in a laboratory context, without a particular emphasis on evolutionary adaptivity.

Consumer behavior refers to the processes consumers go through, and the reactions they have, towards products or services (Dowhan, 2013). It concerns consumption, and the processes consumers go through around purchasing and consuming goods and services (Szwacka-Mokrzycka, 2015). Consumers recognise needs or wants, and go through a process to satisfy these needs. Consumer behavior is the process they go through as customers, which includes the types of products purchased, the amount spent, the frequency of purchases, and what influences them to make the purchase decision or not. Consumer behavior is shaped by both internal and external factors: internal factors include attitudes, needs, motives, preferences and perceptual processes, whilst external factors include marketing activities, social and economic factors, and cultural aspects (Szwacka-Mokrzycka, 2015).
Dr. Lars Perner of the University of Southern California claims that there are also physical factors that influence consumer behavior; for example, if a consumer is hungry, then this physical feeling of hunger will influence them to go and purchase a sandwich to satisfy the hunger (Perner, 2008).

There is a model described by Perner which illustrates the decision-making process with regard to consumer behavior. It begins with the recognition of a problem: the consumer recognises a need or want which has not been satisfied. This leads the consumer to search for information; if it is a low-involvement product, then the search will be internal, identifying alternatives purely from memory, while if it is a high-involvement product, the search will be more thorough, such as reading reviews or reports or asking friends. The consumer will then evaluate his or her alternatives, comparing price and quality, making trade-offs between products, and narrowing down the choice by eliminating the less appealing products until there is one left. After this has been identified, the consumer will purchase the product. Finally, the consumer will evaluate the purchase decision and the purchased product, bringing in factors such as value for money, quality of goods, and purchase experience (model taken from Perner, 2008). However, this logical process does not always happen this way; people are emotional and irrational creatures. According to the psychologist Robert Cialdini, people make decisions with emotion and then justify them with logic.

The 4 P's are a marketing tool, and stand for Price, Promotion, Product and Place or Product Placement (Clemons, 2008). Consumer behavior is influenced greatly by business-to-consumer marketing, so, being a prominent marketing tool, the 4 P's have an effect on consumers' behavior. The price of a good or service is largely determined by the market, as businesses will set their prices to be similar to those of other businesses, so as to remain competitive whilst making a profit (Clemons, 2008). When market prices for a product are high, consumers will purchase less and use purchased goods for longer periods of time, meaning they are purchasing the product less often. Alternatively, when market prices for a product are low, consumers are more likely to purchase more of the product, and more often.

The way that promotion influences consumer behavior has changed over time. In the past, large promotional campaigns and heavy advertising would convert into sales for a business, but nowadays businesses can have success with products that have little or no advertising (Clemons, 2008). This is due to the Internet, and in particular social media. Businesses rely on word of mouth from consumers using social media, and as products trend online, sales increase as products effectively promote themselves (Clemons, 2008). Thus, promotion by businesses does not necessarily result in consumer behavior trending towards purchasing products.

The way that product influences consumer behavior is through consumer willingness to pay and consumer preferences (Clemons, 2008). This means that even if a company has a long history of products in the market, consumers will still pick a cheaper product over the company in question's product if it means they will pay less for something that is very similar (Clemons, 2008). This is due to consumer willingness to pay, or their willingness to part with the money they have earned.
Product also influences consumer behavior through customer preferences. For example, take Pepsi vs. Coca-Cola: a Pepsi drinker is less likely to purchase Coca-Cola, even if it is cheaper and more convenient. This is due to the preference of the consumer; no matter how hard the opposing company tries, it will find it very difficult to make the customer change their mind. Product placement in the modern era has little influence on consumer behavior, due to the availability of goods online (Clemons, 2008). If a customer can purchase a good from the comfort of their home instead of purchasing in-store, then the placement of products is not going to influence their purchase decision.

In management, behaviors are associated with desired or undesired outcomes. Managers generally note what the desired outcome is, but behavioral patterns can take over. These patterns are the reference to how often the desired behavior actually occurs. Before a behavior actually occurs, antecedents focus on the stimuli that influence the behavior that is about to happen. After the behavior occurs, consequences fall into place. Consequences consist of rewards or punishments.

Social behavior is behavior among two or more organisms within the same species, and encompasses any behavior in which one member affects another. It arises from an interaction among those members. Social behavior can be seen as similar to an exchange of goods, with the expectation that when one gives, one will receive the same. This behavior can be affected by both the qualities of the individual and environmental (situational) factors. Therefore, social behavior arises as a result of an interaction between the two: the organism and its environment. This means that, in regards to humans, social behavior can be determined by both the individual characteristics of the person and the situation they are in.

Behavior informatics, also called behavior computing, explores behavior intelligence and behavior insights from the informatics and computing perspectives. Different from applied behavior analysis, which works from the psychological perspective, behavior informatics builds computational theories, systems and tools to qualitatively and quantitatively model, represent, analyze, and manage behaviors of individuals, groups and/or organizations.

Health behavior refers to a person's beliefs and actions regarding their health and well-being. Health behaviors are direct factors in maintaining a healthy lifestyle. Health behaviors are influenced by the social, cultural and physical environments in which we live and work. They are shaped by individual choices and external constraints. Positive behaviors help promote health and prevent disease, while the opposite is true for risk behaviors. Health behaviors are early indicators of population health. Because of the time lag that often occurs between certain behaviors and the development of disease, these indicators may foreshadow the future burdens and benefits of health-risk and health-promoting behaviors. Health behaviors do not occur in isolation: they are influenced and constrained by social and cultural norms. A variety of studies have examined the relationship between health behaviors and health outcomes (e.g., Blaxter 1990) and have demonstrated their role in both morbidity and mortality.
These studies identified seven features of lifestyle which were associated with lower morbidity and higher subsequent long-term survival (Belloc and Breslow 1972). Health behaviors impact upon individuals' quality of life by delaying the onset of chronic disease and extending active lifespan. Smoking, alcohol consumption, diet, gaps in primary care services and low screening uptake are all significant determinants of poor health, and changing such behaviors should lead to improved health. For example, in the United States, "Healthy People 2000", from the United States Department of Health and Human Services, lists increased physical activity, changes in nutrition and reductions in tobacco, alcohol and drug use as important for health promotion and disease prevention. Any interventions are matched with the needs of each individual in an ethical and respectful manner.

The health belief model encourages increasing individuals' perceived susceptibility to negative health outcomes and making individuals aware of the severity of such negative health behavior outcomes, for example through health promotion messages. In addition, the health belief model suggests the need to focus on the benefits of health behaviors and on the fact that barriers to action are easily overcome. The theory of planned behavior suggests using persuasive messages to tackle behavioral beliefs and so increase the readiness to perform a behavior, called an "intention". The theory of planned behavior also advocates the need to tackle normative beliefs and control beliefs in any attempt to change behavior. Challenging normative beliefs is not enough; to bring about positive change, the "intention" must be followed through with self-efficacy, built from an individual's mastery of problem solving and task completion. Self-efficacy is often cemented through standard persuasive techniques.
https://en.wikipedia.org/wiki?curid=4805
Battle of Marathon The Battle of Marathon took place in 490 BC during the first Persian invasion of Greece. It was fought between the citizens of Athens, aided by Plataea, and a Persian force commanded by Datis and Artaphernes. The battle was the culmination of the first attempt by Persia, under King Darius I, to subjugate Greece. The Greek army decisively defeated the more numerous Persians, marking a turning point in the Greco-Persian Wars.

The first Persian invasion was a response to Athenian involvement in the Ionian Revolt, when Athens and Eretria had sent a force to support the cities of Ionia in their attempt to overthrow Persian rule. The Athenians and Eretrians had succeeded in capturing and burning Sardis, but they were then forced to retreat with heavy losses. In response to this raid, Darius swore to burn down Athens and Eretria. According to Herodotus, Darius had his bow brought to him and then shot an arrow "upwards towards heaven", saying as he did so: "Zeus, that it may be granted me to take vengeance upon the Athenians!" Herodotus further writes that Darius charged one of his servants to say "Master, remember the Athenians" three times before dinner each day.

At the time of the battle, Sparta and Athens were the two largest city-states in Greece. Once the Ionian revolt was finally crushed by the Persian victory at the Battle of Lade in 494 BC, Darius began plans to subjugate Greece. In 490 BC, he sent a naval task force under Datis and Artaphernes across the Aegean, to subjugate the Cyclades, and then to make punitive attacks on Athens and Eretria. Reaching Euboea in mid-summer after a successful campaign in the Aegean, the Persians proceeded to besiege and capture Eretria. The Persian force then sailed for Attica, landing in the bay near the town of Marathon. The Athenians, joined by a small force from Plataea, marched to Marathon, and succeeded in blocking the two exits from the plain of Marathon. The Athenians also sent a message to the Spartans asking for support. When the messenger arrived in Sparta, the Spartans were involved in a religious festival and gave this as a reason for not coming to the aid of the Athenians.

The Athenians and their allies chose a location for the battle, with marshes and mountainous terrain, which prevented the Persian cavalry from joining the Persian infantry. Miltiades, the Athenian general, ordered a general attack against the Persian forces, composed primarily of missile troops. He reinforced his flanks, luring the Persians' best fighters into his centre. The inward-wheeling flanks enveloped the Persians, routing them. The Persian army broke in panic towards their ships, and large numbers were slaughtered. The defeat at Marathon marked the end of the first Persian invasion of Greece, and the Persian force retreated to Asia. Darius then began raising a huge new army with which he meant to completely subjugate Greece; however, in 486 BC, his Egyptian subjects revolted, indefinitely postponing any Greek expedition. After Darius died, his son Xerxes I restarted the preparations for a second invasion of Greece, which finally began in 480 BC.

The Battle of Marathon was a watershed in the Greco-Persian wars, showing the Greeks that the Persians could be beaten; the eventual Greek triumph in these wars can be seen to have begun at Marathon. The battle also showed the Greeks that they were able to win battles without the Spartans, on whom they had heavily relied previously.
This victory was largely due to the Athenians, and Marathon raised Greek esteem of them. The following two hundred years saw the rise of Classical Greek civilization, which has been enduringly influential in western society, and so the Battle of Marathon is often seen as a pivotal moment in Mediterranean and European history.

The first Persian invasion of Greece had its immediate roots in the Ionian Revolt, the earliest phase of the Greco-Persian Wars. However, it was also the result of the longer-term interaction between the Greeks and Persians. In 500 BC the Persian Empire was still relatively young and highly expansionistic, but prone to revolts amongst its subject peoples. Moreover, the Persian King Darius was a usurper, and had spent considerable time extinguishing revolts against his rule. Even before the Ionian Revolt, Darius had begun to expand the empire into Europe, subjugating Thrace and forcing Macedon to become a vassal of Persia. Attempts at further expansion into the politically fractious world of ancient Greece may have been inevitable. However, the Ionian Revolt had directly threatened the integrity of the Persian empire, and the states of mainland Greece remained a potential menace to its future stability. Darius thus resolved to subjugate and pacify Greece and the Aegean, and to punish those involved in the Ionian Revolt.

The Ionian Revolt had begun with an unsuccessful expedition against Naxos, a joint venture between the Persian satrap Artaphernes and the Milesian tyrant Aristagoras. In the aftermath, Artaphernes decided to remove Aristagoras from power, but before he could do so, Aristagoras abdicated and declared Miletus a democracy. The other Ionian cities followed suit, ejecting their Persian-appointed tyrants and declaring themselves democracies. Aristagoras then appealed to the states of mainland Greece for support, but only Athens and Eretria offered to send troops.

The involvement of Athens in the Ionian Revolt arose from a complex set of circumstances, beginning with the establishment of the Athenian Democracy in the late 6th century BC. In 510 BC, with the aid of Cleomenes I, King of Sparta, the Athenian people had expelled Hippias, the tyrant ruler of Athens. Under Hippias and his father Peisistratus before him, the family had ruled for 36 of the previous 50 years, and Hippias fully intended to continue his rule. Hippias fled to Sardis, to the court of the Persian satrap Artaphernes, and promised control of Athens to the Persians if they were to help restore him. In the meantime, Cleomenes helped install a pro-Spartan tyranny under Isagoras in Athens, in opposition to Cleisthenes, the leader of the traditionally powerful Alcmaeonidae family, who considered themselves the natural heirs to the rule of Athens. Cleisthenes, however, found himself being politically defeated by a coalition led by Isagoras and decided to change the rules of the game by appealing to the "demos" (the people), in effect making them a new faction in the political arena. This tactic succeeded, but the Spartan King, Cleomenes I, returned at the request of Isagoras, and so Cleisthenes, the Alcmaeonids and other prominent Athenian families were exiled from Athens. When Isagoras attempted to create a narrow oligarchic government, the Athenian people, in a spontaneous and unprecedented move, expelled Cleomenes and Isagoras. Cleisthenes was thus restored to Athens (507 BC), and at breakneck speed began to reform the state with the aim of securing his position.
The result was not actually a democracy or a real civic state, but his reforms enabled the development of a fully democratic government, which would emerge in the next generation as the demos realized its power. The new-found freedom and self-governance of the Athenians meant that they were thereafter exceptionally hostile to the return of the tyranny of Hippias, or to any form of outside subjugation, by Sparta, Persia, or anyone else.

Cleomenes was not pleased with events, and marched on Athens with the Spartan army. Cleomenes's attempts to restore Isagoras to Athens ended in a debacle, but, fearing the worst, the Athenians had by this point already sent an embassy to Artaphernes in Sardis to request aid from the Persian empire. Artaphernes requested that the Athenians give him "earth and water", a traditional token of submission, to which the Athenian ambassadors acquiesced. They were, however, severely censured for this when they returned to Athens. At some later point Cleomenes instigated a plot to restore Hippias to the rule of Athens. This failed, and Hippias again fled to Sardis and tried to persuade the Persians to subjugate Athens. The Athenians dispatched ambassadors to Artaphernes to dissuade him from taking action, but Artaphernes merely instructed the Athenians to take Hippias back as tyrant. The Athenians indignantly declined, and instead resolved to open war with Persia.

Having thus become the enemy of Persia, Athens was already in a position to support the Ionian cities when they began their revolt. The fact that the Ionian democracies were inspired by the example the Athenians had set no doubt further persuaded the Athenians to support the Ionian Revolt, especially since the cities of Ionia were originally Athenian colonies. The Athenians and Eretrians sent a task force of 25 triremes to Asia Minor to aid the revolt. Whilst there, the Greek army surprised and outmaneuvered Artaphernes, marching to Sardis and burning the lower city. This was, however, as much as the Greeks achieved, and they were then repelled and pursued back to the coast by Persian horsemen, losing many men in the process. Despite the fact that their actions were ultimately fruitless, the Eretrians and in particular the Athenians had earned Darius's lasting enmity, and he vowed to punish both cities.

The Persian naval victory at the Battle of Lade (494 BC) all but ended the Ionian Revolt, and by 493 BC, the last hold-outs were vanquished by the Persian fleet. The revolt was used as an opportunity by Darius to extend the empire's border to the islands of the eastern Aegean and the Propontis, which had not been part of the Persian dominions before. The pacification of Ionia allowed the Persians to begin planning their next moves: to extinguish the threat to the empire from Greece, and to punish Athens and Eretria.

In 492 BC, after the Ionian Revolt had finally been crushed, Darius dispatched an expedition to Greece under the command of his son-in-law, Mardonius. Mardonius re-subjugated Thrace and made Macedonia a fully subordinate part of the Persian empire; the Macedonians had been vassals of the Persians since the late 6th century BC, but retained their general autonomy. Not long after, however, his fleet was wrecked by a violent storm, which brought a premature end to the campaign. However, in 490 BC, following the successes of the previous campaign, Darius decided to send a maritime expedition led by Artaphernes (son of the satrap to whom Hippias had fled) and Datis, a Median admiral.
Mardonius had been injured in the prior campaign and had fallen out of favor. The expedition was intended to bring the Cyclades into the Persian empire, to punish Naxos (which had resisted a Persian assault in 499 BC) and then to head to Greece to force Eretria and Athens to submit to Darius or be destroyed. After island-hopping across the Aegean, including successfully attacking Naxos, the Persian task force arrived off Euboea in midsummer. The Persians then proceeded to besiege, capture and burn Eretria. They then headed south down the coast of Attica, en route to complete the final objective of the campaign—punish Athens. The Persians sailed down the coast of Attica and landed at the bay of Marathon, about 40 kilometers (25 miles) from Athens, on the advice of the exiled Athenian tyrant Hippias (who had accompanied the expedition). Under the guidance of Miltiades, the Athenian general with the greatest experience of fighting the Persians, the Athenian army marched quickly to block the two exits from the plain of Marathon and prevent the Persians moving inland. At the same time, Athens's greatest runner, Pheidippides (or Philippides in some accounts), had been sent to Sparta to request that the Spartan army march to the aid of Athens. Pheidippides arrived during the festival of "Carneia", a sacrosanct period of peace, and was informed that the Spartan army could not march to war until the full moon rose; Athens could not expect reinforcement for at least ten days. The Athenians would have to hold out at Marathon for the time being, although they were reinforced by the full muster of 1,000 hoplites from the small city of Plataea, a gesture which did much to steady the nerves of the Athenians and won unending Athenian gratitude to Plataea. For approximately five days the armies therefore confronted each other across the plain of Marathon in stalemate. The flanks of the Athenian camp were protected either by a grove of trees or by an "abatis" of stakes (depending on the exact reading). Since every day brought the arrival of the Spartans closer, the delay worked in favor of the Athenians. There were ten Athenian "strategoi" (generals) at Marathon, elected by each of the ten tribes that the Athenians were divided into; Miltiades was one of these. In addition, in overall charge was the War-Archon (polemarch), Callimachus, who had been elected by the whole citizen body. Herodotus suggests that command rotated between the "strategoi", each taking in turn a day to command the army. He further suggests that each "strategos", on his day in command, instead deferred to Miltiades. In Herodotus's account, Miltiades is keen to attack the Persians (despite knowing that the Spartans are coming to aid the Athenians), but, strangely, chooses to wait until his actual day of command to attack. This passage is undoubtedly problematic; the Athenians had little to gain by attacking before the Spartans arrived, and there is no real evidence of this rotating generalship. There does, however, seem to have been a delay between the Athenian arrival at Marathon and the battle; Herodotus, who evidently believed that Miltiades was eager to attack, may have made a mistake while seeking to explain this delay. As is discussed below, the reason for the delay was probably simply that neither the Athenians nor the Persians were willing to risk battle initially. This then raises the question of why the battle occurred when it did. 
Herodotus explicitly tells us that the Greeks attacked the Persians (and the other sources confirm this), but it is not clear why they did this before the arrival of the Spartans. There are two main theories to explain this. The first theory is that the Persian cavalry left Marathon for an unspecified reason, and that the Greeks moved to take advantage of this by attacking. This theory is based on the absence of any mention of cavalry in Herodotus' account of the battle, and an entry in the "Suda" dictionary. The entry "χωρίς ἱππέων" ("without cavalry") is explained thus: The cavalry left. When Datis surrendered and was ready for retreat, the Ionians climbed the trees and gave the Athenians the signal that the cavalry had left. And when Miltiades realized that, he attacked and thus won. From there comes the above-mentioned quote, which is used when someone breaks ranks before battle. There are many variations of this theory, but perhaps the most prevalent is that the cavalry were completing the time-consuming process of re-embarking on the ships, and were to be sent by sea to attack (undefended) Athens in the rear, whilst the rest of the Persians pinned down the Athenian army at Marathon. This theory therefore utilises Herodotus' suggestion that after Marathon, the Persian army began to re-embark, intending to sail around Cape Sounion to attack Athens directly. Thus, this re-embarkation would have occurred "before" the battle (and indeed have triggered the battle). The second theory is simply that the battle occurred because the Persians finally moved to attack the Athenians. Although this theory has the Persians moving to the "strategic" offensive, this can be reconciled with the traditional account of the Athenians attacking the Persians by assuming that, seeing the Persians advancing, the Athenians took the "tactical" offensive, and attacked them. Obviously, it cannot be firmly established which theory (if either) is correct. However, both theories imply that there was some kind of Persian activity on or about the fifth day which ultimately triggered the battle. It is also possible that both theories are correct: when the Persians sent the cavalry by ship to attack Athens, they simultaneously sent their infantry to attack at Marathon, triggering the Greek counterattack. For several events Herodotus gives a date in the lunisolar calendar, of which each Greek city-state used its own variant. Astronomical computation allows us to derive an absolute date in the proleptic Julian calendar, which historians widely use as a chronological frame. Philipp August Böckh concluded in 1855 that the battle took place on September 12, 490 BC in the Julian calendar, and this is the conventionally accepted date. However, this depends on when exactly the Spartans held their festival, and it is possible that the Spartan calendar was one month ahead of that of Athens; in that case the battle took place on August 12, 490 BC. Herodotus does not give a figure for the size of the Athenian army. However, Cornelius Nepos, Pausanias and Plutarch all give the figure of 9,000 Athenians and 1,000 Plataeans, while Justin suggests that there were 10,000 Athenians and 1,000 Plataeans. These numbers are highly comparable to the number of troops Herodotus says that the Athenians and Plataeans sent to the Battle of Plataea 11 years later. Pausanias noticed on the monument to the battle the names of former slaves who were freed in exchange for military service. 
Modern historians generally accept these numbers as reasonable. The area ruled by Athens (Attica) had a population of around 315,000 at this time, including slaves, which implies that the full Athenian army at the times of both Marathon and Plataea numbered about 3% of the population. According to Herodotus, the fleet sent by Darius consisted of 600 triremes. Herodotus does not estimate the size of the Persian army, saying only that it was a "large infantry that was well packed". Among ancient sources, the poet Simonides, another near-contemporary, says the campaign force numbered 200,000, while a later writer, the Roman Cornelius Nepos, estimates 200,000 infantry and 10,000 cavalry, of which only 100,000 fought in the battle, while the rest were loaded into the fleet that was rounding Cape Sounion; Plutarch and Pausanias both independently give 300,000, as does the Suda dictionary. Plato and Lysias give 500,000, and Justinus 600,000. Modern historians have proposed wide-ranging numbers for the infantry, from 20,000 to 100,000, with a consensus of perhaps 25,000; estimates for the cavalry are in the range of 1,000. The fleet included various contingents from different parts of the Achaemenid Empire, particularly Ionians and Aeolians, although they are not mentioned as participating directly in the battle and may have remained on the ships. Regarding the ethnicities involved in the battle, Herodotus specifically mentions the presence of the Persians and the Sakae at the center of the Achaemenid line. From a strategic point of view, the Athenians had some disadvantages at Marathon. In order to face the Persians in battle, the Athenians had to summon all available hoplites; and even then they were still probably outnumbered at least 2 to 1. Furthermore, raising such a large army had denuded Athens of defenders: any secondary attack in the Athenian rear would cut the army off from the city, and any direct attack on the city could not be defended against. Still further, defeat at Marathon would mean the complete defeat of Athens, since no other Athenian army existed. The Athenian strategy was therefore to keep the Persian army pinned down at Marathon, blocking both exits from the plain and thus preventing themselves from being outmaneuvered. However, these disadvantages were balanced by some advantages. The Athenians initially had no need to seek battle, since they had managed to confine the Persians to the plain of Marathon. Furthermore, time worked in their favour, as every day brought the arrival of the Spartans closer. Having everything to lose by attacking, and much to gain by waiting, the Athenians remained on the defensive in the run-up to the battle. Tactically, hoplites were vulnerable to attacks by cavalry, and since the Persians had substantial numbers of cavalry, this made any offensive maneuver by the Athenians even more of a risk, and thus reinforced the defensive strategy of the Athenians. The Persian strategy, on the other hand, was probably principally determined by tactical considerations. The Persian infantry was evidently lightly armoured, and no match for hoplites in a head-on confrontation (as would be demonstrated at the later battles of Thermopylae and Plataea). Since the Athenians seem to have taken up a strong defensive position at Marathon, the Persian hesitance was probably a reluctance to attack the Athenians head-on. The camp of the Athenians was located on a spur of Mount Agrieliki next to the plain of Marathon; remains of its fortifications are still visible. 
Whatever event eventually triggered the battle, it obviously altered the strategic or tactical balance sufficiently to induce the Athenians to attack the Persians. If the first theory is correct (see above), then the absence of cavalry removed the main Athenian tactical disadvantage, and the threat of being outflanked made it imperative to attack. Conversely, if the second theory is correct, then the Athenians were merely reacting to the Persians attacking them. Since the Persian force obviously contained a high proportion of missile troops, a static defensive position would have made little sense for the Athenians; the strength of the hoplite was in the melee, and the sooner that could be brought about, the better, from the Athenian point of view. If the second theory is correct, this raises the further question of why the Persians, having hesitated for several days, then attacked. There may have been several strategic reasons for this; perhaps they were aware (or suspected) that the Athenians were expecting reinforcements. Alternatively, they may have felt the need to force some kind of victory—they could hardly remain at Marathon indefinitely. The distance between the two armies at the point of battle had narrowed to "a distance not less than 8 stadia", or about 1,500 meters. Miltiades ordered the two tribes forming the center of the Greek formation, the Leontis tribe led by Themistocles and the Antiochis tribe led by Aristides, to be arranged to a depth of four ranks, while the rest of the tribes on their flanks were in ranks of eight. Some modern commentators have suggested this was a deliberate ploy to encourage a double envelopment of the Persian centre. However, this suggests a level of training that the Greeks are thought not to have possessed. There is little evidence for any such tactical thinking in Greek battles until Leuctra in 371 BC. It is therefore possible that this arrangement was made, perhaps at the last moment, so that the Athenian line was as long as the Persian line and would not be outflanked. When the Athenian line was ready, according to one source, the simple signal to advance was given by Miltiades: "At them". Herodotus implies the Athenians ran the whole distance to the Persian lines, a feat generally thought to be physically impossible under the weight of hoplite armour. More likely, they marched until they reached the limit of the archers' effectiveness, the "beaten zone" (roughly 200 meters), and then broke into a run towards their enemy. Another possibility is that they ran "up to" the 200-meter mark in broken ranks, and then reformed for the march into battle from there. Herodotus suggests that this was the first time a Greek army ran into battle in this way; this was probably because it was the first time that a Greek army had faced an enemy composed primarily of missile troops. All this was evidently much to the surprise of the Persians; "... in their minds they charged the Athenians with madness which must be fatal, seeing that they were few and yet were pressing forwards at a run, having neither cavalry nor archers". Indeed, based on their previous experience of the Greeks, the Persians might be excused for this; Herodotus tells us that the Athenians at Marathon were "first to endure looking at Median dress and men wearing it, for up until then just hearing the name of the Medes caused the Hellenes to panic". 
Passing through the hail of arrows launched by the Persian army, protected for the most part by their armour, the Greek line finally made contact with the enemy army. The Athenian wings quickly routed the inferior Persian levies on the flanks, before turning inwards to surround the Persian centre, which had been more successful against the thin Greek centre. The battle ended when the Persian centre broke in panic towards their ships, pursued by the Greeks. Some, unaware of the local terrain, ran towards the swamps, where unknown numbers drowned. The Athenians pursued the Persians back to their ships and managed to capture seven, though the majority were able to launch successfully. Herodotus recounts the story that Cynaegirus, brother of the playwright Aeschylus, who was also among the fighters, charged into the sea, grabbed one Persian trireme, and started pulling it towards shore. A member of the crew saw him, cut off his hand, and Cynaegirus died. Herodotus records that 6,400 Persian bodies were counted on the battlefield; it is unknown how many more perished in the swamps. He also reports that the Athenians lost 192 men and the Plataeans 11. Among the dead were the war archon Callimachus and the general Stesilaos. There are several explanations of the Greek success. Most scholars believe that the Greeks had better equipment and used superior tactics. According to Herodotus, the Greeks were better equipped: they did not use bronze upper-body armour at this time, but armour of leather or linen. The phalanx formation proved successful because the hoplites had a long tradition in hand-to-hand combat, whereas the Persian soldiers were accustomed to a very different kind of conflict. At Marathon, the Athenians thinned their centre in order to make their army equal in length to the Persian army, not as a result of tactical planning. It seems that the Persian centre tried to return, realizing that their wings had broken, and was caught in the flanks by the victorious Greek wings. Lazenby (1993) believes that the ultimate reason for the Greek success was the courage the Greeks displayed. According to Vic Hurley, the Persian defeat is explained by the "complete failure ... to field a representative army", and he calls the battle the "most convincing" example of the fact that infantry-bowmen cannot defend a position at close quarters unsupported (i.e., without fortifications, or without the support of cavalry and chariots, as was the common Persian tactic). In the immediate aftermath of the battle, Herodotus says that the Persian fleet sailed around Cape Sounion to attack Athens directly. As has been discussed above, some modern historians place this attempt just before the battle. Either way, the Athenians evidently realised that their city was still under threat, and marched as quickly as possible back to Athens. The two tribes which had been in the centre of the Athenian line stayed to guard the battlefield, under the command of Aristides. The Athenians arrived in time to prevent the Persians from securing a landing, and seeing that the opportunity was lost, the Persians turned about and returned to Asia. Connected with this episode, Herodotus recounts a rumour that this manoeuver by the Persians had been planned in conjunction with the Alcmaeonids, the prominent Athenian aristocratic family, and that a "shield-signal" had been given after the battle. 
Although many interpretations of this have been offered, it is impossible to tell whether this was true, and if so, what exactly the signal meant. On the next day, the Spartan army arrived at Marathon, having covered the roughly 225 kilometers (140 miles) from Sparta in only three days. The Spartans toured the battlefield at Marathon and agreed that the Athenians had won a great victory. The Athenian and Plataean dead of Marathon were buried on the battlefield in two tumuli, and on the tomb of the Athenians an epigram composed by Simonides was inscribed. Meanwhile, Darius began raising a huge new army with which he meant to completely subjugate Greece; however, in 486 BC, his Egyptian subjects revolted, indefinitely postponing any Greek expedition. Darius then died whilst preparing to march on Egypt, and the throne of Persia passed to his son Xerxes I. Xerxes crushed the Egyptian revolt, and very quickly restarted the preparations for the invasion of Greece. The epic second Persian invasion of Greece finally began in 480 BC, and the Persians met with initial success at the battles of Thermopylae and Artemisium. However, defeat at the Battle of Salamis would be the turning point in the campaign, and the next year the expedition was ended by the decisive Greek victory at the Battle of Plataea. The defeat at Marathon barely touched the vast resources of the Persian empire, yet for the Greeks it was an enormously significant victory. It was the first time the Greeks had beaten the Persians, proving that the Persians were not invincible, and that resistance, rather than subjugation, was possible. The battle was a defining moment for the young Athenian democracy, showing what might be achieved through unity and self-belief; indeed, the battle effectively marks the start of a "golden age" for Athens. This was also applicable to Greece as a whole; "their victory endowed the Greeks with a faith in their destiny that was to endure for three centuries, during which western culture was born". John Stuart Mill's famous opinion was that "the Battle of Marathon, even as an event in British history, is more important than the Battle of Hastings". According to Isaac Asimov, "if the Athenians had lost in Marathon ... Greece might have never gone on to develop the peak of its civilization, a peak whose fruits we moderns have inherited." It seems that the Athenian playwright Aeschylus considered his participation at Marathon, rather than his plays, to be his greatest achievement in life, since his gravestone carried an epigram commemorating his service in the battle rather than his poetry. Militarily, a major lesson for the Greeks was the potential of the hoplite phalanx. This style had developed during internecine warfare amongst the Greeks; since each city-state fought in the same way, the advantages and disadvantages of the hoplite phalanx had not been obvious. Marathon was the first time a phalanx faced more lightly armed troops, and revealed how effective the hoplites could be in battle. The phalanx formation was still vulnerable to cavalry (the cause of much caution by the Greek forces at the Battle of Plataea), but used in the right circumstances, it was now shown to be a potentially devastating weapon. The main source for the Greco-Persian Wars is the Greek historian Herodotus. Herodotus, who has been called the "Father of History", was born in 484 BC in Halicarnassus, Asia Minor (then under Persian overlordship). 
He wrote his "Enquiries" (Greek "Historiai"; English "(The) Histories") around 440–430 BC, trying to trace the origins of the Greco-Persian Wars, which would still have been relatively recent history (the wars finally ended in 450 BC). Herodotus's approach was entirely novel, and at least in Western society, he does seem to have invented "history" as we know it. As Holland has it: "For the first time, a chronicler set himself to trace the origins of a conflict not to a past so remote as to be utterly fabulous, nor to the whims and wishes of some god, nor to a people's claim to manifest destiny, but rather explanations he could verify personally." Some subsequent ancient historians, despite following in his footsteps, criticised Herodotus, starting with Thucydides. Nevertheless, Thucydides chose to begin his history where Herodotus left off (at the Siege of Sestos), and may therefore have felt that Herodotus's history was accurate enough not to need re-writing or correcting. Plutarch criticised Herodotus in his essay "On the Malice of Herodotus", describing Herodotus as "Philobarbaros" (barbarian-lover) for not being pro-Greek enough, which suggests that Herodotus might actually have done a reasonable job of being even-handed. A negative view of Herodotus was passed on to Renaissance Europe, though he remained well read. However, since the 19th century his reputation has been dramatically rehabilitated by archaeological finds which have repeatedly confirmed his version of events. The prevailing modern view is that Herodotus generally did a remarkable job in his "Historiai", but that some of his specific details (particularly troop numbers and dates) should be viewed with skepticism. Nevertheless, there are still some historians who believe Herodotus made up much of his story. The Sicilian historian Diodorus Siculus, writing in the 1st century BC in his "Bibliotheca Historica", also provides an account of the Greco-Persian wars, partially derived from the earlier Greek historian Ephorus. This account is fairly consistent with Herodotus's. The Greco-Persian wars are also described in less detail by a number of other ancient historians, including Plutarch and Ctesias of Cnidus, and are alluded to by other authors, such as the playwright Aeschylus. Archaeological evidence, such as the Serpent Column, also supports some of Herodotus's specific claims. The most famous legend associated with Marathon is that of the runner Pheidippides (or Philippides) bringing news to Athens of the battle, which is described below. Pheidippides' run to Sparta to bring aid has other legends associated with it. Herodotus mentions that Pheidippides was visited by the god Pan on his way to Sparta (or perhaps on his return journey). Pan asked why the Athenians did not honor him, and the awed Pheidippides promised that they would do so from then on. The god apparently felt that the promise would be kept, so he appeared in battle and at the crucial moment instilled the Persians with his own brand of fear: the mindless, frenzied fear that bore his name, "panic". After the battle, a sacred precinct was established for Pan in a grotto on the north slope of the Acropolis, and a sacrifice was offered annually. Similarly, after the victory the festival of the "Agroteras Thysia" ("sacrifice to the Agrotéra") was held at Agrae near Athens, in honor of Artemis Agrotera ("Artemis the Huntress"). 
This was in fulfillment of a vow made by the city before the battle to offer in sacrifice a number of goats equal to that of the Persians slain in the conflict. The number was so great that it was decided to offer 500 goats yearly until the number was filled; Xenophon notes that at his time, 90 years after the battle, goats were still being offered yearly. Plutarch mentions that the Athenians saw the phantom of King Theseus, the mythical hero of Athens, leading the army in full battle gear in the charge against the Persians, and indeed he was depicted in the mural of the Stoa Poikile fighting for the Athenians, along with the twelve Olympian gods and other heroes. Pausanias also tells us that: They say too that there chanced to be present in the battle a man of rustic appearance and dress. Having slaughtered many of the foreigners with a plough, he was seen no more after the engagement. When the Athenians made enquiries at the oracle, the god merely ordered them to honor Echetlaeus ("he of the plough-tail") as a hero. Another tale from the conflict is of the dog of Marathon. Aelian relates that one hoplite brought his dog to the Athenian encampment, and that the dog followed his master to battle and attacked the Persians at his master's side. He also informs us that this dog is depicted in the mural of the Stoa Poikile. According to Herodotus, an Athenian runner named Pheidippides was sent to run from Athens to Sparta to ask for assistance before the battle. He ran a distance of over 225 kilometers (140 miles), arriving in Sparta the day after he left. Then, following the battle, the Athenian army marched the 40 kilometers (25 miles) or so back to Athens at a very high pace (considering the quantity of armour and the fatigue after the battle), in order to head off the Persian force sailing around Cape Sounion. They arrived back in the late afternoon, in time to see the Persian ships turn away from Athens, thus completing the Athenian victory. Later, in popular imagination, these two events were conflated, leading to a legendary but inaccurate version of events. This myth has Pheidippides running from Marathon to Athens after the battle to announce the Greek victory with the word "nenikēkamen!" (Attic: νενικήκαμεν; "we've won!"), whereupon he promptly died of exhaustion. Most accounts incorrectly attribute this story to Herodotus; actually, the story first appears in Plutarch's "On the Glory of Athens" in the 1st century AD, quoting from Heracleides of Pontus's lost work and giving the runner's name as either Thersipus of Erchius or Eucles. Lucian of Samosata (2nd century AD) gives the same story but names the runner Philippides (not Pheidippides). In some medieval codices of Herodotus, the name of the runner between Athens and Sparta before the battle is given as Philippides, and this name is also preferred in a few modern editions. When the idea of a modern Olympics became a reality at the end of the 19th century, the initiators and organizers were looking for a great popularizing event, recalling the ancient glory of Greece. The idea of organizing a "marathon race" came from Michel Bréal, who wanted the event to feature in the first modern Olympic Games in 1896 in Athens. This idea was heavily supported by Pierre de Coubertin, the founder of the modern Olympics, as well as by the Greeks. This would echo the legendary version of events, with the competitors running from Marathon to Athens. 
So popular was this event that it quickly caught on, becoming a fixture at the Olympic Games, with major cities staging their own annual events. The distance eventually became fixed at 26 miles 385 yards, or 42.195 km, though for the first years it was variable, being around 40 kilometers (25 miles)—the approximate distance from Marathon to Athens.
https://en.wikipedia.org/wiki?curid=4806
Balance of trade The balance of trade, commercial balance, or net exports (sometimes symbolized as NX), is the difference between the monetary value of a nation's exports and imports over a certain time period. Sometimes a distinction is made between a balance of trade for goods and one for services. The balance of trade measures a flow of exports and imports over a given period of time. The notion of the balance of trade does not mean that exports and imports are "in balance" with each other. If a country exports a greater value than it imports, it has a trade surplus or positive trade balance; conversely, if a country imports a greater value than it exports, it has a trade deficit or negative trade balance. As of 2016, about 60 out of 200 countries have a trade surplus. The notion that bilateral trade deficits are bad in and of themselves is overwhelmingly rejected by trade experts and economists. The balance of trade forms part of the current account, which includes other transactions such as income from the net international investment position as well as international aid. If the current account is in surplus, the country's net international asset position increases correspondingly; equally, a deficit decreases the net international asset position. The trade balance is identical to the difference between a country's output and its domestic demand (the difference between what goods a country produces and how many goods it buys from abroad; this does not include money re-spent on foreign stock, nor does it factor in the concept of importing goods to produce for the domestic market). Measuring the balance of trade can be problematic because of problems with recording and collecting data. As an illustration of this problem, when official data for all the world's countries are added up, exports exceed imports by almost 1%; it appears the world is running a positive balance of trade with itself. This cannot be true, because all transactions involve an equal credit or debit in the account of each nation. The discrepancy is widely believed to be explained by transactions intended to launder money or evade taxes, by smuggling, and by other visibility problems. While the accuracy of developing countries' statistics might be suspect, most of the discrepancy actually occurs between developed countries with trusted statistics. Various factors can affect the balance of trade, and the trade balance is likely to differ across the business cycle. In export-led growth (such as oil and early industrial goods), the balance of trade will shift towards exports during an economic expansion. However, with domestic demand-led growth (as in the United States and Australia) the trade balance will shift towards imports at the same stage in the business cycle. The monetary balance of trade is different from the physical balance of trade (which is expressed in amount of raw materials, known also as Total Material Consumption). Developed countries usually import a substantial amount of raw materials from developing countries. Typically, these imported materials are transformed into finished products and might be exported after adding value. Financial trade balance statistics conceal material flows. Most developed countries have a large physical trade deficit because they consume more raw materials than they produce. Many civil society organisations claim this imbalance is predatory and campaign for ecological debt repayment. 
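In symbols, the definition at the head of this article reduces to a simple identity. Writing X for the monetary value of exports and M for the monetary value of imports over the period in question:

```latex
NX = X - M, \qquad
\begin{cases}
NX > 0 & \text{trade surplus (positive trade balance)} \\
NX < 0 & \text{trade deficit (negative trade balance)}
\end{cases}
```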
Many countries in early modern Europe adopted a policy of mercantilism, which theorized that a trade surplus was beneficial to a country, among other elements such as colonialism and trade barriers with other countries and their colonies. (Bullionism was an early philosophy supporting mercantilism.) The practices and abuses of mercantilism led the natural resources and cash crops of British North America to be exported in exchange for finished goods from Great Britain, a factor leading to the American Revolution. An early statement appeared in "Discourse of the Common Wealth of this Realm of England", 1549: "We must always take heed that we buy no more from strangers than we sell them, for so should we impoverish ourselves and enrich them." Similarly, a systematic and coherent explanation of the balance of trade was made public through Thomas Mun's 1630 "England's Treasure by Foreign Trade, or, The Balance of our Foreign Trade is the Rule of our Treasure". Since the mid-1980s, the United States has had a growing deficit in tradeable goods, especially with Asian nations (China and Japan), which now hold large sums of U.S. debt that has in part funded the consumption. The U.S. has a trade surplus with nations such as Australia. The issue of trade deficits can be complex: trade deficits generated in tradeable goods such as manufactured goods or software may impact domestic employment to different degrees than do trade deficits in raw materials. Economies that have savings surpluses, such as Japan and Germany, typically run trade surpluses. China, a high-growth economy, has tended to run trade surpluses. A higher savings rate generally corresponds to a trade surplus; correspondingly, the U.S., with its lower savings rate, has tended to run high trade deficits, especially with Asian nations. Some have said that China pursues a mercantilist economic policy. Russia pursues a policy based on protectionism, according to which international trade is not a "win-win" game but a zero-sum game: surplus countries get richer at the expense of deficit countries. In March 2019, Armenia recorded a trade deficit of US$203.90 million. The Armenian trade balance has been negative for the last two decades, reaching an all-time high of −US$33.98 million in August 2003. The deficit persists because Armenia's foreign trade is limited by its landlocked location and border disputes with Turkey and Azerbaijan, to the west and east respectively, so the country regularly reports high trade deficits. The notion that bilateral trade deficits are bad in and of themselves is overwhelmingly rejected by trade experts and economists. According to the IMF, trade deficits can cause a balance of payments problem, which can lead to foreign exchange shortages and hurt countries. On the other hand, Joseph Stiglitz points out that countries running surpluses exert a "negative externality" on trading partners, and pose a threat to global prosperity far more than those in deficit. Ben Bernanke argues that "persistent imbalances within the euro zone are... unhealthy, as they lead to financial imbalances as well as to unbalanced growth. The fact that Germany is selling so much more than it is buying redirects demand from its neighbors (as well as from other countries around the world), reducing output and employment outside Germany." 
A 2018 National Bureau of Economic Research paper by economists at the International Monetary Fund and the University of California, Berkeley, found in a study of 151 countries over 1963–2014 that the imposition of tariffs had little effect on the trade balance. In the last few years of his life, John Maynard Keynes was much preoccupied with the question of balance in international trade. He was the leader of the British delegation to the United Nations Monetary and Financial Conference in 1944 that established the Bretton Woods system of international currency management, and he was the principal author of a proposal – the so-called Keynes Plan – for an International Clearing Union. The two governing principles of the plan were that the problem of settling outstanding balances should be solved by 'creating' additional 'international money', and that debtor and creditor should be treated almost alike as disturbers of equilibrium. In the event, though, the plans were rejected, in part because "American opinion was naturally reluctant to accept the principle of equality of treatment so novel in debtor-creditor relationships". The system Keynes proposed was founded not on free trade (the liberalisation of foreign trade) but on the regulation of international trade, in order to eliminate trade imbalances: nations with a surplus would have a powerful incentive to get rid of it, and in doing so they would automatically clear other nations' deficits. He proposed a global bank that would issue its own currency – the bancor – which was exchangeable with national currencies at fixed rates of exchange and would become the unit of account between nations, which means it would be used to measure a country's trade deficit or trade surplus. Every country would have an overdraft facility in its bancor account at the International Clearing Union. He pointed out that surpluses lead to weak global aggregate demand – countries running surpluses exert a "negative externality" on trading partners, and pose a far greater threat to global prosperity than those in deficit. In "National Self-Sufficiency" (The Yale Review, vol. 22, no. 4, June 1933), he had already highlighted the problems created by free trade. His view, supported by many economists and commentators at the time, was that creditor nations may be just as responsible as debtor nations for disequilibrium in exchanges, and that both should be under an obligation to bring trade back into a state of balance. Failure to do so could have serious consequences. In the words of Geoffrey Crowther, then editor of The Economist, "If the economic relationships between nations are not, by one means or another, brought fairly close to balance, then there is no set of financial arrangements that can rescue the world from the impoverishing results of chaos." These ideas were informed by events prior to the Great Depression when – in the opinion of Keynes and others – international lending, primarily by the U.S., exceeded the capacity of sound investment and so got diverted into non-productive and speculative uses, which in turn invited default and a sudden stop to the process of lending. Influenced by Keynes, economics texts in the immediate post-war period put a significant emphasis on balance in trade. For example, the second edition of the popular introductory textbook "An Outline of Money" devoted the last three of its ten chapters to questions of foreign exchange management and in particular the 'problem of balance'. 
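The bookkeeping at the heart of the Clearing Union proposal can be illustrated with a toy ledger. The sketch below is illustrative only: the country labels and trade values are invented, and real features of the Keynes Plan such as overdraft limits and charges on persistent creditors are omitted. What it shows is the basic property that every export credits one bancor account and debits another, so balances across the union always net to zero.

```python
# Toy ledger for an International Clearing Union settled in bancor.
# Illustrative only: country names and trade values are invented.
from collections import defaultdict

def settle(trades):
    """trades: iterable of (exporter, importer, value_in_bancor).
    Each trade credits the exporter's account and debits the
    importer's account (its overdraft facility) by the same amount."""
    balances = defaultdict(float)
    for exporter, importer, value in trades:
        balances[exporter] += value
        balances[importer] -= value
    return dict(balances)

trades = [
    ("Anglia", "Borduria", 120.0),  # Anglia exports 120 bancor of goods
    ("Borduria", "Cisalpina", 80.0),
    ("Cisalpina", "Anglia", 100.0),
]
balances = settle(trades)
print(balances)  # {'Anglia': 20.0, 'Borduria': -40.0, 'Cisalpina': 20.0}

# Credits and debits are created in equal measure, so the union as a
# whole always nets to zero: one country's surplus is another's deficit.
assert abs(sum(balances.values())) < 1e-9
```

Under the plan's two governing principles, both Borduria (the debtor here) and the surplus countries would face pressure to bring their balances back towards zero.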
However, in more recent years, since the end of the Bretton Woods system in 1971, with the increasing influence of monetarist schools of thought in the 1980s, and particularly in the face of large sustained trade imbalances, these concerns – and particularly concerns about the destabilising effects of large trade surpluses – have largely disappeared from mainstream economics discourse, and Keynes' insights have slipped from view. They have been receiving some attention again in the wake of the financial crisis of 2007–08. Prior to 20th-century monetarist theory, the 19th-century economist and philosopher Frédéric Bastiat expressed the idea that trade deficits actually were a manifestation of profit, rather than a loss. As an example, he supposed that he, a Frenchman, exported French wine and imported British coal, turning a profit. Supposing he was in France and sent a cask of wine worth 50 francs to England, the customhouse would record an export of 50 francs. If, in England, the wine sold for 70 francs (or the pound equivalent), with which he then bought coal that he imported into France, and the coal was found to be worth 90 francs in France, he would have made a profit of 40 francs. But the customhouse would say that the value of imports exceeded that of exports and record a trade deficit against the ledger of France. By "reductio ad absurdum", Bastiat argued that a national trade deficit was an indicator of a successful economy, rather than a failing one: he predicted that a successful, growing economy would result in greater trade deficits, and an unsuccessful, shrinking economy in lower trade deficits. This view was echoed in the 20th century by the economist Milton Friedman. In the 1980s, Friedman, a Nobel Memorial Prize-winning economist and a proponent of monetarism, contended that some of the concerns over trade deficits are unfair criticisms made in an attempt to push macroeconomic policies favorable to exporting industries. Friedman argued that trade deficits are not necessarily important, as high exports raise the value of the currency, thereby reducing those exports, and vice versa for imports, thus naturally removing trade deficits "not due to investment". Since 1971, when the Nixon administration decided to abolish fixed exchange rates, America's accumulated current account deficits have totaled $7.75 trillion as of 2010. This deficit exists because it is matched by investment coming into the United States – purely by the definition of the balance of payments, any current account deficit that exists is matched by an inflow of foreign investment. In the late 1970s and early 1980s, the U.S. had experienced high inflation, and Friedman's policy positions tended to defend the stronger dollar at that time. He stated his belief that these trade deficits were not necessarily harmful to the economy at the time, since the currency comes back to the country (country A sells to country B, country B sells to country C, who buys from country A, but the trade deficit only includes A and B). However, the returning currency may take one form or another, including the possible tradeoff of foreign control of assets. In his view, the "worst-case scenario" of the currency never returning to the country of origin was actually the best possible outcome: the country would in effect have purchased its goods by exchanging them for pieces of cheaply made paper. As Friedman put it, this would be the same result as if the exporting country burned the dollars it earned, never returning them to market circulation. 
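Setting Bastiat's wine-and-coal example out as arithmetic makes the point plain: the merchant's ledger and the customhouse ledger record the same round trip with opposite signs.

```latex
\text{merchant's profit} = \underbrace{90}_{\text{coal's value in France}} - \underbrace{50}_{\text{wine exported}} = +40 \text{ francs}
\qquad
\text{recorded trade balance} = \underbrace{50}_{\text{exports}} - \underbrace{90}_{\text{imports}} = -40 \text{ francs}
```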
Friedman's position was a more refined version of the theorem first discovered by David Hume. Hume argued that England could not permanently gain from exports, because hoarding gold (i.e., currency) would make gold more plentiful in England; therefore, the prices of English goods would rise, making them less attractive exports and making foreign goods more attractive imports. In this way, countries' trade balances would balance out. Friedman presented his analysis of the balance of trade in "Free to Choose", widely considered his most significant popular work. Exports directly increase and imports directly reduce a nation's balance of trade (i.e. net exports). A trade surplus is a positive net balance of trade, and a trade deficit is a negative net balance of trade. Because the balance of trade is explicitly added into the expenditure-method calculation of a nation's gross domestic product (GDP), trade surpluses are contributions to, and trade deficits are "drags" upon, a nation's GDP.
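In the standard expenditure method referred to here, net exports enter the national accounts as an additive term, which is why a surplus raises and a deficit lowers measured GDP:

```latex
GDP = C + I + G + NX = C + I + G + (X - M)
```

where C is private consumption, I is gross investment, and G is government spending.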
https://en.wikipedia.org/wiki?curid=4810
Biosphere The biosphere (from Greek βίος "bíos" "life" and σφαῖρα "sphaira" "sphere"), also known as the ecosphere (from Greek οἶκος "oîkos" "environment" and σφαῖρα), is the worldwide sum of all ecosystems. It can also be termed the zone of life on Earth: a closed system (apart from solar and cosmic radiation and heat from the interior of the Earth), and one that is largely self-regulating. By the most general biophysiological definition, the biosphere is the global ecological system integrating all living beings and their relationships, including their interaction with the elements of the lithosphere, geosphere, hydrosphere, and atmosphere. The biosphere is postulated to have evolved, beginning with a process of biopoiesis (life created naturally from non-living matter, such as simple organic compounds) or biogenesis (life created from living matter), at least some 3.5 billion years ago. In a general sense, biospheres are any closed, self-regulating systems containing ecosystems. This includes artificial biospheres such as Biosphere 2 and BIOS-3, and potentially ones on other planets or moons. The term "biosphere" was coined by the geologist Eduard Suess in 1875, who defined it as the place on Earth's surface where life dwells. While the concept has a geological origin, it is an indication of the effect of both Charles Darwin and Matthew F. Maury on the Earth sciences. The biosphere's ecological context comes from the 1920s (see Vladimir I. Vernadsky), preceding the 1935 introduction of the term "ecosystem" by Sir Arthur Tansley (see ecology history). Vernadsky defined ecology as the science of the biosphere. It is an interdisciplinary concept for integrating astronomy, geophysics, meteorology, biogeography, evolution, geology, geochemistry, hydrology and, generally speaking, all life and Earth sciences. Geochemists define the biosphere as the total sum of living organisms (the "biomass" or "biota" as referred to by biologists and ecologists). In this sense, the biosphere is but one of four separate components of the geochemical model, the other three being the "geosphere", "hydrosphere", and "atmosphere". When these four component spheres are combined into one system, it is known as the ecosphere. This term was coined during the 1960s and encompasses both the biological and the physical components of the planet. The Second International Conference on Closed Life Systems defined "biospherics" as the science and technology of analogs and models of Earth's biosphere, i.e., artificial Earth-like biospheres. Others may include the creation of artificial non-Earth biospheres—for example, human-centered biospheres or a native Martian biosphere—as part of the topic of biospherics. The earliest evidence for life on Earth includes biogenic graphite found in 3.7 billion-year-old metasedimentary rocks from Western Greenland and microbial mat fossils found in 3.48 billion-year-old sandstone from Western Australia. More recently, in 2015, "remains of biotic life" were found in 4.1 billion-year-old rocks in Western Australia. In 2017, putative fossilized microorganisms (or microfossils) were announced to have been discovered in hydrothermal vent precipitates in the Nuvvuagittuq Belt of Quebec, Canada, that were as old as 4.28 billion years, the oldest record of life on Earth, suggesting "an almost instantaneous emergence of life" after ocean formation 4.4 billion years ago, and not long after the formation of the Earth 4.54 billion years ago. 
According to biologist Stephen Blair Hedges, "If life arose relatively quickly on Earth ... then it could be common in the universe." Every part of the planet, from the polar ice caps to the equator, features life of some kind. Recent advances in microbiology have demonstrated that microbes live deep beneath the Earth's terrestrial surface, and that the total mass of microbial life in so-called "uninhabitable zones" may, in biomass, exceed all animal and plant life on the surface. The actual thickness of the biosphere on Earth is difficult to measure. Birds typically fly at altitudes as high as and fish live as much as underwater in the Puerto Rico Trench. There are more extreme examples of life on the planet: Rüppell's vulture has been found at altitudes of 11,300 m; bar-headed geese migrate at altitudes of at least ; yaks live at elevations as high as above sea level; mountain goats live up to . Herbivorous animals at these elevations depend on lichens, grasses, and herbs. Life forms live in every part of the Earth's biosphere, including soil, hot springs, inside rocks at least deep underground, the deepest parts of the ocean, and at least high in the atmosphere. Microorganisms, under certain test conditions, have been observed to survive the vacuum of outer space. The total amount of soil and subsurface bacterial carbon is estimated as 5 × 10¹⁷ g, or the "weight of the United Kingdom". The mass of prokaryote microorganisms—which includes bacteria and archaea, but not the nucleated eukaryote microorganisms—may be as much as 0.8 trillion tons of carbon (of the total biosphere mass, estimated at between 1 and 4 trillion tons). Barophilic marine microbes have been found at depths of more than 10,000 m in the Mariana Trench, the deepest spot in the Earth's oceans. In fact, single-celled life forms have been found in the deepest part of the Mariana Trench, the Challenger Deep, at depths of about 11,000 m. Other researchers reported related studies that microorganisms thrive inside rocks up to below the sea floor under of ocean off the coast of the northwestern United States, as well as beneath the seabed off Japan. Culturable thermophilic microbes have been extracted from cores drilled more than into the Earth's crust in Sweden, from rocks between . Temperature increases with increasing depth into the Earth's crust. The rate at which the temperature increases depends on many factors, including the type of crust (continental vs. oceanic), rock type, geographic location, etc. The greatest known temperature at which microbial life can exist is 122 °C ("Methanopyrus kandleri" Strain 116), and it is likely that the limit of life in the "deep biosphere" is defined by temperature rather than absolute depth. On 20 August 2014, scientists confirmed the existence of microorganisms living below the ice of Antarctica. According to one researcher, "You can find microbes everywhere – they're extremely adaptable to conditions, and survive wherever they are." The biosphere is divided into a number of biomes, inhabited by fairly similar flora and fauna. On land, biomes are separated primarily by latitude. Terrestrial biomes lying within the Arctic and Antarctic Circles are relatively barren of plant and animal life, while most of the more populous biomes lie near the equator. Experimental biospheres, also called closed ecological systems, have been created to study ecosystems and the potential for supporting life outside the Earth. 
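The remark above that temperature, rather than depth, probably bounds the deep biosphere can be made concrete with a back-of-the-envelope calculation. The figures below are assumptions, not values from this article: a mean surface temperature of about 15 °C and a typical continental geothermal gradient of roughly 25 °C per kilometre.

```python
# Back-of-the-envelope depth ceiling for the "deep biosphere".
# ASSUMED inputs (typical textbook values, not taken from this article);
# real geothermal gradients vary widely with crust type and location.
T_SURFACE_C = 15.0        # assumed mean surface temperature, deg C
GRADIENT_C_PER_KM = 25.0  # assumed continental geothermal gradient
T_LIMIT_C = 122.0         # hottest known microbial growth (Methanopyrus kandleri Strain 116)

depth_km = (T_LIMIT_C - T_SURFACE_C) / GRADIENT_C_PER_KM
print(f"Thermal limit reached at roughly {depth_km:.1f} km")  # ~4.3 km
```

On a steeper gradient the ceiling would be correspondingly shallower, which is why temperature, not absolute depth, sets the limit.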
Such experimental biospheres include spacecraft as well as terrestrial laboratories such as Biosphere 2 and BIOS-3. No biospheres have been detected beyond the Earth; therefore, the existence of extraterrestrial biospheres remains hypothetical. The rare Earth hypothesis suggests they should be very rare, save for ones composed of microbial life only. On the other hand, Earth analogs may be quite numerous, at least in the Milky Way galaxy, given the large number of planets. Three of the planets discovered orbiting TRAPPIST-1 could possibly contain biospheres. Given the limited understanding of abiogenesis, it is currently unknown what percentage of these planets actually develop biospheres. Based on observations by the Kepler Space Telescope team, it has been calculated that, provided the probability of abiogenesis is higher than 1 in 1000, the closest alien biosphere should be within 100 light-years of Earth. It is also possible that artificial biospheres will be created in the future, for example with the terraforming of Mars.
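The order of magnitude of that Kepler-based estimate can be reproduced from first principles, under simplifying assumptions that are mine rather than the article's: that candidate systems are distributed like stars in the solar neighbourhood, at roughly 0.004 stars per cubic light-year, that each could in principle host a suitable planet, and that a fraction p of them actually develops a biosphere. Requiring one expected biosphere inside a sphere of radius d gives the sketch below.

```python
# Rough expected distance to the nearest biosphere.
# ASSUMPTIONS (mine, not the article's): stellar number density in the
# solar neighbourhood ~0.004 per cubic light-year; one potentially
# suitable planet per star; abiogenesis probability p per planet.
import math

STAR_DENSITY_PER_LY3 = 0.004
P_ABIOGENESIS = 1e-3  # the 1-in-1000 threshold quoted in the text

# Solve (4/3) * pi * d**3 * density * p = 1 for d.
d_ly = (3.0 / (4.0 * math.pi * STAR_DENSITY_PER_LY3 * P_ABIOGENESIS)) ** (1.0 / 3.0)
print(f"Nearest biosphere expected within ~{d_ly:.0f} light-years")  # ~39 ly
```

With these inputs the expected distance comes out near 40 light-years, comfortably inside the quoted 100 light-year bound; the cube-root dependence means even a hundredfold smaller p only pushes the distance out by a factor of about 4.6.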
https://en.wikipedia.org/wiki?curid=4816
Biological membrane A biological membrane, biomembrane or cell membrane is a selectively permeable membrane that separates a cell from the external environment or creates intracellular compartments. Biological membranes, in the form of eukaryotic cell membranes, consist of a phospholipid bilayer with embedded integral and peripheral proteins used in communication and the transport of chemicals and ions. The bulk of lipid in a cell membrane provides a fluid matrix in which proteins can rotate and laterally diffuse for physiological functioning. Proteins are adapted to the high-fluidity environment of the lipid bilayer by the presence of an annular lipid shell, consisting of lipid molecules bound tightly to the surface of integral membrane proteins. Cell membranes are different from the isolating tissues formed by layers of cells, such as mucous membranes, basement membranes, and serous membranes. The lipid bilayer consists of two layers: an outer leaflet and an inner leaflet. The components of the bilayer are distributed unequally between the two surfaces, creating asymmetry between the outer and inner surfaces. This asymmetric organization is important for cell functions such as cell signaling. The asymmetry of the biological membrane reflects the different functions of the two leaflets of the membrane. As seen in the fluid membrane model of the phospholipid bilayer, the outer leaflet and inner leaflet of the membrane are asymmetrical in their composition: certain proteins and lipids rest only on one surface of the membrane and not the other. Both the plasma membrane and internal membranes have cytosolic and exoplasmic faces, and this orientation is maintained during membrane trafficking: proteins, lipids, and glycoconjugates facing the lumen of the ER and Golgi get expressed on the extracellular side of the plasma membrane. In eukaryotic cells, new phospholipids are manufactured by enzymes bound to the part of the endoplasmic reticulum membrane that faces the cytosol. These enzymes, which use free fatty acids as substrates, deposit all newly made phospholipids into the cytosolic half of the bilayer. To enable the membrane as a whole to grow evenly, half of the new phospholipid molecules then have to be transferred to the opposite monolayer. This transfer is catalyzed by enzymes called flippases. In the plasma membrane, flippases transfer specific phospholipids selectively, so that different types become concentrated in each monolayer. Using selective flippases is not the only way to produce asymmetry in lipid bilayers, however. In particular, a different mechanism operates for glycolipids—the lipids that show the most striking and consistent asymmetric distribution in animal cells. The biological membrane is made up of lipids with hydrophobic tails and hydrophilic heads. The hydrophobic tails are hydrocarbon tails whose length and saturation are important in characterizing the cell. Lipid rafts occur when lipid species and proteins aggregate in domains in the membrane. These help organize membrane components into localized areas that are involved in specific processes, such as signal transduction. Red blood cells, or erythrocytes, have a unique lipid composition: their bilayer is composed of cholesterol and phospholipids in equal proportions by weight. The erythrocyte membrane plays a crucial role in blood clotting. In the bilayer of red blood cells is phosphatidylserine, which usually sits on the cytoplasmic side of the membrane. 
However, it is flipped to the outer leaflet to be used during blood clotting. Phospholipid bilayers contain different proteins. These membrane proteins have various functions and characteristics and catalyze different chemical reactions. Integral proteins span the membrane, with different domains on either side. Integral proteins hold a strong association with the lipid bilayer and cannot easily become detached; they will dissociate only with chemical treatment that breaks the membrane. Peripheral proteins are unlike integral proteins in that they hold weak interactions with the surface of the bilayer and can easily become dissociated from the membrane. Peripheral proteins are located on only one face of a membrane and so create membrane asymmetry. Oligosaccharides are sugar-containing polymers. In the membrane, they can be covalently bound to lipids to form glycolipids or covalently bound to proteins to form glycoproteins. Membranes contain sugar-containing lipid molecules known as glycolipids. In the bilayer, the sugar groups of glycolipids are exposed at the cell surface, where they can form hydrogen bonds. Glycolipids provide the most extreme example of asymmetry in the lipid bilayer. Glycolipids perform a vast number of functions in the biological membrane that are mainly communicative, including cell recognition and cell-cell adhesion. Glycoproteins are integral proteins; they play an important role in the immune response and protection. The phospholipid bilayer is formed by the aggregation of membrane lipids in aqueous solutions. Aggregation is caused by the hydrophobic effect: the hydrophobic ends come into contact with each other and are sequestered away from water. This arrangement maximises hydrogen bonding between the hydrophilic heads and water while minimising unfavorable contact between the hydrophobic tails and water; the release of ordered water molecules from around the hydrophobic tails increases the entropy of the system, making aggregation a spontaneous process. Membrane-forming biological molecules are amphiphilic, or amphipathic, i.e. simultaneously hydrophobic and hydrophilic. The phospholipid bilayer contains charged hydrophilic headgroups, which interact with polar water. The layers also contain hydrophobic tails, which meet the hydrophobic tails of the complementary layer. The hydrophobic tails are usually fatty acids that differ in length. The interactions of the lipids, especially the hydrophobic tails, determine such physical properties of the lipid bilayer as fluidity. Membranes in cells typically define enclosed spaces or compartments in which cells may maintain a chemical or biochemical environment that differs from the outside. For example, the membrane around peroxisomes shields the rest of the cell from peroxides, chemicals that can be toxic to the cell, and the cell membrane separates a cell from its surrounding medium. Peroxisomes are one form of vacuole found in the cell that contains by-products of chemical reactions within the cell. Most organelles are defined by such membranes and are called "membrane-bound" organelles. Probably the most important feature of a biomembrane is that it is a selectively permeable structure. This means that the size, charge, and other chemical properties of the atoms and molecules attempting to cross it will determine whether they succeed in doing so. Selective permeability is essential for effective separation of a cell or organelle from its surroundings. Biological membranes also have certain mechanical or elastic properties that allow them to change shape and move as required. 
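Selective permeability of the kind just described is commonly quantified with a permeability coefficient; the standard relation below is supplied for context and is not stated in the text. For passive transport of an uncharged solute, the net flux J across the membrane is proportional to the concentration difference:

```latex
J = P \, (C_{\text{out}} - C_{\text{in}})
```

Here P is the permeability coefficient of the solute (larger for small, hydrophobic molecules, smaller for large or charged ones), and C_out and C_in are its concentrations on the two sides of the membrane.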
Generally, small hydrophobic molecules can readily cross phospholipid bilayers by simple diffusion. Particles that are required for cellular function but are unable to diffuse freely across a membrane enter through a membrane transport protein or are taken in by means of endocytosis, where the membrane allows for a vacuole to join onto it and push its contents into the cell. Many types of specialized plasma membranes can separate the cell from the external environment: apical, basolateral, presynaptic and postsynaptic ones, membranes of flagella, cilia, microvilli, filopodia and lamellipodia, the sarcolemma of muscle cells, as well as specialized myelin and dendritic spine membranes of neurons. Plasma membranes can also form different types of "supramembrane" structures such as caveolae, postsynaptic density, podosome, invadopodium, desmosome, hemidesmosome, focal adhesion, and cell junctions. These types of membranes differ in lipid and protein composition. Distinct types of membranes also create intracellular organelles: endosome; smooth and rough endoplasmic reticulum; sarcoplasmic reticulum; Golgi apparatus; lysosome; mitochondrion (inner and outer membranes); nucleus (inner and outer membranes); peroxisome; vacuole; cytoplasmic granules; cell vesicles (phagosome, autophagosome, clathrin-coated vesicles, COPI-coated and COPII-coated vesicles) and secretory vesicles (including synaptosome, acrosomes, melanosomes, and chromaffin granules). Different types of biological membranes have diverse lipid and protein compositions. The content of membranes defines their physical and biological properties. Some components of membranes play a key role in medicine, such as the efflux pumps that pump drugs out of a cell. The hydrophobic core of the phospholipid bilayer is constantly in motion because of rotations around the bonds of lipid tails. The hydrophobic tails of a bilayer bend and lock together. However, because of hydrogen bonding with water, the hydrophilic head groups exhibit less movement as their rotation and mobility are constrained. This results in increasing viscosity of the lipid bilayer closer to the hydrophilic heads. Below a transition temperature, a lipid bilayer loses fluidity as the highly mobile lipids exhibit less movement, becoming a gel-like solid. The transition temperature depends on such components of the lipid bilayer as the hydrocarbon chain length and the saturation of its fatty acids. Temperature-dependent fluidity constitutes an important physiological attribute for bacteria and cold-blooded organisms. These organisms maintain a constant fluidity by modifying membrane lipid fatty acid composition in accordance with differing temperatures. In animal cells, membrane fluidity is modulated by the inclusion of the sterol cholesterol. This molecule is present in especially large amounts in the plasma membrane, where it constitutes approximately 20% of the lipids in the membrane by weight. Because cholesterol molecules are short and rigid, they fill the spaces between neighboring phospholipid molecules left by the kinks in their unsaturated hydrocarbon tails. In this way, cholesterol tends to stiffen the bilayer, making it more rigid and less permeable. For all cells, membrane fluidity is important for many reasons. It enables membrane proteins to diffuse rapidly in the plane of the bilayer and to interact with one another, as is crucial, for example, in cell signaling.
It permits membrane lipids and proteins to diffuse from sites where they are inserted into the bilayer after their synthesis to other regions of the cell. It allows membranes to fuse with one another and mix their molecules, and it ensures that membrane molecules are distributed evenly between daughter cells when a cell divides. If biological membranes were not fluid, it is hard to imagine how cells could live, grow, and reproduce.
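As a hedged illustration of the selective-permeability point made earlier (the relation below is the standard solubility–diffusion model and is not given in the article), the passive permeability of a bilayer to a small uncharged solute can be estimated as

\[ P \approx \frac{K\,D}{d}, \qquad J = P\,(C_{\text{out}} - C_{\text{in}}) \]

where \( P \) is the permeability coefficient, \( K \) the membrane/water partition coefficient of the solute, \( D \) its diffusion coefficient within the bilayer, \( d \) the bilayer thickness, \( J \) the net flux across the membrane, and \( C_{\text{out}} \), \( C_{\text{in}} \) the concentrations on either side. Hydrophobic molecules have large \( K \) and therefore cross readily, while charged or large polar solutes have very small \( K \) and must rely on transport proteins or endocytosis, consistent with the description above.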
https://en.wikipedia.org/wiki?curid=4817
Balfour Declaration The Balfour Declaration was a public statement issued by the British government in 1917 during the First World War announcing support for the establishment of a "national home for the Jewish people" in Palestine, then an Ottoman region with a small minority Jewish population. The declaration was contained in a letter dated 2 November 1917 from the United Kingdom's Foreign Secretary Arthur Balfour to Lord Rothschild, a leader of the British Jewish community, for transmission to the Zionist Federation of Great Britain and Ireland. The text of the declaration was published in the press on 9 November 1917. Immediately following their declaration of war on the Ottoman Empire in November 1914, the British War Cabinet began to consider the future of Palestine; within two months a memorandum was circulated to the Cabinet by a Zionist Cabinet member, Herbert Samuel, proposing the support of Zionist ambitions in order to enlist the support of Jews in the wider war. A committee was established in April 1915 by British Prime Minister H. H. Asquith to determine their policy toward the Ottoman Empire including Palestine. Asquith, who had favoured post-war reform of the Ottoman Empire, resigned in December 1916; his replacement, David Lloyd George, favoured partition of the Empire. The first negotiations between the British and the Zionists took place at a conference on 7 February 1917 that included Sir Mark Sykes and the Zionist leadership. Subsequent discussions led to Balfour's request, on 19 June, that Rothschild and Chaim Weizmann submit a draft of a public declaration. Further drafts were discussed by the British Cabinet during September and October, with input from Zionist and anti-Zionist Jews but with no representation from the local population in Palestine. By late 1917, in the lead-up to the Balfour Declaration, the wider war had reached a stalemate, with two of Britain's allies not fully engaged: the United States had yet to suffer a casualty, and the Russians were in the midst of a revolution with the Bolsheviks taking over the government. A stalemate in southern Palestine was broken by the Battle of Beersheba on 31 October 1917. The release of the final declaration was authorised on 31 October; the preceding Cabinet discussion had referenced perceived propaganda benefits amongst the worldwide Jewish community for the Allied war effort. The opening words of the declaration represented the first public expression of support for Zionism by a major political power. The term "national home" had no precedent in international law, and was intentionally vague as to whether a Jewish state was contemplated. The intended boundaries of Palestine were not specified, and the British government later confirmed that the words "in Palestine" meant that the Jewish national home was not intended to cover all of Palestine. The second half of the declaration was added to satisfy opponents of the policy, who had claimed that it would otherwise prejudice the position of the local population of Palestine and encourage antisemitism worldwide by "stamping the Jews as strangers in their native lands". The declaration called for safeguarding the civil and religious rights of the Palestinian Arabs, who composed the vast majority of the local population, and also the rights and political status of the Jewish communities in other countries outside of Palestine.
The British government acknowledged in 1939 that the local population's views should have been taken into account, and recognised in 2017 that the declaration should have called for protection of the Palestinian Arabs' political rights. The declaration had many long-lasting consequences. It greatly increased popular support for Zionism within Jewish communities worldwide, and became a core component of the British Mandate for Palestine, the founding document of Mandatory Palestine, which later became Israel and the Palestinian territories. As a result, it is considered a principal cause of the ongoing Israeli–Palestinian conflict, often described as the world's most intractable conflict. Controversy remains over a number of areas, such as whether the declaration contradicted earlier promises the British made to the Sharif of Mecca in the McMahon–Hussein correspondence. Early British political support for an increased Jewish presence in the region of Palestine was based upon geopolitical calculations. This support began in the early 1840s and was led by Lord Palmerston, following the occupation of Syria and Palestine by separatist Ottoman governor Muhammad Ali of Egypt. French influence had grown in Palestine and the wider Middle East, and its role as protector of the Catholic communities began to grow, just as Russian influence had grown as protector of the Eastern Orthodox in the same regions. This left Britain without a sphere of influence, and thus a need to find or create their own regional "protégés". These political considerations were supported by a sympathetic evangelical Christian sentiment towards the "restoration of the Jews" to Palestine among elements of the mid-19th-century British political elite – most notably Lord Shaftesbury. The British Foreign Office actively encouraged Jewish emigration to Palestine, exemplified by Charles Henry Churchill's 1841–1842 exhortations to Moses Montefiore, the leader of the British Jewish community. Such efforts were premature, and did not succeed; only 24,000 Jews were living in Palestine on the eve of the emergence of Zionism within the world's Jewish communities in the last two decades of the 19th century. With the geopolitical shakeup occasioned by the outbreak of the First World War, the earlier calculations, which had lapsed for some time, led to a renewal of strategic assessments and political bargaining over the Middle and Far East. Zionism arose in the late 19th century in reaction to anti-Semitic and exclusionary nationalist movements in Europe. Romantic nationalism in Central and Eastern Europe had helped to set off the Haskalah, or "Jewish Enlightenment", creating a split in the Jewish community between those who saw Judaism as their religion and those who saw it as their ethnicity or nation. The 1881–1884 anti-Jewish pogroms in the Russian Empire encouraged the growth of the latter identity, resulting in the formation of the Hovevei Zion pioneer organizations, the publication of Leon Pinsker's "Autoemancipation", and the first major wave of Jewish immigration to Palestine – retrospectively named the "First Aliyah". In 1896, Theodor Herzl, a Jewish journalist living in Austria-Hungary, published the foundational text of political Zionism, "Der Judenstaat" ("The Jews' State" or "The State of the Jews"), in which he asserted that the only solution to the "Jewish Question" in Europe, including growing anti-Semitism, was the establishment of a state for the Jews. 
A year later, Herzl founded the Zionist Organization, which at its first congress called for the establishment of "a home for the Jewish people in Palestine secured under public law". Proposed measures to attain that goal included the promotion of Jewish settlement there, the organisation of Jews in the diaspora, the strengthening of Jewish feeling and consciousness, and preparatory steps to attain necessary governmental grants. Herzl died in 1904, 44 years before the establishment of the State of Israel, the Jewish state that he proposed, without having gained the political standing required to carry out his agenda. Zionist leader Chaim Weizmann, later President of the World Zionist Organisation and first President of Israel, moved from Switzerland to the UK in 1904 and met Arthur Balfour – who had just launched his 1905–1906 election campaign after resigning as Prime Minister – in a session arranged by Charles Dreyfus, his Jewish constituency representative. Earlier that year, Balfour had successfully driven the Aliens Act through Parliament with impassioned speeches regarding the need to restrict the wave of immigration into Britain from Jews fleeing the Russian Empire. During this meeting, he asked what Weizmann's objections had been to the 1903 Uganda Scheme that Herzl had supported to provide a portion of British East Africa to the Jewish people as a homeland. The scheme, which had been proposed to Herzl by Joseph Chamberlain, Colonial Secretary in Balfour's Cabinet, following his trip to East Africa earlier in the year, had been subsequently voted down following Herzl's death by the Seventh Zionist Congress in 1905 after two years of heated debate in the Zionist Organization. Weizmann responded that he believed the English are to London as the Jews are to Jerusalem. In January 1914 Weizmann first met Baron Edmond de Rothschild, a member of the French branch of the Rothschild family and a leading proponent of the Zionist movement, in relation to a project to build a Hebrew university in Jerusalem. The Baron was not part of the World Zionist Organization, but had funded the Jewish agricultural colonies of the First Aliyah and transferred them to the Jewish Colonization Association in 1899. This connection was to bear fruit later that year when the Baron's son, James de Rothschild, requested a meeting with Weizmann on 25 November 1914, to enlist him in influencing those deemed to be receptive within the British government to their agenda of a "Jewish State" in Palestine. Through James's wife Dorothy, Weizmann was to meet Rózsika Rothschild, who introduced him to the English branch of the family, in particular her husband Charles and his older brother Walter, a zoologist and former member of parliament (MP). Their father, Nathan Rothschild, 1st Baron Rothschild, head of the English branch of the family, had a guarded attitude towards Zionism, but he died in March 1915 and his title was inherited by Walter. Prior to the declaration, about 8,000 of Britain's 300,000 Jews belonged to a Zionist organisation. Globally, as of 1913 – the latest known date prior to the declaration – the equivalent figure was approximately 1%. The year 1916 marked four centuries since Palestine had become part of the Ottoman Empire, also known as the Turkish Empire. For most of this period, the Jewish population represented a small minority, approximately 3% of the total, with Muslims representing the largest segment of the population, and Christians the second.
The Ottoman government in Constantinople began to apply restrictions on Jewish immigration to Palestine in late 1882, in response to the start of the First Aliyah earlier that year. Although this immigration was creating a certain amount of tension with the local population, mainly among the merchant and notable classes, in 1901 the Sublime Porte (the Ottoman central government) gave Jews the same rights as Arabs to buy land in Palestine and the percentage of Jews in the population rose to 7% by 1914. At the same time, with growing distrust of the Young Turks – Turkish nationalists who had taken control of the Empire in 1908 – and the Second Aliyah, Arab nationalism and Palestinian nationalism were on the rise, and in Palestine anti-Zionism was a unifying characteristic. Historians do not know whether these strengthening forces would still have ultimately resulted in conflict in the absence of the Balfour Declaration. In July 1914 war broke out in Europe between the Triple Entente (Britain, France, and the Russian Empire) and the Central Powers (Germany, Austria-Hungary, and, later that year, the Ottoman Empire). The British Cabinet first discussed Palestine at a meeting on 9 November 1914, four days after Britain's declaration of war on the Ottoman Empire, of which the Mutasarrifate of Jerusalem, often referred to as Palestine, was a component. At the meeting David Lloyd George, then Chancellor of the Exchequer, "referred to the ultimate destiny of Palestine". The Chancellor, whose law firm Lloyd George, Roberts and Co had been engaged a decade before by the Zionist Federation of Great Britain and Ireland to work on the Uganda Scheme, was to become Prime Minister by the time of the declaration, and was ultimately responsible for it. Weizmann's political efforts picked up speed, and on 10 December 1914 he met with Herbert Samuel, a British Cabinet member and a secular Jew who had studied Zionism; Samuel believed Weizmann's demands were too modest. Two days later, Weizmann met Balfour again, for the first time since their initial meeting in 1905; Balfour had been out of government ever since his electoral defeat in 1906, but remained a senior member of the Conservative Party in their role as Official Opposition. A month later, Samuel circulated a memorandum entitled "The Future of Palestine" to his Cabinet colleagues. The memorandum stated: "I am assured that the solution of the problem of Palestine which would be much the most welcome to the leaders and supporters of the Zionist movement throughout the world would be the annexation of the country to the British Empire". Samuel discussed a copy of his memorandum with Nathan Rothschild in February 1915, a month before the latter's death. It was the first time in an official record that enlisting the support of Jews as a war measure had been proposed. Many further discussions followed, including the initial meetings in 1915–16 between Lloyd George, who had been appointed Minister of Munitions in May 1915, and Weizmann, who was appointed as a scientific advisor to the ministry in September 1915. Seventeen years later, in his "War Memoirs", Lloyd George described these meetings as being the "fount and origin" of the declaration; historians have rejected this claim.
In late 1915 the British High Commissioner to Egypt, Henry McMahon, exchanged ten letters with Hussein bin Ali, Sharif of Mecca, in which he promised Hussein to recognize Arab independence "in the limits and boundaries proposed by the Sherif of Mecca" in return for Hussein launching a revolt against the Ottoman Empire. The pledge excluded "portions of Syria" lying to the west of "the districts of Damascus, Homs, Hama and Aleppo". In the decades after the war, the extent of this coastal exclusion was hotly disputed since Palestine lay to the southwest of Damascus and was not explicitly mentioned. The Arab Revolt was launched on 5 June 1916, on the basis of the "quid pro quo" agreement in the correspondence. However, less than three weeks earlier the governments of the United Kingdom, France, and Russia secretly concluded the Sykes–Picot Agreement, which Balfour described later as a "wholly new method" for dividing the region, after the 1915 agreement "seems to have been forgotten". This Anglo-French treaty was negotiated in late 1915 and early 1916 between Sir Mark Sykes and François Georges-Picot, with the primary arrangements being set out in draft form in a joint memorandum on 5 January 1916. Sykes was a British Conservative MP who had risen to a position of significant influence on Britain's Middle East policy, beginning with his seat on the 1915 De Bunsen Committee and his initiative to create the Arab Bureau. Picot was a French diplomat and former consul-general in Beirut. Their agreement defined the proposed spheres of influence and control in Western Asia should the Triple Entente succeed in defeating the Ottoman Empire during World War I, dividing many Arab territories into British- and French-administered areas. In Palestine, internationalisation was proposed, with the form of administration to be confirmed after consultation with both Russia and Hussein; the January draft noted Christian and Muslim interests, and that "members of the Jewish community throughout the world have a conscientious and sentimental interest in the future of the country." Prior to this point, no active negotiations with Zionists had taken place, but Sykes had been aware of Zionism, was in contact with Moses Gaster – a former President of the English Zionist Federation – and may have seen Samuel's 1915 memorandum. On 3 March, while Sykes and Picot were still in Petrograd, Lucien Wolf (secretary of the Foreign Conjoint Committee, set up by Jewish organizations to further the interests of foreign Jews) submitted to the Foreign Office the draft of an assurance (formula) that could be issued by the allies in support of Jewish aspirations: In the event of Palestine coming within the spheres of influence of Great Britain or France at the close of the war, the governments of those powers will not fail to take account of the historic interest that country possesses for the Jewish community. The Jewish population will be secured in the enjoyment of civil and religious liberty, equal political rights with the rest of the population, reasonable facilities for immigration and colonisation, and such municipal privileges in the towns and colonies inhabited by them as may be shown to be necessary.
On 11 March, telegrams were sent in Grey's name to Britain's Russian and French ambassadors for transmission to Russian and French authorities, including the formula, as well as: The scheme might be made far more attractive to the majority of Jews if it held out to them the prospect that when in course of time the Jewish colonists in Palestine grow strong enough to cope with the Arab population they may be allowed to take the management of the internal affairs of Palestine (with the exception of Jerusalem and the holy places) into their own hands. Sykes, having seen the telegram, had discussions with Picot and proposed (making reference to Samuel's memorandum) the creation of an Arab Sultanate under French and British protection, some means of administering the holy places along with the establishment of a company to purchase land for Jewish colonists, who would then become citizens with equal rights to Arabs. Shortly after returning from Petrograd, Sykes briefed Samuel, who then briefed a meeting of Gaster, Weizmann and Sokolow. Gaster recorded in his diary on 16 April 1916: "We are offered French-English condominium in Palest[ine]. Arab Prince to conciliate Arab sentiment and as part of the Constitution a Charter to Zionists for which England would stand guarantee and which would stand by us in every case of friction... It practically comes to a complete realisation of our Zionist programme. However, we insisted on: national character of Charter, freedom of immigration and internal autonomy, and at the same time full rights of citizenship to [illegible] and Jews in Palestine." In Sykes' mind, the agreement which bore his name was outdated even before it was signed – in March 1916, he wrote in a private letter: "to my mind the Zionists are now the key of the situation". In the event, neither the French nor the Russians were enthusiastic about the proposed formulation and eventually on 4 July, Wolf was informed that "the present moment is inopportune for making any announcement." These wartime initiatives, inclusive of the declaration, are frequently considered together by historians because of the potential, real or imagined, for incompatibility between them, particularly in regard to the disposition of Palestine. In the words of Professor Albert Hourani, founder of the Middle East Centre at St Antony's College, Oxford: "The argument about the interpretation of these agreements is one which is impossible to end, because they were intended to bear more than one interpretation." In terms of British politics, the declaration resulted from the coming into power of Lloyd George and his Cabinet, which had replaced the H. H. Asquith-led Cabinet in December 1916. Whilst both Prime Ministers were Liberals and both governments were wartime coalitions, Lloyd George and Balfour, appointed as his Foreign Secretary, favoured a post-war partition of the Ottoman Empire as a major British war aim, whereas Asquith and his Foreign Secretary, Sir Edward Grey, had favoured its reform. Two days after taking office, Lloyd George told General Robertson, the Chief of the Imperial General Staff, that he wanted a major victory, preferably the capture of Jerusalem, to impress British public opinion, and immediately consulted his War Cabinet about a "further campaign into Palestine when El Arish had been secured."
Subsequent pressure from Lloyd George, over the reservations of Robertson, resulted in the recapture of the Sinai for British-controlled Egypt, and, with the capture of El Arish in December 1916 and Rafah in January 1917, the arrival of British forces at the southern borders of the Ottoman Empire. Following two unsuccessful attempts to capture Gaza between 26 March and 19 April, a six-month stalemate in Southern Palestine began; the Sinai and Palestine Campaign would not make any progress into Palestine until 31 October 1917. Following the change in government, Sykes was promoted into the War Cabinet Secretariat with responsibility for Middle Eastern affairs. In January 1917, despite having previously built a relationship with Moses Gaster, he began looking to meet other Zionist leaders; by the end of the month he had been introduced to Weizmann and his associate Nahum Sokolow, a journalist and executive of the World Zionist Organization who had moved to Britain at the beginning of the war. On 7 February 1917, Sykes, claiming to be acting in a private capacity, entered into substantive discussions with the Zionist leadership. The previous British correspondence with "the Arabs" was discussed at the meeting; Sokolow's notes record Sykes' description that "The Arabs professed that language must be the measure [by which control of Palestine should be determined] and [by that measure] could claim all Syria and Palestine. Still the Arabs could be managed, particularly if they received Jewish support in other matters." At this point the Zionists were still unaware of the Sykes-Picot Agreement, although they had their suspicions. One of Sykes' goals was the mobilization of Zionism to the cause of British suzerainty in Palestine, so as to have arguments to put to France in support of that objective. During the period of the British War Cabinet discussions leading up to the declaration, the war had reached a period of stalemate. On the Western Front the tide would first turn in favour of the Central Powers in spring 1918, before decisively turning in favour of the Allies from July 1918 onwards. Although the United States declared war on Germany in the spring of 1917, it did not suffer its first casualties until 2 November 1917, at which point President Woodrow Wilson still hoped to avoid dispatching large contingents of troops into the war. The Russian forces were known to be distracted by the ongoing Russian Revolution and the growing support for the Bolshevik faction, but Alexander Kerensky's Provisional Government had remained in the war; Russia only withdrew after the final stage of the revolution on 7 November 1917. Balfour met Weizmann at the Foreign Office on 22 March 1917; two days later, Weizmann described the meeting as being "the first time I had a real business talk with him". Weizmann explained at the meeting that the Zionists had a preference for a British protectorate over Palestine, as opposed to an American, French or international arrangement; Balfour agreed, but warned that "there may be difficulties with France and Italy". The French position in regard to Palestine and the wider Syria region during the lead-up to the Balfour Declaration was largely dictated by the terms of the Sykes-Picot Agreement, and was complicated from 23 November 1915 by increasing French awareness of the British discussions with the Sherif of Mecca.
Prior to 1917, the British had led the fighting on the southern border of the Ottoman Empire alone, given their neighbouring Egyptian colony and the French preoccupation with the fighting on the Western Front that was taking place on their own soil. Italy's participation in the war, which began following the April 1915 Treaty of London, did not include involvement in the Middle Eastern sphere until the April 1917 Agreement of Saint-Jean-de-Maurienne; at this conference, Lloyd George had raised the question of a British protectorate of Palestine and the idea "had been very coldly received" by the French and the Italians. In May and June 1917, the French and Italians sent detachments to support the British as they built their reinforcements in preparation for a renewed attack on Palestine. In early April, Sykes and Picot were appointed to act as the chief negotiators once more, this time on a month-long mission to the Middle East for further discussions with the Sherif of Mecca and other Arab leaders. On 3 April 1917, Sykes met with Lloyd George, Curzon and Hankey to receive his instructions in this regard, namely to keep the French onside while "not prejudicing the Zionist movement and the possibility of its development under British auspices, [and not] enter into any political pledges to the Arabs, and particularly none in regard to Palestine". Before travelling to the Middle East, Picot, via Sykes, invited Nahum Sokolow to Paris to educate the French government on Zionism. Sykes, who had prepared the way in correspondence with Picot, arrived a few days after Sokolow; in the meantime Sokolow had met Picot and other French officials, and convinced the French Foreign Office to accept for study a statement of Zionist aims "in regard to facilities of colonization, communal autonomy, rights of language and establishment of a Jewish chartered company." Sykes went on ahead to Italy and had meetings with the British ambassador and British Vatican representative to prepare the way for Sokolow once again. Sokolow was granted an audience with Pope Benedict XV on 6 May 1917. Sokolow's notes of the meeting – the only meeting records known to historians – stated that the Pope expressed general sympathy and support for the Zionist project. On 21 May 1917 Angelo Sereni, president of the Committee of the Jewish Communities, presented Sokolow to Sidney Sonnino, the Italian Minister of Foreign Affairs. He was also received by Paolo Boselli, the Italian prime minister. Sonnino arranged for the secretary general of the ministry to send a letter to the effect that, although he could not express himself on the merits of a program which concerned all the allies, "generally speaking" he was not opposed to the legitimate claims of the Jews. On his return journey, Sokolow met with French leaders again and secured a letter dated 4 June 1917, giving assurances of sympathy towards the Zionist cause by Jules Cambon, head of the political section of the French foreign ministry. This letter was not published, but was deposited at the British Foreign Office. Following the United States' entry into the war on 6 April, the British Foreign Secretary led the Balfour Mission to Washington D.C. and New York, where he spent a month between mid-April and mid-May. During the trip he spent significant time discussing Zionism with Louis Brandeis, a leading Zionist and a close ally of Wilson who had been appointed as a Supreme Court Justice a year previously. 
By 13 June 1917, it was acknowledged by Ronald Graham, head of the Foreign Office's Middle Eastern affairs department, that the three most relevant politicians – the Prime Minister, the Foreign Secretary, and the Parliamentary Under-Secretary of State for Foreign Affairs, Lord Robert Cecil – were all in favour of Britain supporting the Zionist movement; on the same day Weizmann had written to Graham to advocate for a public declaration. Six days later, at a meeting on 19 June, Balfour asked Lord Rothschild and Weizmann to submit a formula for a declaration. Over the next few weeks, a 143-word draft was prepared by the Zionist negotiating committee, but it was considered too specific on sensitive areas by Sykes, Graham and Rothschild. Separately, a very different draft had been prepared by the Foreign Office, described in 1961 by Harold Nicolson – who had been involved in preparing the draft – as proposing a "sanctuary for Jewish victims of persecution". The Foreign Office draft was strongly opposed by the Zionists, and was discarded; no copy of the draft has been found in the Foreign Office archives. Following further discussion, a revised – and at just 46 words in length, much shorter – draft declaration was prepared and sent by Lord Rothschild to Balfour on 18 July. It was received by the Foreign Office, and the matter was brought to the Cabinet for formal consideration. The decision to release the declaration was taken by the British War Cabinet on 31 October 1917. This followed discussion at four War Cabinet meetings (including the 31 October meeting) over the space of the previous two months. In order to aid the discussions, the War Cabinet Secretariat, led by Maurice Hankey and supported by his Assistant Secretaries – primarily Sykes and his fellow Conservative MP and pro-Zionist Leo Amery – solicited outside perspectives to put before the Cabinet. These included the views of government ministers, war allies – notably President Woodrow Wilson – and, in October, formal submissions from six Zionist leaders and four non-Zionist Jews. British officials asked President Wilson for his consent on the matter on two occasions – first on 3 September, when he replied that the time was not ripe, and later on 6 October, when he agreed with the release of the declaration. Excerpts from the minutes of these four War Cabinet meetings describe the primary factors that the ministers considered. Declassification of British government archives has allowed scholars to piece together the choreography of the drafting of the declaration; in his widely cited 1961 book, Leonard Stein published four previous drafts of the declaration. The drafting began with Weizmann's guidance to the Zionist drafting team on its objectives in a letter dated 20 June 1917, one day following his meeting with Rothschild and Balfour. He proposed that the declaration from the British government should state: "its conviction, its desire or its intention to support Zionist aims for the creation of a Jewish national home in Palestine; no reference must be made I think to the question of the Suzerain Power because that would land the British into difficulties with the French; it must be a Zionist declaration." A month after the receipt of the much-reduced 18 July draft from Rothschild, Balfour proposed a number of mainly technical amendments.
The two subsequent drafts included much more substantial amendments: the first in a late August draft by Lord Milner – one of the original five members of Lloyd George's War Cabinet as a minister without portfolio – which reduced the geographic scope from all of Palestine to "in Palestine", and the second from Milner and Amery in early October, which added the two "safeguard clauses". Subsequent authors have debated who the "primary author" really was. In his posthumously published 1981 book "The Anglo-American Establishment", Georgetown University history professor Carroll Quigley explained his view that Lord Milner was the primary author of the declaration, and more recently, William D. Rubinstein, Professor of Modern History at Aberystwyth University, Wales, proposed Amery instead. Huneidi wrote that Ormsby-Gore, in a report he prepared for Shuckburgh, claimed authorship, together with Amery, of the final draft form. The agreed version of the declaration, a single sentence of just 67 words, was sent on 2 November 1917 in a short letter from Balfour to Walter Rothschild, for transmission to the Zionist Federation of Great Britain and Ireland. The declaration contained four clauses, of which the first two promised to support "the establishment in Palestine of a national home for the Jewish people", followed by two "safeguard clauses" with respect to "the civil and religious rights of existing non-Jewish communities in Palestine", and "the rights and political status enjoyed by Jews in any other country". The term "national home" was intentionally ambiguous, having no legal value or precedent in international law, such that its meaning was unclear when compared to other terms such as "state". The term was intentionally used instead of "state" because of opposition to the Zionist program within the British Cabinet. According to historian Norman Rose, the chief architects of the declaration contemplated that a Jewish State would emerge in time, while the Palestine Royal Commission concluded that the wording was "the outcome of a compromise between those Ministers who contemplated the ultimate establishment of a Jewish State and those who did not." Interpretation of the wording has been sought in the correspondence leading to the final version of the declaration. An official report to the War Cabinet sent by Sykes on 22 September said that the Zionists did "not" want "to set up a Jewish Republic or any other form of state in Palestine or in any part of Palestine" but rather preferred some form of protectorate as provided in the Palestine Mandate. A month later, Curzon produced a memorandum circulated on 26 October 1917 where he addressed two questions, the first concerning the meaning of the phrase "a National Home for the Jewish race in Palestine"; he noted that there were different opinions ranging from a fully fledged state to a merely spiritual centre for the Jews. Sections of the British press assumed that a Jewish state was intended even before the Declaration was finalized. In the United States the press began using the terms "Jewish National Home", "Jewish State", "Jewish republic" and "Jewish Commonwealth" interchangeably.
Treaty expert David Hunter Miller, who was at the conference and subsequently compiled a 22-volume compendium of documents, provides a report of the Intelligence Section of the American Delegation to the Paris Peace Conference of 1919 which recommended that "there be established a separate state in Palestine," and that "it will be the policy of the League of Nations to recognize Palestine as a Jewish state, as soon as it is a Jewish state in fact." The report further advised that an independent Palestinian state under a British League of Nations mandate be created. Jewish settlement would be allowed and encouraged in this state and this state's holy sites would be under the control of the League of Nations. Indeed, the Inquiry spoke positively about the possibility of a Jewish state eventually being created in Palestine if the necessary demographics for this were to exist. Historian Matthew Jacobs later wrote that the US approach was hampered by the "general absence of specialist knowledge about the region" and that "like much of the Inquiry's work on the Middle East, the reports on Palestine were deeply flawed" and "presupposed a particular outcome of the conflict". He quotes Miller, who wrote of one report on the history and impact of Zionism that it was "absolutely inadequate from any standpoint and must be regarded as nothing more than material for a future report". On 2 December 1917, Lord Robert Cecil assured an audience that the government fully intended that "Judea [was] for the Jews." Yair Auron opines that Cecil, then a deputy Foreign Secretary representing the British Government at a celebratory gathering of the English Zionist Federation, "possibly went beyond his official brief" in saying (he cites Stein) "Our wish is that Arabian countries shall be for the Arabs, Armenia for the Armenians and Judaea for the Jews". The following October Neville Chamberlain, while chairing a Zionist meeting, discussed a "new Jewish State." At the time, Chamberlain was the Member of Parliament for Ladywood, Birmingham; recalling the event in 1939, just after Chamberlain had approved the 1939 White Paper, the Jewish Telegraphic Agency noted that the Prime Minister had "experienced a pronounced change of mind in the 21 years intervening". A year later, on the Declaration's second anniversary, General Jan Smuts said that Britain "would redeem her pledge ... and a great Jewish state would ultimately rise." Churchill expressed a similar view a few months later. At the 22 June 1921 meeting of the Imperial Cabinet, Churchill was asked by Arthur Meighen, the Canadian Prime Minister, about the meaning of the national home. Churchill said "If in the course of many years they become a majority in the country, they naturally would take it over...pro rata with the Arab. We made an equal pledge that we would not turn the Arab off his land or invade his political and social rights". Responding to Curzon in January 1919, Balfour wrote "Weizmann has never put forward a claim for the Jewish Government of Palestine. Such a claim in my opinion is clearly inadmissible and personally I do not think we should go further than the original declaration which I made to Lord Rothschild". In February 1919, France issued a statement that it would not oppose putting Palestine under British trusteeship and the formation of a Jewish State.
Friedman further notes that France's attitude went on to change; Yehuda Blum, while discussing France's "unfriendly attitude towards the Jewish national movement", notes the content of a report made by Robert Vansittart (a leading member of the British delegation to the Paris Peace Conference) to Curzon in November 1920. Greece's Foreign Minister told the editor of the Salonica Jewish organ Pro-Israel that "the establishment of a Jewish State meets in Greece with full and sincere sympathy ... A Jewish Palestine would become an ally of Greece." In Switzerland, a number of noted historians, including professors Tobler, Forel-Yvorne, and Rogaz, supported the idea of establishing a Jewish state, with one referring to it as "a sacred right of the Jews." In Germany, officials and most of the press took the Declaration to mean a British-sponsored state for the Jews. The British government, including Churchill, made it clear that the Declaration did not intend for the whole of Palestine to be converted into a Jewish National Home, "but that such a Home should be founded in Palestine." Emir Faisal, King of Syria and Iraq, made a formal written agreement with Zionist leader Chaim Weizmann, which was drafted by T.E. Lawrence, whereby they would try to establish a peaceful relationship between Arabs and Jews in Palestine. The 3 January 1919 Faisal–Weizmann Agreement was a short-lived agreement for Arab–Jewish cooperation on the development of a Jewish homeland in Palestine. Faisal did treat Palestine differently in his presentation to the Peace Conference on 6 February 1919, saying "Palestine, for its universal character, [should be] left on one side for the mutual consideration of all parties concerned". The agreement was never implemented. A subsequent letter was written in English by Lawrence for Faisal's signature. When the letter was tabled at the Shaw Commission in 1929, Rustam Haidar spoke to Faisal in Baghdad and cabled that Faisal had "no recollection that he wrote anything of the sort". In January 1930, Haidar wrote to a newspaper in Baghdad that Faisal: "finds it exceedingly strange that such a matter is attributed to him as he at no time would consider allowing any foreign nation to share in an Arab country". Awni Abd al-Hadi, Faisal's secretary, wrote in his memoirs that he was not aware that a meeting between Frankfurter and Faisal took place and that: "I believe that this letter, assuming that it is authentic, was written by Lawrence, and that Lawrence signed it in English on behalf of Faisal. I believe this letter is part of the false claims made by Chaim Weizmann and Lawrence to lead astray public opinion." According to Allawi, the most likely explanation for the Frankfurter letter is that a meeting took place, a letter was drafted in English by Lawrence, but that its "contents were not entirely made clear to Faisal. He then may or may not have been induced to sign it", since it ran counter to Faisal's other public and private statements at the time. A 1 March interview in "Le Matin" quoted Faisal as saying: This feeling of respect for other religions dictates my opinion about Palestine, our neighbor. That the unhappy Jews come to reside there and behave as good citizens of this country, our humanity rejoices given that they are placed under a Muslim or Christian government mandated by The League of Nations. If they want to constitute a state and claim sovereign rights in this region, I foresee very serious dangers.
It is to be feared that there will be a conflict between them and the other races. Referring to his 1922 White Paper, Churchill later wrote that "there is nothing in it to prohibit the ultimate establishment of a Jewish State." And in private, many British officials agreed with the Zionists' interpretation that a state would be established when a Jewish majority was achieved. When Chaim Weizmann met with Churchill, Lloyd George and Balfour at Balfour's home in London on 21 July 1921, Lloyd George and Balfour assured Weizmann "that by the Declaration they had always meant an eventual Jewish State," according to Weizmann's minutes of that meeting. Lloyd George stated in 1937 that it was intended that Palestine would become a Jewish Commonwealth if and when Jews "had become a definite majority of the inhabitants", and Leo Amery echoed the same position in 1946. In the UNSCOP report of 1947, the issue of home versus state was subjected to scrutiny, arriving at a similar conclusion to that of Lloyd George. The statement that such a homeland would be found "in Palestine" rather than "of Palestine" was also deliberate. The proposed draft of the declaration contained in Rothschild's 18 July letter to Balfour referred to the principle "that Palestine should be reconstituted as the National Home of the Jewish people." In the final text, following Lord Milner's amendment, the word "reconstituted" was removed and the word "that" was replaced with "in". This text thereby avoided committing the entirety of Palestine as the National Home of the Jewish people, resulting in controversy in future years over the intended scope, especially from the Revisionist Zionism movement, which claimed the entirety of Mandatory Palestine and the Emirate of Transjordan as the Jewish homeland. This was clarified by the 1922 Churchill White Paper, which stated that "the terms of the declaration referred to do not contemplate that Palestine as a whole should be converted into a Jewish National Home, but that such a Home should be founded 'in Palestine.'" The declaration did not include any geographical boundaries for Palestine. Following the end of the war, three documents – the declaration, the Hussein-McMahon Correspondence and the Sykes-Picot Agreement – became the basis for the negotiations to set the boundaries of Palestine. The declaration's first safeguard clause referred to protecting the civil and religious rights of non-Jews in Palestine. The clause had been drafted together with the second safeguard by Leo Amery in consultation with Lord Milner, with the intention to "go a reasonable distance to meeting the objectors, both Jewish and pro-Arab, without impairing the substance of the proposed declaration". The "non-Jews" constituted 90% of the population of Palestine; in the words of Ronald Storrs, Britain's Military Governor of Jerusalem between 1917 and 1920, the community observed that they had been "not so much as named, either as Arabs, Moslems or Christians, but were lumped together under the negative and humiliating definition of 'Non-Jewish Communities' and relegated to subordinate provisos". The community also noted that there was no reference to protecting their "political status" or political rights, as there was in the subsequent safeguard relating to Jews in other countries.
This protection was frequently contrasted against the commitment to the Jewish community, and over the years a variety of terms were used to refer to these two obligations as a pair; a particularly heated question was whether these two obligations had "equal weight", and in 1930 this equal status was confirmed by the Permanent Mandates Commission and by the British government in the Passfield White Paper. Balfour stated in February 1919 that Palestine was considered an exceptional case in which, referring to the local population, "we deliberately and rightly decline to accept the principle of self-determination," although he considered that the policy provided self-determination to Jews. Avi Shlaim considers this the declaration's "greatest contradiction". This principle of self-determination had been declared on numerous occasions subsequent to the declaration: President Wilson's January 1918 Fourteen Points, McMahon's Declaration to the Seven in June 1918, the November 1918 Anglo-French Declaration, and the June 1919 Covenant of the League of Nations that had established the mandate system. In an August 1919 memo Balfour acknowledged the inconsistency among these statements, and further explained that the British had no intention of consulting the existing population of Palestine. The results of the ongoing American King–Crane Commission of Enquiry consultation of the local population – from which the British had withdrawn – were suppressed for three years until the report was leaked in 1922. Subsequent British governments have acknowledged this deficiency, in particular the 1939 committee led by the Lord Chancellor, Frederic Maugham, which concluded that the government had not been "free to dispose of Palestine without regard for the wishes and interests of the inhabitants of Palestine", and the April 2017 statement by British Foreign Office minister of state Baroness Anelay that the government acknowledged that "the Declaration should have called for the protection of political rights of the non-Jewish communities in Palestine, particularly their right to self-determination." The second safeguard clause was a commitment that nothing should be done which might prejudice the rights of the Jewish communities in other countries outside of Palestine. The original drafts of Rothschild, Balfour, and Milner did not include this safeguard, which was drafted together with the preceding safeguard in early October, in order to reflect opposition from influential members of the Anglo-Jewish community. Lord Rothschild took exception to the proviso on the basis that it presupposed the possibility of a danger to non-Zionists, which he denied. The Conjoint Foreign Committee of the Board of Deputies of British Jews and the Anglo-Jewish Association had published a letter in "The Times" on 24 May 1917 entitled "Views of Anglo-Jewry", signed by the two organisations' presidents, David Lindo Alexander and Claude Montefiore, stating their view that: "the establishment of a Jewish nationality in Palestine, founded on this theory of homelessness, must have the effect throughout the world of stamping the Jews as strangers in their native lands, and of undermining their hard-won position as citizens and nationals of these lands."
This was followed in late August by Edwin Montagu, an influential anti-Zionist Jew and Secretary of State for India, and the only Jewish member of the British Cabinet, who wrote in a Cabinet memorandum that: "The policy of His Majesty's Government is anti-Semitic in result and will prove a rallying ground for anti-Semites in every country of the world." The text of the declaration was published in the press one week after it was signed, on 9 November 1917. Other related events took place within a short timeframe, the two most relevant being the almost immediate British military capture of Palestine and the leaking of the previously secret Sykes-Picot Agreement. On the military side, both Gaza and Jaffa fell within several days, and Jerusalem was surrendered to the British on 9 December. The publication of the Sykes–Picot Agreement, following the Russian Revolution, in the Bolshevik "Izvestia" and "Pravda" on 23 November 1917 and in the British "Manchester Guardian" on 26 November 1917, represented a dramatic moment for the Allies' Eastern campaign: "the British were embarrassed, the Arabs dismayed and the Turks delighted." The Zionists had been aware of the outlines of the agreement since April and specifically the part relevant to Palestine, following a meeting between Weizmann and Cecil where Weizmann made very clear his objections to the proposed scheme. The declaration represented the first public support for Zionism by a major political power – its publication galvanized Zionism, which finally had obtained an official charter. In addition to its publication in major newspapers, leaflets were circulated throughout Jewish communities. These leaflets were airdropped over Jewish communities in Germany and Austria, as well as the Pale of Settlement, which had been given to the Central Powers following the Russian withdrawal. Weizmann had argued that the declaration would have three effects: it would swing Russia to maintain pressure on Germany's Eastern Front, since Jews had been prominent in the March Revolution of 1917; it would rally the large Jewish community in the United States to press for greater funding for the American war effort, underway since April of that year; and, lastly, that it would undermine German Jewish support for Kaiser Wilhelm II. The declaration spurred an unintended and extraordinary increase in the number of adherents of American Zionism; in 1914 the 200 American Zionist societies comprised a total of 7,500 members, which grew to 30,000 members in 600 societies in 1918 and 149,000 members in 1919. Whilst the British had considered that the declaration reflected a previously established dominance of the Zionist position in Jewish thought, it was the declaration itself that was subsequently responsible for Zionism's legitimacy and leadership. Exactly one month after the declaration was issued, a large-scale celebration took place at the Royal Opera House – speeches were given by leading Zionists as well as members of the British administration including Sykes and Cecil. From 1918 until the Second World War, Jews in Mandatory Palestine celebrated Balfour Day as an annual national holiday on 2 November. The celebrations included ceremonies in schools and other public institutions and festive articles in the Hebrew press. In August 1919 Balfour approved Weizmann's request to name the first post-war settlement in Mandatory Palestine, "Balfouria", in his honour. It was intended to be a model settlement for future American Jewish activity in Palestine.
Herbert Samuel, the Zionist MP whose 1915 memorandum had framed the start of discussions in the British Cabinet, was asked by Lloyd George on 24 April 1920 to act as the first civil governor of British Palestine, replacing the previous military administration that had ruled the area since the war. Shortly after beginning the role in July 1920, he was invited to read the "haftarah" from Isaiah 40 at the Hurva Synagogue in Jerusalem, which, according to his memoirs, led the congregation of older settlers to feel that the "fulfilment of ancient prophecy might at last be at hand". The local Christian and Muslim community of Palestine, who constituted almost 90% of the population, strongly opposed the declaration. As described by the Palestinian-American philosopher Edward Said in 1979, it was perceived as being made: "(a) by a European power, (b) about a non-European territory, (c) in a flat disregard of both the presence and the wishes of the native majority resident in that territory, and (d) it took the form of a promise about this same territory to another foreign group." According to the 1919 King–Crane Commission, "No British officer, consulted by the Commissioners, believed that the Zionist programme could be carried out except by force of arms." A delegation of the Muslim-Christian Association, headed by Musa al-Husayni, expressed public disapproval on 3 November 1918, one day after the Zionist Commission parade marking the first anniversary of the Balfour Declaration. They handed a petition signed by more than 100 notables to Ronald Storrs, the British military governor. The group also protested the carrying of new "white and blue banners with two inverted triangles in the middle", drawing the attention of the British authorities to the serious consequences of any political implications in raising the banners. Later that month, on the first anniversary of the occupation of Jaffa by the British, the Muslim-Christian Association sent a lengthy memorandum and petition to the military governor protesting once more any formation of a Jewish state. In the broader Arab world, the declaration was seen as a betrayal of the British wartime understandings with the Arabs. The Sharif of Mecca and other Arab leaders considered the declaration a violation of a previous commitment made in the McMahon–Hussein correspondence in exchange for launching the Arab Revolt. Following the publication of the declaration, the British dispatched Commander David George Hogarth to see Hussein in January 1918 bearing the message that the "political and economic freedom" of the Palestinian population was not in question. Hogarth reported that Hussein "would not accept an independent Jewish State in Palestine, nor was I instructed to warn him that such a state was contemplated by Great Britain". Hussein had also learned of the Sykes–Picot Agreement when it was leaked by the new Soviet government in December 1917, but was satisfied by two disingenuous messages from Sir Reginald Wingate, who had replaced McMahon as High Commissioner of Egypt, assuring him that the British commitments to the Arabs were still valid and that the Sykes–Picot Agreement was not a formal treaty.
Continuing Arab disquiet over Allied intentions also led during 1918 to the British Declaration to the Seven and the Anglo-French Declaration, the latter promising "the complete and final liberation of the peoples who have for so long been oppressed by the Turks, and the setting up of national governments and administrations deriving their authority from the free exercise of the initiative and choice of the indigenous populations". In 1919, King Hussein refused to ratify the Treaty of Versailles, and after February 1920 the British ceased to pay him a subsidy. In August 1920, five days after the signing of the Treaty of Sèvres, which formally recognized the Kingdom of Hejaz, Curzon asked Cairo to procure Hussein's signature to both treaties and agreed to make a payment of £30,000 conditional on signature. Hussein declined, and in 1921 stated that he could not be expected to "affix his name to a document assigning Palestine to the Zionists and Syria to foreigners." Following the 1921 Cairo Conference, Lawrence was sent to try to obtain the King's signature to a treaty as well as to Versailles and Sèvres, with a £60,000 annual subsidy being proposed; this attempt also failed. During 1923, the British made one further attempt to settle outstanding issues with Hussein, and once again the attempt foundered: Hussein continued to refuse to recognize the Balfour Declaration or any of the Mandates that he perceived as covering his domain. In March 1924, having briefly considered the possibility of removing the offending article from the treaty, the government suspended any further negotiations; within six months they withdrew their support in favour of their central Arabian ally Ibn Saud, who proceeded to conquer Hussein's kingdom. The declaration was first endorsed by a foreign government on 27 December 1917, when Serbian Zionist leader and diplomat David Albala announced the support of Serbia's government in exile during a mission to the United States. The French and Italian governments offered their endorsements on 14 February and 9 May 1918, respectively. At a private meeting in London on 1 December 1918, Lloyd George and French Prime Minister Georges Clemenceau agreed to certain modifications to the Sykes–Picot Agreement, including British control of Palestine. On 25 April 1920, the San Remo conference – an outgrowth of the Paris Peace Conference attended by the prime ministers of Britain, France and Italy, Japan's ambassador, and the United States Ambassador to Italy – established the basic terms for three League of Nations mandates: a French mandate for Syria, and British mandates for Mesopotamia and Palestine. With respect to Palestine, the resolution stated that the British were responsible for putting into effect the terms of the Balfour Declaration. The French and the Italians made clear their dislike of the "Zionist cast of the Palestinian mandate" and objected especially to language that did not safeguard the "political" rights of non-Jews, accepting Curzon's claim that "in the British language all ordinary rights were included in "civil rights"". At the request of France, it was agreed that an undertaking would be inserted in the mandate's procès-verbal that this would not involve the surrender of the rights hitherto enjoyed by the non-Jewish communities in Palestine. The Italian endorsement of the declaration had included the condition "...on the understanding that there is no prejudice against the legal and political status of the already existing religious communities..."
(in Italian: "...che non ne venga nessun pregiudizio allo stato giuridico e politico delle già esistenti comunità religiose..."). The boundaries of Palestine were left unspecified, to "be determined by the Principal Allied Powers." Three months later, in July 1920, the French defeat of Faisal's Arab Kingdom of Syria precipitated the British need to know "what is the 'Syria' for which the French received a mandate at San Remo?" and "does it include Transjordania?"; the British subsequently decided to pursue a policy of associating Transjordan with the mandated area of Palestine without adding it to the area of the Jewish National Home. In 1922, Congress officially endorsed America's support for the Balfour Declaration through the passage of the Lodge–Fish Resolution, notwithstanding opposition from the State Department. Professor Lawrence Davidson of West Chester University, whose research focuses on American relations with the Middle East, argues that President Wilson and Congress ignored democratic values in favour of "biblical romanticism" when they endorsed the declaration. He points to an organized pro-Zionist lobby in the United States, which was active at a time when the country's small Arab American community had little political power. The publication of the Balfour Declaration was met with tactical responses from the Central Powers. Two weeks after the declaration, Ottokar Czernin, the Austrian Foreign Minister, gave an interview to Arthur Hantke, President of the Zionist Federation of Germany, promising that his government would influence the Turks once the war was over. On 12 December, the Ottoman Grand Vizier, Talaat Pasha, gave an interview to the German newspaper "Vossische Zeitung" that was published on 31 December and subsequently released in a German-Jewish periodical on 4 January 1918, in which he referred to the declaration as "une blague" (a joke) and promised that under Ottoman rule "all justifiable wishes of the Jews in Palestine would be able to find their fulfilment", subject to the absorptive capacity of the country. This Turkish statement was endorsed by the German Foreign Office on 5 January 1918. On 8 January 1918, a German-Jewish society, the Union of German Jewish Organizations for the Protection of the Rights of the Jews of the East (VJOD), was formed to advocate for further progress for Jews in Palestine. Following the war, the Treaty of Sèvres was signed by the Ottoman Empire on 10 August 1920. The treaty dissolved the Ottoman Empire, requiring Turkey to renounce sovereignty over much of the Middle East. Article 95 of the treaty incorporated the terms of the Balfour Declaration with respect to "the administration of Palestine, within such boundaries as may be determined by the Principal Allied Powers". Since incorporation of the declaration into the Treaty of Sèvres did not affect the legal status of either the declaration or the Mandate, there was also no effect when Sèvres was superseded by the Treaty of Lausanne (1923), which did not include any reference to the declaration. In 1922, the German anti-Semitic theorist Alfred Rosenberg, in his primary contribution to Nazi theory on Zionism, "Der Staatsfeindliche Zionismus" ("Zionism, the Enemy of the State"), accused German Zionists of working for a German defeat and of supporting Britain and the implementation of the Balfour Declaration, in a version of the stab-in-the-back myth. Adolf Hitler took a similar approach in some of his speeches from 1920 onwards.
With the advent of the declaration and the British entry into Jerusalem on 9 December, the Vatican reversed its earlier sympathetic attitude to Zionism and adopted an oppositional stance that was to continue until the early 1990s. The British policy as stated in the declaration was to face numerous challenges to its implementation in the following years. The first of these was the indirect peace negotiations which took place between Britain and the Ottomans in December 1917 and January 1918, during a pause in the hostilities for the rainy season; although these peace talks were unsuccessful, archival records suggest that key members of the War Cabinet may have been willing to leave Palestine under nominal Turkish sovereignty as part of an overall deal. In October 1919, almost a year after the end of the war, Lord Curzon succeeded Balfour as Foreign Secretary. Curzon had been a member of the 1917 Cabinet that had approved the declaration, and according to the British historian Sir David Gilmour, Curzon had been "the only senior figure in the British government at the time who foresaw that its policy would lead to decades of Arab–Jewish hostility". He therefore determined to pursue a policy in line with the declaration's "narrower and more prudent rather than the wider interpretation". Following Bonar Law's appointment as Prime Minister in late 1922, Curzon wrote to Law that he regarded the declaration as "the worst" of Britain's Middle East commitments and "a striking contradiction of our publicly declared principles". In August 1920 the report of the Palin Commission, the first in a long line of British commissions of inquiry on the question of Palestine during the Mandate period, noted that "The Balfour Declaration... is undoubtedly the starting point of the whole trouble". The conclusion of the report, which was not published, mentioned the Balfour Declaration three times, counting it among "the causes of the alienation and exasperation of the feelings of the population of Palestine". British public and government opinion became increasingly unfavourable to state support for Zionism; even Sykes had begun to change his views in late 1918. In February 1922 Churchill telegraphed Samuel, who had begun his role as High Commissioner for Palestine 18 months earlier, asking for cuts in expenditure. Following the issuance of the Churchill White Paper in June 1922, the House of Lords rejected a Palestine Mandate that incorporated the Balfour Declaration by 60 votes to 25, following a motion moved by Lord Islington. The vote proved to be only symbolic, as it was subsequently overruled by a vote in the House of Commons following a tactical pivot and a variety of promises made by Churchill. In February 1923, following the change in government, the Colonial Secretary, Cavendish, laid the foundation for a secret review of Palestine policy in a lengthy memorandum for the Cabinet; his covering note asked for a statement of policy to be made as soon as possible and proposed that the Cabinet focus on three questions: (1) whether or not pledges to the Arabs conflicted with the Balfour Declaration; (2) if not, whether the new government should continue the policy set down by the old government in the 1922 White Paper; and (3) if not, what alternative policy should be adopted. Stanley Baldwin, replacing Bonar Law as Prime Minister, set up a cabinet subcommittee in June 1923 to re-examine Palestine policy; the Cabinet approved the report of this committee on 31 July 1923.
Describing the report as "nothing short of remarkable", Quigley noted that the government was admitting to itself that its support for Zionism had been prompted by considerations having nothing to do with the merits of Zionism or its consequences for Palestine. As Huneidi noted, "wise or unwise, it is well nigh impossible for any government to extricate itself without a substantial sacrifice of consistency and self-respect, if not honour." The wording of the declaration was thus incorporated into the British Mandate for Palestine, a legal instrument that created Mandatory Palestine with the explicit purpose of putting the declaration into effect; it was finally formalized in September 1923. Unlike the declaration itself, the Mandate was legally binding on the British government. In June 1924, Britain made its report to the Permanent Mandates Commission for the period July 1920 to the end of 1923, containing nothing of the candour reflected in the internal documents; the documents relating to the 1923 reappraisal stayed secret until the early 1970s. Lloyd George and Balfour remained in government until the collapse of the coalition in October 1922. Under the new Conservative government, attempts were made to identify the background to and motivations for the declaration. A private Cabinet memorandum was produced in January 1923, providing a summary of the then-known Foreign Office and War Cabinet records leading up to the declaration. An accompanying Foreign Office note asserted that the primary authors of the declaration were Balfour, Sykes, Weizmann, and Sokolow, with "perhaps Lord Rothschild as a figure in the background", and that "negotiations seem to have been mainly oral and by means of private notes and memoranda of which only the scantiest records seem to be available." Following the 1936 general strike that was to degenerate into the 1936–1939 Arab revolt in Palestine, the most significant outbreak of violence since the Mandate began, a British Royal Commission – a high-profile public inquiry – was appointed to investigate the causes of the unrest. The Palestine Royal Commission, appointed with significantly broader terms of reference than the previous British inquiries into Palestine, completed its 404-page report after six months of work in June 1937, publishing it a month later. The report began by describing the history of the problem, including a detailed summary of the origins of the Balfour Declaration. Much of this summary relied on Lloyd George's personal testimony; Balfour had died in 1930 and Sykes in 1919. Lloyd George told the commission that the declaration was made "due to propagandist reasons... In particular Jewish sympathy would confirm the support of American Jewry, and would make it more difficult for Germany to reduce her military commitments and improve her economic position on the eastern front". Two years later, in his "Memoirs of the Peace Conference", Lloyd George described a total of nine factors motivating his decision as Prime Minister to release the declaration, including the additional reasons that a Jewish presence in Palestine would strengthen Britain's position on the Suez Canal and reinforce the route to their imperial dominion in India. These geopolitical calculations were debated and discussed in the following years.
Historians agree that the British believed that expressing support would appeal to Jews in Germany and the United States, given that two of Woodrow Wilson's closest advisors were known to be avid Zionists; they also hoped to encourage support from the large Jewish population in Russia. In addition, the British intended to pre-empt the expected French pressure for an international administration in Palestine. Some historians argue that the British government's decision reflected what James Gelvin, Professor of Middle Eastern History at UCLA, calls "patrician anti-Semitism": the overestimation of Jewish power in both the United States and Russia. American Zionism was still in its infancy; in 1914 the Zionist Federation had a small budget of about $5,000 and only 12,000 members, despite an American Jewish population of three million. But the Zionist organizations had recently succeeded, following a show of force within the American Jewish community, in arranging a Jewish congress to debate the Jewish problem as a whole. This impacted British and French government estimates of the balance of power within the American Jewish public. Avi Shlaim, Emeritus Professor of International Relations at the University of Oxford, asserts that two main schools of thought have developed on the question of the primary driving force behind the declaration, one presented in 1961 by Leonard Stein, a lawyer and former political secretary to the World Zionist Organization, and the other in 1970 by Mayir Vereté, then Professor of Israeli History at the Hebrew University of Jerusalem. Shlaim states that Stein does not reach any clear-cut conclusions, but that implicit in his narrative is that the declaration resulted primarily from the activity and skill of the Zionists, whereas according to Vereté it was the work of hard-headed pragmatists motivated by British imperial interests in the Middle East. Much of modern scholarship on the decision to issue the declaration focuses on the Zionist movement and rivalries within it, with a key debate being whether the role of Weizmann was decisive or whether the British were likely to have issued a similar declaration in any event. Danny Gutwein, Professor of Jewish History at the University of Haifa, proposes a twist on an old idea, asserting that Sykes's February 1917 approach to the Zionists was the defining moment, and that it was consistent with the pursuit of the government's wider agenda to partition the Ottoman Empire. Historian J. C. Hurewitz has written that British support for a Jewish homeland in Palestine was part of an effort to secure a land bridge between Egypt and the Persian Gulf by annexing territory from the Ottoman Empire. The declaration had two indirect consequences: the emergence of a Jewish state, and a chronic state of conflict between Arabs and Jews throughout the Middle East. It has been described as the "original sin" with respect both to Britain's failure in Palestine and to wider events in Palestine. The statement also had a significant impact on the traditional anti-Zionism of religious Jews, some of whom saw it as divine providence; this contributed to the growth of religious Zionism within the larger Zionist movement. Starting in 1920, intercommunal conflict in Mandatory Palestine broke out, widening into the regional Arab–Israeli conflict, often referred to as the world's "most intractable conflict".
The "dual obligation" to the two communities quickly proved to be untenable; the British subsequently concluded that it was impossible for them to pacify the two communities in Palestine by using different messages for different audiences. The Palestine Royal Commission – in making the first official proposal for partition of the region – referred to the requirements as "contradictory obligations", and that the "disease is so deep-rooted that, in our firm conviction, the only hope of a cure lies in a surgical operation". Following the 1936–1939 Arab revolt in Palestine, and as worldwide tensions rose in the buildup to the Second World War, the British Parliament approved the White Paper of 1939 – their last formal statement of governing policy in Mandatory Palestine – declaring that Palestine should not become a Jewish State and placing restrictions on Jewish immigration. Whilst the British considered this consistent with the Balfour Declaration's commitment to protect the rights of non-Jews, many Zionists saw it as a repudiation of the declaration. Although this policy lasted until the British surrendered the Mandate in 1948, it served only to highlight the fundamental difficulty for Britain in carrying out the Mandate obligations. Britain's involvement in this became one of the most controversial parts of its Empire's history, and damaged its reputation in the Middle East for generations. According to historian Elizabeth Monroe: "measured by British interests alone, [the declaration was] one of the greatest mistakes in [its] imperial history." The 2010 study by Jonathan Schneer, specialist in modern British history at Georgia Tech, concluded that because the build-up to the declaration was characterized by "contradictions, deceptions, misinterpretations, and wishful thinking", the declaration sowed dragon's teeth and "produced a murderous harvest, and we go on harvesting even today". The foundational stone for modern Israel had been laid, but the prediction that this would lay the groundwork for harmonious Arab-Jewish cooperation proved to be wishful thinking. The document was presented to the British Museum in 1924 by Walter Rothschild; today it is held in the British Library, which separated from the British Museum in 1973, as Additional Manuscripts number 41178. From October 1987 to May 1988 it was lent outside the UK for display in Israel's Knesset. The Israeli government are currently in negotiations to arrange a second loan in 2018, with plans to display the document at Independence Hall in Tel Aviv.
https://en.wikipedia.org/wiki?curid=4820
Black Hand (Serbia) Unification or Death (Уједињење или смрт / "Ujedinjenje ili smrt"), popularly known as the Black Hand (Црна рука / "Crna ruka"), was a secret military society formed in 1901 by officers in the Army of the Kingdom of Serbia, best known for its alleged involvement in the assassination of Archduke Franz Ferdinand in Sarajevo and for the earlier conspiracy to assassinate the Serbian royal couple in 1903, under the aegis of Captain Dragutin Dimitrijević ("Apis"). It was formed with the aim of uniting all of the territories with a South Slavic majority not ruled by either Serbia or Montenegro. Its inspiration was primarily the unification of Italy in 1859–70, but also that of Germany in 1871. Through its connections to the June 1914 assassination of Archduke Franz Ferdinand in Sarajevo, which was committed by members of the youth movement Young Bosnia, the Black Hand is often viewed as having contributed to the start of World War I by precipitating the July Crisis of 1914, which eventually led to Austria-Hungary's invasion of the Kingdom of Serbia. In August 1901, a group of lower-ranking officers headed by Captain Dragutin Dimitrijević "Apis" established a conspiratorial group (called the Black Hand in the literature) against the Obrenović dynasty. The first meeting was held on 6 September 1901. In attendance were captains Radomir Aranđelović, Milan F. Petrović, and Dragutin Dimitrijević, as well as lieutenants Antonije Antić, Dragutin Dulić, Milan Marinković, and Nikodije Popović. They made a plan to kill the royal couple, King Alexander I Obrenović and Queen Draga. Captain Apis personally led the group of Army officers who killed the royal couple in the Old Palace at Belgrade on the night of 28/29 May 1903 (Old Style). On 8 October 1908, just two days after Austria annexed Bosnia and Herzegovina, some Serbian ministers, officials, and generals held a meeting at the City Hall in Belgrade. They founded a semi-secret society, the "Narodna Odbrana" ("National Defense"), which gave Pan-Serbism a focus and an organization. The purpose of the group was to liberate Serbs under Austro-Hungarian occupation. It also undertook anti-Austrian propaganda and organized spies and saboteurs to operate within the occupied provinces. Satellite groups were formed in Slovenia, Bosnia, Herzegovina and Istria. The Bosnian group became deeply associated with local groups of pan-Serb activists such as "Mlada Bosna" ("Young Bosnia"). Unification or Death was established at the beginning of May 1911, the original constitution of the organization being signed on 9 May. Ljuba Čupa, Bogdan Radenković and Vojislav Tankosić wrote the constitution of the organization, modelling it on similar German secret nationalist associations and the Italian Carbonari. The organization was mentioned in the Serbian parliament as the "Black Hand" in late 1911. By 1911–12, Narodna Odbrana had established ties with the Black Hand, and the two became "parallel in action and overlapping in membership". The organization used the magazine "Pijemont" (the Serbian name for Piedmont, the kingdom that led the unification of Italy under the House of Savoy) for the dissemination of its ideas. The magazine was founded by Ljuba Čupa in August 1911. By 1914, there were hundreds of members, many of whom were Serbian Army officers. The goal of uniting Serb-inhabited territories was pursued through the training of guerrilla fighters and saboteurs.
The Black Hand was organized at the grassroots level in cells of three to five members, supervised by district committees and by a central committee in Belgrade whose ten-member Executive Committee was led, more or less, by Colonel Dragutin Dimitrijević "Apis". To ensure secrecy, members rarely knew much more than the members of their own cell and one superior above them. New members swore an oath of allegiance to the organization upon joining. The Black Hand took over the terrorist actions of "Narodna Odbrana" and worked deliberately to obscure any distinctions between the two groups, trading on the prestige and network of the older organization. Black Hand members held important army and government positions, and Crown Prince Alexander was an enthusiastic financial supporter. The group held influence over government appointments and policy, and the Serbian government was fairly well informed of its activities. Even so, relations between the group and the government had cooled considerably by 1914. The Black Hand was displeased with Prime Minister Nikola Pašić, believing that he did not pursue the Pan-Serb cause aggressively enough, and the two sides engaged in a bitter power struggle over several issues, such as who would control the territories Serbia had annexed in the Balkan Wars. By this point, disagreeing with the Black Hand was dangerous, as political murder was one of its tools. It was also in 1914 that Apis allegedly decided that Archduke Franz Ferdinand, the heir apparent of Austria, should be assassinated: the Archduke's efforts to pacify the empire's Serb population threatened, if successful, to make a revolution impossible. Towards that end, it is claimed, three young Bosnian Serbs were recruited to kill the Archduke. They were definitely trained in bomb throwing and marksmanship by current and former members of the Serbian military. Gavrilo Princip, Nedeljko Čabrinović and Trifko Grabež were smuggled across the border back into Bosnia via a chain of underground-railroad-style contacts. The decision to kill the Archduke was apparently initiated by Apis and not sanctioned by the full Executive Committee (assuming Apis was involved at all, a question that remains in dispute). Those involved probably realized that their plot would result in war between Austria and Serbia, and had every reason to expect that Russia would side with Serbia. They likely did not, however, anticipate that the assassination would start a chain of events leading to World War I. Others in the government and some of the Black Hand Executive Committee were not as confident of Russian aid; Russia had recently let them down. When word of the plot allegedly percolated through the Black Hand leadership and the Serbian government (Prime Minister Pašić was definitely informed that two armed men were being smuggled across the border; it is not clear whether he knew of the planned assassination), Apis was supposedly told not to proceed. He may have made a half-hearted attempt to intercept the young assassins at the border, but they had already crossed; other sources say the attempted "recall" was only begun after the assassins had reached Sarajevo. This "recall" appears to make Apis look like a loose cannon and the young assassins like independent zealots. In fact, the "recall" took place a full two weeks before the Archduke's visit, the assassins idled around in Sarajevo for a month, and nothing more was done to stop them.
The group encompassed a range of ideological outlooks, from conspiratorially minded army officers to idealistic youths, sometimes tending towards republicanism despite the involvement of nationalist royal circles in its activities (the movement's leader, Colonel Dragutin Dimitrijević or "Apis", had been instrumental in the June 1903 coup which brought King Petar Karađorđević to the Serbian throne after 45 years of rule by the rival Obrenović dynasty). The group was denounced as nihilist by the Austro-Hungarian press and compared to the Russian People's Will and the Chinese Assassination Corps. In 1938, a conspiratorial group aiming to overthrow the Yugoslav regency was founded by, among others, members of the Serbian Cultural Club (SKK). The organization was modeled after the Black Hand, including its recruitment process. Two members of the Black Hand, Antonije Antić and Velimir Vemić, were the organization's military advisors.
https://en.wikipedia.org/wiki?curid=4821
Board of directors A board of directors is a group of people who jointly supervise the activities of an organization, which can be a for-profit business, a nonprofit organization, or a government agency. Such a board's powers, duties, and responsibilities are determined by government regulations (including the jurisdiction's corporations law) and the organization's own constitution and bylaws. These authorities may specify the number of members of the board, how they are to be chosen, and how often they are to meet. In an organization with voting members, the board is accountable to, and might be subordinate to, the organization's full membership, which usually votes for the members of the board. In a stock corporation, non-executive directors are elected by the shareholders, and the board has ultimate responsibility for the management of the corporation. In nations with codetermination (such as Germany and Sweden), the workers of a corporation elect a set fraction of the board's directors. The board of directors appoints the chief executive officer of the corporation and sets out the overall strategic direction. In corporations with dispersed ownership, the identification and nomination of directors (whom shareholders vote for or against) are often done by the board itself, leading to a high degree of self-perpetuation. In a non-stock corporation with no general voting membership, the board is the supreme governing body of the institution, and its members are sometimes chosen by the board itself. Other names include board of directors and advisors, board of governors, board of managers, board of regents, board of trustees, and board of visitors. It may also be called "the executive board" and is often simply referred to as "the board". Typical duties of boards of directors include governing the organization by establishing broad policies and objectives; selecting, appointing, and reviewing the performance of the chief executive; ensuring the availability of adequate financial resources; approving annual budgets; and accounting to the stakeholders for the organization's performance. The legal responsibilities of boards and board members vary with the nature of the organization, and between jurisdictions. For companies with publicly traded stock, these responsibilities are typically much more rigorous and complex than for those of other types. Typically, the board chooses one of its members to be the "chairman" (often now called the "chair" or "chairperson"), who holds whatever title is specified in the by-laws or articles of association. However, in membership organizations, the members elect the president of the organization, and the president becomes the board chair unless the by-laws say otherwise. The directors of an organization are the persons who are members of its board. Several specific terms categorize directors by the presence or absence of their other relationships to the organization. An inside director is a director who is also an employee, officer, chief executive, major shareholder, or someone similarly connected to the organization. Inside directors represent the interests of the entity's stakeholders and often have special knowledge of its inner workings, its financial or market position, and so on. Typical inside directors include the chief executive officer, other senior executives such as the chief financial officer, and major shareholders. An inside director who is employed as a manager or executive of the organization is sometimes referred to as an executive director (not to be confused with the title executive director sometimes used for the CEO position in some organizations). Executive directors often have a specified area of responsibility in the organization, such as finance, marketing, human resources, or production.
An outside director is a member of the board who is not otherwise employed by or engaged with the organization and does not represent any of its stakeholders. A typical example is a director who is president of a firm in a different industry. Outside directors are not employees of the company or affiliated with it in any other way. Outside directors bring outside experience and perspectives to the board. For example, for a company that serves a domestic market only, the presence of CEOs from global multinational corporations as outside directors can help to provide insights on export and import opportunities and international trade options. One of the arguments for having outside directors is that they can keep a watchful eye on the inside directors and on the way the organization is run. Outside directors are unlikely to tolerate "insider dealing" between inside directors, as outside directors do not benefit from the company or organization. Outside directors are often useful in handling disputes between inside directors, or between shareholders and the board. They are thought to be advantageous because they can be objective and present little risk of conflict of interest. On the other hand, they might lack familiarity with the specific issues connected to the organization's governance, and they might not know about the industry or sector in which the organization is operating. Individual directors often serve on more than one board. This practice results in an interlocking directorate, where a relatively small number of individuals have significant influence over many important entities. This situation can have important corporate, social, economic, and legal consequences, and has been the subject of significant research. The process for running a board, sometimes called the board process, includes the selection of board members, the setting of clear board objectives, the dissemination of documents or a board package to the board members, the collaborative creation of an agenda for the meeting, the creation and follow-up of assigned action items, and the assessment of the board process through standardized assessments of board members, owners, and CEOs. The science of this process has been slow to develop owing to the secretive nature of the way most companies run their boards; however, some standardization is beginning to emerge. Among those pushing for this standardization in the United States are the National Association of Corporate Directors, McKinsey, and The Board Group. A board of directors conducts its meetings according to the rules and procedures contained in its governing documents. These procedures may allow the board to conduct its business by conference call or other electronic means. They may also specify how a quorum is to be determined. Most organizations have adopted "Robert's Rules of Order" as their guide to supplement their own rules. Under that book, the rules for conducting board meetings may be less formal if no more than about a dozen board members are present; one example of the informality is that motions are not required if it is clear what is being discussed. Historically, nonprofit organizations have often had large boards, with up to twenty-four members, but the modern trend is toward smaller boards of as few as six or seven people. Studies suggest that beyond seven people, each additional member reduces the effectiveness of group decision-making.
The role and responsibilities of a board of directors vary depending on the nature and type of business entity and the laws applying to the entity (see types of business entity). For example, the business entity may be one that is traded on a public market (a public company), not traded on a public market (a private, limited or closely held company), owned by family members (a family business), or exempt from income taxes (a non-profit, not-for-profit, or tax-exempt entity). There are numerous types of business entity available throughout the world, such as the corporation, limited liability company, cooperative, business trust, partnership, private limited company, and public limited company. Much of what has been written about boards of directors relates to boards of business entities actively traded on public markets. More recently, however, material is becoming available for boards of private and closely held businesses, including family businesses. A board-only organization is one whose board is self-appointed, rather than being accountable to a base of members through elections, or in which the powers of the membership are extremely limited. In membership organizations, such as a society made up of members of a certain profession or one advocating a certain cause, a board of directors may have the responsibility of running the organization between meetings of the membership, especially if the membership meets infrequently, such as only at an annual general meeting. The amount of power and authority delegated to the board depends on the bylaws and rules of the particular organization. Some organizations place matters exclusively in the board's control, while in others the general membership retains full power and the board can only make recommendations. The setup of a board of directors varies widely across organizations and may include provisions that are applicable to corporations, in which the "shareholders" are the members of the organization. A difference may be that the membership elects the officers of the organization, such as the president and the secretary, and the officers become members of the board in addition to the directors and retain those duties on the board. The directors may also be classified as officers in this situation. There may also be ex-officio members of the board, persons who are members by virtue of another position that they hold. These ex-officio members have all the same rights as the other board members. Members of the board may be removed before their term is complete. Details on how they can be removed are usually provided in the bylaws. If the bylaws do not contain such details, the section on disciplinary procedures in "Robert's Rules of Order" may be used. In a publicly held company, directors are elected to represent, and are legally obligated as fiduciaries to represent, the owners of the company, namely the shareholders/stockholders. In this capacity they establish policies and make decisions on issues such as whether to pay a dividend and how large it should be, the stock options distributed to employees, and the hiring/firing and compensation of upper management. Theoretically, the control of a company is divided between two bodies: the board of directors, and the shareholders in general meeting. In practice, the amount of power exercised by the board varies with the type of company. In small private companies, the directors and the shareholders are normally the same people, and thus there is no real division of power.
In large public companies, the board tends to exercise more of a supervisory role, and individual responsibility and management tend to be delegated downward to individual professional executives (such as a finance director or a marketing director) who deal with particular areas of the company's affairs. Another feature of boards of directors in large public companies is that the board tends to have more "de facto" power. Many shareholders grant proxies to the directors to vote their shares at general meetings and accept all recommendations of the board rather than trying to get involved in management, since each shareholder's power, as well as interest and information, is so small. Larger institutional investors also grant the board proxies. The large number of shareholders also makes it hard for them to organize. However, there have been moves recently to try to increase shareholder activism among both institutional investors and individuals with small shareholdings. A contrasting view is that in large public companies it is upper management and not boards that wield practical power, because boards delegate nearly all of their power to the top executive employees, adopting their recommendations almost without fail. As a practical matter, executives even choose the directors, with shareholders normally following management recommendations and voting for them. In most cases, serving on a board is not a career unto itself. For major corporations, the board members are usually professionals or leaders in their field. In the case of outside directors, they are often senior leaders of other organizations. Nevertheless, board members often receive remuneration amounting to hundreds of thousands of dollars per year, since they often sit on the boards of several companies. Inside directors are usually not paid for sitting on a board; the duty is instead considered part of their larger job description. Outside directors are usually paid for their services. This remuneration varies between corporations, but usually consists of a yearly or monthly salary, additional compensation for each meeting attended, stock options, and various other benefits, such as travel, hotel and meal expenses for the board meetings. Tiffany & Co., for example, pays directors an annual retainer of $46,500, an additional annual retainer of $2,500 if the director is also a chairperson of a committee, a per-meeting-attended fee of $2,000 for meetings attended in person, and a $500 fee for each meeting attended via telephone, in addition to stock options and retirement benefits. In some European and Asian countries, there are two separate boards: an executive board for day-to-day business and a supervisory board (elected by the shareholders and employees) for supervising the executive board. In these countries, the CEO (chief executive or managing director) presides over the executive board and the chairman presides over the supervisory board, and these two roles are always held by different people. This ensures a distinction between management by the executive board and governance by the supervisory board, and allows for clear lines of authority. The aim is to prevent a conflict of interest and too much power being concentrated in the hands of one person. There is a strong parallel here with the structure of government, which tends to separate the political cabinet from the management civil service.
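As a rough illustration of how a fee schedule of this kind adds up, the following sketch in Python computes a hypothetical director's annual cash compensation from the Tiffany & Co. figures quoted above; the meeting counts are assumptions made for the example, and the stock-option and retirement components are omitted because no figures are given for them.

# Illustrative sketch only: annual cash compensation for a hypothetical
# outside director under the fee schedule described above. Meeting counts
# are assumptions for the example, not figures from the article.
BASE_RETAINER = 46_500   # annual retainer, USD
CHAIR_RETAINER = 2_500   # extra annual retainer for a committee chairperson
IN_PERSON_FEE = 2_000    # fee per meeting attended in person
TELEPHONE_FEE = 500      # fee per meeting attended via telephone

def annual_cash_compensation(chairs_committee, in_person_meetings, telephone_meetings):
    """Return a director's annual cash compensation in USD."""
    total = BASE_RETAINER
    if chairs_committee:
        total += CHAIR_RETAINER
    total += in_person_meetings * IN_PERSON_FEE
    total += telephone_meetings * TELEPHONE_FEE
    return total

# A committee chair attending eight meetings in person and two by telephone:
print(annual_cash_compensation(True, 8, 2))  # prints 66000

Under those assumed counts, the cash component alone comes to $66,000 a year for a single board seat, which is consistent with the observation above that directors who sit on several boards can earn hundreds of thousands of dollars annually.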
In the United States, the board of directors (elected by the shareholders) is often equivalent to the supervisory board, while the executive board may often be known as the executive committee (operating committee or executive council), composed of the CEO and their direct reports (other C-level officers and division/subsidiary heads). The board of directors, in its modern sense, was one of the institutional innovations pioneered by the Dutch in the 17th century; modern-day boards of directors are, in many respects, descendants of the model established by the Dutch East India Company (VOC). The development of a separate board of directors to manage, govern, and oversee a company has occurred incrementally over the course of legal history. Until the end of the 19th century, it seems to have been generally assumed that the general meeting (of all shareholders) was the supreme organ of a company, and that the board of directors merely acted as an agent of the company subject to the control of the shareholders in general meeting. However, by 1906, the English Court of Appeal had made it clear in the decision of "Automatic Self-Cleansing Filter Syndicate Co Ltd v Cuninghame" [1906] 2 Ch 34 that the division of powers between the board and the shareholders in general meeting depended on the construction of the articles of association and that, where the powers of management were vested in the board, the general meeting could not interfere with their lawful exercise. The articles were held to constitute a contract by which the members had agreed that "the directors and the directors alone shall manage." The new approach did not secure immediate approval, but it was endorsed by the House of Lords in "Quin & Axtens v Salmon" [1909] AC 442 and has since received general acceptance. Under English law, successive versions of Table A have reinforced the norm that, unless the directors are acting contrary to the law or the provisions of the Articles, the powers of conducting the management and affairs of the company are vested in them. The modern doctrine was expressed in "John Shaw & Sons (Salford) Ltd v Shaw" [1935] 2 KB 113 by Greer LJ as follows: "A company is an entity distinct alike from its shareholders and its directors. Some of its powers may, according to its articles, be exercised by directors, certain other powers may be reserved for the shareholders in general meeting. If powers of management are vested in the directors, they and they alone can exercise these powers. The only way in which the general body of shareholders can control the exercise of the powers vested by the articles in the directors is by altering the articles, or, if opportunity arises under the articles, by refusing to re-elect the directors of whose actions they disapprove. They cannot themselves usurp the powers which by the articles are vested in the directors any more than the directors can usurp the powers vested by the articles in the general body of shareholders." It has been remarked that this development in the law was somewhat surprising at the time, as the relevant provisions in Table A (as it was then) seemed to contradict this approach rather than to endorse it. In most legal systems, the appointment and removal of directors is voted upon by the shareholders in general meeting or through a proxy statement. For publicly traded companies in the U.S., the directors who stand for election are largely selected either by the board as a whole or by a nominating committee.
Although in 2002 the New York Stock Exchange and NASDAQ required that nominating committees consist of independent directors as a condition of listing, nominating committees have historically received input from management in their selections even when the CEO does not have a position on the board. Shareholder nominations can occur only at the general meeting itself or through the prohibitively expensive process of mailing out ballots separately; in May 2009 the SEC proposed a new rule allowing shareholders meeting certain criteria to add nominees to the proxy statement. In practice, for publicly traded companies, the managers (inside directors) who are purportedly accountable to the board of directors have historically played a major role in selecting and nominating the directors who are voted on by the shareholders, in which case more "gray outsider directors" (independent directors with conflicts of interest) are nominated and elected. Directors may also leave office by resignation or death. In some legal systems, directors may also be removed by a resolution of the remaining directors (in some countries they may only do so "with cause"; in others the power is unrestricted). Some jurisdictions also permit the board of directors to appoint directors, either to fill a vacancy which arises on resignation or death, or as an addition to the existing directors. In practice, it can be quite difficult to remove a director by a resolution in general meeting. In many legal systems, the director has a right to receive special notice of any resolution to remove him or her; the company must often supply a copy of the proposal to the director, who is usually entitled to be heard by the meeting. The director may require the company to circulate any representations that he wishes to make. Furthermore, the director's contract of service will usually entitle him to compensation if he is removed, and may often include a generous "golden parachute", which also acts as a deterrent to removal. A recent study examined how corporate shareholders voted in director elections in the United States. It found that directors received fewer votes from shareholders when their companies performed poorly, had excess CEO compensation, or had poor shareholder protection. Directors also received fewer votes when they did not regularly attend board meetings or received negative recommendations from a proxy advisory firm. The study also shows that companies often improve their corporate governance by removing poison pills or classified boards and by reducing excessive CEO pay after their directors receive low shareholder support. Board accountability to shareholders is a recurring issue. In 2010, the "New York Times" noted that several directors who had overseen companies which failed in the financial crisis of 2007–2010 had found new positions as directors. The SEC sometimes imposes a ban (a "D&O bar") on serving on a board as part of its fraud cases, and one such ban was upheld in 2013. The exercise by the board of directors of its powers usually occurs in board meetings. Most legal systems require sufficient notice to be given to all directors of these meetings, and that a quorum must be present before any business may be conducted.
Usually, a meeting which is held without notice having been given is still valid if all of the directors attend, but it has been held that a failure to give notice may negate resolutions passed at a meeting, because the persuasive oratory of a minority of directors might have convinced the majority to change their minds and vote otherwise. In most common law countries, the powers of the board are vested in the board as a whole, and not in the individual directors. However, in some instances an individual director may still bind the company by his acts by virtue of his ostensible authority (see also the rule in "Turquand's Case"). Because directors exercise control and management over the organization, but organizations are (in theory) run for the benefit of the shareholders, the law imposes strict duties on directors in relation to the exercise of their duties. The duties imposed on directors are fiduciary duties, similar to those that the law imposes on those in similar positions of trust: agents and trustees. The duties apply to each director separately, while the powers apply to the board jointly. Also, the duties are owed to the company itself, and not to any other entity. This does not mean that directors can never stand in a fiduciary relationship to the individual shareholders; they may well have such a duty in certain circumstances. Directors must exercise their powers for a proper purpose. While in many instances an improper purpose is readily evident, such as a director looking to feather his or her own nest or to divert an investment opportunity to a relative, such breaches usually involve a breach of the director's duty to act in good faith. Greater difficulties arise where the director, while acting in good faith, is serving a purpose that is not regarded by the law as proper. The seminal authority on what amounts to a proper purpose is the Supreme Court decision in "Eclairs Group Ltd v JKX Oil & Gas plc" (2015). The case concerned the powers of directors under the articles of association of the company to suspend the voting rights attached to shares for failure to comply properly with a notice served on the shareholders. Prior to that case, the leading authority was "Howard Smith Ltd v Ampol Ltd" [1974] AC 821. That case concerned the power of the directors to issue new shares. It was alleged that the directors had issued many new shares purely to deprive a particular shareholder of his voting majority. An argument that the power to issue shares could only be properly exercised to raise new capital was rejected as too narrow; it was held that it would be a proper exercise of the directors' powers to issue shares to a larger company to ensure the financial stability of the company, or as part of an agreement to exploit mineral rights owned by the company. The mere fact that an incidental result (even a desired one) was that a shareholder lost his majority, or that a takeover bid was defeated, would not itself make the share issue improper. But if the sole purpose was to destroy a voting majority, or to block a takeover bid, that would be an improper purpose. However, not all jurisdictions recognise the "proper purpose" duty as separate from the "good faith" duty. Directors cannot, without the consent of the company, fetter their discretion in relation to the exercise of their powers, and cannot bind themselves to vote in a particular way at future board meetings.
This is so even if there is no improper motive or purpose, and no personal advantage to the director. This does not mean, however, that the board cannot agree to the company entering into a contract which binds the company to a certain course, even if certain actions in that course will require further board approval. The company remains bound, but the directors retain the discretion to vote against taking the future actions (although that may involve a breach by the company of the contract that the board previously approved). As fiduciaries, the directors may not put themselves in a position where their interests and duties conflict with the duties that they owe to the company. The law takes the view that good faith must not only be done, but must be manifestly seen to be done, and zealously patrols the conduct of directors in this regard; it will not allow directors to escape liability by asserting that their decisions were in fact well founded. Traditionally, the law has divided conflicts of duty and interest into three sub-categories. By definition, where a director enters into a transaction with a company, there is a conflict between the director's interest (to do well for himself out of the transaction) and his duty to the company (to ensure that the company gets as much as it can out of the transaction). This rule is so strictly enforced that, even where the conflict of interest or conflict of duty is purely hypothetical, the directors can be forced to disgorge all personal gains arising from it, a principle articulated by Lord Cranworth in his judgment in "Aberdeen Ry v Blaikie" (1854) 1 Macq HL 461. However, in many jurisdictions the members of the company are permitted to ratify transactions which would otherwise fall foul of this principle. It is also largely accepted in most jurisdictions that this principle can be overridden in the company's constitution. In many countries, there is also a statutory duty to declare interests in relation to any transactions, and the director can be fined for failing to make disclosure. Directors must not, without the informed consent of the company, use for their own profit the company's assets, opportunities, or information. This prohibition is much less flexible than the prohibition against transactions with the company, and attempts to circumvent it using provisions in the articles have met with limited success. In "Regal (Hastings) Ltd v Gulliver" [1942] All ER 378, the House of Lords, in upholding what was regarded as a wholly unmeritorious claim by the shareholders, held that the directors were accountable for the profits they had made; accordingly, the directors were required to disgorge those profits, and the shareholders received their windfall. The decision has been followed in several subsequent cases and is now regarded as settled law. Directors cannot compete directly with the company without a conflict of interest arising. Similarly, they should not act as directors of competing companies, as their duties to each company would then conflict with each other. Traditionally, the level of care and skill which has to be demonstrated by a director has been framed largely with reference to the non-executive director.
In "Re City Equitable Fire Insurance Co" [1925] Ch 407, it was expressed in purely subjective terms, where the court held that: However, this decision was based firmly in the older notions (see above) that prevailed at the time as to the mode of corporate decision making, and effective control residing in the shareholders; if they elected and put up with an incompetent decision maker, they should not have recourse to complain. However, a more modern approach has since developed, and in "Dorchester Finance Co Ltd v Stebbing" [1989] BCLC 498 the court held that the rule in "Equitable Fire" related only to skill, and not to diligence. With respect to diligence, what was required was: This was a dual subjective and objective test, and one deliberately pitched at a higher level. More recently, it has been suggested that both the tests of skill and diligence should be assessed objectively and subjectively; in the United Kingdom, the statutory provisions relating to directors' duties in the new Companies Act 2006 have been codified on this basis. In most jurisdictions, the law provides for a variety of remedies in the event of a breach by the directors of their duties: Historically, directors' duties have been owed almost exclusively to the company and its members, and the board was expected to exercise its powers for the financial benefit of the company. However, more recently there have been attempts to "soften" the position, and provide for more scope for directors to act as good corporate citizens. For example, in the United Kingdom, the Companies Act 2006 requires directors of companies "to promote the success of the company for the benefit of its members as a whole" and sets out the following six factors regarding a director's duty to promote success: This represents a considerable departure from the traditional notion that directors' duties are owed only to the company. Previously in the United Kingdom, under the Companies Act 1985, protections for non-member stakeholders were considerably more limited (see, for example, s.309 which permitted directors to take into account the interests of employees but which could only be enforced by the shareholders and not by the employees themselves). The changes have therefore been the subject of some criticism. Board of Directors Technology The adoption of technology that facilitates the meeting preparation and execution of directors continues to grow. Board directors are increasingly leveraging this technology to communicate and collaborate within a secure environment to access meeting materials, communicate with each other, and execute their governance responsibilities. This trend is particularly acute in the United States where a robust market of early adopters garnered acceptance of board software by organizations resulting in higher penetration of the board portal services in the region. Most companies have weak mechanisms for bringing the voice of society into the board room. They rely on personalities who weren't appointed for their understanding of societal issues. Often they give limited focus (both through time and financial resource) to issues of corporate responsibility and sustainability. A Social Board has society designed into its structure. It elevates the voice of society through specialist appointments to the board and mechanisms that empower innovation from within the organisation. Social Boards align themselves with themes that are important to society. 
These may include measuring worker pay ratios, linking personal social and environmental objectives to remuneration, integrated reporting, fair tax, and B-Corp certification. Social Boards recognise that they are part of society and that they require more than a licence to operate in order to succeed. They balance short-term shareholder pressure against long-term value creation, managing the business for a plurality of stakeholders including employees, shareholders, supply chains, and civil society. The Sarbanes–Oxley Act of 2002 introduced new standards of accountability for boards of U.S. companies and companies listed on U.S. stock exchanges. Under the Act, directors risk large fines and prison sentences in the case of accounting crimes. Internal control is now the direct responsibility of directors. The vast majority of companies covered by the Act have hired internal auditors to ensure that the company adheres to required standards of internal control. The internal auditors are required by law to report directly to an audit board, consisting of directors more than half of whom are outside directors, one of whom is a "financial expert." The law requires companies listed on the major stock exchanges (NYSE, NASDAQ) to have a majority of independent directors: directors who are not otherwise employed by the firm or in a business relationship with it. According to the Corporate Library's study, the average size of a publicly traded company's board is 9.2 members, and most boards range from 3 to 31 members. According to Investopedia, some analysts think the ideal size is seven. State law may specify a minimum number of directors, a maximum number of directors, and qualifications for directors (e.g. whether board members must be individuals or may be business entities). While a board may have several committees, two of them, the compensation committee and the audit committee, are critical and must be made up of at least three independent directors and no inside directors. Other common board committees are nominating and governance committees. Directors of Fortune 500 companies received median pay of $234,000 in 2011. Directorship is a part-time job: a recent National Association of Corporate Directors study found directors averaging just 4.3 hours a week on board work. Surveys indicate that about 20% of nonprofit foundations pay their board members, as do 2% of American nonprofit organizations. 80% of nonprofit organizations require board members to personally contribute to the organization, as BoardSource recommends; this percentage has increased in recent years. According to John Gillespie, a former investment banker and co-author of a book critical of boards, "Far too much of their time has been for check-the-box and cover-your-behind activities rather than real monitoring of executives and providing strategic advice on behalf of shareholders". At the same time, scholars have found that individual directors have a large effect on major corporate initiatives such as mergers and acquisitions and cross-border investments. The issue of gender representation on corporate boards of directors has been the subject of much criticism in recent years. Governments and corporations have responded with measures such as legislation mandating gender quotas and comply-or-explain systems to address the disproportionality of gender representation on corporate boards.
A study of the French corporate elite has found that certain social classes are also disproportionately represented on boards, with those from the upper and, especially, upper-middle classes tending to dominate.
https://en.wikipedia.org/wiki?curid=4822
Balkan Wars The Balkan Wars consisted of two conflicts that took place in the Balkan Peninsula in 1912 and 1913. Four Balkan states defeated the Ottoman Empire in the First Balkan War. In the Second Balkan War, Bulgaria fought against all four original combatants of the first war while also facing a surprise attack from Romania in the north. The conflicts ended catastrophically for the Ottoman Empire, which lost the bulk of its territory in Europe. Austria-Hungary, although not a combatant, became relatively weaker as a much enlarged Serbia pushed for union of the South Slavic peoples. The war set the stage for the Balkan crisis of 1914 and thus served as a "prelude to the First World War". By the early 20th century, Bulgaria, Greece, Montenegro and Serbia had achieved independence from the Ottoman Empire, but large elements of their ethnic populations remained under Ottoman rule. In 1912, these countries formed the Balkan League. The First Balkan War began on 8 October 1912, when the League member states attacked the Ottoman Empire, and ended eight months later with the signing of the Treaty of London on 30 May 1913. The Second Balkan War began on 16 June 1913, when Bulgaria, dissatisfied with its loss of Macedonia, attacked its former Balkan League allies. The more numerous combined Serbian and Greek armies repelled the Bulgarian offensive and counter-attacked into Bulgaria from the west and the south. Romania, having taken no part in the conflict, had intact armies to strike with and invaded Bulgaria from the north in violation of a peace treaty between the two states. The Ottoman Empire also attacked Bulgaria and advanced in Thrace, regaining Adrianople. In the resulting Treaty of Bucharest, Bulgaria retained most of the territories it had gained in the First Balkan War but was forced to cede the formerly Ottoman southern part of the Dobruja province to Romania. The background to the wars lies in the incomplete emergence of nation-states on the European territory of the Ottoman Empire during the second half of the 19th century. Serbia had gained substantial territory during the Russo-Turkish War of 1877–1878, while Greece acquired Thessaly in 1881 (although it lost a small area back to the Ottoman Empire in 1897) and Bulgaria (an autonomous principality since 1878) incorporated the formerly distinct province of Eastern Rumelia (1885). All three countries, as well as Montenegro, sought additional territories within the large Ottoman-ruled region known as Rumelia, comprising Eastern Rumelia, Albania, Macedonia, and Thrace. The First Balkan War had several main causes, briefly presented below. Throughout the 19th century, the Great Powers held different aims concerning the "Eastern Question" and the integrity of the Ottoman Empire. Russia wanted access to the "warm waters" of the Mediterranean from the Black Sea; it pursued a pan-Slavic foreign policy and therefore supported Bulgaria and Serbia. Britain wished to deny Russia access to the "warm waters" and supported the integrity of the Ottoman Empire, although it also supported a limited expansion of Greece as a backup plan in case the integrity of the Empire was no longer possible. France wished to strengthen its position in the region, especially in the Levant (today's Lebanon, Syria, and Israel). Habsburg-ruled Austria-Hungary wished for a continuation of the existence of the Ottoman Empire, since both were troubled multinational entities and the collapse of the one might weaken the other.
The Habsburgs also saw a strong Ottoman presence in the area as a counterweight to the Serbian nationalistic call to their own Serb subjects in Bosnia, Vojvodina and other parts of the empire. Italy's primary aim at the time seems to have been the denial of access to the Adriatic Sea to another major sea power. The German Empire, in turn, under the "Drang nach Osten" policy, aspired to turn the Ottoman Empire into its own de facto colony, and thus supported its integrity. In the late 19th and early 20th century, Bulgaria and Greece contended for Ottoman Macedonia and Thrace. Amid the rise of nationalism, ethnic Greeks sought the forced "Hellenization" of ethnic Bulgars, who in turn sought the "Bulgarization" of Greeks. Both nations sent armed irregulars into Ottoman territory to protect and assist their ethnic kindred. From 1904, there was low-intensity warfare in Macedonia between the Greek and Bulgarian bands and the Ottoman army (the Struggle for Macedonia). After the Young Turk revolution of July 1908, the situation changed drastically. The 1908 Young Turk Revolution saw the reinstatement of constitutional monarchy in the Ottoman Empire and the start of the Second Constitutional Era. When the revolt broke out, it was supported by intellectuals, the army, and almost all the ethnic minorities of the Empire, and it forced Sultan Abdul Hamid II to restore the long-defunct Ottoman constitution of 1876 and parliament. Hopes were raised among the Balkan ethnicities of reforms and autonomy, and elections were held to form a representative, multi-ethnic Ottoman parliament. However, following the Sultan's attempted counter-coup, the liberal element of the Young Turks was sidelined and the nationalist element became dominant. At the same time, in October 1908, Austria-Hungary seized the opportunity of the Ottoman political upheaval to annex the "de jure" Ottoman province of Bosnia and Herzegovina, which it had occupied since 1878 (see "Bosnian Crisis"). Bulgaria declared independence, as it had done in 1878, but this time the independence was internationally recognised. The Greeks of the autonomous Cretan State proclaimed unification with Greece, though the opposition of the Great Powers prevented the latter action from taking practical effect. These developments had a lasting influence on the subsequent order in the region. Serbia was frustrated in the north by Austria-Hungary's incorporation of Bosnia. In March 1909, Serbia was forced to accept the annexation and restrain anti-Habsburg agitation by Serbian nationalists. Instead, the Serbian government (under Prime Minister Nikola Pašić) looked to formerly Serb territories in the south, notably "Old Serbia" (the Sanjak of Novi Pazar and the province of Kosovo). On 15 August 1909, the Military League, a group of Greek officers, took action against the government to reform their country's national government and reorganize the army. The Military League sought the creation of a new political system, and thus summoned the Cretan politician Eleutherios Venizelos to Athens as its political advisor. Venizelos persuaded King George I to revise the constitution and asked the League to disband in favor of a National Assembly. In March 1910, the Military League dissolved itself. Bulgaria, which had secured Ottoman recognition of its independence in April 1909 and enjoyed the friendship of Russia, also looked to annex districts of Ottoman Thrace and Macedonia. In August 1910, Montenegro followed Bulgaria's precedent by becoming a kingdom.
Following the Italian victory in the Italo-Turkish War of 1911–1912, the severity of the Ottomanizing policy of the Young Turk regime, and a series of three revolts in Ottoman-held Albania, the Young Turks fell from power after a coup. The Christian Balkan countries felt compelled to act and saw this as an opportunity to promote their national agendas by expanding into the territories of the falling empire and liberating their compatriots still under Ottoman rule. To achieve this, a wide net of treaties was constructed and an alliance was formed. The negotiations among the Balkan states' governments started in the latter part of 1911 and were all conducted in secret. The treaties and military conventions were published in French translation in "Le Matin" of Paris on 24–26 November, after the Balkan Wars. In April 1911, the attempt of Greek Prime Minister Eleutherios Venizelos to reach an agreement with the Bulgarian prime minister and form a defensive alliance against the Ottoman Empire was fruitless, because of the doubts the Bulgarians held about the strength of the Greek Army. Later that year, in December 1911, Bulgaria and Serbia agreed to start negotiations on forming an alliance under the close supervision of Russia. The treaty between Serbia and Bulgaria was signed on 29 February/13 March 1912. Serbia sought expansion into "Old Serbia"; as Milan Milovanovich noted in 1909 to his Bulgarian counterpart, "As long as we are not allied with you, our influence over the Croats and Slovenes will be insignificant". Bulgaria, on the other side, wanted autonomy for the Macedonian region under the influence of the two countries. The then Bulgarian Minister of Foreign Affairs, General Stefan Paprikov, stated in 1909: "It will be clear that if not today then tomorrow, the most important issue will again be the Macedonian Question. And this question, whatever happens, cannot be decided without more or less direct participation of the Balkan States". Last but not least, they set down the divisions that should be made of the Ottoman territories after a victorious outcome of the war. More specifically, Bulgaria would gain all the territories east of the Rhodope Mountains and the Strymon River, while Serbia would annex the territories north and west of Mount Skardo. The alliance pact between Greece and Bulgaria was finally signed on 16/29 May 1912, without stipulating any specific division of Ottoman territories. In the summer of 1912, Greece proceeded to make "gentlemen's agreements" with Serbia and Montenegro. Although a draft of an alliance pact with Serbia was submitted on 22 October, a formal pact was never signed, owing to the outbreak of the war. As a result, Greece did not have any territorial or other commitments beyond the common cause of fighting the Ottoman Empire. In April 1912, Montenegro and Bulgaria reached an agreement including financial aid to Montenegro in case of war with the Ottoman Empire. A gentlemen's agreement with Greece was reached soon after, as mentioned before. By the end of September, a political and military alliance between Montenegro and Serbia was achieved. By the end of September 1912, Bulgaria thus had formal written alliances with Serbia, Greece and Montenegro. A formal alliance was also signed between Serbia and Montenegro, while the Greco-Montenegrin and Greco-Serbian agreements were essentially oral "gentlemen's agreements". All these completed the formation of the Balkan League.
At that time, the Balkan states were able to maintain armies that were both numerous, relative to each country's population, and eager to act, inspired by the idea that they would free enslaved parts of their homeland. The Bulgarian Army was the leading army of the coalition: a well-trained and fully equipped force, capable of facing the Ottoman Imperial Army. It was suggested that the bulk of the Bulgarian Army would be deployed on the Thracian front, as it was expected that the front near the Ottoman capital would be the most crucial one. The Serbian Army would act on the Macedonian front, while the Greek Army was thought powerless and was not taken into serious consideration; Greece was needed in the Balkan League only for its navy and its capability to dominate the Aegean Sea, cutting off the Ottoman armies from reinforcements. On 13/26 September 1912, the Ottoman mobilization in Thrace forced Serbia and Bulgaria to act and order their own mobilizations. On 17/30 September, Greece also ordered mobilization. On 25 September/8 October, Montenegro declared war on the Ottoman Empire, after negotiations regarding the border status had failed. On 30 September/13 October, the ambassadors of Serbia, Bulgaria and Greece delivered a common ultimatum to the Ottoman government, which was immediately rejected. The Empire withdrew its ambassadors from Sofia, Belgrade and Athens, while the Bulgarian, Serbian and Greek diplomats left the Ottoman capital, delivering the declarations of war on 4/17 October 1912. The three Slavic allies (Bulgaria, Serbia and Montenegro) had laid out extensive plans to coordinate their war efforts, in continuation of their secret prewar settlements and under close Russian supervision (Greece was not included). Serbia and Montenegro would attack in the Sandjak theater, Bulgaria and Serbia in Macedonia and Thrace. The Ottoman Empire's situation was difficult. Its population of about 26 million people provided a massive pool of manpower, but three-quarters of the population, and nearly all of its Muslim component, lived in the Asian part of the Empire. Reinforcements had to come from Asia mainly by sea, which depended on the outcome of battles between the Turkish and Greek navies in the Aegean. With the outbreak of the war, the Ottoman Empire activated three army headquarters: the Thracian HQ in Constantinople, the Western HQ in Salonika, and the Vardar HQ in Skopje, against the Bulgarians, the Greeks and the Serbians respectively. Most of the available forces were allocated to these fronts; smaller independent units were allocated elsewhere, mostly around heavily fortified cities. Montenegro was the first to declare war, on 8 October (25 September O.S.). Its main thrust was towards Shkodra, with secondary operations in the Novi Pazar area. The rest of the Allies, after delivering their common ultimatum, declared war a week later. Bulgaria attacked towards Eastern Thrace, being stopped only at the outskirts of Constantinople, at the Çatalca line and the isthmus of the Gallipoli peninsula, while secondary forces captured Western Thrace and Eastern Macedonia. Serbia attacked south towards Skopje and Monastir and then turned west to present-day Albania, reaching the Adriatic, while a second army captured Kosovo and linked up with the Montenegrin forces. Greece's main forces attacked from Thessaly into Macedonia through the Sarantaporo pass. On 7 November, in response to an Ottoman initiative, they entered into negotiations for the surrender of Thessaloniki.
With the Greeks already there, and the Bulgarian 7th Rila Division moving swiftly from the north towards Thessaloniki, Hassan Tahsin Pasha considered his position to be hopeless. The Greeks offered more attractive terms than the Bulgarians did, and on 8 November Tahsin Pasha agreed to terms; 26,000 Ottoman troops passed into Greek captivity. Before the Greeks entered the city, a German warship whisked the former sultan Abdul Hamid II out of Thessaloniki to continue his exile across the Bosporus from Constantinople. With their army in Thessaloniki, the Greeks took new positions to the east and northeast, including Nigrita. On 12 November (26 October 1912, O.S.), Greece expanded its occupied area and linked up with the Serbian army to the northwest, while its main forces turned east towards Kavala, reaching the Bulgarians. Another Greek army attacked into Epirus towards Ioannina. On the naval front, the Ottoman fleet twice exited the Dardanelles and was twice defeated by the Greek Navy, in the battles of Elli and Lemnos. Greek dominance of the Aegean Sea made it impossible for the Ottomans to transfer the planned troops from the Middle East to the Thracian front (against the Bulgarians) and to the Macedonian front (against the Greeks and Serbians). According to E. J. Erickson, the Greek Navy also played a crucial, albeit indirect, role in the Thracian campaign by neutralizing no fewer than three Ottoman corps (see First Balkan War, the Bulgarian theater of operations), a significant portion of the Ottoman Army there, in the all-important opening round of the war. After the defeat of the Ottoman fleet, the Greek Navy was also free to liberate the islands of the Aegean. General Nikola Ivanov identified the activity of the Greek Navy as the chief factor in the general success of the allies. In January, after a successful coup by young army officers, the Ottoman Empire decided to continue the war. After a failed Ottoman counter-attack on the Western-Thracian front, Bulgarian forces, with the help of the Serbian Army, managed to capture Adrianople, while Greek forces took Ioannina after defeating the Ottomans in the battle of Bizani. In the joint Serbian-Montenegrin theater of operations, the Montenegrin army besieged and captured Shkodra, ending the Ottoman presence in Europe west of the Çatalca line after nearly 500 years. The war ended officially with the Treaty of London on 30 (17) May 1913, signed after pressure from the Great Powers on Greece and Serbia, who had postponed signing in order to fortify their defensive positions. With this treaty came the end of the war between the Balkan Allies and the Ottoman Empire. From then on, the Great Powers had the right of decision on the territorial adjustments to be made, which even led to the creation of an independent Albania. Every Aegean island belonging to the Ottoman Empire, with the exception of Imbros and Tenedos, was handed over to the Greeks, including the island of Crete. Furthermore, all the European territory of the Ottoman Empire west of the Enos-Midia (Enez-Midye) line was ceded to the Balkan League, but the division of the territory among the League members was not to be decided by the Treaty itself. This led to the formation of two "de facto" military occupation zones on Macedonian territory, as Greece and Serbia tried to create a common border.
The Bulgarians were not satisfied with their share of the spoils, and as a result the Second Balkan War broke out on the night of 29 June 1913, as Bulgaria attacked the Serbian and Greek lines in Macedonia. Though the Balkan allies had fought together against the common enemy, that was not enough to overcome their mutual rivalries. In the original document of the Balkan League, Serbia had promised Bulgaria most of Macedonia. But before the first war came to an end, Serbia (in violation of the previous agreement) and Greece revealed their plan to keep possession of the territories that their forces had occupied. This act prompted the tsar of Bulgaria to invade his allies. The Second Balkan War broke out on 29 (16) June 1913, when Bulgaria attacked its erstwhile allies of the First Balkan War, Serbia and Greece; Montenegro and the Ottoman Empire intervened later against Bulgaria, and Romania attacked Bulgaria from the north in violation of a peace treaty. When the Greek army had entered Thessaloniki in the First Balkan War, ahead of the Bulgarian 7th Division by only a day, the Greeks were asked to allow a Bulgarian battalion to enter the city. Greece accepted, in exchange for allowing a Greek unit to enter the city of Serres. The Bulgarian unit that entered Thessaloniki turned out to be an 18,000-strong division instead of a battalion, which caused concern among the Greeks, who viewed it as a Bulgarian attempt to establish a condominium over the city. In the event, owing to the urgent need for reinforcements on the Thracian front, Bulgarian headquarters was soon forced to remove its troops from the city (while the Greeks agreed, by mutual treaty, to remove their units based in Serres) and transport them to Dedeağaç (modern Alexandroupolis); it nevertheless left behind a battalion that started fortifying its positions. Greece had also allowed the Bulgarians to control the stretch of the Thessaloniki–Constantinople railroad that lay in Greek-occupied territory, since Bulgaria controlled the largest part of this railroad towards Thrace. After the end of the operations in Thrace, and confirming Greek concerns, Bulgaria was not satisfied with the territory it controlled in Macedonia and immediately asked Greece to relinquish its control over Thessaloniki and the land north of Pieria, effectively handing over all of Aegean Macedonia. These unacceptable demands, together with the Bulgarian refusal to demobilize its army after the Treaty of London had ended the common war against the Ottomans, alarmed Greece, which decided to keep its own army mobilized. A month after the Second Balkan War started, the Bulgarian community of Thessaloniki no longer existed, as hundreds of long-time Bulgarian residents had been arrested. Thirteen hundred Bulgarian soldiers and about five hundred komitadjis were also arrested and transferred to Greek prisons. In November 1913, the Bulgarians were forced to admit their defeat, as the Greeks received international recognition of their claim to Thessaloniki. Similarly, in North Macedonia, the tension between Serbia and Bulgaria, due to the latter's aspirations over Vardar Macedonia, generated many incidents between their respective armies, prompting Serbia to keep its army mobilized. Serbia and Greece proposed that each of the three countries reduce its army by one fourth as a first step towards a peaceful solution, but Bulgaria rejected the proposal. Seeing the omens, Greece and Serbia started a series of negotiations and signed a treaty on 1 June (19 May) 1913.
With this treaty, a mutual border was agreed between the two countries, together with an agreement for mutual military and diplomatic support in case of a Bulgarian and/or Austro-Hungarian attack. Tsar Nicholas II of Russia, being well informed, tried to stop the coming conflict on 8 June by sending an identical personal message to the kings of Bulgaria and Serbia, offering to act as arbitrator according to the provisions of the 1912 Serbo-Bulgarian treaty. But Bulgaria, by making its acceptance of Russian arbitration conditional, in effect denied any discussion, causing Russia to repudiate its alliance with Bulgaria (see the Russo-Bulgarian military convention signed 31 May 1902). The Serbs and the Greeks had a military advantage on the eve of the war, because their armies had confronted comparatively weak Ottoman forces in the First Balkan War and had suffered relatively light casualties, while the Bulgarians had been involved in heavy fighting in Thrace. The Serbs and Greeks had also had time to fortify their positions in Macedonia. The Bulgarians held some advantages as well, controlling internal communication and supply lines. On 29 (16) June 1913, General Savov, under direct orders of Tsar Ferdinand I, issued orders to attack both Greece and Serbia without consulting the Bulgarian government and without any official declaration of war. During the night of 30 (17) June 1913, the Bulgarians attacked the Serbian army at the Bregalnica River and then the Greek army at Nigrita. The Serbian army resisted the sudden night attack, though most of its soldiers did not even know whom they were fighting, as the Bulgarian camps were located next to the Serbian ones and had been considered allied. Montenegro's forces were just a few kilometers away and also rushed into the battle. The Bulgarian attack was halted. The Greek army was also successful. It retreated according to plan for two days while Thessaloniki was cleared of the remaining Bulgarian regiment. Then the Greek army counter-attacked and defeated the Bulgarians at Kilkis (Kukush), after which the town was plundered and burnt and part of its mostly Bulgarian population was massacred by the Greek army. Following the capture of Kilkis, the Greek army's pace was not quick enough to prevent the retaliatory destruction of Nigrita, Serres, and Doxato and massacres of non-combatant Greek inhabitants at Sidirokastro and Doxato by the Bulgarian army. The Greek army then divided its forces and advanced in two directions. Part proceeded east and occupied Western Thrace. The rest of the Greek army advanced up the Struma River valley, defeating the Bulgarian army in the battles of Doiran and Mount Beles, and continued its advance northwards towards Sofia. In the Kresna straits, the Greeks were ambushed by the Bulgarian 2nd and 1st Armies, newly arrived from the Serbian front, which had already taken defensive positions there following the Bulgarian victory at Kalimanci. By 30 July, the Greek army was outnumbered by the counter-attacking Bulgarian army, which attempted to encircle the Greeks in a Cannae-type battle by applying pressure on their flanks. The Greek army was exhausted and faced logistical difficulties. The battle continued for eleven days, between 29 July and 9 August, over a 20 km maze of forests and mountains, without conclusion. The Greek king, seeing that the units he was fighting had come from the Serbian front, tried to convince the Serbs to renew their attack, as the front ahead of them was now thinner, but the Serbs declined.
By then, news came of the Romanian advance toward Sofia and its imminent fall. Facing the danger of encirclement, Constantine realized that his army could no longer continue hostilities. Thus he agreed to Eleftherios Venizelos' proposal and accepted the Bulgarian request for an armistice, as communicated through Romania. Romania had raised an army and declared war on Bulgaria on 10 July (27 June), having officially warned Bulgaria on 28 (15) June that it would not remain neutral in a new Balkan war, because of Bulgaria's refusal to cede the fortress of Silistra, as promised before the First Balkan War, in exchange for Romanian neutrality. Romania's forces encountered little resistance, and by the time the Greeks accepted the Bulgarian request for an armistice they had reached Vrazhdebna, on the outskirts of Sofia. Seeing the military position of the Bulgarian army, the Ottomans decided to intervene. They attacked and, finding no opposition, managed to recover Eastern Thrace with its fortified city of Adrianople, regaining an area in Europe only slightly larger than the present-day European territory of the Republic of Turkey. The developments that led to the First Balkan War did not go unnoticed by the Great Powers. Although there was an official consensus among the European Powers over the territorial integrity of the Ottoman Empire, which led to a stern warning to the Balkan states, unofficially each of them took a different diplomatic approach owing to their conflicting interests in the area. As a result, any possible preventive effect of the common official warning was cancelled by the mixed unofficial signals, and it failed to prevent or stop the war. The Second Balkan War was a catastrophic blow to Russian policies in the Balkans, which for centuries had focused on access to the "warm seas". First, it marked the end of the Balkan League, a vital arm of the Russian system of defense against Austria-Hungary. Second, the clearly pro-Serbian position Russia had been forced to take in the conflict, mainly due to the disagreements over land partitioning between Serbia and Bulgaria, caused a permanent break-up between Russia and Bulgaria. Accordingly, Bulgaria reoriented its policy towards the Central Powers' understanding of an anti-Serbian front, owing to its new national aspirations, now expressed mainly against Serbia. As a result, Serbia was isolated militarily against its rival Austria-Hungary, a development that eventually doomed Serbia in the coming war a year later. Most damaging of all, the new situation effectively trapped Russian foreign policy: after 1913, Russia could not afford to lose its last ally in this crucial area and thus had no alternative but to unconditionally support Serbia when the crisis between Serbia and Austria broke out in 1914. This was a position that inevitably drew Russia, however unwillingly, into a world war, with devastating results for her, since she was less prepared (both militarily and socially) for that event than any other Great Power. Austria-Hungary took alarm at the great increase in Serbia's territory at the expense of its own national aspirations in the region, as well as at Serbia's rising status, especially among Austria-Hungary's Slavic populations. This concern was shared by Germany, which saw Serbia as a satellite of Russia. This contributed significantly to the two Central Powers' willingness to go to war as soon as possible.
Finally, when a Serbian-backed organization assassinated the heir to the Austro-Hungarian throne, causing the July Crisis of 1914, no one could stop the conflict and the First World War broke out. The epilogue to this nine-month pan-Balkan war was written mostly by the Treaty of Bucharest of 10 August 1913. Delegates of Greece, Serbia, Montenegro and Bulgaria, hosted by Romania, arrived in Bucharest to settle the negotiations. The Ottoman request to participate was rejected, on the basis that the talks were to deal with matters strictly among the Balkan allies. The Great Powers maintained a very influential presence, but they did not dominate the proceedings. The Treaty partitioned Macedonia, made changes to the Balkan borders, and established the independent state of Albania. Serbia gained the territory of north-eastern Macedonia, settled its eastern borders with Bulgaria, and gained the eastern half of the Sanjak of Novi Pazar, doubling its size. Montenegro gained the western half of the Sanjak of Novi Pazar and secured its borders with Serbia. Greece more than doubled its size by gaining southern Epirus and the greater part of southern Macedonia, including the port city of Kavala on its eastern border. In addition, the Aegean Islands, apart from the Dodecanese, were annexed by the Greek Kingdom, and the unification with Crete was completed and formalized. Romania annexed the southern part of the Dobruja province. Bulgaria, finally, even though defeated, managed to hold some territorial gains from the First Balkan War: a portion of Macedonia, including the town of Strumnitza, and Western Thrace, with a 70-mile Aegean littoral including the port town of Alexandroupolis. The need to deal with the Ottoman counter-attack brought Bulgarian delegates to Constantinople to negotiate with the Ottomans. The basic purpose and hope of the Bulgarians was to regain the territories in Eastern Thrace which the bulk of the Bulgarian forces had struggled to conquer and where many soldiers had died. This hope was soon dashed, as the Turks insisted on retaining the lands that had been regained after the counter-attack. Thus the straight Ainos–Midia line never took effect as the eastern border, as the regions of Lozengrad, Lule Burgas–Bunar Hisar, and Adrianople reverted to the Ottomans. Right after the Treaty of Constantinople of 30 September 1913, Bulgaria sought an alliance pact with the Ottoman Empire, as it now regarded Macedonia as its national target in a future war against Greece and Serbia. The Treaty of Constantinople was followed by the Treaty of Athens of 14 November 1913, between the Turks and the Greeks, which concluded the conflict between the two states. However, the status of the Aegean Islands under Greek control was left in question, especially that of Imbros and Tenedos, which lay in a strategic position off the Dardanelles Straits. Despite the signing of the treaty, relations between the two countries remained very bad, and war almost broke out in the spring of 1914. Finally, a second treaty of Constantinople re-established relations between Serbia and the Ottoman Empire, officially concluding the Balkan Wars. Montenegro never signed a pact with the Turks. The Balkan Wars brought to an end Ottoman rule of the Balkan Peninsula, except for Eastern Thrace and Constantinople. The Young Turk regime was unable to reverse the decline of the Empire. It remained in power, though, and in June 1913 established a dictatorship.
A large number of Turks began fleeing into the Ottoman heartland from the lost lands. By 1914, the remaining core region of the Ottoman Empire had experienced a population increase of around 2.5 million because of the flood of immigration from the Balkans. The Soviet demographer Boris Urlanis estimated in "Voini I Narodo-Nacelenie Europi" (1960) that in the First and Second Balkan Wars there were 122,000 men killed in action, 20,000 dead of wounds, and 82,000 dead of disease. Another major issue was the partitioning of these Ottoman territories, a large area that hosted Greeks, Bulgarians, Aromanians, Serbs, Jews, Turks, Albanians and other nations, all stirred by the 19th-century rise of nationalism within the Ottoman Empire. What is more, another nation-state emerged: Albania was established on lands then occupied by Greek and Serbian forces, and both armies were asked to leave after the establishment of the new country. Greece never gained Northern Epirus, despite its substantial Greek population, and Serbia lost a wide littoral on the Adriatic Sea. The purpose behind this arrangement was the refusal of Italy and Austria-Hungary to accept a greater and more powerful Serbia. Finally, during and after the wars, the Greek fleet emerged as the only considerable naval power in the Aegean Sea, blockading the Turkish fleet inside the Dardanelles Strait. The Hellenic Navy managed to liberate the Greek islanders and boost the morale of the Greeks. However, the Greek populations in Asia Minor and Pontus faced the rage of the Young Turk regime, which answered the defeat with embargoes, exiles, persecutions and, finally, genocide. Citizens of Turkey regard the Balkan Wars as a major disaster ("Balkan harbi faciası") in the nation's history. The Ottoman Empire lost all its European territories to the west of the River Maritsa as a result of the two Balkan Wars, which thus delineated present-day Turkey's western border. The unexpected fall and sudden relinquishing of Turkish-dominated European territories created a psycho-traumatic event amongst many Turks that triggered the ultimate collapse of the empire itself within five years. Nazım Pasha, Chief of Staff of the Ottoman Army, was held responsible for the failure and was assassinated on 23 January 1913, during the 1913 Ottoman coup d'état. Most Greeks regard the Balkan Wars as a period of epic achievements: they managed to liberate and gain by conquest territories that had been inhabited by Greeks since ancient times, and they doubled the size of the Greek Kingdom. The Greek Army, small in numbers and ill-equipped in comparison with the superior Ottoman, Bulgarian and Serbian armies, won very important battles that made Greece a force to be reckoned with in the Great Powers' game. Two great personalities rose in the Greek political arena: Prime Minister Eleftherios Venizelos, the leading mind behind Greek foreign policy, and Crown Prince, and later King, Konstantinos I, the commander of the Greek Army.
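For reference, summing Urlanis's estimates quoted above (simple arithmetic on the cited figures, combining both wars and all three causes of death) gives a total military death toll of

\[ 122\,000 + 20\,000 + 82\,000 = 224\,000 \]

men lost to battle, wounds, and disease.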
https://en.wikipedia.org/wiki?curid=4823
BeBox The BeBox is a dual-CPU personal computer, briefly sold by Be Inc. to run the company's own operating system, BeOS. Notable aspects of the system include its CPU configuration, its I/O board with the "GeekPort", and the "Blinkenlights" on the front bezel. The BeBox made its debut in October 1995 (BeBox Dual603-66). The processors were upgraded to 133 MHz in August 1996 (BeBox Dual603e-133). Production was halted in January 1997, following the port of BeOS to the Macintosh, so that the company could concentrate on software. Be sold around a thousand 66 MHz BeBoxes and 800 133 MHz BeBoxes. BeBox creator Jean-Louis Gassée did not see the BeBox as a general consumer device, warning that "Before we let you use the BeBox, we believe you must have some aptitude toward programming; the standard language is C++." Initial prototypes were equipped with two AT&T Hobbit processors and three AT&T 9308S DSPs. Production models use two PowerPC 603 processors running at 66 or 133 MHz. Prototypes with dual 200 MHz CPUs or four CPUs exist, but these were never publicly available. Two yellow/green vertical LED arrays, dubbed the "blinkenlights", are built into the front bezel to illustrate the CPU load. The bottommost LED on the right side indicates hard disk activity.
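The blinkenlights concept, one LED column per processor with the lit height tracking that CPU's load, can be illustrated with a minimal sketch. The following is purely illustrative and is not Be's firmware; the function name and the ten-LED column height are assumptions made for the example.

# Illustrative sketch of the BeBox "blinkenlights" idea: each CPU gets a
# vertical LED column whose lit height tracks that CPU's load.
# NOTE: hypothetical example code, not Be's firmware; the 10-LED column
# height is an assumption for illustration.

def led_column(load, n_leds=10):
    """Return LED on/off states from bottom to top for a load in [0, 1]."""
    load = min(max(load, 0.0), 1.0)   # clamp out-of-range readings
    lit = round(load * n_leds)        # lit LEDs scale linearly with load
    return [i < lit for i in range(n_leds)]

# Example: a dual-CPU machine with one lightly and one heavily loaded CPU.
for cpu, load in enumerate((0.25, 0.80)):
    column = led_column(load)
    bar = "".join("#" if on else "." for on in reversed(column))  # top LED first
    print("CPU %d: %s" % (cpu, bar))

A real implementation would sample per-CPU utilization periodically and drive the LED latches from those samples; the linear load-to-height mapping above is the simplest plausible choice.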
https://en.wikipedia.org/wiki?curid=4825
Biomedical engineering Biomedical engineering (BME) or medical engineering is the application of engineering principles and design concepts to medicine and biology for healthcare purposes (e.g. diagnostic or therapeutic). The field seeks to close the gap between engineering and medicine, combining the design and problem-solving skills of engineering with medical and biological sciences to advance health care treatment, including diagnosis, monitoring, and therapy. Also included under the scope of a biomedical engineer is the management of current medical equipment within hospitals while adhering to relevant industry standards. This involves making equipment recommendations, procurement, routine testing, and preventive maintenance, a role also known as a Biomedical Equipment Technician (BMET) or as clinical engineering. Biomedical engineering has only recently emerged as a field of study in its own right, as compared to many other engineering fields. Such an evolution is common as a new field transitions from being an interdisciplinary specialization among already-established fields to being considered a field in itself. Much of the work in biomedical engineering consists of research and development, spanning a broad array of subfields (see below). Prominent biomedical engineering applications include the development of biocompatible prostheses, various diagnostic and therapeutic medical devices ranging from clinical equipment to micro-implants, common imaging equipment such as MRIs and EKGs/ECGs, regenerative tissue growth, and pharmaceutical drugs and therapeutic biologicals. Bioinformatics is an interdisciplinary field that develops methods and software tools for understanding biological data. As an interdisciplinary field of science, bioinformatics combines computer science, statistics, mathematics, and engineering to analyze and interpret biological data. Bioinformatics is considered both an umbrella term for the body of biological studies that use computer programming as part of their methodology, and a reference to specific analysis "pipelines" that are repeatedly used, particularly in the field of genomics. Common uses of bioinformatics include the identification of candidate genes and single-nucleotide polymorphisms (SNPs). Often, such identification is made with the aim of better understanding the genetic basis of disease, unique adaptations, desirable properties (especially in agricultural species), or differences between populations. In a less formal way, bioinformatics also tries to understand the organisational principles within nucleic acid and protein sequences. Biomechanics is the study of the structure and function of the mechanical aspects of biological systems, at any level from whole organisms to organs, cells and cell organelles, using the methods of mechanics. A biomaterial is any matter, surface, or construct that interacts with living systems. As a science, biomaterials is about fifty years old. The study of biomaterials is called biomaterials science or biomaterials engineering. It has experienced steady and strong growth over its history, with many companies investing large amounts of money into the development of new products. Biomaterials science encompasses elements of medicine, biology, chemistry, tissue engineering and materials science. Biomedical optics refers to the interaction of biological tissue and light, and how this can be exploited for sensing, imaging, and treatment. Tissue engineering, like genetic engineering (see below), is a major segment of biotechnology, one which overlaps significantly with BME.
One of the goals of tissue engineering is to create artificial organs (via biological material) for patients that need organ transplants. Biomedical engineers are currently researching methods of creating such organs. Researchers have grown solid jawbones and tracheas from human stem cells towards this end. Several artificial urinary bladders have been grown in laboratories and transplanted successfully into human patients. Bioartificial organs, which use both synthetic and biological components, are also a focus area in research, such as hepatic assist devices that use liver cells within an artificial bioreactor construct. Genetic engineering, recombinant DNA technology, genetic modification/manipulation (GM) and gene splicing are terms that apply to the direct manipulation of an organism's genes. Unlike traditional breeding, an indirect method of genetic manipulation, genetic engineering utilizes modern tools such as molecular cloning and transformation to directly alter the structure and characteristics of target genes. Genetic engineering techniques have found success in numerous applications. Some examples include the improvement of crop technology (not a medical application, but see biological systems engineering), the manufacture of synthetic human insulin through the use of modified bacteria, the manufacture of erythropoietin in hamster ovary cells, and the production of new types of experimental mice such as the oncomouse (cancer mouse) for research. Neural engineering (also known as neuroengineering) is a discipline that uses engineering techniques to understand, repair, replace, or enhance neural systems. Neural engineers are uniquely qualified to solve design problems at the interface of living neural tissue and non-living constructs. Pharmaceutical engineering is an interdisciplinary science that includes drug engineering, novel drug delivery and targeting, pharmaceutical technology, unit operations of chemical engineering, and pharmaceutical analysis. It may be deemed a part of pharmacy due to its focus on the use of technology on chemical agents in providing better medicinal treatment. Medical devices constitute an extremely broad category, essentially covering all health care products that do not achieve their intended results through predominantly chemical (e.g., pharmaceuticals) or biological (e.g., vaccines) means, and do not involve metabolism. A medical device is intended for use in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment, or prevention of disease. Some examples include pacemakers, infusion pumps, the heart-lung machine, dialysis machines, artificial organs, implants, artificial limbs, corrective lenses, cochlear implants, ocular prosthetics, facial prosthetics, somato prosthetics, and dental implants. Stereolithography is a practical example of "medical modeling" being used to create physical objects. Beyond modeling organs and the human body, emerging engineering techniques are also currently used in the research and development of new devices for innovative therapies, treatments, and patient monitoring of complex diseases. In the US, medical devices are regulated and classified into three classes according to risk, with Class I devices subject to general controls and Class III devices requiring premarket approval (see also "Regulation"). Medical/biomedical imaging is a major segment of medical devices. This area deals with enabling clinicians to directly or indirectly "view" things not visible in plain sight (for example because of their size and/or location). This can involve utilizing ultrasound, magnetism, UV, radiology, and other means.
Imaging technologies are often essential to medical diagnosis, and imaging equipment is typically the most complex found in a hospital, including: fluoroscopy, magnetic resonance imaging (MRI), nuclear medicine, positron emission tomography (PET), PET-CT scans, projection radiography such as X-rays and CT scans, tomography, ultrasound, optical microscopy, and electron microscopy. An implant is a kind of medical device made to replace and act as a missing biological structure (as compared with a transplant, which indicates transplanted biomedical tissue). The surfaces of implants that contact the body might be made of a biomedical material such as titanium, silicone or apatite, depending on what is most functional. In some cases, implants contain electronics, e.g. artificial pacemakers and cochlear implants. Some implants are bioactive, such as subcutaneous drug delivery devices in the form of implantable pills or drug-eluting stents. Artificial body part replacements are one of the many applications of bionics. Concerned with the intricate and thorough study of the properties and function of human body systems, bionics may be applied to solve some engineering problems. Careful study of the different functions and processes of the eyes, ears, and other organs paved the way for improved cameras, television, radio transmitters and receivers, and many other tools. In recent years, biomedical sensors based on microwave technology have gained attention. Different sensors can be manufactured for specific uses in both diagnosing and monitoring disease conditions; for example, microwave sensors can be used as a complementary technique to X-ray to monitor lower extremity trauma. Such a sensor monitors the dielectric properties of the tissue (bone, muscle, fat, etc.) under the skin and can thus detect changes in it: when measurements are taken at different times during the healing process, the response from the sensor changes as the trauma heals. Clinical engineering is the branch of biomedical engineering dealing with the actual implementation of medical equipment and technologies in hospitals or other clinical settings. Major roles of clinical engineers include training and supervising biomedical equipment technicians (BMETs), selecting technological products/services and logistically managing their implementation, working with governmental regulators on inspections/audits, and serving as technological consultants for other hospital staff (e.g. physicians, administrators, I.T., etc.). Clinical engineers also advise and collaborate with medical device producers regarding prospective design improvements based on clinical experience, and monitor the progression of the state of the art so as to redirect procurement patterns accordingly. Their inherent focus on practical implementation of technology has tended to keep them oriented more towards incremental redesigns and reconfigurations, as opposed to revolutionary research and development or ideas that would be many years from clinical adoption; however, there is a growing effort to expand this time horizon over which clinical engineers can influence the trajectory of biomedical innovation. In their various roles, they form a "bridge" between the primary designers and the end-users, combining the perspectives of being close to the point of use while also being trained in product and process engineering.
Clinical engineering departments will sometimes hire not just biomedical engineers, but also industrial/systems engineers to help address operations research/optimization, human factors, cost analysis, etc. (See also safety engineering for a discussion of the procedures used to design safe systems.) Rehabilitation engineering is the systematic application of engineering sciences to design, develop, adapt, test, evaluate, apply, and distribute technological solutions to problems confronted by individuals with disabilities. Functional areas addressed through rehabilitation engineering may include mobility, communications, hearing, vision, and cognition, as well as activities associated with employment, independent living, education, and integration into the community. While some rehabilitation engineers have master's degrees in rehabilitation engineering, usually a subspecialty of biomedical engineering, most rehabilitation engineers have undergraduate or graduate degrees in biomedical engineering, mechanical engineering, or electrical engineering. A Portuguese university provides an undergraduate degree and a master's degree in Rehabilitation Engineering and Accessibility. In the UK, qualification as a rehabilitation engineer is possible via a university BSc Honours degree course, such as that of the Health Design & Technology Institute at Coventry University. The rehabilitation process for people with disabilities often entails the design of assistive devices, such as walking aids, intended to promote inclusion of their users into the mainstream of society, commerce, and recreation. Regulatory requirements have constantly increased in recent decades in response to the many incidents in which devices caused harm to patients. For example, from 2008 to 2011, there were 119 FDA recalls of medical devices in the US classified as Class I. According to the U.S. Food and Drug Administration (FDA), a Class I recall is associated with "a situation in which there is a reasonable probability that the use of, or exposure to, a product will cause serious adverse health consequences or death". Regardless of country-specific legislation, the main regulatory objectives coincide worldwide: under medical device regulations, a product must be (1) safe and (2) effective, and (3) these properties must be ensured for all the devices manufactured. A product is safe if patients, users and third parties do not run unacceptable risks of physical harm (death, injury, etc.) in its intended use. Protective measures have to be introduced on devices to reduce residual risks to a level that is acceptable when compared with the benefit derived from their use. A product is effective if it performs as specified by the manufacturer in its intended use. Effectiveness is demonstrated through clinical evaluation, compliance with performance standards, or demonstration of substantial equivalence with an already marketed device. These properties have to be ensured for all the manufactured items of the medical device, which requires that a quality system be in place for all the relevant entities and processes that may impact safety and effectiveness over the whole medical device lifecycle. The medical device engineering area is among the most heavily regulated fields of engineering, and practicing biomedical engineers must routinely consult and cooperate with regulatory law attorneys and other experts.
The Food and Drug Administration (FDA) is the principal healthcare regulatory authority in the United States, having jurisdiction over medical devices, drugs, biologics, and combination products. The paramount objectives driving policy decisions by the FDA are the safety and effectiveness of healthcare products, which have to be assured through a quality system, as specified under the 21 CFR 820 regulation. In addition, because biomedical engineers often develop devices and technologies for "consumer" use, such as physical therapy devices (which are also "medical" devices), these may also be governed in some respects by the Consumer Product Safety Commission. The greatest hurdles tend to be 510(k) "clearance" (typically for Class II devices) or pre-market "approval" (typically for drugs and Class III devices). In the European context, safety, effectiveness, and quality are ensured through the "Conformity Assessment", defined as "the method by which a manufacturer demonstrates that its device complies with the requirements of the European Medical Device Directive". The directive specifies different procedures according to the class of the device, ranging from the simple Declaration of Conformity (Annex VII) for Class I devices to EC verification (Annex IV), Production quality assurance (Annex V), Product quality assurance (Annex VI), and Full quality assurance (Annex II). The Medical Device Directive specifies detailed procedures for certification. In general terms, these procedures include tests and verifications that are to be contained in specific deliverables such as the risk management file, the technical file, and the quality system deliverables. The risk management file is the first deliverable and conditions the following design and manufacturing steps; the risk management stage shall drive the product so that product risks are reduced to a level acceptable with respect to the benefits expected for the patients from the use of the device. The technical file contains all the documentation data and records supporting medical device certification. The FDA technical file has similar content, although organized in a different structure. The quality system deliverables usually include procedures that ensure quality throughout the whole product life cycle. The same standard (EN ISO 13485) is usually applied for quality management systems in the US and worldwide. In the European Union, there are certifying entities named "Notified Bodies", accredited by the European member states. The Notified Bodies must ensure the effectiveness of the certification process for all medical devices apart from Class I devices, for which a declaration of conformity produced by the manufacturer is sufficient for marketing. Once a product has passed all the steps required by the Medical Device Directive, the device is entitled to bear a CE marking, indicating that the device is believed to be safe and effective when used as intended, and it can therefore be marketed within the European Union area. The different regulatory arrangements sometimes result in particular technologies being developed first for either the U.S. or Europe, depending on the more favorable form of regulation. While nations often strive for substantive harmony to facilitate cross-national distribution, philosophical differences about the "optimal extent" of regulation can be a hindrance; more restrictive regulations seem appealing on an intuitive level, but critics decry the tradeoff cost in terms of slowing access to life-saving developments.
Directive 2011/65/EU, better known as RoHS 2, is a recast of legislation originally introduced in 2002. The original EU legislation, the "Restrictions of Certain Hazardous Substances in Electrical and Electronics Devices" directive (RoHS Directive 2002/95/EC), was replaced and superseded by 2011/65/EU, published in July 2011 and commonly known as RoHS 2. RoHS seeks to limit the dangerous substances in circulation in electronics products, in particular toxins and heavy metals, which are subsequently released into the environment when such devices are recycled. The scope of RoHS 2 is widened to include products previously excluded, such as medical devices and industrial equipment. In addition, manufacturers are now obliged to provide conformity risk assessments and test reports, or explain why they are lacking. For the first time, not only manufacturers but also importers and distributors share a responsibility to ensure that electrical and electronic equipment within the scope of RoHS complies with the hazardous substances limits and carries a CE mark. The new international standard IEC 60601 for home healthcare electro-medical devices defines the requirements for devices used in the home healthcare environment. IEC 60601-1-11 (2010) must now be incorporated into the design and verification of a wide range of home use and point-of-care medical devices, along with other applicable standards in the IEC 60601 3rd edition series. The mandatory date for implementation of the EN (European) version of the standard is June 1, 2013. The US FDA requires the use of the standard from June 30, 2013, while Health Canada recently extended the required date from June 2012 to April 2013. The North American agencies will only require these standards for new device submissions, while the EU will take the more severe approach of requiring all applicable devices being placed on the market to comply with the home healthcare standard. AS/NZS 3551:2012 is the Australian and New Zealand standard for the management of medical devices. The standard specifies the procedures required to maintain a wide range of medical assets in a clinical setting (e.g. a hospital) and is based on the IEC 60601 standards. It covers a wide range of medical equipment management elements, including procurement, acceptance testing, maintenance (electrical safety and preventive maintenance testing), and decommissioning. Biomedical engineers require considerable knowledge of both engineering and biology, and typically have a bachelor's (B.Sc., B.S., B.Eng. or B.S.E.), master's (M.S., M.Sc., M.S.E., or M.Eng.) or doctoral (Ph.D.) degree in BME (biomedical engineering) or another branch of engineering with considerable potential for BME overlap. As interest in BME increases, many engineering colleges now have a biomedical engineering department or program, with offerings ranging from the undergraduate (B.Sc., B.S., B.Eng. or B.S.E.) to doctoral levels. Biomedical engineering has only recently been emerging as its own discipline rather than a cross-disciplinary hybrid specialization of other disciplines, and BME programs at all levels are becoming more widespread, including the Bachelor of Science in Biomedical Engineering, which includes enough biological science content that many students use it as a "pre-med" major in preparation for medical school. The number of biomedical engineers is expected to rise as both a cause and effect of improvements in medical technology.
In the U.S., an increasing number of undergraduate programs are also becoming recognized by ABET as accredited bioengineering/biomedical engineering programs. Over 65 programs are currently accredited by ABET. In Canada and Australia, accredited graduate programs in biomedical engineering are common. For example, McMaster University offers an M.A.Sc, an MD/PhD, and a PhD in Biomedical engineering. The first Canadian undergraduate BME program was offered at Ryerson University as a four-year B.Eng. program. The Polytechnique in Montreal is also offering a bachelors's degree in biomedical engineering as is Flinders University. As with many degrees, the reputation and ranking of a program may factor into the desirability of a degree holder for either employment or graduate admission. The reputation of many undergraduate degrees is also linked to the institution's graduate or research programs, which have some tangible factors for rating, such as research funding and volume, publications and citations. With BME specifically, the ranking of a university's hospital and medical school can also be a significant factor in the perceived prestige of its BME department/program. Graduate education is a particularly important aspect in BME. While many engineering fields (such as mechanical or electrical engineering) do not need graduate-level training to obtain an entry-level job in their field, the majority of BME positions do prefer or even require them. Since most BME-related professions involve scientific research, such as in pharmaceutical and medical device development, graduate education is almost a requirement (as undergraduate degrees typically do not involve sufficient research training and experience). This can be either a Masters or Doctoral level degree; while in certain specialties a Ph.D. is notably more common than in others, it is hardly ever the majority (except in academia). In fact, the perceived need for some kind of graduate credential is so strong that some undergraduate BME programs will actively discourage students from majoring in BME without an expressed intention to also obtain a master's degree or apply to medical school afterwards. Graduate programs in BME, like in other scientific fields, are highly varied, and particular programs may emphasize certain aspects within the field. They may also feature extensive collaborative efforts with programs in other fields (such as the University's Medical School or other engineering divisions), owing again to the interdisciplinary nature of BME. M.S. and Ph.D. programs will typically require applicants to have an undergraduate degree in BME, or "another engineering" discipline (plus certain life science coursework), or "life science" (plus certain engineering coursework). Education in BME also varies greatly around the world. By virtue of its extensive biotechnology sector, its numerous major universities, and relatively few internal barriers, the U.S. has progressed a great deal in its development of BME education and training opportunities. Europe, which also has a large biotechnology sector and an impressive education system, has encountered trouble in creating uniform standards as the European community attempts to supplant some of the national jurisdictional barriers that still exist. Recently, initiatives such as BIOMEDEA have sprung up to develop BME-related education and professional standards. Other countries, such as Australia, are recognizing and moving to correct deficiencies in their BME education. 
Also, as high technology endeavors are usually marks of developed nations, some areas of the world are prone to slower development in education, including in BME. As with other learned professions, each state has certain (fairly similar) requirements for becoming licensed as a registered Professional Engineer (PE), but, in US, in industry such a license is not required to be an employee as an engineer in the majority of situations (due to an exception known as the industrial exemption, which effectively applies to the vast majority of American engineers). The US model has generally been only to require the practicing engineers offering engineering services that impact the public welfare, safety, safeguarding of life, health, or property to be licensed, while engineers working in private industry without a direct offering of engineering services to the public or other businesses, education, and government need not be licensed. This is notably not the case in many other countries, where a license is as legally necessary to practice engineering as it is for law or medicine. Biomedical engineering is regulated in some countries, such as Australia, but registration is typically only recommended and not required. In the UK, mechanical engineers working in the areas of Medical Engineering, Bioengineering or Biomedical engineering can gain Chartered Engineer status through the Institution of Mechanical Engineers. The Institution also runs the Engineering in Medicine and Health Division. The Institute of Physics and Engineering in Medicine (IPEM) has a panel for the accreditation of MSc courses in Biomedical Engineering and Chartered Engineering status can also be sought through IPEM. The Fundamentals of Engineering exam – the first (and more general) of two licensure examinations for most U.S. jurisdictions—does now cover biology (although technically not BME). For the second exam, called the Principles and Practices, Part 2, or the Professional Engineering exam, candidates may select a particular engineering discipline's content to be tested on; there is currently not an option for BME with this, meaning that any biomedical engineers seeking a license must prepare to take this examination in another category (which does not affect the actual license, since most jurisdictions do not recognize discipline specialties anyway). However, the Biomedical Engineering Society (BMES) is, as of 2009, exploring the possibility of seeking to implement a BME-specific version of this exam to facilitate biomedical engineers pursuing licensure. Beyond governmental registration, certain private-sector professional/industrial organizations also offer certifications with varying degrees of prominence. One such example is the Certified Clinical Engineer (CCE) certification for Clinical engineers. In 2012 there were about 19,400 biomedical engineers employed in the US, and the field was predicted to grow by 27% (much faster than average) from 2012 to 2022. Biomedical engineering has the highest percentage of female engineers compared to other common engineering professions.
https://en.wikipedia.org/wiki?curid=4827
Balkans The Balkans ( ), also known as the Balkan Peninsula, is a geographic area in Southeast Europe with various definitions and meanings, including geopolitical and historical. The region takes its name from the Balkan Mountains that stretch throughout the whole of Bulgaria from the Serbian–Bulgarian border to the Black Sea coast. The Balkan Peninsula is bordered by the Adriatic Sea in the northwest, the Ionian Sea in the southwest, the Aegean Sea in the south, the Turkish Straits in the east, and the Black Sea in the northeast. The northern border of the peninsula is variously defined. The highest point of the Balkans is Mount Musala, , in the Rila mountain range, Bulgaria. The concept of the Balkan Peninsula was created by the German geographer August Zeune in 1808, who mistakenly considered the Balkan Mountains the dominant mountain system of Southeast Europe spanning from the Adriatic Sea to the Black Sea. The term of Balkan Peninsula was a synonym for Rumelia (European Turkey) in the 19th century, the former provinces of the Ottoman Empire in Southeast Europe. It had a geopolitical rather than a geographical definition, further promoted during the creation of the Kingdom of Yugoslavia in the early 20th century. The definition of the Balkan Peninsula's natural borders do not coincide with the technical definition of a peninsula and hence modern geographers reject the idea of a Balkan peninsula, while scholars usually discuss the Balkans as a region. The term has acquired a stigmatized and pejorative meaning related to the process of Balkanization, and hence the preferred alternative term used for the region is Southeast Europe. The origin of the word "Balkan" is obscure; it may be related to Persian "bālk" 'mud', and the Turkish suffix "an" 'swampy forest' or Persian "balā-khāna" 'big high house'. Related words are also found in other Turkic languages. The term was brought in Europe with Ottoman Turkish influence, where "" means 'chain of wooded mountains' in Turkic languages. From classical antiquity through the Middle Ages, the Balkan Mountains were called by the local Thracian name "Haemus". According to Greek mythology, the Thracian king Haemus was turned into a mountain by Zeus as a punishment and the mountain has remained with his name. A reverse name scheme has also been suggested. D. Dechev considers that Haemus (Αἷμος) is derived from a Thracian word "*saimon", 'mountain ridge'. A third possibility is that "Haemus" () derives from the Greek word "haima" () meaning 'blood'. The myth relates to a fight between Zeus and the monster/titan Typhon. Zeus injured Typhon with a thunder bolt and Typhon's blood fell on the mountains, from which they got their name. The earliest mention of the name appears in an early 14th-century Arab map, in which the Haemus mountains are referred to as "Balkan". The first attested time the name "Balkan" was used in the West for the mountain range in Bulgaria was in a letter sent in 1490 to Pope Innocent VIII by Buonaccorsi Callimaco, an Italian humanist, writer and diplomat. The Ottomans first mention it in a document dated from 1565. There has been no other documented usage of the word to refer to the region before that, although other Turkic tribes had already settled in or were passing through the region. There is also a claim about an earlier Bulgar Turkic origin of the word popular in Bulgaria, however it is only an unscholarly assertion. 
The word was used by the Ottomans in Rumelia in its general meaning of mountain, as in "Kod̲j̲a-Balkan", "Čatal-Balkan", and "Ungurus-Balkani̊", but especially it was applied to the Haemus mountain. The name is still preserved in Central Asia with the Balkan Daglary (Balkan Mountains) and the Balkan Province of Turkmenistan. English traveler John Morritt introduced this term into the English literature at the end of the 18th-century, and other authors started applying the name to the wider area between the Adriatic and the Black Sea. The concept of the "Balkans" was created by the German geographer August Zeune in 1808, who mistakenly considered it as the dominant central mountain system of Southeast Europe spanning from the Adriatic Sea to the Black Sea. During the 1820s, "Balkan became the preferred although not yet exclusive term alongside Haemus among British travelers... Among Russian travelers not so burdened by classical toponymy, Balkan was the preferred term". The term was not commonly used in geographical literature until the mid-19th century because already then scientists like Carl Ritter warned that only the part South of the Balkan Mountains can be considered as a peninsula and considered it to be renamed as "Greek peninsula". Other prominent geographers who didn't agree with Zeune were Hermann Wagner, Theobald Fischer, Marion Newbigin, Albrecht Penck, while Austrian diplomat Johann Georg von Hahn in 1869 for the same territory used the term "Südostereuropäische Halbinsel" ("Southeasterneuropean peninsula"). Another reason it was not commonly accepted as the definition of then European Turkey had a similar land extent. However, after the Congress of Berlin (1878) there was a political need for a new term and gradually the Balkans was revitalized, but in the maps the northern border was in Serbia and Montenegro without Greece (it only depicted the Ottoman occupied parts of Europe), while Yugoslavian maps also included Croatia and Bosnia. The term Balkan Peninsula was a synonym for European Turkey, the political borders of former Ottoman Empire provinces. The usage of the term changed in the very end of the 19th and beginning of the 20th century when was embraced by Serbian geographers, most prominently by Jovan Cvijić. It was done with political reasoning as affirmation for Serbian nationalism on the whole territory of the South Slavs, and also included anthropological and ethnological studies of the South Slavs through which were claimed various nationalistic and racialist theories. Through such policies and Yugoslavian maps the term was elevated to the modern status of a geographical region. The term acquired political nationalistic connotations far from its initial geographic meaning, arising from political changes from the late 19th century to the creation of post–World War I Yugoslavia (initially the Kingdom of Serbs, Croats and Slovenes in 1918). After the dissolution of Yugoslavia beginning in June 1991, the term "Balkans" acquired a negative political meaning, especially in Croatia and Slovenia, as well in worldwide casual usage for war conflicts and fragmentation of a territory (see Balkanization). In part due to the historical and political connotations of the term "Balkans", especially since the military conflicts of the 1990s in Yugoslavia in the western half of the region, the term "Southeast Europe" is becoming increasingly popular. 
A European Union initiative of 1999 is called the "Stability Pact for South Eastern Europe", and the online newspaper "Balkan Times" renamed itself "Southeast European Times" in 2003. In other languages of the region, the region is known as: The Balkan Peninsula is bounded by the Adriatic Sea to the west, the Mediterranean Sea (including the Ionian and Aegean seas) and the Marmara Sea to the south and the Black Sea to the east. Its northern boundary is often given as the Danube, Sava and Kupa Rivers. The Balkan Peninsula has a combined area of about (slightly smaller than Spain). It is more or less identical to the region known as Southeast Europe. From 1920 until World War II, Italy included Istria and some Dalmatian areas (like "Zara", today's Zadar) that are within the general definition of the Balkan Peninsula. The current territory of Italy includes only the small area around Trieste inside the Balkan Peninsula. However, the regions of Trieste and Istria are not usually considered part of the Balkans by Italian geographers, due to their definition of the Balkans that limits its western border to the Kupa River. Share of total area in brackets within the Balkan Peninsula by country, by the Danube–Sava definition, with Bulgaria and Greece occupying almost the half of the territory of the Balkan Peninsula, with around 23% of the total area each: Entirely within the Balkan Peninsula: Mostly or partially within the Balkan Peninsula: The term "the Balkans" is used more generally for the region; it includes states in the region, which may extend beyond the peninsula, and is not defined by the geography of the peninsula itself. Historians state the Balkans comprise Albania, Bosnia and Herzegovina, Bulgaria, Croatia, Greece, Kosovo, Montenegro, North Macedonia, Romania, Serbia, and Slovenia. Its total area is usually given as and the population as 59,297,000 (est. 2002). Italy, although having a small part of its territory in the Balkan Peninsula, is not included in the term "the Balkans". The term Southeast Europe is also used for the region, with various definitions. Individual Balkan states can also be considered part of other regions, including Southern Europe, Eastern Europe and Central Europe. Turkey, often including its European territory, is also included in Western or Southwestern Asia. "Western Balkans" is a political neologism coined to refer to Albania and the territory of the former Yugoslavia, except Slovenia, since the early 1990s. The region of the "Western Balkans", a coinage exclusively used in Pan-European parlance, roughly corresponds to the Dinaric Alps territory. The institutions of the European Union have generally used the term "Western Balkans" to mean the Balkan area that includes countries that are not members of the European Union, while others refer to the geographical aspects. Each of these countries aims to be part of the future enlargement of the European Union and reach democracy and transmission scores but, until then, they will be strongly connected with the pre-EU waiting program CEFTA. Croatia, considered part of the Western Balkans, joined the EU in July 2013. The term is criticized for having a geopolitical, rather than a geographical meaning and definition, as a multiethnic and political area in the southeastern part of Europe. The geographical term of a peninsula defines that the water border must be longer than land, with the land side being the shortest in the triangle, but that is not the case with the Balkan Peninsula. 
Both the eastern and the western sea legs of that triangle, from Odessa to Cape Matapan (ca. 1230–1350 km) and from Trieste to Cape Matapan (ca. 1270–1285 km), are shorter than the land leg from Trieste to Odessa (ca. 1330–1365 km). The landward side is too wide for the territory to be technically proclaimed a peninsula: Szczecin (920 km) and Rostock (950 km) on the Baltic Sea are closer to Trieste than Odessa is, yet the intervening territory is not considered another European peninsula. In the literature since the late 19th and early 20th centuries, it has never been settled where exactly the northern border between the peninsula and the continent lies, including whether rivers are suitable for defining it. Studies of the Balkans often avoid addressing its natural borders, especially the northern border; André Blanc considered it a "fastidious problem" in "Geography of the Balkans" (1965), while John Lampe and Marvin Jackman in "Balkan Economic History" (1971) noted that "modern geographers seem agreed in rejecting the old idea of a Balkan Peninsula". Another issue is the name, because the Balkan Mountains, which are mostly located in northern Bulgaria, do not dominate the region by length and area in the way the Dinaric Alps do. If a Balkan peninsula is defined at all, it could be considered the territory south of the Balkan Mountains, with a possible name of "Greek-Albanian Peninsula". The term influenced the meaning of Southeast Europe, which again is defined not by geographical factors but by the historical borders of the Balkans. Croatian geographers and academics are highly critical of the inclusion of Croatia within the broad geographical, social-political and historical context of the Balkans, while the neologism Western Balkans is perceived as a humiliation of Croatia by the European political powers. According to M. S. Altić, the term has two different meanings, "geographical, ultimately undefined, and cultural, extremely negative, and recently strongly motivated by the contemporary political context". In 2018, President of Croatia Kolinda Grabar-Kitarović stated that the use of the term "Western Balkans" should be avoided because it implies not only a geographic area but also negative connotations, and that the region must instead be perceived as, and called, Southeast Europe because it is part of Europe. Most of the area is covered by mountain ranges running from the northwest to southeast. The main ranges are the Balkan Mountains (Stara Planina in Bulgarian), running from the Black Sea coast in Bulgaria to the border with Serbia; the Rilo-Rhodope massif in southern Bulgaria, northern Greece and southeastern North Macedonia; the Dinaric Alps in Bosnia and Herzegovina, Croatia and Montenegro; the Šar massif, which spreads from Albania to North Macedonia; the Pindus range, spanning from southern Albania into central Greece; and the Albanian Alps. The highest mountain of the region is Musala at 2,925 m in the Rila range in Bulgaria; Mount Olympus in Greece is second at 2,917 m, and Vihren in Bulgaria is third at 2,914 m. The karst field or polje is a common feature of the landscape. On the Adriatic and Aegean coasts the climate is Mediterranean, on the Black Sea coast the climate is humid subtropical and oceanic, and inland it is humid continental. In the northern part of the peninsula and in the mountains, winters are frosty and snowy, while summers are hot and dry. In the southern part winters are milder. 
The humid continental climate is predominant in Bosnia and Herzegovina, northern Croatia, Bulgaria, Kosovo, northern Montenegro, the Republic of North Macedonia, the interior of Albania and Serbia, while the other, less common climates, the humid subtropical and oceanic climates, are seen on the Black Sea coast of Bulgaria and Balkan Turkey (European Turkey); and the Mediterranean climate is seen on the coast of Albania, the coast of Croatia, Greece, southern Montenegro and the Aegean coast of Balkan Turkey (European Turkey). Over the centuries forests have been cut down and replaced with bush. In the southern part and on the coast there is evergreen vegetation. Inland there are woods typical of Central Europe (oak and beech, and in the mountains, spruce, fir and pine). The tree line in the mountains lies at the height of 1800–2300 m. The land provides habitats for numerous endemic species, including extraordinarily abundant insects and reptiles that serve as food for a variety of birds of prey and rare vultures. The soils are generally poor, except on the plains, where areas with natural grass, fertile soils and warm summers provide an opportunity for tillage. Elsewhere, land cultivation is mostly unsuccessful because of the mountains, hot summers and poor soils, although certain cultures such as olive and grape flourish. Resources of energy are scarce, except in Kosovo, where considerable coal, lead, zinc, chromium and silver deposits are located. Other deposits of coal, especially in Bulgaria, Serbia and Bosnia, also exist. Lignite deposits are widespread in Greece. Petroleum scarce reserves exist in Greece, Serbia and Albania. Natural gas deposits are scarce. Hydropower is in wide use, from over 1,000 dams. The often relentless bora wind is also being harnessed for power generation. Metal ores are more usual than other raw materials. Iron ore is rare, but in some countries there is a considerable amount of copper, zinc, tin, chromite, manganese, magnesite and bauxite. Some metals are exported. The Balkan region was the first area in Europe to experience the arrival of farming cultures in the Neolithic era. The Balkans have been inhabited since the Paleolithic and are the route by which farming from the Middle East spread to Europe during the Neolithic (7th millennium BC). The practices of growing grain and raising livestock arrived in the Balkans from the Fertile Crescent by way of Anatolia and spread west and north into Central Europe, particularly through Pannonia. Two early culture-complexes have developed in the region, Starčevo culture and Vinča culture. The Balkans are also the location of the first advanced civilizations. Vinča culture developed a form of proto-writing before the Sumerians and Minoans, known as the Old European script, while the bulk of the symbols had been created in the period between 4500 and 4000 BC, with the ones on the Tărtăria clay tablets even dating back to around 5300 BC. The identity of the Balkans is dominated by its geographical position; historically the area was known as a crossroads of cultures. It has been a juncture between the Latin and Greek bodies of the Roman Empire, the destination of a massive influx of pagan Bulgars and Slavs, an area where Orthodox and Catholic Christianity met, as well as the meeting point between Islam and Christianity. In pre-classical and classical antiquity, this region was home to Greeks, Illyrians, Paeonians, Thracians, Dacians, and other ancient groups. 
The Achaemenid Persian Empire incorporated parts of the Balkans comprising Macedonia, Thrace, Bulgaria, and the Black Sea coastal region of Romania between the late 6th and the first half of the 5th-century BC into its territories. Later the Roman Empire conquered most of the region and spread Roman culture and the Latin language, but significant parts still remained under classical Greek influence. The Romans considered the Rhodope Mountains to be the northern limit of the Peninsula of Haemus and the same limit applied approximately to the border between Greek and Latin use in the region (later called the Jireček Line). However large spaces south of Jireček Line were and are inhabited by Vlachs (Aromanians), the Romance-speaking heirs of Roman Empire. The Bulgars and Slavs arrived in the 6th-century and began assimilating and displacing already-assimilated (through Romanization and Hellenization) older inhabitants of the northern and central Balkans, forming the Bulgarian Empire. During the Middle Ages, the Balkans became the stage for a series of wars between the Byzantine Roman and the Bulgarian Empires. By the end of the 16th-century, the Ottoman Empire had become the controlling force in the region after expanding from Anatolia through Thrace to the Balkans. Many people in the Balkans place their greatest folk heroes in the era of either the onslaught or the retreat of the Ottoman Empire. As examples, for Greeks, Constantine XI Palaiologos and Kolokotronis; and for Serbs, Miloš Obilić and Tzar Lazar; for Montenegrins, Đurađ I Balšić and Ivan Crnojević; for Albanians, George Kastrioti Skanderbeg; for ethnic Macedonians, Nikola Karev and Goce Delčev; for Bulgarians, Vasil Levski, Georgi Sava Rakovski and Hristo Botev and for Croats, Nikola Šubić Zrinjski. In the past several centuries, because of the frequent Ottoman wars in Europe fought in and around the Balkans and the comparative Ottoman isolation from the mainstream of economic advance (reflecting the shift of Europe's commercial and political centre of gravity towards the Atlantic), the Balkans has been the least developed part of Europe. According to Halil İnalcık, "The population of the Balkans, according to one estimate, fell from a high of 8 million in the late 16th-century to only 3 million by the mid-eighteenth. This estimate is based on Ottoman documentary evidence." Most of the Balkan nation-states emerged during the 19th and early 20th centuries as they gained independence from the Ottoman Empire or the Austro-Hungarian empire: Greece in 1821, Serbia, Montenegro in 1878, Romania in 1881, Bulgaria in 1908 and Albania in 1912. In 1912–1913 the First Balkan War broke out when the nation-states of Bulgaria, Serbia, Greece and Montenegro united in an alliance against the Ottoman Empire. As a result of the war, almost all remaining European territories of the Ottoman Empire were captured and partitioned among the allies. Ensuing events also led to the creation of an independent Albanian state. Bulgaria insisted on its status quo territorial integrity, divided and shared by the Great Powers next to the Russo-Turkish War (1877–78) in other boundaries and on the pre-war Bulgarian-Serbian agreement. Bulgaria was provoked by the backstage deals between its former allies, Serbia and Greece, on the allocation of the spoils at the end of the First Balkan War. At the time, Bulgaria was fighting at the main Thracian Front. Bulgaria marks the beginning of Second Balkan War when it attacked them. 
The Serbs and the Greeks repulsed single attacks, but when the Greek army invaded Bulgaria together with an unprovoked Romanian intervention in the back, Bulgaria collapsed. The Ottoman Empire used the opportunity to recapture Eastern Thrace, establishing its new western borders that still stand today as part of modern Turkey. The First World War was sparked in the Balkans in 1914 when members of Young Bosnia, a revolutionary organization with predominantly Serb and pro-Yugoslav members, assassinated the Austro-Hungarian heir Archduke Franz Ferdinand of Austria in Bosnia and Herzegovina's capital, Sarajevo. That caused a war between Austria-Hungary and Serbia, which—through the existing chains of alliances—led to the First World War. The Ottoman Empire soon joined the Central Powers becoming one of the three empires participating in that alliance. The next year Bulgaria joined the Central Powers attacking Serbia, which was successfully fighting Austro-Hungary to the north for a year. That led to Serbia's defeat and the intervention of the Entente in the Balkans which sent an expeditionary force to establish a new front, the third one of that war, which soon also became static. The participation of Greece in the war three years later, in 1918, on the part of the Entente finally altered the balance between the opponents leading to the collapse of the common German-Bulgarian front there, which caused the exit of Bulgaria from the war, and in turn the collapse of the Austro-Hungarian Empire, ending the First World War. With the start of the Second World War, all Balkan countries, with the exception of Greece, were allies of Nazi Germany, having bilateral military agreements or being part of the Axis Pact. Fascist Italy expanded the war in the Balkans by using its protectorate Albania to invade Greece. After repelling the attack, the Greeks counterattacked, invading Italy-held Albania and causing Nazi Germany's intervention in the Balkans to help its ally. Days before the German invasion, a successful coup d'état in Belgrade by neutral military personnel seized power. Although the new government reaffirmed Serbia's intentions to fulfill its obligations as a member of the Axis, Germany, with Bulgaria, invaded both Greece and Yugoslavia. Yugoslavia immediately disintegrated when those loyal to the Serbian King and the Croatian units mutinied. Greece resisted, but, after two months of fighting, collapsed and was occupied. The two countries were partitioned between the three Axis allies, Bulgaria, Germany and Italy, and the Independent State of Croatia, a puppet state of Italy and Germany. During the occupation the population suffered considerable hardship due to repression and starvation, to which the population reacted by creating a mass resistance movement. Together with the early and extremely heavy winter of that year (which caused hundreds of thousands deaths among the poorly fed population), the German invasion had disastrous effects in the timetable of the planned invasion in Russia causing a significant delay, which had major consequences during the course of the war. Finally, at the end of 1944, the Soviets entered Romania and Bulgaria forcing the Germans out of the Balkans. They left behind a region largely ruined as a result of wartime exploitation. During the Cold War, most of the countries on the Balkans were governed by communist governments. Greece became the first battleground of the emerging Cold War. 
The Truman Doctrine was the US response to the civil war, which raged from 1944 to 1949. This civil war, unleashed by the Communist Party of Greece and backed by communist volunteers from neighboring countries (Albania, Bulgaria and Yugoslavia), led to massive American assistance for the non-communist Greek government. With this backing, Greece managed to defeat the partisans and, ultimately, remained the only non-communist country in the region. However, despite being under communist governments, Yugoslavia (1948) and Albania (1961) fell out with the Soviet Union. Yugoslavia, led by Marshal Josip Broz Tito (1892–1980), first supported and then rejected the idea of merging with Bulgaria, and instead sought closer relations with the West; it later even spearheaded, together with India and Egypt, the Non-Aligned Movement. Albania, on the other hand, gravitated toward Communist China, later adopting an isolationist position. As the only non-communist countries, Greece and Turkey were (and still are) part of NATO, composing the southeastern wing of the alliance. In the 1990s, the transition of the region's ex-Eastern bloc countries towards democratic free-market societies went peacefully. In non-aligned Yugoslavia, meanwhile, wars between the former Yugoslav republics broke out after Slovenia and Croatia held free elections and their people voted for independence in their respective countries' referenda. Serbia in turn declared the dissolution of the union unconstitutional, and the Yugoslav army unsuccessfully tried to maintain the status quo. Slovenia and Croatia declared independence on 25 June 1991, followed by the Ten-Day War in Slovenia. By October 1991, the army had withdrawn from Slovenia, while in Croatia the Croatian War of Independence would continue until 1995. In the ensuing ten years of armed confrontation, all the other republics gradually declared independence, with Bosnia the most affected by the fighting. The long-lasting wars resulted in a United Nations intervention, and NATO ground and air forces took action against Serb forces in Bosnia and Herzegovina and Serbia. From the dissolution of Yugoslavia, six republics achieved international recognition as sovereign states and are traditionally included in the Balkans: Slovenia, Croatia, Bosnia and Herzegovina, North Macedonia, Montenegro and Serbia. In 2008, while under UN administration, Kosovo declared independence (according to the official Serbian policy, Kosovo is still an internal autonomous region). In July 2010, the International Court of Justice ruled that the declaration of independence was legal. Most UN member states recognise Kosovo. After the end of the wars, a revolution broke out in Serbia, and Slobodan Milošević, the Serbian communist leader (elected president between 1989 and 2000), was overthrown and handed over for trial at the International Criminal Tribunal for crimes against international humanitarian law during the Yugoslav wars. Milošević died of a heart attack in 2006 before a verdict could be reached. In 2001, an Albanian uprising in North Macedonia forced the country to give local autonomy to the ethnic Albanians in the areas where they predominate. With the dissolution of Yugoslavia, a dispute emerged between the new country and Greece over the name under which the former (federated) republic of Macedonia would be internationally recognized. 
Being the Macedonian part of Yugoslavia (see Vardar Macedonia), the federated Republic under the Yugoslav identity had the name Republic of Macedonia on which it declared its sovereignty in 1991. Greece, having a large region (see Macedonia) also under the same name opposed to the usage of this name as an indication of a nationality. The issue was resolved under UN mediation and the Prespa agreement was reached, which saw the country's renaming into North Macedonia. Balkan countries control the direct land routes between Western Europe and South West Asia (Asia Minor and the Middle East). Since 2000, all Balkan countries are friendly towards the EU and the US. Greece has been a member of the European Union since 1981 while Slovenia is a member since 2004, Bulgaria and Romania are members since 2007, and Croatia is a member since 2013. In 2005, the European Union decided to start accession negotiations with candidate countries; Turkey, and North Macedonia were accepted as candidates for EU membership. In 2012, Montenegro started accession negotiations with the EU. In 2014, Albania is an official candidate for accession to the EU. In 2015, Serbia was expected to start accession negotiations with the EU, however this process has been stalled over the recognition of Kosovo as an independent state by existing EU member states. Greece and Turkey have been NATO members since 1952. In March 2004, Bulgaria, Romania and Slovenia have become members of NATO. As of April 2009, Albania and Croatia are members of NATO. Montenegro joined in June 2017. All other countries have expressed a desire to join the EU or NATO at some point in the future. Currently all of the states are republics, but until World War II all countries were monarchies. Most of the republics are parliamentary, excluding Romania and Bosnia which are semi-presidential. All the states have open market economies, most of which are in the upper-middle income range ($4,000–12,000 p.c.), except Croatia, Romania, Greece and Slovenia that have high income economies (over $12,000 p.c.), and are classified with very high HDI, along with Bulgaria, in contrast to the remaining states, which are classified with high HDI. The states from the former Eastern Bloc that formerly had planned economy system and Turkey mark gradual economic growth each year, only the economy of Greece drops for 2012 and meanwhile it was expected to grow in 2013. The Gross domestic product (Purchasing power parity) per capita is highest in Slovenia (over $36,000), followed by Greece (over $30,000), Croatia, Bulgaria and Romania (over $23,000), Turkey, Montenegro, Serbia, North Macedonia ($10,000–15,000) and Bosnia, Albania and Kosovo (below $10,000). The Gini coefficient, which indicates the level of difference by monetary welfare of the layers, is on the second level at the highest monetary equality in Albania, Bulgaria and Serbia, on the third level in Greece, Montenegro and Romania, on the fourth level in North Macedonia, on the fifth level in Turkey, and the most unequal by Gini coefficient is Bosnia at the eighth level which is the penultimate level and one of the highest in the world. The unemployment is lowest in Romania (below 5%), followed by Bulgaria, Serbia (5-10%), Albania, Turkey (10–15%), Greece, Bosnia, Montenegro (15–20%), North Macedonia (over 20%) and Kosovo (over 25%). 
See also the Black Sea regional organizations The region is inhabited by Albanians, Aromanians, Bulgarians, Bosniaks, Croats, Gorani, Greeks, Macedonians, Montenegrins, Serbs, Slovenes, Romanians, Turks, and other ethnic groups which present minorities in certain countries like the Romani and Ashkali. The region is a meeting point of Orthodox Christianity, Islam and Roman Catholic Christianity. Eastern Orthodoxy is the majority religion in both the Balkan Peninsula and the Balkan region, The Eastern Orthodox Church has played a prominent role in the history and culture of Eastern and Southeastern Europe. A variety of different traditions of each faith are practiced, with each of the Eastern Orthodox countries having its own national church. A part of the population in the Balkans defines itself as irreligious. The Jewish communities of the Balkans were some of the oldest in Europe and date back to ancient times. These communities were Sephardi Jews, except in Transylvania, Croatia and Slovenia, where the Jewish communities were mainly Ashkenazi Jews. In Bosnia and Herzegovina, the small and close-knit Jewish community is 90% Sephardic, and Ladino is still spoken among the elderly. The Sephardi Jewish cemetery in Sarajevo has tombstones of a unique shape and inscribed in ancient Ladino. Sephardi Jews used to have a large presence in the city of Thessaloniki, and by 1900, some 80,000, or more than half of the population, were Jews. The Jewish communities in the Balkans suffered immensely during World War II, and the vast majority were killed during the Holocaust. An exception were the Bulgarian Jews, most of whom were saved by Boris III of Bulgaria, who resisted Adolf Hitler, opposing their deportation to Nazi concentration camps. Almost all of the few survivors have emigrated to the (then) newly founded state of Israel and elsewhere. Almost no Balkan country today has a significant Jewish minority. The Balkan region today is a very diverse ethno-linguistic region, being home to multiple Slavic and Romance languages, as well as Albanian, Greek, Turkish, and others. Romani is spoken by a large portion of the Romanis living throughout the Balkan countries. Throughout history many other ethnic groups with their own languages lived in the area, among them Thracians, Illyrians, Romans, Celts and various Germanic tribes. All of the aforementioned languages from the present and from the past belong to the wider Indo-European language family, with the exception of the Turkic languages (e.g., Turkish and Gagauz). Most of the states in the Balkans are predominantly urbanized, with the lowest number of urban population as % of the total population found in Kosovo at under 40%, Bosnia and Herzegovina at 40% and Slovenia at 50%. A list of largest cities: The time zones in the Balkans are defined as the following:
https://en.wikipedia.org/wiki?curid=4829
Bohr model In atomic physics, the Rutherford–Bohr model or Bohr model, presented by Niels Bohr and Ernest Rutherford in 1913, is a system consisting of a small, dense nucleus surrounded by orbiting electrons—similar to the structure of the Solar System, but with attraction provided by electrostatic forces in place of gravity. After the cubic model (1902), the plum-pudding model (1904), the Saturnian model (1904), and the Rutherford model (1911) came the "Rutherford–Bohr model" or just "Bohr model" for short (1913). The improvement to the Rutherford model is mostly a quantum physical interpretation of it. The model's key success lay in explaining the Rydberg formula for the spectral emission lines of atomic hydrogen. While the Rydberg formula had been known experimentally, it did not gain a theoretical underpinning until the Bohr model was introduced. Not only did the Bohr model explain the reasons for the structure of the Rydberg formula, it also provided a justification for the fundamental physical constants that make up the formula's empirical results. The Bohr model is a relatively primitive model of the hydrogen atom, compared to the "valence shell atom" model. As a theory, it can be derived as a first-order approximation of the hydrogen atom using the broader and much more accurate quantum mechanics and thus may be considered to be an obsolete scientific theory. However, because of its simplicity, and its correct results for selected systems (see below for application), the Bohr model is still commonly taught to introduce students to quantum mechanics or energy level diagrams before moving on to the more accurate, but more complex, valence shell atom. A related model was originally proposed by Arthur Erich Haas in 1910 but was rejected. The quantum theory of the period between Planck's discovery of the quantum (1900) and the advent of a mature quantum mechanics (1925) is often referred to as the old quantum theory. In the early 20th century, experiments by Ernest Rutherford established that atoms consisted of a diffuse cloud of negatively charged electrons surrounding a small, dense, positively charged nucleus. Given this experimental data, Rutherford naturally considered a planetary model of the atom, the Rutherford model of 1911 – electrons orbiting a solar nucleus – however, the said planetary model of the atom has a technical difficulty: the laws of classical mechanics (i.e. the Larmor formula) predict that the electron will release electromagnetic radiation while orbiting a nucleus. Because the electron would lose energy, it would rapidly spiral inwards, collapsing into the nucleus on a timescale of around 16 picoseconds. This atom model is disastrous, because it predicts that all atoms are unstable. Also, as the electron spirals inward, the emission would rapidly increase in frequency as the orbit got smaller and faster. This would produce a continuous smear, in frequency, of electromagnetic radiation. However, late 19th century experiments with electric discharges have shown that atoms will only emit light (that is, electromagnetic radiation) at certain discrete frequencies. To overcome this hard difficulty, Niels Bohr proposed, in 1913, what is now called the "Bohr model of the atom". 
He put forward three postulates that sum up most of the model: (1) the electron moves in certain stable, circular orbits around the nucleus without radiating energy, contrary to what classical electromagnetism would suggest; (2) the stable orbits are those in which the electron's orbital angular momentum is an integer multiple of the reduced Planck constant "ħ"; and (3) the electron can only gain or lose energy by jumping from one allowed orbit to another, absorbing or emitting radiation with a frequency "ν" determined by the energy difference of the levels according to "ΔE" = "hν". Bohr's condition, that the angular momentum is an integer multiple of "ħ", was later reinterpreted in 1924 by de Broglie as a standing wave condition: the electron is described by a wave, and a whole number of wavelengths must fit along the circumference of the electron's orbit. According to the de Broglie hypothesis, matter particles such as the electron behave as waves, with a de Broglie wavelength of "λ" = "h"/("mv"). Requiring that a whole number of wavelengths fit around the orbit, "nλ" = 2π"r", implies "mvr" = "nh"/2π, where "mvr" is the angular momentum "L" of the orbiting electron; this is Bohr's second postulate. Bohr's rule thus quantizes the orbital angular momentum in units of "h"/2π, while de Broglie's relation "λ" = "h"/"p" gives the wavelength as "h" divided by the electron momentum. In 1913, however, Bohr justified his rule by appealing to the correspondence principle, without providing any sort of wave interpretation; at that time, the wave behavior of matter particles such as the electron (i.e., matter waves) was not yet suspected. In 1925, a new kind of mechanics was proposed, quantum mechanics, in which Bohr's model of electrons traveling in quantized orbits was extended into a more accurate model of electron motion. The new theory was proposed by Werner Heisenberg. Another form of the same theory, wave mechanics, was discovered by the Austrian physicist Erwin Schrödinger independently, and by different reasoning. Schrödinger employed de Broglie's matter waves, but sought wave solutions of a three-dimensional wave equation describing electrons that were constrained to move about the nucleus of a hydrogen-like atom, by being trapped by the potential of the positive nuclear charge. The Bohr model gives almost exact results only for a system where two charged points orbit each other at speeds much less than that of light. This not only involves one-electron systems such as the hydrogen atom, singly ionized helium, and doubly ionized lithium, but also includes positronium and Rydberg states of any atom where one electron is far away from everything else. It can be used for K-line X-ray transition calculations if other assumptions are added (see Moseley's law below). In high energy physics, it can be used to calculate the masses of heavy quark mesons. Calculation of the orbits requires two assumptions. If an electron in an atom is moving on an orbit with period "T", classically the electromagnetic radiation will repeat itself every orbital period. If the coupling to the electromagnetic field is weak, so that the orbit doesn't decay very much in one cycle, the radiation will be emitted in a pattern which repeats every period, so that the Fourier transform will have frequencies which are only multiples of 1/"T". This is the classical radiation law: the frequencies emitted are integer multiples of 1/"T". In quantum mechanics, this emission must be in quanta of light, of frequencies consisting of integer multiples of 1/"T", so that classical mechanics is an approximate description at large quantum numbers. This means that the energy level corresponding to a classical orbit of period "T" must have nearby energy levels which differ in energy by "h"/"T", and they should be equally spaced near that level. Bohr worried whether the energy spacing should best be calculated with the period of the initial energy state, the period of the final one, or some average; in hindsight, this model is only the leading semiclassical approximation. Bohr considered circular orbits. 
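Written out compactly, the standing-wave reinterpretation described above amounts to (a minimal restatement in standard notation):
\[ \lambda = \frac{h}{m_e v}, \qquad n\lambda = 2\pi r \;\Longrightarrow\; L = m_e v r = \frac{nh}{2\pi} = n\hbar, \qquad n = 1, 2, 3, \ldots \]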
Classically, these orbits must decay to smaller circles when photons are emitted. The level spacing between circular orbits can be calculated with the correspondence formula. For a hydrogen atom, the classical orbits have a period "T" determined by Kepler's third law to scale as "r"^(3/2). The energy scales as 1/"r", so the level spacing formula amounts to "ΔE" = "h"/"T", which scales as "r"^(−3/2). It is possible to determine the energy levels by recursively stepping down orbit by orbit, but there is a shortcut. The angular momentum "L" of the circular orbit scales as the square root of "r". The energy in terms of the angular momentum is then proportional to −1/"L"^2. Assuming, with Bohr, that quantized values of "L" are equally spaced, the spacing between neighboring energies is proportional to 1/"L"^3, and hence to the orbital frequency 1/"T". This is as desired for equally spaced angular momenta. If one kept track of the constants, the spacing would be "ħ", so the angular momentum should be an integer multiple of "ħ". This is how Bohr arrived at his model. An electron in the lowest energy level of hydrogen ("n" = 1) therefore has about 13.6 eV less energy than a motionless electron infinitely far from the nucleus. The next energy level ("n" = 2) is −3.4 eV. The third ("n" = 3) is −1.51 eV, and so on. For larger values of "n", these are also the binding energies of a highly excited atom with one electron in a large circular orbit around the rest of the atom. The hydrogen formula also coincides with the Wallis product. The combination of natural constants in the energy formula is called the Rydberg energy ("R"E), approximately 13.6 eV. This expression is clarified by interpreting it in combinations that form more natural units, for example as half the electron rest energy times the square of the fine-structure constant, "R"E = ½"m"e"c"^2"α"^2. Since this derivation is made with the assumption that the nucleus is orbited by one electron, we can generalize this result by letting the nucleus have a charge "Ze", where "Z" is the atomic number. This will now give us energy levels for hydrogenic ("hydrogen-like") atoms, which can serve as a rough order-of-magnitude approximation of the actual energy levels. So for nuclei with "Z" protons, the energy levels are (to a rough approximation) "E"n ≈ −13.6 eV · "Z"^2/"n"^2. The actual energy levels cannot be solved analytically for more than one electron (see "n"-body problem) because the electrons are not only affected by the nucleus but also interact with each other via the Coulomb force. When "Z" = 1/"α" ("Z" ≈ 137), the motion becomes highly relativistic, and "Z"^2 cancels the "α"^2 in "R"E; the orbit energy begins to be comparable to the rest energy. Sufficiently large nuclei, if they were stable, would reduce their charge by creating a bound electron from the vacuum, ejecting the positron to infinity. This is the theoretical phenomenon of electromagnetic charge screening which predicts a maximum nuclear charge. Emission of such positrons has been observed in the collisions of heavy ions to create temporary super-heavy nuclei. The Bohr formula properly uses the reduced mass of the electron and proton in all situations, instead of the mass of the electron alone. However, these two masses are very nearly the same, due to the much larger mass of the proton, about 1836.1 times the mass of the electron, so that the reduced mass in the system is the mass of the electron multiplied by the constant 1836.1/(1 + 1836.1) = 0.99946. This fact was historically important in convincing Rutherford of the importance of Bohr's model, for it explained the fact that the frequencies of lines in the spectra for singly ionized helium do not differ from those of hydrogen by a factor of exactly 4, but rather by 4 times the ratio of the reduced mass for the hydrogen vs. the helium systems, which was much closer to the experimental ratio than exactly 4. 
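Collected into explicit form, the hydrogenic energy levels and the reduced-mass correction discussed above read (standard results, restated here for convenience):
\[ E_n = -\frac{\mu e^{4}}{8\varepsilon_0^{2}h^{2}}\,\frac{Z^{2}}{n^{2}} \approx -13.6\ \text{eV}\,\frac{Z^{2}}{n^{2}}, \qquad \mu = \frac{m_e m_p}{m_e + m_p} \approx 0.99946\,m_e \ \text{(for hydrogen)}. \]
Setting \(Z = 1\) and \(n = 1, 2, 3\) reproduces the −13.6 eV, −3.4 eV and −1.51 eV levels quoted above.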
For positronium, the formula uses the reduced mass also, but in this case, it is exactly the electron mass divided by 2. For any value of the radius, the electron and the positron are each moving at half the speed around their common center of mass, and each has only one fourth the kinetic energy. The total kinetic energy is half what it would be for a single electron moving around a heavy nucleus. The Rydberg formula, which was known empirically before Bohr's formula, is seen in Bohr's theory as describing the energies of transitions or quantum jumps between orbital energy levels. Bohr's formula gives the numerical value of the already-known and measured Rydberg constant, but in terms of more fundamental constants of nature, including the electron's charge and the Planck constant. When an electron is excited from its original energy level to a higher one, it then jumps back down level by level until it returns to its original position, emitting a photon at each jump. Using the derived formula for the different energy levels of hydrogen, one may determine the wavelengths of light that a hydrogen atom can emit. The energy of a photon emitted by a hydrogen atom is given by the difference of two hydrogen energy levels, "E" = "E"i − "E"f = "R"E(1/"n"f^2 − 1/"n"i^2), where "n"f is the final energy level and "n"i is the initial energy level. Since the energy of a photon is "E" = "hc"/"λ", the wavelength of the photon given off is given by 1/"λ" = "R"(1/"n"f^2 − 1/"n"i^2). This is known as the Rydberg formula, and the Rydberg constant "R" is approximately 1.097 × 10^7 m^−1 (equivalently, the Rydberg energy "R"E = "Rhc" is about 13.6 eV). This formula was known in the nineteenth century to scientists studying spectroscopy, but there was no theoretical explanation for this form or a theoretical prediction for the value of "R", until Bohr. In fact, Bohr's derivation of the Rydberg constant, as well as the concomitant agreement of Bohr's formula with experimentally observed spectral lines of the Lyman ("n"f = 1), Balmer ("n"f = 2), and Paschen ("n"f = 3) series, and successful theoretical prediction of other lines not yet observed, was one reason that his model was immediately accepted. To apply to atoms with more than one electron, the Rydberg formula can be modified by replacing "Z" with "Z" − "b" (or "n" with "n" − "b"), where "b" is a constant representing a screening effect due to the inner-shell and other electrons (see Electron shell and the later discussion of the "Shell Model of the Atom" below). This was established empirically before Bohr presented his model. Bohr extended the model of hydrogen to give an approximate model for heavier atoms. This gave a physical picture that reproduced many known atomic properties for the first time. Heavier atoms have more protons in the nucleus, and more electrons to cancel the charge. Bohr's idea was that each discrete orbit could only hold a certain number of electrons. After that orbit is full, the next level would have to be used. This gives the atom a shell structure, in which each shell corresponds to a Bohr orbit. This model is even more approximate than the model of hydrogen, because it treats the electrons in each shell as non-interacting. But the repulsions of electrons are taken into account somewhat by the phenomenon of screening. The electrons in outer orbits do not only orbit the nucleus, but they also move around the inner electrons, so the effective charge "Z" that they feel is reduced by the number of the electrons in the inner orbit. For example, the lithium atom has two electrons in the lowest 1s orbit, and these orbit at "Z" = 2. Each one sees the nuclear charge of "Z" = 3 minus the screening effect of the other, which crudely reduces the nuclear charge by 1 unit. 
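As a quick worked example of the Rydberg formula above (standard values, not taken from the original text), the \(n_i = 3 \to n_f = 2\) transition of hydrogen gives the red Balmer-alpha line:
\[ \frac{1}{\lambda} = R\left(\frac{1}{2^{2}} - \frac{1}{3^{2}}\right) \approx 1.097\times10^{7}\ \text{m}^{-1} \times \frac{5}{36} \approx 1.52\times10^{6}\ \text{m}^{-1}, \qquad \lambda \approx 656\ \text{nm}. \]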
This means that the innermost electrons orbit at approximately 1/2 the Bohr radius. The outermost electron in lithium orbits at roughly the Bohr radius, since the two inner electrons reduce the nuclear charge by 2. This outer electron should be at nearly one Bohr radius from the nucleus. Because the electrons strongly repel each other, the effective charge description is very approximate; the effective charge "Z" doesn't usually come out to be an integer. But Moseley's law experimentally probes the innermost pair of electrons, and shows that they do see a nuclear charge of approximately "Z" − 1, while the outermost electron in an atom or ion with only one electron in the outermost shell orbits a core with effective charge "Z" − "k", where "k" is the total number of electrons in the inner shells. The shell model was able to qualitatively explain many of the mysterious properties of atoms which became codified in the late 19th century in the periodic table of the elements. One property was the size of atoms, which could be determined approximately by measuring the viscosity of gases and the density of pure crystalline solids. Atoms tend to get smaller toward the right in the periodic table, and become much larger at the next line of the table. Atoms to the right of the table tend to gain electrons, while atoms to the left tend to lose them. Every element on the last column of the table is chemically inert (noble gas). In the shell model, this phenomenon is explained by shell-filling. Successive atoms become smaller because they are filling orbits of the same size, until the orbit is full, at which point the next atom in the table has a loosely bound outer electron, causing it to expand. The first Bohr orbit is filled when it has two electrons, which explains why helium is inert. The second orbit allows eight electrons, and when it is full the atom is neon, again inert. The third orbital contains eight again, except that in the more correct Sommerfeld treatment (reproduced in modern quantum mechanics) there are extra "d" electrons. The third orbit may hold an extra 10 d electrons, but these positions are not filled until a few more orbitals from the next level are filled (filling the n=3 d orbitals produces the 10 transition elements). The irregular filling pattern is an effect of interactions between electrons, which are not taken into account in either the Bohr or Sommerfeld models and which are difficult to calculate even in the modern treatment. Niels Bohr said in 1962, "You see actually the Rutherford work was not taken seriously. We cannot understand today, but it was not taken seriously at all. There was no mention of it any place. The great change came from Moseley." In 1913 Henry Moseley found an empirical relationship between the strongest X-ray line emitted by atoms under electron bombardment (then known as the K-alpha line), and their atomic number "Z". Moseley's empirical formula was found to be derivable from Rydberg and Bohr's formula (Moseley actually mentions only Ernest Rutherford and Antonius Van den Broek in terms of models). The two additional assumptions are that [1] this X-ray line came from a transition between energy levels with quantum numbers 1 and 2, and [2] that the atomic number "Z", when used in the formula for atoms heavier than hydrogen, should be diminished by 1, to "Z" − 1. Moseley wrote to Bohr, puzzled about his results, but Bohr was not able to help. 
At that time, he thought that the postulated innermost "K" shell of electrons should have at least four electrons, not the two which would have neatly explained the result. So Moseley published his results without a theoretical explanation. Later, people realized that the effect was caused by charge screening, with an inner shell containing only 2 electrons. In the experiment, one of the innermost electrons in the atom is knocked out, leaving a vacancy in the lowest Bohr orbit, which contains a single remaining electron. This vacancy is then filled by an electron from the next orbit, which has n=2. But the n=2 electrons see an effective charge of "Z" − 1, which is the value appropriate for the charge of the nucleus when a single electron remains in the lowest Bohr orbit to screen the nuclear charge +"Z" and lower it by −1 (due to the electron's negative charge screening the nuclear positive charge). The energy gained by an electron dropping from the second shell to the first gives Moseley's law for K-alpha lines:

$E = h\nu = E_i - E_f = R_E (Z - 1)^2 \left( \frac{1}{1^2} - \frac{1}{2^2} \right)$

or

$\nu = \frac{3}{4} R_\nu (Z - 1)^2$

Here, $R_\nu = R_E/h$ is the Rydberg constant in terms of frequency, equal to 3.28 × 10^15 Hz. For example, for copper (Z = 29) this gives $\nu = (3/4)(3.28 \times 10^{15}\ \text{Hz})(28)^2 \approx 1.93 \times 10^{18}$ Hz, i.e. a wavelength of about 0.155 nm, close to the measured Kα wavelength of copper. For values of Z between 11 and 31, this latter relationship had been empirically derived by Moseley, in a simple (linear) plot of the square root of X-ray frequency against atomic number (however, for silver, Z = 47, the experimentally obtained screening term should be replaced by 0.4). Notwithstanding its restricted validity, Moseley's law not only established the objective meaning of atomic number (see Henry Moseley for detail) but, as Bohr noted, it also did more than the Rydberg derivation to establish the validity of the Rutherford/Van den Broek/Bohr nuclear model of the atom, with atomic number (place on the periodic table) standing for whole units of nuclear charge. The K-alpha line of Moseley's time is now known to be a pair of close lines, written as Kα1 and Kα2 in Siegbahn notation.

The Bohr model gives an incorrect value $L = \hbar$ for the ground state orbital angular momentum: the angular momentum in the true ground state is known from experiment to be zero. Although mental pictures fail somewhat at these levels of scale, an electron in the lowest modern "orbital" with no orbital momentum may be thought of as not rotating "around" the nucleus at all, but merely going tightly around it in an ellipse with zero area (this may be pictured as "back and forth", without striking or interacting with the nucleus). This is only reproduced in a more sophisticated semiclassical treatment like Sommerfeld's. Still, even the most sophisticated semiclassical model fails to explain the fact that the lowest energy state is spherically symmetric – it doesn't point in any particular direction. Nevertheless, in the modern "fully quantum treatment in phase space", the proper deformation (careful full extension) of the semi-classical result adjusts the angular momentum value to the correct effective one. As a consequence, the physical ground state expression is obtained through a shift of the vanishing quantum angular momentum expression, which corresponds to spherical symmetry. In modern quantum mechanics, the electron in hydrogen is a spherical cloud of probability that grows denser near the nucleus. The rate-constant of probability-decay in hydrogen is equal to the inverse of the Bohr radius, but since Bohr worked with circular orbits, not zero-area ellipses, the fact that these two numbers exactly agree is considered a "coincidence".
(However, many such coincidental agreements are found between the semiclassical and full quantum mechanical treatments of the atom; these include identical energy levels in the hydrogen atom and the derivation of a fine-structure constant, which arises from the relativistic Bohr–Sommerfeld model (see below) and which happens to be equal to an entirely different concept in full modern quantum mechanics.)

The Bohr model also has difficulty with, or else fails to explain, much of the spectra of larger atoms, the relative intensities of spectral lines, the existence of fine structure and hyperfine structure in spectral lines, and the Zeeman effect (changes in spectral lines due to external magnetic fields). Several enhancements to the Bohr model were proposed, most notably the Sommerfeld model or Bohr–Sommerfeld model, which suggested that electrons travel in elliptical orbits around a nucleus instead of the Bohr model's circular orbits. This model supplemented the quantized angular momentum condition of the Bohr model with an additional radial quantization condition, the Wilson–Sommerfeld quantization condition

$\oint p_r \, dq_r = n h$

where $p_r$ is the radial momentum canonically conjugate to the coordinate $q_r$, which is the radial position, and the integral is taken over one full orbital period $T$. The integral is the action of action-angle coordinates. This condition, suggested by the correspondence principle, is the only one possible, since the quantum numbers are adiabatic invariants. (A short check of this condition for circular orbits is given at the end of this section.)

The Bohr–Sommerfeld model was fundamentally inconsistent and led to many paradoxes. The magnetic quantum number measured the tilt of the orbital plane relative to the "xy"-plane, and it could only take a few discrete values. This contradicted the obvious fact that an atom could be turned this way and that relative to the coordinates without restriction. The Sommerfeld quantization can be performed in different canonical coordinates and sometimes gives different answers. The incorporation of radiation corrections was difficult, because it required finding action-angle coordinates for a combined radiation/atom system, which is difficult when the radiation is allowed to escape. The whole theory did not extend to non-integrable motions, which meant that many systems could not be treated even in principle. In the end, the model was replaced by the modern quantum mechanical treatment of the hydrogen atom, which was first given by Wolfgang Pauli in 1925, using Heisenberg's matrix mechanics. The current picture of the hydrogen atom is based on the atomic orbitals of wave mechanics, which Erwin Schrödinger developed in 1926.

However, this is not to say that the Bohr–Sommerfeld model was without its successes. Calculations based on the Bohr–Sommerfeld model were able to accurately explain a number of more complex atomic spectral effects. For example, up to first-order perturbations, the Bohr model and quantum mechanics make the same predictions for the spectral line splitting in the Stark effect. At higher-order perturbations, however, the Bohr model and quantum mechanics differ, and measurements of the Stark effect under high field strengths helped confirm the correctness of quantum mechanics over the Bohr model. The prevailing theory behind this difference lies in the shapes of the orbitals of the electrons, which vary according to the energy state of the electron.

The Bohr–Sommerfeld quantization conditions lead to questions in modern mathematics. A consistent semiclassical quantization condition requires a certain type of structure on the phase space, which places topological limitations on the types of symplectic manifolds which can be quantized. In particular, the symplectic form should be the curvature form of a connection of a Hermitian line bundle, which is called a prequantization.
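As a consistency check of the Wilson–Sommerfeld condition above (a worked aside in my notation, not part of the original text), applying the same quantization rule to the angular coordinate of a circular orbit recovers Bohr's original condition:

$\oint p_\phi \, d\phi = \int_0^{2\pi} L \, d\phi = 2\pi L = n h \quad\Longrightarrow\quad L = \frac{n h}{2\pi} = n\hbar$

so the elliptical-orbit theory contains Bohr's circular orbits as the special case in which the radial quantum number vanishes.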
Niels Bohr proposed a model of the atom and a model of the chemical bond. According to his model for a diatomic molecule, the electrons of the atoms of the molecule form a rotating ring whose plane is perpendicular to the axis of the molecule and equidistant from the atomic nuclei. Dynamic equilibrium of the molecular system is achieved through the balance between the attraction of the nuclei to the plane of the electron ring and the mutual repulsion of the nuclei. The Bohr model of the chemical bond also took Coulomb repulsion into account: the electrons in the ring sit at the maximum possible distance from each other.
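The balance Bohr described can be written schematically (my notation and an illustrative setup, not from the original text): for a symmetric diatomic molecule with two nuclei of charge $+Ze$ separated by a distance $d$, and $n$ ring electrons at radius $\rho$ in the midplane, the axial pull of the ring on each nucleus must cancel the internuclear repulsion:

$\frac{n Z e^2}{4\pi\varepsilon_0} \cdot \frac{d/2}{\left(\rho^2 + d^2/4\right)^{3/2}} = \frac{Z^2 e^2}{4\pi\varepsilon_0\, d^2}$

which fixes the ratio of ring radius to bond length once $n$ and $Z$ are chosen, with the electrons placed at maximum mutual separation around the ring as described above.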
https://en.wikipedia.org/wiki?curid=4831
Bombay Sapphire Bombay Sapphire is a brand of gin that was first launched in 1986 by English wine merchant IDV. In 1997 Diageo sold the brand to Bacardi. Its name originates from the popularity of gin in India during the British Raj, and "Sapphire" refers to the violet-blue Star of Bombay, which was mined in Sri Lanka and is now on display at the Smithsonian Institution. Bombay Sapphire is marketed in a flat-sided, sapphire-coloured bottle that bears a picture of Queen Victoria on the label. The flavouring of the drink comes from a recipe of ten ingredients: almond, lemon peel, liquorice, juniper berries, orris root, angelica, coriander, cassia, cubeb, and grains of paradise. Alcohol brought in from another supplier is evaporated three times using a Carterhead still, and the alcohol vapours are passed through a mesh basket containing the ten botanicals in order to gain flavour and aroma. This is felt to give the gin a lighter, more floral taste compared to those gins that are created using a copper pot still. Water from Lake Vyrnwy is added to bring the strength of Bombay Sapphire down to 40.0% ABV for the UK, the Nordics, several continental European markets, Canada and Australia. The 47.0% version is the standard for sale at duty-free stores in all markets. In 2011, plans were announced to move the manufacturing process to a new facility at Laverstoke Mill in Whitchurch, Hampshire, including the restoration of the former Portal's paper mill at the proposed site, and the construction of a visitor centre. Planning permission was granted in February 2012, and the centre opened to the public in the autumn of 2014. The visitor centre included a new construction by Thomas Heatherwick of two glasshouses for plants used as botanicals in the production of Bombay Sapphire gin. Production and bottling of the drink is contracted out by Bacardi to G&J Greenall. Bacardi also markets Bombay Original London Dry Gin (or Bombay Original Dry). Eight botanical ingredients are used in the production of the Original Dry variety, as opposed to the ten in Bombay Sapphire. "Wine Enthusiast" preferred it to Bombay Sapphire. In September 2011, Bombay Sapphire East was launched in test markets in New York and Las Vegas. This variety has another two botanicals, lemongrass and black peppercorns, in addition to the original ten. It is bottled at 42% and was designed to counteract the sweetness of American tonic water. A special edition of Bombay gin called Star of Bombay was produced in 2015 for the UK market. It is bottled at 47.5% and is distilled from grain. It features bergamot and ambrette seeds alongside Bombay's signature botanicals. This version has since been extended to several other markets. In summer 2019, Bacardi launched a limited edition gin called Bombay Sapphire English Estate, which features three additional English-sourced botanicals: pennyroyal mint, rosehip and hazelnut. It is bottled at 41%. The brand started a series of design collaborations. Their first step into the design world was a series of advertisements featuring work from currently popular designers. Their works, varying from martini glasses to tiles and cloth patterns, are labelled as “Inspired by Bombay Sapphire”. The campaign featured designers such as Marcel Wanders, Yves Behar, Karim Rashid, Ulla Darni, and Dror Benshetrit and performance artist Jurgen Hahn. From the success of this campaign, the company began a series of events and sponsored locations.
The best known is the Bombay Sapphire Designer Glass Competition, held each year, in which design students from all over the world can participate by designing their own “inspired” martini cocktail glass. The finalists (one from each participating country) are then invited to the yearly Salone del Mobile, an international design fair in Milan, where the winner is chosen. Bombay Sapphire also endorses glass artists and designers with the Bombay Sapphire Prize, which is awarded every year to an outstanding design featuring glass. Bombay Sapphire also showcases the designers' work in the Bombay Sapphire-endorsed "blue room", a design exhibition touring the world each year. From 2008, the Bombay Sapphire Designer Glass Competition final has been held at 100% Design in London, UK, and the Bombay Sapphire Prize has taken place in Milan at the Salone del Mobile. Bombay Sapphire has been reviewed by several outside spirit ratings organizations with mixed results. It was awarded a score of 92 (on a 100-point scale) by the Beverage Testing Institute. Ratings aggregator Proof66.com categorizes the Sapphire as a Tier 2 spirit, indicating highly favourable "expert" reviews.
https://en.wikipedia.org/wiki?curid=4832
Data General Nova The Data General Nova is a series of 16-bit minicomputers released by the American company Data General. The Nova family was very popular in the 1970s and ultimately sold tens of thousands of units. The first model, known simply as "Nova", was released in 1969. The Nova was packaged into a single rack-mount case and had enough computing power to handle most simple tasks. The Nova became popular in science laboratories around the world. It was followed the next year by the SuperNOVA, which ran roughly four times as fast. Introduced during a period of rapid progress in integrated circuit (IC, or "chip") design, the line went through several upgrades over the next five years, introducing the 800 and 1200, the Nova 2, Nova 3, and ultimately the Nova 4. A single-chip implementation was also introduced as the microNOVA in 1977, but did not see widespread use as the market moved to new microprocessor designs. Fairchild Semiconductor also introduced a microprocessor version of the Nova in 1977, the Fairchild 9440, but it also saw limited use in the market. The Nova line was succeeded by the Data General Eclipse, which was similar in most ways but added virtual memory support and other features required by modern operating systems. A 32-bit upgrade of the Eclipse resulted in the Eclipse MV series of the 1980s. Edson de Castro was the Product Manager of the pioneering Digital Equipment Corporation (DEC) PDP-8, a 12-bit computer generally considered to be the first true minicomputer. He also led the design of the upgraded PDP-8/I, which used early integrated circuits in place of individual transistors. During the PDP-8/I process, de Castro had been visiting circuit board manufacturers who were making rapid strides in the complexity of the boards they could assemble. de Castro concluded that the 8/I could be produced using fully automated assembly on large boards, which would have been impossible only a year earlier. Others within DEC had become used to the smaller boards used in earlier machines and were concerned about tracking down problems when there were many components on a single board. For the 8/I, the decision was made to stay with small boards, using the new "flip-chip" packaging for a modest improvement in density. During the period when the PDP-8 was being developed, the introduction of ASCII and its major update in 1967 led to a new generation of designs with word lengths that were multiples of 8 bits rather than multiples of 6 bits as in most previous designs. This led to mid-range designs working at 16-bit word lengths instead of DEC's current 12- and 18-bit lineups. de Castro was convinced that it was possible to improve upon the PDP-8 by building a 16-bit minicomputer CPU on a single 15-inch square board. In 1967, de Castro began a new design effort known as "PDP-X" which included several advanced features. Among these was a single underlying design that could be used to build 8-, 16- and 32-bit platforms. This progressed to the point of producing several detailed architecture documents. Ken Olsen was not supportive of this project, feeling it did not offer sufficient advantages over the 12-bit PDP-8 and the 18-bit PDP-9. It was eventually canceled in the spring of 1968. Cancellation of the PDP-X prompted de Castro to consider leaving DEC to build a system on his own. He was not alone; in late 1967 a group of like-minded engineers formed to consider such a machine.
The group included Pat Green, a divisional manager, Richard Sogge, another hardware engineer, and a software engineer, Henry Burkhardt II. In contrast to the PDP-X, the new effort focused on a single machine that could be brought to market quickly, as de Castro felt the PDP-X concept was far too ambitious for a small startup company. Discussing it with the others at DEC, the initial concept led to an 8-bit machine which would be less costly to implement. At this time the group began talking with Herbert Richman, a salesman for Fairchild Semiconductor who knew the others through his contacts with DEC. Richman pointed out that the machine's internal word length didn't have to be the same as its external presentation; one could have a 16-bit machine that used a 4-bit arithmetic logic unit (ALU), for instance. This could be cheaply implemented with a single modern IC, which Fairchild just happened to be introducing at that time, in the form of the 74181. This approach significantly reduced the complexity and cost of the main logic and is responsible for the Nova's low selling cost. The new design used a simple load–store architecture which would reemerge in the RISC designs of the 1980s. As implementation in chips rapidly reduced the cost of flip-flops, the designers offset the lack of addressing modes of the load/store design by adding four general-purpose accumulators, instead of the single register that would be found in similar low-cost offerings. In keeping with the original packaging concept of the 8/I, the Nova was based on two printed circuit boards, one for the CPU and another for various support systems. The boards were designed so they could be connected together using a printed circuit backplane, with minimal manual wiring, allowing all the boards to be built in an automated fashion. This greatly reduced costs over the 8/I, which consisted of many smaller boards that had to be wired together at the backplane. The larger-board construction also made the Nova more reliable, which made it especially attractive for industrial or lab settings. Fairchild provided the medium-scale integration (MSI) chips used throughout the system. Late in 1967, Richman introduced the group to New York-based lawyer Fred Adler, who began canvassing various funding sources for seed capital. By 1968, Adler had arranged a major funding deal with a consortium of venture capital funds from the Boston area, who agreed to provide an initial $400,000 investment with a second $400,000 available for production ramp-up. de Castro, Burkhardt and Sogge quit DEC and started Data General (DG) on 15 April 1968. Green did not join them, considering the venture too risky, and Richman did not join until the product was up and running later in the year. Work on the first system took about nine months, and the first sales efforts started that November. They had a bit of luck because the Fall Joint Computer Conference had been delayed until December that year, so they were able to bring a working unit to San Francisco, where they ran a version of "Spacewar!". DG officially released the Nova in 1969 at a base price of US$3,995, advertising it as "the best small computer in the world." The basic model was not very useful out of the box, and adding RAM in the form of core memory typically brought the price up to $7,995. The first sale was to a university in Texas, with the team hand-building an example which shipped out in February.
However, this was in the midst of a strike in the airline industry, and the machine never arrived. They sent a second example, which did arrive once the strike had ended, and in May the original one was finally delivered as well. The system was successful from the start, with the 100th being sold after six months, and the 500th after 15 months. Sales accelerated as newer versions were introduced, and by 1975 the company had annual sales of $100 million. Ken Olsen had publicly predicted that DG would fail, but with the release of the Nova it was clear that was not going to happen. By this time a number of other companies were talking about introducing 16-bit designs as well. Olsen decided these presented a threat to DEC's 18-bit as well as 12-bit lines, and began a new 16-bit design effort. This emerged in 1970 as the PDP-11, a much more complex design that was as different from the PDP-X as the Nova was. The two designs competed heavily in the market. Rumors of the new system from DEC reached DG shortly after the Nova began shipping. In the spring of 1970 they hired a new designer, Larry Seligman, to leapfrog any possible machine in the making. Two major changes had taken place since the Nova was designed: one was that ICs continued to improve and offer higher densities, and another was that Intel was aggressively talking up semiconductor-based memories, promising 1000 bits on a single chip and running at much higher speeds than core. Seligman's new design took advantage of both of these improvements. To start, the new ICs allowed the ALU to be expanded to a full 16-bit width, making the new design four times as fast at math as the original. In addition, a new smaller core memory was used that improved the cycle time from the original's 1,200 ns to 800 ns, offering a further performance gain. Performance could be further improved by replacing the core with read-only memory; lacking core's read/write cycle, this could be accessed at 300 ns for a dramatic performance boost. The resulting machine, known as the SuperNOVA, was released in 1970. Although the initial models still used core, the entire design was based on the premise that faster semiconductor memories would become available and the platform could make full use of them. This was introduced later the same year as the SuperNOVA SC, featuring semiconductor (SC) memory. The much higher performance memory allowed the CPU, which was synchronous with memory, to be further increased in speed to run at a 300 ns cycle time (3.3 MHz). This made it the fastest available minicomputer for many years. However, the new memory was also very expensive and ran hot, so it was not widely used. While Seligman was working on the SuperNOVA, the company received a letter from Ron Gruner stating "I've read about your product, I've read your ads, and I'm going to work for you. And I'm going to be at your offices in a week to talk to you about that." He was hired on the spot. By this point, the company had decided it needed a new version of their low-cost platform to take advantage of the changes in the market, and a new concept emerged in which a single machine could be swapped out in the field by the customer if they needed to move from the low-cost to the high-performance system. Gruner was put in charge of the low-cost machine while Seligman designed a matching high-performance version. Gruner's low-cost model launched in 1970 as the Nova 1200, the 1200 referring to the use of the original Nova's 1,200 ns core memory.
It also used the original 4-bit ALU, and was thus essentially a repackaged Nova. Seligman's repackaged SuperNOVA was released in 1971 as the Nova 800, resulting in the somewhat confusing naming in which the lower-numbered model has the higher performance. Both models were offered in a variety of cases, the 1200 with seven slots, the 1210 with four and the 1220 with fourteen. By this time the PDP-11 was finally shipping. It offered a much richer instruction set architecture than the deliberately simple one in the Nova. Continuing improvement in IC designs, and especially their price–performance ratio, was eroding the value of the original simplified instructions. Seligman was put in charge of designing a new machine that would be compatible with the Nova while offering a much richer environment for those who wanted it. This concept shipped as the Data General Eclipse series, which offered the ability to add additional circuitry to tailor the instruction set for scientific or data processing workloads. The Eclipse was successful in competing with the PDP-11 at the higher end of the market. Around the same time, rumors of a new 32-bit machine from DEC began to surface. DG decided they had to have a similar product, and Gruner was put in charge of what became the Fountainhead Project. Given the scope of the project, they agreed that the entire effort should be handled off-site, and Gruner selected a location at Research Triangle Park in North Carolina. This design became very complex and was ultimately canceled years later. While these efforts were underway, work on the Nova line continued. The 840, first offered in 1973, also included a new paged memory system allowing for addresses of up to 17 bits. An index offset the base address into the larger 128 kword memory. Actually installing this much memory required considerable space; the 840 shipped in a large 14-slot case. The next version was the Nova 2, with the first versions shipping in 1973. The Nova 2 was essentially a simplified version of the earlier machines, as increasing chip densities allowed the CPU to be reduced in size. While the SuperNOVA used three 15×15" boards to implement the CPU and its memory, the Nova 2 fitted all of this onto a single board. ROM was used to store the boot code, which was then copied into core when the "program load" switch was flipped. Versions were available with four, seven and ten slots. Further improvements in core led to the 2/10 and 2/4, referring to their cycle times in milliseconds. The Nova 3 of 1975 added two more registers, used to control access to a built-in stack. The processor was also re-implemented using TTL components, further increasing the performance of the system. The Nova 3 was offered in four-slot (the Nova 3/4) and twelve-slot (the Nova 3/12) versions. It appears that Data General originally intended the Nova 3 to be the last of its line, planning to replace the Nova with the later Eclipse machines. However, continued demand led to a Nova 4 machine, this time based on four AMD Am2901 bit-slice ALUs. This machine was designed from the start to be both the Nova 4 and the Eclipse S/140, with different microcode for each. A floating-point co-processor was also available, taking up a separate slot. An additional option allowed for memory mapping, allowing programs to access up to 128 kwords of memory using bank switching. Unlike the earlier machines, the Nova 4 did not include a front panel console and instead relied on the terminal to emulate a console when needed.
There were three different versions of the Nova 4: the Nova 4/C, the Nova 4/S and the Nova 4/X. The Nova 4/C was a single-board implementation that included all of the memory (16 or 32 kwords). The Nova 4/S and 4/X used separate memory boards. The Nova 4/X had the on-board memory management unit (MMU) enabled to allow up to 128 kwords of memory to be used (the MMU was also installed in the Nova 4/S, but was disabled by firmware). Both the 4/S and the 4/X included a “prefetcher” to increase performance by fetching up to two instructions from memory before they were needed. Data General also produced a series of single-chip implementations of the Nova processor as the microNOVA. Changes to the bus architecture limited speed dramatically, to the point where it was about one-half the speed of the original Nova. The original microNOVA with the “mN601” processor shipped in 1977. It was followed by the microNOVA MP/100 in 1979, which reduced the CPU to a single VLSI chip, the mN602. A larger version was also offered as the microNOVA MP/200, shipping the same year. The microNOVA was later re-packaged in a PC-style case with two floppy disks as the Enterprise. Enterprise shipped in 1981, running RDOS, but the introduction of the IBM PC the same year caused most other machines to fade from attention. The Nova influenced the design of both the Xerox Alto (1973) and Apple I (1976) computers, and its architecture was the basis for the Computervision CGP (Computervision Graphics Processor) series. Its external design has been reported to be the direct inspiration for the front panel of the MITS Altair (1975) microcomputer. Data General followed up on the success of the original Nova with a series of faster designs. The Eclipse family of systems was later introduced with an extended upwardly compatible instruction set, and the MV-series further extended the Eclipse into a 32-bit architecture to compete with the DEC VAX. The development of the MV-series was documented in Tracy Kidder's popular 1981 book, "The Soul of a New Machine". Data General itself would later evolve into a vendor of Intel processor-based servers and storage arrays, eventually being purchased by EMC. The Nova, unlike the PDP-8, was a load–store architecture. It had four 16-bit accumulator registers, of which two (2 and 3) could be used as index registers. There was a 15-bit program counter and a single-bit carry register. As with the PDP-8, addressing was centered on the current page and page zero. There was no stack register, but later Eclipse designs would utilize a dedicated hardware memory address for this function. The earliest models of the Nova processed math serially in 4-bit packets, using a single 74181 bit-slice ALU. A year after its introduction, this design was improved to include a full 16-bit parallel math unit using four 74181s, this design being referred to as the SuperNOVA. Future versions of the system added a stack unit and hardware multiply/divide. The Nova 4 / Eclipse S/140 was based on four AMD 2901 bit-slice ALUs, with microcode in read-only memory, and was the first Nova designed for DRAM main memory only, without provision for magnetic core memory. The first models were available with 8K words of magnetic core memory as an option, one that practically everyone had to buy, bringing the system cost up to $7,995.
This core memory board was organized in planar fashion as four groups of four banks, each bank carrying two sets of core in a 64 by 64 matrix; thus there were 64 x 64 = 4096 bits per set, x 2 sets giving 8,192 bits, x 4 banks giving 32,768 bits, x 4 groups giving a total of 131,072 bits, and this divided by the machine word size of 16 bits gave 8,192 words of memory. The core on this 8K word memory board occupied a centrally located "board-on-a-board", 5.25" wide by 6.125" high, and was covered by a protective plate. It was surrounded by the necessary support driver read-write-rewrite circuitry. All of the core and the corresponding support electronics fit onto a single standard 15 x 15-inch board. Up to 32K of such core RAM could be supported in one external expansion box. Semiconductor ROM was already available at the time, and RAM-less systems (i.e. with ROM only) became popular in many industrial settings. The original Nova machines ran at approximately 200 kHz, but the SuperNOVA was designed to run at up to 3 MHz when used with special semiconductor main memory. The standardized backplane and I/O signals created a simple, efficient I/O design that made interfacing programmed I/O and Data Channel devices to the Nova straightforward compared to competing machines. In addition to its dedicated I/O bus structure, the Nova backplane had wire wrap pins that could be used for non-standard connectors or other special purposes. The instruction format could be broadly categorized into one of three functions: 1) register-to-register manipulation, 2) memory reference, and 3) input/output. Each instruction was contained in one word. The register-to-register manipulation was almost RISC-like in its bit efficiency: an instruction that manipulated register data could also perform tests, shifts, and even elect to discard the result. Hardware options included an integer multiply and divide unit, a floating-point unit (single and double precision), and memory management. The earliest Nova came with a BASIC interpreter on punched tape. As the product grew, Data General developed many languages for the Nova computers, running under a range of consistent operating systems. FORTRAN IV, ALGOL, Extended BASIC, Data General Business Basic, Interactive COBOL, and several assemblers were available from Data General. Third party vendors and the user community expanded the offerings with Forth, Lisp, BCPL, C, ALGOL, and other proprietary versions of COBOL and BASIC. The machine instructions implemented below are the common set implemented by all of the Nova series processors. Specific models often implemented additional instructions, and some instructions were provided by optional hardware. All arithmetic instructions operated between accumulators. For operations requiring two operands, one was taken from the source accumulator, and one from the destination accumulator, and the result was deposited in the destination accumulator. For single-operand operations, the operand was taken from the source register and the result replaced the destination register. For all single-operand opcodes, it was permissible for the source and destination accumulators to be the same, and the operation functioned as expected. All arithmetic instructions included a "no-load" bit which, when set, suppressed the transfer of the result to the destination register; this was used in conjunction with the test options to perform a test without losing the existing contents of the destination register.
In assembly language, adding a '#' to the opcode set the no-load bit. The CPU contained a single-bit register called the carry bit, which after an arithmetic operation would contain the carry out of the most significant bit. The carry bit could be set to a desired value prior to performing the operation, using a two-bit field in the instruction: the bit could be set, cleared, or complemented prior to performing the instruction. In assembly language, these options were specified by adding a letter to the opcode: 'O' — set the carry bit; 'Z' — clear the carry bit; 'C' — complement the carry bit; nothing — leave the carry bit alone. If the no-load bit was also specified, the specified carry value would be used for the computation, but the actual carry register would remain unaltered. All arithmetic instructions included a two-bit field which could be used to specify a shift option, which would be applied to the result before it was loaded into the destination register. A single-bit left or right shift could be specified, or the two bytes of the result could be swapped. Shifts were 17-bit circular, with the carry bit "to the left" of the most significant bit. In other words, when a left shift was performed, the most significant bit of the result was shifted into the carry bit, and the previous contents of the carry bit were shifted into the least significant bit of the result. Byte swaps did not affect the carry bit. In assembly language, these options were specified by adding a letter to the opcode: 'L' — shift left; 'R' — shift right; 'S' — swap bytes; nothing — do not perform a shift or swap. All arithmetic instructions included a three-bit field that could specify a test which was to be applied to the result of the operation. If the test evaluated to true, the next instruction in line was skipped. In assembly language, the test option was specified as a third operand to the instruction. The available tests were SKP (always skip), SZC (skip on zero carry), SNC (skip on nonzero carry), SZR (skip on zero result), SNR (skip on nonzero result), SEZ (skip if either carry or result is zero), and SBN (skip if both are nonzero). The actual arithmetic instructions were MOV (move), COM (bitwise complement), NEG (negate), INC (increment), ADC (add complement), SUB (subtract), ADD (add), and AND (bitwise and). An example arithmetic instruction, with all options utilized, is:

ADDZR# 0,2,SNC

This decoded as: clear the carry bit; add the contents of AC2 (accumulator 2) to AC0; circularly shift the result one bit to the right; test the result to see if the carry bit is set, and skip the next instruction if so; discard the result after performing the test. In effect, this adds two numbers and tests to see if the result is odd or even. The Nova instruction set contained a pair of instructions that transferred memory contents to accumulators and vice versa, two transfer-of-control instructions, and two instructions that tested the contents of a memory location. All memory reference instructions contained an eight-bit address field and a two-bit field that specified the mode of memory addressing. The four modes were page-zero addressing (mode 0), PC-relative addressing (mode 1), and indexed addressing off accumulator 2 (mode 2) or accumulator 3 (mode 3); a short sketch of how these resolve to an effective address is given below. Mode 0 was only capable of addressing the first 256 memory words, given the eight-bit address field. This portion of memory was referred to as "page zero". Page zero memory words were considered precious to Nova assembly language programmers because of the small number available; only page zero locations could be addressed from anywhere in the program without resorting to indexed addressing, which required tying up accumulator 2 or 3 to use as an index register. In assembly language, a ".ZREL" directive caused the assembler to place the instructions and data words that followed it in page zero; an ".NREL" directive placed the following instructions and data words in "normal" memory.
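A minimal sketch in C of how the four addressing modes and the indirect bit resolve to an effective address (the function and its names are mine, an illustration of the description above rather than Data General's logic):

#include <stdint.h>
#include <stdio.h>

#define ADDR_MASK 0x7FFF  /* 15-bit addresses */
#define IND_BIT   0x8000  /* MSB of a fetched word requests another level */

/* mode 0: page zero (so only words 0-255 are reachable from anywhere);
   mode 1: PC-relative signed displacement; modes 2/3: indexed off AC2/AC3.
   With the indirect bit, LDA/STA follow one level, while JMP/JSR keep
   following while the fetched word's MSB is set; this is the pre-Nova 3
   behavior that let a self-referencing word hang the machine. */
static uint16_t effective_address(const uint16_t *mem, uint16_t pc,
                                  const uint16_t ac[4], int mode,
                                  int8_t disp, int indirect, int is_jump) {
    uint16_t addr;
    switch (mode) {
    case 0:  addr = (uint8_t)disp;            break; /* page zero   */
    case 1:  addr = (uint16_t)(pc + disp);    break; /* PC-relative */
    case 2:  addr = (uint16_t)(ac[2] + disp); break; /* AC2 index   */
    default: addr = (uint16_t)(ac[3] + disp); break; /* AC3 index   */
    }
    addr &= ADDR_MASK;
    if (indirect) {
        uint16_t word = mem[addr];
        if (is_jump)
            while (word & IND_BIT)            /* multi-level for jumps */
                word = mem[word & ADDR_MASK];
        addr = word & ADDR_MASK;
    }
    return addr;
}

int main(void) {
    static uint16_t mem[32768];
    uint16_t ac[4] = {0, 0, 0x1000, 0x2000};
    mem[15] = 0x0400;  /* an indirect pointer stored in page zero */
    /* An indirect jump through page-zero location 15 lands at 0x0400. */
    printf("0x%04X\n", effective_address(mem, 0, ac, 0, 15, 1, 1));
    return 0;
}

The mode 0 case makes visible why page zero was so precious: it is the only region reachable regardless of where the instruction itself sits.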
Later Nova models added instructions with extended addressing fields, which overcame this difficulty (at a performance penalty). The assembler computed relative offsets for mode 1 automatically, although it was also possible to write them explicitly in the source. If a memory reference instruction referenced a memory address in .NREL space but had no mode specifier, mode 1 was assumed and the assembler calculated the offset between the current instruction and the referenced location, and placed this in the instruction's address field (provided that the resulting value fit into the 8-bit field).

The two load and store instructions were LDA (load accumulator) and STA (store accumulator). Both of these instructions included an "indirect" bit. If this bit was set (done in assembly language by adding a '@' to the opcode), the contents of the target address were assumed to be a memory address itself, and that address would be referenced to do the load or store.

The two transfer-of-control instructions were JMP (jump) and JSR (jump to subroutine). As in the case of the load and store instructions, the jump instructions contained an indirect bit, which likewise was specified in assembly using the '@' character. In the case of an indirect jump, the processor retrieved the contents of the target location and used the value as the memory address to jump to. However, unlike the load and store instructions, if the indirect address had the most significant bit set, it would perform a further cycle of indirection. On the Nova series processors prior to the Nova 3, there was no limit on the number of indirection cycles; an indirect address that referenced itself would result in an infinite indirect addressing loop, with the instruction never completing. (This could be alarming to users, since when in this condition, pressing the STOP switch on the front panel did nothing. It was necessary to reset the machine to break the loop.)

The two memory test instructions were ISZ (increment and skip if zero) and DSZ (decrement and skip if zero). As in the case of the load and store instructions, there was an indirect bit that would perform a single level of indirect addressing. These instructions were odd in that, on the Novas with magnetic core memory, the instruction was executed within the memory board itself. As was common at the time, the memory boards contained a "write-back" circuit to solve the destructive-read problem inherent to magnetic core memory. But the write-back mechanism also contained a mini arithmetic unit, which the processor used for several purposes. For the ISZ and DSZ instructions, the increment or decrement occurred between the memory location being read and the write-back; the CPU simply waited to be told if the result was zero or nonzero. These instructions were useful because they allowed a memory location to be used as a loop counter without tying up an accumulator, but they were slower than performing the equivalent arithmetic instructions.

Some examples of memory reference instructions:

LDA 1,COUNT
Transfers the contents of the memory location labeled COUNT into accumulator 1. Assuming that COUNT is in .NREL space, this instruction is equivalent to LDA 1,1,(COUNT-(.+1)), where '.' represents the location of the LDA instruction.

JSR@ 0,17
Jump indirect to the memory address specified by the contents of location 17, in page zero space, and deposit the return address in accumulator 3. This was the standard method for making an RDOS system call on early Nova models; the assembly language mnemonic ".SYSTM" translated to this.

JMP 0,3
Jump to the memory location whose address is contained in accumulator 3.
This was a common means of returning from a function or subroutine call, since the JSR instruction left the return address in accumulator 3.

STA 0,3,-1
Store the contents of accumulator 0 in the location that is one less than the address contained in accumulator 3.

DSZ COUNT
Decrement the value in the location labeled COUNT, and skip the next instruction if the result is zero. As in the case above, if COUNT is assumed to be in .NREL space, this is equivalent to DSZ 1,(COUNT-(.+1)).

The Novas implemented a channelized model for interfacing to I/O devices. In the model, each I/O device was expected to implement two flags, referred to as "Busy" and "Done", and three data and control registers, referred to as A, B, and C. I/O instructions were available to read and write the registers, and to send one of three signals to the device, referred to as "start", "clear", and "pulse". In general, sending a start signal initiated an I/O operation that had been set up by loading values into the A/B/C registers. The clear signal halted an I/O operation and cleared any resulting interrupt. The pulse signal was used to initiate ancillary operations on complex subsystems, such as seek operations on disk drives. Polled devices usually moved data directly between the device and the A register. DMA devices generally used the A register to specify the memory address, the B register to specify the number of words to be transferred, and the C register for control flags. Channel 63 referred to the CPU itself and was used for various special functions. Each I/O instruction contained a six-bit channel number field, a four-bit field to specify which register to read or write, and a two-bit field to specify which signal was to be sent. In assembly language, the signal was specified by adding a letter to the opcode: 'S' for start, 'C' for clear, 'P' for pulse, and nothing for no signal. The opcodes were DIA, DIB, and DIC (data in from register A, B, or C), DOA, DOB, and DOC (data out to register A, B, or C), and NIO (no transfer, used to send a signal by itself). In addition, four instructions were available to test the status of a device: SKPBZ and SKPBN (skip if the Busy flag was zero or nonzero) and SKPDZ and SKPDN (skip if the Done flag was zero or nonzero). Starting a device caused it to set its Busy flag. When the requested operation was completed, conventionally the device cleared its Busy flag and set its Done flag; most devices had their interrupt request mechanism wired to the Done flag, so setting the Done flag caused an interrupt (if interrupts were enabled and the device wasn't masked). A number of additional instructions, among them INTEN and INTDS (enable and disable interrupts), READS (read the front panel data switches), INTA (interrupt acknowledge), MSKO (set the interrupt mask), IORST (I/O reset), and HALT, performed various CPU control and status functions. All of them were actually shorthand mnemonics for I/O instructions on channel 63, the CPU's self-referential I/O channel. From the hardware standpoint, the interrupt mechanism was relatively simple but also less flexible than current CPU architectures. The backplane supported a single interrupt request line, which all devices capable of interrupting connected to. When a device needed to request an interrupt, it raised this line. The CPU took the interrupt as soon as it completed the current instruction. As stated above, a device was expected to raise its "Done" I/O flag when it requested an interrupt, and the convention was that the device would clear its interrupt request when the CPU executed an I/O clear instruction on the device's channel number. The CPU expected the operating system to place the address of its interrupt service routine into memory address 1. When a device interrupted, the CPU did an indirect jump through address 1, placing the return address into memory address 0 and disabling further interrupts. The interrupt handler would then perform an INTA instruction to discover the channel number of the interrupting device.
This worked by raising an "acknowledge" signal on the backplane. The acknowledge signal was wired in a daisy-chain format across the backplane, such that it looped through each board on the bus. Any device requesting an interrupt was expected to block the further propagation of the acknowledge signal down the bus, so that if two or more devices had pending interrupts simultaneously, only the first one would see the acknowledge signal. That device then responded by placing its channel number on the data lines on the bus. This meant that, in the case of simultaneous interrupt requests, priority was determined by which device was physically closest to the CPU in the card cage (this arrangement is sketched in code below). After the interrupt had been processed and the service routine had sent the device an I/O clear, the service routine resumed normal processing by enabling interrupts and then returning via an indirect jump through memory address 0. In order to prevent a pending interrupt from interrupting immediately before the return jump (which would cause the return address to be overwritten), the INTEN instruction had a one-instruction-cycle delay: when it was executed, interrupts would not be enabled until after the following instruction, which was expected to be the JMP@ 0 return jump. The operating system's interrupt service routine then typically performed an indexed jump using the received channel number, to jump to the specific interrupt handling routine for the device. There were a few devices, notably the CPU's power-failure detection circuit, which did not respond to the INTA instruction. If the INTA returned a result of zero, the interrupt service routine had to poll all of the non-INTA-responding devices using the SKPDZ/SKPDN instructions to see which one had interrupted. The operating system could somewhat manage the ordering of interrupts by setting an interrupt mask using the MSKO instruction. This was intended to allow the operating system to determine which devices were permitted to interrupt at a given time. When this instruction was issued, a 16-bit interrupt mask was transmitted to all devices on the backplane. It was up to the device to decide what the mask actually meant to it; by convention, a device that was masked out was not supposed to raise the interrupt line, but the CPU had no means of enforcing this. Most devices that were maskable allowed the mask bit to be selected via a jumper on the board. There were devices that ignored the mask altogether. On the systems having magnetic core memory (which retained its contents without power), recovery from a power failure was possible. A power failure detection circuit in the CPU issued an interrupt when loss of the main power coming into the computer was detected; from this point, the CPU had a short amount of time until a capacitor in the power supply lost its charge and the power to the CPU failed. This was enough time to stop I/O in progress, by issuing an IORST instruction, and then save the contents of the four accumulators and the carry bit to memory. When the power returned, if the CPU's front panel key switch was in the LOCK position, the CPU would start and perform an indirect jump through memory address 2. This was expected to be the address of an operating system service routine that would reload the accumulators and carry bit, and then resume normal processing. It was up to the service routine to figure out how to restart I/O operations that were aborted by the power failure.
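The Busy/Done convention and the acknowledge daisy chain can be modeled compactly; the following C sketch (the structure and names are mine, not Data General's) shows how physical slot order decided priority:

#include <stdint.h>
#include <stdio.h>

/* One backplane slot's device: channel number and Busy/Done flags.
   The interrupt request line is conventionally wired to Done. */
typedef struct {
    uint8_t  channel;
    unsigned busy : 1;
    unsigned done : 1;
} Device;

/* INTA model: the acknowledge signal is daisy-chained through the card
   cage; the first requesting board (physically closest to the CPU)
   blocks it and answers with its channel number. A result of 0 means
   no INTA-responding device answered, so the handler must fall back
   to polling with SKPDZ/SKPDN. */
static uint8_t inta(const Device slots[], int nslots) {
    for (int i = 0; i < nslots; i++)
        if (slots[i].done)           /* requesting: blocks the chain here */
            return slots[i].channel;
    return 0;
}

int main(void) {
    Device slots[3] = { {10, 0, 0}, {20, 0, 1}, {30, 0, 1} };
    /* Channels 20 and 30 both request; 20 sits closer to the CPU, so it wins. */
    printf("INTA answers channel %u\n", inta(slots, 3));
    return 0;
}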
As was the convention of the day, most Nova models provided a front panel console to control and monitor CPU functions. Models prior to the Nova 3 all relied on a canonical front panel layout, as shown in the Nova 840 panel photo above. The layout contained a keyed power switch, two rows of address and data display lamps, a row of data entry switches, and a row of function switches that activated various CPU functions when pressed. The address lamps always displayed the current value of the program counter, in binary. The data lamps displayed various values depending on which CPU function was active at the moment. To the left of the leftmost data lamp, an additional lamp displayed the current value of the carry bit. On most models the lamps were incandescent lamps which were soldered to the panel board; replacing burned-out lamps was a bane of existence for Data General field service engineers. Each of the data switches controlled the value of one bit in a 16-bit value, and per Data General convention, they were numbered 0-15 from left to right. The data switches provided input to the CPU for various functions, and could also be read by a running program using the READS assembly language instruction. To reduce panel clutter and save money, the function switches were implemented as two-way momentary switches. When a function switch lever was lifted, it triggered the function whose name was printed above the switch on the panel; when the lever was pressed down, it activated the function whose name appeared below the switch. The switch lever returned to a neutral position when released. Referencing the Nova 840 photo, the first four switches from the left performed the EXAMINE and DEPOSIT functions for the four accumulators. Pressing EXAMINE on one of these caused the current value of the accumulator to be displayed in binary by the data lamps. Pressing DEPOSIT transferred the binary value represented by the current settings of the data switches to the accumulator. Going to the right, the next switch was the RESET/STOP switch. Pressing STOP caused the CPU to halt after completing the current instruction. Pressing RESET caused the CPU to halt immediately, cleared a number of CPU internal registers, and sent an I/O reset signal to all connected devices. The switch to the right of that was the START/CONTINUE switch. Pressing CONTINUE caused the CPU to resume executing at the instruction currently pointed at by the program counter. Pressing START transferred the value currently set in data switches 1-15 to the program counter, and then began executing from there. The next two switches provided read and write access to memory from the front panel. Pressing EXAMINE transferred the value set in data switches 1-15 to the program counter, fetched the value in the corresponding memory location, and displayed its value in the data lamps. Pressing EXAMINE NEXT incremented the program counter and then performed an examine operation on that memory location, allowing the user to step through a series of memory locations. Pressing DEPOSIT wrote the value contained in the data switches to the memory location pointed at by the program counter. Pressing DEPOSIT NEXT first incremented the program counter and then deposited to the pointed-to memory location. The INST STEP function caused the CPU to execute one instruction, at the current program counter location, and then halt. 
Since the program counter would be incremented as part of the instruction execution, this allowed the user to single-step through a program. MEMORY STEP, a misnomer, caused the CPU to run through a single clock cycle and halt. This was of little use to users and was generally only used by field service personnel for diagnostics. PROGRAM LOAD was the mechanism usually used to boot a Nova. When this switch was triggered, it caused the 32-word boot ROM to be mapped over the first 32 words of memory, set the program counter to 0, and started the CPU. The boot ROM contained code that would read 256 words (512 bytes) of code from a selected I/O device into memory and then transfer control to the read-in code. The data switches 8-15 were used to tell the boot ROM which I/O channel to boot from. If switch 0 was off, the boot ROM would assume the device was a polled device (e.g., the paper tape reader) and run a polled input loop until 512 bytes had been read. If switch 0 was on, the boot ROM assumed the device was a DMA-capable device and it initiated a DMA data transfer. The boot ROM was not smart enough to position the device prior to initiating the transfer. This was a problem when rebooting after a crash; if the boot device was a disk drive, its heads had likely been left on a random cylinder. They had to be repositioned to cylinder 0, where RDOS wrote the first-level boot block, in order for the boot sequence to work. Conventionally this was done by cycling the drive through its load sequence, but users who got frustrated with the wait time (up to 5 minutes depending on the drive model) learned how to input from the front panel a drive "recalibrate" I/O code and single-step the CPU through it, an operation that took an experienced user only a few seconds. The power switch was a 3-way keyed switch with positions marked OFF, ON, and LOCK. In the OFF position all power was removed from the CPU. Turning the key to ON applied power to the CPU. However, unlike current CPUs, the CPU did not start automatically when power was applied; the user had to use PROGRAM LOAD or some other method to start the CPU and initiate the boot sequence. Turning the switch to LOCK disabled the front panel function switches; by turning the switch to LOCK and removing the key, the user could render the CPU resistant to tampering. On systems with magnetic core memory, the LOCK position also enabled the auto power failure recovery function. The key could be removed in the OFF or LOCK positions. The Nova 1200 executed core memory access instructions (LDA and STA) in 2.55 microseconds (μs). Use of read-only memory saved 0.4 μs. Accumulator instructions (ADD, SUB, COM, NEG, etc.) took 1.55 μs, MUL 2.55 μs, DIV 3.75 μs, ISZ 3.15-4.5 μs. On the later Eclipse MV/6000, LDA and STA took 0.44 μs, ADD, etc. took 0.33 μs, MUL 2.2 μs, DIV 3.19 μs, ISZ 1.32 μs, FAD 5.17 μs, FMMD 11.66 μs. This is a minimal programming example in Nova assembly language. It is designed to run under RDOS and prints the string “Hello, world.” on the console. Basic models of the Nova came without built-in hardware multiply and divide capability, to keep prices competitive. The following routine multiplies two 16-bit words to produce a 16-bit word result (overflow is ignored). It demonstrates combined use of ALU op, shift, and test (skip). Note that when this routine is called by jsr, AC3 holds the return address. This is used by the return instruction jmp 0,3. An idiomatic way to clear an accumulator is sub 0,0. 
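The original assembly listings are not reproduced here; the following C sketch (mine) conveys the shift-and-add idea behind the multiplication routine just described, with the product kept to 16 bits and overflow ignored, one conditional add and one shift per bit, mirroring the Nova's ALU-op/shift/test-and-skip pattern:

#include <stdint.h>
#include <stdio.h>

/* Multiply two 16-bit words, keeping only the low 16 bits of the product. */
static uint16_t mul16(uint16_t multiplicand, uint16_t multiplier) {
    uint16_t product = 0;            /* cleared the way "sub 0,0" clears an accumulator */
    for (int i = 0; i < 16; i++) {
        if (multiplier & 1)          /* the test-and-skip on the low bit */
            product += multiplicand; /* conditional ADD */
        multiplier >>= 1;            /* shift right */
        multiplicand <<= 1;          /* shift left; high bits fall off (overflow ignored) */
    }
    return product;
}

int main(void) {
    printf("%u\n", mul16(300, 500)); /* 150000 mod 65536 = 18928 */
    return 0;
}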
Other single instructions can be arranged to load a specific set of useful constants (e.g. -2, -1, or +1). The following routine prints the value of AC1 as a 16-digit binary number on the RDOS console (a rough C analogue is sketched at the end of this article). It reveals further quirks of the Nova instruction set. For instance, there is no instruction to load an arbitrary “immediate” value into an accumulator (although memory reference instructions do encode such a value to form an effective address). Accumulators must generally be loaded from initialized memory locations (e.g. n16). Other contemporary machines such as the PDP-11, and practically all modern architectures, allow for immediate loads, although many, such as ARM, restrict the range of values that can be loaded immediately. Because the RDOS .systm call macro implements a jsr, AC3 is overwritten by the return address for the .pchar function. Therefore, a temporary location is needed to preserve the return address of the caller of this function. For a recursive or otherwise re-entrant routine, a stack (hardware if available, software if not) must be used instead. The return instruction becomes jmp @ retrn, which exploits the Nova's indirect addressing mode to load the return PC. The constant definitions at the end show two assembler features: the assembler radix is octal by default (20 = sixteen), and character constants could be encoded as, e.g., "0. The Canadian Broadcasting Corporation in Montreal used the Nova 1200 for channel play-out automation until the late 1980s. It was then replaced with refurbished Nova 4 units, and these were in use until the mid-1990s.
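Returning to the binary-printing routine described above, here is a rough C analogue (mine) of its loop: shift the value left one bit at a time and print the bit that would have been shifted "into the carry":

#include <stdint.h>
#include <stdio.h>

/* Print a 16-bit value as 16 binary digits, most significant bit first. */
static void print_binary16(uint16_t value) {
    for (int i = 0; i < 16; i++) {
        putchar((value & 0x8000) ? '1' : '0'); /* the bit shifted out */
        value = (uint16_t)(value << 1);
    }
    putchar('\n');
}

int main(void) {
    print_binary16(0x1234); /* prints 0001001000110100 */
    return 0;
}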
https://en.wikipedia.org/wiki?curid=8654
Protestant Church in the Netherlands The Protestant Church in the Netherlands (, abbreviated PKN) is the largest Protestant denomination in the Netherlands, being both Reformed (Calvinist) and Lutheran. It was founded on 1 May 2004 as the merger of the vast majority of the Dutch Reformed Church, the vast majority of the Reformed Churches in the Netherlands, and the Evangelical Lutheran Church in the Kingdom of the Netherlands. The merger was the culmination of an organizational process started in 1961. Several orthodox Reformed and liberal churches did not merge into the new church. The Protestant Church in the Netherlands (PKN) forms the country's second largest Christian denomination after the Roman Catholic Church, with approximately 1.6 million members according to the church's official statistics, or some 9.1% of the population in 2016. It is the traditional faith of the Dutch Royal Family, a remnant of the historical dominance of the Dutch Reformed Church, the main predecessor of the Protestant Church. The doctrine of the Protestant Church in the Netherlands is expressed in its creeds. In addition to holding the Apostles', the Nicene and the Athanasian Creeds of the universal church, it also holds to the confessions of its predecessor bodies. From the Lutheran tradition come the unaltered Augsburg Confession and Luther's Catechism. From the Reformed tradition come the Heidelberg and Genevan Catechisms, along with the Belgic Confession and the Canons of Dordt. The Church also acknowledges the Theological Declaration of Barmen and the Leuenberg Agreement. Ordination of women and the blessing of same-sex marriages are allowed. The PKN contains both liberal and conservative movements, although the liberal Remonstrants left talks when they could not agree with the unaltered adoption of the Canons of Dordt. Local congregations have far-reaching powers concerning "controversial" matters (such as admittance to holy communion or whether women are admitted as members of the congregation's consistory). The polity of the Protestant Church in the Netherlands is a hybrid of presbyterian and congregationalist church governance. Church governance is organised along local, regional, and national lines. At the local level is the congregation. An individual congregation is led by a church council made up of the minister along with elders and deacons elected by the congregation. At the regional level were 75 classical assemblies, whose members were chosen by the church councils; as of 1 May 2018, these were reorganised into 11 larger assemblies. At the national level is the General Synod, which directs areas of common interest, such as theological education, ministry training and ecumenical co-operation. The PKN has four different types of congregations. Lutherans are a minority (about 1 percent) of the PKN's membership. To ensure that Lutherans are represented in the Church, the Lutheran congregations have their own synod. The Lutheran Synod also has representatives in the General Synod. The Protestant Church in the Netherlands issues yearly reports regarding its membership and finances. Its make-up by former affiliation of its congregations was reported in 2017; the trend shows that since 2011 identification with former denominations has been falling in favor of simply identifying as "Protestant". Secularization, or the decline in religiosity, first became noticeable after 1960 in the Protestant rural areas of Friesland and Groningen. Then, it spread to Amsterdam, Rotterdam and the other large cities in the west.
Finally, the Catholic southern areas showed religious decline. A countervailing trend is produced by a religious revival in the Protestant Bible Belt and the growth of Muslim and Hindu communities resulting from immigration and high birth rates. Research in 2007 concluded that 42% of the members of the PKN were non-theists. Furthermore, in the PKN and several other smaller denominations of the Netherlands, one in six clergy were either agnostic or atheist. A PKN minister, Klaas Hendrikse, once described God as "a word for experience, or human experience" and said that Jesus may have never existed. Only those congregations belonging to the former Reformed Churches in the Netherlands have the legal right to secede from the PKN without losing their property and church buildings, during a transition period of 10 years. Seven congregations have so far decided to form the Continued Reformed Churches in the Netherlands. Two congregations have joined one of the other smaller Reformed churches in the Netherlands. Some minorities within congregations that joined the PKN decided to leave the church and associated themselves individually with one of the other Reformed churches. Some congregations and members in the Dutch Reformed Church did not agree with the merger and have separated. They have organized themselves in the Restored Reformed Church. Estimates of their membership vary from 35,000 up to 70,000 people in about 120 local congregations. They disagree with the pluralism of the merged church, which maintains, as they see it, contradictory Reformed and Lutheran confessions. This group also considers same-sex marriages and female clergy unbiblical. At a meeting of eight Jewish and eight Protestant Dutch leaders in Israel in May 2011, a statement of cooperation was issued. It indicated, for the most part, that the Protestant Church recognizes the concerns of Palestinian Christians, which are sometimes at odds with support for the State of Israel, but that standing up for the rights of the Palestinians does not detract from the emphasis on the safety of the State of Israel, and vice versa.
https://en.wikipedia.org/wiki?curid=8659
Christian Church (Disciples of Christ) The Christian Church (Disciples of Christ) is a Mainline Protestant Christian denomination in the United States and Canada. The denomination started with the Restoration Movement during the Second Great Awakening, first existing as a loose association of churches working towards Christian unity during the 19th century, then slowly forming quasi-denominational structures through missionary societies, regional associations, and an international convention. In 1968, the Disciples of Christ officially adopted a denominational structure, at which time a group of churches left to remain nondenominational. It is often referred to as The Christian Church, The Disciples of Christ, The Disciples, or the D.O.C. The Christian Church was a charter participant in the formation of the World Council of Churches (WCC) and of the Federal Council of Churches (now the National Council of Churches), and it continues to be engaged in ecumenical conversations. The Disciples' local churches are congregationally governed. In 2008 there were 679,563 members in 3,714 congregations in North America. By 2015, this number had declined to a baptized membership of 497,423 in 3,267 congregations, of whom about 306,905 were active members, while roughly 177,141 people attended Sunday services each week. In 2018, the denomination reported 380,248 members with 124,437 people in average worship attendance. The name "Disciples of Christ" is shared by three other groups: the Churches of Christ, the Independent Christian churches and churches of Christ, and the Christian Congregation. They emerged from the same roots. The Stone-Campbell movement began as two separate threads, each without knowledge of the other, during the Second Great Awakening in the early 19th century. The first of these two groups, led by Barton W. Stone, began at Cane Ridge, Bourbon County, Kentucky, and called themselves simply "Christians". The second began in western Pennsylvania and Virginia (now West Virginia) and was led by Thomas Campbell and his son, Alexander Campbell. Because the founders wanted to abandon all denominational labels, they used the names for the followers of Jesus that they found in the Bible. In 1801, the Cane Ridge Revival in Kentucky planted the seed for a movement in Kentucky and the Ohio River Valley to disassociate from denominationalism. In 1803 Stone and others withdrew from the Kentucky Presbytery and formed the Springfield Presbytery. The defining event of the Stone wing of the movement was the publication of the "Last Will and Testament of the Springfield Presbytery" at Cane Ridge, Kentucky, in 1804. "The Last Will" is a brief document in which Stone and five others announced their withdrawal from Presbyterianism and their intention to be solely part of the body of Christ. The writers appealed for the unity of all who follow Jesus, suggested the value of congregational self-governance, and lifted up the Bible as the source for understanding the will of God. They denounced the divisive use of the Westminster Confession of Faith. Soon, they adopted the name "Christian" to identify their group. Thus, the remnants of the Springfield Presbytery became the Christian Church. It is estimated that the Christian Church numbered about 12,000 members by 1830. Independently of Stone, the Campbell wing of the movement was launched when Thomas Campbell published the "Declaration and Address of the Christian Association of Washington" (Pennsylvania) in 1809. 
The Presbyterian Synod had suspended his ministerial credentials. In "The Declaration and Address" he set forth some of his convictions about the church of Jesus Christ, as he organized the Christian Association of Washington, not as a church but as an association of persons seeking to grow in faith. On May 4, 1811, however, the Christian Association constituted itself as a congregationally governed church. With the building it then constructed at Brush Run, it became known as Brush Run Church. When their study of the New Testament led the reformers to begin to practice baptism by immersion, the nearby Redstone Baptist Association invited Brush Run Church to join with them for the purpose of fellowship. The reformers agreed provided that they would be "allowed to preach and to teach whatever they learned from the Scriptures." Thus began a sojourn for the reformers among the Baptists within the Redstone Baptist Association (1815–1824). While the reformers and the Baptists shared the same beliefs in baptism by immersion and congregational polity, it was soon clear that the reformers were not traditional Baptists. Within the Redstone Association, the differences became intolerable to some of the Baptist leaders, when Alexander Campbell began publishing a journal, "The Christian Baptist," promoting reform. Campbell anticipated the conflict and moved his membership to a congregation of the Mahoning Baptist Association in 1824. In 1827, the Mahoning Association appointed reformer Walter Scott as an Evangelist. Through Scott's efforts, the Mahoning Association grew rapidly. In 1828, Thomas Campbell visited several of the congregations formed by Scott and heard him preach. The elder Campbell realized that Scott was bringing an important new dimension to the movement with his approach to evangelism. Several Baptist associations began disassociating congregations that refused to subscribe to the Philadelphia Confession. The Mahoning Association came under attack. In 1830, the Mahoning Baptist Association disbanded. Alexander ceased publication of "The Christian Baptist". In January 1831, he began publication of the "Millennial Harbinger". The two groups united at High Street Meeting House, Lexington, Kentucky, with a handshake between Barton W. Stone and "Raccoon" John Smith, on Saturday, December 31, 1831. Smith had been chosen, by those present, to speak on behalf of the followers of the Campbells. While contemporaneous accounts are clear that the handshake took place on Saturday, some historians have changed the date of the merger to Sunday, January 1, 1832. The 1832 date has become generally accepted. The actual difference is about 20 hours. Two representatives of those assembled were appointed to carry the news of the union to all the churches: John Rogers, for the Christians and "Raccoon" John Smith for the reformers. Despite some challenges, the merger succeeded. With the merger, there was the challenge of what to call the new movement. Clearly, finding a Biblical, non-sectarian name was important. Stone wanted to continue to use the name "Christians." Alexander Campbell insisted upon "Disciples of Christ". Walter Scott and Thomas Campbell sided with Stone, but the younger Campbell had strong reasons and would not yield. As a result, both names were used. The confusion over names has been present ever since. Prior to the 1906 separation, congregations would typically be named "Disciples of Christ," "Christian Church," and "Church of Christ." However, there are different practices by each. 
More than the name separates each church. For example, the "Independent Christian Church" will not accept a woman as a minister while some of the "Disciples of Christ" congregations will. These different congregations (Disciples of Christ, Church of Christ, and Independent Church) share many of the same beliefs and practices but there are, in fact, some differences. In 1849, the first National Convention was held at Cincinnati, Ohio. Alexander Campbell had concerns that holding conventions would lead the movement into divisive denominationalism. He did not attend the gathering. Among its actions, the convention elected Alexander Campbell its President and created the American Christian Missionary Society (ACMS). The formation of a missionary society set the stage for further "co-operative" efforts. By the end of the century, the Foreign Christian Missionary Society and the Christian Women's Board of Missions were also engaged in missionary activities. Forming the ACMS did not reflect a consensus of the entire movement. Sponsorship of missionary activities became a divisive issue. In the succeeding decades, for some congregations and their leaders, co-operative work through missionary societies and the adoption of instrumental music in church worship was straying too far from their conception of the early church. After the American Civil War, the schism grew. While there was no disagreement over the need for evangelism, many believed that missionary societies were not authorized by scripture and would compromise the autonomy of local congregations. This became one important factor leading to the separation of the Churches of Christ from the Christian Church (Disciples of Christ). From the beginning of the movement, the free exchange of ideas among the people was fostered by the journals published by its leaders. Alexander Campbell published "The Christian Baptist" and "The Millennial Harbinger". Barton W. Stone published "The Christian Messenger". In a respectful way, both men routinely published the contributions of others whose positions were radically different from their own. Following Campbell's death in 1866, journals continued to keep the discussion and conversation alive. Between 1870 and 1900, two journals emerged as the most prominent. The "Christian Standard" was edited and published by Isaac Errett of Cincinnati. "The Christian Evangelist" was edited and published by J. H. Garrison from St. Louis. The two men enjoyed a friendly rivalry, and kept the dialog going within the movement. A third journal became part of the conversation with the publication in 1884 of "The Christian Oracle", later to become "The Christian Century", with an interdenominational appeal. In 1914, Garrison's Christian Publishing company was purchased by R. A. Long, who then established a non-profit corporation, "The Christian Board of Publication" as the Brotherhood publishing house. In 1906, the U.S. Religious Census listed Churches of Christ for the first time as a group which was separate and distinct from the Disciples of Christ. However, the division had been growing for years, with published reports as early as 1883. The most obvious distinction between the two groups was the Churches of Christ rejecting the use of musical instruments in worship. The controversy over musical instruments began in 1860, when some congregations introduced organs, traditionally associated with wealthier, denominational churches. More basic were the underlying approaches to Biblical interpretation. 
The Churches of Christ permitted only those practices found in accounts of New Testament worship. They could find no New Testament documentation of the use of instrumental music in worship. The Disciples, by contrast, considered permissible any practices that the New Testament did not expressly forbid. After the division, Disciples churches used "Christian Church" as the dominant designation for congregations. While music and the approach to missionary work were the most visible issues, there were also some deeper ones. The process that led to the separation had begun prior to the American Civil War. Following the 1906 separation by the Churches of Christ, additional controversies arose. Should missionary efforts be cooperative, or should they be independently sponsored by congregations? Should new methods of Biblical analysis, developed in the late 19th century, be embraced in the study and interpretation of the Bible? The "cooperative" churches were generally more likely to adopt the new biblical study methods. During the first half of the 20th century, these opposing factions among the Christian Churches coexisted, but with growing discomfort and tension. Among the cooperative churches, the three Missionary Societies merged into the United Christian Missionary Society in 1920. Human service ministries grew through the National Benevolent Association and provided assistance to orphans, the elderly and the disabled. By mid-century, the cooperative Christian Churches and the independent Christian Churches were following different paths. Following World War II, it became obvious that the organizations that had been developed in previous decades no longer effectively met the needs of the postwar era. After a number of discussions throughout the 1950s, the 1960 International Convention of Christian Churches adopted a process to "restructure" the entire organization. The Commission on Restructure, chaired by Granville T. Walker, held its first meeting on October 30 and November 1, 1962. In 1968, the International Convention of Christian Churches (Disciples of Christ) adopted the commission's proposed "Provisional Design of the Christian Church (Disciples of Christ)." Soon the Provisional Design became "The Design." Under the design, all churches listed in the 1968 yearbook of Christian Churches (Disciples of Christ) were automatically recognized as part of the Christian Church (Disciples of Christ). In the years that followed, many of the Independent Christian Church congregations requested formal withdrawal from the yearbook. Many of those congregations became part of the Christian churches and churches of Christ. Christians believe that Jesus is the only Son of God, one and the same with God and the Holy Spirit (the Trinity). Christians believe Jesus came to die for the sins of all mankind and that He rose from death to live again eternally in Heaven, where He will one day call home all those who believe. According to the book of Ephesians, chapter 2, Christians believe there is no way to earn eternal life, that man is saved only by God, and that by faith in Christ one is made a child of God. As an integral part of worship in most Christian Church (Disciples of Christ) congregations, members celebrate the Lord's Supper weekly. Most congregations also sing hymns, read from the Old and New Testaments of Christian Scripture, hear the word of God proclaimed through sermon or other medium, and extend an invitation to become Christ's Disciples. 
As a congregational church, each congregation determines the nature of its worship, study, Christian service, and witness to the world. Through the observance of communion, individuals are invited to acknowledge their faults and sins, to remember the death and resurrection of Jesus Christ, to remember their baptism, and to give thanks for God's redeeming love. The Christian Church (Disciples of Christ) believes that it is in the local congregations where people come, find, and know God as they gather in Christ's name. Because Disciples believe that the invitation to the table comes from Jesus Christ, communion is open to all who confess that Jesus Christ is Lord, regardless of their denominational affiliation. For most Disciples, communion is understood as the symbolic presence of Jesus within the gathered community. Most Disciple congregations practice believer's baptism in the form of immersion, believing it to be the form used in the New Testament. The experiences of yielding to Christ in being buried with him in the waters of baptism and rising to a new life have profound meaning for the church. The motto "In essentials, unity; in non-essentials, liberty; and in all things, charity," drawn from Marco Antonio de Dominis's "De Repubblica Ecclesiastica", was adopted as the 19th-century slogan of the Stone-Campbell Movement. For modern Disciples the one essential is the acceptance of Jesus Christ as Lord and Savior, and obedience to him in baptism. There is no requirement to give assent to any other statement of belief or creed. Nor is there any "official" interpretation of the Bible. Hierarchical doctrine was traditionally rejected by Disciples as human-made and divisive; consequently, freedom of belief and scriptural interpretation allows many Disciples to question or even deny beliefs common in doctrinal churches, such as the Incarnation, the Trinity, and the Atonement. Beyond the essential commitment to follow Jesus, there is a tremendous freedom of belief and interpretation. As the basic teachings of Jesus are studied and applied to life, there is the freedom to interpret Jesus' teaching in different ways. As would be expected from such an approach, there is a wide diversity among Disciples in what individuals and congregations believe. It is not uncommon to find individuals who seemingly hold diametrically opposed beliefs within the same congregation affirming one another's journeys of faith as sisters and brothers in Christ. Members and seekers are encouraged to take being disciples seriously, meaning that they are student followers of Jesus. Often the best teaching comes in the form, "I'll tell you what I think, but read the Bible for yourself, and then study and pray about it. Decide in what ways God is calling you to be a follower of Jesus." Modern Disciples reject the use of creeds as "tests of faith," that is, as required beliefs necessary to be accepted as a follower of Jesus. Although Disciples respect the great creeds of the church as informative affirmations of faith, they are never seen as binding. Since the adoption of The Design of the Christian Church (Disciples of Christ) in 1968, Disciples have celebrated a sense of unity in reading the preamble to the Design publicly. It is a meaningful affirmation of faith, not binding upon any member. It was originally intended to remind readers that this Church seeks God through Jesus Christ, even when it adopts a design for its business affairs. 
"...the church of Christ upon earth is essentially, intentionally, and constitutionally one;consisting of all those in every place that profess their faith in Christ and obedience to him in all things..." Thomas Campbell — Proposition 1 of the Declaration and address The Disciples celebrate their oneness with all who seek God through Jesus Christ, throughout time and regardless of location. That oneness is symbolized in the open invitation to communion for all who have professed faith in Christ without regard to church affiliation. In local communities, congregations share with churches of other denominations in joint worship and in community Christian service. Ecumenical cooperation and collaboration with other Christian Communions has long been practiced, by the Regions. At the General Church level, the Council on Christian Unity coordinates the ecumenical activities of the church. The Disciples continues to relate to the National Council of Churches, of which it was a founding member. It shares in the dialog and in the theological endeavors of the World Council of Churches. The Disciples has been a full participant in the Consultation on Church Union since it began in the sixties. It continues to support those ongoing conversations which have taken on the title Churches Uniting in Christ. The goal of these endeavors is not the merger into some "Super Church", but rather to discover ways to celebrate and proclaim the unity and oneness that is Christ's gift to his church. In 2011, the denomination stated that "Disciples do not have a formal policy on same-sex marriage. Different congregations have the autonomy to discern on issues such as this one". In 2013, the Disciples of Christ voted in favor of a resolution affirming all members regardless of sexual orientation. After same-sex marriage was legalized in the US, the denomination reiterated that it leaves "all decisions of policy on same-sex marriage to local congregations". The Disciples Alliance Q, an association of LGTBQ+ members, certifies congregations as "Open and Affirming" to show that they are accepting of all gender identities and sexual orientations. The process includes congregational workshops, information provided by the Alliance about what Open and Affirming means, and public witness to Open and Affirming ministries. The Disciples believe in the priesthood of all believers, in that all people baptized are called to minister to others with diverse spiritual gifts. The Disciples view their Order of Ministry as a specific subset of all believers who are called with spiritual gifts specifically suited for pastoral ministry. Congregations use different terms to refer to persons in the Order of Ministry including Pastor and Reverend but most call them Ministers, including the denomination's governing documents. Congregations sponsor members seeking ordination or commissioning as a Minister, and Regional Ministries organize committees to oversee the process. Ordination can be achieved by obtaining a Master of Divinity from a theological institution, which does not have to be an institution associated with the Disciples. Ordination can also be achieved through an "Apprentice" track which has candidates shadow ordained ministers. Finally, Ministers can be Commissioned, a shorter process for seminary students and those seeking short-term ministry in a Region. Regional requirements for ministry vary. 
Ordination is made official through a service which includes members of the church, clergy, and the Regional Minister laying their hands on the candidate as the ordaining act. Ecumenical representatives are often included to emphasize the Disciples' desire for Christian unity. Disciples recognize ordinations performed in the United Church of Christ, and the United Church of Christ likewise recognizes Disciples ordinations. A General Commission on the Order of Ministry exists to interpret and review definitions of ministry, give oversight to Regions and congregations, provide other support, and maintain the standing of Regional Ministers and Ministers of General (National) Ministries. Congregations of the Disciples are self-governing in the tradition of congregational polity. They call their own Ministers, select their own leadership, own their own property, and manage their own affairs. In Disciples congregations, the priesthood of all believers finds its expression in worship and Christian service. Typically, lay persons who have been elected and ordained as Elders preside with the church's Ministers in the celebration of the sacrament of Holy Communion. The Elders and Ministers provide spiritual oversight and care for members in partnership with one another. The Regional Churches of the Christian Church provide resources for leadership development and opportunities for Christian fellowship beyond the local congregation. They have taken responsibility for the nurture and support of those individuals seeking to discern God's call to service as ordained or licensed ministers. Typically, they organize summer camping experiences for children and youth. Regional churches assist congregations who are seeking ministers, and ministers who are seeking congregations. Regional leadership is available on request to assist congregations that face conflict. Though they have no authority to direct the life of any congregation, the Regional Churches are analogous to the middle judicatories of other denominations. The Christian Church (Disciples of Christ) at the "General Church" level consists of a number of self-governing agencies, which focus upon specific Christian witnesses to the world that have emerged in the dialog within the movement since before the first convention in 1849. Typically, these ministries have a scope that is larger than Regional Ministries, and often have a global perspective. The church agencies report to the General Assembly, which meets biennially in odd-numbered years. The General Minister and President (GMP) is the designated leader for the General Church, but does not have the administrative authority to direct any of the general church agencies other than the Office of General Minister and President. The GMP has influence that derives from the respect of the church, much as the pastor of a local church leads a local congregation. The General Ministries are: One highly popular and respected General Agency program is the "Week of Compassion," named for the special offering that funds the program, which began in the 1950s. The Week of Compassion is the denomination's disaster relief and Third World development agency. It works closely with Church World Service and church-related organizations in countries around the world where disasters strike, providing emergency aid. The General Church has challenged the entire denomination to work for a 2020 Vision for the first two decades of the 21st century. 
Together the denomination is well on the way to achieving its four foci: The relationship between the congregations, regions and the general church is detailed in "The Design of the Christian Church (Disciples of Christ)". At the 2005 General Assembly, over 3,000 delegates voted nearly unanimously to elect Sharon E. Watkins as General Minister and President of the denomination. Watkins was the first woman to be elected as the presiding minister of a mainline Protestant denomination. The logo of the Christian Church (Disciples of Christ) is a red chalice with a white St. Andrew's Cross. The chalice represents the centrality of Communion to the life of the church. The cross of Saint Andrew is a reminder of the ministry of each person and the importance of evangelism, and recalls the denomination's Scottish Presbyterian ancestry. After the 1968 General Assembly, the Administrative Committee charged a sub-committee with the task of proposing a symbol for the church. Hundreds of designs were submitted, but none seemed right. By November, the Deputy General Minister and President, William Howland, suggested that the committee's staff consultant and chairperson agree on a specific proposal and bring it back to the committee: that meant Robert L. Friedly of the Office of Interpretation and Ronald E. Osborn. On January 20, 1970, the two men sat down for lunch. With a red felt-tip pen, Osborn began to scrawl a Saint Andrew's cross circumscribed inside a chalice on his placemat. Immediately, Friedly dispatched the crude drawing to Bruce Tilsley, a commercial artist and member of Central Christian Church of Denver, with the plea that he prepare an artistic version of the idea. Tilsley responded with two or three sketches, from which the now-familiar red chalice was selected. Use of the proposed symbol became so prevalent that there was little debate when official adoption was considered at the 1971 General Assembly. The chalice is a registered trademark of the Christian Church (Disciples of Christ). Congregations and ministries of the Christian Church (Disciples of Christ) are free to use the chalice in publications, web sites and other media. Organizations not affiliated with the Christian Church (Disciples of Christ) are asked to obtain permission. Because most congregations call themselves "Christian Churches," the chalice has become a simple way to identify Disciples of Christ churches through signage, letterhead, and other forms of publicity. The Christian Church (Disciples of Christ) has experienced a significant loss of membership since the middle of the 20th century. Membership peaked in 1958 at just under 2 million. In 1993, membership dropped below 1 million. In 2009, the denomination reported 658,869 members in 3,691 congregations. As of 2010, the five states with the highest adherence rates were Kansas, Missouri, Iowa, Kentucky and Oklahoma. The states with the largest absolute number of adherents were Missouri, Texas, Indiana, Kentucky and Ohio. From the very beginnings of the movement, Disciples have founded institutions of higher learning. Alexander Campbell taught young leaders and founded Bethany College. The movement established similar schools, especially in the years following the American Civil War. Because intellectual and religious freedom are important values for the Disciples of Christ, the colleges, universities, and seminaries founded by its congregations do not seek to indoctrinate students or faculty with a sectarian point of view. 
In the 21st century, the relationship between the Christian Church (Disciples of Christ) and its affiliated universities is the purview of Higher Education and Leadership Ministries (HELM), an agency of the General Church. The Disciples of Christ maintains ecumenical relations with the Pontifical Council for Promoting Christian Unity. It is also affiliated with other ecumenical organizations such as Churches Uniting in Christ, Christian Churches Together, the National Council of Churches and the World Council of Churches. It maintains Ordained Ministerial Partner Standing with the United Church of Christ, which means that clergy ordained in the Disciples of Christ may also serve in the United Church of Christ.
https://en.wikipedia.org/wiki?curid=8660
David Rice Atchison David Rice Atchison (August 11, 1807 – January 26, 1886) was a mid-19th-century Democratic United States Senator from Missouri. He served as President pro tempore of the United States Senate for six years. Atchison served as a major general in the Missouri State Militia in 1838 during Missouri's Mormon War and as a Confederate brigadier general during the American Civil War under Major General Sterling Price in the Missouri Home Guard. He is best known for the claim that for 24 hours—Sunday, March 4, 1849 through noon on Monday—he may have been Acting President of the United States. This belief, however, is dismissed by nearly all historians, scholars, and biographers. Atchison, owner of many slaves and a plantation, was a prominent pro-slavery activist and Border Ruffian leader, deeply involved with violence against abolitionists and other free-staters during the "Bleeding Kansas" events. Atchison was born to William Atchison in Frogtown (later Kirklevington), which is now part of Lexington, Kentucky. He was educated at Transylvania University in Lexington, where his classmates included five future Democratic senators (Solomon Downs of Louisiana, Jesse Bright of Indiana, George Wallace Jones of Iowa, Edward Hannegan of Indiana, and Jefferson Davis of Mississippi). Atchison was admitted to the Kentucky bar in 1829. In 1830 he moved to Liberty in Clay County in western Missouri, set up practice there, and also farmed. Atchison's law practice flourished, and his best-known client was Latter Day Saint movement founder Joseph Smith. Atchison represented Smith in land disputes with non-Mormon settlers in Caldwell County and Daviess County. Alexander William Doniphan joined Atchison's law practice in Liberty in May 1833. The two became fast friends and spent many leisure hours playing cards, going to the horse races, hunting, fishing, and attending social functions and political events. Atchison, already a member of the Liberty Blues, a volunteer militia in Missouri, got Doniphan to join. Atchison was elected to the Missouri House of Representatives in 1834. He worked hard for the Platte Purchase, which extended the northwestern boundary of Missouri to the Missouri River in 1837. When the earlier disputes broke out into the so-called Mormon War of 1838, Atchison was appointed a major general in the state militia and took part in the suppression of the violence by both sides. In 1838 he was re-elected to the Missouri House of Representatives. Three years later, he was appointed a circuit court judge for the six-county area of the Platte Purchase. In 1843 he was named a county commissioner in Platte County, where he then lived. In October 1843, Atchison was appointed to the U.S. Senate to fill the vacancy left by the death of Lewis F. Linn. He thus became the first senator from western Missouri. At age 36, he was the youngest senator from Missouri up to that time. Later in 1843, Atchison was appointed to serve the remainder of Linn's term and was re-elected in 1849. Atchison was very popular with his fellow Senate Democrats. When the Democrats took control of the Senate in December 1845, they chose Atchison as President pro tempore, placing him third in the line of succession for the Presidency and also giving him the duty of presiding over the Senate when the Vice President was absent. He was then only 38 years old and had served in the Senate just two years. In 1849 Atchison stepped down as President pro tempore in favor of William R. King. 
King in turn yielded the office back to Atchison in December 1852, after King was elected Vice President of the United States. Atchison continued as President pro tempore until December 1854. As a Senator, Atchison was a fervent advocate of slavery and territorial expansion. He supported the annexation of Texas and the U.S.-Mexican War. Atchison and Missouri's other Senator, the venerable Thomas Hart Benton, became rivals and finally enemies, though both were Democrats. Benton declared himself to be against slavery in 1849, and in 1851 Atchison allied with the Whigs to defeat Benton for re-election. Benton, intending to challenge Atchison in 1854, began to agitate for territorial organization of the area west of Missouri (now the states of Kansas and Nebraska) so it could be opened to settlement. To counter this, Atchison proposed that the area be organized "and" that the section of the Missouri Compromise banning slavery there be repealed in favor of popular sovereignty, under which the settlers in each territory would decide for themselves whether slavery would be allowed. At Atchison's request, Senator Stephen Douglas of Illinois introduced the Kansas–Nebraska Act, which embodied this idea, in November 1853. The Act became law in May 1854, establishing the Territories of Kansas and Nebraska. Both Douglas and Atchison had assumed that Nebraska would be settled by Free-State men from Iowa and Illinois, and Kansas by pro-slavery Missourians and other Southerners, thus preserving the numerical balance between free states and slave states. In 1854 Atchison helped found the town of Atchison, Kansas, as a pro-slavery settlement. The town (and county) were named for him. In fact, while Southerners welcomed the opportunity to settle Kansas, very few actually chose to do so; most of those who migrated to Kansas were free-soilers. Furthermore, anti-slavery activists throughout the North came to view Kansas as a battleground and formed societies to encourage free-soil settlers to go to Kansas and ensure that both Kansas and Nebraska would become free states. It appeared as if the Kansas Territorial legislature to be elected in March 1855 would be controlled by free-soilers and would ban slavery. Atchison and his supporters viewed this as a breach of faith. An angry Atchison called on pro-slavery Missourians to uphold slavery by force and "to kill every God-damned abolitionist in the district" if necessary. He recruited an immense mob of heavily armed Missourians, the infamous "Border Ruffians". On election day, March 30, 1855, Atchison led 5,000 Border Ruffians into Kansas. They seized control of all polling places at gunpoint, cast thousands of fraudulent votes for pro-slavery candidates, and elected a pro-slavery legislature. The outrage was nonetheless accepted by the Federal government; when Territorial Governor Andrew Reeder objected, he was fired by President Pierce. Despite this show of force, far more free-soilers than pro-slavery settlers migrated to Kansas. There were continual raids and ambushes by both sides in "Bleeding Kansas". But in spite of the best efforts of Atchison and the Ruffians, Kansas did reject slavery and finally became a free state in 1861. Charles Sumner, in his epic "Crimes Against Kansas" speech on May 19, 1856, exposed Atchison's role in the invasion, tortures, and killings in Kansas. 
Speaking in the flamboyant style he and others used, lacing his prose with references to Roman history, Sumner compared Atchison to the Roman senator Catiline, who betrayed his country in a plot to overthrow the existing order. For two days, Sumner listed crime after crime in detail, complete with documentation from newspapers and letters of the time, showing the tortures and violence committed by Atchison and his men. Two days later, Atchison gave his own speech, still unaware that he had been exposed on the Senate floor in such a fashion. The speech was delivered to a group of Texas men who, as Atchison himself revealed, had been hired and paid by "authorities in Washington" and were about to invade Lawrence, Kansas. Atchison made the men promise to kill and "draw blood," and boasted of his flag, red both for "Southern Rights" and for the color of blood; they would press "to blood" the spread of slavery into Kansas. He revealed that the immediate goal of the invasion was to stop the newspaper in Lawrence from publishing anti-slavery material; Atchison's men had already made it a crime to publish anti-slavery newspapers in Kansas. Atchison made clear that the men were to kill and draw blood, told them they would be "well paid," and encouraged them to plunder the homes they invaded, adding to the many tortures and killings that Sumner had detailed in his "Crimes Against Kansas" speech. Atchison's Senate term expired on March 3, 1855. He sought election to another term, but the Democrats in the Missouri legislature were split between him and Benton, while the Whig minority put forward their own man. No Senator was elected until January 1857, when James S. Green was chosen. When the First Transcontinental Railroad was proposed in the 1850s, Atchison called for it to be built along the central route (from St. Louis through Missouri, Kansas, and Utah), rather than the southern route (from New Orleans through Texas and New Mexico). Naturally, his suggested route went through Atchison. Atchison and A. W. Doniphan fell out over the politics preceding the Civil War and over which direction Missouri should take. Atchison favored secession, while Doniphan was torn and remained for the most part non-committal; privately Doniphan favored the Union, but found it hard to go against his friends and associates. During the secession crisis in Missouri at the beginning of the American Civil War, Atchison sided with Missouri's pro-Confederate governor, Claiborne Jackson. He accepted an appointment as a major general in the Missouri State Guard. Atchison actively recruited State Guardsmen in northern Missouri and served with Missouri State Guard commander General Sterling Price in the summer campaign of 1861. In September 1861, Atchison led 3,500 State Guard recruits across the Missouri River to reinforce Price, and defeated Union troops that tried to block his force in the Battle of Liberty. Atchison continued to serve through the end of 1861. In March 1862, Union forces in the Trans-Mississippi theater won a decisive victory at Pea Ridge in Arkansas and secured Union control of Missouri. Atchison then resigned from the army over reported strategy arguments with Price and moved to Texas for the duration of the Civil War. After the war he retired to his farm near Gower, and was noted to have denied many of the pro-slavery public statements he had made prior to the Civil War. 
In addition, his retirement cottage outside of Plattsburg, Missouri burned to the ground before his death in 1886. This included the complete loss of his library containing books, documents, and letters which documented his role in the Mormon War, Indian affairs, pro-slavery activities, Civil War activities, and other legislation covering his career as a lawyer, senator, and soldier. Inauguration Day—March 4—fell on a Sunday in 1849, and so President-elect Zachary Taylor did not take the presidential oath of office until the next day. Even so, the term of the outgoing president, James K. Polk, ended at noon on March 4. On March 2, outgoing vice president George M. Dallas relinquished his position as President of the Senate, at which time Atchison was elected President pro tempore. In 1849, according to the Presidential Succession Act of 1792, the Senate president pro tempore immediately followed the vice president in presidential line of succession. As Dallas's term also ended at noon on the 4th, and as neither Taylor nor Vice President-elect Millard Fillmore had been sworn-in to office on that day, it was claimed by some of Atchison's friends and colleagues that on March 4–5, 1849, Atchison was Acting President of the United States. Historians, constitutional scholars and biographers all dismiss the claim. They point out that Atchison's Senate term had ended on March 3. When the Senate of the new Congress convened on March 5 to allow new senators and the new vice president to take the oath of office, the secretary of the Senate called members to order, as the Senate had no president pro tempore. Furthermore, the Constitution doesn't require the President-elect to take the oath of office to hold the office, just to execute the powers. Also, as Atchison never swore the presidential oath either, he could not have acted as President. Most historians and scholars assert that as soon as the outgoing President's term expires, the President-elect automatically assumes the office, although some claim instead that the office is vacant until the taking of the oath. In September 1872, Atchison, who never himself claimed that he was technically president, told a reporter for the "Plattsburg Lever": Atchison died on January 26, 1886, at his home near Gower, Missouri at the age of 78. He was buried at Greenlawn Cemetery in Plattsburg, Missouri. His grave marker reads "President of the United States for One Day."
https://en.wikipedia.org/wiki?curid=8662
Daniel Gabriel Fahrenheit Daniel Gabriel Fahrenheit FRS (24 May 1686 – 16 September 1736) was a German physicist, inventor, and scientific instrument maker. Fahrenheit was born in Danzig (Gdańsk), then a predominantly German-speaking city in the Pomeranian Voivodeship of the Polish–Lithuanian Commonwealth, but lived most of his life in the Dutch Republic (1701–1736) and was one of the notable figures in the Golden Age of Dutch science and technology. A pioneer of exact thermometry, he helped lay the foundations for the era of precision thermometry by inventing the mercury-in-glass thermometer (the first practical, accurate thermometer) and the Fahrenheit scale (the first standardized temperature scale to be widely used). In other words, Fahrenheit's inventions ushered in the first revolution in the history of thermometry (the branch of physics concerned with methods of temperature measurement). From the early 1710s until the beginnings of the electronic era, mercury-in-glass thermometers were among the most reliable and accurate thermometers ever invented. Fahrenheit was born in Danzig (Gdańsk), then in the Polish–Lithuanian Commonwealth, but lived most of his life in the Dutch Republic. The Fahrenheits were a German Hanse merchant family who had lived in several Hanseatic cities. Fahrenheit's great-grandfather had lived in Rostock, and research suggests that the Fahrenheit family originated in Hildesheim. Daniel's grandfather moved from Kneiphof in Königsberg (present-day Kaliningrad) to Danzig and settled there as a merchant in 1650. His son, Daniel Fahrenheit (the father of Daniel Gabriel), married Concordia Schumann, daughter of a well-known Danzig business family. Daniel was the eldest of the five Fahrenheit children (two sons, three daughters) who survived childhood. His sister, Virginia Elisabeth Fahrenheit, married Benjamin Krüger and was the mother of Benjamin Ephraim Krüger, a clergyman and playwright. Daniel Gabriel began training as a merchant in Amsterdam after his parents died on 14 August 1701 from eating poisonous mushrooms. However, Fahrenheit's interest in natural science led him to begin studies and experimentation in that field. From 1717, he traveled to Berlin, Halle, Leipzig, Dresden, Copenhagen, and also to his hometown, where his brother still lived. During that time, Fahrenheit met or was in contact with Ole Rømer, Christian Wolff, and Gottfried Leibniz. In 1717, Fahrenheit settled in The Hague as a glassblower, making barometers, altimeters, and thermometers. From 1718 onwards, he lectured in chemistry in Amsterdam. He visited England in 1724 and was elected a Fellow of the Royal Society that same year. From August 1736 Fahrenheit stayed in the house of Johannes Frisleven at Plein square in The Hague, in connection with an application for a patent to the States of Holland and West Friesland. At the beginning of September he became ill, and on the 7th his health had deteriorated to such an extent that he had the notary Willem Ruijsbroek come to draw up his will. On the 11th the notary came by again to make some changes. Five days after that, Fahrenheit died at the age of fifty. Four days later he received a fourth-class funeral, which meant that he was destitute, in the Kloosterkerk (the Cloister or Monastery Church) in The Hague. According to Fahrenheit's 1724 article, he determined his scale by reference to three fixed points of temperature. 
The lowest temperature was achieved by preparing a frigorific mixture of ice, water, and a salt ("ammonium chloride or even sea salt"), and waiting for the eutectic system to reach equilibrium temperature. The thermometer was then placed into the mixture, and the liquid in the thermometer was allowed to descend to its lowest point. The reading there was taken as 0 °F. The second reference point was the reading of the thermometer when it was placed in still water just as ice was forming on the surface; this was assigned as 30 °F. The third calibration point, taken as 90 °F, was the thermometer's reading when the instrument was placed under the arm or in the mouth. Fahrenheit reckoned that mercury boils at around 300 degrees on this temperature scale. Work by others showed that water boils about 180 degrees above its freezing point. The Fahrenheit scale was later redefined to make the freezing-to-boiling interval exactly 180 degrees, a convenient value since 180 is a highly composite number, meaning that it is evenly divisible into many fractions. It is because of the scale's redefinition that normal mean body temperature today is taken as 98.6 degrees, whereas it was 96 degrees on Fahrenheit's original scale. The Fahrenheit scale was the primary temperature standard for climatic, industrial and medical purposes in English-speaking countries until the 1970s. It has since been replaced by the Celsius scale, long used in the rest of the world, everywhere except the United States, where temperatures and weather reports are still broadcast in Fahrenheit.
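The redefined scale pins water's freezing point at 32 °F and its boiling point at 212 °F, 180 degrees apart against Celsius's 100, which makes conversion between the two scales a simple linear map. A minimal Python sketch of that arithmetic (the 32 and 212 fixed points are the standard post-redefinition values, implied but not stated explicitly above):

```python
# Conversion between the redefined Fahrenheit scale and Celsius.
# Freezing point of water: 32 degF; boiling point: 212 degF, so the
# freezing-to-boiling interval is 180 Fahrenheit degrees against
# Celsius's 100 -- a ratio of 9/5.

FREEZING_F = 32.0   # degF, freezing point of water on the modern scale
INTERVAL_F = 180.0  # Fahrenheit degrees between freezing and boiling
INTERVAL_C = 100.0  # the same interval measured in Celsius degrees


def celsius_to_fahrenheit(c: float) -> float:
    return c * (INTERVAL_F / INTERVAL_C) + FREEZING_F


def fahrenheit_to_celsius(f: float) -> float:
    return (f - FREEZING_F) * (INTERVAL_C / INTERVAL_F)


if __name__ == "__main__":
    print(celsius_to_fahrenheit(0))    # 32.0  (freezing point)
    print(celsius_to_fahrenheit(100))  # 212.0 (boiling point)
    print(celsius_to_fahrenheit(37))   # 98.6  (traditional body temperature)
```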
https://en.wikipedia.org/wiki?curid=8663
Freescale DragonBall Motorola/Freescale Semiconductor's DragonBall, or MC68328, is a microcontroller design based on the famous 68000 core, but implemented as an all-in-one low-power system for handheld computer use. It is supported by μClinux. It was designed by Motorola in Hong Kong and released in 1995. The DragonBall's major design win was in numerous devices running the Palm OS platform; however, from Palm OS 5 onwards its use was superseded by ARM-based processors from Texas Instruments and Intel. The base 68328 and the DragonBall EZ (MC68EZ328) are capable of speeds of up to 16.58 MHz and can run up to 2.7 MIPS (million instructions per second); this was extended to 33 MHz and 5.4 MIPS for the DragonBall VZ (MC68VZ328), and to 66 MHz and 10.8 MIPS for the DragonBall Super VZ (MC68SZ328). It is a 32-bit processor with a 32-bit internal and external address bus (24-bit external address bus for the EZ and VZ variants) and a 32-bit data bus. It has many built-in functions, such as a color and grayscale display controller, PC-speaker sound, a serial port with UART and IrDA support, UART bootstrap, and a real-time clock; it can directly access DRAM, flash ROM, and mask ROM, and has built-in support for touch screens. The more recent DragonBall MX series of microcontrollers, later renamed the Freescale i.MX (MC9328MX/MCIMX) series, is intended for applications similar to those of the earlier DragonBall devices but is based on an ARM processor core instead of a 68000 core.
https://en.wikipedia.org/wiki?curid=8664
Double-slit experiment In modern physics, the double-slit experiment is a demonstration that light and matter can display characteristics of both classically defined waves and particles; moreover, it displays the fundamentally probabilistic nature of quantum mechanical phenomena. This type of experiment was first performed, using light, by Thomas Young in 1801, as a demonstration of the wave behavior of light. At that time it was thought that light consisted of "either" waves "or" particles. With the beginning of modern physics, about a hundred years later, it was realized that light could in fact show behavior characteristic of "both" waves "and" particles. In 1927, Davisson and Germer demonstrated that electrons show the same behavior, which was later extended to atoms and molecules. Thomas Young's experiment with light was part of classical physics, well before quantum mechanics and the concept of wave–particle duality. He believed it demonstrated that the wave theory of light was correct, and his experiment is sometimes referred to as Young's experiment.
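The fringe pattern Young observed follows from simple path-difference geometry. As a standard textbook supplement (this relation is not stated in the text above): with slit separation d, wavelength λ, and angle θ measured from the central axis, bright fringes appear where the two paths differ by a whole number of wavelengths.

```latex
% Bright-fringe (constructive interference) condition for two slits:
%   d = slit separation, \theta = angle from the central axis,
%   \lambda = wavelength, m = 0, 1, 2, \dots (fringe order)
d\sin\theta = m\lambda
% For a screen at distance L \gg d, the m-th bright fringe lies near
% y_m \approx m\lambda L/d, giving a uniform fringe spacing of
\Delta y \approx \frac{\lambda L}{d}
```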
https://en.wikipedia.org/wiki?curid=8667
Dan Bricklin Daniel Singer "Dan" Bricklin (born July 16, 1951) is an American businessman and engineer who is the co-creator, with Bob Frankston, of the VisiCalc spreadsheet program. He also founded Software Garden, Inc., of which he is currently president, and Trellix Corporation. He currently serves as the chief technology officer of Alpha Software. His book, "Bricklin on Technology", was published by Wiley in May 2009. For his work with VisiCalc, Bricklin is often referred to as "the father of the spreadsheet". Bricklin was born into a Jewish family in Philadelphia, where he attended Akiba Hebrew Academy. He began college as a mathematics major but soon switched to computer science. He earned a Bachelor of Science in electrical engineering and computer science from the Massachusetts Institute of Technology in 1973, where he was a resident of Bexley Hall. Upon graduating from MIT, Bricklin worked for Digital Equipment Corporation (DEC), where he was part of the team that worked on WPS-8, until 1976, when he began working for FasFax, a cash register manufacturer. In 1977, he returned to education, and was awarded a Master of Business Administration from Harvard University in 1979. While a student at Harvard Business School, Bricklin co-developed VisiCalc in 1979, making it the first electronic spreadsheet readily available for home and office use. It ran on an Apple II computer and was considered a fourth-generation software program. VisiCalc is widely credited with fueling the rapid growth of the personal computer industry. Instead of doing financial projections on manually calculated spreadsheets, where a single change meant recalculating every cell in the sheet by hand, VisiCalc allowed the user to change any cell and have the entire sheet automatically recalculated. This could turn 20 hours of work into 15 minutes and allowed for more creativity. In 1979, Bricklin and Frankston founded Software Arts, Inc., and began selling VisiCalc via a separate company named VisiCorp. Along with co-founder Bob Frankston, he started writing versions of the program for the Tandy TRS-80, Commodore PET and the Atari 800. Soon after its launch, VisiCalc became a fast seller at $100. Software Arts also published TK!Solver and "Spotlight", a desktop organizer for the IBM Personal Computer. Bricklin was awarded the Grace Murray Hopper Award in 1981 for VisiCalc. Bricklin could not patent VisiCalc, since software programs were not eligible for patent protection at the time. Bricklin was chairman of Software Arts until 1985, the year Software Arts was acquired by Lotus, after which he left and founded Software Garden, a small consulting firm and developer of software applications. The company's focus was to produce and market "Dan Bricklin's Demo Program". The program allowed users to create demonstrations of their programs before they were even written, and was also used to create tutorials for Windows-based programs. Other versions released soon after included demo-it!. He remained the president of the company until he co-founded Slate Corporation in 1990. In 1992, he became the vice president of Phoenix-based Slate Corporation, and developed "At Hand", a pen-based spreadsheet. When Slate closed in 1994, Bricklin returned to Software Garden. His "Dan Bricklin's Overall Viewer" (described by "The New York Times" as "a visual way to display information in Windows-based software") was released in November 1994. 
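The automatic recalculation described above, VisiCalc's key contribution, can be illustrated with a toy model. The following is a minimal, hypothetical Python sketch of the concept only, not VisiCalc's actual design; the cell names and formula representation are invented for illustration:

```python
# Toy spreadsheet illustrating automatic recalculation: each cell holds
# either a constant or a formula over other cells, and reading a formula
# cell always reflects the latest inputs -- the effect VisiCalc gave users.
# (Illustrative only; not VisiCalc's actual implementation.)

class Sheet:
    def __init__(self):
        self.cells = {}  # cell name -> constant value or callable(sheet)

    def set(self, name, value):
        self.cells[name] = value

    def get(self, name):
        v = self.cells[name]
        # Formulas are stored as callables and evaluated on demand,
        # so dependent cells always see the current inputs.
        return v(self) if callable(v) else v


sheet = Sheet()
sheet.set("A1", 100)                                   # unit price
sheet.set("A2", 20)                                    # units sold
sheet.set("A3", lambda s: s.get("A1") * s.get("A2"))   # revenue formula

print(sheet.get("A3"))  # 2000
sheet.set("A2", 35)     # change one input cell...
print(sheet.get("A3"))  # 3500 -- the dependent cell updates automatically
```

This sketch recalculates lazily on every read; a production spreadsheet would instead track cell dependencies and recompute only the affected cells after a change.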
In 1995 Bricklin founded Trellix Corporation, maker of "Trellix Site Builder". Trellix was bought by Interland (now Web.com) in 2003, and Bricklin became Interland's chief technology officer until early 2004. Bricklin continues to serve as president of Software Garden, a small company that develops and markets software tools he creates, as well as providing speaking and consulting services. He has released Note Taker HD, an application that integrates handwritten notes on the Apple iPad tablet. He is also developing wikiCalc, a collaborative, basic spreadsheet running on the Web. He is currently the chief technology officer of Alpha Software in Burlington, Massachusetts, a company that creates tools to easily develop cross-platform mobile business applications. In 1994, Bricklin was inducted as a Fellow of the Association for Computing Machinery. He is a founding trustee of the Massachusetts Technology Leadership Council and has served on the boards of the Software Publishers Association and the Boston Computer Society. He was also elected a member of the National Academy of Engineering. In 1981, Bricklin was given the Grace Murray Hopper Award for VisiCalc. In 1996, the IEEE Computer Society gave Bricklin its Computer Entrepreneur Award for pioneering the development and commercialization of the spreadsheet and the profound changes it fostered in business and industry. In 2003, Bricklin was given the Wharton Infosys Business Transformation Award for being a technology change leader. He was recognized for having used information technology in an industry-transforming way. He has received an Honorary Doctor of Humane Letters from Newbury College. In 2004, he was made a Fellow of the Computer History Museum "for advancing the utility of personal computers by developing the VisiCalc electronic spreadsheet."
https://en.wikipedia.org/wiki?curid=8668
Digital enhanced cordless telecommunications Digital enhanced cordless telecommunications (originally Digital European cordless telecommunications), usually known by the acronym DECT, is a standard primarily used for creating cordless telephone systems. It originated in Europe, where it is the universal standard, replacing earlier cordless phone standards such as 900 MHz CT1 and CT2. Beyond Europe, it has been adopted by Australia and most countries in Asia and South America. North American adoption was delayed by United States radio frequency regulations. This forced development of a variation of DECT, called DECT 6.0, using a slightly different frequency range, which makes these units incompatible with systems intended for use in other areas, even from the same manufacturer. DECT has almost universally replaced other standards in most countries where it is used, with the exception of North America. DECT was originally intended for fast roaming between networked base stations, and the first DECT product was the Net3 wireless LAN. However, its most popular application is single-cell cordless phones connected to traditional analog telephone lines, primarily in home and small-office systems, though gateways with multi-cell DECT and/or DECT repeaters are also available in many private branch exchange (PBX) systems for medium and large businesses, produced by Panasonic, Mitel, Gigaset, Snom, BT Business, Spectralink, and RTX Telecom. DECT can also be used for purposes other than cordless phones, such as baby monitors and industrial sensors. The ULE Alliance's DECT ULE and its "HAN FUN" protocol are variants tailored for home security, automation, and the internet of things (IoT). The DECT standard includes the generic access profile (GAP), a common interoperability profile for simple telephone capabilities, which most manufacturers implement. GAP conformance enables DECT handsets and bases from different manufacturers to interoperate at the most basic level of functionality, that of making and receiving calls. Japan uses its own DECT variant, J-DECT, which is supported by the DECT Forum. The New Generation DECT (NG-DECT) standard, marketed as CAT-iq by the DECT Forum, provides a common set of advanced capabilities for handsets and base stations. CAT-iq allows interchangeability across IP-DECT base stations and handsets from different manufacturers, while maintaining backward compatibility with GAP equipment. It also requires mandatory support for wideband audio. The DECT standard was developed by ETSI in several phases, the first of which took place between 1988 and 1992, when the first round of standards was published. These were the ETS 300-175 series, in nine parts, defining the air interface, and ETS 300-176, defining how the units should be type-approved. A technical report, ETR-178, was also published to explain the standard. Subsequent standards were developed and published by ETSI to cover interoperability profiles and standards for testing. The standard was named Digital European Cordless Telephone at its launch by CEPT in November 1987; its name was soon changed to Digital European Cordless Telecommunications, following a suggestion by Enrico Tosato of Italy, to reflect its broader range of applications, including data services. In 1995, due to its more global usage, the name was changed from European to Enhanced. DECT is recognized by the ITU as fulfilling the IMT-2000 requirements and thus qualifies as a 3G system. Within the IMT-2000 group of technologies, DECT is referred to as IMT-2000 Frequency Time (IMT-FT). 
DECT was developed by ETSI but has since been adopted by many countries all over the world. The original DECT frequency band (1880–1900 MHz) is used in all countries in Europe. Outside Europe, it is used in most of Asia, Australia and South America. In the United States, the Federal Communications Commission in 2005 changed channelization and licensing costs in a nearby band (1920–1930 MHz, or 1.9 GHz), known as Unlicensed Personal Communications Services (UPCS), allowing DECT devices to be sold in the U.S. with only minimal changes. These channels are reserved exclusively for voice communication applications and are therefore less likely to experience interference from other wireless devices such as baby monitors and wireless networks. The New Generation DECT (NG-DECT) standard was first published in 2007; it was developed by ETSI with guidance from the Home Gateway Initiative through the DECT Forum to support IP-DECT functions in home gateway/IP-PBX equipment. The ETSI TS 102 527 series comes in five parts and covers wideband audio and mandatory interoperability features between handsets and base stations. It was preceded by an explanatory technical report, ETSI TR 102 570. The DECT Forum maintains the CAT-iq trademark and certification program; the CAT-iq wideband voice profile 1.0 and interoperability profiles 2.0/2.1 are based on the relevant parts of ETSI TS 102 527. The DECT Ultra Low Energy (DECT ULE) standard was announced in January 2011, and the first commercial products were launched later that year by Dialog Semiconductor. The standard was created to enable battery-powered home automation, security, healthcare and energy monitoring applications. Like DECT, the DECT ULE standard uses the 1.9 GHz band, and so suffers less interference than Zigbee, Bluetooth, or Wi-Fi, which all share the unlicensed 2.4 GHz ISM band with microwave ovens. DECT ULE uses a simple star network topology, so the many devices in a home are connected to a single control unit. Future revisions of the standard (tentatively termed DECT-2020) are expected to include high-reliability, low-latency DECT ULE for industrial machine-to-machine applications; high-bitrate, ultra-reliable low-latency protocols for professional wireless audio applications using point-to-point or multicast communications; and high-throughput QAM-1024 modulation. A long-term evolution to OFDM (downlink) and OFDMA/SC-FDMA (uplink) modulation with a downlink rate of 1 Gbit/s (tentatively termed DECT-5G) is being researched by the ETSI DECT committee. The effort aims to adopt the updated DECT protocols into the upcoming IMT-2020 standard, which defines Ultra-Reliable Low-Latency Communications (URLLC), Massive Machine Type Communications (MMTC), and enhanced Mobile Broadband (eMBB) services. A new low-complexity audio codec, LC3plus, has been added as an option to the 2019 revision of the DECT standard. This codec is designed for high-quality voice and music applications, and supports scalable narrowband, wideband, super-wideband, and fullband coding, with sample rates of 8, 16, 24, 32 and 48 kHz and audio bandwidth of up to 20 kHz. OpenD is an open-source framework designed to provide a complete software implementation of DECT ULE protocols on reference hardware from Dialog Semiconductor and DSP Group; the project is maintained by the DECT Forum. The DECT standard originally envisaged three major areas of application: domestic cordless telephony, enterprise cordless PABX systems, and public access services. Of these, the domestic application (cordless home telephones) has been extremely successful. 
The enterprise PABX market had some success, and all the major PABX vendors have offered DECT access options. The public access application did not succeed, since public cellular networks rapidly out-competed DECT by coupling their ubiquitous coverage with large increases in capacity and continuously falling costs. There has been only one major installation of DECT for public access: in early 1998, after much regulatory delay, Telecom Italia launched a wide-area DECT network known as "Fido", covering major cities in Italy. The service was promoted for only a few months and, having peaked at 142,000 subscribers, was shut down in 2001. DECT has been used for wireless local loop as a substitute for copper pairs in the "last mile" in countries such as India and South Africa. By using directional antennas and sacrificing some traffic capacity, cell coverage could be extended well beyond the usual DECT range. One example is the corDECT standard. The first data application for DECT was the Net3 wireless LAN system by Olivetti, launched in 1993 and discontinued in 1995. A precursor to Wi-Fi, Net3 was a micro-cellular data-only network with fast roaming between base stations and 520 kbit/s transmission rates. Data applications such as electronic cash terminals, traffic lights, and remote door openers also exist, but have been eclipsed by Wi-Fi, 3G and 4G, which compete with DECT for both voice and data. DECT 6.0 is a North American marketing term for DECT devices manufactured for the United States and Canada operating at 1.9 GHz. The "6.0" does not equate to a spectrum band; it was decided that the term DECT 1.9 might have confused customers who equate larger numbers (such as the 2.4 and 5.8 in existing 2.4 GHz and 5.8 GHz cordless telephones) with later products. The term was coined by Rick Krupka, marketing director at Siemens, and the DECT USA Working Group / Siemens ICM. In North America, DECT suffers from deficiencies in comparison to DECT elsewhere, since the UPCS band (1920–1930 MHz) is not free from heavy interference. Bandwidth is half as wide as that used in Europe (1880–1900 MHz), the 4 mW average transmission power reduces range compared to the 10 mW permitted in Europe, and the commonplace lack of GAP compatibility among US vendors binds customers to a single vendor. Before the 1.9 GHz band was approved by the FCC in 2005, DECT could only operate in the unlicensed 2.4 GHz and 900 MHz ISM bands of ITU Region 2; some users of Uniden WDECT 2.4 GHz phones reported interoperability issues with Wi-Fi equipment. North American products may not be used in Europe, Pakistan, Sri Lanka, or Africa, as they cause and suffer from interference with the local cellular networks. Use of such products is prohibited by European telecommunications authorities, the Pakistan Telecommunication Authority (PTA), the Telecommunications Regulatory Commission of Sri Lanka and the Independent Communications Authority of South Africa. European DECT products may not be used in the United States and Canada, as they likewise cause and suffer from interference with American and Canadian cellular networks, and their use is prohibited by the Federal Communications Commission and Industry Canada. DECT 8.0 HD is a marketing designation for North American DECT devices certified with the CAT-iq 2.0 "Multi Line" profile. Cordless Advanced Technology—internet and quality (CAT-iq) is a certification program maintained by the DECT Forum. It is based on the New Generation DECT (NG-DECT) series of standards from ETSI. 
NG-DECT/CAT-iq contains features that expand the generic GAP profile with mandatory support for high-quality wideband voice, enhanced security, calling party identification, multiple lines, parallel calls, and similar functions to facilitate VoIP calls through the SIP and H.323 protocols. Several CAT-iq profiles define the supported voice features. CAT-iq allows any DECT handset to communicate with a DECT base from a different vendor, providing full interoperability. The CAT-iq 2.0/2.1 feature set is designed to support the IP-DECT base stations found in office IP-PBXs and home gateways. The DECT standard specifies a means for a portable phone, or "Portable Part", to access a fixed telephone network via radio. A base station, or "Fixed Part", is used to terminate the radio link and provide access to a fixed line. A gateway is then used to connect calls to the fixed network, such as the public switched telephone network (telephone jack), an office PBX, ISDN, or VoIP over an Ethernet connection. Typical abilities of a domestic DECT Generic Access Profile (GAP) system include support for multiple handsets on one base station and one phone line socket. This allows several cordless telephones to be placed around the house, all operating from the same telephone jack. Additional handsets have a battery charger station that does not plug into the telephone system. Handsets can in many cases be used as intercoms, communicating with each other, and sometimes as walkie-talkies, intercommunicating without a telephone line connection. DECT operates in the 1880–1900 MHz band and defines ten frequency channels from 1881.792 MHz to 1897.344 MHz with a channel spacing of 1728 kHz. DECT operates as a multicarrier frequency division multiple access (FDMA) and time division multiple access (TDMA) system. This means that the radio spectrum is divided into physical carriers in two dimensions: frequency and time. FDMA access provides up to 10 frequency channels, and TDMA access provides 24 time slots per 10 ms frame. DECT uses time division duplex (TDD), which means that downlink and uplink use the same frequency but different time slots. Thus a base station provides 12 duplex speech channels in each frame, and since each time slot can occupy any available frequency channel, 10 × 12 = 120 duplex channels are available in total, each carrying 32 kbit/s. DECT also provides frequency-hopping spread spectrum over the TDMA/TDD structure for ISM-band applications. If frequency hopping is avoided, each base station can provide up to 120 channels in the DECT spectrum before frequency reuse. Each time slot can be assigned to a different channel in order to exploit the advantages of frequency hopping and to avoid interference from other users in an asynchronous fashion. DECT allows interference-free wireless operation over a considerable range outdoors, and much less indoors when separated by walls. It operates reliably in common congested domestic radio traffic situations; for instance, it is generally immune to interference from other DECT systems, Wi-Fi networks, video senders, Bluetooth technology, baby monitors and other wireless devices. The ETSI standards documents EN 300 175 parts 1–8 (DECT), EN 300 444 (GAP) and TS 102 527 parts 1–5 (NG-DECT) prescribe the standard's technical properties. The DECT physical layer uses FDMA/TDMA access with TDD. Gaussian frequency-shift keying (GFSK) modulation is used: a binary one is coded as a frequency increase of 288 kHz, and a binary zero as a frequency decrease of 288 kHz. 
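The channel arithmetic above is easy to check in code. The following minimal sketch (Python; the constant names are illustrative and not part of any DECT tooling) derives the carrier grid and the total duplex-channel count from the figures just given:

# Sketch of the DECT physical-layer numerology described above.
# All constants come from the text; the names are illustrative only.
CARRIERS = 10                  # FDMA frequency channels
SLOTS_PER_FRAME = 24           # TDMA time slots per 10 ms frame
FIRST_CARRIER_MHZ = 1881.792
SPACING_MHZ = 1.728            # 1728 kHz channel spacing

# Carrier grid: ten channels from 1881.792 MHz up to 1897.344 MHz.
carriers = [FIRST_CARRIER_MHZ + n * SPACING_MHZ for n in range(CARRIERS)]
assert abs(carriers[-1] - 1897.344) < 1e-9

# TDD pairs downlink and uplink slots within each frame, giving 12 duplex
# channels per carrier and 10 x 12 = 120 duplex channels in total.
duplex_per_carrier = SLOTS_PER_FRAME // 2
print(CARRIERS * duplex_per_carrier)   # 120 duplex channels, each 32 kbit/s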
For high-quality connections, 2-, 4- or 8-level differential phase-shift keying (DBPSK, DQPSK or D8PSK), comparable to QAM-2, QAM-4 and QAM-8, can be used to transmit 1, 2, or 3 bits per symbol. QAM-16 and QAM-64 modulations, with 4 and 6 bits per symbol, can be used for user data (B-field) only, with resulting transmission speeds of up to 5.068 Mbit/s. DECT provides dynamic channel selection and assignment; the choice of transmission frequency and time slot is always made by the mobile terminal. In case of interference in the selected frequency channel, the mobile terminal (possibly at the suggestion of the base station) can initiate either an intracell handover, selecting another channel/transmitter on the same base, or an intercell handover, selecting a different base station altogether. For this purpose, DECT devices scan all idle channels at regular 30 s intervals to generate a received signal strength indication (RSSI) list. When a new channel is required, the mobile terminal (PP) or base station (FP) selects the channel with the minimum interference from the RSSI list. The maximum allowed power for portable equipment as well as base stations is 250 mW. A portable device radiates an average of about 10 mW during a call, as it uses only one of the 24 time slots to transmit. In Europe, the power limit was expressed as effective radiated power (ERP), rather than the more commonly used equivalent isotropically radiated power (EIRP), permitting the use of high-gain directional antennas to produce much higher EIRP and hence long ranges. The DECT media access control layer controls the physical layer and provides connection-oriented, connectionless and broadcast services to the higher layers. The DECT data link layer uses Link Access Protocol Control (LAPC), a specially designed variant of the ISDN data link protocol called LAPD; both are based on HDLC. GFSK modulation uses a bit rate of 1152 kbit/s, with a 10 ms frame (11,520 bits) containing 24 time slots. Each slot contains 480 bits, some of which are used for the physical packet while the rest serve as guard space. Slots 0–11 are always used for the downlink (FP to PP) and slots 12–23 for the uplink (PP to FP). There are several combinations of slots and corresponding types of physical packets with GFSK modulation. The 420/424 bits of a GFSK basic packet (P32) are divided into synchronization, signalling and user data fields; the resulting full data rate is 32 kbit/s, available in both directions. The DECT network layer always contains the call control and mobility management protocol entities, and may optionally contain others; all of these communicate through a Link Control Entity (LCE). The call control protocol is derived from ISDN DSS1, which is a Q.931-derived protocol, although many DECT-specific changes have been made. The mobility management protocol includes the management of identities, authentication, location updating, on-air subscription and key allocation. It includes many elements similar to the GSM protocol, but also elements unique to DECT. Unlike the GSM protocol, the DECT network specifications do not define cross-linkages between the operation of the entities (for example, mobility management and call control). The architecture presumes that such linkages will be designed into the interworking unit that connects the DECT access network to whatever mobility-enabled fixed network is involved. 
By keeping the entities separate, the handset is capable of responding to any combination of entity traffic, and this creates great flexibility in fixed network design without breaking full interoperability. DECT GAP is an interoperability profile for DECT. The intent is that two different products from different manufacturers that conform not only to the DECT standard, but also to the GAP profile defined within the DECT standard, are able to interoperate for basic calling. The DECT standard includes full testing suites for GAP, and GAP products on the market from different manufacturers are in practice interoperable for the basic functions. The DECT media access control layer includes authentication of handsets to the base station using the DECT Standard Authentication Algorithm (DSAA). When registering the handset on the base, both record a shared 128-bit Unique Authentication Key (UAK). The base can request authentication by sending two random numbers to the handset, which calculates the response using the shared 128-bit key. The handset can also request authentication by sending a 64-bit random number to the base, which chooses a second random number, calculates the response using the shared key, and sends it back with the second random number. The standard also provides encryption services with the DECT Standard Cipher (DSC). The encryption is fairly weak, using a 35-bit initialization vector and encrypting the voice stream with 64-bit encryption. While most of the DECT standard is publicly available, the part describing the DECT Standard Cipher was available from ETSI only under a non-disclosure agreement to the phones' manufacturers. The properties of the DECT protocol make it hard to intercept a frame, modify it and send it again later, as DECT frames are based on time-division multiplexing and need to be transmitted at a specific point in time. Unfortunately, very few DECT devices on the market implemented the authentication and encryption procedures, and even when encryption was used by the phone, it was possible to mount a man-in-the-middle attack by impersonating a DECT base station and reverting to unencrypted mode, which allows calls to be listened to, recorded, and re-routed to a different destination. After an unverified report of a successful attack in 2002, members of the deDECTed.org project reverse-engineered the DECT Standard Cipher in 2008, and as of 2010 there has been a viable attack on it that can recover the key. In 2012, an improved authentication algorithm, the DECT Standard Authentication Algorithm 2 (DSAA2), and an improved version of the encryption algorithm, the DECT Standard Cipher 2 (DSC2), both based on AES 128-bit encryption, were included as options in the NG-DECT/CAT-iq suite. The DECT Forum also launched the DECT Security certification program, which mandates the use of previously optional security features in the GAP profile, such as early encryption and base authentication. Various access profiles have been defined in the DECT standard, and other interoperability profiles exist in the DECT suite of standards; in particular, the DPRS (DECT Packet Radio Services) brings together a number of prior interoperability profiles for the use of DECT as a wireless LAN and wireless internet access service. With good range (even better outdoors using directional antennas), dedicated spectrum, high interference immunity, open interoperability and data speeds of around 500 kbit/s, DECT appeared at one time to be a superior alternative to Wi-Fi. 
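The challenge-response authentication described above can be sketched in a few lines. Since the actual DSAA keyed function was distributed only under NDA, the sketch below (Python) substitutes HMAC-SHA-256 as a stand-in keyed function; the structure (a shared key, random challenges, a computed response) follows the description in the text, but none of the names correspond to a real DECT API.

# Minimal sketch of a DECT-style challenge-response authentication.
# The real DECT Standard Authentication Algorithm (DSAA) is not public,
# so HMAC-SHA-256 stands in for the keyed response function here.
import hashlib
import hmac
import secrets

UAK = secrets.token_bytes(16)  # shared 128-bit Unique Authentication Key

def auth_response(key: bytes, *challenges: bytes) -> bytes:
    # Derive an authentication response from the key and the challenges.
    return hmac.new(key, b"".join(challenges), hashlib.sha256).digest()

# Base authenticates the handset: the base sends two random numbers and
# checks the handset's answer against its own computation.
rand1, rand2 = secrets.token_bytes(8), secrets.token_bytes(8)
handset_reply = auth_response(UAK, rand1, rand2)
assert hmac.compare_digest(handset_reply, auth_response(UAK, rand1, rand2))

# Handset authenticates the base: the handset sends a 64-bit random number;
# the base picks a second random number, computes a response with the
# shared key, and returns both for the handset to verify.
handset_rand = secrets.token_bytes(8)   # 64-bit challenge
base_rand = secrets.token_bytes(8)
base_reply = auth_response(UAK, handset_rand, base_rand)
assert hmac.compare_digest(base_reply, auth_response(UAK, handset_rand, base_rand))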
The protocol capabilities built into the DECT networking protocol standards were particularly good at supporting fast roaming in the public space, between hotspots operated by competing but connected providers. The first DECT product to reach the market, Olivetti's Net3, was a wireless LAN, and the German firms Dosch & Amand and Hoeft & Wessel built niche businesses on the supply of data transmission systems based on DECT. However, DECT became available in the mid-1990s, too early to find wide application for wireless data outside niche industrial applications. Whilst contemporary providers of Wi-Fi struggled with the same issues, providers of DECT retreated to the more immediately lucrative market for cordless telephones. Another key weakness was the inaccessibility of the U.S. market, due to FCC spectrum restrictions at that time. By the time mass applications for wireless Internet had emerged, and the U.S. had opened up to DECT, well into the new century, the industry had moved far ahead in terms of performance, and DECT's time as a technically competitive wireless data transport had passed. DECT uses UHF radio, similar to mobile phones, baby monitors, Wi-Fi, and other cordless telephone technologies. The UK Health Protection Agency (HPA) claims that, due to a mobile phone's adaptive power ability, a DECT cordless phone's radiation could actually exceed that of a mobile phone. A DECT cordless phone transmits with an average output power of 10 mW, delivered as 100 bursts per second of 250 mW each, a peak strength comparable to some mobile phones. Most studies have been unable to demonstrate any link to health effects, or have been inconclusive. Electromagnetic fields may have an effect on protein expression in laboratory settings, but they have not yet been demonstrated to have clinically significant effects in real-world settings. The World Health Organization has issued a statement on the medical effects of mobile phones, acknowledging that the longer-term effects (over several decades) require further research.
https://en.wikipedia.org/wiki?curid=8674
Data compression ratio Data compression ratio, also known as compression power, is a measurement of the relative reduction in size of data representation produced by a data compression algorithm. It is typically expressed as the division of uncompressed size by compressed size. Data compression ratio is defined as the ratio between the "uncompressed size" and the "compressed size": compression ratio = uncompressed size / compressed size. Thus, a representation that compresses a file's storage size from 10 MB to 2 MB has a compression ratio of 10/2 = 5, often notated as an explicit ratio, 5:1 (read "five" to "one"), or as an implicit ratio, 5/1. This formulation applies equally for compression, where the uncompressed size is that of the original, and for decompression, where the uncompressed size is that of the reproduction. Sometimes the "space savings" is given instead, which is defined as the reduction in size relative to the uncompressed size: space savings = 1 − (compressed size / uncompressed size). Thus, a representation that compresses the storage size of a file from 10 MB to 2 MB yields a space savings of 1 − 2/10 = 0.8, often notated as a percentage, 80%. For signals of indefinite size, such as streaming audio and video, the compression ratio is defined in terms of uncompressed and compressed data rates instead of data sizes: compression ratio = uncompressed data rate / compressed data rate. Instead of space savings, one speaks of data-rate savings, defined as the data-rate reduction relative to the uncompressed data rate: data-rate savings = 1 − (compressed data rate / uncompressed data rate). For example, uncompressed songs in CD format have a data rate of 16 bits/sample × 2 channels × 44.1 kHz ≈ 1.4 Mbit/s, whereas AAC files on an iPod are typically compressed to 128 kbit/s, yielding a compression ratio of 10.9, for a data-rate savings of 0.91, or 91%. When the uncompressed data rate is known, the compression ratio can be inferred from the compressed data rate. Lossless compression of digitized data such as video, digitized film, and audio preserves all the information, but it does not generally achieve compression ratios much better than 2:1, because of the intrinsic entropy of the data. Compression algorithms which provide higher ratios either incur very large overheads or work only for specific data sequences (e.g. compressing a file with mostly zeros). In contrast, lossy compression (e.g. JPEG for images, or MP3 and Opus for audio) can achieve much higher compression ratios at the cost of a decrease in quality, as in Bluetooth audio streaming, since visual or audio compression artifacts from the loss of important information are introduced. A compression ratio of at least 50:1 is needed to get 1080i video into a 20 Mbit/s MPEG transport stream. The data compression ratio can serve as a measure of the complexity of a data set or signal; in particular, it is used to approximate the algorithmic complexity. It is also used to gauge how much of a file can be compressed without exceeding its original size.
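These definitions are straightforward to verify in code. The short sketch below (Python; the function names are illustrative) reproduces the 10 MB to 2 MB file example and the CD-to-AAC streaming example from the text:

# Compression ratio and space savings, exactly as defined above.
def compression_ratio(uncompressed: float, compressed: float) -> float:
    # Uncompressed size (or data rate) divided by compressed size (or rate).
    return uncompressed / compressed

def space_savings(uncompressed: float, compressed: float) -> float:
    # Reduction in size (or data rate) relative to the uncompressed value.
    return 1 - compressed / uncompressed

# File example: a 10 MB file compressed to 2 MB.
print(compression_ratio(10, 2))  # 5.0 -> a 5:1 ratio
print(space_savings(10, 2))      # 0.8 -> 80% space savings

# Streaming example: CD audio (rounded to 1.4 Mbit/s as above) vs. 128 kbit/s AAC.
cd_rate_kbps, aac_rate_kbps = 1400.0, 128.0
print(round(compression_ratio(cd_rate_kbps, aac_rate_kbps), 1))  # 10.9
print(round(space_savings(cd_rate_kbps, aac_rate_kbps), 2))      # 0.91 -> 91%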
https://en.wikipedia.org/wiki?curid=8681
Disc jockey A disc jockey, more commonly abbreviated as DJ, is a person who plays recorded music for an audience. The most common types of DJs include radio DJs; club DJs, who perform at nightclubs or music festivals; mobile DJs, who are hired to perform at public and private events such as weddings, parties, and festivals; and turntablists, who use record players, usually turntables, to manipulate sounds on phonograph records. Originally, the "disc" in "disc jockey" referred to gramophone records, but nowadays DJ is used as an all-encompassing term that also describes persons who mix music from other recording media such as cassettes, CDs, or digital audio files on a CDJ, controller, or even a laptop. DJs may adopt the title "DJ" in front of their real names, adopted pseudonyms, or stage names. DJs use audio equipment that can play at least two sources of recorded music simultaneously and mix them together to create seamless transitions between recordings and develop unique mixes of songs. Often, this involves aligning the beats of the music sources so their rhythms and tempos do not clash when played together, enabling a smooth transition from one song to another. DJs often use specialized DJ mixers, small audio mixers with crossfader and cue functions, to blend or transition from one song to another. Mixers are also used to pre-listen to sources of recorded music in headphones and adjust upcoming tracks to mix with currently playing music. DJ software can be used with a DJ controller device to mix audio files on a computer instead of a console mixer. DJs may also use a microphone to speak to the audience; effects units such as reverb to create sound effects; and electronic musical instruments such as drum machines and synthesizers. DJs typically perform for a live audience in a nightclub or dance club, for a TV or radio broadcast audience, or for an online radio audience. DJs also create mixes, remixes and tracks that are recorded for later sale and distribution. In hip hop music, DJs may create beats using percussion breaks, basslines and other musical content sampled from pre-existing records; rappers and MCs use these beats to rap over. Some DJs adopt the title "DJ" as part of their names (e.g., DJ Jazzy Jeff, DJ Qbert, DJ Shadow and DJ Yoda). Professional DJs often specialize in a specific genre of music, such as techno, house or hip hop music, and typically have extensive knowledge of the music they specialize in. Many DJs are avid music collectors of vintage, rare or obscure tracks and records. Radio DJs or radio personalities introduce and play music broadcast on AM, FM, digital or Internet radio stations. Club DJs, commonly referred to simply as DJs, play music at musical events, such as parties at music venues or bars, music festivals, and corporate and private events. Typically, club DJs mix music recordings from two or more sources using different mixing techniques in order to produce a non-stop flow of music. One key technique used for seamlessly transitioning from one song to another is beatmatching. 
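To make those last two ideas concrete, here is a minimal sketch (Python; the tempo values and function names are illustrative, not taken from any DJ software) of the beatmatching arithmetic and of a simple linear crossfader law:

# Beatmatching: the fractional playback-speed change needed so that an
# incoming track matches the tempo of the currently playing track.
def beatmatch_adjustment(playing_bpm: float, incoming_bpm: float) -> float:
    return playing_bpm / incoming_bpm - 1.0

# An incoming 130 BPM track must be slowed by about 3.1% to match 126 BPM.
print(f"{beatmatch_adjustment(126.0, 130.0):+.1%}")  # -3.1%

# Linear crossfader: position 0.0 plays only channel A, 1.0 only channel B,
# and 0.5 is an equal 50/50 blend of the two channels.
def crossfade(sample_a: float, sample_b: float, position: float) -> float:
    return (1.0 - position) * sample_a + position * sample_b

print(crossfade(1.0, 0.0, 0.0))  # 1.0 -> channel A only
print(crossfade(1.0, 0.0, 0.5))  # 0.5 -> equal mix
print(crossfade(1.0, 0.0, 1.0))  # 0.0 -> channel B only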
A DJ who mostly plays and mixes one specific music genre is often given the title of that genre; for example, a DJ who plays hip hop music is called a hip hop DJ, a DJ who plays house music is a house DJ, a DJ who plays techno is called a techno DJ, and so on. The quality of a DJ performance (often called a DJ mix or DJ set) consists of two main features: technical skills, or how well the DJ can operate the equipment and produce smooth transitions between two or more recordings, and the ability of the DJ to select the most suitable recordings, also known as "reading the crowd". Turntablists, also called battle DJs, use turntables and a DJ mixer to manipulate recorded sounds in order to produce new music; in essence, they use DJ equipment as a musical instrument. The best-known turntablist technique is scratching. Turntablists often participate in DJ contests such as the DMC World DJ Championships and Red Bull 3Style. A resident DJ performs at a venue on a regular basis or permanently. They perform regularly, typically under an agreement, in a particular discotheque, club, event, or broadcasting station, and have a decisive influence on the club or the series of events. Per agreement with the management or company, the DJ performs at the agreed times and dates, typically two or three times a week, for example on Friday and Saturday. DJs who make a steady income from a venue are also considered resident DJs. In Jamaican music, a deejay (DJ) is a reggae or dancehall musician who sings and raps ("toasts") to an instrumental (or riddim). DJs use equipment that enables them to play multiple sources of recorded music and mix them to create seamless transitions and unique arrangements of songs. An important tool for DJs is the specialized DJ mixer, a small audio mixer with a crossfader and cue functions. The crossfader enables the DJ to blend or transition from one song to another. The cue knobs or switches allow the DJ to listen to a source of recorded music in headphones before playing it for the live club or broadcast audience. Previewing the music in headphones helps the DJ pick the next track they want to play, cue up the track to the desired starting location, and align the two tracks' beats in traditional situations where auto-sync technology is not being used. This process ensures that the selected song will mix well with the currently playing music. DJs may align the beats of the music sources so their rhythms do not clash when they are played together, helping to create a smooth transition from one song to another. Other equipment may include a microphone, effects units such as reverb, and electronic musical instruments such as drum machines and synthesizers. As music technology has progressed, DJs have adopted different types of equipment to play and mix music, all of which are still commonly used. Traditionally, DJs used two turntables plugged into a DJ mixer to mix music on vinyl records. As compact discs became popular media for publishing music, specialized high-quality CD players known as CDJs were developed for DJs. CDJs can take the place of turntables or be used together with turntables. Many CDJs can now play digital music files from USB flash drives or SD cards in addition to CDs. With the spread of portable laptop, tablet, and smartphone computers, DJs began using software together with specialized sound cards and DJ controller hardware. 
DJ software can be used in conjunction with a hardware DJ mixer or instead of one. Turntables allow DJs to play vinyl records. By adjusting the playback speed of the turntable, either with the speed knob or by manipulating the platter (e.g., slowing the platter by resting a finger gently along its side), DJs can match the tempos of different records so their rhythms can be played together at the same time without clashing, or make a smooth, seamless transition from one song to another. This technique is known as beatmatching. DJs typically replace the rubber mat that keeps the record moving in sync with the turntable with a slipmat, which facilitates manipulating the playback of the record by hand. With the slipmat, the DJ can stop or slow down the record while the turntable is still spinning. Direct-drive turntables are the type preferred by DJs, with the Technics SL-1200 being the most popular model. Belt-drive turntables are less expensive, but they are not suitable for turntablism and DJing, because slowing the platter by hand can stretch out the belt. Some DJs, most commonly those who play hip hop music, go beyond merely mixing records and use turntables as musical instruments for scratching, beat juggling, and other turntablism techniques. CDJs are high-quality digital media players made for DJing. They often have large jog wheels and pitch controls that allow DJs to manipulate the playback of digital files for beatmatching, similar to how DJs manipulate vinyl records on turntables. CDJs often have features such as loops and waveform displays similar to DJ software. Originally designed to play music from compact discs, they can now play digital music files stored on USB flash drives and SD cards. Some CDJs can also connect to a computer running DJ software to act as a DJ controller. DJ mixers are small audio mixing consoles specialized for DJing. Most DJ mixers have far fewer channels than a mixer used by a record producer or audio engineer: whereas standard live sound mixers in small venues have 12 to 24 channels, and standard recording studio mixers have even more (as many as 72 on large boards), basic DJ mixers may have only two channels. While DJ mixers share many features with larger mixers (faders, equalization knobs, gain knobs, effects units, etc.), they have one feature that is usually found only on DJ mixers: the crossfader. The crossfader is a type of fader that is mounted horizontally. DJs use the crossfader to mix two or more sound sources. The midpoint of the crossfader's travel is a 50/50 mix of the two channels (on a two-channel mixer). The far left side of the crossfader provides only the channel A sound source; the far right side provides only the channel B sound source (e.g., record player number 2). Positions in between the two extremes provide different mixes of the two channels. Some DJs use a computer with DJ software and a DJ controller instead of an analog DJ mixer to mix music, although DJ software can also be used in conjunction with a hardware DJ mixer. DJs generally use higher-quality headphones than those designed for music consumers. 
DJ headphones have other properties useful for DJs, such as designs that acoustically isolate the sounds of the headphones from the outside environment (hard-shell headphones), flexible headbands and pivot joints that allow DJs to listen with one earpiece while turning the other away (so the DJ can monitor the mix in the club), and replaceable cables. Replaceable cables enable DJs to buy new cables if a cable becomes frayed, worn, or damaged, or if it is accidentally cut. Closed-back headphones are highly recommended for DJs to block outside noise, as DJ environments tend to be very noisy. Standard headphones have a 3.5 mm jack, but DJ equipment usually requires a ¼-inch jack; most specialized DJ headphones come with an adapter to switch between the two. Detachable coiled cables are well suited to DJ headphones. DJs have changed their equipment as new technologies have been introduced. The earliest DJs in pop music, in 1970s discos, used record turntables, vinyl records and audio consoles. In the 1970s, DJs would have to lug heavy direct-drive turntables and crates of records to clubs and shows. In the 1980s, many DJs transitioned to compact cassettes. In the 1990s and 2000s, many DJs switched to digital audio such as CDs and MP3 files. As technological advances made it practical to store large collections of digital music files on a laptop computer, DJ software was developed so DJs could use a laptop as a source of music instead of transporting CDs or vinyl records to gigs. Unlike most music player software designed for regular consumers, DJ software can play at least two audio files simultaneously, display the waveforms of the files on screen, and let the DJ listen to either source. The waveforms allow the DJ to see what is coming next in the music and how the playback of different files is aligned. The software analyzes music files to identify their tempo and where the beats are. The analyzed information can be used by the DJ to help beatmatch manually, as with vinyl records, or the software can synchronize the beats automatically. Digital signal processing algorithms in software allow DJs to adjust the tempo of recordings independently of their pitch and musical key, a feature known as "keylock". Some software analyzes the loudness of the music for automatic normalization with ReplayGain and detects the musical key. Additionally, DJ software can store cue points, set loops, and apply effects. As tablet computers and smartphones became widespread, DJ software was written to run on these devices in addition to laptops. DJ software requires specialized hardware in addition to a computer to take full advantage of its features. The consumer-grade sound card integrated into most computer motherboards can only output two channels (one stereo pair). However, DJs need to be able to output at least four channels (two stereo pairs: left and right for input 1, and left and right for input 2), either as unmixed signals to send to a DJ mixer or as a main output plus a headphone output. Additionally, DJ sound cards output higher-quality signals than the sound cards built into consumer-grade computer motherboards. Special vinyl records (or CDs/digital files played with CDJs) can be used with DJ software to play digital music files as if they were pressed onto vinyl, allowing turntablism techniques to be used with digital files. These vinyl records do not have music recordings pressed onto them. 
Instead, they are pressed with a special signal, referred to as "timecode", to control DJ software. The DJ software interprets changes in the playback speed, direction, and position of the timecode signal and manipulates the digital files it is playing in the same way that the turntable manipulates the timecode record. This requires a specialized DJ sound card with at least four channels (two stereo pairs) of inputs and outputs. With this setup, the DJ software typically outputs unmixed signals from the music files to an external hardware DJ mixer. Some DJ mixers have integrated USB sound cards that allow DJ software to connect directly to the mixer without requiring a separate sound card. DJ software can also be used to mix audio files on the computer instead of a separate hardware mixer. When mixing on a computer, DJs often use a DJ controller device that mimics the layout of two turntables plus a DJ mixer to control the software, rather than the computer keyboard and touchpad on a laptop or the touchscreen on a tablet computer or smartphone. Many DJ controllers have an integrated sound card with four output channels (two stereo pairs) that allows the DJ to use headphones to preview music before playing it on the main output. Several techniques are used by DJs to better mix and blend recorded music. These techniques primarily include the cueing, equalization and audio mixing of two or more sound sources. The complexity and frequency of special techniques depend largely on the setting in which a DJ is working. Radio DJs are less likely to focus on advanced music-mixing procedures than club DJs, who rely on a smooth transition between songs using a range of techniques; however, some radio DJs are experienced club DJs, so they use the same sophisticated mixing techniques. Club DJ turntable techniques include beatmatching, phrasing and slip-cueing to preserve energy on a dance floor. Turntablism embodies the art of cutting, beat juggling, scratching, needle drops, phase shifting, back spinning and more to perform the transitions and overdubs of samples in a more creative manner (although turntablism is often considered a use of the turntable as a musical instrument rather than a tool for blending recorded music). Professional DJs may use harmonic mixing to choose songs that are in compatible musical keys. Recent advances in both DJ hardware and software can provide assisted or automatic completion of some traditional DJ techniques and skills. Examples include phrasing and beatmatching, which can be partially or completely automated by DJ software that performs automatic synchronization of sound recordings, a feature commonly labelled "sync". Most DJ mixers now include a beat-counter that analyzes the tempo of an incoming sound source and displays its tempo in beats per minute (BPM), which may assist with beatmatching analog sound sources. In the past, DJing was largely a self-taught craft, but with the complexities of new technologies and the convergence with music production methods, there is a growing number of schools and organizations that offer instruction on the techniques. In DJ culture, miming refers to the practice of DJs pantomiming the actions of live-mixing a set on stage while a pre-recorded mix plays over the sound system. Miming a mix in a live performance is considered controversial within DJ culture. 
Some within the DJ community say that miming is increasingly used as a technique by celebrity model DJs who may lack mixing skills but can draw big crowds to a venue. During a DJ tour for the release of the French group Justice's "A Cross the Universe" in November 2008, controversy arose when a photograph surfaced of Augé DJing with an unplugged Akai MPD24. The photograph sparked accusations that Justice's live sets were faked. Augé has since said that the equipment was unplugged only very briefly before being reattached, and the band put a three-photo set of the incident on their MySpace page. After a 2013 Disclosure concert, the duo was criticized for pretending to live-mix to a playback of a pre-recorded track. Disclosure's Guy Lawrence said they did not deliberately intend to mislead their audience, and cited miming by other DJs such as David Guetta. The term "disc jockey" was ostensibly coined by radio gossip commentator Walter Winchell in 1935, and the phrase first appeared in print in a 1941 issue of "Variety" magazine, used to describe radio personalities who introduced phonograph records on the air. Playing recorded music for dancing and parties rose with the mass marketing of home phonographs in the late 19th century. British radio disc jockey Jimmy Savile hosted his first live dance party in 1943 using a single turntable and a makeshift sound system. Four years later, Savile began using two turntables welded together to form a single DJ console. In 1947, the Whisky à Gogo opened in Paris as the first discotheque. In the 1960s, Rudy Bozak began making the first DJ mixers, mixing consoles specialized for DJing. In Jamaican sound system culture of the late 1960s and early 1970s, producer and sound system operator King Tubby and producer Lee "Scratch" Perry were pioneers of the genre known as dub music. They experimented with tape-based composition; emphasized repetitive rhythmic structures (often stripped of their harmonic elements); electronically manipulated spatiality; sonically manipulated pre-recorded musical materials from mass media; and remixed music, among other innovative techniques. Jamaican dancehall culture has had, and continues to have, a significant impact on American hip hop culture. DJ turntablism has its origins in the invention of direct-drive turntables. Early belt-drive turntables were unsuitable for turntablism and mixing, since they had a slow start-up time and were prone to wear-and-tear and breakage, as the belt would break from backspinning or scratching. The first direct-drive turntable was invented by Shuichi Obata, an engineer at Matsushita (now Panasonic), based in Osaka, Japan. It eliminated belts and instead employed a motor to directly drive a platter on which a vinyl record rests. In 1969, Matsushita released it as the SP-10, the first direct-drive turntable on the market and the first in their influential Technics series of turntables. In 1972, Technics started making their SL-1200 turntable, which became the most popular turntable for DJs due to its high-torque direct-drive design. The SL-1200 had a rapid start, and its durable direct drive enabled DJs to manipulate the platter, as with scratching techniques. Hip hop DJs began using the Technics SL-1200 as a musical instrument to manipulate records with turntablism techniques such as scratching and beat juggling rather than merely mixing records. 
These techniques were developed in the 1970s by DJ Kool Herc, Grand Wizard Theodore, and Afrika Bambaataa, as they experimented with Technics direct-drive decks, finding that the motor would continue to spin at the correct RPM even if the DJ wiggled the record back and forth on the platter. Although Technics stopped producing the SL-1200 in 2010, it remains the most popular DJ turntable due to its high build quality and durability. In 1980, the Japanese company Roland released the TR-808, an analog rhythm/drum machine with unique artificial sounds, such as its booming bass and sharp snare, and a metronome-like rhythm. Yellow Magic Orchestra's use of the instrument in 1980 influenced hip hop pioneer Afrika Bambaataa, after which the TR-808 was widely adopted by hip hop DJs, with 808 sounds remaining central to hip hop music ever since. The Roland TB-303, a bass synthesizer released in 1981, had a similar impact on electronic dance music genres such as techno and house music, along with Roland's TR-808 and TR-909 drum machines. In 1982, the Compact Disc (CD) format was released, popularizing digital audio. In 1998, the first MP3 digital audio player, the Eiger Labs MPMan F10, was introduced. In January of that same year, at the BeOS Developer Conference, N2IT demonstrated FinalScratch, the first digital DJ system to give DJs control of MP3 files through special time-coded vinyl records or CDs. While it would take some time for this novel concept to catch on with die-hard vinyl DJs, this was the first step in the digital DJ revolution. Manufacturers joined with computer DJing pioneers to offer professional endorsements, the first being Professor Jam (a.k.a. William P. Rader), who went on to develop the industry's first dedicated computer DJ convention and learning program, the "CPS (Computerized Performance System) DJ Summit", to help spread the word about the advantages of this emerging technology. In 2001, Pioneer DJ began producing the CDJ-1000 CD player, making the use of digital music recordings with traditional DJ techniques practical for the first time. As the 2000s progressed, laptop computers became more powerful and affordable. DJ software, specialized DJ sound cards, and DJ controllers were developed so DJs could use laptops as a source of music rather than turntables or CDJs. In the 2010s, like laptops before them, tablet computers and smartphones became more powerful and affordable, and DJ software was written to run on these more portable devices, although laptops remain the more common type of computer for DJing. In Western popular music, women musicians have achieved great success in singing and songwriting roles; however, there are relatively few women DJs or turntablists. Part of this may stem from a generally low percentage of women in audio technology-related jobs. A 2013 "Sound on Sound" article stated that there are "...few women in record production and sound engineering." Ncube states that "[n]inety-five percent of music producers are male, and although there are female producers achieving great things in music, they are less well-known than their male counterparts." The vast majority of students in music technology programs are male. In hip hop music, the low percentage of women DJs and turntablists may stem from the overall male domination of the entire hip hop music industry. Most of the top rappers, MCs, DJs, record producers and music executives are men. There are a small number of high-profile women, but they are rare. 
In 2007, Mark Katz's article "Men, Women, and Turntables: Gender and the DJ Battle" stated that "very few women [do turntablism] battle[s]; the matter has been a topic of conversation among hip-hop DJs for years." In 2010, Rebekah Farrugia stated that "the male-centricity of EDM culture" contributes to "a marginalisation of women in these [EDM] spaces." While turntablism and broader DJ practices should not be conflated, Katz suggests that the use, or lack of use, of the turntable by women across genres and disciplines is affected by what he defines as "male technophilia." Historian Ruth Oldenziel concurs, in her writing on engineering, with this idea of socialization as a central factor in the lack of engagement with technology. She explains: "an exclusive focus on women's supposed failure to enter the field … is insufficient for understanding how our stereotypical notions have come into being; it tends to put the burden of proof entirely on women and to blame them for their supposedly inadequate socialization, their lack of aspiration, and their want of masculine values. An equally challenging question is why and how boys have come to love things technical, how boys have historically been socialized as technophiles." Lucy Green has focused on gender in relation to musical performers and creators, and specifically on educational frameworks as they relate to both. She suggests that women's alienation from "areas that have a strong technological tendency such as DJing, sound engineering and producing" is "not necessarily about her dislike of these instruments but relates to the interrupting effect of their dominantly masculine delineations." Despite this, women and girls increasingly engage in turntable and DJ practices, individually and collectively, and "carve out spaces for themselves in EDM and DJ Culture". A 2015 article cited a number of prominent female DJs: Hannah Wants, Ellen Allien, Miss Kittin, Monika Kruse, Nicole Moudaber, B.Traits, Magda, Nina Kraviz, Nervo, and Annie Mac. Two years later, another article listed world-famous female DJs, including Nastia, tINY, Nora En Pure, Anja Schneider, Peggy Gou, Maya Jane Coles, and Eli & Fur. The female DJ The Black Madonna has been called "one of the world's most exciting turntablists." Her stage name is a tribute to her mother's favorite Catholic saint. In 2018, The Black Madonna played herself as an in-residence DJ in the video game "Grand Theft Auto Online", as part of the "After Hours" DLC. There are various projects dedicated to the promotion and support of these practices, such as Female DJs London. Some artists and collectives go beyond these practices to be more gender-inclusive. For example, Discwoman, a New York-based collective and booking agency, describe themselves as "representing and showcasing cis women, trans women and genderqueer talent."
https://en.wikipedia.org/wiki?curid=8683
Detroit Detroit is the largest and most populous city in the U.S. state of Michigan, the largest U.S. city on the United States–Canada border, and the seat of Wayne County. The municipality of Detroit had a 2019 estimated population of 670,031, making it the 24th-most populous city in the United States. The metropolitan area, known as Metro Detroit, is home to 4.3 million people, making it the second-largest in the Midwest after the Chicago metropolitan area and the 14th-largest in the United States. Regarded as a major cultural center, Detroit is known for its contributions to music and as a repository for art, architecture and design. Detroit is a major port on the Detroit River, one of the four major straits that connect the Great Lakes system to the Saint Lawrence Seaway. Detroit Metropolitan Airport is among the most important hubs in the United States. The City of Detroit anchors the second-largest regional economy in the Midwest, behind Chicago and ahead of Minneapolis–Saint Paul, and the 13th-largest in the United States. Detroit and its neighboring Canadian city Windsor are connected through a highway tunnel, a railway tunnel, and the Ambassador Bridge, which is the second-busiest international crossing in North America, after San Diego–Tijuana. Detroit is best known as the center of the U.S. automobile industry, and the "Big Three" auto manufacturers General Motors, Ford, and Fiat Chrysler are all headquartered in Metro Detroit. In 1701, Antoine de la Mothe Cadillac founded Fort Pontchartrain du Détroit, the future city of Detroit. During the 19th century, it became an important industrial hub at the center of the Great Lakes region. With the influence of the booming auto industry, the city became the fourth-largest in the nation in 1920, after only New York City, Chicago and Philadelphia. As the auto industry expanded in the early 20th century, the city and its suburbs experienced rapid growth, and the city remained the fourth-largest in the country into the 1940s. However, due to industrial restructuring, the loss of jobs in the auto industry, and rapid suburbanization, Detroit has lost considerable population from the late 20th century to the present. Since reaching a peak of 1.85 million at the 1950 census, Detroit's population has declined by more than 60 percent. In 2013, Detroit became the largest U.S. city to file for bankruptcy, which it successfully exited in December 2014, when the city government regained control of Detroit's finances. Detroit's diverse culture has had both local and international influence, particularly in music: the city gave rise to the genres of Motown and techno and played an important role in the development of jazz, hip-hop, rock, and punk music. The rapid growth of Detroit in its boom years resulted in a globally unique stock of architectural monuments and historic places. Since the 2000s, conservation efforts have saved many architectural pieces and achieved several large-scale revitalizations, including the restoration of several historic theatres and entertainment venues, high-rise renovations, new sports stadiums, and a riverfront revitalization project. More recently, the population of Downtown Detroit, Midtown Detroit, and various other neighborhoods has increased. An increasingly popular tourist destination, Detroit receives 19 million visitors per year. In 2015, Detroit was named a "City of Design" by UNESCO, the first U.S. city to receive that designation. 
Paleo-Indian people inhabited areas near Detroit as early as 11,000 years ago, including the culture referred to as the Mound Builders. In the 17th century, the region was inhabited by Huron, Odawa, Potawatomi and Iroquois peoples. The first Europeans did not penetrate the region and reach the straits of Detroit until French missionaries and traders worked their way around the League of the Iroquois, with whom they were at war, and other Iroquoian tribes in the 1630s. The Huron and Neutral peoples held the north side of Lake Erie until the 1650s, when the Iroquois pushed both, along with the Erie people, away from the lake and its beaver-rich feeder streams in the Beaver Wars of 1649–1655. By the 1670s, the war-weakened Iroquois laid claim to land as far south as the Ohio River valley in northern Kentucky as hunting grounds, and had absorbed many other Iroquoian peoples after defeating them in war. For the next hundred years, virtually no British, colonial, or French action was contemplated without consultation with, or consideration of, the Iroquois' likely response. When the French and Indian War evicted the Kingdom of France from Canada, it removed one barrier to British colonists migrating west. British negotiations with the Iroquois would prove critical and led to a Crown policy limiting settlements below the Great Lakes and west of the Alleghenies. Many colonial American would-be migrants resented this restraint and became supporters of the American Revolution. The 1778 raids and the resulting decisive Sullivan Expedition of 1779 reopened the Ohio Country to westward emigration, which began almost immediately. By 1800, white settlers were pouring westward. The city was named by French colonists, referring to the Detroit River (French for "the strait of Lake Erie"), linking Lake Huron and Lake Erie; in the historical context, the strait included the St. Clair River, Lake St. Clair and the Detroit River. On July 24, 1701, the French explorer Antoine de la Mothe Cadillac, along with more than a hundred other settlers, began constructing a small fort on the north bank of the Detroit River. Cadillac would later name the settlement Fort Pontchartrain du Détroit, after Louis Phélypeaux, comte de Pontchartrain, Minister of Marine under Louis XIV. A church was soon founded here, and the parish was known as Sainte Anne de Détroit. France offered free land to colonists to attract families to Detroit; when it reached a population of 800 in 1765, this was the largest European settlement between Montreal and New Orleans, both also French settlements, in the former colonies of New France and La Louisiane, respectively. By 1773, after the addition of Anglo-American settlers, the population of Detroit was 1,400. By 1778, its population had reached 2,144, and it was the third-largest city in what was known as the Province of Quebec since the British takeover of French colonies following their victory in the Seven Years' War. The region's economy was based on the lucrative fur trade, in which numerous Native American people had important roles as trappers and traders. Today the flag of Detroit reflects its French colonial heritage. Descendants of the earliest French and French-Canadian settlers formed a cohesive community, which was gradually superseded as the dominant population after more Anglo-American settlers arrived in the early 19th century with American westward migration. 
Living along the shores of Lake St. Clair and south to Monroe and the downriver suburbs, the ethnic French Canadians of Detroit, also known as Muskrat French in reference to the fur trade, remain a subculture in the region in the 21st century. During the French and Indian War (1754–63), the North American front of the Seven Years' War between Britain and France, British troops gained control of the settlement in 1760 and shortened its name to "Detroit". Several regional Native American tribes, such as the Potawatomi, Ojibwe and Huron, launched Pontiac's Rebellion in 1763 and conducted a siege of Fort Detroit, but failed to capture it. In defeat, France ceded its territory in North America east of the Mississippi to Britain following the war. Following the American Revolutionary War and United States independence, Britain ceded Detroit along with other territory in the area under the Jay Treaty (1796), which established the northern border with its colony of Canada. In 1805, fire destroyed most of the Detroit settlement, which consisted primarily of wooden buildings. One stone fort, a river warehouse, and the brick chimneys of former wooden homes were the sole structures to survive. Of the 600 Detroit residents in this area, none died in the fire. From 1805 to 1847, Detroit was the capital of Michigan (first the territory, then the state). The United States commander at Detroit surrendered without a fight to British troops during the War of 1812 in the Siege of Detroit, believing his forces were vastly outnumbered. The Battle of Frenchtown (January 18–23, 1813) was part of a U.S. effort to retake the city, and U.S. troops suffered their highest fatalities of any battle in the war. This battle is commemorated at River Raisin National Battlefield Park south of Detroit in Monroe County. Detroit was recaptured by the United States later that year. The settlement was incorporated as a city in 1815. As the city expanded, a geometric street plan developed by Augustus B. Woodward was followed, featuring grand boulevards as in Paris. Prior to the American Civil War, the city's access to the Canada–US border made it a key stop for refugee slaves gaining freedom in the North along the Underground Railroad. Many crossed the Detroit River to Canada to escape pursuit by slave catchers. An estimated 20,000 to 30,000 African-American refugees settled in Canada. George DeBaptiste was considered to be the "president" of the Detroit Underground Railroad, William Lambert the "vice president" or "secretary", and Laura Haviland the "superintendent". Numerous men from Detroit volunteered to fight for the Union during the American Civil War, including the 24th Michigan Infantry Regiment. It was part of the legendary Iron Brigade, which fought with distinction and suffered 82% casualties at the Battle of Gettysburg in 1863. When the First Volunteer Infantry Regiment arrived to fortify Washington, D.C., President Abraham Lincoln is quoted as saying "Thank God for Michigan!" George Armstrong Custer led the Michigan Brigade during the Civil War and called them the "Wolverines". During the late 19th century, wealthy industry and shipping magnates commissioned the design and construction of several Gilded Age mansions east and west of the current downtown, along the major avenues of the Woodward plan. Most notable among them was the David Whitney House at 4421 Woodward Avenue, and the grand avenue became a favored address for mansions. 
During this period some referred to Detroit as the "Paris of the West" for its architecture, grand avenues in the Paris style, and for Washington Boulevard, recently electrified by Thomas Edison. The city had grown steadily from the 1830s with the rise of shipping, shipbuilding, and manufacturing industries. Strategically located along the Great Lakes waterway, Detroit emerged as a major port and transportation hub. In 1896, a thriving carriage trade prompted Henry Ford to build his first automobile in a rented workshop on Mack Avenue. During this growth period, Detroit expanded its borders by annexing all or part of several surrounding villages and townships. In 1903, Henry Ford founded the Ford Motor Company. Ford's manufacturing operations—and those of automotive pioneers William C. Durant, the Dodge Brothers, Packard, and Walter Chrysler—established Detroit's status in the early 20th century as the world's automotive capital. The growth of the auto industry was reflected by changes in businesses throughout the Midwest and nation, with the development of garages to service vehicles and gas stations, as well as factories for parts and tires. With the rapid growth of industrial workers in the auto factories, labor unions such as the American Federation of Labor and the United Auto Workers fought to organize workers to gain them better working conditions and wages. They initiated strikes and other tactics in support of improvements such as the 8-hour day/40-hour work week, increased wages, greater benefits, and improved working conditions. The labor activism of those years increased the influence of union leaders in the city such as Jimmy Hoffa of the Teamsters and Walter Reuther of the United Auto Workers. Due to the booming auto industry, Detroit became the 4th-largest city in the nation in 1920, following New York City, Chicago and Philadelphia. The prohibition of alcohol from 1920 to 1933 resulted in the Detroit River becoming a major conduit for the smuggling of illegal Canadian spirits. Detroit, like many places in the United States, developed racial conflict and discrimination in the 20th century following rapid demographic changes, as hundreds of thousands of new workers were attracted to the industrial city; in a short period it became the 4th-largest city in the nation. The Great Migration brought rural blacks from the South; they were outnumbered by southern whites who also migrated to the city. Immigration brought southern and eastern Europeans of Catholic and Jewish faith; these new groups competed with native-born whites for jobs and housing in the booming city. Detroit was one of the major Midwest cities that was a site for the dramatic urban revival of the Ku Klux Klan beginning in 1915. "By the 1920s the city had become a stronghold of the KKK," whose members primarily opposed Catholic and Jewish immigrants, but also practiced discrimination against black Americans. Even after the decline of the KKK in the late 1920s, the Black Legion, a secret vigilante group, was active in the Detroit area in the 1930s. One-third of its estimated 20,000 to 30,000 members in Michigan were based in the city. It was defeated after numerous prosecutions following the kidnapping and murder in 1936 of Charles Poole, a Catholic organizer with the federal Works Progress Administration. Some 49 men of the Black Legion were convicted of numerous crimes, with many sentenced to life in prison for murder. In the 1940s, the world's "first urban depressed freeway", the Davison, was constructed in Detroit.
During World War II, the government encouraged retooling of the American automobile industry in support of the Allied powers, leading to Detroit's key role in the American Arsenal of Democracy. Jobs expanded so rapidly due to the defense buildup in World War II that 400,000 people migrated to the city from 1941 to 1943, including 50,000 blacks in the second wave of the Great Migration, and 350,000 whites, many of them from the South. Whites, including ethnic Europeans, feared black competition for jobs and scarce housing. The federal government prohibited discrimination in defense work, but when in June 1943 Packard promoted three black people to work next to whites on its assembly lines, 25,000 white workers walked off the job. The Detroit race riot of 1943 took place in June, three weeks after the Packard plant protest, beginning with an altercation at Belle Isle. Blacks suffered 25 of the total 34 deaths and three-quarters of the roughly 600 wounded, and bore most of the property losses. Rioters moved through the city, and young whites traveled across town to attack more settled blacks in their neighborhood of Paradise Valley. Industrial mergers in the 1950s, especially in the automobile sector, increased oligopoly in the American auto industry. Detroit manufacturers such as Packard and Hudson merged into other companies and eventually disappeared. At its peak population of 1,849,568 in the 1950 census, the city was the 5th-largest in the United States, after New York City, Chicago, Philadelphia and Los Angeles. In this postwar era, the auto industry continued to create opportunities for many African Americans from the South, who continued their Great Migration to Detroit and other northern and western cities to escape the strict Jim Crow laws and racial discrimination policies of the South. As before the war, competition was fierce for employment, housing, and land. Racial discrimination took place in employment, keeping the work force and better jobs predominantly white. These unequal opportunities in employment resulted in unequal housing opportunities for the majority of the black community. For instance, despite the changes in demographics, Detroit's police force, fire department, and other city jobs continued to be held predominantly by white residents. The surge in Detroit's black population with the Great Migration added to the strain of housing scarcity. Black people were often denied bank loans to obtain housing, and interest rates and rents were unfairly inflated to prevent them from moving into white neighborhoods. Such discrimination also took place due to redlining by banks and federal housing policy, which limited the ability of blacks to improve their housing and encouraged white people to guard the racial divide that defined their neighborhoods. This marginalized the agency of black Detroiters—another important aspect in the history of postwar Detroit. As in other major American cities in the postwar era, construction of a federally subsidized, extensive highway and freeway system around Detroit, and pent-up demand for new housing, stimulated suburbanization; highways made commuting by car easier. However, this construction had negative implications for many urban residents. Highways were constructed through neighborhoods of poor and minority residents who had less political power to oppose them.
(The neighborhoods were mostly low-income or considered blighted, made up of older housing where investment had been lacking due to racial redlining, so the highways were presented as a kind of urban renewal.) They displaced residents with little consideration of the effects of breaking up functioning neighborhoods. In 1956, Detroit's last heavily used electric streetcar line, which traveled along the length of Woodward Avenue, was removed and replaced with gas-powered buses. It was the last line of what had once been a 534-mile network of electric streetcars. In 1941, at peak times, a streetcar ran on Woodward Avenue every 60 seconds. All of these changes in the area's transportation system favored low-density, auto-oriented development rather than high-density urban development. Industry also moved to the suburbs, seeking large plots of land for single-story factories. By the 21st century, the metro Detroit area had developed as one of the most sprawling job markets in the United States; combined with poor public transport, this resulted in many new jobs being beyond the reach of urban low-income workers. In 1950, the city held about one-third of the state's population, anchored by its industries and workers. Over the next sixty years, the city's population declined to less than 10 percent of the state's population. During the same period, the sprawling Detroit metropolitan area, which surrounds and includes the city, grew to contain more than half of Michigan's population. The shift of population and jobs eroded Detroit's tax base. In June 1963, Rev. Martin Luther King Jr. gave a major speech as part of a civil rights march in Detroit that foreshadowed his "I Have a Dream" speech in Washington, D.C., two months later. While the civil rights movement gained significant federal civil rights laws in 1964 and 1965, longstanding inequities resulted in confrontations between the police and inner-city black youth who wanted change. Longstanding tensions in Detroit culminated in the Twelfth Street riot in July 1967. Governor George W. Romney ordered the Michigan National Guard into Detroit, and President Johnson sent in U.S. Army troops. The result was 43 dead, 467 injured, over 7,200 arrests, and more than 2,000 buildings destroyed, mostly in black residential and business areas. Thousands of small businesses closed permanently or relocated to safer neighborhoods. The affected district lay in ruins for decades. It was, at the time, the costliest riot in the United States. On August 18, 1970, the NAACP filed suit against Michigan state officials, including Governor William Milliken, charging "de facto" public school segregation. The NAACP argued that although schools were not legally segregated, the city of Detroit and its surrounding counties had enacted policies to maintain racial segregation in public schools. The NAACP also suggested a direct relationship between unfair housing practices and educational segregation, as the composition of students in the schools followed segregated neighborhoods. In its ruling, the District Court held all levels of government accountable for the segregation. The Sixth Circuit Court affirmed some of the decision, holding that it was the state's responsibility to integrate across the segregated metropolitan area. The U.S. Supreme Court took up the case on February 27, 1974. The subsequent "Milliken v. Bradley" decision had nationwide influence.
In a narrow decision, the U.S. Supreme Court found that schools were a subject of local control, and that suburbs could not be forced to solve problems in the city's school district. "Milliken was perhaps the greatest missed opportunity of that period," said Myron Orfield, professor of law at the University of Minnesota. "Had that gone the other way, it would have opened the door to fixing nearly all of Detroit's current problems." John Mogk, a professor of law and an expert in urban planning at Wayne State University in Detroit, says, "Everybody thinks that it was the riots [in 1967] that caused the white families to leave. Some people were leaving at that time but, really, it was after Milliken that you saw mass flight to the suburbs. If the case had gone the other way, it is likely that Detroit would not have experienced the steep decline in its tax base that has occurred since then." In November 1973, the city elected Coleman Young as its first black mayor. After taking office, Young emphasized increasing racial diversity in the police department, which was predominantly white. Young also worked to improve Detroit's transportation system, but tension between Young and his suburban counterparts over regional matters was problematic throughout his mayoral term. In 1976, the federal government offered $600 million for building a regional rapid transit system under a single regional authority, but the inability of Detroit and its suburban neighbors to resolve conflicts over transit planning resulted in the region losing the majority of that funding. Following the failure to reach a regional agreement over the larger system, the city moved forward with construction of the elevated downtown circulator portion of the system, which became known as the Detroit People Mover. The gasoline crises of 1973 and 1979 also affected Detroit and the U.S. auto industry. Buyers chose smaller, more fuel-efficient cars made by foreign makers as the price of gas rose. Efforts to revive the city were stymied by the struggles of the auto industry, as its sales and market share declined. Automakers laid off thousands of employees and closed plants in the city, further eroding the tax base. To counteract this, the city used eminent domain to build two large new auto assembly plants in the city. As mayor, Young sought to revive the city by seeking to increase investment in its declining downtown. The Renaissance Center, a mixed-use office and retail complex, opened in 1977. This group of skyscrapers was an attempt to keep businesses downtown. Young also gave city support to other large developments to attract middle- and upper-class residents back to the city. Despite the Renaissance Center and other projects, the downtown area continued to lose businesses to the automobile-dependent suburbs. Major stores and hotels closed, and many large office buildings went vacant. Young was criticized for being too focused on downtown development and not doing enough to lower the city's high crime rate and improve city services to residents. Previously a major population center and site of worldwide automobile manufacturing, Detroit has suffered a long economic decline produced by numerous factors. Like many industrial American cities, Detroit reached its peak population in 1950, at 1.8 million people, before postwar suburbanization took effect.
Following suburbanization, industrial restructuring, and loss of jobs (as described above), by the 2010 census the city had less than 40 percent of that number, with just over 700,000 residents. The city has declined in population in each census since 1950. High unemployment was compounded by middle-class flight to the suburbs, and some residents leaving the state to find work. The result for the city was a higher proportion of poor in its population, a reduced tax base, depressed property values, abandoned buildings, abandoned neighborhoods, high crime rates, and a pronounced demographic imbalance. On August 16, 1987, Northwest Airlines Flight 255 crashed near Detroit, killing all but one of the 155 people on board, as well as two people on the ground. In 1993, Young retired as Detroit's longest-serving mayor, deciding not to seek a sixth term. That year the city elected Dennis Archer, a former Michigan Supreme Court justice. Archer prioritized downtown development and easing tensions with Detroit's suburban neighbors. A referendum to allow casino gambling in the city passed in 1996; several temporary casino facilities opened in 1999, and permanent downtown casinos with hotels opened in 2007–08. Campus Martius, a reconfiguration of downtown's main intersection as a new park, was opened in 2004. The park has been cited as one of the best public spaces in the United States. The city's riverfront on the Detroit River has been the focus of redevelopment, following successful examples of other older industrial cities. In 2001, the first portion of the International Riverfront was completed as a part of the city's 300th anniversary celebration. Miles of associated parks and landscaping have been completed in succeeding years. In 2011, the Port Authority Passenger Terminal opened, with the riverwalk connecting Hart Plaza to the Renaissance Center. Since 2006, $9 billion has been invested in downtown and surrounding neighborhoods, $5.2 billion of which came in 2013 and 2014. Construction activity, particularly rehabilitation of historic downtown buildings, has increased markedly. The number of vacant downtown buildings has dropped from nearly 50 to around 13. Among the most notable redevelopment projects are the Book Cadillac Hotel and Fort Shelby Hotel; the David Broderick Tower; and the David Whitney Building. Little Caesars Arena, a new home for the Detroit Red Wings and the Detroit Pistons, with attached residential, hotel, and retail use, opened on September 5, 2017. The plans for the project call for mixed-use residential development on the blocks surrounding the arena and the renovation of the vacant 14-story Eddystone Hotel. It will be a part of The District Detroit, a group of venues owned by Olympia Entertainment Inc., including Comerica Park and the Detroit Opera House, among others. Detroit's protracted decline has resulted in severe urban decay, with thousands of empty buildings around the city, referred to as greyfields. Some parts of Detroit are so sparsely populated that the city has difficulty providing municipal services. The city has demolished abandoned homes and buildings, planted grass and trees, and considered removing street lighting from large portions of the city in order to encourage the small population in certain areas to move to more populated areas. Roughly half of the owners of Detroit's 305,000 properties failed to pay their 2011 tax bills, resulting in about $246 million in taxes and fees going uncollected, nearly half of which was owed to Detroit.
The rest of the money would have been earmarked for Wayne County, Detroit Public Schools, and the library system. In September 2008, Mayor Kwame Kilpatrick (who had served for six years) resigned following felony convictions. In 2013, Kilpatrick was convicted on 24 federal felony counts, including mail fraud, wire fraud, and racketeering, and was sentenced to 28 years in federal prison. The former mayor's activities cost the city an estimated $20 million. In 2013, felony bribery charges were brought against seven building inspectors. In 2016, further corruption charges were brought against 12 principals, a former school superintendent, and a supply vendor over a $12 million kickback scheme. Law professor Peter Henning argues that Detroit's corruption is not unusual for a city its size, especially when compared with Chicago. The city's financial crisis resulted in Michigan taking over administrative control of its government. The state governor declared a financial emergency in March 2013, appointing Kevyn Orr as emergency manager. On July 18, 2013, Detroit became the largest U.S. city to file for bankruptcy. It was declared bankrupt by the U.S. Bankruptcy Court on December 3, 2013, in light of the city's $18.5 billion debt and its inability to fully repay its thousands of creditors. On November 7, 2014, the city's plan for exiting bankruptcy was approved. The following month, on December 11, the city officially exited bankruptcy. The plan allowed the city to eliminate $7 billion in debt and invest $1.7 billion into improved city services. One of the largest post-bankruptcy efforts to improve city services has been work to fix the city's broken street lighting system. At one time it was estimated that 40% of lights were not working, which resulted in public safety issues and abandonment of housing. The plan called for replacing outdated high-pressure sodium lights with 65,000 LED lights. Construction began in late 2014 and finished in December 2016; Detroit is the largest U.S. city with all-LED street lighting. In the 2010s, several initiatives were taken by Detroit's citizens and new residents to improve the cityscape by renovating and revitalizing neighborhoods. These include the Motor City Blight Busters and various urban gardening movements. The well-known symbol of the city's decades-long demise, the Michigan Central Station, was long vacant; it has been renovated with new windows, elevators, and facilities since 2015. Several other landmark buildings have been privately renovated and adapted as condominiums, hotels, offices, or for cultural uses. Detroit has been described as a city of renaissance, having reversed many of the trends of the prior decades. Detroit is the center of a three-county urban area (with a population of 3,734,090 according to the 2010 United States Census), a six-county metropolitan statistical area (population 4,296,250 as of the 2010 census), and a nine-county combined statistical area (population 5.3 million). According to the U.S. Census Bureau, the city has a total area of 142.87 square miles (370.03 km²), of which 138.75 square miles (359.36 km²) is land and 4.12 square miles (10.67 km²) is water. Detroit is the principal city in Metro Detroit and Southeast Michigan. It is situated in the Midwestern United States and the Great Lakes region. The Detroit River International Wildlife Refuge is the only international wildlife preserve in North America, and is uniquely located in the heart of a major metropolitan area.
The Refuge includes islands, coastal wetlands, marshes, shoals, and waterfront lands along the Detroit River and the western Lake Erie shoreline. The city slopes gently from the northwest to the southeast on a till plain composed largely of glacial and lake clay. The most notable topographical feature in the city is the Detroit Moraine, a broad clay ridge on which the older portions of Detroit and Windsor are located, rising above the river at its highest point. The highest elevation in the city is directly north of Gorham Playground on the northwest side, approximately three blocks south of 8 Mile Road. Detroit's lowest elevation is along the Detroit River. Belle Isle Park is a 982-acre (1.53 sq mi) island park in the Detroit River, between Detroit and Windsor, Ontario. It is connected to the mainland by the MacArthur Bridge in Detroit. Belle Isle Park contains such attractions as the James Scott Memorial Fountain, the Belle Isle Conservatory, the Detroit Yacht Club on an adjacent island, a half-mile (800 m) beach, a golf course, a nature center, monuments, and gardens. The city skyline may be viewed from the island. Three road systems cross the city: the original French template, with avenues radiating from the waterfront, and true north–south roads based on the Northwest Ordinance township system. The city is north of Windsor, Ontario. Detroit is the only major city along the Canada–U.S. border in which one travels south in order to cross into Canada. Detroit has four border crossings: the Ambassador Bridge and the Detroit–Windsor Tunnel provide motor vehicle thoroughfares, with the Michigan Central Railway Tunnel providing railroad access to and from Canada. The fourth border crossing is the Detroit–Windsor Truck Ferry, near the Windsor Salt Mine and Zug Island. Near Zug Island, the southwest part of the city was developed over a salt mine that is about 1,200 feet (370 m) below the surface. The Detroit salt mine run by the Detroit Salt Company has over 100 miles (160 km) of roads within. Detroit and the rest of southeastern Michigan have a hot-summer humid continental climate (Köppen: "Dfa") which is influenced by the Great Lakes like other places in the state; the city and close-in suburbs are part of USDA Hardiness zone 6b, while the more distant northern and western suburbs generally are included in zone 6a. Winters are cold, with moderate snowfall and temperatures not rising above freezing on an average of 44 days annually, while dropping to 0 °F (−18 °C) or below on an average of 4.4 days a year; summers are warm to hot, with temperatures exceeding 90 °F (32 °C) on 12 days. The warm season runs from May to September, with the monthly daily mean temperature lowest in January and highest in July. Official temperature extremes range from 105 °F (41 °C) on July 24, 1934, down to −21 °F (−29 °C) on January 21, 1984; the record low maximum was set on January 19, 1994, while, conversely, the record high minimum was set on August 1, 2006, the most recent of five occurrences. A decade or two may pass between readings of 100 °F (38 °C) or higher, which last occurred on July 17, 2012. The average window for freezing temperatures is October 20 through April 22, allowing a growing season of 180 days. Precipitation is moderate and somewhat evenly distributed throughout the year, although the warmer months such as May and June average more; annual totals have historically ranged from a record low in 1963 to a record high in 2011.
Snowfall, which typically falls in measurable amounts from November 15 through April 4 (occasionally in October and very rarely in May), has historically ranged from a record seasonal low in 1881–82 to a record high in 2013–14. A thick snowpack is not often seen, with an average of only 27.5 days with significant snow cover. Thunderstorms are frequent in the Detroit area. These usually occur during spring and summer. Seen in panorama, Detroit's waterfront shows a variety of architectural styles. The postmodern neo-Gothic spires of One Detroit Center (1993) were designed to refer to the city's Art Deco skyscrapers. Together with the Renaissance Center, these buildings form a distinctive and recognizable skyline. Examples of the Art Deco style include the Guardian Building and Penobscot Building downtown, as well as the Fisher Building and Cadillac Place in the New Center area near Wayne State University. Among the city's prominent structures are the United States' largest Fox Theatre, the Detroit Opera House, and the Detroit Institute of Arts, all built in the early 20th century. While the Downtown and New Center areas contain high-rise buildings, the majority of the surrounding city consists of low-rise structures and single-family homes. Outside of the city's core, residential high-rises are found in upper-class neighborhoods such as the East Riverfront, extending toward Grosse Pointe, and the Palmer Park neighborhood just west of Woodward. The University Commons-Palmer Park district in northwest Detroit, near the University of Detroit Mercy and Marygrove College, anchors historic neighborhoods including Palmer Woods, Sherwood Forest, and the University District. Forty-two significant structures or sites are listed on the National Register of Historic Places. Neighborhoods constructed prior to World War II feature the architecture of the times, with wood-frame and brick houses in the working-class neighborhoods, larger brick homes in middle-class neighborhoods, and ornate mansions in upper-class neighborhoods such as Brush Park, Woodbridge, Indian Village, Palmer Woods, Boston-Edison, and others. Some of the oldest neighborhoods are along the major Woodward and East Jefferson corridors, which formed the spines of the city. Some newer residential construction may also be found along the Woodward corridor and in the far west and northeast. The oldest extant neighborhoods include West Canfield and Brush Park, where there have been multimillion-dollar restorations of existing homes and construction of new homes and condominiums. The city has one of the United States' largest surviving collections of late 19th- and early 20th-century buildings. Architecturally significant churches and cathedrals in the city include St. Joseph's, Old St. Mary's, the Sweetest Heart of Mary, and the Cathedral of the Most Blessed Sacrament. The city has substantial activity in urban design, historic preservation, and architecture. A number of downtown redevelopment projects—of which Campus Martius Park is one of the most notable—have revitalized parts of the city. Grand Circus Park and its historic district are near the city's theater district, as are Ford Field, home of the Detroit Lions, and Comerica Park, home of the Detroit Tigers. Other projects include the demolition of the Ford Auditorium off Jefferson Avenue. The Detroit International Riverfront includes a partially completed three-and-one-half-mile riverfront promenade with a combination of parks, residential buildings, and commercial areas.
It extends from Hart Plaza to the MacArthur Bridge, which connects to Belle Isle Park, the largest island park in a U.S. city. The riverfront includes Tri-Centennial State Park and Harbor, Michigan's first urban state park. The second phase is an extension from Hart Plaza to the Ambassador Bridge, creating a continuous parkway from bridge to bridge. Civic planners envision that the pedestrian parks will stimulate residential redevelopment of riverfront properties condemned under eminent domain. Other major parks include River Rouge (on the southwest side), the largest park in Detroit; Palmer (north of Highland Park); and Chene Park (on the east riverfront downtown). Detroit has a variety of neighborhood types. The revitalized Downtown, Midtown, and New Center areas feature many historic buildings and are high-density, while further out, particularly in the northeast and on the fringes, high vacancy levels are problematic, for which a number of solutions have been proposed. In 2007, Downtown Detroit was recognized by CNN Money Magazine editors as the best city neighborhood in which to retire among the United States' largest metro areas. Lafayette Park is a revitalized neighborhood on the city's east side, part of the Ludwig Mies van der Rohe residential district. The development was originally called Gratiot Park. Planned by Mies van der Rohe, Ludwig Hilberseimer, and Alfred Caldwell, it includes a landscaped park with no through traffic, in which these and other low-rise apartment buildings are situated. Immigrants have contributed to the city's neighborhood revitalization, especially in southwest Detroit. Southwest Detroit has experienced a thriving economy in recent years, as evidenced by new housing, increased business openings, and the recently opened Mexicantown International Welcome Center. The city has numerous neighborhoods consisting of vacant properties, resulting in low inhabited density in those areas, stretching city services and infrastructure. These neighborhoods are concentrated in the northeast and on the city's fringes. A 2009 parcel survey found about a quarter of residential lots in the city to be undeveloped or vacant, and about 10% of the city's housing to be unoccupied. The survey also reported that most (86%) of the city's homes are in good condition, with a minority (9%) in fair condition needing only minor repairs. To deal with vacancy issues, the city has begun demolishing the derelict houses, razing 3,000 of the total 10,000 in 2010, but the resulting low density creates a strain on the city's infrastructure. To remedy this, a number of solutions have been proposed, including resident relocation from more sparsely populated neighborhoods and converting unused space to urban agricultural use, including Hantz Woodlands, though the city expects to be in the planning stages for up to another two years. Public funding and private investment have also been made with promises to rehabilitate neighborhoods. In April 2008, the city announced a $300-million stimulus plan to create jobs and revitalize neighborhoods, financed by city bonds and paid for by earmarking about 15% of the wagering tax. The city's working plans for neighborhood revitalization include 7-Mile/Livernois, Brightmoor, East English Village, Grand River/Greenfield, North End, and Osborn. Private organizations have pledged substantial funding to the efforts. Additionally, the city has cleared a section of land for large-scale neighborhood construction, which it is calling the "Far Eastside Plan".
In 2011, Mayor Dave Bing announced a plan to categorize neighborhoods by their needs and prioritize the most needed services for those neighborhoods. In the 2010 United States Census, the city had 713,777 residents, ranking it the 18th most populous city in the United States. Of the large shrinking cities in the United States, Detroit has had the most dramatic decline in population over the past 60 years (down 1,135,791) and the second-largest percentage decline (down 61.4%). While the drop in Detroit's population has been ongoing since 1950, the most dramatic period was the significant 25% decline between the 2000 and 2010 censuses. The population collapse has resulted in large numbers of abandoned homes and commercial buildings, and areas of the city hit hard by urban decay. Detroit's 713,777 residents represent 269,445 households and 162,924 families residing in the city. The population density was 5,144.3 people per square mile (1,895/km²). There were 349,170 housing units at an average density of 2,516.5 units per square mile (971.6/km²). Housing density has declined. The city has demolished thousands of Detroit's abandoned houses, replanting some areas and allowing others to revert to urban prairie. Of the 269,445 households, 34.4% had children under the age of 18 living with them, 21.5% were married couples living together, 31.4% had a female householder with no husband present, 39.5% were non-families, 34.0% were made up of individuals, and 3.9% had someone living alone who was 65 years of age or older. The average household size was 2.59, and the average family size was 3.36. There was a wide distribution of age in the city, with 31.1% under the age of 18, 9.7% from 18 to 24, 29.5% from 25 to 44, 19.3% from 45 to 64, and 10.4% 65 years of age or older. The median age was 31 years. For every 100 females, there were 89.1 males. For every 100 females age 18 and over, there were 83.5 males. According to a 2014 study, 67% of the population of the city identified themselves as Christians, with 49% professing attendance at Protestant churches and 16% professing Roman Catholic beliefs, while 24% claimed no religious affiliation. Other religions collectively make up about 8% of the population. The loss of industrial and working-class jobs in the city has resulted in high rates of poverty and associated problems. From 2000 to 2009, the city's estimated median household income fell from $29,526 to $26,098. The mean income of Detroit is below the overall U.S. average by several thousand dollars. Of every three Detroit residents, one lives in poverty. Luke Bergmann, author of "Getting Ghost: Two Young Lives and the Struggle for the Soul of an American City", said in 2010, "Detroit is now one of the poorest big cities in the country." In the 2010 American Community Survey, median household income in the city was $25,787, and the median income for a family was $31,011. The per capita income for the city was $14,118. 32.3% of families had income at or below the federally defined poverty level. Out of the total population, 53.6% of those under the age of 18 and 19.8% of those 65 and older had income at or below the federally defined poverty line. Oakland County in Metro Detroit, once rated among the wealthiest U.S. counties per household, is no longer shown in the top 25 listing of "Forbes" magazine.
But internal county statistical methods—based on measuring per capita income for counties with more than one million residents—show Oakland is still within the top 12, slipping from the 4th-most affluent such county in the U.S. in 2004 to 11th-most affluent in 2009. Detroit dominates Wayne County, which has an average household income of about $38,000, compared to Oakland County's $62,000. Much of Detroit's history is racially charged and rooted in the effects of structural and individualized racism. Beginning with the rise of the automobile industry, the city's population increased more than sixfold during the first half of the 20th century as an influx of European, Middle Eastern (Lebanese, Assyrian/Chaldean), and Southern migrants brought their families to the city. With this economic boom following World War I, the African American population grew from a mere 6,000 in 1910 to more than 120,000 by 1930. This influx of thousands of African Americans in the 20th century became known as the Great Migration. Many of the original white families in Detroit saw this increase in diversity as a threat to their way of life and made it their mission to isolate black people from their neighborhoods, workplaces, and public institutions. Perhaps one of the most overt examples of neighborhood discrimination occurred in 1925, when African American physician Ossian Sweet found his home surrounded by an angry mob of hostile white neighbors violently protesting his move into a traditionally white neighborhood. Sweet and ten of his family members and friends were put on trial for murder after one of the mob members throwing rocks at the newly purchased house was shot and killed by someone firing from a second-floor window. Many middle-class black families experienced the same kind of hostility as they sought the security of homeownership and the potential for upward mobility. Detroit has a relatively large Mexican-American population. In the early 20th century, thousands of Mexicans came to Detroit to work in agricultural, automotive, and steel jobs. During the Mexican Repatriation of the 1930s, many Mexicans in Detroit were willingly repatriated or forced to repatriate. By the 1940s, much of the Mexican community began to settle what is now Mexicantown. After World War II, many people from Appalachia also settled in Detroit. Appalachians formed communities, and their children acquired southern accents. Many Lithuanians also settled in Detroit during the World War II era, especially on the city's Southwest side in the West Vernor area, where the renovated Lithuanian Hall reopened in 2006. By 1940, 80% of Detroit deeds contained restrictive covenants prohibiting African Americans from buying houses they could afford. These discriminatory tactics were successful, as a majority of black people in Detroit resorted to living in all-black neighborhoods such as Black Bottom and Paradise Valley. At this time, white people still made up about 90.4% of the city's population. From the 1940s to the 1970s, a second wave of black people moved to Detroit in search of employment and with the desire to escape the Jim Crow laws enforcing segregation in the South. However, they soon found themselves once again excluded from many opportunities in Detroit—through violence and policies perpetuating economic discrimination (e.g., redlining). White residents attacked black homes: breaking windows, starting fires, and detonating bombs.
An especially grim result of this increasing competition between black and white residents was the race riot of 1943, which had violent ramifications. This era of intolerance made it almost impossible for African Americans to be successful: without access to proper housing or the economic stability to maintain their homes, the conditions of many neighborhoods began to decline. In 1948, the landmark Supreme Court case of Shelley v. Kraemer outlawed restrictive covenants; while racism in housing did not disappear, the ruling allowed affluent black families to begin moving to traditionally white neighborhoods. Many white families with the financial ability moved to the suburbs of Detroit, taking their jobs and tax dollars with them. By 1950, much of the city's white population had moved to the suburbs, as macrostructural processes such as "white flight" and "suburbanization" led to a complete population shift. The Detroit Riot of 1967 is considered to be one of the greatest racial turning points in the history of the city. The ramifications of the uprising were widespread, as there were many allegations of white police brutality towards African Americans, and over $36 million of insured property was lost. Discrimination and deindustrialization, in tandem with racial tensions that had been intensifying in the previous years, boiled over and led to an event considered to be the most damaging in Detroit's history. The population of Latinos significantly increased in the 1990s due to immigration from Jalisco. By 2010, Detroit had 48,679 Hispanics, including 36,452 Mexicans: a 70% increase from 1990. While African Americans previously comprised only 13% of Michigan's population, by 2010 they made up nearly 82% of Detroit's population. The next largest population groups were white people, at 10%, and Hispanics, at 6%. In 2001, 103,000 Jews, or about 1.9% of the population, were living in the Detroit area, in both Detroit and Ann Arbor. According to the 2010 census, segregation in Detroit has decreased in absolute and relative terms, and in the first decade of the 21st century, about two-thirds of the total black population in the metropolitan area resided within the city limits of Detroit. The number of integrated neighborhoods increased from 100 in 2000 to 204 in 2010. Detroit also moved down the ranking, from the most segregated city to the fourth-most segregated. A 2011 op-ed in "The New York Times" attributed the decreased segregation rating to the overall exodus from the city, cautioning that these areas may soon become more segregated. This pattern had already happened in the 1970s, when apparent integration was a precursor to white flight and resegregation. Over a 60-year period, white flight occurred in the city. According to an estimate of the Michigan Metropolitan Information Center, from 2008 to 2009 the percentage of non-Hispanic White residents increased from 8.4% to 13.3%. As the city has become more gentrified, some empty nesters and many young white people have moved into the city, increasing housing values and once again forcing African Americans to move. Gentrification in Detroit has become a controversial issue: reinvestment may lead to economic growth and an increase in population, but it has already forced many black families to relocate to the suburbs. Despite revitalization efforts, Detroit remains one of the most racially segregated cities in the United States.
Racial segregation, which correlates with class segregation, may be associated with overall worse health for some populations. As of 2002, of all the municipalities in the Wayne County-Oakland County-Macomb County area, Detroit had the second-largest Asian population. As of that year, Detroit's percentage of Asians was 1%, far lower than Troy's 13.3%. By 2000, Troy had the largest Asian American population in the tri-county area, surpassing Detroit. There are four areas in Detroit with significant Asian and Asian American populations. Northeast Detroit has a population of Hmong with a smaller group of Lao people. A portion of Detroit next to eastern Hamtramck includes Bangladeshi Americans, Indian Americans, and Pakistani Americans; nearly all of the Bangladeshi population in Detroit lives in that area. Many of those residents own small businesses or work in blue-collar jobs, and the population is mostly Muslim. The area north of downtown Detroit, including the region around Henry Ford Hospital, the Detroit Medical Center, and Wayne State University, has transient residents of Asian national origin who are university students or hospital workers. Few of them have permanent residency after schooling ends. They are mostly Chinese and Indian, but the population also includes Filipinos, Koreans, and Pakistanis. In Southwest Detroit and western Detroit there are smaller, scattered Asian communities, including an area on the west side adjacent to Dearborn and Redford Township that has a mostly Indian Asian population, and a community of Vietnamese and Laotians in Southwest Detroit. The city has one of the largest concentrations of Hmong Americans in the U.S. In 2006, the city had about 4,000 Hmong and other Asian immigrant families. Most Hmong live east of Coleman Young Airport near Osborn High School. Hmong immigrant families generally have lower incomes than those of suburban Asian families. Several major corporations are based in the city, including three Fortune 500 companies. The most heavily represented sectors are manufacturing (particularly automotive), finance, technology, and health care. The most significant companies based in Detroit include General Motors, Quicken Loans, Ally Financial, Compuware, Shinola, American Axle, Little Caesars, DTE Energy, Lowe Campbell Ewald, Blue Cross Blue Shield of Michigan, and Rossetti Architects. About 80,500 people work in downtown Detroit, comprising one-fifth of the city's employment base. Aside from the numerous Detroit-based companies listed above, downtown contains large offices for Comerica, Chrysler, Fifth Third Bank, HP Enterprise, Deloitte, PricewaterhouseCoopers, KPMG, and Ernst & Young. Ford Motor Company is in the adjacent city of Dearborn. Thousands more employees work in Midtown, north of the central business district. Midtown's anchors are the city's largest single employer, the Detroit Medical Center; Wayne State University; and the Henry Ford Health System in New Center. Midtown is also home to watchmaker Shinola and an array of small and startup companies. New Center is home to TechTown, a research and business incubator hub that is part of the WSU system. Like downtown and Corktown, Midtown also has a fast-growing retailing and restaurant scene. A number of the city's downtown employers are relatively new, as there has been a marked trend of companies moving from satellite suburbs around Metropolitan Detroit into the downtown core. Compuware completed its world headquarters downtown in 2003.
OnStar, Blue Cross Blue Shield, and HP Enterprise Services are at the Renaissance Center. PricewaterhouseCoopers Plaza offices are adjacent to Ford Field, and Ernst & Young completed its office building at One Kennedy Square in 2006. Perhaps most prominently, in 2010, Quicken Loans, one of the largest mortgage lenders, relocated its world headquarters and 4,000 employees to downtown Detroit, consolidating its suburban offices. In July 2012, the U.S. Patent and Trademark Office opened its Elijah J. McCoy Satellite Office in the Rivertown/Warehouse District as its first location outside Washington, D.C.'s metropolitan area. In April 2014, the United States Department of Labor reported the city's unemployment rate at 14.5%. The city of Detroit and other public-private partnerships have attempted to catalyze the region's growth by facilitating the building and historical rehabilitation of residential high-rises in the downtown, creating a zone that offers many business tax incentives, and creating recreational spaces such as the Detroit RiverWalk, Campus Martius Park, Dequindre Cut Greenway, and Green Alleys in Midtown. The city itself has cleared sections of land while retaining a number of historically significant vacant buildings in order to spur redevelopment; even though it has struggled with finances, the city issued bonds in 2008 to provide funding for ongoing work to demolish blighted properties. Two years earlier, downtown reported $1.3 billion in restorations and new developments, which increased the number of construction jobs in the city. In the decade prior to 2006, downtown gained more than $15 billion in new investment from private and public sectors. Despite the city's recent financial issues, many developers remain unfazed by Detroit's problems. Midtown is one of the most successful areas within Detroit, with a residential occupancy rate of 96%. Numerous developments have been recently completed or are in various stages of construction. These include the $82 million reconstruction of downtown's David Whitney Building (now an Aloft Hotel and luxury residences), the Woodward Garden Block Development in Midtown, the residential conversion of the David Broderick Tower in downtown, the rehabilitation of the Book Cadillac Hotel (now a Westin and luxury condos) and Fort Shelby Hotel (now a Doubletree), also in downtown, and various smaller projects. Downtown's population of young professionals is growing and retail is expanding. A 2007 study found that downtown's new residents are predominantly young professionals (57% are ages 25 to 34, 45% have bachelor's degrees, and 34% have a master's or professional degree), a trend that has accelerated over the last decade. John Varvatos is set to open a downtown store in 2015, and Restoration Hardware is rumored to be opening a store nearby. On July 25, 2013, Meijer, a midwestern retail chain, opened its first supercenter store in Detroit; this was a $20 million, 190,000-square-foot store in the northern portion of the city, and it is also the centerpiece of a new $72 million shopping center named Gateway Marketplace. On June 11, 2015, Meijer opened its second supercenter store in the city. On May 21, 2014, JPMorgan Chase announced it was injecting $100 million over five years into Detroit's economy, providing development funding for a variety of projects that would increase employment. It is the largest commitment made to any one city by the nation's biggest bank.
Of the $100 million, $50 million will go toward development projects, $25 million will go toward city blight removal, $12.5 million will go toward job training, $7 million will go toward small businesses in the city, and $5.5 million will go toward the M-1 light rail project (QLine). On May 19, 2015, JPMorgan Chase announced it had invested $32 million in two redevelopment projects in the city's Capitol Park district, the Capitol Park Lofts (the former Capitol Park Building) and the Detroit Savings Bank building at 1212 Griswold. Those investments are separate from Chase's five-year, $100-million commitment. On May 10, 2017, J.P. Morgan Chase & Co. announced a $50 million increase in the $100 million investment the firm committed to economic development and neighborhood stabilization in Detroit by 2019. Half of the $150 million will be grants, and the other half will go toward a variety of loan funds for small-business growth, mixed-use real estate development, and residential housing projects. On June 26, 2019, JPMorgan Chase announced plans to invest $50 million more in affordable housing, job training, and entrepreneurship by the end of 2022, growing its investment to $200 million. In the central portions of Detroit, the population of young professionals, artists, and other transplants is growing and retail is expanding. This dynamic is luring additional new residents, and former residents returning from other cities, to the city's Downtown along with the revitalized Midtown and New Center areas. A desire to be closer to the urban scene has also attracted some young professionals to reside in inner-ring suburbs such as Ferndale and Royal Oak, Michigan. Detroit's proximity to Windsor, Ontario, provides for views and nightlife, along with Ontario's minimum drinking age of 19. A 2011 study by Walk Score recognized Detroit for its above-average walkability among large U.S. cities. About two-thirds of suburban residents occasionally dine and attend cultural events or take in professional games in the city of Detroit. Known as the world's automotive center, "Detroit" is a metonym for that industry. Detroit's auto industry, some of which was converted to wartime defense production, was an important element of the American "Arsenal of Democracy" supporting the Allied powers during World War II. The city is an important source of popular music legacies, celebrated by its two familiar nicknames, the "Motor City" and "Motown". Other nicknames arose in the 20th century, including "City of Champions," beginning in the 1930s for its successes in individual and team sport; "The D"; "Hockeytown" (a trademark owned by the city's NHL club, the Red Wings); "Rock City" (after the Kiss song "Detroit Rock City"); and "The 313" (its telephone area code). Live music has been a prominent feature of Detroit's nightlife since the late 1940s, bringing the city recognition under the nickname "Motown". The metropolitan area has many nationally prominent live music venues. Live Nation hosts concerts throughout the Detroit area. Large concerts are held at DTE Energy Music Theatre and The Palace of Auburn Hills. The city's theatre venue circuit is the United States' second-largest and hosts Broadway performances. The city of Detroit has a rich musical heritage and has contributed to a number of different genres over the decades leading into the new millennium.
Important music events in the city include the Detroit International Jazz Festival, the Detroit Electronic Music Festival, the Motor City Music Conference (MC2), the Urban Organic Music Conference, the Concert of Colors, and the hip-hop Summer Jamz festival. In the 1940s, Detroit blues artist John Lee Hooker became a long-term resident in the city's southwest Delray neighborhood. Hooker, among other important blues musicians, migrated from his home in Mississippi, bringing the Delta blues to northern cities like Detroit. Hooker recorded for Fortune Records, the biggest pre-Motown blues/soul label. During the 1950s, the city became a center for jazz, with stars performing in the Black Bottom neighborhood. Prominent emerging jazz musicians of the 1960s included trumpeter Donald Byrd, who attended Cass Tech and performed with Art Blakey and the Jazz Messengers early in his career, and saxophonist Pepper Adams, who enjoyed a solo career and accompanied Byrd on several albums. The Graystone International Jazz Museum documents jazz in Detroit. Other prominent Motor City R&B stars in the 1950s and early 1960s were Nolan Strong, Andre Williams, and Nathaniel Mayer, who all scored local and national hits on the Fortune Records label. According to Smokey Robinson, Strong was a primary influence on his voice as a teenager. The Fortune label, a family-operated label on Third Avenue in Detroit, was owned by the husband-and-wife team of Jack Brown and Devora Brown. Fortune, which also released country, gospel, and rockabilly LPs and 45s, laid the groundwork for Motown, which became Detroit's most legendary record label. Berry Gordy, Jr. founded Motown Records, which rose to prominence during the 1960s and early 1970s with acts such as Stevie Wonder, The Temptations, The Four Tops, Smokey Robinson & The Miracles, Diana Ross & The Supremes, the Jackson 5, Martha and the Vandellas, The Spinners, Gladys Knight & the Pips, The Marvelettes, The Elgins, The Monitors, The Velvelettes, and Marvin Gaye. Artists were backed by in-house vocalists The Andantes and by The Funk Brothers, the Motown house band featured in Paul Justman's 2002 documentary film "Standing in the Shadows of Motown", based on Allan Slutsky's book of the same name. The Motown Sound played an important role in the crossover appeal with popular music, since Motown was the first African American-owned record label to primarily feature African-American artists. Gordy moved Motown to Los Angeles in 1972 to pursue film production, but the company has since returned to Detroit. Aretha Franklin, another Detroit R&B star, carried the Motown Sound; however, she did not record with Gordy's Motown label. Local artists and bands rose to prominence in the 1960s and '70s, including the MC5, The Stooges, Bob Seger, the Amboy Dukes featuring Ted Nugent, Mitch Ryder and The Detroit Wheels, Rare Earth, Alice Cooper, and Suzi Quatro. The group Kiss emphasized the city's connection with rock in the song "Detroit Rock City" and the 1999 film of the same name. In the 1980s, Detroit was an important center of the hardcore punk rock underground, with many nationally known bands coming out of the city and its suburbs, such as The Necros, The Meatmen, and Negative Approach. In the 1990s and the new millennium, the city has produced a number of influential hip hop artists, including Eminem, the hip-hop artist with the highest cumulative sales; hip-hop producer J Dilla; rapper and producer Esham; and the hip hop duo Insane Clown Posse.
The city is also home to rappers Big Sean and Danny Brown. The band Sponge toured and produced music with artists such as Kid Rock and Uncle Kracker. The city also has an active garage rock scene that has generated national attention with acts such as The White Stripes, The Von Bondies, The Detroit Cobras, The Dirtbombs, Electric Six, and The Hard Lessons. Detroit is cited as the birthplace of techno music in the early 1980s. The city also lends its name to an early and pioneering genre of electronic dance music, "Detroit techno". Featuring science fiction imagery and robotic themes, its futuristic style was greatly influenced by the geography of Detroit's urban decline and its industrial past. Prominent Detroit techno artists include Juan Atkins, Derrick May, Kevin Saunderson, and Jeff Mills. The Detroit Electronic Music Festival, now known as "Movement", occurs annually in late May on Memorial Day weekend and takes place in Hart Plaza. In the early years (2000–2002), this was a landmark event, boasting over a million estimated attendees annually, coming from all over the world to celebrate techno music in the city of its birth. Major theaters in Detroit include the Fox Theatre (5,174 seats), Music Hall (1,770 seats), the Gem Theatre (451 seats), Masonic Temple Theatre (4,404 seats), the Detroit Opera House (2,765 seats), the Fisher Theatre (2,089 seats), The Fillmore Detroit (2,200 seats), Saint Andrew's Hall, the Majestic Theater, and Orchestra Hall (2,286 seats), which hosts the renowned Detroit Symphony Orchestra. The Nederlander Organization, the largest controller of Broadway productions in New York City, originated with the purchase of the Detroit Opera House in 1922 by the Nederlander family. Motown Motion Picture Studios, based at the Pontiac Centerpoint Business Campus, produces movies in Detroit and the surrounding area for a film industry expected to employ over 4,000 people in the metro area. Because of its unique culture, distinctive architecture, and revitalization and urban renewal efforts in the 21st century, Detroit has enjoyed increased prominence as a tourist destination in recent years. "The New York Times" listed Detroit as the 9th-best destination in its list of "52 Places to Go in 2017", while travel guide publisher "Lonely Planet" named Detroit the second-best city in the world to visit in 2018. Many of the area's prominent museums are in the historic cultural center neighborhood around Wayne State University and the College for Creative Studies. These museums include the Detroit Institute of Arts, the Detroit Historical Museum, the Charles H. Wright Museum of African American History, the Detroit Science Center, and the main branch of the Detroit Public Library. Other cultural highlights include the Motown Historical Museum, the Ford Piquette Avenue Plant museum (birthplace of the Ford Model T and the world's oldest car factory building open to the public), the Pewabic Pottery studio and school, the Tuskegee Airmen Museum, Fort Wayne, the Dossin Great Lakes Museum, the Museum of Contemporary Art Detroit (MOCAD), the Contemporary Art Institute of Detroit (CAID), and the Belle Isle Conservatory. In 2010, the G.R. N'Namdi Gallery opened in a complex in Midtown. Important history of America and the Detroit area is exhibited at The Henry Ford in Dearborn, the United States' largest indoor-outdoor museum complex. The Detroit Historical Society provides information about tours of area churches, skyscrapers, and mansions.
Inside Detroit, meanwhile, hosts tours, educational programming, and a downtown welcome center. Other sites of interest are the Detroit Zoo in Royal Oak, the Cranbrook Art Museum in Bloomfield Hills, the Anna Scripps Whitcomb Conservatory on Belle Isle, and the Walter P. Chrysler Museum in Auburn Hills. The city's Greektown and three downtown casino resort hotels serve as part of an entertainment hub. The Eastern Market farmer's distribution center is the largest open-air flowerbed market in the United States and has more than 150 food and specialty businesses. On Saturdays, about 45,000 people shop at the city's historic Eastern Market. Midtown and the New Center area are centered on Wayne State University and Henry Ford Hospital. Midtown has about 50,000 residents and attracts millions of visitors each year to its museums and cultural centers; for example, the Detroit Festival of the Arts in Midtown draws about 350,000 people. Annual summer events include the Electronic Music Festival, the International Jazz Festival, the Woodward Dream Cruise, the African World Festival, the country music Hoedown, Noel Night, and Dally in the Alley. Within downtown, Campus Martius Park hosts large events, including the annual Motown Winter Blast. As the world's traditional automotive center, the city hosts the North American International Auto Show. Held since 1924, America's Thanksgiving Parade is one of the nation's largest. River Days, a five-day summer festival on the International Riverfront, leads up to the Windsor–Detroit International Freedom Festival fireworks, which draw crowds ranging from hundreds of thousands to over three million people. An important civic sculpture in Detroit is "The Spirit of Detroit" by Marshall Fredericks at the Coleman Young Municipal Center. The image is often used as a symbol of Detroit, and the statue itself is occasionally dressed in sports jerseys to celebrate when a Detroit team is doing well. A memorial to Joe Louis at the intersection of Jefferson and Woodward Avenues was dedicated on October 16, 1986. The sculpture, commissioned by "Sports Illustrated" and executed by Robert Graham, is a long arm with a fisted hand suspended by a pyramidal framework. Artist Tyree Guyton created the controversial street art exhibit known as the Heidelberg Project in 1986, using found objects, including cars, clothing, and shoes, found in the neighborhood near and on Heidelberg Street on the near East Side of Detroit. Guyton continues to work with neighborhood residents and tourists, constantly evolving the neighborhood-wide art installation. Detroit is one of 13 U.S. metropolitan areas that are home to professional teams representing the four major sports in North America. Since 2017, all of these teams play within the city limits of Detroit itself, a distinction shared with only three other U.S. cities. Detroit is the only U.S. city to have its four major sports teams play within its downtown district. There are three active major sports venues in the city: Comerica Park (home of the Major League Baseball team Detroit Tigers), Ford Field (home of the NFL's Detroit Lions), and Little Caesars Arena (home of the NHL's Detroit Red Wings and the NBA's Detroit Pistons). A 1996 marketing campaign promoted the nickname "Hockeytown". The Detroit Tigers have won four World Series titles. The Detroit Red Wings have won 11 Stanley Cups (the most by an American NHL franchise). The Detroit Lions have won four NFL titles. The Detroit Pistons have won three NBA titles.
With the Pistons' first of three NBA titles in 1989, the city of Detroit has won titles in all four of the major professional sports leagues. Two new downtown stadiums for the Detroit Tigers and Detroit Lions opened in 2000 and 2002, respectively, returning the Lions to the city proper. In college sports, Detroit's central location within the Mid-American Conference has made it a frequent site for the league's championship events. While the MAC Basketball Tournament moved permanently to Cleveland starting in 2000, the MAC Football Championship Game has been played at Ford Field in Detroit since 2004 and annually attracts 25,000 to 30,000 fans. The University of Detroit Mercy has an NCAA Division I program, and Wayne State University has both NCAA Division I and II programs. The NCAA football Quick Lane Bowl is held at Ford Field each December. The local soccer team is called the Detroit City Football Club and was founded in 2012. The team plays in the National Premier Soccer League, and its nickname is "Le Rouge". The city hosted the 2005 MLB All-Star Game, Super Bowl XL in 2006, the 2006 and 2012 World Series, WrestleMania 23 in 2007, and the NCAA Final Four in April 2009. The city hosted the Detroit Indy Grand Prix on Belle Isle Park from 1989 to 2001, 2007 to 2008, and 2012 and beyond. In 2007, open-wheel racing returned to Belle Isle with both Indy Racing League and American Le Mans Series racing. Detroit is one of eight American cities to have won titles in all four major leagues (MLB, NFL, NHL and NBA), though of the eight it is the only one to have not won a Super Bowl title (all of the Lions' titles came prior to the start of the Super Bowl era). In the years following the mid-1930s, Detroit was referred to as the "City of Champions" after the Tigers, Lions, and Red Wings captured the three major professional sports championships in existence at the time within a seven-month period (the Tigers won the World Series in October 1935; the Lions won the NFL championship in December 1935; the Red Wings won the Stanley Cup in April 1936). Detroit sprinter Eddie "The Midnight Express" Tolan won two gold medals, in the 100- and 200-meter races, at the 1932 Summer Olympics. Joe Louis won the heavyweight championship of the world in 1937. Detroit has made the most bids to host the Summer Olympics without ever being awarded the games: seven unsuccessful bids for the 1944, 1952, 1956, 1960, 1964, 1968 and 1972 games. The city is governed pursuant to the "Home Rule Charter of the City of Detroit". The government of Detroit is run by a mayor, the nine-member Detroit City Council, the eleven-member Board of Police Commissioners, and a clerk. All of these officers are elected on a nonpartisan ballot, with the exception of four of the police commissioners, who are appointed by the mayor. Detroit has a "strong mayoral" system, with the mayor approving departmental appointments. The council approves budgets, but the mayor is not obligated to adhere to any earmarking. City ordinances and substantially large contracts must be approved by the council. The "Detroit City Code" is the codification of Detroit's local ordinances. The city clerk supervises elections and is formally charged with the maintenance of municipal records. Municipal elections for mayor, city council, and city clerk are held at four-year intervals, in the year after presidential elections.
Following a November 2009 referendum, seven council members have been elected from districts since 2013, while two continue to be elected at-large. Detroit's courts are state-administered, and elections are nonpartisan. The Probate Court for Wayne County is in the Coleman A. Young Municipal Center in downtown Detroit. The Circuit Court is across Gratiot Avenue in the Frank Murphy Hall of Justice, in downtown Detroit. The city is home to the Thirty-Sixth District Court, as well as the First District of the Michigan Court of Appeals and the United States District Court for the Eastern District of Michigan. The city provides law enforcement through the Detroit Police Department and emergency services through the Detroit Fire Department. Detroit has struggled with high crime for decades. The number of homicides peaked in 1974 at 714 and again in 1991 with 615. The city's annual homicide count has fluctuated over the years, averaging over 400 murders during the period when the population exceeded 1,000,000 residents. The crime rate, however, has been above the national average since the 1970s. Crime has since decreased and, in 2014, the murder rate was 43.4 per 100,000, lower than in St. Louis. About half of all murders in Michigan in 2015 occurred in Detroit. Although the rate of violent crime dropped 11% in 2008, violent crime in Detroit did not decline as much as the national average from 2007 to 2011. The violent crime rate is one of the highest in the United States. Neighborhoodscout.com reported a crime rate of 62.18 per 1,000 residents for property crimes and 16.73 per 1,000 for violent crimes (compared to national figures of 32 per 1,000 for property crimes and 5 per 1,000 for violent crime in 2008). Annual statistics released by the Detroit Police Department for 2016 indicate that while the city's overall crime rate declined that year, the murder rate rose from 2015: there were 302 homicides in Detroit in 2016, a 2.37% increase in the number of murder victims from the preceding year (see the illustrative calculation below). The city's downtown typically has lower crime than national and state averages. According to a 2007 analysis, Detroit officials noted that about 65 to 70 percent of homicides in the city were drug-related, with the rate of unsolved murders roughly 70%. Areas of the city adjacent to the Detroit River are also patrolled by the United States Border Patrol. In 2012, crime in the city was among the reasons for more expensive car insurance. Beginning with its incorporation in 1802, Detroit has had a total of 74 mayors. Detroit's last mayor from the Republican Party was Louis Miriani, who served from 1957 to 1962. In 1973, the city elected its first black mayor, Coleman Young. Despite development efforts, his combative style during his five terms in office was not well received by many suburban residents. Mayor Dennis Archer, a former Michigan Supreme Court Justice, refocused the city's attention on redevelopment with a plan to permit three casinos downtown. By 2008, three major casino resort hotels had established operations in the city. In 2000, the city requested an investigation by the United States Justice Department into the Detroit Police Department over allegations regarding its use of force and civil rights violations; the investigation was concluded in 2003. The city proceeded with a major reorganization of the Detroit Police Department.
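As an illustration of the arithmetic behind the crime figures cited above, the short Python sketch below recomputes the per-resident rates and the 2015-to-2016 homicide change. This is a minimal sketch for illustration only: the mid-2010s population of roughly 680,000 and the implied 2015 baseline of about 295 homicides are assumptions inferred from the figures in the text, not reported values.

    # Minimal sketch of the rate and percent-change arithmetic (assumed inputs noted below).
    def rate_per_1000(events: int, population: int) -> float:
        """Events per 1,000 residents."""
        return events / population * 1000

    population = 680_000     # approximate mid-2010s Detroit population (assumption)
    homicides_2016 = 302     # reported above
    increase = 0.0237        # reported 2.37% year-over-year increase

    homicides_2015 = homicides_2016 / (1 + increase)  # implied 2015 baseline, ~295
    print(f"implied 2015 homicides: {homicides_2015:.0f}")
    print(f"2016 homicides per 1,000 residents: {rate_per_1000(homicides_2016, population):.2f}")
    print(f"2016 homicides per 100,000 residents: {rate_per_1000(homicides_2016, population) * 100:.1f}")

The same per-1,000 formula underlies the Neighborhoodscout.com figures quoted above (for example, 62.18 property crimes per 1,000 residents).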
Detroit is sometimes referred to as a sanctuary city because it has "anti-profiling ordinances that generally prohibit local police from asking about the immigration status of people who are not suspected of any crime." In March 2013, Governor Rick Snyder declared a financial emergency in the city, stating that the city had a $327 million budget deficit and faced more than $14 billion in long-term debt. It had been making ends meet on a month-to-month basis with the help of bond money held in a state escrow account and had instituted mandatory unpaid days off for many city workers. Those troubles, along with underfunded city services, such as police and fire departments, and ineffective turnaround plans from Mayor Bing and the City Council, led the state of Michigan to appoint an emergency manager for Detroit on March 14, 2013. On June 14, 2013, Detroit defaulted on $2.5 billion of debt by withholding $39.7 million in interest payments, while Emergency Manager Kevyn Orr met with bondholders and other creditors in an attempt to restructure the city's $18.5 billion debt and avoid bankruptcy. On July 18, 2013, the City of Detroit filed for Chapter 9 bankruptcy protection. It was declared bankrupt by U.S. judge Stephen Rhodes on December 3, with its $18.5 billion in debt; in accepting the city's contention that it was broke, he said that negotiations with its thousands of creditors were infeasible. The city levies an income tax of 2.4 percent on residents and 1.2 percent on nonresidents. Detroit is home to several institutions of higher learning, including Wayne State University, a national research university with medical and law schools in the Midtown area offering hundreds of academic degrees and programs. The University of Detroit Mercy, in Northwest Detroit in the University District, is a prominent Roman Catholic co-educational university affiliated with the Society of Jesus (the Jesuits) and the Sisters of Mercy. The University of Detroit Mercy offers more than a hundred academic degrees and programs of study, including business, dentistry, law, engineering, architecture, nursing, and allied health professions. The University of Detroit Mercy School of Law is downtown across from the Renaissance Center. Sacred Heart Major Seminary, founded in 1919, is affiliated with the Pontifical University of Saint Thomas Aquinas, "Angelicum", in Rome and offers pontifical degrees as well as civil undergraduate and graduate degrees. Sacred Heart Major Seminary offers a variety of academic programs for both clerical and lay students. Other institutions in the city include the College for Creative Studies, Marygrove College, and Wayne County Community College. In June 2009, the Michigan State University College of Osteopathic Medicine, which is based in East Lansing, opened a satellite campus at the Detroit Medical Center. The University of Michigan was established in 1817 in Detroit and later moved to Ann Arbor in 1837. In 1959, the University of Michigan–Dearborn was established in neighboring Dearborn. Many K-12 students in Detroit frequently change schools, with some children having been enrolled in seven schools before finishing their K-12 careers. There is a concentration of senior high schools and charter schools in the Downtown Detroit area, which has wealthier residents and more gentrification relative to other parts of Detroit: Downtown, northwest Detroit, and northeast Detroit have 1,894, 3,742, and 6,018 students of high school age, respectively, but 11, three, and two high schools, respectively.
With about 66,000 public school students (2011–12), the Detroit Public Schools (DPS) district is the largest school district in Michigan. Detroit has an additional 56,000 charter school students, for a combined enrollment of about 122,000 students; there are about as many students in charter schools as there are in district schools. DPS continues to have the majority of the special education pupils. In addition, some Detroit students, as of 2016, attend public schools in other municipalities. In 1999, the Michigan Legislature removed the locally elected board of education amid allegations of mismanagement and replaced it with a reform board appointed by the mayor and governor. The elected board of education was re-established following a city referendum in 2005. The first election of the new 11-member board of education occurred on November 8, 2005. Due to growing charter school enrollment in Detroit, as well as a continued exodus of population, the city planned to close many public schools. State officials report a 68% graduation rate for Detroit's public schools adjusted for those who change schools. Traditional public and charter school students in the city have performed poorly on standardized tests. Circa 2009 and 2011, while Detroit's traditional public schools scored a record low on national tests, the publicly funded charter schools did even worse than the traditional public schools. There were 30,000 excess openings in Detroit's traditional public and charter schools, bearing in mind the number of K-12-aged children in the city. In 2016, Kate Zernike of "The New York Times" stated that school performance did not improve despite the proliferation of charters, describing the situation as "lots of choice, with no good choice." Detroit public school students scored the lowest on tests of reading and writing of all major cities in the United States in 2015. Among eighth-graders, only 27% showed basic proficiency in math and 44% in reading. Nearly half of Detroit's adults are functionally illiterate. Detroit is served by various private schools, as well as parochial Roman Catholic schools operated by the Archdiocese of Detroit. There are four Catholic grade schools and three Catholic high schools in the City of Detroit, all of them on the city's west side. The Archdiocese of Detroit lists a number of primary and secondary schools in the metro area, as Catholic education has migrated to the suburbs. Of the three Catholic high schools in the city, two are operated by the Society of Jesus and the third is co-sponsored by the Sisters, Servants of the Immaculate Heart of Mary and the Congregation of St. Basil. In the 1964–1965 school year, there were about 110 Catholic grade schools in Detroit, Hamtramck, and Highland Park and 55 Catholic high schools in those three cities. The Catholic school population in Detroit has decreased due to the growth of charter schools, increasing tuition at Catholic schools, the small number of African-American Catholics, white Catholics moving to the suburbs, and the decreased number of teaching nuns. The "Detroit Free Press" and "The Detroit News" are the major daily newspapers, both broadsheet publications published together under a joint operating agreement called the Detroit Newspaper Partnership. Media philanthropy includes the "Detroit Free Press" high school journalism program and the Old Newsboys' Goodfellow Fund of Detroit.
In March 2009, the two newspapers reduced home delivery to three days a week, printed reduced newsstand issues of the papers on non-delivery days, and focused resources on Internet-based news delivery. The "Metro Times", founded in 1980, is a weekly publication covering news, arts, and entertainment. Founded in 1935 and based in Detroit, the "Michigan Chronicle" is one of the oldest and most respected African-American weekly newspapers in America, covering politics, entertainment, sports, and community events. The Detroit television market is the 11th largest in the United States, according to estimates that do not include audiences in large areas of Ontario, Canada (Windsor and its surrounding area on broadcast and cable TV, as well as several other cable markets in Ontario, such as the city of Ottawa), which receive and watch Detroit television stations. Detroit has the 11th largest radio market in the United States, though this ranking does not take into account Canadian audiences. Nearby Canadian stations such as Windsor's CKLW (whose jingles formerly proclaimed "CKLW-the Motor City") are popular in Detroit. Hardcore Pawn, a U.S. documentary reality television series produced for truTV, features the day-to-day operations of American Jewelry and Loan, a family-owned pawn shop on Greenfield Road. Within the city of Detroit, there are over a dozen major hospitals, which include the Detroit Medical Center (DMC), Henry Ford Health System, St. John Health System, and the John D. Dingell VA Medical Center. The DMC, a regional Level I trauma center, consists of Detroit Receiving Hospital and University Health Center, Children's Hospital of Michigan, Harper University Hospital, Hutzel Women's Hospital, Kresge Eye Institute, Rehabilitation Institute of Michigan, Sinai-Grace Hospital, and the Karmanos Cancer Institute. The DMC has more than 2,000 licensed beds and 3,000 affiliated physicians. It is the largest private employer in the City of Detroit. The center is staffed by physicians from the Wayne State University School of Medicine, the largest single-campus medical school in the United States and the nation's fourth largest medical school overall. Detroit Medical Center formally became a part of Vanguard Health Systems on December 30, 2010, as a for-profit corporation. Vanguard has agreed to invest nearly $1.5 billion in the Detroit Medical Center complex, including $417 million to retire debts, at least $350 million in capital expenditures, and an additional $500 million for new capital investment. Vanguard has agreed to assume all debts and pension obligations. The metro area has many other hospitals, including William Beaumont Hospital, St. Joseph's, and University of Michigan Medical Center. In 2011, Detroit Medical Center and Henry Ford Health System substantially increased investments in medical research facilities and hospitals in the city's Midtown and New Center. In 2012, two major construction projects began in New Center: the Henry Ford Health System started the first phase of a $500 million, 300-acre revitalization project with the construction of a new $30 million, 275,000-square-foot Medical Distribution Center for Cardinal Health, Inc., and Wayne State University started construction on a new $93 million, 207,000-square-foot Integrative Biosciences Center (IBio). As many as 500 researchers and staff will work out of the IBio Center.
With its proximity to Canada and its facilities, ports, major highways, rail connections, and international airports, Detroit is an important transportation hub. The city has three international border crossings, the Ambassador Bridge, Detroit–Windsor Tunnel, and Michigan Central Railway Tunnel, linking Detroit to Windsor, Ontario. The Ambassador Bridge is the single busiest border crossing in North America, carrying 27% of the total trade between the U.S. and Canada. On February 18, 2015, Canadian Transport Minister Lisa Raitt announced that Canada had agreed to pay the entire cost to build a $250 million U.S. Customs plaza adjacent to the planned new Detroit–Windsor bridge, now the Gordie Howe International Bridge. Canada had already planned to pay for 95% of the bridge, which will cost $2.1 billion and is expected to open in 2022 or 2023. "This allows Canada and Michigan to move the project forward immediately to its next steps which include further design work and property acquisition on the U.S. side of the border," Raitt said in a statement issued after she spoke in the House of Commons. Mass transit in the region is provided by bus services. The Detroit Department of Transportation (DDOT) provides service within city limits up to the outer edges of the city. From there, the Suburban Mobility Authority for Regional Transportation (SMART) provides service to the suburbs and the city regionally with local routes and SMART's FAST service. FAST is a newer service provided by SMART that offers limited stops along major corridors throughout the Detroit metropolitan area, connecting the suburbs to downtown. The high-frequency service travels along three of Detroit's busiest corridors, Gratiot, Woodward, and Michigan, and stops only at designated FAST stops. Cross-border service between the downtown areas of Windsor and Detroit is provided by Transit Windsor via the Tunnel Bus. An elevated rail system known as the People Mover, completed in 1987, provides daily service around a loop downtown. The QLINE serves as a link between the Detroit People Mover and the Detroit Amtrak station via Woodward Avenue. The SEMCOG Commuter Rail line will extend from Detroit's New Center, connecting to Ann Arbor via Dearborn, Wayne, and Ypsilanti, when it opens. The Regional Transit Authority (RTA) was established by an act of the Michigan legislature in December 2012 to oversee and coordinate all existing regional mass transit operations and to develop new transit services in the region. The RTA's first project was the introduction of RefleX, a limited-stop, cross-county bus service connecting downtown and midtown Detroit with Oakland County via Woodward Avenue. Amtrak provides service to Detroit, operating its "Wolverine" service between Chicago and Pontiac. The Amtrak station is in New Center, north of downtown. The "J. W. Westcott II", which delivers mail to lake freighters on the Detroit River, is a floating post office. The city of Detroit has a higher than average percentage of households without a car. In 2016, 24.7 percent of Detroit households lacked a car, much higher than the national average of 8.7 percent. Detroit averaged 1.15 cars per household in 2016, compared to a national average of 1.8. Freight railroad operations in the city of Detroit are provided by Canadian National Railway, Canadian Pacific Railway, Conrail Shared Assets, CSX Transportation, and Norfolk Southern Railway, each of which has local yards within the city.
Detroit is also served by the Delray Connecting Railroad and Detroit Connecting Railroad shortlines. Detroit Metropolitan Wayne County Airport (DTW), the principal airport serving Detroit, is in nearby Romulus. DTW is a primary hub for Delta Air Lines (following its acquisition of Northwest Airlines) and a secondary hub for Spirit Airlines. The airport is connected to Downtown Detroit by the Suburban Mobility Authority for Regional Transportation (SMART) FAST Michigan route. Coleman A. Young International Airport (DET), previously called Detroit City Airport, is on Detroit's northeast side; the airport now maintains only charter service and general aviation. Willow Run Airport, in far-western Wayne County near Ypsilanti, is a general aviation and cargo airport. Metro Detroit has an extensive toll-free network of freeways administered by the Michigan Department of Transportation. Four major Interstate Highways surround the city. Detroit is connected via Interstate 75 (I-75) and I-96 to Kings Highway 401 and to major Southern Ontario cities such as London, Ontario and the Greater Toronto Area. I-75 (Chrysler and Fisher freeways) is the region's main north–south route, serving Flint, Pontiac, Troy, and Detroit before continuing south (as the Detroit–Toledo and Seaway Freeways) to serve many of the communities along the shore of Lake Erie. I-94 (Edsel Ford Freeway) runs east–west through Detroit and serves Ann Arbor to the west (where it continues to Chicago) and Port Huron to the northeast. The stretch of the I-94 freeway from Ypsilanti to Detroit was one of America's earlier limited-access highways. Henry Ford built it to link the factories at Willow Run and Dearborn during World War II. A portion was known as the Willow Run Expressway. The I-96 freeway runs northwest–southeast through Livingston, Oakland, and Wayne counties and (as the Jeffries Freeway through Wayne County) has its eastern terminus in downtown Detroit. I-275 runs north–south from I-75 in the south to the junction of I-96 and I-696 in the north, providing a bypass through the western suburbs of Detroit. I-375 is a short spur route in downtown Detroit, an extension of the Chrysler Freeway. I-696 (Reuther Freeway) runs east–west from the junction of I-96 and I-275, providing a route through the northern suburbs of Detroit. Taken together, I-275 and I-696 form a semicircle around Detroit. Michigan state highways designated with the letter M serve to connect major freeways. As noted above, Detroit has a floating post office: in 1948, the J. W. Westcott II became a floating post office serving the Port of Detroit, and its ZIP code is 48222. Originally established in 1874 as a maritime reporting agency to inform other vessels about port conditions, the J. W. Westcott II is still in operation today.
https://en.wikipedia.org/wiki?curid=8687
Deccan Traps The Deccan Traps are a large igneous province of west-central India (17–24°N, 73–74°E). They are one of the largest volcanic features on Earth. They consist of multiple layers of solidified flood basalt that together are more than 2,000 meters (6,600 ft) thick, cover an area of about 500,000 square kilometers, and have a volume of about 1,000,000 cubic kilometers. Originally, the Deccan Traps may have covered about 1.5 million square kilometers, with a correspondingly larger original volume. The term "trap" has been used in geology since 1785–1795 for such rock formations. It is derived from the Swedish word for stairs ("trappa") and refers to the step-like hills forming the landscape of the region. The Deccan Traps began forming 66.25 million years ago, at the end of the Cretaceous period. The bulk of the volcanic eruption occurred at the Western Ghats some 66 million years ago. This series of eruptions may have lasted fewer than 30,000 years. The original area covered by the lava flows is estimated to have been as large as about 1.5 million square kilometers, approximately half the size of modern India. The Deccan Traps region was reduced to its current size by erosion and plate tectonics; the present area of directly observable lava flows is around 500,000 square kilometers (see the consistency check below). The release of volcanic gases, particularly sulfur dioxide, during the formation of the traps contributed to climate change. Data points to an average drop in temperature of about 2 °C (3.6 °F) in this period. Because of its magnitude, scientists have speculated that the gases released during the formation of the Deccan Traps played a major role in the Cretaceous–Paleogene (K–Pg) extinction event (also known as the Cretaceous–Tertiary or K–T extinction). It has been theorized that sudden cooling due to sulfurous volcanic gases released by the formation of the traps, along with toxic gas emissions, may have contributed significantly to the K–Pg and other mass extinctions. However, the current consensus among the scientific community is that the extinction was primarily triggered by the Chicxulub impact event in North America, which would have produced a sunlight-blocking dust cloud that killed much of the plant life and reduced global temperature (this cooling is called an impact winter). Work published in 2014 by geologist Gerta Keller and others on the timing of the Deccan volcanism suggests the extinction may have been caused by both the volcanism and the impact event. This was followed by a similar study in 2015; both consider the hypothesis that the impact exacerbated or induced the Deccan volcanism, since the events occurred at nearly antipodal points. However, the impact theory is still the best supported and has been determined by various reviews to be the consensus view. Within the Deccan Traps, at least 95% of the lavas are tholeiitic basalts. Other rock types present include alkali basalt, nephelinite, lamprophyre, and carbonatite. Mantle xenoliths have been described from Kachchh (northwestern India) and elsewhere in the western Deccan. The Deccan Traps are famous for the beds of fossils that have been found between layers of lava. Particularly well known species include the frog "Oxyglossus pusillus" (Owen) of the Eocene of India and the toothed frog "Indobatrachus", an early lineage of modern frogs, which is now placed in the Australian family Myobatrachidae. The Infratrappean Beds and Intertrappean Beds also contain fossil freshwater molluscs. It is postulated that the Deccan Traps eruption was associated with a deep mantle plume.
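The approximate figures above admit a quick arithmetic consistency check, sketched below in Python. All inputs are the rounded estimates quoted in this article, plus modern India's land area of roughly 3.29 million square kilometers (an added reference value, not from the text), so the outputs are order-of-magnitude illustrations rather than measurements.

    # Rough consistency check of the Deccan Traps figures quoted above.
    original_area_km2 = 1_500_000  # estimated original extent of the lava flows
    present_area_km2 = 500_000     # present directly observable lava flows
    volume_km3 = 1_000_000         # estimated present volume
    india_area_km2 = 3_287_000     # modern India's area (reference value, an assumption here)

    print(f"original extent vs. India: {original_area_km2 / india_area_km2:.0%}")          # ~46%, about half
    print(f"fraction of original area remaining: {present_area_km2 / original_area_km2:.0%}")  # ~33%
    print(f"implied average thickness: {volume_km3 / present_area_km2:.1f} km")             # ~2 km

The implied average thickness of about 2 km agrees with the quoted figure of more than 2,000 meters, and the estimated original extent comes out at roughly half of modern India's area, as stated above.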
The area of long-term eruption (the hotspot), known as the Réunion hotspot, is suspected both of causing the Deccan Traps eruption and of opening the rift that once separated the Seychelles plateau from India. Seafloor spreading at the boundary between the Indian and African Plates subsequently pushed India north over the plume, which now lies under Réunion island in the Indian Ocean, southwest of India. The mantle plume model has, however, been challenged, although data continue to emerge that support it. The motion of the Indian tectonic plate and the eruptive history of the Deccan Traps show strong correlations. Based on data from marine magnetic profiles, a pulse of unusually rapid plate motion began at the same time as the first pulse of Deccan flood basalts, which is dated at 67 million years ago. The spreading rate rapidly increased and reached a maximum at the same time as the peak basaltic eruptions. The spreading rate then dropped off, with the decrease occurring around 63 million years ago, by which time the main phase of Deccan volcanism had ended. This correlation is seen as driven by plume dynamics. The motions of the Indian and African plates have also been shown to be coupled, the common element being the position of these plates relative to the location of the Réunion plume head. The onset of accelerated motion of India coincides with a large slowing of the rate of counterclockwise rotation of Africa. The close correlations between the plate motions suggest that they were both driven by the force of the Réunion plume. There is some evidence to link the Deccan Traps eruption to the contemporaneous asteroid impact that created the nearly antipodal Chicxulub crater in the Mexican state of Yucatán. Although the Deccan Traps began erupting well before the impact, argon-argon dating suggests that the impact may have caused an increase in permeability that allowed magma to reach the surface and produced the most voluminous flows, accounting for around 70% of the volume. The combination of the asteroid impact and the resulting increase in eruptive volume may have been responsible for the mass extinctions that occurred at the time that separates the Cretaceous and Paleogene periods, known as the K–Pg boundary. A more recent discovery appears to demonstrate the scope of the destruction from the impact alone, however. In a March 2019 article in the "Proceedings of the National Academy of Sciences", an international team of twelve scientists revealed the contents of the Tanis fossil site discovered near Bowman, North Dakota, which appeared to show a devastating mass destruction of an ancient lake and its inhabitants at the time of the Chicxulub impact. In the paper, the group claims that the site is strewn with fossilized trees and the remains of fish and other animals. The lead researcher, Robert A. DePalma of the University of Kansas, was quoted in "The New York Times" as stating that "[Y]ou would be blind to miss the carcasses sticking out... It is impossible to miss when you see the outcrop." Evidence correlating this find to the Chicxulub impact included tektites bearing "the unique chemical signature of other tektites associated with the Chicxulub event" found in the gills of fish fossils and embedded in amber, an iridium-rich top layer that is considered another signature of the event, and an atypical lack of scavenging of the dead fish and animals, which suggested few other species survived the event to feed off the mass death.
The exact mechanism of the site's destruction has been debated as either an impact-caused tsunami or lake and river seiche activity triggered by post-impact earthquakes, though researchers have not yet settled on a firm conclusion. A geological structure on the sea floor off the west coast of India has been suggested as a possible impact crater, in this context called the Shiva crater. It has also been dated at approximately 66 million years ago, potentially matching the Deccan Traps. The researchers claiming that this feature is an impact crater suggest that the impact may have been the triggering event for the Deccan Traps as well as a contributor to the acceleration of the Indian plate in the early Paleogene. However, the current consensus in the Earth science community is that this feature is unlikely to be an actual impact crater.
https://en.wikipedia.org/wiki?curid=8688
Don't ask, don't tell "Don't ask, don't tell" (DADT) was the official United States policy on military service by gays, bisexuals, and lesbians, instituted by the Clinton Administration on February 28, 1994, when Department of Defense Directive 1304.26, issued on December 21, 1993, took effect; the policy lasted until September 20, 2011. The policy prohibited military personnel from discriminating against or harassing closeted homosexual or bisexual service members or applicants, while barring openly gay, lesbian, or bisexual persons from military service. This relaxation of legal restrictions on service by gays and lesbians in the armed forces was mandated by United States federal law (10 U.S.C. § 654), which was signed November 30, 1993. The policy prohibited people who "demonstrate a propensity or intent to engage in homosexual acts" from serving in the armed forces of the United States, because their presence "would create an unacceptable risk to the high standards of morale, good order and discipline, and unit cohesion that are the essence of military capability". The act prohibited any homosexual or bisexual person from disclosing their sexual orientation or from speaking about any homosexual relationships, including marriages or other familial attributes, while serving in the United States armed forces. The act specified that service members who disclose that they are homosexual or engage in homosexual conduct should be separated (discharged) except when a service member's conduct was "for the purpose of avoiding or terminating military service" or when it "would not be in the best interest of the armed forces". Since DADT ended in 2011, persons who are openly homosexual and bisexual have been able to serve. The "don't ask" part of the DADT policy specified that superiors should not initiate investigation of a service member's orientation without witnessing disallowed behaviors, though credible evidence of homosexual behavior could be used to initiate an investigation. Unauthorized investigations and harassment of suspected servicemen and women led to an expansion of the policy to "don't ask, don't tell, don't pursue, don't harass". Beginning in the early 2000s, several legal challenges to DADT were filed, and legislation to repeal DADT was enacted in December 2010, specifying that the policy would remain in place until the President, the Secretary of Defense, and the Chairman of the Joint Chiefs of Staff certified that repeal would not harm military readiness, followed by a 60-day waiting period. A July 6, 2011, ruling from a federal appeals court barred further enforcement of the U.S. military's ban on openly gay service members. President Barack Obama, Secretary of Defense Leon Panetta, and Chairman of the Joint Chiefs of Staff Admiral Mike Mullen sent that certification to Congress on July 22, 2011, which set the end of DADT to September 20, 2011. Engaging in homosexual activity had been grounds for discharge from the American military since the Revolutionary War. Policies based on sexual orientation appeared as the United States prepared to enter World War II. When the military added psychiatric screening to its induction process, it included homosexuality as a disqualifying trait, then seen as a form of psychopathology. When the army issued revised mobilization regulations in 1942, it distinguished "homosexual" recruits from "normal" recruits for the first time.
Before the buildup to the war, gay service members were court-martialed, imprisoned, and dishonorably discharged; but in wartime, commanding officers found it difficult to convene court-martial boards of commissioned officers and the administrative blue discharge became the military's standard method for handling gay and lesbian personnel. In 1944, a new policy directive decreed that homosexuals were to be committed to military hospitals, examined by psychiatrists and discharged under Regulation 615-360, section 8. In 1947, blue discharges were discontinued and two new classifications were created: "general" and "undesirable". Under such a system, a serviceman or woman found to be gay but who had not committed any sexual acts while in service would tend to receive an undesirable discharge. Those found guilty of engaging in sexual conduct were usually dishonorably discharged. A 1957 U.S. Navy study known as the Crittenden Report dismissed the charge that homosexuals constitute a security risk, but advocated stringent anti-homosexual policies because "Homosexuality is wrong, it is evil, and it is to be branded as such." It remained secret until 1976. Fannie Mae Clackum was the first service member to successfully appeal such a discharge, winning eight years of back pay from the US Court of Claims in 1960. From the 1950s through the Vietnam War, some notable gay service members avoided discharges despite pre-screening efforts, and when personnel shortages occurred, homosexuals were allowed to serve. The gay and lesbian rights movement in the 1970s and 1980s raised the issue by publicizing several noteworthy dismissals of gay service members. Sgt. Leonard Matlovich appeared on the cover of "Time" in 1975. In 1982 the Department of Defense issued a policy stating that, "Homosexuality is incompatible with military service." It cited the military's need "to maintain discipline, good order, and morale" and "to prevent breaches of security". In 1988, in response to a campaign against lesbians at the Marines' Parris Island Depot, activists launched the Gay and Lesbian Military Freedom Project (MFP) to advocate for an end to the exclusion of gays and lesbians from the armed forces. In 1989, reports commissioned by the Personnel Security Research and Education Center (PERSEREC), an arm of the Pentagon, were discovered in the process of Joseph Steffan's lawsuit fighting his forced resignation from the U.S. Naval Academy. One report said that "having a same-gender or an opposite-gender orientation is unrelated to job performance in the same way as is being left- or right-handed." Other lawsuits fighting discharges highlighted the service record of service members like Tracy Thorne and Margarethe (Grethe) Cammermeyer. The MFP began lobbying Congress in 1990, and in 1991 Senator Brock Adams (D-Washington) and Rep. Barbara Boxer introduced the Military Freedom Act, legislation to end the ban completely. Adams and Rep. Pat Schroeder (D-Colorado) re-introduced it the next year. In July 1991, Secretary of Defense Dick Cheney, in the context of the outing of his press aide Pete Williams, dismissed the idea that gays posed a security risk as "a bit of an old chestnut" in testimony before the House Budget Committee. In response to his comment, several major newspapers endorsed ending the ban, including "USA Today", the "Los Angeles Times", and the "Detroit Free Press". 
In June 1992, the General Accounting Office released a report, requested by members of Congress two years earlier, estimating the costs associated with the ban on gays and lesbians in the military at $27 million annually. During the 1992 U.S. presidential election campaign, the civil rights of gays and lesbians, particularly their open service in the military, attracted some press attention, and all candidates for the Democratic presidential nomination supported ending the ban on military service by gays and lesbians, but the Republicans did not make a political issue of that position. In an August cover letter to all his senior officers, Gen. Carl Mundy, Jr., Commandant of the Marine Corps, praised a position paper authored by a Marine Corps chaplain that said that "In the unique, intensely close environment of the military, homosexual conduct can threaten the lives, including the physical (e.g. AIDS) and psychological well-being of others". Mundy called it "extremely insightful" and said it offered "a sound basis for discussion of the issue". The murder of gay U.S. Navy petty officer Allen R. Schindler, Jr. on October 27, 1992, brought calls from advocates of open service by gays and lesbians for prompt action by the incoming Clinton administration. The policy was introduced as a compromise measure in 1993 by President Bill Clinton, who had campaigned in 1992 on the promise to allow all citizens to serve in the military regardless of sexual orientation. Commander Craig Quigley, a Navy spokesman, expressed the opposition of many in the military at the time when he said, "Homosexuals are notoriously promiscuous" and that in shared shower situations, heterosexuals would have an "uncomfortable feeling of someone watching". During the 1993 policy debate, the National Defense Research Institute prepared a study for the Office of the Secretary of Defense, published as "Sexual Orientation and U.S. Military Personnel Policy: Options and Assessment". It concluded that "circumstances could exist under which the ban on homosexuals could be lifted with little or no adverse consequences for recruitment and retention" if the policy were implemented with care, principally because many factors contribute to individual enlistment and re-enlistment decisions. On May 5, 1993, Gregory M. Herek, associate research psychologist at the University of California at Davis and an authority on public attitudes toward lesbians and gay men, testified before the House Armed Services Committee on behalf of several professional associations. He stated, "The research data show that there is nothing about lesbians and gay men that makes them inherently unfit for military service, and there is nothing about heterosexuals that makes them inherently unable to work and live with gay people in close quarters." Herek added, "The assumption that heterosexuals cannot overcome their prejudices toward gay people is a mistaken one." In Congress, Democratic Senator Sam Nunn of Georgia led the contingent that favored maintaining the absolute ban on gays. Reformers were led by Democratic Congressman Barney Frank of Massachusetts, who favored modification (but ultimately voted for the defense authorization bill with the gay ban language), and Barry Goldwater, a former Republican Senator and retired Major General, who argued on behalf of allowing service by open gays and lesbians. In a June 1993 "Washington Post" opinion piece, Goldwater wrote: "You don't have to be straight to shoot straight".
Congress rushed to enact the existing gay ban policy into federal law, outflanking Clinton's planned repeal effort. Clinton called for legislation to overturn the ban, but encountered intense opposition from the Joint Chiefs of Staff, members of Congress, and portions of the public. DADT emerged as a compromise policy. Congress included text in the National Defense Authorization Act for Fiscal Year 1994 (passed in 1993) requiring the military to abide by regulations essentially identical to the 1982 absolute ban policy. The Clinton Administration on December 21, 1993, issued Defense Directive 1304.26, which directed that military applicants were not to be asked about their sexual orientation. This policy is now known as "Don't Ask, Don't Tell". The phrase was coined by Charles Moskos, a military sociologist. In accordance with the December 21, 1993, Department of Defense Directive 1332.14, it was legal policy (10 U.S.C. § 654) that homosexuality was incompatible with military service and that persons who engaged in homosexual acts or stated that they were homosexual or bisexual were to be discharged. The Uniform Code of Military Justice, passed by Congress in 1950 and signed by President Harry S. Truman, established the policies and procedures for discharging service members. The full name of the policy at the time was "Don't Ask, Don't Tell, Don't Pursue". The "Don't Ask" provision mandated that military or appointed officials would not ask about or require members to reveal their sexual orientation. The "Don't Tell" provision stated that a member could be discharged for claiming to be a homosexual or bisexual or for making a statement indicating a tendency towards or intent to engage in homosexual activities. The "Don't Pursue" provision established what was minimally required for an investigation to be initiated. A "Don't Harass" provision was added to the policy later. It ensured that the military would not allow harassment or violence against service members for any reason. The Servicemembers Legal Defense Network was founded in 1993 to advocate an end to discrimination on the basis of sexual orientation in the U.S. Armed Forces. DADT was upheld by five federal Courts of Appeals. The Supreme Court, in "Rumsfeld v. Forum for Academic and Institutional Rights, Inc." (2006), unanimously held that the federal government could constitutionally withhold funding from universities, no matter what their nondiscrimination policies might be, for refusing to give military recruiters access to school resources. An association of law schools had argued that allowing military recruiting at their institutions compromised their ability to exercise their free speech rights in opposition to discrimination based on sexual orientation as represented by DADT. In January 1998, Senior Chief Petty Officer Timothy R. McVeigh (not to be confused with the convicted Oklahoma City bomber, Timothy J. McVeigh) won a preliminary injunction from a U.S. district court that prevented his discharge from the U.S. Navy for "homosexual conduct" after 17 years of service. His lawsuit did not challenge the DADT policy, but asked the court to hold the military accountable for adhering to the policy's particulars. The Navy had investigated McVeigh's sexual orientation based on his AOL email account name and user profile. District Judge Stanley Sporkin ruled in "McVeigh v.
Cohen" that the Navy had violated its own DADT guidelines: "Suggestions of sexual orientation in a private, anonymous email account did not give the Navy a sufficient reason to investigate to determine whether to commence discharge proceedings." He called the Navy's investigation "a search and destroy mission" against McVeigh. The case also attracted attention because a navy paralegal had misrepresented himself when querying AOL for information about McVeigh's account. Frank Rich linked the two issues: "McVeigh is as clear-cut a victim of a witch hunt as could be imagined, and that witch hunt could expand exponentially if the military wants to add on-line fishing to its invasion of service members' privacy." AOL apologized to McVeigh and paid him damages. McVeigh reached a settlement with the Navy that paid his legal expenses and allowed him to retire with full benefits in July. "The New York Times" called Sporkin's ruling "a victory for gay rights, with implications for the millions of people who use computer on-line services". In April 2006, Margaret Witt, a major in the United States Air Force who was being investigated for homosexuality, filed suit in the United States District Court for the Western District of Washington seeking declaratory and injunctive relief on the grounds that DADT violates substantive due process, the Equal Protection Clause, and procedural due process. In July 2007 the Secretary of the Air Force ordered her honorable discharge. Dismissed by the district court, the case was heard on appeal, and the Ninth Circuit issued its ruling on May 21, 2008. Its decision in "Witt v. Department of the Air Force" reinstated Witt's substantive-due-process and procedural-due-process claims and affirmed the dismissal of her Equal Protection claim. The Ninth Circuit, analyzing the Supreme Court decision in "Lawrence v. Texas" (2003), determined that DADT had to be subjected to heightened scrutiny, meaning that there must be an "important" governmental interest at issue, that DADT must "significantly" further the governmental interest, and that there can be no less intrusive way for the government to advance that interest. The Obama administration declined to appeal, allowing a May 3, 2009, deadline to pass, leaving "Witt" as binding on the entire Ninth Circuit, and returning the case to the District Court. On September 24, 2010, District Judge Ronald B. Leighton ruled that Witt's constitutional rights had been violated by her discharge and that she must be reinstated to the Air Force. The government filed an appeal with the Ninth Circuit on November 23, but made no attempt to have the trial court's ruling stayed pending the outcome. In a settlement announced on May 10, 2011, the Air Force agreed to drop its appeal and remove Witt's discharge from her military record. She will retire with full benefits. In 2010, a lawsuit filed in 2004 by the Log Cabin Republicans (LCR), the nation's largest Republican gay organization, went to trial. Challenging the constitutionality of DADT, the plaintiffs stated that the policy violates the rights of gay military members to free speech, due process and open association. The government argued that DADT was necessary to advance a legitimate governmental interest. Plaintiffs introduced statements by President Barack Obama, from prepared remarks, that DADT "doesn't contribute to our national security", "weakens our national security", and that reversal is "essential for our national security". 
According to the plaintiffs, these statements alone satisfied their burden of proof on the due process claims. On September 9, 2010, Judge Virginia A. Phillips ruled in "Log Cabin Republicans v. United States of America" that the ban on service by openly gay service members was an unconstitutional violation of the First and Fifth Amendments. On October 12, 2010, she granted an immediate worldwide injunction prohibiting the Department of Defense from enforcing the "Don't Ask Don't Tell" policy and ordered the military to suspend and discontinue any investigation or discharge, separation, or other proceedings based on it. The Department of Justice appealed her decision and requested a stay of her injunction, which Phillips denied but which the Ninth Circuit Court of Appeals granted on October 20 and stayed pending appeal on November 1. The U.S. Supreme Court refused to overrule the stay. Briefing in the case argued that the District Court had neither anticipated questions of constitutional law nor formulated a rule broader than required by the facts: the constitutional issues regarding DADT were well-defined, and the District Court had focused specifically on the relevant inquiry of whether the statute impermissibly infringed upon substantive due process rights with regard to a protected area of individual liberty; engaging in a careful and detailed review of the facts presented to it at trial, the District Court had properly concluded that the government put forward no persuasive evidence to demonstrate that the statute was a valid exercise of congressional authority to legislate in the realm of protected liberty interests (see Log Cabin, 716 F. Supp. 2d at 923), and hypothetical questions were neither presented nor answered in reaching that decision. On October 19, 2010, military recruiters were told they could accept openly gay applicants. On October 20, 2010, Lt. Daniel Choi, an openly gay man honorably discharged under DADT, re-enlisted in the U.S. Army. Following passage of the Don't Ask, Don't Tell Repeal Act of 2010, the Justice Department asked the Ninth Circuit to suspend LCR's suit in light of the legislative repeal. LCR opposed the request, noting that gay personnel were still subject to discharge. On January 28, 2011, the Court denied the Justice Department's request. The Obama administration responded by requesting that the policy be allowed to stay in place while it completed the process of assuring that its end would not impact combat readiness. On March 28, the LCR filed a brief asking that the court deny the administration's request. In 2011, while waiting for certification, several service members were discharged under DADT at their own insistence, until July 6, when a three-judge panel of the Ninth Circuit Court of Appeals reinstated Judge Phillips' injunction barring further enforcement of the U.S. military's ban on openly gay service members. On July 11, the appeals court asked the DOJ to inform the court if it intended to proceed with its appeal. On July 14, the Justice Department filed a motion "to avoid short-circuiting the repeal process established by Congress during the final stages of the implementation of the repeal", warning of "significant immediate harms on the government". On July 15, the Ninth Circuit restored most of the DADT policy, but continued to prohibit the government from discharging or investigating openly gay personnel. Following the implementation of DADT's repeal, a panel of three judges of the Ninth Circuit Court of Appeals vacated the Phillips ruling.
Following the July 1999 murder of Army Pfc. Barry Winchell, apparently motivated by anti-gay bias, President Clinton issued an executive order modifying the Uniform Code of Military Justice to permit evidence of a hate crime to be admitted during the sentencing phase of a trial. In December, Secretary of Defense William Cohen ordered a review of DADT to determine if the policy's anti-gay harassment component was being observed. When that review found anti-gay sentiments were widely expressed and tolerated in the military, the DOD adopted a new anti-harassment policy in July 2000, though its effectiveness was disputed. On December 7, 1999, Hillary Clinton told an audience of gay supporters that "Gays and lesbians already serve with distinction in our nation's armed forces and should not face discrimination. Fitness to serve should be based on an individual's conduct, not their sexual orientation." Later that month, retired Gen. Carl E. Mundy Jr. defended the implementation of DADT against what he called the "politicization" of the issue by both Clintons. He cited discharge statistics for the Marines for the previous five years showing that 75% were based on "voluntary admission of homosexuality" and 49% occurred during the first six months of service, when new recruits were most likely to reevaluate their decision to enlist. He also argued against any change in the policy, writing in "The New York Times": "Conduct that is widely rejected by a majority of Americans can undermine the trust that is essential to creating and maintaining the sense of unity that is critical to the success of a military organization operating under the very different and difficult demands of combat." The conviction of Winchell's murderer, according to "The New York Times", "galvanized opposition" to DADT, an issue that had "largely vanished from public debate". Opponents of the policy focused on punishing harassment in the military rather than the policy itself, which Sen. Chuck Hagel defended on December 25: "The U.S. armed forces aren't some social experiment." The principal candidates for the Democratic presidential nomination in 2000, Al Gore and Bill Bradley, both endorsed military service by open gays and lesbians, provoking opposition from high-ranking retired military officers, notably the recently retired commandant of the Marine Corps, Gen. Charles C. Krulak. He and others objected to Gore's statement that he would use support for ending DADT as a "litmus test" when considering candidates for the Joint Chiefs of Staff. The 2000 Democratic Party platform was silent on the issue, while the Republican Party platform that year said: "We affirm that homosexuality is incompatible with military service." Following the election of George W. Bush in 2000, observers expected him to avoid any changes to DADT, since his nominee for Secretary of State, Colin Powell, had participated in its creation. In February 2004, members of the British Armed Forces, Lt Rolf Kurth and Lt Cdr Craig Jones, along with Aaron Belkin, Director of the Center for the Study of Sexual Minorities in the Military, met with members of Congress and spoke at the National Defense University. They spoke about their experience of the situation in the UK, which had lifted its ban on gay members serving in its forces in 2000.
In July 2004, the American Psychological Association issued a statement that DADT "discriminates on the basis of sexual orientation" and that "Empirical evidence fails to show that sexual orientation is germane to any aspect of military effectiveness including unit cohesion, morale, recruitment and retention." It said that the U.S. military's track record of overcoming past racial and gender discrimination demonstrated its ability to integrate groups previously excluded. The Republican Party platform that year reiterated its support for the policy—"We affirm traditional military culture, and we affirm that homosexuality is incompatible with military service."—while the Democratic Party maintained its silence. In February 2005, the Government Accountability Office released estimates of the cost of DADT. It reported at least $95.4 million in recruiting costs and at least $95.1 million for training replacements for the 9,488 troops discharged from 1994 through 2003, while noting that the true figures might be higher. In September, as part of its campaign to demonstrate that the military allowed open homosexuals to serve when its manpower requirements were greatest, the Center for the Study of Sexual Minorities in the Military (now the Palm Center) reported that army regulations allowed the active duty deployment of Army Reservists and National Guard troops who claimed to be or were accused of being gay. A U.S. Army Forces Command spokesperson said the regulation was intended to prevent Reservists and National Guard members from pretending to be gay to escape combat. Advocates of ending DADT repeatedly publicized discharges of highly trained gay and lesbian personnel, especially those in positions with critical shortages, including fifty-nine Arabic speakers and nine Persian speakers. Elaine Donnelly, president of the Center for Military Readiness, later argued that the military's failure to ask about sexual orientation at recruitment was the cause of the discharges: "[Y]ou could reduce this number to zero or near zero if the Department of Defense dropped Don't Ask, Don't Tell... We should not be training people who are not eligible to be in the Armed Forces." In February 2006, a University of California Blue Ribbon Commission that included Lawrence Korb, a former assistant defense secretary during the Reagan administration, William Perry, Secretary of Defense in the Clinton administration, and professors from the United States Military Academy released its assessment of the GAO's analysis of the cost of DADT released a year earlier. The commission report stated that the GAO did not take into account the value the military lost from the departures, and that the total cost was closer to $363 million, including $14.3 million for "separation travel" following a service member's discharge, $17.8 million for training officers, $252.4 million for training enlistees, and $79.3 million in recruiting costs. In 2006, Soulforce, a national LGBT rights organization, organized its Right to Serve Campaign, in which gay men and lesbians in several cities attempted to enlist in the Armed Forces or National Guard. Donnelly of the Center for Military Readiness stated in September: "I think the people involved here do not have the best interests of the military at heart. They never have. They are promoting an agenda to normalize homosexuality in America using the military as a battering ram to promote that broader agenda." She said that "pro-homosexual activists ... 
are creating media events all over the country and even internationally." In 2006, a speaking tour of gay former service members, organized by SLDN, Log Cabin Republicans, and Rep. Martin Meehan, visited 18 colleges and universities. Patrick Guerriero, executive director of Log Cabin, thought the repeal movement was gaining "new traction" but said, "Ultimately, we think it's going to take a Republican with strong military credentials to make a shift in the policy." Elaine Donnelly called such efforts "a big P.R. campaign" and said that "The law is there to protect good order and discipline in the military, and it's not going to change." In December 2006, Zogby International released the results of a poll of military personnel conducted in October 2006 that found that 26% favored allowing gays and lesbians to serve openly in the military, 37% were opposed, and 37% expressed no preference or were unsure. Of respondents who had experience with gay people in their unit, 6% said their presence had a positive impact on their personal morale, 66% said it had no impact, and 28% said it had a negative impact. Regarding overall unit morale, 3% reported a positive impact, 64% no impact, and 27% a negative impact. Retired Chairman of the Joint Chiefs of Staff General John Shalikashvili and former Senator and Secretary of Defense William Cohen came out against the policy in January 2007: "I now believe that if gay men and lesbians served openly in the United States military, they would not undermine the efficacy of the armed forces," Shalikashvili wrote. "Our military has been stretched thin by our deployments in the Middle East, and we must welcome the service of any American who is willing and able to do the job." Shalikashvili cited a recent Zogby poll of more than 500 service members returning from Afghanistan and Iraq, three quarters of whom said they were comfortable interacting with gay people. The debate took a different turn in March when Gen. Peter Pace, Chairman of the Joint Chiefs of Staff, told the editorial board of the "Chicago Tribune" that he supported DADT because "homosexual acts between two individuals are immoral and ... we should not condone immoral acts." His remarks became, according to the "Tribune", "a huge news story on radio, television and the Internet during the day and showed how sensitive the Pentagon's policy has become." Sen. John Warner, who backed DADT, said "I respectfully, but strongly, disagree with the chairman's view that homosexuality is immoral", and Pace expressed regret for airing his personal views, saying that DADT "does not make a judgment about the morality of individual acts." Massachusetts Governor Mitt Romney, then in the early stages of his campaign for the 2008 Republican presidential nomination, defended DADT. That summer, after U.S. Senator Larry Craig was arrested for lewd conduct in a men's restroom, conservative commentator Michael Medved argued that any liberalization of DADT would "compromise restroom integrity and security". He wrote: "The national shudder of discomfort and queasiness associated with any introduction of homosexual eroticism into public men's rooms should make us more determined than ever to resist the injection of those lurid attitudes into the even more explosive situation of the U.S. military." In November 2007, 28 retired generals and admirals urged Congress to repeal the policy, citing evidence that 65,000 gay men and women were serving in the armed forces and that there were over a million gay veterans. 
On November 17, 2008, 104 retired generals and admirals signed a similar statement. In December, SLDN arranged for "60 Minutes" to interview Darren Manzella, an Army medic who served in Iraq after coming out to his unit. On May 4, 2008, while Chairman of the Joint Chiefs of Staff Admiral Mike Mullen was addressing the graduating cadets at West Point, a cadet asked what would happen if the next administration were supportive of legislation allowing gays to serve openly. Mullen responded, "Congress, and not the military, is responsible for DADT." Previously, during his Senate confirmation hearing in 2007, Mullen had told lawmakers, "I really think it is for the American people to come forward, really through this body, to both debate that policy and make changes, if that's appropriate." He went on to say, "I'd love to have Congress make its own decisions" with respect to considering repeal. In May 2009, when a committee of military law experts at the Palm Center, an anti-DADT research institute, concluded that the President could issue an executive order to suspend homosexual conduct discharges, Obama rejected that option and said he wanted Congress to change the law. On July 5, 2009, Colin Powell told CNN that the policy was "correct for the time" but that "sixteen years have now gone by, and I think a lot has changed with respect to attitudes within our country, and therefore I think this is a policy and a law that should be reviewed." Interviewed for the same broadcast, Mullen said the policy would continue to be implemented until the law was repealed, and that his advice was to "move in a measured way... At a time when we're fighting two conflicts there is a great deal of pressure on our forces and their families." In September, "Joint Force Quarterly" published an article by an Air Force colonel that disputed the argument that unit cohesion is compromised by the presence of openly gay personnel. In October 2009, the Commission on Military Justice, known as the Cox Commission, repeated its 2001 recommendation that Article 125 of the Uniform Code of Military Justice, which bans sodomy, be repealed, noting that "most acts of consensual sodomy committed by consenting military personnel are not prosecuted, creating a perception that prosecution of this sexual behavior is arbitrary." In January 2010, the White House and congressional officials started work on repealing the ban by inserting language into the 2011 defense authorization bill. In his State of the Union Address on January 27, 2010, Obama said that he would work with Congress and the military to enact a repeal of the gay ban law, and for the first time he set a timetable for repeal. At a February 2, 2010, congressional hearing, Senator John McCain read from a letter signed by "over one thousand former general and flag officers". It said: "We firmly believe that this law, which Congress passed to protect good order, discipline and morale in the unique environment of the armed forces, deserves continued support." The signature campaign had been organized by Elaine Donnelly of the Center for Military Readiness, a longtime supporter of a traditional all-male and all-heterosexual military. Servicemembers United, a veterans group opposed to DADT, issued a report critical of the letter's legitimacy. 
They said that among those signing the letter were officers who had no knowledge of their inclusion or who had refused to be included, and even one instance of a general's widow who signed her husband's name to the letter though he had died before the letter was published. The average age of the officers whose names were listed as signing the letter was 74, the oldest was 98, and Servicemembers United noted that "only a small fraction of these officers have even served in the military during the 'Don't Ask, Don't Tell' period, much less in the 21st century military." The Center for American Progress issued a report in March 2010 that said a smooth implementation of an end to DADT required eight specified changes to the military's internal regulations. On March 25, 2010, Defense Secretary Gates announced new rules mandating that only flag officers could initiate discharge proceedings and imposing more stringent rules of evidence on such proceedings. By the early 21st century, the underlying justifications for DADT had been subjected to increasing suspicion and outright rejection. Mounting evidence obtained from the integration efforts of foreign militaries, surveys of U.S. military personnel, and studies conducted by the DoD gave credence to the view that the presence of open homosexuals within the military would not be detrimental to the armed forces. A DoD study conducted at the behest of Secretary of Defense Robert Gates in 2010 lent this view the strongest support. The DoD working group conducting the study considered the impact that lifting the ban would have on unit cohesion and effectiveness, good order and discipline, and military morale. The study included a survey that revealed significant differences between respondents who believed they had served with homosexual troops and those who did not believe they had. In analyzing such data, the DoD working group concluded that it was generalized perceptions of homosexual troops, rather than actual experience of serving with them, that drove predictions of the unrest that would occur without DADT. Ultimately, the study deemed the overall risk to military effectiveness of lifting the ban to be low. Citing the ability of the armed forces to adjust to the previous integration of African-Americans and women, the DoD study asserted that the United States military could adjust as it had before in its history without serious lasting effect. In March 2005, Rep. Martin T. Meehan introduced the Military Readiness Enhancement Act in the House. It aimed "to amend title 10, United States Code, to enhance the readiness of the Armed Forces by replacing the current policy concerning homosexuality in the Armed Forces, referred to as 'Don't ask, don't tell,' with a policy of nondiscrimination on the basis of sexual orientation". As of 2006, it had 105 Democrats and 4 Republicans as co-sponsors. He introduced the bill again in 2007 and 2009. During the 2008 U.S. presidential election campaign, Senator Barack Obama advocated a full repeal of the laws barring gays and lesbians from serving in the military. Nineteen days after his election, Obama's advisers announced that plans to repeal the policy might be delayed until 2010, because Obama "first wants to confer with the Joint Chiefs of Staff and his new political appointees at the Pentagon to reach a consensus, and then present legislation to Congress". As president he advocated a policy change to allow gay personnel to serve openly in the armed forces, stating that the U.S. 
government has spent millions of dollars replacing troops expelled from the military because of DADT, including language experts fluent in Arabic. On the eve of the National Equality March in Washington, D.C., on October 10, 2009, Obama stated in a speech before the Human Rights Campaign that he would end the ban, but he offered no timetable. Obama said in his 2010 State of the Union Address: "This year, I will work with Congress and our military to finally repeal the law that denies gay Americans the right to serve the country they love because of who they are." This statement was quickly followed by Defense Secretary Robert Gates and Joint Chiefs Chairman Michael Mullen voicing their support for a repeal of DADT. Democrats in both houses of Congress first attempted to end DADT by amending the Defense Authorization Act. On May 27, 2010, on a 234–194 vote, the U.S. House of Representatives approved the Murphy amendment to the National Defense Authorization Act for Fiscal Year 2011. It provided for repeal of the DADT policy and created a process for lifting it, including a U.S. Department of Defense study and certification by key officials that the change in policy would not harm military readiness, followed by a waiting period of 60 days. The amended defense bill passed the House on May 28, 2010. On September 21, 2010, John McCain led a successful filibuster against debate on the Defense Authorization Act: 56 Senators voted to end debate, four short of the 60 votes required. Some advocates for repeal, including the Palm Center, OutServe, and Knights Out, opposed any attempt to block passage of the NDAA if it failed to include DADT repeal language. The Human Rights Campaign, the Center for American Progress, Servicemembers United, and SLDN refused to concede that possibility. The American Civil Liberties Union (ACLU) filed a lawsuit, "Collins v. United States", against the Department of Defense in November 2010 seeking full compensation for those discharged under the policy. On November 30, 2010, the Joint Chiefs of Staff released the "Don't Ask, Don't Tell" Comprehensive Review Working Group (CRWG) report, authored by Jeh C. Johnson, General Counsel of the Department of Defense, and Army General Carter F. Ham, which outlined a path to the implementation of repeal. The report indicated that there was a low risk of service disruptions from repealing the ban, provided sufficient time was allowed for proper implementation and training. It included the results of a survey of 115,000 active-duty and reserve service members. Across all service branches, 30 percent thought that integrating gays into the military would have negative consequences. In the Marine Corps and combat specialties, the percentage with that negative assessment ranged from 40 to 60 percent. The CRWG also said that 69 percent of all those surveyed believed they had already worked with a gay or lesbian service member, and of those, 92 percent reported that the impact of that person's presence was positive or neutral. The same day, in response to the CRWG, 30 professors and scholars, most from military institutions, issued a joint statement saying that the CRWG "echoes more than 20 studies, including studies by military researchers, all of which reach the same conclusion: allowing gays and lesbians to serve openly will not harm the military ... We hope that our collective statement underscores that the debate about the evidence is now officially over..." 
The Family Research Council's president, Tony Perkins, interpreted the CRWG data differently, writing that it "reveals that 40 percent of Marines and 25 percent of the Army could leave". Gates encouraged Congress to act quickly to repeal the law so that the military could adjust carefully rather than face a court decision requiring it to lift the policy immediately. The United States Senate held two days of hearings on December 2 and 3, 2010, to consider the CRWG report. Defense Secretary Robert Gates and Joint Chiefs Chairman Michael Mullen urged immediate repeal. The heads of the Marine Corps, Army, and Navy all advised against immediate repeal and expressed varied views on its eventual repeal. Oliver North, writing in "National Review" the next week, said that Gates' testimony showed "a deeply misguided commitment to political correctness". He interpreted the CRWG's data as indicating a high risk that large numbers of resignations would follow the repeal of DADT. Service members, especially combat troops, he wrote, "deserve better than to be treated like lab rats in Mr. Obama's radical social experiment". On December 9, 2010, another filibuster prevented debate on the Defense Authorization Act. In response to that vote, Senators Joe Lieberman and Susan Collins introduced a stand-alone bill containing the policy-related portions of the Defense Authorization Act, which they considered more likely to pass. It passed the House on a vote of 250 to 175 on December 15, 2010. On December 18, 2010, the Senate voted to end debate on its version of the bill by a cloture vote of 63–33. The final Senate vote was held later that same day, with the measure passing by a vote of 65–31. U.S. Secretary of Defense Robert Gates released a statement following the vote indicating that planning for implementation of the repeal would begin right away and would continue until Gates certified that conditions were met for an orderly repeal of the policy. President Obama signed the repeal into law on December 22, 2010. The repeal act established a process for ending the DADT policy. The President, the Secretary of Defense, and the Chairman of the Joint Chiefs of Staff were required to certify in writing that they had reviewed the Pentagon's report on the effects of DADT repeal, that the appropriate regulations had been reviewed and drafted, and that implementation of repeal regulations "is consistent with the standards of military readiness, military effectiveness, unit cohesion, and recruiting and retention of the Armed Forces". Once certification was given, DADT would be lifted after a 60-day waiting period. Representative Duncan D. Hunter announced plans in January 2011 to introduce a bill designed to delay the end of DADT. His proposed legislation would have required all of the chiefs of the armed services to submit the certification then required only of the President, the Defense Secretary, and the Joint Chiefs Chairman. In April, Perkins of the Family Research Council argued that the Pentagon was misrepresenting its own survey data and that hearings by the House Armed Services Committee, now under Republican control, could persuade Obama to withhold certification. Congressional efforts to prevent the change in policy from going into effect continued into May and June 2011. On January 29, 2011, Pentagon officials stated that the training process to prepare troops for the end of DADT would begin in February and would proceed quickly, though they suggested that it might not be completed in 2011. 
On the same day, the DOD announced it would not offer any additional compensation to service members who had been discharged under DADT, who received half of the separation pay other honorably discharged service members received. In May 2011, the U.S. Army reprimanded three colonels for performing a skit in March 2011 at a function at Yongsan Garrison, South Korea, that mocked the repeal. Also in May 2011, revelations that an April Navy memo on its DADT training guidelines contemplated allowing same-sex weddings in base chapels, with chaplains officiating if they so chose, resulted in a letter of protest from 63 Republican congressmen, citing the Defense of Marriage Act (DOMA) as controlling the use of federal property. Tony Perkins of the Family Research Council said the guidelines "make it even more uncomfortable for men and women of faith to perform their duties". A Pentagon spokesperson replied that DOMA "does not limit the type of religious ceremonies a chaplain may perform in a chapel on a military installation", and a Navy spokesperson said that "A chaplain can conduct a same-sex ceremony if it is in the tenets of his faith". A few days later the Navy rescinded its earlier instructions "pending additional legal and policy review and interdepartmental coordination". While waiting for certification, several service members were discharged at their own insistence until a July 6 ruling from a federal appeals court barred further enforcement of the U.S. military's ban on openly gay service members, with which the military promptly complied. Anticipating the lifting of DADT, some active duty service members wearing civilian clothes marched in San Diego's gay pride parade on July 16. The DOD noted that participation "does not constitute a declaration of sexual orientation". President Obama, Secretary of Defense Leon Panetta, and Admiral Mike Mullen, Chairman of the Joint Chiefs of Staff, sent the certification required by the Repeal Act to Congress on July 22, 2011, setting the end of DADT for September 20, 2011. A Pentagon spokesman said that service members discharged under DADT would then be able to re-apply to rejoin the military. At the end of August 2011, the DOD approved the distribution of the magazine produced by OutServe, an organization of gay and lesbian service members, at Army and Air Force base exchanges beginning with the September 20 issue, coinciding with the end of DADT. On September 20, Air Force officials announced that 22 Air Force Instructions were "updated as a result of the repeal of DADT". On September 30, 2011, the Department of Defense modified its regulations to reflect the repeal by deleting "homosexual conduct" as a ground for administrative separation. On the eve of repeal, US Air Force 1st Lt. Josh Seefried, one of the founders of OutServe, revealed his identity after two years of hiding behind a pseudonym. Senior Airman Randy Phillips, who had conducted a social media campaign seeking encouragement to come out and was already out to his military co-workers, came out to his father on the evening of September 19. When the video of their conversation he posted on YouTube went viral, it made him, in one journalist's estimation, "the poster boy for the DADT repeal". The moment the repeal took effect at midnight, US Navy Lt. Gary C. Ross married his same-sex partner of eleven and a half years, Dan Swezy, making them the first same-sex military couple to legally marry in the United States. Retired Rear Adm. Alan S. 
Steinman became the highest-ranking person to come out immediately following the end of DADT. HBO premiered a World of Wonder documentary, "The Strange History of Don't Ask, Don't Tell", on September 20. "Variety" called it "an unapologetic piece of liberal advocacy" and "a testament to what formidable opponents ignorance and prejudice can be". Discharge proceedings on the grounds of homosexuality, some begun years earlier, came to an end. In the weeks that followed, a series of firsts attracted press attention to the impact of the repeal. The Marine Corps was the first branch of the armed services to recruit from the LGBTQ community. Reservist Jeremy Johnson became the first person discharged under DADT to re-enlist. Jase Daniels became the first to return to active duty, re-joining the Navy as a third class petty officer. On December 2, Air Force intelligence officer Ginger Wallace became the first open LGBT service member to have a same-sex partner participate in the "pinning-on" ceremony that marked her promotion to colonel. On December 23, after 80 days at sea, US Navy Petty Officer 2nd Class Marissa Gaeta won the right to the traditional "first kiss" upon returning to port and shared it with her same-sex partner. On January 20, 2012, U.S. service members deployed to Bagram, Afghanistan, produced a video in support of the It Gets Better Project, which aims to support at-risk LGBT youth. Widespread news coverage continued even months after the repeal date, when a photograph of Marine Sgt. Brandon Morgan kissing his partner at a February 22, 2012, homecoming celebration on Marine Corps Base Hawaii went viral. When asked for comment, a spokesperson for the Marine Corps said: "It's your typical homecoming photo." On September 30, 2011, Under Secretary of Defense Clifford Stanley announced the DOD's policy that military chaplains are allowed to perform same-sex marriages "on or off a military installation" where local law permits them. His memo noted that "a chaplain is not required to participate in or officiate a private ceremony if doing so would be in variance with the tenets of his or her religion" and that "a military chaplain's participation in a private ceremony does not constitute an endorsement of the ceremony by DoD". Some religious groups announced that their chaplains would not participate in such weddings, including the Chaplain Alliance for Religious Liberty, an organization of evangelical Protestants, and Roman Catholics led by Archbishop Timothy Broglio of the Archdiocese for the Military Services, USA. In late October 2011, speaking at the Air Force Academy, Col. Gary Packard, leader of the team that drafted the DOD's repeal implementation plan, said: "The best quote I've heard so far is, 'Well, some people's Facebook status changed, but that was about it.'" In late November, discussing the repeal of DADT and its implementation, Marine Gen. James F. Amos said "I'm very pleased with how it has gone" and called it a "non-event". He said his earlier public opposition had been appropriate, based on ongoing combat operations and the negative assessment of repeal given by 56% of combat troops under his command in the Department of Defense's November 2010 survey. A Defense Department spokesperson said implementation of the repeal occurred without incident and added: "We attribute this success to our comprehensive pre-repeal training program, combined with the continued close monitoring and enforcement of standards by our military leaders at all levels." 
In December 2011, Congress considered two DADT-related amendments in the course of work on the National Defense Authorization Act for 2012. The Senate approved, 97–3, an amendment removing the prohibition on sodomy found in Article 125 of the Uniform Code of Military Justice, as the CRWG had recommended a year earlier. The House approved an amendment banning same-sex marriages from being performed at military bases or by military employees, including chaplains and other employees of the military when "acting in an official capacity". Neither amendment appeared in the final legislation. In July 2012, the Department of Defense granted permission for military personnel to wear their uniforms while participating in the San Diego Pride Parade, the first time U.S. military personnel were permitted to wear their service uniforms in such a parade. Marking the first anniversary of the passage of the Repeal Act, television news networks reported no incidents in the three months since DADT ended. One aired video of a social gathering for gay service members at a base in Afghanistan. Another reported on the experience of lesbian and gay troops, including some rejection after coming out to colleagues. The Palm Center, a think tank that studies issues of sexuality and the military, released a study in September 2012 that found no negative consequences from the DADT repeal and no effect on military effectiveness. The study began six months after repeal and concluded at the one-year mark. It included surveys of 553 generals and admirals who had opposed repeal, experts who had supported DADT, and more than 60 heterosexual, gay, lesbian, and bisexual active duty service personnel. On January 7, 2013, the ACLU reached a settlement with the federal government in "Collins v. United States". It provided for the payment of full separation pay to service members discharged under DADT since November 10, 2004, who had previously been granted only half that amount. Several candidates for the 2012 Republican presidential nomination called for the restoration of DADT, including Michele Bachmann, Rick Perry, and Rick Santorum. Newt Gingrich called for an extensive review of DADT's repeal. Ron Paul, who had voted for the Repeal Act, maintained his support for allowing military service by open homosexuals. Herman Cain called the issue "a distraction" and opposed reinstating DADT. Mitt Romney said that the winding down of military operations in Iraq and Afghanistan obviated his opposition to the repeal and that he was not proposing any change to the policy. On September 22, 2011, the audience at a Republican candidates' debate booed a U.S. soldier posted in Iraq who asked a question via video about the repeal of DADT, and none of the candidates noticed or responded to the crowd's behavior. Two days later, Obama commented on the incident while addressing a dinner of the Human Rights Campaign: "You want to be commander in chief? You can start by standing up for the men and women who wear the uniform of the United States, even when it's not politically convenient". In June 2012, Rep. Howard McKeon, Republican chair of the House Armed Services Committee, said he considered the repeal of DADT a settled issue and, if Romney became president, would not advocate its reinstatement, though others in his party might. 
In 1993, "Time" reported that 44% of those polled supported openly gay servicemembers, and in 1994, a CNN poll indicated that 53% of Americans believed gays and lesbians should be permitted to serve openly. According to a December 2010 "The Washington Post"-ABC News poll, 77% of Americans said gays and lesbians who publicly disclose their sexual orientation should be able to serve in the military. That number showed little change from polls over the previous two years, but represented the highest level of support in a Post-ABC poll. The support also cut across partisan and ideological lines, with majorities of Democrats (86%), Republicans (74%), independents (74%), liberals (92%), conservatives (67%), white evangelical Protestants (70%), and the non-religious (84%) in favor of homosexuals serving openly. A November 2010 survey by the Pew Research Center found that 58% of the U.S. public favored allowing gays and lesbians to serve openly in the military, while less than half as many (27%) were opposed. According to a November 2010 CNN/Opinion Research Corporation poll, 72% of adult Americans favored permitting people who are openly gay or lesbian to serve in the military, while 23% opposed it. "The main difference between the CNN poll and the Pew poll is in the number of respondents who told pollsters that they didn't have an opinion on this topic – 16 percent in the Pew poll compared to only five percent in the CNN survey", said CNN Polling Director Keating Holland. "The two polls report virtually the same number who say they oppose gays serving openly in the military, which suggests that there are some people who favor that change in policy but for some reason were reluctant to admit that to the Pew interviewers. That happens occasionally on topics where moral issues and equal-treatment issues intersect." A February 2010 Quinnipiac University Polling Institute national poll showed that 57% of American voters favored gays serving openly, compared to 36% opposed, while 66% said not allowing openly gay personnel to serve amounted to discrimination, compared to 31% who did not see it as discrimination. A CBS News/"The New York Times" national poll conducted at the same time showed 58% of Americans favored gays serving openly, compared to 28% opposed. Chaplain groups and religious organizations took various positions on DADT. Some felt that the policy needed to be withdrawn to make the military more inclusive. The Southern Baptist Convention fought the repeal of DADT, warning that its endorsements of chaplains might be withdrawn if the repeal took place. It took the position that allowing gay men and women to serve in the military without restriction would have a negative impact on the ability of chaplains who consider homosexuality a sin to speak freely about their religious beliefs. The Roman Catholic Church called for the retention of the policy, but had no plans to withdraw its priests from serving as military chaplains. Sixty-five retired chaplains signed a letter opposing repeal, stating that repeal would make it impossible for chaplains whose faith teaches that same-sex behavior is immoral to minister to military service members. Other religious organizations and agencies called the repeal of the policy a "non-event" or "non-issue" for chaplains, noting that chaplains have always supported military service personnel whether or not they agree with all their actions or beliefs. After the policy was introduced in 1993, the military discharged over 13,000 troops under DADT. 
The number of discharges per fiscal year under DADT dropped sharply after the September 11 attacks and remained comparatively low through to the repeal, though discharges still exceeded 600 every year until 2009. In November 2019, both Rhode Island and New York State enacted laws restoring military benefits to gay and lesbian veterans discharged under the policy. An estimated 100,000 individuals were affected by the "don't ask, don't tell" policy before its repeal in September 2011.
https://en.wikipedia.org/wiki?curid=8690
Divination
Divination (from Latin "divinare", 'to foresee, to foretell, to predict, to prophesy', related to "divinus", 'divine'), or "to be inspired by a god", is the attempt to gain insight into a question or situation by way of an occultic, standardized process or ritual. Used in various forms throughout history, diviners ascertain their interpretations of how a querent should proceed by reading signs, events, or omens, or through alleged contact with a supernatural agency. Divination can be seen as a systematic method of organizing what appear to be disjointed, random facets of existence so that they provide insight into a problem at hand. If a distinction is to be made between divination and fortune-telling, divination has a more formal or ritualistic element and often has a more social character, usually in a religious context, as seen in traditional African medicine. Fortune-telling, on the other hand, is a more everyday practice for personal purposes. Particular divination methods vary by culture and religion. Divination has long been criticized. In the modern era it has been dismissed by the scientific community and skeptics as superstition; experiments do not support the idea that divination techniques can actually predict the future more reliably or precisely than would be possible without them. In antiquity it was attacked by philosophers such as the Academic Skeptic Cicero in "De Divinatione" and the Pyrrhonist Sextus Empiricus in "Against the Astrologers". The satirist Lucian devoted a witty essay to "Alexander the false prophet". The Oracle of Amun at the Siwa Oasis was made famous when Alexander the Great visited it after conquering Egypt from Persia in 332 BC. Both oracles and seers in ancient Greece practiced divination. Oracles were the conduits for the gods on earth; their prophecies were understood to be the will of the gods verbatim. Because of the high demand for oracle consultations and the oracles' limited work schedule, they were not the main source of divination for the ancient Greeks. That role fell to the seers ("manteis"). Seers were not in direct contact with the gods; instead, they were interpreters of signs provided by the gods. Seers used many methods to explicate the will of the gods, including extispicy, bird signs, and the like. They were more numerous than the oracles and did not keep a limited schedule; thus, they were highly valued by all Greeks, not just those with the capacity to travel to Delphi or other such distant sites. The disadvantage of seers was that they could answer only direct yes-or-no questions. Oracles could answer more generalized questions, and seers often had to perform several sacrifices in order to get the most consistent answer. For example, if a general wanted to know whether the omens were proper for him to advance on the enemy, he would ask his seer both that question and whether it were better for him to remain on the defensive. If the seer gave consistent answers, the advice was considered valid. During battle, generals would frequently consult seers at both the campground (a process called the "hiera") and the battlefield (called the "sphagia"). The hiera entailed the seer slaughtering a sheep and examining its liver for answers regarding a more generic question; the sphagia involved killing a young female goat by slitting its throat and noting the animal's last movements and blood flow. The battlefield sacrifice occurred only when two armies prepared for battle against each other. 
Neither force would advance until the seer revealed appropriate omens. Because the seers had such power over influential individuals in ancient Greece, many were skeptical of their accuracy and honesty; how honest any seer was depended entirely on the individual. Despite the doubt surrounding individual seers, the craft as a whole was well regarded and trusted by the Greeks. The divination method of casting lots (cleromancy) was used by the remaining eleven disciples of Jesus in Acts 1 to select a replacement for Judas Iscariot. Therefore, divination was arguably an accepted practice in the early church. However, divination came to be viewed as a pagan practice by the Christian emperors of Rome. In 692 the Quinisext Council, also known as the "Council in Trullo" in the Eastern Orthodox Church, passed canons to eliminate pagan and divination practices. Fortune-telling and other forms of divination were widespread through the Middle Ages. In the constitution of 1572 and the public regulations of 1661 of Kur-Saxony, capital punishment was prescribed for those predicting the future. Laws forbidding divination practice continue to this day. Småland is famous for Årsgång, a practice which persisted in some parts of the province until the early 19th century. Generally occurring on Christmas and New Year's Eve, it required the practitioner to fast and keep away from light in a room until midnight, then complete a set of complex steps, interpreting symbols encountered throughout the journey in order to foresee the coming year. Divination was a central component of ancient Mesoamerican religious life. Many Aztec gods, including central creator gods, were described as diviners and were closely associated with sorcery. Tezcatlipoca is the patron of sorcerers and practitioners of magic. His name means "smoking mirror", a reference to a device used for divinatory scrying. In the Mayan "Popol Vuh", the creator gods Xmucane and Xpiacoc perform divinatory hand casting during the creation of people. Every civilization that developed in pre-Columbian Mexico, from the Olmecs to the Aztecs, practiced divination in daily life, both public and private. Scrying through the use of reflective water surfaces, mirrors, or the casting of lots was among the most widespread forms of divinatory practice. Visions derived from hallucinogens were another important form of divination, and are still widely used among contemporary diviners of Mexico. Among the more common hallucinogenic plants used in divination are morning glory, jimson weed, and peyote. Although Japan retains a history of traditional and local methods of divination, such as "onmyōdō", contemporary divination in Japan, called "uranai", derives from outside sources. Contemporary methods of divination in Japan include both Western and Chinese astrology, geomancy or feng shui, tarot cards, I Ching (Book of Changes), and physiognomy (methods of reading the body to identify traits). Rather than indicating cultural appropriation, understood as inappropriate acts of appropriation by a dominant culture in the context of colonization or inequality, Japanese divination represents instances of unique and creative amalgamation of cultural elements. This concept may be referred to as syncretism, creolization, or cultural hybridity. In the example of feng shui, Japanese adaptations extend outside the traditional form, featuring such hybrids as "car feng shui", "workplace feng shui", "makeup feng shui", and even "toilet feng shui". 
In Japan, divination methods include Futomani from the Shinto tradition. Personality typing as a form of divination has been prevalent in Japan since the 1980s. Various methods exist for divining personality type. Each attempts to reveal glimpses of an individual's destiny, productive and inhibiting traits, future parenting techniques, and compatibility in marriage. Personality type is increasingly important for young Japanese, who consider personality the driving factor of compatibility, given the ongoing marriage drought and birth rate decline in Japan. An import to Japan, Chinese zodiac signs based on the birth year in 12-year cycles (rat, ox, tiger, hare, dragon, snake, horse, sheep, monkey, cock, dog, and boar) are frequently combined with other forms of divination, such as so-called 'celestial types' based on the planets (Saturn, Venus, Mars, Jupiter, Mercury, or Uranus). Personality can also be divined using cardinal directions, the four elements (water, earth, fire, air), and yin-yang. Names can also lend important personality information under name classification, which asserts that names bearing certain Japanese vowel sounds (a, i, u, e, o) share common characteristics. Numerology, which utilizes methods of deriving 'birth numbers' from significant numbers such as the birth date, may also reveal character traits of individuals. Individuals can also assess their own and others' personalities according to physical characteristics. Blood type remains a popular form of divination from physiology. Stemming from Western influences, body reading, or "ninsou", determines personality traits based on body measurements. The face is the most commonly analyzed feature, with eye size, pupil shape, mouth shape, and eyebrow shape representing the most important traits. An upturned mouth may indicate a cheerful disposition, and a triangular eyebrow may indicate that someone is strong-willed. Methods of assessment in daily life may include self-taken measurements or quizzes. As such, magazines targeted at women in their early-to-mid twenties feature the highest concentration of personality assessment guides; according to the Japan Magazine Advertising Association ("nihon zasshi koukoku kyoukai"), approximately 144 different women's magazines aimed at this audience are published in Japan. The adaptation of the Western divination method of tarot cards into Japanese culture presents a particularly unique example of contemporary divination, as this adaptation mingles with Japan's robust visual culture. Japanese tarot cards are created by professional artists, advertisers, and fans of tarot. One tarot card collector claimed to have accumulated more than 1,500 Japan-made decks of tarot cards. Japanese tarot cards fall into diverse categories. The images on the cards may come from Japanese popular culture, such as characters from manga and anime including Hello Kitty, or may feature cultural symbols. Tarot cards may adapt the images of Japanese historical figures, such as the high priestess Himiko (170–248 CE) or the imperial court wizard Abe no Seimei (921–1005 CE). Still others may feature images of cultural displacement, such as English knights, pentagrams, the Jewish Torah, or invented glyphs. The introduction of such cards began by the 1930s and reached prominence in the 1970s. Japanese tarot cards were originally created by men, often based on the Rider-Waite-Smith tarot published by the Rider Company in London in 1909. Since then, the practice of Japanese tarot has become overwhelmingly feminine and intertwined with kawaii culture. 
Referring to the cuteness of the cards, Japanese model Kuromiya Niina was quoted as saying, "because the images are cute, even holding them is enjoyable." While these differences exist, Japanese tarot cards function similarly to their Western counterparts: cards are shuffled and cut into piles, then used to forecast the future, for spiritual reflection, or as a tool for self-understanding. As seen above, many different cultures around the world use divination as a way of understanding the future. The most common act of divination in the Taiwanese village of Bao'an (not the village's actual name, which is withheld for privacy) is called the poe. "Poe" translates into English as "moon boards". The poe consists of two wood or bamboo blocks cut into the shape of a crescent moon; one side of each block is rounded while the other is flat, and the two blocks are mirror images. The diviner holds both crescents out in the palms and, while kneeling, raises them to forehead level. Once in this position, the blocks are dropped, and the future can be understood depending on how they land. If both fall flat side up or both fall rounded side up, that can be taken as a failure of the deity to agree. If the blocks land one rounded and one flat, the deity agrees. A "laughing poe" occurs when the rounded sides land down and the blocks rock before coming to a standstill. A "negative poe" is seen when the flat sides fall downward and abruptly stop, which indicates anger. A positive fall is called a "sacred poe", although the negative falls are not usually taken seriously. As the blocks are being dropped, the question is said in a murmur, and if the answer is yes, the blocks are dropped again. To make sure the answer is definitely a yes, the blocks must fall in a "yes" position three times in a row. A more serious type of divination is the kiō-á. A small wooden chair is fitted around its sides with small pieces of wood that can move up and down in their sockets, producing a clicking sound whenever the chair is moved. Two men hold this chair by its legs before an altar while incense is burned, and the supernatural agent is asked to descend into the chair; its presence is signaled by an onset of motion. Eventually, the chair crashes onto a table prepared with wood chips and burlap, tracing characters said to be written by the god possessing the chair, which are then interpreted. Divination is widespread throughout Africa. Among many examples, it is one of the central tenets of Serer religion in Senegal. Only those who have been initiated as Saltigues (the Serer high priests and priestesses) can divine the future. These are the "hereditary rain priests", whose role is both religious and medicinal. Specialized diviners called Ob'guega (doctors of the Oguega oracle) and Ob'Oronmila (doctors of the Oronmila oracle) from the Edo people of West Africa have used divination for thousands of years as a means of foretelling the past, present, and future. These diviners are initiated and trained in Iha (divination) of either Ominigbon or Oronmila (Benin Orunmila). The Yoruba people of West Africa are internationally known for having developed the Ifá system, an intricate process of divination that is performed by an "Awo", an initiated priest or priestess of Orunmila, the spirit of the Yoruba oracle.
https://en.wikipedia.org/wiki?curid=8691
Diets of Nuremberg
The Diets of Nuremberg, also called the Imperial Diets of Nuremberg, took place at different times between the Middle Ages and the 17th century. The first Diet of Nuremberg, in 1211, elected the future emperor Frederick II of Hohenstaufen as German king. At the Diet of 1356 the Emperor Charles IV issued the Golden Bull of 1356, which required each Holy Roman Emperor to summon the first Imperial Diet after his election at Nuremberg. Apart from that, a number of other diets were held there. Important to Protestantism were the Diets of 1522 ("First Diet of Nuremberg"), 1524 ("Second Diet of Nuremberg") and 1532 ("Third Diet of Nuremberg"). The First Diet of Nuremberg has become known mostly for the reaction of the papacy to the decision made on Luther at the Diet of Worms the previous year. The new pope, Adrian VI, sent his nuncio Francesco Chieregati to the Diet to insist both that the Edict of Worms be executed and that action be taken promptly against Luther. This demand, however, was coupled with a promise of thorough reform in the Roman hierarchy, frankly admitting the partial guilt of the Vatican in the decline of the Church. In the recess drafted on 9 February 1523, however, the German princes rejected this appeal. Using Adrian's admissions, they declared that they could not have it appear 'as though they wished to oppress evangelical truth and assist unchristian and evil abuses.' The Second Diet of Nuremberg generally took the same line as the first, and the Estates reiterated their earlier decision. The cardinal-legate Campeggio, who was present, showed his disgust at the behaviour of the Estates. On 18 April, the Estates decided to call 'a general gathering of the German nation' to meet at Speyer the following year and to decide what would be done until the meeting of the general council of the Church which they demanded. This resulted in the Diet of Speyer (1526), which in turn was followed by the Diet of Speyer (1529); the latter included the Protestation at Speyer.
https://en.wikipedia.org/wiki?curid=8693
Dr. Strangelove
Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb, more commonly known simply as Dr. Strangelove, is a 1964 black comedy film that satirizes Cold War fears of a nuclear conflict between the Soviet Union and the United States. The film was directed, produced, and co-written by Stanley Kubrick and stars Peter Sellers, George C. Scott, Sterling Hayden, and Slim Pickens. Production took place in the United Kingdom. The film is loosely based on Peter George's thriller novel "Red Alert" (1958). The story concerns an unhinged United States Air Force general who orders a first-strike nuclear attack on the Soviet Union. It separately follows the President of the United States, his advisors, the Joint Chiefs of Staff, and a Royal Air Force (RAF) exchange officer as they attempt to prevent the crew of a B-52 bomber, acting on the general's orders, from nuking the Soviets and starting an atomic holocaust. The film is often considered one of the best comedies ever made, as well as one of the greatest films of all time. In 1998, the American Film Institute ranked it twenty-sixth in its list of the best American movies (in the 2007 edition, the film ranked thirty-ninth), and in 2000, it was listed as number three on its list of the funniest American films. In 1989, the United States Library of Congress included "Dr. Strangelove" as one of the first twenty-five films selected for preservation in the National Film Registry for being "culturally, historically, or aesthetically significant". United States Air Force Brigadier General Jack D. Ripper is commander of Burpelson Air Force Base, which houses the Strategic Air Command (SAC) 843rd Bomb Wing, flying B-52 bombers armed with hydrogen bombs. The 843rd Wing is flying on airborne alert, two hours from its targets inside the USSR. General Ripper orders his executive officer, Group Captain Lionel Mandrake of the UK Royal Air Force, to put the base on alert and to issue "Wing Attack Plan R" to the patrolling aircraft, one of which is commanded by Major T. J. "King" Kong. All of the aircraft commence an attack flight on the USSR and set their radios to allow communications only through their CRM 114 discriminators, which are designed to accept only communications preceded by a secret three-letter code known only to General Ripper. Mandrake discovers that no war order has been issued by the Pentagon and tries to stop Ripper, who locks them both in his office. Ripper tells Mandrake that he believes the Soviets have been using fluoridation of American water supplies to pollute the "precious bodily fluids" of Americans, and Mandrake realizes that Ripper has gone insane. In the War Room at the Pentagon, General Buck Turgidson briefs President Merkin Muffley and other officers on how Plan R enables a senior officer to launch a strike against the Soviets if all of his superiors have been killed in a first strike on the United States. Turgidson reports that his men are trying every possible three-letter CRM code to issue the stand-down order, but that could take over two days, and the planes are due to reach their targets in a couple of hours. Muffley orders the U.S. Army to storm the base and arrest General Ripper. Turgidson attempts to convince Muffley to let the attack continue, but Muffley refuses to be party to a nuclear first strike. Instead, he brings Soviet ambassador Alexei de Sadeski into the War Room to telephone Soviet Premier Dimitri Kissov on the "hot line". 
Muffley warns the Premier of the impending attack and offers to reveal the positions of the bombers and their targets so that the Soviets can protect themselves. After a heated discussion in Russian with the Premier, the ambassador informs President Muffley that the Soviet Union has created a doomsday machine, which consists of many buried bombs jacketed with "cobalt-thorium G" connected to a computer network set to detonate them automatically should any nuclear attack strike the country. Within two months after detonation, the cobalt-thorium G would encircle the planet in a radioactive "doomsday shroud", wiping out all human and animal life and rendering the surface of the Earth uninhabitable. The device cannot be deactivated, as it is programmed to explode if any such attempt is made. When the President's wheelchair-using scientific advisor, the ex-Nazi Dr. Strangelove, points out that such a doomsday machine would only be an effective deterrent if everyone knew about it, de Sadeski replies that the Soviet Premier had planned to reveal its existence to the world the following week. Meanwhile, U.S. Army troops arrive at Burpelson. General Ripper shoots and kills himself, while Mandrake identifies Ripper's CRM code from his desk blotter ("OPE", a variant of both "Peace on Earth" and "Purity of Essence") and relays it to the Pentagon. Using the recall code, SAC successfully recalls all of the bombers except one, whose radio equipment has been destroyed. The Soviets attempt to find it, but its commanding officer, Major Kong, with his fuel dwindling, has switched to a closer backup target. As the plane approaches the new target, the crew is unable to open the damaged bomb bay doors. Kong enters the bomb bay and repairs the broken electrical wiring while sitting on the H-bomb, whereupon the doors open and the bomb falls, with Kong straddling it, and detonates over a Soviet missile site. Back in the War Room, Dr. Strangelove recommends that the President gather several hundred thousand people to live in deep underground mines where the radiation will not penetrate. He suggests a 10:1 female-to-male ratio for a breeding program to repopulate the Earth once the radiation has subsided. Turgidson, worried that the Soviets will do the same, warns about a "mineshaft gap", while de Sadeski secretly photographs the War Room. Dr. Strangelove declares he has a plan, but then rises from his wheelchair and announces "Mein Führer, I can walk!" as the Doomsday Machine activates. The film ends with a montage of nuclear explosions, accompanied by Vera Lynn's version of the World War II song "We'll Meet Again". Columbia Pictures agreed to finance the film if Peter Sellers played at least four major roles. The condition stemmed from the studio's opinion that much of the success of Kubrick's previous film "Lolita" (1962) was based on Sellers's performance, in which his single character assumes a number of identities. Sellers had also played three roles in "The Mouse That Roared" (1959). Kubrick accepted the demand, later explaining that "such crass and grotesque stipulations are the "sine qua non" of the motion-picture business". Sellers ended up playing three of the four roles written for him. He had been expected to play Air Force Major T. J. "King" Kong, the B-52 Stratofortress aircraft commander, but from the beginning Sellers was reluctant: he felt his workload was too heavy, and he worried he would not properly portray the character's Texas accent. 
Kubrick pleaded with him and asked the screenwriter Terry Southern (who had been raised in Texas) to record a tape with Kong's lines spoken in the correct accent. Using Southern's tape, Sellers managed to get the accent right, and he started acting in the scenes in the aircraft, but then he sprained his ankle and could not work in the cramped cockpit set. Sellers is said to have improvised much of his dialogue, with Kubrick incorporating the ad-libs into the written screenplay so that the improvised lines became part of the canonical script, a practice known as retroscripting. According to film critic Alexander Walker, the author of biographies of both Sellers and Kubrick, the role of Group Captain Lionel Mandrake was the easiest of the three for Sellers to play, since he was aided by his experience of mimicking his superiors while serving in the Royal Air Force during World War II. The character also bears a strong resemblance to Sellers's friend and occasional co-star Terry-Thomas and to the prosthetic-limbed RAF ace Sir Douglas Bader. For his performance as President Merkin Muffley, Sellers assumed the accent of an American Midwesterner. Sellers drew inspiration for the role from Adlai Stevenson, a former Illinois governor who was the Democratic candidate for the 1952 and 1956 presidential elections and the U.N. ambassador during the Cuban Missile Crisis. In early takes, Sellers faked cold symptoms to emphasize the character's apparent weakness. That caused frequent laughter among the film crew, ruining several takes. Kubrick ultimately found this comic portrayal inappropriate, feeling that Muffley should be a serious character. In later takes Sellers played the role straight, though the President's cold is still evident in several scenes. In keeping with Kubrick's satirical character names, a "merkin" is a pubic hair wig. The president is bald, and his last name is "Muffley"; both are additional allusions to a merkin. Dr. Strangelove is an ex-Nazi scientist, suggesting Operation Paperclip, the US effort to recruit top German technical talent at the end of World War II. He serves as President Muffley's scientific adviser in the War Room. When General Turgidson wonders aloud what kind of name "Strangelove" is, saying to Mr. Staines (Jack Creley) that it is not a "Kraut name", Staines responds that Strangelove's original German surname was "Merkwürdigliebe" ("Strange love" in German) and that "he changed it when he became a citizen". Twice in the film, Strangelove accidentally addresses the president as "Mein Führer". Dr. Strangelove did not appear in the book "Red Alert". The character is an amalgamation of RAND Corporation strategist Herman Kahn, mathematician and Manhattan Project principal John von Neumann, rocket scientist Wernher von Braun (a central figure in Nazi Germany's rocket development program recruited to the US after the war), and Edward Teller, the "father of the hydrogen bomb". It has been claimed that the character was based on Henry Kissinger, but Kubrick and Sellers denied this; Sellers said, "Strangelove was never modeled after Kissinger—that's a popular misconception. It was always Wernher von Braun." Furthermore, Henry Kissinger points out in his memoirs that at the time of the writing of "Dr. Strangelove", he was an unknown academic. The wheelchair-using Strangelove furthers a Kubrick trope of the menacing, seated antagonist, first depicted in "Lolita" through the character Dr. Zempf. 
Strangelove's accent was influenced by that of Austrian-American photographer Weegee, who worked for Kubrick as a special photographic effects consultant. Strangelove's appearance echoes the mad scientist archetype as seen in the character Rotwang in Fritz Lang's film "Metropolis" (1927). Sellers's Strangelove takes from Rotwang the single black-gloved hand (which, in Rotwang's case, is mechanical because of a lab accident), the wild hair and, most importantly, his ability to avoid being controlled by political power. According to Alexander Walker, Sellers improvised Dr. Strangelove's lapse into the Nazi salute, borrowing one of Kubrick's black leather gloves for the uncontrollable hand that makes the gesture; Dr. Strangelove apparently suffers from alien hand syndrome. Kubrick wore the gloves on the set to avoid being burned when handling hot lights, and Sellers, recognizing the potential connection to Lang's work, found them to be menacing. Slim Pickens, an established character actor and veteran of many Western films, was eventually chosen to replace Sellers as Major Kong after Sellers' injury. Terry Southern's biographer, Lee Hill, said the part was originally written with John Wayne in mind, and that Wayne was offered the role after Sellers was injured, but he immediately turned it down. Dan Blocker of the "Bonanza" western television series was approached to play the part, but according to Southern, Blocker's agent rejected the script as being "too pinko". Kubrick then recruited Pickens, whom he knew from his own brief involvement in a Marlon Brando western film project that was eventually filmed as "One-Eyed Jacks". His fellow actor James Earl Jones recalls, "He was Major Kong on and off the set—he didn't change a thing—his temperament, his language, his behavior." Pickens was not told that the movie was a black comedy, and he was only given the script for scenes he was in, in order to get him to play it "straight". Kubrick's biographer John Baxter discussed the casting in the documentary "Inside the Making of Dr. Strangelove". Pickens, who had previously played only supporting and character roles, said that his appearance as Maj. Kong greatly improved his career. He later commented, "After "Dr. Strangelove" the roles, the dressing rooms, and the checks all started getting bigger." Kubrick tricked Scott into playing the role of Gen. Turgidson far more ridiculously than Scott was comfortable doing. Kubrick talked Scott into doing over-the-top "practice" takes, which Kubrick told Scott would never be used, as a way to warm up for the "real" takes. Kubrick used these takes in the final film, causing Scott to swear never to work with Kubrick again. During the filming, Kubrick and Scott had different opinions regarding certain scenes, but Kubrick got Scott to conform largely by repeatedly beating him at chess, which they played frequently on the set. Scott, a skilled player himself, later said that while he and Kubrick may not have always seen eye to eye, he respected Kubrick immensely for his skill at chess. Stanley Kubrick started with nothing but a vague idea to make a thriller about a nuclear accident, building on the widespread Cold War fear for survival. While doing research, Kubrick gradually became aware of the subtle and paradoxical "balance of terror" between nuclear powers. At Kubrick's request, Alastair Buchan (the head of the Institute for Strategic Studies) recommended the thriller novel "Red Alert" by Peter George. 
Kubrick was impressed with the book, which had also been praised by game theorist and future Nobel Prize in Economics winner Thomas Schelling in an article written for the "Bulletin of the Atomic Scientists" and reprinted in "The Observer", and immediately bought the film rights. In 2006, Schelling wrote that conversations between Kubrick, Schelling, and George in late 1960 about a treatment of "Red Alert" updated with intercontinental missiles eventually led to the making of the film. In collaboration with George, Kubrick started writing a screenplay based on the book. While writing the screenplay, they benefited from some brief consultations with Schelling and, later, Herman Kahn. In keeping with the tone of the book, Kubrick originally intended to film the story as a serious drama. However, as he later explained in interviews, he began to see comedy inherent in the idea of mutual assured destruction as he wrote the first draft. Among the titles that Kubrick considered for the film were "Dr. Doomsday or: How to Start World War III Without Even Trying", "Dr. Strangelove's Secret Uses of Uranus", and "Wonderful Bomb". After deciding to make the film a black comedy, Kubrick brought in Terry Southern as a co-writer in late 1962. The choice was influenced by his reading of Southern's comic novel "The Magic Christian", which Kubrick had received as a gift from Peter Sellers, and which itself became a Sellers film in 1969. Southern made important contributions to the film, but his role led to a rift between Kubrick and Peter George. After "Life" magazine published a photo-essay on Southern in August 1964 which implied that Southern had been the script's principal author—a misperception neither Kubrick nor Southern did much to dispel—Peter George wrote an indignant letter to the magazine, published in its September 1964 issue, in which he pointed out that he had both written the film's source novel and collaborated on various incarnations of the script over a period of ten months, whereas "Southern was briefly employed ... to do some additional rewriting for Kubrick and myself and fittingly received a screenplay credit behind Mr. Kubrick and myself". "Dr. Strangelove" was filmed at Shepperton Studios, near London, as Sellers was in the middle of a divorce at the time and unable to leave England. The sets occupied three main sound stages: the Pentagon War Room, the B-52 Stratofortress bomber, and a third containing both the motel room and General Ripper's office and outside corridor. The studio's buildings were also used as the Air Force base exterior. The film's set design was done by Ken Adam, the production designer of several "James Bond" films (at the time he had already worked on "Dr. No"). The black-and-white cinematography was by Gilbert Taylor, and the film was edited by Anthony Harvey and an uncredited Kubrick. The original musical score for the film was composed by Laurie Johnson and the special effects were by Wally Veevers. The theme of the chorus from the bomb run scene is a modification of "When Johnny Comes Marching Home". Sellers and Kubrick got on famously during the film's production and shared a love of photography. For the War Room, Ken Adam first designed a two-level set which Kubrick initially liked, only to decide later that it was not what he wanted. Adam next began work on the design that was used in the film, an expressionist set that was compared with "The Cabinet of Dr. Caligari" and Fritz Lang's "Metropolis". 
It was an enormous concrete room suggesting a bomb shelter, with a triangular shape (based on Kubrick's idea that this particular shape would prove the most resistant against an explosion). One side of the room was covered with gigantic strategic maps reflected in a shiny black floor inspired by dance scenes in Fred Astaire films. In the middle of the room there was a large circular table lit from above by a circle of lamps, suggesting a poker table. Kubrick insisted that the table be covered with green baize (although this could not be seen in the black-and-white film) to reinforce the actors' impression that they were playing 'a game of poker for the fate of the world.' Kubrick asked Adam to build the set ceiling in concrete to force the director of photography to use only the on-set lights from the circle of lamps. Moreover, each lamp in the circle of lights was carefully placed and tested until Kubrick was happy with the result. Lacking cooperation from the Pentagon in the making of the film, the set designers reconstructed the aircraft cockpit to the best of their ability by comparing the cockpit of a B-29 Superfortress with a single photograph of the cockpit of a B-52 and relating this to the geometry of the B-52's fuselage. The B-52 was state-of-the-art in the 1960s, and its cockpit was off-limits to the film crew. When some United States Air Force personnel were invited to view the reconstructed B-52 cockpit, they said that "it was absolutely correct, even to the little black box which was the CRM." It was so accurate that Kubrick was concerned about whether Adam's team had carried out all its research legally. In several shots of the B-52 flying over the polar ice en route to Russia, the shadow of the actual camera plane, a Boeing B-17 Flying Fortress, is visible on the icecap below. The B-52 was a scale model composited into the Arctic footage, which was sped up to create a sense of jet speed. Home-movie footage included in "Inside the Making of Dr. Strangelove" on the 2001 Special Edition DVD release of the film shows clips of the B-17 with a cursive "Dr. Strangelove" painted over the rear entry hatch on the right side of the fuselage. In 1967, some of the flying footage from "Dr. Strangelove" was re-used in The Beatles' television film "Magical Mystery Tour". As told by editor Roy Benson in the BBC Radio documentary "Celluloid Beatles", the production team of "Magical Mystery Tour" lacked footage to cover the sequence for the song "Flying". Benson had access to the aerial footage filmed for the B-52 sequences of "Dr. Strangelove", which was stored at Shepperton Studios. The use of the footage prompted Kubrick to call Benson to complain. "Red Alert" author Peter George collaborated on the screenplay with Kubrick and satirist Terry Southern. "Red Alert" was more solemn than its film version, and it did not include the character Dr. Strangelove, though the main plot and technical elements were quite similar. A novelization of the actual film, rather than a reprint of the original novel, was published by Peter George, based on an early draft in which the narrative is bookended by the account of aliens who, having arrived at a desolated Earth, try to piece together what has happened. It was reissued in October 2015 by Candy Jar Books, featuring never-before-published material on Strangelove's early career. During the filming of "Dr. Strangelove", Stanley Kubrick learned that "Fail Safe", a film with a similar theme, was being produced. 
Although "Fail Safe" was to be an ultrarealistic thriller, Kubrick feared that its plot resemblance would damage his film's box office potential, especially if it were released first. Indeed, the novel "Fail-Safe" (on which the film is based) is so similar to "Red Alert" that Peter George sued on charges of plagiarism and settled out of court. What worried Kubrick the most was that "Fail Safe" boasted the acclaimed director Sidney Lumet and the first-rate dramatic actors Henry Fonda as the American president and Walter Matthau as the advisor to the Pentagon, Professor Groeteschele. Kubrick decided to throw a legal wrench into "Fail Safe"s production gears. Lumet recalled in the documentary "Inside the Making of Dr. Strangelove": "We started casting. Fonda was already set ... which of course meant a big commitment in terms of money. I was set, Walter [Bernstein, the screenwriter] was set ... And suddenly, this lawsuit arrived, filed by Stanley Kubrick and Columbia Pictures." Kubrick argued that "Fail Safe"s own source novel "Fail-Safe" (1960) had been plagiarized from Peter George's "Red Alert", to which Kubrick owned creative rights. He pointed out unmistakable similarities in intentions between the characters Groeteschele and Strangelove. The plan worked, and "Fail Safe" opened eight months after "Dr. Strangelove", to critical acclaim but mediocre ticket sales. The end of the film shows Dr. Strangelove exclaiming, ""Mein Führer," I can walk!" before cutting to footage of nuclear explosions, with Vera Lynn and her audience singing "We'll Meet Again". This footage comes from nuclear tests such as shot BAKER of Operation Crossroads at Bikini Atoll, the Trinity test, a test from Operation Sandstone and the hydrogen bomb tests from Operation Redwing and Operation Ivy. In some shots, old warships (such as the German heavy cruiser "Prinz Eugen"), which were used as targets, are plainly visible. In others, the smoke trails of rockets used to create a calibration backdrop can be seen. Former "Goon Show" writer and friend of Sellers Spike Milligan was credited with suggesting Vera Lynn's song for the ending. It was originally planned for the film to end with a scene that depicted everyone in the War Room involved in a pie fight. Accounts vary as to why the pie fight was cut. In a 1969 interview, Kubrick said, "I decided it was farce and not consistent with the satiric tone of the rest of the film." Critic Alexander Walker observed that "the cream pies were flying around so thickly that people lost definition, and you couldn't really say whom you were looking at." Nile Southern, son of screenwriter Terry Southern, suggested the fight was intended to be less jovial: "Since they were laughing, it was unusable, because instead of having that totally black, which would have been amazing, like, this blizzard, which in a sense is metaphorical for all of the missiles that are coming, as well, you just have these guys having a good old time. So, as Kubrick later said, 'it was a disaster of Homeric proportions. A first test screening of the film was scheduled for November 22, 1963, the day of the assassination of John F. Kennedy. The film was just weeks from its scheduled premiere, but because of the assassination, the release was delayed until late January 1964, as it was felt that the public was in no mood for such a film any sooner. 
During post-production, one line by Slim Pickens, "a fella could have a pretty good weekend in Dallas with all that stuff," was dubbed to change "Dallas" to "Vegas," since Dallas was where Kennedy was killed. The original reference to Dallas survives in the English audio of the French-subtitled version of the film. The assassination also serves as another possible reason that the pie-fight scene was cut. In the scene, after Muffley takes a pie in the face, General Turgidson exclaims: "Gentlemen! Our gallant young president has been struck down in his prime!" Editor Anthony Harvey stated that the scene "would have stayed, except that Columbia Pictures were horrified, and thought it would offend the president's family." Kubrick and others have said that the scene had already been cut before preview night because it was inconsistent with the rest of the film. In 1994, the film was rereleased. While the 1964 release used a 1.85:1 aspect ratio, the new print was in the slightly squarer 1.66:1 (5:3) ratio that Kubrick had originally intended. "Dr. Strangelove" takes passing shots at numerous contemporary Cold War attitudes, such as the "missile gap", but it primarily focuses its satire on the theory of mutual assured destruction (MAD), developed by John von Neumann, in which each side is supposed to be deterred from a nuclear war by the prospect of a universal cataclysmic disaster regardless of who "won". Military strategist and former physicist Herman Kahn, in the book "On Thermonuclear War" (1960), used the theoretical example of a "doomsday machine" to illustrate the limitations of MAD. The concept of such a machine is consistent with MAD doctrine when it is logically pursued to its conclusion. It thus worried Kahn that the military might like the idea of a doomsday machine and build one. Kahn, a leading critic of MAD and of the Eisenhower administration's doctrine of massive retaliation upon the slightest provocation by the USSR, considered MAD to be foolish bravado, and urged America to instead plan for proportionality, and thus even a limited nuclear war. With this reasoning, Kahn became one of the architects of the flexible response doctrine, which, while superficially resembling MAD, allowed for responding to a limited nuclear strike with a proportional, or calibrated, return of fire (see "On Escalation"). Kahn educated Kubrick on the concept of the semirealistic "cobalt-thorium G" doomsday machine, and Kubrick then used the concept for the film. Kahn in his writings and talks would often come across as cold and calculating, for example with his use of the term "megadeaths" and in his willingness to estimate how many human lives the United States could lose and still rebuild economically. Kahn's cold analytical attitude towards millions of deaths is reflected in Turgidson's remark to the president about the outcome of a preemptive nuclear war: "Mr. President, I'm not saying we wouldn't get our hair mussed. But I do say no more than ten to twenty million killed, tops, uh, depending on the breaks." Turgidson has a binder that is labelled "World Targets in Megadeaths", a term coined in 1953 by Kahn and popularized in his 1960 book "On Thermonuclear War". The post-hoc planning in the film, by Dr. 
Strangelove, once the MAD policy has clearly broken down, to keep the human race alive and to regenerate from populations sheltered in mineshafts, is a parody of those strict adherents of the MAD doctrine who are opposed to the prior creation of fallout shelters on ideological grounds. To such adherents, talk of survival takes the "Assured Destruction" out of "Mutual Assured Destruction", hence no preparations should be conducted for fear of "destabilizing" the MAD doctrine. Moreover, it is also somewhat of a parody of Nelson Rockefeller, Edward Teller, Herman Kahn, and Chet Holifield's November 1961 popularization of a similar plan to spend billions of dollars on a nationwide network of highly protective concrete-lined underground fallout shelters, capable of holding millions of people and to be built before any such nuclear exchange began. These extensive and therefore wildly expensive preparations were the fullest conceivable implementation of President Kennedy's advocacy a month earlier, in September 1961, of comparatively more modest individual and community fallout shelters, as it appeared in "Life" magazine, at a time when shelters were on the minds of the public due to the Berlin Crisis. The Kennedy administration would later go on to expand the nascent United States civil defense efforts, including the assessment of millions of homes and the creation of a network of thousands of well-known community fallout shelters marked with black and yellow plaques. This was done not with a massive construction effort but through the relatively cheap re-purposing of existing buildings and the stocking of them with supplies such as CD V-700 Geiger counters. In 1962 the Kennedy administration founded the American Civil Defense Association to organize this comparatively far more cost-effective shelter effort. The fallout-shelter network proposed in the film, with its inherently high radiation-protection characteristics, both resembles and contrasts with the very real and robust Swiss civil defense network. Switzerland has an overcapacity of nuclear fallout shelters for the country's population size, and by law, new homes must still be built with a fallout shelter. If the US did that, it would violate the spirit of MAD and, according to MAD adherents, destabilize the situation, because the US could launch a first strike and its population would largely survive a retaliatory second strike (see MAD § Theory). To refute early 1960s novels and Hollywood films like "Fail-Safe" and "Dr. Strangelove", which raised questions about US control over nuclear weapons, the Air Force produced a documentary film, "SAC Command Post", to demonstrate its responsiveness to presidential command and its tight control over nuclear weapons. However, later academic research into declassified documents showed that U.S. military commanders had been given presidentially authorized pre-delegation for the use of nuclear weapons during the early Cold War, showing that that aspect of the film's plot was plausible. The characters of Buck Turgidson and Jack Ripper both parody the real-life Gen. Curtis LeMay of the Strategic Air Command. In the months following the film's release, director Stanley Kubrick received a fan letter from Legrace G. Benson of the Department of History of Art at Cornell University interpreting the film as sexually layered. 
The director wrote back to Benson and confirmed the interpretation: "Seriously, you are the first one who seems to have noticed the sexual framework from intromission (the planes going in) to the last spasm (Kong's ride down and detonation at target)." Sexual metaphors often came up when the nuclear analysts Kubrick consulted were discussing strategy, such as when Bernard Brodie, in an internally circulated memorandum at the RAND Corporation (spoofed in the film as the "BLAND Corporation"), compared his plan of withholding attacks on cities during belligerent escalation to coitus interruptus, while describing the SAC plan of massive retaliation as "going all the way". That led RAND scholar Herman Kahn, whom Kubrick consulted, to quip to an assembled group of "massive retaliation" SAC officers, "Gentlemen, you do not have a war plan. You have a Wargasm!" The film was a popular success, earning US$4,420,000 in rentals in North America during its initial theatrical release. "Dr. Strangelove" is Kubrick's highest-rated film on Rotten Tomatoes, holding a 98% approval rating based on 88 reviews, with an average rating of 9.14/10. The site's critical consensus reads, "Stanley Kubrick's brilliant Cold War satire remains as funny and razor-sharp today as it was in 1964." The film also holds a score of 96 out of 100 on Metacritic, based on 11 reviews, indicating "universal acclaim". The film is ranked number 7 in the All-Time High Scores chart of Metacritic's Video/DVD section. It was selected for preservation in the United States National Film Registry. "Dr. Strangelove" is on Roger Ebert's list of "The Great Movies", and he described it as "arguably the best political satire of the century". One of the most celebrated of all film comedies, it is the only comedy to make the top 10 in any of the "Sight & Sound" polls of best films. John Patterson of "The Guardian" wrote, "There had been nothing in comedy like "Dr Strangelove" ever before. All the gods before whom the America of the stolid, paranoid 50s had genuflected—the Bomb, the Pentagon, the National Security State, the President himself, Texan masculinity and the alleged Commie menace of water-fluoridation—went into the wood-chipper and never got the same respect ever again." It is also listed as number 26 on "Empire"s 500 Greatest Movies of All Time, and in 2010 it was listed by "Time" magazine as one of the 100 best films since the publication's inception in 1923. The Writers Guild of America ranked its screenplay the 12th best ever written. In 2000, readers of "Total Film" magazine voted it the 24th greatest comedic film of all time. The film ranked #32 on "TV Guide"s list of the 50 Greatest Movies on TV (and Video). The American Film Institute included the film as #26 in AFI's 100 Years...100 Movies, #3 in AFI's 100 Years...100 Laughs, #64 in AFI's 100 Years...100 Movie Quotes ("Gentlemen, you can't fight in here! This is the War Room!") and #39 in AFI's 100 Years...100 Movies (10th Anniversary Edition). In 1995, Kubrick enlisted Terry Southern to script a sequel titled "Son of Strangelove". Kubrick had Terry Gilliam in mind to direct. The script was never completed, but index cards laying out the story's basic structure were found among Southern's papers after he died in October 1995. It was set largely in underground bunkers, where Dr. Strangelove had taken refuge with a group of women. 
In 2013, Gilliam commented, "I was told after Kubrick died—by someone who had been dealing with him—that he had been interested in trying to do another "Strangelove" with me directing. I never knew about that until after he died but I would have loved to."
https://en.wikipedia.org/wiki?curid=8695
DNA ligase DNA ligase is a specific type of enzyme, a ligase, that facilitates the joining of DNA strands together by catalyzing the formation of a phosphodiester bond. It plays a role in repairing single-strand breaks in duplex DNA in living organisms, but some forms (such as DNA ligase IV) may specifically repair double-strand breaks (i.e. a break in both complementary strands of DNA). Single-strand breaks are repaired by DNA ligase using the complementary strand of the double helix as a template, with DNA ligase creating the final phosphodiester bond to fully repair the DNA. DNA ligase is used in both DNA repair and DNA replication (see "Mammalian ligases"). In addition, DNA ligase has extensive use in molecular biology laboratories for recombinant DNA experiments (see "Research applications"). Purified DNA ligase is used in gene cloning to join DNA molecules together to form recombinant DNA. The mechanism of DNA ligase is to form two covalent phosphodiester bonds between the 3' hydroxyl end of one nucleotide (the "acceptor") and the 5' phosphate end of another (the "donor"). Two ATP molecules are consumed for each phosphodiester bond formed. ATP is required for the ligase reaction, which proceeds in four steps: (1) reorganization of the activity site, such as a nick in a DNA segment or an Okazaki fragment; (2) adenylylation (addition of AMP) of a lysine residue in the active center of the enzyme, releasing pyrophosphate; (3) transfer of the AMP to the 5' phosphate of the donor, forming a pyrophosphate bond; and (4) formation of a phosphodiester bond between the 5' phosphate of the donor and the 3' hydroxyl of the acceptor. Ligase will also work with blunt ends, although higher enzyme concentrations and different reaction conditions are required. The "E. coli" DNA ligase is encoded by the "lig" gene. DNA ligase in "E. coli", as well as most prokaryotes, uses energy gained by cleaving nicotinamide adenine dinucleotide (NAD) to create the phosphodiester bond. It does not ligate blunt-ended DNA except under conditions of molecular crowding with polyethylene glycol, and cannot join RNA to DNA efficiently. The activity of "E. coli" DNA ligase can be enhanced by DNA polymerase I at the right concentrations; enhancement works only when the concentration of the polymerase is much lower than that of the DNA fragments to be ligated. At higher concentrations, DNA polymerase I has an adverse effect on "E. coli" DNA ligase. The DNA ligase from bacteriophage T4 (a bacteriophage that infects "Escherichia coli" bacteria) is the ligase most commonly used in laboratory research. It can ligate either cohesive or blunt ends of DNA, oligonucleotides, as well as RNA and RNA-DNA hybrids, but not single-stranded nucleic acids (a simple compatibility check for cohesive ends is sketched below). It can also ligate blunt-ended DNA with much greater efficiency than "E. coli" DNA ligase. Unlike "E. coli" DNA ligase, T4 DNA ligase cannot utilize NAD and it has an absolute requirement for ATP as a cofactor. Some engineering has been done to improve the "in vitro" activity of T4 DNA ligase; one successful approach, for example, tested T4 DNA ligase fused to several alternative DNA-binding proteins and found that the constructs with either p50 or NF-kB as fusion partners were over 160% more active in blunt-end ligations for cloning purposes than wild-type T4 DNA ligase. A typical reaction for inserting a fragment into a plasmid vector would use about 0.01 (sticky ends) to 1 (blunt ends) units of ligase. The optimal incubation temperature for T4 DNA ligase is 16 °C. In mammals, there are four specific types of ligase. DNA ligase from eukaryotes and some microbes uses adenosine triphosphate (ATP) rather than NAD. A thermostable ligase derived from a thermophilic bacterium is stable and active at much higher temperatures than conventional DNA ligases. Its half-life is 48 hours at 65 °C and greater than 1 hour at 95 °C. 
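Whether two cohesive ends can be joined in such a reaction comes down to base-pairing: the two single-stranded overhangs must be mutually complementary. The following is a minimal Python sketch of that check (the function names and the EcoRI/BamHI example overhangs are illustrative assumptions, not details from the text above):

# Minimal sketch: test whether two 5' overhangs can anneal for a
# cohesive-end ligation. Assumes plain A/C/G/T strings written 5'->3'.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence."""
    return seq.upper().translate(COMPLEMENT)[::-1]

def overhangs_compatible(a: str, b: str) -> bool:
    """Two 5' overhangs can base-pair only if one is the reverse
    complement of the other."""
    return reverse_complement(a) == b.upper()

print(overhangs_compatible("AATT", "AATT"))  # True: EcoRI ends are palindromic
print(overhangs_compatible("AATT", "GATC"))  # False: an EcoRI end vs. a BamHI end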
Ampligase DNA Ligase has been shown to be active for at least 500 thermal cycles (94 °C/80 °C) or 16 hours of cycling. This exceptional thermostability permits extremely high hybridization stringency and ligation specificity. There are at least three different units used to measure the activity of DNA ligase: the Weiss unit (the amount of ligase that catalyzes the exchange of 1 nmol of 32P from pyrophosphate into ATP in 20 minutes at 37 °C; the most commonly used), the Modrich-Lehman unit (rarely used), and arbitrary cohesive-end ligation units defined by individual commercial suppliers. DNA ligases have become indispensable tools in modern molecular biology research for generating recombinant DNA sequences. For example, DNA ligases are used with restriction enzymes to insert DNA fragments, often genes, into plasmids. Controlling the optimal temperature is a vital aspect of performing efficient recombination experiments involving the ligation of cohesive-ended fragments. Most experiments use T4 DNA ligase (isolated from bacteriophage T4), which is most active at 37 °C. However, for optimal ligation efficiency with cohesive-ended fragments ("sticky ends"), the optimal enzyme temperature needs to be balanced against the melting temperature Tm of the sticky ends being ligated: if the ambient temperature exceeds Tm, the homologous pairing of the sticky ends will not be stable, because the high temperature disrupts hydrogen bonding. A ligation reaction is most efficient when the sticky ends are already stably annealed, and disruption of the annealing ends would therefore result in low ligation efficiency. The shorter the overhang, the lower the Tm (a rough estimate is sketched below). Since blunt-ended DNA fragments have no cohesive ends to anneal, the melting temperature is not a factor to consider within the normal temperature range of the ligation reaction. The limiting factor in blunt-end ligation is not the activity of the ligase but rather the number of alignments between DNA fragment ends that occur. The most efficient ligation temperature for blunt-ended DNA would therefore be the temperature at which the greatest number of alignments can occur. The majority of blunt-ended ligations are carried out at 14-25 °C overnight. The absence of stably annealed ends also means that the ligation efficiency is lowered, requiring a higher ligase concentration to be used. A novel use of DNA ligase can be seen in the field of nanochemistry, specifically in DNA origami. DNA-based self-assembly principles have proven useful for organizing nanoscale objects such as biomolecules, nanomachines, and nanoelectronic and photonic components. Assembly of such nanostructures requires the creation of an intricate mesh of DNA molecules. Although DNA self-assembly is possible without outside help using substrates such as a cationic aluminium-foil surface, DNA ligase can provide the enzymatic assistance required to build DNA lattice structures from DNA overhangs. The first DNA ligase was purified and characterized in 1967 by the Gellert, Lehman, Richardson, and Hurwitz laboratories. Weiss and Richardson purified and characterized the enzyme using a six-step chromatographic-fractionation process beginning with elimination of cell debris and addition of streptomycin, followed by several diethylaminoethyl (DEAE)-cellulose column washes and a final phosphocellulose fractionation. The final extract contained 10% of the activity initially recorded in the "E. coli" media; along the way it was discovered that ATP and Mg++ were necessary to optimize the reaction. The common commercially available DNA ligases were originally discovered in bacteriophage T4, "E. coli" and other bacteria. 
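The sketch below makes the sticky-end Tm estimate mentioned above concrete, using the Wallace rule (roughly 2 °C per A/T and 4 °C per G/C, a common approximation for sequences under about 14 nt); the function name and example overhangs are illustrative assumptions, not taken from the text:

# Minimal sketch: estimate the melting temperature (in degrees C) of a
# short sticky-end overhang with the Wallace rule: Tm = 2*(A+T) + 4*(G+C).
def wallace_tm(overhang: str) -> int:
    s = overhang.upper()
    at = s.count("A") + s.count("T")
    gc = s.count("G") + s.count("C")
    return 2 * at + 4 * gc

# A 4-nt A/T-only overhang melts at roughly 8 degrees C, which is why
# sticky-end ligations are often run well below T4 DNA ligase's activity
# optimum, e.g. at 16 degrees C rather than 37 degrees C.
print(wallace_tm("AATT"))  # 8
print(wallace_tm("GGCC"))  # 16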
Genetic deficiencies in human DNA ligases have been associated with clinical syndromes marked by immunodeficiency, radiation sensitivity, and developmental abnormalities. LIG4 syndrome (Ligase IV syndrome) is a rare disease associated with mutations in DNA ligase 4 that interferes with dsDNA break-repair mechanisms. Ligase IV syndrome causes immunodeficiency in individuals and is commonly associated with microcephaly and marrow hypoplasia. Prevalent diseases caused by a lack or malfunction of DNA ligase include the following. Xeroderma pigmentosum, which is commonly known as XP, is an inherited condition characterized by an extreme sensitivity to ultraviolet (UV) rays from sunlight. This condition mostly affects the eyes and areas of skin exposed to the sun. Some affected individuals also have problems involving the nervous system. Mutations in the ATM gene cause ataxia–telangiectasia. The ATM gene provides instructions for making a protein that helps control cell division and is involved in DNA repair. This protein plays an important role in the normal development and activity of several body systems, including the nervous system and immune system. The ATM protein assists cells in recognizing damaged or broken DNA strands and coordinates DNA repair by activating enzymes that fix the broken strands. Efficient repair of damaged DNA strands helps maintain the stability of the cell's genetic information. Affected children typically develop difficulty walking, problems with balance and hand coordination, involuntary jerking movements (chorea), muscle twitches (myoclonus), and disturbances in nerve function (neuropathy). The movement problems typically cause people to require wheelchair assistance by adolescence. People with this disorder also have slurred speech and trouble moving their eyes to look side-to-side (oculomotor apraxia). Fanconi anemia (FA) is a rare, inherited blood disorder that leads to bone marrow failure. FA prevents bone marrow from making enough new blood cells for the body to work normally. FA can also cause the bone marrow to make many faulty blood cells. This can lead to serious health problems, such as leukemia. Bloom syndrome results in skin that is sensitive to sun exposure, and usually the development of a butterfly-shaped patch of reddened skin across the nose and cheeks. A skin rash can also appear on other areas that are typically exposed to the sun, such as the back of the hands and the forearms. Small clusters of enlarged blood vessels (telangiectases) often appear in the rash; telangiectases can also occur in the eyes. Other skin features include patches of skin that are lighter or darker than the surrounding areas (hypopigmentation or hyperpigmentation respectively). These patches appear on areas of the skin that are not exposed to the sun, and their development is not related to the rashes. In recent studies, human DNA ligase I was used in computer-aided drug design to identify DNA ligase inhibitors as possible therapeutic agents to treat cancer. Since excessive cell growth is a hallmark of cancer development, targeted chemotherapy that disrupts the functioning of DNA ligase can impede cancer growth and serve as an adjuvant therapy. Furthermore, it has been shown that DNA ligases can be broadly divided into two categories, namely, ATP- and NAD+-dependent. 
Previous research has shown that although NAD+-dependent DNA ligases have been discovered in sporadic cellular or viral niches outside the bacterial domain of life, there is no instance in which a NAD+-dependent ligase is present in a eukaryotic organism. Their presence solely in non-eukaryotic organisms, their unique substrate specificity, and their distinctive domain structure compared with ATP-dependent human DNA ligases together make NAD+-dependent ligases ideal targets for the development of new antibacterial drugs.
https://en.wikipedia.org/wiki?curid=8697
Dewey Decimal Classification The Dewey Decimal Classification (DDC), colloquially the Dewey Decimal System, is a proprietary library classification system first published in the United States by Melvil Dewey in 1876. Originally described in a four-page pamphlet, it has been expanded to multiple volumes and revised through 23 major editions, the latest printed in 2011. It is also available in an abridged version suitable for smaller libraries. OCLC, a non-profit cooperative that serves libraries, currently maintains the system and licenses online access to WebDewey, a continuously updated version for catalogers. The Decimal Classification introduced the concepts of "relative location" and "relative index", which allow new books to be added to a library in their appropriate location based on subject. Libraries previously had given books permanent shelf locations that were related to the order of acquisition rather than topic. The classification's notation makes use of three-digit numbers for main classes, with fractional decimals allowing expansion for further detail. Numbers are flexible to the degree that they can be expanded in linear fashion to cover special aspects of general subjects. A library assigns a classification number that unambiguously locates a particular volume in a position relative to other books in the library, on the basis of its subject. The number makes it possible to find any book and to return it to its proper place on the library shelves. The classification system is used in 200,000 libraries in at least 135 countries. Melvil Dewey (1851–1931) was an American librarian and self-declared reformer. He was a founding member of the American Library Association and can be credited with the promotion of card systems in libraries and business. He developed the ideas for his library classification system in 1873 while working at the Amherst College library, and applied the classification to the books in that library until, in 1876, he had a first version of the classification. In 1876, he published the classification in pamphlet form with the title "A Classification and Subject Index for Cataloguing and Arranging the Books and Pamphlets of a Library". He used the pamphlet, published in more than one version during the year, to solicit comments from other librarians. It is not known who received copies or how many commented, as only one copy with comments has survived, that of Ernest Cushing Richardson. His classification system was mentioned in an article in the first issue of the "Library Journal" and in an article by Dewey in the Department of Education publication "Public Libraries in America" in 1876. In March 1876, he applied for, and received, copyright on the first edition of the index. The edition was 44 pages in length, with 2,000 index entries, and was printed in 200 copies. The second edition of the Dewey Decimal system, published in 1885 under a revised title, comprised 314 pages, with 10,000 index entries. Five hundred copies were produced. Editions 3–14, published between 1888 and 1942, used variants of this same title. Dewey modified and expanded his system considerably for the second edition. In an introduction to that edition Dewey states that "nearly 100 persons hav [spelling of 'have' per English-language spelling reform, which Dewey championed] contributed criticisms and suggestions". One of the innovations of the Dewey Decimal system was that of positioning books on the shelves in relation to other books on similar topics. 
When the system was first introduced, most libraries in the US used fixed positioning: each book was assigned a permanent shelf position based on the book's height and date of acquisition. Library stacks were generally closed to all but the most privileged patrons, so shelf browsing was not considered of importance. The use of the Dewey Decimal system increased during the early 20th century as librarians were convinced of the advantages of relative positioning and of open shelf access for patrons. New editions were readied as supplies of previously published editions were exhausted, even though some editions provided little change from the previous, as they were primarily needed to fulfill demand. In the next decade, three editions followed closely on: the 3rd (1888), 4th (1891), and 5th (1894). Editions 6 through 11 were published from 1899 to 1922. The 6th edition was published in a record 7,600 copies, although subsequent editions were printed in much smaller runs. During this time, the size of the volume grew, and edition 12 swelled to 1,243 pages, an increase of 25% over the previous edition. In response to the needs of smaller libraries, which were finding the expanded classification schedules difficult to use, the first abridged edition of the Dewey Decimal system was produced in 1894. The abridged edition generally parallels the full edition, and has been developed for most full editions since that date. By popular request, in 1930, the Library of Congress began to print Dewey Classification numbers on nearly all of its cards, thus making the system immediately available to all libraries making use of the Library of Congress card sets. Dewey's was not the only library classification available, although it was the most complete. Charles Ammi Cutter published the Expansive Classification in 1882, with initial encouragement from Melvil Dewey. Cutter's system was not adopted by many libraries, with one major exception: it was used as the basis for the Library of Congress Classification system. In 1895, the International Institute of Bibliography, located in Belgium and led by Paul Otlet, contacted Dewey about the possibility of translating the classification into French and using the classification system for bibliographies (as opposed to its use for books in libraries). This would have required some changes to the classification, which was under copyright. Dewey gave permission for the creation of a version intended for bibliographies, and also for its translation into French. Dewey did not agree, however, to allow the International Institute of Bibliography to later create an English version of the resulting classification, considering that a violation of their agreement, as well as a violation of Dewey's copyright. Shortly after Dewey's death in 1931, however, an agreement was reached between the committee overseeing the development of the Decimal Classification and the developers of the French "Classification Decimal". The English version was published as the Universal Decimal Classification and is still in use today. According to a study done in 1927, the Dewey system was used in the US in approximately 96% of responding public libraries and 89% of the college libraries. 
After the death of Melvil Dewey in 1931, administration of the classification was under the Decimal Classification Committee of the Lake Placid Club Education Foundation, and the editorial body was the Decimal Classification Editorial Policy Committee with participation of the American Library Association (ALA), Library of Congress, and Forest Press. By the 14th edition in 1942, the Dewey Decimal Classification index was over 1,900 pages in length and was published in two volumes. The growth of the classification to date had led to significant criticism from medium and large libraries which were too large to use the abridged edition but found the full classification overwhelming. Dewey had intended issuing the classification in three editions: the library edition, which would be the fullest edition; the bibliographic edition, in English and French, which was to be used for the organization of bibliographies rather than of books on the shelf; and the abridged edition. In 1933, the bibliographic edition became the Universal Decimal Classification, which left the library and abridged versions as the formal Dewey Decimal Classification editions. The 15th edition, edited by Milton Ferguson, implemented the growing concept of the "standard edition", designed for the majority of general libraries but not attempting to satisfy the needs of the very largest or of special libraries. It also reduced the size of the Dewey system by over half, from 1,900 to 700 pages. This revision was so radical that an advisory committee was formed right away for the 16th and 17th editions. The 16th and 17th editions, under the editorship of the Library of Congress, grew again to two volumes. However, by now, the Dewey Decimal system had established itself as a classification for general libraries, with the Library of Congress Classification having gained acceptance for large research libraries. The first electronic version of "Dewey" was created in 1993. Hard-copy editions continue to be issued at intervals; the online WebDewey and Abridged WebDewey are updated quarterly. Dewey and a small editorial staff managed the administration of the very early editions. Beginning in 1922, the Lake Placid Club Educational Foundation, a not-for-profit organization founded by Melvil Dewey, managed administrative affairs. The ALA set up a Special Advisory Committee on the Decimal Classification as part of the Cataloging and Classification division of ALA in 1952. The previous Decimal Classification Committee was changed to the Decimal Classification Editorial Policy Committee, with participation of the ALA Division of Cataloging and Classification, and of the Library of Congress. Melvil Dewey edited the first three editions of the classification system and oversaw the revisions of all editions until his death in 1931. May Seymour became editor in 1891 and served until her death in 1921. She was followed by Dorcas Fellows, who was editor until her death in 1938. Constantin J. Mazney edited the 14th edition. Milton Ferguson functioned as editor from 1949 to 1951. The 16th edition in 1958 was edited under an agreement between the Library of Congress and Forest Press, with David Haykin as director. Editions 16–19 were edited by Benjamin A. Custer and the editor of edition 20 was John P. Comaromi. Joan Mitchell was editor until 2013, covering editions 21 to 23. In 2013 Michael Panzer of OCLC became Editor-in-Chief. The Dewey Editorial Program Manager since 2016 has been Dr. Rebecca Green. 
Dewey himself held copyright in editions 1 to 6 (1876–1919). Copyright in editions 7–10 was held by the publisher, The Library Bureau. On the death of May Seymour, Dewey conveyed the "copyrights and control of all editions" to the Lake Placid Club Educational Foundation, a non-profit chartered in 1922. The Online Computer Library Center (OCLC) of Dublin, Ohio, US, acquired the trademark and copyrights associated with the Dewey Decimal Classification system when it bought Forest Press in 1988. In 2003 the Dewey Decimal Classification came to the attention of the U.S. press when OCLC sued the Library Hotel for trademark infringement for using the classification system as its theme. The case was settled shortly thereafter. The OCLC has maintained the classification since 1988, and also publishes new editions of the system. The editorial staff responsible for updates is based partly at the Library of Congress and partly at OCLC. Their work is reviewed by the Decimal Classification Editorial Policy Committee, a ten-member international board which meets twice each year. The four-volume unabridged edition was published approximately every six years, with the last edition (DDC 23) published in mid-2011. In 2017 the editorial staff announced that the English edition of DDC would no longer be printed, in favor of using the frequently updated WebDewey. An experimental version of Dewey in RDF was previously available at dewey.info beginning in 2009, but has not been available since 2015. In addition to the full version, a single-volume abridged edition designed for libraries with 20,000 titles or fewer has been made available since 1895. The last printed English abridged edition, Abridged Edition 15, was published in early 2012. The Dewey Decimal Classification organizes library materials by discipline or field of study. Main divisions include philosophy, social sciences, science, technology, and history. The scheme comprises ten classes, each divided into ten divisions, each having ten sections. The system's notation uses Arabic numbers, with three whole numbers making up the main classes and sub-classes and decimals designating further divisions. The classification structure is hierarchical and the notation follows the same hierarchy. Libraries not needing the full level of detail of the classification can trim right-most decimal digits from the class number to obtain more general classifications. For example, 516.375 ("Finsler geometry") can be trimmed step by step to 516.37 ("metric differential geometries"), 516.3 ("analytic geometries"), 516 ("geometry"), 510 ("mathematics"), and finally 500 ("natural sciences and mathematics"). The classification was originally enumerative, meaning that it listed all of the classes explicitly in the schedules. Over time it added some aspects of a faceted classification scheme, allowing classifiers to construct a number by combining a class number for a topic with an entry from a separate table. Tables cover commonly used elements such as geographical and temporal aspects, language, and bibliographic forms. For example, a class number could be constructed using 330 for economics + .9 for geographic treatment + .04 for Europe to create the class 330.94 European economy. Or one could combine the class 973 (for the United States) + .05 (for periodical publications on the topic) to arrive at the number 973.05 for periodicals concerning the United States generally (number building and truncation of this kind are sketched in the code below). The classification also makes use of mnemonics in some areas, such that the number 5 represents the country Italy in classification numbers like 945 (history of Italy), 450 (Italian language), 195 (Italian philosophy). 
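As a concrete illustration of number building and truncation, here is a minimal Python sketch; it assumes facets are supplied as the digit strings that actually get appended, and the function names are our own rather than DDC terminology:

# Minimal sketch: build a Dewey class number from a base class plus facet
# digits, and broaden a number by trimming its right-most decimal digit.
def synthesize(base: str, *facets: str) -> str:
    """Concatenate facet digits onto a base class and re-insert the
    decimal point after the third digit."""
    digits = base.replace(".", "") + "".join(f.replace(".", "") for f in facets)
    return digits[:3] + ("." + digits[3:] if len(digits) > 3 else "")

def broaden(number: str) -> str:
    """Trim the right-most decimal digit to obtain a more general class."""
    digits = number.replace(".", "")
    if len(digits) > 3:
        digits = digits[:-1]
    return digits[:3] + ("." + digits[3:] if len(digits) > 3 else "")

print(synthesize("330", "9", "4"))  # 330.94, European economy
print(synthesize("973", "05"))      # 973.05, periodicals on the United States
print(broaden("516.375"))           # 516.37, one step more general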
The combination of faceting and mnemonics makes the classification "synthetic" in nature, with meaning built into parts of the classification number. The Dewey Decimal Classification has a number for all subjects, including fiction, although many libraries maintain a separate fiction section shelved by alphabetical order of the author's surname. Each assigned number consists of two parts: a class number (from the Dewey system) and a book number, which "prevents confusion of different books on the same subject". A common form of the book number is called a Cutter number, which represents the author and distinguishes the book from other books on the same topic. The Relative Index (or, as Dewey spelled it, "Relativ Index") is an alphabetical index to the classification, for use both by classifiers and by library users when seeking books by topic. The index was "relative" because the index entries pointed to the class numbers, not to the page numbers of the printed classification schedule. In this way, the Dewey Decimal Classification itself had the same relative positioning as the library shelf and could be used either as an entry point to the classification, by catalogers, or as an index to the Dewey-classed library itself. Dewey Decimal Classification numbers formed the basis of the Universal Decimal Classification (UDC), which combines the basic Dewey numbers with selected punctuation marks (comma, colon, parentheses, etc.). Adaptations of the system for specific regions outside the English-speaking world include the Korean Decimal Classification, the New Classification Scheme for Chinese Libraries, and the Nippon Decimal Classification (Japanese). Despite its widespread usage, the classification has been criticized for its complexity and the limited scope for adjusting its scheme. In particular, the arrangement of subheadings has been described as archaic and as being biased towards an Anglo-American world view. In 2007–08, the Maricopa County Library District in Arizona abandoned the DDC in favor of the Book Industry Standards and Communications (BISAC) system, one that is commonly used by commercial bookstores, in an effort to make their libraries more accessible for patrons. Several other libraries across the United States and other countries (including Canada and the Netherlands) followed suit. The classification has also been criticized as being a proprietary system licensed by a single entity (OCLC), making it expensive to adopt. However, book classification critic Justin Newlan stands by the Dewey Decimal System, stating that newer, more advanced book classification systems "are too confusing to understand for newcomers". In 1932 topics relating to homosexuality were first added to the system under 132 (mental derangements) and 159.9 (abnormal psychology). In 1952 homosexuality was also included under 301.424 (the study of sexes in society). In 1989 it was added to 363.49 (social problems), a classification that continues in the current edition. In 1996 homosexuality was added to 306.7 (sexual relations), which remains the preferred location in the current edition, although books can also be found under 616.8583 (sexual practices viewed as medical disorders). However, the official direction states that "Use 616.8583 for homosexuality only when the work treats homosexuality as a medical disorder, or focuses on arguing against the views of those who consider homosexuality to be a medical disorder. ... If in doubt, prefer a number other than 616.8583." 
The subject of religion has been heavily weighted toward Christianity, with nearly the whole of the 200s being used for Christianity and only the 290s being used for all other religions, of which there are thousands. While Christianity is a popular religion, with about 31% of the world's population subscribing to it, Islam also has a very large following yet has only DDC 297 to work with. The entire 200 section has remained largely the same since DDC 1, and it would likely be a large undertaking to completely rewrite this section, particularly for individual libraries to adapt to. Although topics such as Islam have only a single class number associated with them, there is adequate room within that number because of the ability to expand beyond the decimal point. Topics relating to women have also been subject to bias in the classification scheme, but they have been easier to revise than the religion schedules, and changes have been made. Some of these changes concerned which topics sit side by side numerically, since items that are adjacent in the numbering are related to each other in the classification scheme. For example, the topic of women used to sit next to etiquette; placing those two terms next to each other associated women with etiquette rather than treating etiquette as gender-neutral. This was changed in DDC edition 17.
https://en.wikipedia.org/wiki?curid=8699
Darwin Awards The Darwin Awards are a tongue-in-cheek honour, originating in Usenet newsgroup discussions around 1985. They recognise individuals who have supposedly contributed to human evolution by selecting themselves out of the gene pool via death or sterility through their own actions. The project became more formalized with the creation of a website in 1993, and was followed by a series of books starting in 2000, authored by Wendy Northcutt. The criterion for the awards states: "In the spirit of Charles Darwin, the Darwin Awards commemorate individuals who protect our gene pool by making the ultimate sacrifice of their own lives. Darwin Award winners eliminate themselves in an extraordinarily idiotic manner, thereby improving our species' chances of long-term survival." Accidental self-sterilisation also qualifies; however, the site notes: "Of necessity, the award is usually bestowed posthumously." The candidate is disqualified, though, if "innocent bystanders", who might have contributed positively to the gene pool, are killed in the process. The logical problem presented by award winners who may have already reproduced is not addressed in the selection process, owing to the difficulty of ascertaining whether a person has children; the Darwin Award rules state that the presence of offspring does not disqualify a nominee. People who miraculously survive their suicidal idiocy can be given an "Honourable Mention" if their attempted act of self-removal is deemed worthy (and humorous). The Darwin Awards books state that an attempt is made to disallow known urban legends from the awards, but some older "winners" have been "grandfathered" to keep their awards. Wendy Northcutt says the Darwin Awards site does try to verify all submitted stories, but many similar sites, and the vast number of circulating "Darwin awards" emails, are largely fictional. The origin of the Darwin Awards can be traced back to posts in Usenet group discussions as early as 1985. An early post, on August 7, 1985, describes the awards as being "given posthumously to people who have made the supreme sacrifice to keep their genes out of our pool. Style counts, not everyone who dies from their own stupidity can win." This early post cites the example of a person who pulled a vending machine over his head and was crushed to death trying to break into it. Another widely distributed early story mentioning the Darwin Awards is the JATO Rocket Car, which describes a man who strapped a jet-assisted take-off unit to his Chevrolet Impala in the Arizona desert and died against the side of a cliff as his car reached extreme speed. This story was later confirmed by the Arizona Department of Public Safety to be an urban legend. The official Darwin Awards website run by Northcutt does its best to confirm all submitted stories, listing verified ones as "confirmed true by Darwin"; many of the viral emails circulating the Internet, however, are hoaxes and urban legends. The website and collection of books were started in 1993 by Wendy Northcutt, at the time a graduate in molecular biology from the University of California, Berkeley. She went on to study neurobiology at Stanford University, doing research on cancer and telomerase. In her spare time, she organised chain letters from family members into the original Darwin Awards website, hosted in her personal account space at Stanford. She eventually left laboratory work in 1998 and devoted herself full-time to her website and books from September 1999.
By 2002, the website received 7 million page hits per month. Northcutt encountered some difficulty in publishing the first book, since most publishers would only offer her a deal if she agreed to remove the stories from the internet. She refused, saying, "It was a community! I could not do that. Even though it might have cost me a lot of money, I kept saying no." She eventually found a publisher who agreed to print a book containing only 10% of the material gathered for the website. The first book turned out to be a success and was listed on "The New York Times" best-seller list for six months. Not all of the feedback on the stories Northcutt published was positive, and she occasionally received email from people who knew the deceased. One such person wrote, "This is horrible. It has shocked our community to the core. You should remove this." But Northcutt said, "I can't. It's just too stupid." Northcutt kept the stories on the website and in her books, citing them as a "funny-but-true safety guide" and mentioning that children who read the book are going to be much more careful around explosives. The website also recognises, with Honourable Mentions, individuals who survive their misadventures with their reproductive capacity intact. One example is Larry Walters, who attached helium-filled weather balloons to a lawn chair and floated far above Long Beach, California, in July 1982. He reached a considerable altitude but survived, only to be fined later for crossing controlled airspace. (Walters subsequently fell into depression and committed suicide.) Another notable honourable mention was given to the two men who attempted to burgle the home of footballer Duncan Ferguson (who had four convictions for assault and had served six months in Glasgow's Barlinnie Prison) in 2001, with one burglar requiring three days' hospitalisation after being confronted by the player. A 2014 study published in the British Medical Journal found that between 1995 and 2014 males represented 88.7% of Darwin Award winners. A 2006 comedy film, "The Darwin Awards", written and directed by Finn Taylor, was based on the website and many of the Darwin Awards stories. Northcutt has stated five requirements for a Darwin Award, though these may be subject to dispute. Potential awardees may be out of the gene pool because of age; others have already reproduced before their deaths. To avoid debates about the possibility of in-vitro fertilization, artificial insemination, or cloning, the original Darwin Awards book applied the following "deserted island" test to potential winners: if the person would be unable to reproduce when stranded on a deserted island with a fertile member of the opposite sex, he or she would be considered sterile. Winners of the award, in general, either are dead or have become unable to use their sexual organs. The candidate's foolishness must be unique and sensational, likely because the award is intended to be funny. A number of foolish but common activities, such as smoking in bed, are excluded from consideration. In contrast, self-immolation caused by smoking after being administered a flammable ointment in a hospital, and after being specifically told not to smoke, is grounds for nomination. One "Honourable Mention" (a man who attempted suicide by swallowing nitroglycerine pills and then tried to detonate them by running into a wall) is noted to be in this category, despite being intentional and self-inflicted (i.e. attempted suicide), which would normally disqualify the inductee.
To earn a Darwin Award, one must have killed oneself or rendered oneself sterile; merely causing the death of a third party is insufficient. Killing a friend with a hand grenade would not be eligible, but killing oneself while manufacturing a home-made chimney-cleaning device from a grenade would be. The nominee must be at least past the legal driving age and free of mental defect (Northcutt considers injury or death caused by mental defect to be tragic, rather than amusing, and routinely disqualifies such entries). After much discussion, a small category regarding deaths below this age limit also exists; entry into this category requires that the peers of the candidate be of the opinion that the actions of the person in question were above and beyond the limits of reason. However, in 2011, the awards targeted a 16-year-old boy in Leeds who died stealing copper wiring (the standard minimum driving age in Great Britain being 17). In 2012, Northcutt made similar light of a 14-year-old girl in Brazil who was killed while leaning out of a school bus window; she was, however, "disqualified" for the award itself because of the likely public objection to the girl's age, an objection Northcutt asserts is based on "magical thinking". The story must be documented by reliable sources: e.g., reputable newspaper articles, confirmed television reports, or responsible eyewitnesses. If a story is found to be untrue, it is disqualified, but particularly amusing ones are placed in the urban legend section of the archives. Despite this requirement, many of the stories are fictional, often appearing as "original submissions" and presenting no further sources than unverified (and unreliable) "eyewitnesses". Most such stories on Northcutt's Darwin Awards site are filed in the Personal Accounts section. In addition, later revisions to the qualification criteria have added several requirements that have not been made into formalised "rules".
https://en.wikipedia.org/wiki?curid=8703
Outline of dance The following outline is provided as an overview of and topical guide to dance: Dance – human movement either used as a form of expression or presented in a social, spiritual or performance setting. Choreography is the art of making dances, and the person who does this is called a choreographer. Definitions of what constitutes dance depend on social, cultural, aesthetic, artistic and moral constraints and range from functional movement (such as folk dance) to codified, virtuoso techniques such as ballet. A great many dances and dance styles are performed to dance music. Dance (also called "dancing") can be grouped into a number of categories, and some other things can be named "dance" metaphorically; see dance (disambiguation). Type of dance – a particular dance or dance style. There are many varieties of dance, and dance categories are not mutually exclusive. For example, tango is traditionally a "partner dance". While it is mostly a "social dance", its ballroom form may be a "competitive dance", as in DanceSport. At the same time it is enjoyed as a "performance dance", whereby it may well be a "solo dance". Related topics include the history of dance and dance science.
https://en.wikipedia.org/wiki?curid=8704
DKW DKW (Dampf-Kraft-Wagen; the initials have also been read as Deutsche Kinder-Wagen, Das Kleine Wunder, or Des Knaben Wunsch) is a German car and motorcycle marque. DKW was one of the four companies that formed Auto Union in 1932 and is hence an ancestor of the modern-day Audi company. In 1916, Danish engineer Jørgen Skafte Rasmussen founded a factory in Zschopau, Saxony, Germany, to produce steam fittings. That year he attempted to produce a steam-driven car, called the DKW. Although unsuccessful, he made a two-stroke toy engine in 1919, called "Des Knaben Wunsch" – "the boy's wish". He put a slightly modified version of this engine into a motorcycle and called it "Das Kleine Wunder" – "the little wonder"; the initials of this name became the DKW brand. By the late 1920s, DKW was the world's largest motorcycle manufacturer. In September 1924, DKW bought a struggling carmaker, saving it from the effects of Germany's hyperinflation crisis, and Rudolf Slaby became chief engineer at DKW. In 1932, DKW merged with Audi, Horch and Wanderer to form Auto Union. After World War II, DKW moved to West Germany; the original factory became MZ. Auto Union came under Daimler-Benz ownership in 1957 and was purchased by the Volkswagen Group in 1964. The last German-built DKW car was the F102, which ceased production in 1966. Its successor, the four-stroke F103, was marketed under the Audi brand, another Auto Union marque. DKW-badged cars continued to be built under license in Brazil and Argentina until 1967 and 1969 respectively. The DKW trademark is currently owned by Auto Union GmbH, a wholly owned subsidiary of Audi AG which also owns the rights to other historical trademarks and intellectual property of the Auto Union combine. DKW cars were made from 1928 until 1966, apart from the interruption caused by the Second World War. DKWs always used two-stroke engines, reflecting the company's position by the end of the 1920s as the world's largest producer of motorcycles. The first DKW car, the small and rather crude Typ P, emerged on 7 May 1928, and the model continued to be built at the company's Spandau (Berlin) plant, first as a roadster and later as a stylish if basic sports car, until 1931. More significant was a series of inexpensive cars built 300 km (185 miles) to the south in Zwickau, in the plant acquired in 1928 when the company's owner became the majority shareholder in Audi Werke AG. Models F1 to F8 (F for Front) were built between 1931 and 1942, with successor models reappearing after the end of the war in 1945. They were the first volume-production cars in Europe with front-wheel drive, and were powered by transversely mounted two-cylinder two-stroke engines. Displacement was 584 or 692 cc; claimed maximum power was initially 15 PS and, from 1931, 18 PS or more. These models had a generator that doubled as a starter, mounted directly on the crankshaft and known as a Dynastart. DKWs from Zwickau notched up approximately 218,000 units between 1931 and 1942. Most cars were sold on the home market, and over 85% of DKWs produced in the 1930s were the little F-series cars: DKW reached second place in German sales by 1934 and stayed there, accounting for 189,369 of the cars sold between 1931 and 1938, more than 16% of the market. Between 1929 and 1940, DKW produced a less well remembered but technically intriguing series of rear-wheel-drive cars called (among other names) "Schwebeklasse" and "Sonderklasse" with two-stroke V4 engines. Engine displacement was 1,000 cc, later 1,100 cc.
The engines had two extra cylinders for forced induction, so they appeared like V6 engines but without spark plugs on the front cylinder pair. In 1939, DKW made a prototype with the first three-cylinder engine, with a displacement of 900 cc. With a streamlined body, the car was capable of a notably high top speed for its engine size. It was put into production after World War II, first as the Industrieverband Fahrzeugbau (IFA) F9 (later Wartburg) in Zwickau, East Germany, and shortly afterwards in DKW form from Düsseldorf as the 3=6 or F91. In 1947, Saab used DKW engines as the model for the two-stroke engine in its Saab 92 car manufacturing venture. As Auto Union was based in Saxony, in what became the German Democratic Republic (East Germany), it took some time for the company to regroup after the war. It was registered in West Germany as Auto Union GmbH in 1949, first as a spare-parts provider, but soon took up production of the RT 125 motorcycle and a new delivery van, the "Schnellaster" F800. The first line of production was in Düsseldorf. This van used the same engine as the last F8 made before the war. The company's first car was the F89, using the body of the prototype F9 made before the war and the two-cylinder two-stroke engine from the last F8. Production continued until it was replaced by the successful three-cylinder engine that came with the F91. The F91 was in production from 1953 to 1955 and was replaced by the larger F93 in 1956. The F91 and F93 had 900 cc three-cylinder two-stroke engines, with output rising over the course of production. The ignition system comprised three independent sets of points and coils, one for each cylinder, with the points mounted in a cluster around a single lobed cam at the front end of the crankshaft. The cooling system was of the free-convection type, assisted by a fan driven from a pulley mounted at the front end of the crankshaft. The F93 was produced until 1959 and was replaced by the Auto Union 1000. These models were produced with a 1,000 cc two-stroke engine, in standard or S versions, until 1963. During this transition, production was moved from Düsseldorf to Ingolstadt, where Audi still has its production. From 1957, the cars could be fitted with a Saxomat, an automatic clutch, making this the only small car then offering such a feature. The last versions of the Auto Union 1000S had disc brakes as an option, an early development of this technology. A sporting 2+2-seater version was available as the Auto Union 1000 SP from 1957 to 1964, in the first years only as a coupé and from 1962 also as a convertible. In 1956, the very rare DKW Monza was put into small-scale production on a private initiative, with a sporting two-seater body of glassfibre on a standard F93 frame. It was first called the Solitude, but took its final name from the long-distance speed records it set on the Autodromo Nazionale Monza in Italy in November 1956. Running in Fédération Internationale de l'Automobile (FIA) class G, it set records including sustained high average speeds over 48 hours, 10,000 km, and 72 hours. The car was first produced in Stuttgart, then by Massholder in Heidelberg and lastly by Robert Schenk in Stuttgart. The number produced is said to be around 230, and production finished by the end of 1958. A more successful range of cars was sold from 1959: the Junior/F12 series, based on a modern concept from the late 1950s.
The range consisted of the Junior (basic model) made from 1959 to 1961, the Junior de Luxe (slightly enhanced) from 1961 to 1963, the F11 (slightly larger) and F12 (larger still, with a bigger engine) from 1963 to 1965, and the F12 Roadster from 1964 to 1965. The Junior/F12 series became quite popular, and many cars were produced. An assembly plant was licensed in Ballincollig, County Cork, Ireland, between 1952 and c.1964, and roughly 4,000 vehicles were assembled there, ranging from saloons, vans and motorbikes to commercial combine harvesters. This was the only DKW factory outside Germany in Europe, and for many years after its closure its large DKW sign remained visible on the wall of the factory. The building was demolished in the late 2000s and the site redeveloped into a German Aldi store and a McDonald's drive-thru. All the three-cylinder two-stroke post-war cars had some sporting potential and formed the basis for many rally victories in the 1950s and early 1960s, making DKW the most successful car brand in the European rally league for several years during the fifties. In 1960, DKW developed a V6 engine by combining two three-cylinder two-stroke engines, with a capacity of 1,000 cc. The capacity was increased over time, and the final V6 of 1966 had a capacity of 1,300 cc, developing its maximum power at 5,000 rpm in the standard configuration with two carburettors. Four-carburettor and six-carburettor versions produced progressively more power, and the engine was comparatively light. The V6 was planned for use in the DKW Munga and the F102. About 100 engines were built for testing purposes, and 13 DKW F102s and some Mungas were fitted with the V6 engine in the 1960s. The last DKW was the F102, which came into production in 1964 as a replacement for the dated-looking AU1000. However, the F102 sold poorly, largely because its two-stroke engine technology was at the limit of its development. Auto Union's parent, Daimler-Benz, decided to offload the company to Volkswagen. The car was re-engineered with a four-stroke engine and relaunched as the Audi F103. This marked the end of the DKW marque for cars and the rebirth of the Audi name. From 1956 to 1961, the Dutch importer Hart, Nibbrig & Greve assembled cars in an abandoned asphalt factory in Sassenheim, employing about 120 workers and two transporters that collected SKD kits from Düsseldorf; about 13,500 cars were built. When the DKW plant moved, the import of SKD kits stopped, as it had become too expensive. From 1956 to 1967, DKW cars were made in Brazil by the local company Vemag ("Veículos e Máquinas Agrícolas S.A.", "Vehicles and Agricultural Machinery Inc."). Vemag had been assembling Scania-Vabis trucks, but Scania-Vabis became an independent company in July 1960. The original plans were to build the Candango off-roader (Munga), a utility vehicle and a four-door sedan, called the Vemaguet and Belcar respectively. The first model built was the 900 cc F91 Universal, though the Belcar and Vemaguet names were applied later. In 1958, the F94 four-door sedan and station wagon were launched, renamed Belcar and Vemaguet in the early 1960s. The company also produced a luxury coupé (the DKW Fissore) and the off-road Munga (locally called the Candango). In 1960, Vemag cars received the larger one-litre engine from the Auto Union 1000. Vemag had a successful official racing team, with the fiberglass-bodied GT Malzoni coupé. This project was the foundation of the long-lasting Brazilian sports car brand Puma. The Brazilian F94 line was improved with several cosmetic changes and became more and more different from the German and Argentine models.
Vemag had no capital to invest in new products and came under governmental pressure to merge. In 1964–1965 Volkswagen gradually took over Auto Union, a minority holder in Vemag, and in 1967 Volkswagen bought the remainder of the stock. VW quickly began phasing out DKW-Vemag production and introduced the Volkswagen 1600 sedan to the old Vemag plant, after a total of 109,343 DKW-Vemag cars had been built. DKW vehicles were made in Argentina from 1960 to 1969 by IASF S.A. (Industria Automotriz Santa Fe Sociedad Anónima) in Sauce Viejo, Santa Fe. The most striking of these was the Cupé Fissore, which had many famous owners (Julio Sosa, César Luis Menotti, and others). Other models were the Auto Union 1000 S Sedán (21,797 made until 1969), the Auto Union 1000 Universal S (6,396 made until 1969), and the Auto Union Combi/Pick-up. The last version of the Auto Union Combi/Pick-up (DKW F1000 L), launched in 1969, survived only a few months before the operation was bought out by IME, which continued production until 1979. The DKW Munga was built by Auto Union in Ingolstadt. Production began in October 1956 and ended in December 1968, with 46,750 units built. From 1949 to 1962, DKW produced the "Schnellaster", with a trailing-arm rear suspension system with springs in the cross-bar assembly. The Spanish subsidiary IMOSA produced a modern successor introduced in 1963, the DKW F 1000 L. This van started with the three-cylinder 1,000 cc engine, but later received a Mercedes-Benz diesel engine and was rebadged as a Mercedes-Benz in 1975. From the late 1920s until the outbreak of the Second World War, DKW was the world's largest motorcycle manufacturer and, with its DKW Front, a pioneer of front-wheel-drive automobiles alongside the Citroën Traction Avant. In 1931, Arnold Zoller started building split-singles, and this concept made DKW the dominant racing motorcycle in the Lightweight and Junior classes between the wars. This included off-road events like the International Six Days Trial, where the marque scored considerable inter-war successes alongside Bavarian Motor Works. At the same time, the company also had some success with supercharged racing motorcycles, which because of their light weight were particularly successful in the ISDT. The motorcycle branch produced famous models such as the RT 125 before and after World War II; after the war, with production at the original factory in the GDR continuing under the MZ name, 175, 250 and 350 cc models were made. As war reparations, the design drawings of the RT 125 were given to Harley-Davidson in the US and BSA in the UK. The Harley-Davidson version was known loosely as the Hummer (strictly, "Hummer" covers only a few specific model years, but people generally call the Harley lightweights Hummers), while BSA used the drawings for the Bantam. IFA and later MZ models continued in production until the 1990s, when economics brought production of the two-stroke to an end. Other manufacturers copied the DKW design, officially or otherwise; this can be seen in the similarity of many small two-stroke motorcycles of the 1950s, including those from Yamaha, Voskhod, Maserati, and the Polish WSK.
https://en.wikipedia.org/wiki?curid=8707
Doctor Syn The Reverend Doctor Christopher Syn is the smuggler hero of a series of novels by Russell Thorndike. The first book, "Doctor Syn: A Tale of the Romney Marsh", was published in 1915. The story idea came from smuggling in the 18th-century Romney Marsh, where brandy and tobacco were brought in at night by boat from France to avoid high taxes. Minor battles were fought, sometimes at night, between gangs of smugglers, such as the Hawkhurst Gang, and the Revenue, supported by the army and local militias, in the south in Kent and the west in Sussex. Christopher Syn, born in 1729, is portrayed as a brilliant scholar from Queen's College, Oxford, possessing swashbuckling skills such as riding, fencing, and seamanship. He was content to live the quiet life of a country vicar in Dymchurch-under-the-Wall, under the patronage of Sir Charles Cobtree, the father of his best friend Anthony Cobtree, until his beautiful young Spanish wife Imogene was seduced by and eloped with Nicholas Tappitt, whom Dr. Syn had considered a close friend. Christopher Syn set out on a quest for revenge, always managing to reach the eloping pair's destinations ahead of them, just in time to terrify them out of landing and facing him, in a deliberate campaign of terror. While sailing from Spain to America in pursuit, his ship was captured by the pirate ship "The Sulphur Pit", commanded by Captain Satan. In a one-on-one fight, Syn defeated and killed Captain Satan to take command of his ship and crew; among them was Mr. Mipps, a former Royal Navy carpenter with whom Syn had become friends in England after rescuing him from the Customs men. Mipps swore loyalty to Syn from that time onward. With Mipps at his side, Syn turned to piracy and became a great success. Later, when his crew refused to let Syn leave, Syn and Mipps slipped away in one of the ship's boats; unknown to Syn, Mipps had arranged a convenient "accident" in the ship's powder magazine with an exploding barrel of gunpowder, eliminating witnesses to Syn's piratical acts. Mipps then joined Syn in his quest for revenge, pursuing Tappitt and Imogene throughout the thirteen American colonies (supposedly preaching the gospel to the Indians) and afterwards around the world (as part of a whaling voyage). Mipps was with him in the Caribbean when Dr. Syn turned again to piracy, assuming the name of Captain Clegg, taken from a vicious biting fly he had encountered in America. "Clegg" hijacked his enemy Tappitt's own ship and crew and sailed off with them (renaming the ship the "Imogene") to become the most infamous pirate of the day. However, a mulatto who had escaped the destruction of Syn's previous ship stowed away in Clegg's ship and accused him before the crew; Clegg quelled the potential mutiny by having the mulatto's tongue cut out and marooning him on a coral reef, and by violently killing Yellow Pete, the ship's Chinese cook, who had represented the crew in their wish to rescue the mulatto. Afterwards, realizing that Clegg had become too notorious, Syn decided to abandon his quest and return to England, and Mipps set up a second "accidental" explosion to destroy the "Imogene" and her crew. Syn returned to England on the night of a storm (13 November 1775) that wrecked his brig off the English coast in sight of Dymchurch. That night he went to the house of his old friend (and now squire) Anthony Cobtree. When news came that the local vicar had drowned while trying to save victims of the shipwreck, Squire Cobtree offered the post to Christopher Syn.
Syn accepted and settled down to a more respectable life as the vicar of Dymchurch and Dean of Peculiars in Romney Marsh, Kent, resuming his original name. Mipps arrived in Dymchurch with the intent of settling down. Syn made him the village sexton on condition that Mipps "remember to forget" (that Syn had been Clegg and that they had known each other before), and that Mipps never get involved with the local smugglers. Syn soon became aware that his parishioners were smuggling goods from France to avoid the excessive customs duties the government charged. Learning from Mipps (who, contrary to Syn's orders, had become a leader of the smugglers) that certain townsfolk had been ambushed and captured during a smuggling run, Syn purchased the great black stallion Gehenna from gypsy horse-traders and raced to their rescue. A suit of clothing borrowed from a scarecrow made an improvised disguise, and Syn and Mipps were able to rescue the townsfolk from the Dragoons. After this, Syn decided that he could only protect his people by becoming their leader. He created a more elaborate scarecrow costume, with eerie luminous paint. Riding Gehenna at night, the respectable Dr. Syn became "The Scarecrow", the feared head of the smugglers. Together with Mipps, he organized the smugglers into a well-drilled band of "Night Riders", also called "The Devil Riders", with macabre disguises and code-names. Syn's cunning was so great that the smugglers outwitted the government forces for many years. A hidden stable watched over by Mother Handaway, the local "witch" (who believed the Scarecrow to be the Devil in living form), was the hiding place for the horses of the Scarecrow and his lieutenants, Mipps and the local highwayman Jimmie Bone (who, being as good a horseman as Syn and of similar build, was sometimes called upon to impersonate the Scarecrow when Syn either had to be elsewhere or needed to be seen at the same time as the Scarecrow). Shortly after the first appearances of the Scarecrow, Nicholas Tappitt (using the name "Colonel Delacourt") and the ailing Imogene returned to England, ending up in Dymchurch. Recognizing Syn as Clegg, Tappitt realized that Syn and the Scarecrow were the same and helped the authorities set a trap for him, hoping both to rid himself of his enemy and to claim the reward for his capture. The trap was sprung, but Squire Cobtree's daughter Charlotte, who had fallen in love with Syn and had also learned his secret identities as both Clegg and the Scarecrow, became its tragic victim when she dressed in the Scarecrow's disguise and was fatally wounded as a result. Tappitt was then suspected of being the Scarecrow, and a Customs officer and three constables came to arrest him. In the ensuing fight, Tappitt killed the Customs man, and the constables subdued and arrested him for murdering the Customs officer. After Imogene's death in Syn's arms (during which she revealed to him that he had a son by her who was missing somewhere in America), Syn fought a final duel with Tappitt in his jail cell, defeating him. Syn then struck a bargain with Tappitt: if Tappitt confessed to being the notorious pirate Clegg, then Syn would look after and care for Tappitt and Imogene's new-born infant daughter (also named Imogene). Tappitt agreed, and "Captain Clegg" was hanged and later "buried without benefit of clergy at a cross-roads hard by the Kent Ditch." Many years later, Captain Collyer, a Royal Navy officer assigned to smash the local smuggling ring, uncovered the deception and Dr.
Syn's true identity, thanks in part to the tongueless mulatto (who had been rescued by Collyer years before and had been serving him as a "ferret", seeking out hidden contraband), who recognized Syn as Clegg. Syn evaded capture while at the same time making sure that Imogene and Squire Cobtree's son Denis (who had fallen in love with Imogene) would have a happy life together (they were eventually married), but he was murdered in revenge by the mulatto, who then mysteriously managed to escape, leaving Syn harpooned through the neck. As a last mark of respect, Collyer ordered that Syn be buried at sea rather than have his body hung in chains. Mipps escaped in the confusion of Syn's death and disappeared from England, but it is said that a little man very much like him is living out his days in a Buddhist monastery somewhere in the Malay Peninsula, delighting the monks by recounting the adventures of Doctor Syn and the eerie stories of the Romney Marsh and the mysterious Scarecrow and his Night Riders. The Dr. Syn books detail his adventures and his attempts to help the people of Dymchurch and the surrounding area evade the excise tax. Note that the "first" book, "Doctor Syn", is actually the final story chronologically; the others proceed in published sequence. An expanded version of "Doctor Syn Returns", titled "The Scarecrow Rides", was published for the US market by The Dial Press in 1935 and later reprinted in paperback by Black Curtain Press in 2013. In 1960, American author William Buchanan reworked Thorndike's "Further Adventures of Doctor Syn" under the title "Christopher Syn" (New York, Abelard Schuman), giving Thorndike co-authorship credit; this version provides a different conclusion and some conflation, renaming and even removal of the supporting characters. "Christopher Syn" became the basis for the 1962 Disney production (see below); there was also a novelization of the Disney theatrical version, titled "Doctor Syn, Alias the Scarecrow" and written by Vic Crume. Three film adaptations have been made of Dr. Syn's exploits. The first, "Doctor Syn" (1937), starred the actor George Arliss in the title role and was his last film. "Captain Clegg" (1962), known as "Night Creatures" in the U.S., was produced by Hammer Film Productions with actor Peter Cushing in the lead role and directed by Peter Graham Scott. In the screenplay by Anthony Hinds, the main character's name was changed from Doctor Syn to Parson Blyss to avoid rights problems with Disney's forthcoming version; otherwise the screenplay of "Captain Clegg" follows the novel "Doctor Syn" and the screenplay of the 1937 film closely, with the exception of a tightening of the plot. In the Arliss film "Doctor Syn", Syn escapes to sea with Mipps and the rest of the Dymchurch smugglers, whereas "Captain Clegg" ends more faithfully to the novel, with Parson Blyss being killed by the mulatto (who is then killed by Mipps) and then carried to and buried in Captain Clegg's empty grave by Mipps. "Captain Clegg" was released in the UK on DVD and Blu-ray in 2014; "Night Creatures" was never released on videotape in the United States, but is included in the 2014 two-disc DVD collection "The Hammer Horror Series". "The Scarecrow of Romney Marsh" (1963) was produced for the "Walt Disney's Wonderful World of Color" TV series. It was shot on location in England and was directed by James Neilson.
It stars Patrick McGoohan in the title role, with George Cole as Mipps and Sean Scully as John Banks, the younger son of Squire Banks (Michael Hordern). St Clement's Church in Old Romney doubled as Dr Syn's Dymchurch parish church in the production, and Disney funded the repair of the building in order to use it as a filming location. Part One dealt with the arrival of General Pugh (Geoffrey Keen), who had been ordered by the War Office to smash the smuggling ring and to prevent the Scarecrow from rescuing a Dymchurch man captured by a naval press gang as bait to trap the Scarecrow. Part Two depicted the Scarecrow dealing with the traitorous Joe Ransley (Patrick Wymark). Part Three showed how the Scarecrow rescued Harry Banks (David Buck) and the American Simon Bates (Tony Britton) from General Pugh's clutches in Dover Castle. While originally conceived and edited for American television (and announced in an advertisement by NBC in the Tuesday, July 9, 1963 issue of "The Hollywood Reporter"), "The Scarecrow of Romney Marsh" was re-edited for a British theatrical run before its American television debut. Retitled "Dr. Syn, Alias the Scarecrow", the British theatrical version was released on a double bill with "The Sword in the Stone" and ran during the 1963 Christmas season (advertised in the January 1964 issue of "Photoplay"). This version was shown in Europe as well as Central and South America through 1966. In the 1970s, the production was re-edited again for its first American theatrical release, on double bills with both "Snow White and the Seven Dwarfs" and "Treasure Island". (The VHS version of the 1980s, which likewise removed the Scarecrow's laugh from Terry Gilkyson's title song, was expanded to include the story material from all three TV episodes while retaining feature-film structure and credits; it was available for a relatively short time.) Shortly after the US theatrical run, it was re-edited once more for a two-part presentation on Disney's television series in the 1970s, simply omitting the middle segment. The original three-part version was first shown as part of "Walt Disney's Wonderful World of Color" on February 9, 16 and 23, 1964. It was later included in a late-1980s "Wonderful World of Disney" syndication rerun package and cablecast in the 1990s on the Disney Channel. This version generally followed the storyline of "The Further Adventures of Dr. Syn" and made it clear that Syn did not die or stage his own death: at the film's end, he is having a cup of tea with the Squire, who admits to now owing a debt of gratitude to the Scarecrow. On November 11, 2008, The Walt Disney Company released a limited pressing of 39,500 copies of "The Scarecrow of Romney Marsh" on DVD for the first time as part of a limited collectors' series, under the title "Dr. Syn: The Scarecrow of Romney Marsh". The issue sold out in three weeks, but as of February 17, 2009 the DVD was made available to members of the Disney Movie Club for $29.95. The two-disc set includes the American television version and the original British theatrical version, "Dr. Syn, Alias the Scarecrow", in widescreen format. It also includes the original introductions by Walt Disney (in which he erroneously indicates that Dr. Syn was an actual historical figure) and a documentary on Disney's interest in the property. In October 2019, the Disney Movie Club released the film on Blu-ray, this time titled "The Scarecrow of Romney Marsh."
The single disc contains the three episodes as originally broadcast in 1963, with Walt Disney's introductions (but with none of the supplemental features that appear on previous releases). Made in 1974, "Carry On Dick", of the celebrated "Carry On" series of films, followed the same premise of a country vicar (Sid James) who is secretly an outlaw, in this case the highwayman Dick Turpin. In 2001, a stage adaptation titled "Doctor Syn" was performed at churches throughout the Romney Marsh, the final night being performed in Dymchurch itself. The cast featured Daniel Thorndike (the author's son), Michael Fields, Steven Povey and Ben Barton, along with various amateurs from the area. Rufus Sewell read a 10-part audio adaptation combining and abridging "Doctor Syn on the High Seas" and "Doctor Syn Returns" for BBC Radio, broadcast on BBC Radio 7 in December 2006 and repeated in June 2007. A 10-part audio adaptation of "The Further Adventures of Doctor Syn" (combining and abridging "The Further Adventures of Doctor Syn" and "The Shadow of Doctor Syn"), also read by Rufus Sewell, was broadcast on BBC Radio 7 in December 2007. In April 2009, a third series was announced for broadcast later in 2009; BBC Radio 7 ultimately broadcast the six-part series, an abridged reading by Rufus Sewell of the original "Doctor Syn" novel, from January 4 to January 11, 2010. John Paul Jones of Led Zeppelin reinterpreted elements of the Doctor Syn story in his "No Quarter" fantasy sequence in Led Zeppelin's concert film "The Song Remains the Same". A three-issue adaptation of the Disney production was published by Gold Key Comics under the "Scarecrow of Romney Marsh" title, spanning April 1964 through October 1965. A much-abridged revision of the adventures of Dr. Syn appeared as a short comic serialized in the monthly publication "Disney Adventures". The new story features the heroic doctor and his young sidekick protecting innocent villagers from corrupt government officials and soldiers. "Disney Adventures" also produced a crossover story with the "Pirates of the Caribbean" franchise in which Dr. Syn meets Captain Jack Sparrow. Doctor Syn appears in the "League of Extraordinary Gentlemen" series as a member of the league gathered by Lemuel Gulliver. His alter ego, Captain Clegg, also makes appearances, and is mentioned to have had a brief romantic liaison with future teammate Fanny Hill. In the 2003 film adaptation of "League", Dr. Syn can be spotted in one of the portraits hanging on the wall in M's library. A "Days of Syn" festival is held in even-numbered years by Dymchurch residents for fund-raising. The 2006 "Days of Syn" took place on 26–28 August (the UK August Bank Holiday weekend) and featured a talk on Dr. Syn at the Anglican church at 6:30 p.m. On the Sunday at 3 p.m. there was a church service at which Dr. Syn and the cast appeared in period costume. On the Monday, starting at the Bowery Hall, scenes from "Doctor Syn" were reenacted, and again during the day along the Dymchurch shoreline and in the Ocean pub. In 2009, discussions took place about building a 100 ft high statue of "The Scarecrow" on a site in the centre of Romney Marsh; as of 2016 this had not been done. Doctor Syn is also the name given to one of the locomotives on the Romney, Hythe and Dymchurch Railway. Doctor Syn also inspired novelist George Chittenden, whose debut "The Boy Who Led Them" depicts smuggling on the Kent coast, following the rise and fall of a smuggling gang leader further down the coast in the notorious town of Deal.
https://en.wikipedia.org/wiki?curid=8708
Dhrystone Dhrystone is a synthetic computing benchmark program developed in 1984 by Reinhold P. Weicker, intended to be representative of system (integer) programming. Dhrystone grew to become representative of general processor (CPU) performance. The name "Dhrystone" is a pun on a different benchmark algorithm called Whetstone. In developing Dhrystone, Weicker gathered metadata from a broad range of software, including programs written in FORTRAN, PL/1, SAL, ALGOL 68, and Pascal. He then characterized these programs in terms of various common constructs: procedure calls, pointer indirections, assignments, and so on. From this he wrote the Dhrystone benchmark to correspond to a representative mix. Dhrystone was published in Ada, with the C version for Unix developed by Rick Richardson ("version 1.1") greatly contributing to its popularity. The Dhrystone benchmark contains no floating-point operations, thus the name is a pun on the then-popular Whetstone benchmark for floating-point operations. The output of the benchmark is the number of Dhrystones per second (the number of iterations of the main code loop per second). Both Whetstone and Dhrystone are "synthetic" benchmarks, meaning that they are simple programs carefully designed to statistically mimic the processor usage of some common set of programs. Whetstone, developed in 1972, originally strove to mimic typical Algol 60 programs based on measurements from 1970, but eventually became most popular in its Fortran version, reflecting the highly numerical orientation of computing in the 1960s. Dhrystone's eventual importance as an indicator of general-purpose ("integer") performance of new computers made it a target for commercial compiler writers. Various modern compiler static code-analysis techniques (such as the elimination of dead code: for example, code which uses the processor but produces internal results that are never used or output) make the use and design of synthetic benchmarks more difficult. Version 2.0 of the benchmark, released by Weicker and Richardson in March 1988, made a number of changes intended to foil a range of such compiler techniques, yet was carefully crafted so as not to change the underlying benchmark. This effort to foil compilers was only partly successful. Dhrystone 2.1, released in May of the same year, made some minor changes and remains the current definition of Dhrystone. Beyond issues related to compiler optimization, various other problems have been cited with Dhrystone. Most of these, including the small code size and small data-set size, were understood at the time of its publication in 1984. More subtle is the slight over-representation of string operations, which is largely language-related: both Ada and Pascal have strings as normal variables in the language, whereas C does not, so what was simple variable assignment in the reference benchmarks became buffer-copy operations in the C library. Another issue is that the reported score does not include information that is critical when comparing systems, such as which compiler and which optimization settings were used. Dhrystone remains remarkably resilient as a simple benchmark, but its continuing value in establishing true performance is questionable. It is easy to use, well documented, fully self-contained, well understood, and can be made to work on almost any system.
In particular, it has remained in broad use in the embedded computing world, though the more recently developed EEMBC benchmark suite, HINT, Stream, and even Bytemark are widely quoted and used, as are more specific benchmarks for the memory subsystem (Cachebench), TCP/IP (TTCP), and many others. Dhrystone may represent a result more meaningfully than MIPS (million instructions per second), because instruction-count comparisons between different instruction sets (e.g. RISC vs. CISC) can confound simple comparisons. For example, the same high-level task may require many more instructions on a RISC machine, but those might execute faster than a single CISC instruction. Thus, the Dhrystone score counts only the number of program-iteration completions per second, allowing individual machines to perform this calculation in a machine-specific way. Another common representation of the Dhrystone benchmark is DMIPS (Dhrystone MIPS), obtained when the Dhrystone score is divided by 1757 (the number of Dhrystones per second obtained on the VAX 11/780, nominally a 1 MIPS machine). Results can also be reported as DMIPS/MHz, where the DMIPS result is further divided by the CPU frequency, to allow easier comparison of CPUs running at different clock rates. Using Dhrystone as a benchmark nonetheless has several pitfalls.
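To make the arithmetic above concrete, here is a minimal Python sketch. It is not the real Dhrystone source; the dummy workload and the 3000 MHz clock are invented for the example, and only the conversion of an iterations-per-second score into DMIPS and DMIPS/MHz (using the 1757 divisor given above) reflects the text.

```python
import time

# Minimal sketch: NOT the real Dhrystone benchmark. It shows only how a
# raw "iterations per second" score is converted into DMIPS and DMIPS/MHz.

VAX_11_780_SCORE = 1757.0  # Dhrystones/sec of the nominal 1 MIPS VAX 11/780

def dummy_iteration() -> int:
    """Stand-in workload; real Dhrystone mixes assignments, procedure
    calls, string copies, and pointer indirections."""
    total = 0
    for i in range(100):
        total += (i * 3) % 7
    return total

def run_benchmark(iterations: int = 200_000) -> float:
    """Run the workload and return iterations completed per second."""
    start = time.perf_counter()
    for _ in range(iterations):
        dummy_iteration()
    elapsed = time.perf_counter() - start
    return iterations / elapsed

score = run_benchmark()
dmips = score / VAX_11_780_SCORE   # score relative to the VAX 11/780
cpu_mhz = 3000.0                   # assumed clock of the test machine (MHz)
print(f"{score:,.0f} iterations/s -> {dmips:,.1f} DMIPS "
      f"-> {dmips / cpu_mhz:.3f} DMIPS/MHz")
```

Note that, as the article observes, such a score is meaningless for comparison unless the compiler (here, the Python interpreter) and optimization settings are reported alongside it.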
https://en.wikipedia.org/wiki?curid=8709
Dave Winer Dave Winer (born May 2, 1955 in Queens, New York City) is an American software developer, entrepreneur, and writer who resides in New York City. Winer is noted for his contributions to outliners, scripting, content management, and web services, as well as to blogging and podcasting. He is the founder of the software companies Living Videotext, UserLand Software and Small Picture Inc., a former contributing editor for the Web magazine HotWired, the author of the "Scripting News" weblog, a former research fellow at Harvard Law School, and a current visiting scholar at New York University's Arthur L. Carter Journalism Institute. Winer was born on May 2, 1955, in Queens, New York City, the son of Eve Winer, Ph.D., a school psychologist, and Leon Winer, Ph.D., a former professor at the Columbia University Graduate School of Business. Winer is also a grandnephew of German novelist Arno Schmidt and a relative of Hedy Lamarr. He graduated from the Bronx High School of Science in 1972. Winer received a BA in Mathematics from Tulane University in New Orleans in 1976, and in 1978 an MS in Computer Science from the University of Wisconsin–Madison. In 1979 Winer became an employee of Personal Software, where he worked on his own product idea named VisiText, his first attempt to build a commercial product around an "expand and collapse" outline display and the work that ultimately established outliners as a software product. In 1981 he left the company and founded Living Videotext to develop the still-unfinished product. The company was based in Mountain View, CA, and grew to more than 50 employees. ThinkTank, which was based on VisiText, was released in 1983 for the Apple II and promoted as an "idea processor". It became the "first popular outline processor, the one that made the term generic." A ThinkTank release for the IBM PC followed in 1984, as well as releases for the Macintosh 128K and 512K. Ready, a RAM-resident outliner for the IBM PC released in 1985, was commercially successful but soon succumbed to the competing Sidekick product from Borland. MORE, released for Apple's Macintosh in 1986, combined an outliner and a presentation program. It became "uncontested in the marketplace" and won MacUser's Editor's Choice Award for "Best Product" in 1986. In 1987, at the height of the company's success, Winer sold Living Videotext to Symantec for an undisclosed but substantial transfer of stock that "made his fortune." Winer continued to work at Symantec's Living Videotext division, but after six months he left the company in pursuit of other challenges. Winer founded UserLand Software in 1988 and served as the company's CEO until 2002. UserLand's original flagship product, Frontier, was a system-level scripting environment for the Mac; Winer's pioneering weblog, "Scripting News", takes its name from this early interest. Frontier was an outliner-based scripting language, echoing Winer's longstanding interest in outliners and anticipating the code-folding editors of the late 1990s. Winer became interested in web publishing while helping automate the production process of the strikers' online newspaper during San Francisco's newspaper strike of November 1994. According to Newsweek, through this experience he "revolutionized Net publishing." Winer subsequently shifted the company's focus to online publishing products, enthusiastically promoting and experimenting with these products while building his websites and developing new features.
One of these products was Frontier's NewsPage Suite of 1997, which supported the publication of Winer's "Scripting News" and was adopted by a handful of users who "began playing around with their own sites in the Scripting News vein." Notable among these users were Chris Gulker and Jorn Barger, who envisaged blogging as a networked practice among users of the software. Winer was named a Seybold Fellow in 1997, to assist the executives and editors who comprised the Seybold Institute in ensuring "the highest quality and topicality" in their educational program, the Seybold Seminars; the honour was bestowed for his "pioneering work in web-based publishing systems." Keen to enter the "competitive arena of high-end Web development," Winer then came to collaborate with Microsoft, jointly developing the XML-RPC protocol. This led to the creation of SOAP, which he co-authored with Microsoft's Don Box, Bob Atkinson, and Mohsen Al-Ghosein. In December 1997, acting on the desire to "offer much more timely information," Winer designed and implemented an XML syndication format for use on his "Scripting News" weblog, thus making an early contribution to the history of web syndication technology. By December 2000, competing dialects of RSS included several varieties of Netscape's RSS, Winer's RSS 0.92, and an RDF-based RSS 1.0. Winer continued to develop the branch of the RSS fork originating from RSS 0.92, releasing in 2002 a version called RSS 2.0. Winer's advocacy of web syndication in general, and RSS 2.0 in particular, convinced many news organizations to syndicate their news content in that format; for example, in early 2002 "The New York Times" entered an agreement with UserLand to syndicate many of its articles in RSS 2.0 format. Winer resisted calls by technologists to have the shortcomings of RSS 2.0 improved. Instead, he froze the format and turned its ownership over to Harvard University. With products and services based on UserLand's Frontier system, Winer became a leader in blogging tools from 1999 onwards, as well as a "leading evangelist of weblogs." In 2000 Winer developed the Outline Processor Markup Language (OPML), an XML format for outlines, which originally served as the native file format for Radio UserLand's outliner application and has since been adopted for other uses, the most common being the exchange of lists of web feeds between web-feed aggregators. UserLand was the first to add an "enclosure" tag to its RSS, modifying its blog software and its aggregator so that bloggers could easily link to an audio file (see podcasting and history of podcasting). In February 2002 Winer was named one of the "Top Ten Technology Innovators" by InfoWorld. In June 2002 Winer underwent life-saving bypass surgery to prevent a heart attack and, as a consequence, stepped down as CEO of UserLand shortly after. He remained the firm's majority shareholder, however, and claimed personal ownership of Weblogs.com. As "one of the most prolific content generators in Web history," Winer has enjoyed a long career as a writer and has come to be counted among Silicon Valley's "most influential web voices." Winer started "DaveNet", "a stream-of-consciousness newsletter distributed by e-mail", in November 1994 and maintained Web archives of the "goofy and informative" 800-word essays from January 1995 onward, which earned him a Cool Site of the Day award in March 1995. From the start, the "Internet newsletter" "DaveNet" was widely read among industry leaders and analysts, who experienced it as a "real community."
Dissatisfied with the quality of the coverage that the Mac and, especially, his own Frontier software received in the trade press, Winer saw "DaveNet" as an opportunity to "bypass" the conventional news channels of the software business. Satisfied with his success, he "reveled in the new direct email line he had established with his colleagues and peers, and in his ability to circumvent the media." In the early years, Winer often used "DaveNet" to vent his grievances against Apple's management, and as a consequence of his strident criticism came to be seen as "the most notorious of the disgruntled Apple developers." Redacted "DaveNet" columns were published weekly by the web magazine "HotWired" between June 1995 and May 1996. "DaveNet" was discontinued in 2004. Winer's "Scripting News", described as "one of the [web's] oldest blogs," launched in February 1997 and earned him titles such as "protoblogger" and "forefather of blogging." "Scripting News" started as "a home for links, offhand observations, and ephemera" and allowed Winer to mix "his roles as a widely read pundit and an ambitious entrepreneur." Offering an "as-it-happened portrait of the work of writing software for the Web in the 1990s," the site became an "established must-read for industry insiders." "Scripting News" continues to be updated regularly. Winer spent one year as a resident fellow at Harvard Law School's Berkman Center for Internet & Society, where he worked on using weblogs in education. While there, he launched "Weblogs at Harvard Law School" using UserLand software, and held the first BloggerCon conferences. Winer's fellowship ended in June 2004. In 2010 Winer was appointed Visiting Scholar at New York University's Arthur L. Carter Journalism Institute. On December 19, 2012, Winer co-founded Small Picture, Inc. with Kyle Shank; Small Picture is a corporation that builds two outlining products, Little Outliner and Fargo. Little Outliner, an entry-level outliner designed to teach new users about outliners, launched on March 25, 2013. Fargo, the company's "primary product", launched less than a month later, on April 17, 2013; it is a free browser-based outliner that syncs with a user's Dropbox account. Small Picture has stated that in future it may offer paid-for services to Fargo users. In February 1996, while working as a columnist for HotWired, Winer organized 24 Hours of Democracy, an online protest against the recently passed Communications Decency Act. As part of the protest, over 1,000 people, among them Microsoft chairman Bill Gates, posted essays to the Web on the subject of democracy, civil liberty and freedom of speech. In December 1999, Winer became the "proprietor of a growing free blog service" at EditThisPage.com, hosting "approximately 20,000 sites" in February 2001. The service closed in December 2005. Winer has been given "credit for the invention of the podcasting model." Having received user requests for audioblogging features since October 2000, especially from Adam Curry, Winer decided to include new functionality in RSS 0.92 by defining a new element called "enclosure", which would pass the address of a media file to the RSS aggregator. He demonstrated the RSS enclosure feature on January 11, 2001 by enclosing a Grateful Dead song in his "Scripting News" weblog. Winer's weblogging product, Radio UserLand, the program favored by Curry, had a built-in aggregator and thus provided both the "send" and "receive" components of what was then called audioblogging.
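As an illustration of the mechanism just described, the sketch below builds an RSS 2.0 item carrying an enclosure element. The url, length (bytes), and type (MIME type) attributes are the element's standard attributes in RSS 2.0; the item title, URLs, and file size are invented sample values.

```python
import xml.etree.ElementTree as ET

# Sketch of an RSS 2.0 <item> carrying an "enclosure" element, the
# mechanism for passing a media file's address to an aggregator.
# All concrete values below are made-up examples.

item = ET.Element("item")
ET.SubElement(item, "title").text = "Sample audio post"
ET.SubElement(item, "link").text = "http://example.org/2001/01/11.html"
ET.SubElement(item, "enclosure", {
    "url": "http://example.org/audio/song.mp3",  # where the media file lives
    "length": "12345678",                        # file size in bytes
    "type": "audio/mpeg",                        # MIME type of the file
})

print(ET.tostring(item, encoding="unicode"))
# An aggregator reading the feed can fetch the file at the enclosure's
# URL, supplying the "receive" half of audioblogging/podcasting.
```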
In July 2003 Winer challenged other aggregator developers to provide support for enclosures. In October 2003, Kevin Marks demonstrated a script to download RSS enclosures and pass them to iTunes for transfer to an iPod. Curry then offered an RSS-to-iPod script that moved MP3 files from Radio UserLand to iTunes. The term "podcasting" was suggested by Ben Hammersley in February 2004. Winer also has an occasional podcast, Morning Coffee Notes, which has featured guests such as Doc Searls, Mike Kowalchik, Jason Calacanis, Steve Gillmor, Peter Rojas, Cecile Andrews, Adam Curry, Betsy Devine and others. BloggerCon is a user-focused conference for the blogger community. BloggerCon I (October 2003) and II (April 2004) were organized by Dave Winer and friends at Harvard Law School's Berkman Center for Internet and Society in Cambridge, Massachusetts. BloggerCon III met at Stanford Law School on November 6, 2004. Weblogs.com provided a free ping server used by many blogging applications, as well as free hosting to many bloggers. After leaving UserLand, Winer claimed personal ownership of the site, and in mid-June 2004 he shut down its free blog-hosting service, citing lack of resources and personal problems. A swift and orderly migration off Winer's server was facilitated by Rogers Cadenhead, whom Winer then hired to port the server to a more stable platform. In October 2005, VeriSign bought the Weblogs.com ping server from Winer and promised that its free services would remain free. The podcasting-related website audio.weblogs.com was also included in the $2.3 million deal. Winer opened his self-described "commons for sharing outlines, feeds, and taxonomy" in May 2006. The site allowed users to publish and syndicate blogrolls and aggregator subscriptions using OPML. Winer suspended the service in January 2008. Since 2009, Winer has collaborated with New York University's associate professor of journalism Jay Rosen on "Rebooting the News", a weekly podcast on technology and innovation in journalism. It was announced on July 1, 2011 that the show would be on break, as NYU itself was, from June to September. However, no new episodes have been released since, making show #94, released on May 23, 2011, the last.
https://en.wikipedia.org/wiki?curid=8713
Taiko Taiko are a broad range of Japanese percussion instruments. They have a mythological origin in Japanese folklore, but historical records suggest that taiko were introduced to Japan through Korean and Chinese cultural influence as early as the 6th century CE. Some taiko are similar to instruments originating from India. Archaeological evidence also supports the view that taiko were present in Japan during the 6th century in the Kofun period. Their function has varied throughout history, ranging from communication, military action, theatrical accompaniment, and religious ceremony to both festival and concert performances. In modern times, taiko have also played a central role in social movements for minorities both within and outside Japan. "Kumi-daiko" performance, characterized by an ensemble playing on different drums, was developed in 1951 through the work of Daihachi Oguchi and has continued with groups such as Kodo. Other performance styles, such as "hachijō-daiko", have also emerged from specific communities in Japan. "Kumi-daiko" performance groups are active not only in Japan, but also in the United States, Australia, Canada, Europe, Taiwan, and Brazil. Taiko performance consists of many components, including technical rhythm, form, stick grip, clothing, and the particular instrumentation. Ensembles typically use different types of barrel-shaped "nagadō-daiko" as well as smaller "shime-daiko". Many groups accompany the drums with vocals, strings, and woodwind instruments. The origin of the instruments is unclear, though there have been many suggestions. Historical accounts, of which the earliest date from 588 CE, note that young Japanese men traveled to Korea to study the kakko, a drum that originated in South China. This study and appropriation of Chinese instruments may have influenced the emergence of taiko. Certain court music styles, especially gigaku and gagaku, arrived in Japan through both Korea and China. In both traditions, dancers were accompanied by several instruments that included drums similar to taiko. Certain percussive patterns and terminology in togaku, an early dance and music style in Japan, in addition to physical features of the kakko, also reflect influence from both China and India on drum use in gagaku performance. Archaeological evidence shows that taiko were used in Japan as early as the 6th century CE, during the latter part of the Kofun period, and were likely used for communication, in festivals, and in other rituals. This evidence was substantiated by the discovery of haniwa statues in the Sawa District of Gunma Prefecture. Two of these figures are depicted playing drums; one of them, wearing skins, is equipped with a barrel-shaped drum hung from his shoulder and uses a stick to play the drum at hip height. This statue is titled "Man Beating the Taiko" and is considered the oldest evidence of taiko performance in Japan. Similarities between the playing style demonstrated by this haniwa and known music traditions in Korea and China further suggest influences from these regions. The "Nihon Shoki", the second oldest book of Japanese classical history, contains a mythological story describing the origin of taiko. The myth tells how Amaterasu, who had sealed herself inside a cave in anger, was beckoned out by the elder goddess Ame-no-Uzume when others had failed. Ame-no-Uzume accomplished this by emptying out a barrel of sake and dancing furiously on top of it. Historians regard her performance as the mythological creation of taiko music.
In feudal Japan, taiko were often used to motivate troops, call out orders or announcements, and set a marching pace; marches were usually set to six paces per beat of the drum. During the 16th-century Warring States period, specific drum calls were used to communicate orders for retreating and advancing. Other rhythms and techniques were detailed in period texts. According to the war chronicle "Gunji Yoshū", nine sets of five beats would summon an ally to battle, while nine sets of three beats, sped up three or four times, were the call to advance and pursue an enemy. Folklore from the 16th century about the legendary 6th-century Emperor Keitai offers a story that he obtained a large drum from China and gave it a name. The Emperor was thought to have used it to both encourage his own army and intimidate his enemies. Taiko have been incorporated in Japanese theatre for rhythmic needs, general atmosphere, and, in certain settings, decoration. In the kabuki play "The Tale of Shiroishi and the Taihei Chronicles", scenes in the pleasure quarters are accompanied by taiko to create dramatic tension. Noh theatre also features taiko, where performance consists of highly specific rhythmic patterns. The school of drumming, for example, contains 65 basic patterns in addition to 25 special patterns; these patterns are categorized in several classes. Differences between these patterns include changes in tempo, accent, dynamics, pitch, and function in the theatrical performance. Patterns are also often connected together in progressions. Taiko continue to be used in gagaku, a classical music tradition typically performed at the Tokyo Imperial Palace in addition to local temples and shrines. In gagaku, one component of the art form is traditional dance, which is guided in part by the rhythm set by the taiko. Taiko have played an important role in many local festivals across Japan. They are also used to accompany religious ritual music. In kagura, a category of music and dances stemming from Shinto practices, taiko frequently appear alongside other performers during local festivals. In Buddhist traditions, taiko are used for ritual dances that are a part of the Bon Festival. Taiko, along with other instruments, are featured atop towers that are adorned with red-and-white cloth and serve to provide rhythms for the dancers who are encircled around the performers. In addition to the instruments, the term "taiko" also refers to the performance itself, and commonly to one style called "kumi-daiko", or ensemble-style playing (as opposed to festival performances, rituals, or theatrical use of the drums). "Kumi-daiko" was developed by Daihachi Oguchi in 1951. He is considered a master performer and helped transform taiko performance from its roots in traditional settings in festivals and shrines. Oguchi was trained as a jazz musician in Nagano, and at one point, a relative gave him an old piece of written taiko music. Unable to read the traditional and esoteric notation, Oguchi found help to transcribe the piece, and on his own added rhythms and transformed the work to accommodate multiple taiko players on different-sized instruments. Each instrument served a specific purpose that established present-day conventions in "kumi-daiko" performance. Oguchi's ensemble, Osuwa Daiko, incorporated these alterations and other drums into their performances. They also devised novel pieces that were intended for non-religious performances. Several other groups emerged in Japan through the 1950s and 1960s.
Oedo Sukeroku Daiko was formed in Tokyo in 1959 under Seidō Kobayashi, and has been referred to as the first taiko group that toured professionally. Globally, "kumi-daiko" performance became more visible during the 1964 Summer Olympics in Tokyo, when it was featured during the Festival of Arts event. "Kumi-daiko" was also developed through the leadership of Den Tagayasu, who gathered young men who were willing to devote their entire lifestyle to taiko playing and took them to Sado Island for training, where Den and his family had settled in 1968. Den chose the island based on a desire to reinvigorate the folk arts in Japan, particularly taiko; he became inspired by a drumming tradition unique to Sado called ondeko that required considerable strength to play well. Den called the group "Za Ondekoza", or Ondekoza for short, and implemented a rigorous set of exercises for its members, including long-distance running. In 1975, Ondekoza was the first taiko group to tour in the United States. Their first performance occurred just after the group finished running the Boston Marathon while wearing their traditional uniforms. In 1981, some members of Ondekoza split from Den and formed another group called Kodo under the leadership of Eitetsu Hayashi. Kodo continued to use Sado Island for rigorous training and communal living, and went on to popularize taiko through frequent touring and collaborations with other musical performers. Kodo is one of the most recognized taiko groups both in Japan and worldwide. Estimates of the number of active taiko groups in Japan run as high as 5,000, but more conservative assessments place the number closer to 800, based on membership in the Nippon Taiko Foundation, the largest national organization of taiko groups. Some pieces that emerged from early "kumi-daiko" groups, such as "Yatai-bayashi" from Ondekoza, continue to be performed, along with pieces from Osuwa Daiko and Kodo. Taiko have been developed into a broad range of percussion instruments that are used in both Japanese folk and classical musical traditions. An early classification system based on shape and tension was advanced by Francis Taylor Piggott in 1909. Taiko are generally classified based on the construction process, or the specific context in which the drum is used, but some are not classified, such as the toy den-den daiko. With few exceptions, taiko have a drum shell with heads on both sides of the body, and a sealed resonating cavity. The head may be fastened to the shell using a number of different systems, such as ropes. Taiko may be either tunable or non-tunable depending on the system used. Taiko are categorized into three types based on construction process. "Byō-uchi-daiko" are constructed with the drumhead nailed to the body. "Shime-daiko" are classically constructed with the skin placed over iron or steel rings, which are then tightened with ropes. Contemporary "shime-daiko" are tensioned using bolt or turnbuckle systems attached to the drum body. "Tsuzumi" are also rope-tensioned drums, but have a distinct hourglass shape, and their skins are made using deerskin. "Byō-uchi-daiko" were historically made only from a single piece of wood; they continue to be made in this manner, but are also constructed from staves of wood. Larger drums can be made using a single piece of wood, but at a much greater cost due to the difficulty in finding appropriate trees. The preferred wood is the Japanese zelkova or "keyaki", but a number of other woods, and even wine barrels, have been used to create taiko.
"Byō-uchi-daiko" cannot be tuned. The typical "byō-uchi-daiko" is the "nagadō-daiko", an elongated drum that is roughly shaped like a wine barrel. "Nagadō-daiko" are available in a variety of sizes, and their head diameter is traditionally measured in shaku (units of roughly 30 cm). Head diameters range from . are the smallest of these drums and are usually about in diameter. The "chū-daiko " is a medium-sized "nagadō-daiko" ranging from , and weighing about . vary in size, and are often as large as in diameter. Some "ō-daiko" are difficult to move due to their size, and therefore permanently remain inside the performance space, such as temple or shrine. "Ō-daiko" means "large drum" and for a given ensemble, the term refers to their largest drum. The other type of "byō-uchi-daiko" is called a and can be any drum constructed such that the head diameter is greater than the length of the body. "Shime-daiko" are a set of smaller, roughly snare drum-sized instrument that are tunable. The tensioning system usually consists of hemp cords or rope, but bolt or turnbuckle systems have been used as well. , sometimes referred to as "taiko" in the context of theater, have thinner heads than other kinds of shime-daiko. The head includes a patch of deerskin placed in the center, and in performance, drum strokes are generally restricted to this area. The is a heavier type of "shime-daiko". They are available in sizes 1–5, and are named according to their number: "namitsuke" (1), "nichō-gakke" (2), "sanchō-gakke" (3), "yonchō-gakke" (4), and "gochō-gakke" (5). The "namitsuke" has the thinnest skins and the shortest body in terms of height; thickness and tension of skins, as well as body height, increase toward the "gochō-gakke". The head diameters of all "shime-daiko" sizes are around . "Okedō-daiko" or simply "okedō", are a type of "shime-daiko" that are stave-constructed using narrower strips of wood, have a tube-shaped frame. Like other "shime-daiko", drum heads are attached by metal hoops and fastened by rope or cords. "Okedō" can be played using the same drumsticks (called "bachi") as "shime-daiko", but can also be hand-played. "Okedō" come in short- and long-bodied types. "Tsuzumi" are a class of hourglass-shaped drums. The drum body is shaped on a spool and the inner body carved by hand. Their skins can be made from cowhide, horsehide, or deerskin. While the "ō-tsuzumi" skins are made from cowhide, "ko-tsuzumi" are made from horsehide. While some classify "tsuzumi" as a type of taiko, others have described them as a drum entirely separate from taiko. Taiko can also be categorized by the context in which they are used. The "miya-daiko", for instance, is constructed in the same manner as other "byō-uchi-daiko", but is distinguished by an ornamental stand and is used for ceremonial purposes at Buddhist temples. The (a "ko-daiko") and (a "nagadō-daiko" with a cigar-shaped body) are used in sumo and festivals respectively. Several drums, categorized as "gagakki", are used in the Japanese theatrical form, gagaku. The lead instrument of the ensemble is the kakko, which is a smaller "shime-daiko" with heads made of deerskin, and is placed horizontally on a stand during performance. A "tsuzumi", called the "san-no-tsuzumi" is another small drum in gagaku that is placed horizontally and struck with a thin stick. are the largest drums of the ensemble, and have heads that are about in diameter. 
Taiko can also be categorized by the context in which they are used. The "miya-daiko", for instance, is constructed in the same manner as other "byō-uchi-daiko", but is distinguished by an ornamental stand and is used for ceremonial purposes at Buddhist temples. A "ko-daiko" is used in sumo, and a "nagadō-daiko" with a cigar-shaped body is used in festivals. Several drums, categorized as "gagakki", are used in the Japanese theatrical form, gagaku. The lead instrument of the ensemble is the kakko, a smaller "shime-daiko" with heads made of deerskin, which is placed horizontally on a stand during performance. A "tsuzumi" called the "san-no-tsuzumi" is another small drum in gagaku that is placed horizontally and struck with a thin stick. "Dadaiko" are the largest drums of the ensemble. During performance, the drum is placed on a tall pedestal and surrounded by a rim decoratively painted with flames and adorned with mystical figures such as wyverns. "Dadaiko" are played while standing, and are usually only played on the downbeat of the music. The "tsuri-daiko" is a smaller drum that produces a lower sound. It is used in ensembles that accompany bugaku, a traditional dance performed at the Tokyo Imperial Palace and in religious contexts. "Tsuri-daiko" are suspended on a small stand and are played sitting down. "Tsuri-daiko" performers typically use shorter mallets covered in leather knobs instead of bachi. They can be played simultaneously by two performers; while one performer plays on the head, another performer uses bachi on the body of the drum. The larger "ō-tsuzumi" and smaller "ko-tsuzumi" are used in the opening and dances of Noh theater. Both drums are struck using the fingers; players can also adjust pitch by manually applying pressure to the ropes on the drum. The color of the cords of these drums also indicates the skill of the musician: orange and red for amateur players, light blue for performers with expertise, and lilac for masters of the instrument. "Nagauta-shime daiko", or "uta daiko", are also featured in Noh performance. Many taiko in Noh are also featured in kabuki performance and are used in a similar manner. In addition to the "ō-tsuzumi", "ko-tsuzumi", and "nagauta-shime daiko", kabuki performances make use of the larger "ō-daiko" offstage to help set the atmosphere for different scenes. Taiko construction has several stages, including the making and shaping of the drum body (or shell), the preparation of the drum skin, and the tuning of the skin to the drumhead. Variations in the construction process often occur in the latter two parts of this process. Historically, "byō-uchi-daiko" were crafted from trunks of the Japanese zelkova tree that were dried out over years, using techniques to prevent splitting. A master carpenter then carved out the rough shape of the drum body with a chisel; the texture of the wood after carving softened the tone of the drum. In contemporary times, taiko are carved out on a large lathe using wood staves or logs that can be shaped to fit drum bodies of various sizes. Drum bodies can be left to air-dry over a period of years, but some companies use large, smoke-filled warehouses to hasten the drying process. After drying is complete, the inside of the drum is worked with a deep-grooved chisel and sanded. Lastly, handles are placed onto the drum. These are used to carry smaller drums, and they serve an ornamental purpose on larger drums. The skins or heads of taiko are generally made from the hides of Holstein cows aged about three or four years. Skins also come from horses, and bull skin is preferred for larger drums. Thinner skins are preferred for smaller taiko, and thicker skins for larger ones. On some drumheads, a patch of deerskin placed in the center serves as the target for many strokes during performance. Before the skin is fitted to the drum body, the hair is removed from the hide by soaking it in a river or stream for about a month; winter months are preferred, as colder temperatures better facilitate hair removal. To stretch the skin over the drum properly, one process requires the body to be held on a platform with several hydraulic jacks underneath it.
The edges of the cowhide are secured to an apparatus below the jacks, and the jacks stretch the skin incrementally to precisely apply tension across the drumhead. Other forms of stretching use rope or cords with wooden dowels or an iron wheel to create appropriate tension. Small tension adjustments can be made during this process using small pieces of bamboo that twist around the ropes. Particularly large drumheads are sometimes stretched by having several workers, clad in stockings, hop rhythmically atop them, forming a circle along the edge. After the skin has dried, tacks, called "byō", are added to the appropriate drums to secure it; "chū-daiko" require about 300 of them for each side. After the body and skin have been finished, excess hide is cut off and the drum can be stained as needed. Several companies specialize in the production of taiko. One such company, Miyamoto Unosuke Shoten in Tokyo, which created drums exclusively for the Emperor of Japan, has been making taiko since 1861. The Asano Taiko Corporation is another major taiko-producing organization and has been producing taiko for over 400 years. The family-owned business started in Mattō, Ishikawa, and, aside from military equipment, made taiko for Noh theater and later expanded to creating instruments for festivals during the Meiji period. Asano currently maintains an entire complex of large buildings referred to as Asano Taiko Village, and the company reports producing up to 8,000 drums each year. As of 2012, there is approximately one major taiko production company in each prefecture of Japan, with some regions having several companies. Of the manufacturers in Naniwa, Taikoya Matabē is one of the most successful and is thought to have brought considerable recognition to the community and attracted many drum makers there. Umetsu Daiko, a company that operates in Hakata, has been producing taiko since 1821. Taiko performance styles vary widely across groups in terms of the number of performers, repertoire, instrument choices, and stage techniques. Nevertheless, a number of early groups have had broad influence on the tradition. For instance, many pieces developed by Ondekoza and Kodo are considered standard in many taiko groups. Kata is the posture and movement associated with taiko performance. The notion is similar to that of kata in martial arts: for example, both traditions include the idea that the hara is the center of being. Author Shawn Bender argues that kata is the primary feature that distinguishes different taiko groups from one another and is a key factor in judging the quality of performance. For this reason, many practice rooms intended for taiko contain mirrors to provide visual feedback to players. An important part of kata in taiko is keeping the body stabilized while performing, which can be accomplished by keeping a wide, low stance, with the left knee bent over the toes and the right leg straight. It is important that the hips face the drum and the shoulders are relaxed. Some teachers note a tendency to rely on the upper body while playing and emphasize the importance of the holistic use of the body during performance. Some groups in Japan, particularly those active in Tokyo, also emphasize the importance of the lively and spirited "iki" aesthetic. In taiko, it refers to very specific kinds of movement while performing that evoke the sophistication stemming from the mercantile and artisan classes active during the Edo period (1603–1868).
The sticks used to play taiko are called "bachi", and are made in various sizes and from different kinds of wood, such as white oak, bamboo, and Japanese magnolia. "Bachi" are also held in a number of different styles. In "kumi-daiko", it is common for a player to hold their sticks in a relaxed manner between the V-shape of the index finger and thumb, pointing toward the player. There are other grips that allow performers to play much more technically difficult rhythms, such as the "shime" grip, which is similar to a matched grip: the "bachi" are gripped at the back end, and the fulcrum rests between the performer's index finger and thumb, while the other fingers remain relaxed and slightly curled around the stick. Performance in some groups is also guided by principles based on Zen Buddhism. For instance, among other concepts, the San Francisco Taiko Dojo is guided by an emphasis on communication, respect, and harmony. The way the "bachi" are held can also be significant; for some groups, "bachi" represent a spiritual link between the body and the sky. Some physical parts of taiko, like the drum body, its skin, and the tacks, also hold symbolic significance in Buddhism. "Kumi-daiko" groups consist primarily of percussive instruments where each of the drums plays a specific role. Of the different kinds of taiko, the most common in groups is the "nagadō-daiko". "Chū-daiko" are common in taiko groups and carry the main rhythm of the group, whereas "shime-daiko" set and change tempo. A "shime-daiko" often plays the "jiuchi", a base rhythm that holds the ensemble together. "Ō-daiko" provide a steady, underlying pulse and serve as a counter-rhythm to the other parts. It is common for performances to begin with a single stroke roll called an "oroshi". The player starts slowly, leaving considerable space between strikes, gradually shortening the interval between hits, until the drummer is playing a rapid roll. Oroshi are also played as a part of theatrical performance, such as in Noh theater. Drums are not the only instruments played in the ensemble; other Japanese instruments are also used. Other kinds of percussion instruments include the "atarigane", a hand-sized gong played with a small mallet. In kabuki, the shamisen, a plucked string instrument, often accompanies taiko during the theatrical performance. "Kumi-daiko" performances can also feature woodwinds such as the shakuhachi and the shinobue. Voiced calls or shouts called kakegoe and kiai are also common in taiko performance. They are used as encouragement to other players or as cues for transitions or changes in dynamics, such as an increase in tempo. In contrast, the philosophical concept of ma, the space between drum strikes, is also important in shaping rhythmic phrases and creating appropriate contrast. There is a wide variety of traditional clothing that players wear during taiko performance. Common in many "kumi-daiko" groups is the use of the happi, a decorative, thin-fabric coat, and traditional headbands called hachimaki. Tabi and other traditional garments are also typical. During his time with the group Ondekoza, Eitetsu Hayashi suggested that a loincloth called a fundoshi be worn when performing for the French fashion designer Pierre Cardin, who saw Ondekoza perform in 1975. The Japanese group Kodo has sometimes worn fundoshi for its performances. Taiko performance is generally taught orally and through demonstration.
Historically, general patterns for taiko were written down, such as in the 1512 encyclopedia called the "Taigensho", but written scores for taiko pieces are generally unavailable. One reason for the adherence to an oral tradition is that, from group to group, the rhythmic patterns in a given piece are often performed differently. Furthermore, ethnomusicologist William P. Malm observed that Japanese players within a group could not usefully predict one another using written notation, and instead did so through listening. In Japan, printed parts are not used during lessons. Patterns of onomatopoeia called kuchi shōga, which convey the rhythm and timbre of drum strikes for a particular piece, are taught orally from teacher to student. For example, "don" represents a single strike to the center of the drum, whereas "doko" represents two successive strikes, first by the right hand and then the left, and lasts the same amount of time as one "don" strike. Some taiko pieces, such as "Yatai-bayashi", include patterns that are difficult to represent in Western musical notation. The exact words used can also differ from region to region. More recently, Japanese publications have emerged in an attempt to standardize taiko performance. The Nippon Taiko Foundation was formed in 1979; its primary goals were to foster good relations among taiko groups in Japan and to both publicize and teach how to perform taiko. Daihachi Oguchi, the leader of the Foundation, wrote "Japan Taiko" with other teachers in 1994 out of concern that correct form in performance would degrade over time. The instructional publication described the different drums used in "kumi-daiko" performance, methods of gripping, correct form, and suggestions on instrumentation. The book also contains practice exercises and transcribed pieces from Oguchi's group, Osuwa Daiko. While there were similar textbooks published before 1994, this publication had much more visibility due to the Foundation's scope. The system of fundamentals "Japan Taiko" put forward was not widely adopted, because taiko performance varied substantially across Japan. An updated 2001 publication from the Foundation describes regional variations that depart from the main techniques taught in the textbook. The creators of the text maintained that mastering a set of prescribed basics should be compatible with learning local traditions. Aside from "kumi-daiko" performance, a number of folk traditions that use taiko have been recognized in different regions of Japan. Some of these include traditions from Sado Island, from the town of Kokura, and from Iwate Prefecture. A variety of folk dances originating from Okinawa, known collectively as eisa, often make use of the taiko. Some performers use drums while dancing, and generally speaking, perform in one of two styles: groups on the Yokatsu Peninsula and on Hamahiga Island use small, single-sided drums called "pāranku", whereas groups near the city of Okinawa generally use "shime-daiko". Use of "shime-daiko" over "pāranku" has spread throughout the island, and is considered the dominant style. Small "nagadō-daiko", referred to as "ō-daiko" within the tradition, are also used and are worn in front of the performer. These drum dances are not limited to Okinawa and have appeared in places with Okinawan communities, such as São Paulo, Hawaii, and large cities on the Japanese mainland.
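Returning to the kuchi shōga vocables described above: a short sketch can make the mapping concrete. The lookup table below is illustrative rather than standard, since the exact vocabulary varies by region; it encodes only the two vocables mentioned in the text, with "doko" fitting two strokes into the time of one "don":

    # Illustrative kuchi shōga lookup: each vocable maps to a list of
    # (hand, relative duration) strokes. "doko" packs two strokes into
    # the time of a single "don".
    KUCHI_SHOGA = {
        "don":  [("right", 1.0)],                  # one full-beat strike to the drum center
        "doko": [("right", 0.5), ("left", 0.5)],   # two half-beat strikes, right then left
    }

    def expand(phrase):
        """Turn a spoken phrase such as 'don doko don' into a stroke list."""
        strokes = []
        for word in phrase.split():
            strokes.extend(KUCHI_SHOGA[word])
        return strokes

    print(expand("don doko don"))
    # [('right', 1.0), ('right', 0.5), ('left', 0.5), ('right', 1.0)]

Reading the spoken phrase aloud while tapping out the returned durations reproduces the rhythm, which is why the notation can be transmitted entirely by voice.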
"Hachijō-daiko" is a taiko tradition originating on the island of Hachijō-jima. Two styles of "Hachijō-daiko" emerged and have been popularized among residents: an older tradition based on a historical account, and a newer tradition influenced by mainland groups and practiced by the majority of the islanders. The "Hachijō-daiko" tradition was documented as early as 1849, based on a journal kept by an exile named Kakuso Kizan. He mentioned some of its unique features, such as "a taiko is suspended from a tree while women and children gathered around", and observed that a player used either side of the drum while performing. Illustrations from Kizan's journal show features of "Hachijō-daiko". These illustrations also featured women performing, which is unusual, as taiko performance elsewhere during this period was typically reserved for men. Teachers of the tradition have noted that the majority of its performers were women; one estimate asserts that female performers outnumbered males by three to one. The first style of "Hachijō-daiko" is thought to descend directly from the style reported by Kizan. This style is called "Kumaoji-daiko", named after its creator Okuyama Kumaoji, a central performer of the style. "Kumaoji-daiko" has two players on a single drum, one of whom, called the "shita-byōshi", provides the underlying beat. The other player, called the "uwa-byōshi", builds on this rhythmical foundation with unique and typically improvised rhythms. While there are specific types of underlying rhythms, the accompanying player is free to express an original musical beat. "Kumaoji-daiko" also features an unusual positioning for taiko: the drums are sometimes suspended from ropes, and historically, sometimes from trees. The contemporary style of "Hachijō-daiko" is called "shin-daiko", which differs from "Kumaoji-daiko" in multiple ways. For instance, while the lead and accompanying roles are still present, "shin-daiko" performances use larger drums exclusively on stands. "Shin-daiko" emphasizes a more powerful sound, and consequently, performers use larger bachi made out of stronger wood. Looser clothing is worn by "shin-daiko" performers compared to the kimono worn by "Kumaoji-daiko" performers; the looser clothing in "shin-daiko" allows performers to adopt more open stances and larger movements with the legs and arms. Rhythms used for the accompanying "shita-byōshi" role can also differ. One type of rhythm, called "yūkichi", is found in both styles, but is always played faster in "shin-daiko". Another type of rhythm, called "honbadaki", is unique to "shin-daiko" and also contains a song performed in standard Japanese. Miyake-style taiko is a style that has spread among groups through Kodo, and is formally known as "Miyake-jima Kamitsuki mikoshi-daiko". The word "miyake" comes from Miyake-jima, part of the Izu Islands, and the word "Kamitsuki" refers to the village where the tradition came from. Miyake-style taiko came out of performances for a traditional festival held annually in July on Miyake Island since 1820, honoring the deity Gozu Tennō. In this festival, players perform on taiko while portable shrines are carried around town. The style itself is characterized in a number of ways. A "nagadō-daiko" is typically set low to the ground and played by two performers, one on each side; instead of sitting, performers stand and hold a stance that is also very low to the ground, almost to the point of kneeling. Taiko groups in Australia began forming in the 1990s. The first group, called Ataru Taru Taiko, was formed in 1995 by Paulene Thomas, Harold Gent, and Kaomori Kamei.
TaikOz was later formed by percussionist Ian Cleworth and Riley Lee, a former Ondekoza member, and has been performing in Australia since 1997. They are known for their work in generating interest in performing taiko among Australian audiences, such as by developing a complete education program with both formal and informal classes, and have a strong fan base. Cleworth and other members of the group have developed several original pieces. The introduction of "kumi-daiko" performance in Brazil can be traced back to the 1970s and 1980s in São Paulo. Tangue Setsuko founded an eponymous taiko dojo, Brazil's first taiko group; Setsuo Kinoshita later formed the group Wadaiko Sho. Brazilian groups have combined native and African drumming techniques with taiko performance. One such piece developed by Kinoshita is called "Taiko de Samba", which emphasizes both Brazilian and Japanese aesthetics in percussion traditions. Taiko was also popularized in Brazil from 2002 through the work of Yukihisa Oda, a Japanese native who visited Brazil several times through the Japan International Cooperation Agency. The Brazilian Association of Taiko (ABT) suggests that there are about 150 taiko groups in Brazil and that about 10–15% of players are non-Japanese; Izumo Honda, coordinator of a large annual festival in São Paulo, estimated that about 60% of all taiko performers in Brazil are women. Taiko emerged in the United States in the late 1960s. The first group, San Francisco Taiko Dojo, was formed in 1968 by Seiichi Tanaka, a postwar immigrant who studied taiko in Japan and brought the styles and teachings to the US. A year later, a few members of Senshin Buddhist Temple in Los Angeles, led by its minister Masao Kodani, initiated another group called Kinnara Taiko. San Jose Taiko later formed in 1973 in Japantown, San Jose, under Roy and PJ Hirabayashi. Taiko started to branch out to the eastern US in the late 1970s. This included the formation of Denver Taiko in 1976 and Soh Daiko in New York City in 1979. Many of these early groups lacked the resources to equip each member with a drum and resorted to makeshift percussion materials, such as rubber tires, or created taiko out of wine barrels. Japanese-Canadian taiko began in 1979 with Katari Taiko, and was inspired by the San Jose Taiko group. Its early membership was predominantly female. Katari Taiko and future groups were thought to represent an opportunity for younger, third-generation Japanese Canadians to explore their roots, redevelop a sense of ethnic community, and expand taiko into other musical traditions. There are no official counts or estimates of the number of active taiko groups in the United States or Canada, as there is no governing body for taiko groups in either country, but unofficial estimates have been made. In 1989, there were as many as 30 groups in the US and Canada, seven of which were in California. One estimate suggested that around 120 groups were active in the US and Canada as of 2001, many of which could be traced to the San Francisco Taiko Dojo; later estimates in 2005 and 2006 suggested there were about 200 groups in the United States alone. The Cirque du Soleil shows "Mystère" in Las Vegas and "Dralion" have featured taiko performance. Taiko performance has also been featured in commercial productions such as the 2005 Mitsubishi Eclipse ad campaign, and in events such as the 2009 Academy Awards and 2011 Grammy Awards.
From 2005 to 2006, the Japanese American National Museum held an exhibition called "Big Drum: Taiko in the United States". The exhibition covered several topics related to taiko in the United States, such as the formation of performance groups, their construction using available materials, and social movements. Visitors were able to play smaller drums. Certain groups have used taiko to advance social or cultural movements, both within Japan and elsewhere in the world. Taiko performance has frequently been viewed as an art form dominated by men. Historians of taiko argue that its performance comes from masculine traditions. Those who developed ensemble-style taiko in Japan were men, and through the influence of Ondekoza, the ideal taiko player was epitomized in images of the masculine peasant class, particularly through the character Muhōmatsu in the 1958 film "Rickshaw Man". Masculine roots have also been attributed to a perceived capacity for "spectacular bodily performance", in which women's bodies are sometimes judged unable to meet the physical demands of playing. Before the 1980s, it was uncommon for Japanese women to perform on traditional instruments, including taiko, as their participation had been systematically restricted; an exception was the San Francisco Taiko Dojo under the guidance of Grandmaster Seiichi Tanaka, who was the first to admit women to the art form. In Ondekoza and in the early performances of Kodo, women performed only dance routines, either during or between taiko performances. Thereafter, female participation in "kumi-daiko" started to rise dramatically, and by the 1990s, women equaled and possibly exceeded representation by men. While the proportion of women in taiko has become substantial, some have expressed concern that women still do not perform in the same roles as their male counterparts and that taiko performance continues to be a male-dominated profession. For instance, a member of Kodo was informed by the director of the group's apprentice program that women were permitted to play, but could only play "as women". Other women in the apprentice program recognized a gender disparity in performance roles, such as in which pieces they were allowed to perform, or in physical terms based on a male standard. Female taiko performance has also served as a response to gendered stereotypes of Japanese women as quiet, subservient, or femmes fatales. Through performance, some groups believe they are helping to redefine not only the role of women in taiko, but how women are perceived more generally. Those involved in the construction of taiko are usually considered part of the burakumin, a marginalized minority class in Japanese society, particularly those working with leather or animal skins. Prejudice against this class dates back to the Tokugawa period in terms of legal discrimination and treatment as social outcasts. Although official discrimination ended with the Tokugawa era, the burakumin have continued to face social discrimination, such as scrutiny by employers or in marriage arrangements. Drum makers have used their trade and success as a means to advocate for an end to discriminatory practices against their class. A road representing the contributions of the burakumin is found in Naniwa Ward in Osaka, home to a large proportion of burakumin. Among other features, the road contains taiko-shaped benches representing their traditions in taiko manufacturing and leatherworking, and their influence on national culture.
The road ends at the Osaka Human Rights Museum, which exhibits the history of systematic discrimination against the burakumin. The road and museum were developed in part through an advocacy campaign led by the Buraku Liberation League and a taiko group of younger performers. Taiko performance was an important part of cultural development by third-generation Japanese residents in North America, who are called "sansei". During World War II, second-generation Japanese residents, called "nisei", faced internment in the United States and in Canada on the basis of their race. During and after the war, Japanese residents were discouraged from activities such as speaking Japanese or forming ethnic communities. Subsequently, sansei could not engage in Japanese culture and instead were raised to assimilate into more normative activities. There were also prevailing stereotypes of Japanese people, which sansei sought to escape or subvert. During the 1960s in the United States, the civil rights movement influenced sansei to reexamine their heritage by engaging with Japanese culture in their communities; one such approach was through taiko performance. Groups such as San Jose Taiko were organized to fulfill a need for solidarity and to have a medium to express their experiences as Japanese-Americans. Later generations have adopted taiko in programs or workshops established by sansei; social scientist Hideyo Konagaya remarks that this attraction to taiko among other Japanese art forms may be due to its accessibility and energetic nature. Konagaya has also argued that the resurgence of taiko in the United States and Japan was differently motivated: in Japan, performance was meant to represent the need to recapture sacred traditions, while in the United States it was meant to be an explicit representation of masculinity and power in Japanese-American men. A number of performers and groups, including several early leaders, have been recognized for their contributions to taiko performance. Daihachi Oguchi was best known for developing "kumi-daiko" performance. Oguchi founded the first "kumi-daiko" group, Osuwa Daiko, in 1951, and facilitated the popularization of taiko performance groups in Japan. Seidō Kobayashi is, as of December 2014, the leader of the Tokyo-based taiko group Oedo Sukeroku Taiko. Kobayashi founded the group in 1959; it was the first taiko group to tour professionally. Kobayashi is considered a master performer of taiko. He is also known for asserting intellectual control of the group's performance style, which has influenced performance for many groups, particularly in North America. In 1968, Seiichi Tanaka founded the San Francisco Taiko Dojo; he is regarded as the Grandfather of Taiko and the primary developer of taiko performance in the United States. He was a recipient of a 2001 National Heritage Fellowship awarded by the National Endowment for the Arts, and since 2013 he is the only taiko professional to have been presented with the Order of the Rising Sun, 5th Class: Gold and Silver Rays, by Emperor Akihito of Japan, in recognition of his contributions to the fostering of US-Japan relations as well as the promotion of Japanese cultural understanding in the United States. In 1969, Den Tagayasu founded Ondekoza, a group well known for making taiko performance internationally visible and for its artistic contributions to the tradition.
Den was also known for developing a communal living and training facility for Ondekoza on Sado Island in Japan, which had a reputation for its intensity and its broad education programs in folklore and music. Performers and groups beyond the early practitioners have also been noted. Eitetsu Hayashi is best known for his solo performance work. When he was 19, Hayashi joined Ondekoza, a group later expanded and re-founded as Kodo, one of the best known and most influential taiko performance groups in the world. Hayashi soon left the group to begin a solo career, and in 1984 he became the first featured taiko performer at Carnegie Hall. He was awarded the 47th Education Minister's Art Encouragement Prize, a national award, in 1997, as well as the 8th Award for the Promotion of Traditional Japanese Culture from the Japan Arts Foundation in 2001.
https://en.wikipedia.org/wiki?curid=8715
Dolly Parton Dolly Rebecca Parton (born January 19, 1946) is an American singer, songwriter, multi-instrumentalist, record producer, actress, author, businesswoman, and humanitarian, known primarily for her work in country music. After achieving success as a songwriter for others, Parton made her album debut in 1967 with "Hello, I'm Dolly". After steady success during the remainder of the 1960s (both as a solo artist and with a series of duet albums with Porter Wagoner), her sales and chart peak came during the 1970s and continued into the 1980s. Parton's albums in the 1990s did not sell as well, but she achieved commercial success again in the new millennium and has released albums on various independent labels since 2000, including her own label, Dolly Records. Parton's music includes Recording Industry Association of America (RIAA)-certified gold, platinum, and multi-platinum awards. She has had 25 songs reach No. 1 on the "Billboard" country music charts, a record for a female artist (tied with Reba McEntire). She has 44 career Top 10 country albums, a record for any artist, and she has 110 career-charted singles over the past 40 years. She has garnered ten Grammy Awards and 49 nominations, including the Lifetime Achievement Award and a 2020 win with for KING & COUNTRY for their collaboration on "God Only Knows"; 10 Country Music Association Awards, including Entertainer of the Year (she is one of only seven female artists to win the Country Music Association's Entertainer of the Year Award); five Academy of Country Music Awards, also including a nod for Entertainer of the Year; four People's Choice Awards; and three American Music Awards. In 1999, Parton was inducted into the Country Music Hall of Fame. She has composed over 3,000 songs, including "I Will Always Love You" (a two-time U.S. country chart-topper, as well as an international pop hit for Whitney Houston), "Jolene", "Coat of Many Colors", and "9 to 5". She is also one of the few artists to have received at least one nomination from the Academy Awards, Grammy Awards, Tony Awards, and Emmy Awards. As an actress, she has starred in films such as "9 to 5" (1980) and "The Best Little Whorehouse in Texas" (1982), for which she earned Golden Globe nominations for Best Actress, as well as "Rhinestone" (1984), "Steel Magnolias" (1989), "Straight Talk" (1992) and "Joyful Noise" (2012). Dolly Rebecca Parton was born on January 19, 1946, in a one-room cabin on the banks of the Little Pigeon River in Pittman Center, Tennessee. She is the fourth of 12 children born to Avie Lee Caroline (née Owens; 1923–2003) and Robert Lee Parton Sr. (1921–2000). Her father, known as "Lee", worked in the mountains of East Tennessee, first as a sharecropper and later tending his own small tobacco farm and acreage. He also worked construction jobs to supplement the farm's small income. Lee was illiterate, but Parton has often said that, despite this, he was one of the smartest people she has known when it came to business and making a profit. Avie Lee was a homemaker for the large family. Her 11 pregnancies (the tenth being twins) in 20 years made her a mother of 12 by age 35. Often in poor health, she still managed to keep house and entertain her children with songs and tales of mountain folklore. Avie Lee's father, Jake Owens, was a Pentecostal preacher, so Parton and her siblings all attended church regularly. Parton has long credited her father for her business savvy, and her mother's family for her musical abilities.
While Parton was still very young, her family moved to a farm on nearby Locust Ridge. Most of her cherished memories of youth are from her time there, and it is the place about which she wrote the song "My Tennessee Mountain Home" in the 1970s. Parton bought back the Locust Ridge property in the 1980s. Two of her siblings are no longer living; Larry died shortly after birth in 1955, and Floyd died in 2018. Parton's middle name comes from her maternal great-great-grandmother, Rebecca (Dunn) Whitted. She has described her family as "dirt poor." Parton's father paid the doctor who helped deliver her with a bag of cornmeal. She outlined her family's poverty in her early songs "Coat of Many Colors" and "In the Good Old Days (When Times Were Bad)". They lived in a rustic, one-bedroom cabin in Locust Ridge, just north of the Greenbrier Valley of the Great Smoky Mountains, a predominantly Pentecostal area. Music played an important role in her early life. She was brought up in the Church of God (Cleveland, Tennessee), the church her grandfather, Jake Robert Owens, pastored. Her earliest public performances were in the church, beginning at age six. At seven, she started playing a homemade guitar. When she was eight, her uncle bought her her first real guitar. Parton began performing as a child, singing on local radio and television programs in the East Tennessee area. By ten, she was appearing on "The Cas Walker Show" on both WIVK Radio and WBIR-TV in Knoxville, Tennessee. At 13, she recorded the single "Puppy Love" on a small Louisiana label, Goldband Records, and appeared at the Grand Ole Opry, where she first met Johnny Cash, who encouraged her to follow her own instincts regarding her career. Parton moved to Nashville the day after graduating from Sevier County High School in 1964. Her initial success came as a songwriter, having signed with Combine Publishing shortly after her arrival; with her frequent songwriting partner, her uncle Bill Owens, she wrote several charting singles during this time, including two top-10 hits: Bill Phillips's "Put It Off Until Tomorrow" (1966) and Skeeter Davis's "Fuel to the Flame" (1967). Her songs were recorded by many other artists during this period, including Kitty Wells and Hank Williams Jr. She signed with Monument Records in 1965, at age 19, where she was initially pitched as a bubblegum pop singer. She released a string of singles, but the only one that charted, "Happy, Happy Birthday Baby", did not crack the "Billboard" Hot 100. Although she expressed a desire to record country material, Monument resisted, thinking her unique voice with its strong vibrato was not suited to the genre. After her composition "Put It Off Until Tomorrow", as recorded by Bill Phillips (with Parton, uncredited, on harmony), went to number six on the country chart in 1966, the label relented and allowed her to record country. Her first country single, "Dumb Blonde" (composed by Curly Putman, one of the few songs during this era that she recorded but did not write), reached number 24 on the country chart in 1967, followed by "Something Fishy", which went to number 17. The two songs appeared on her first full-length album, "Hello, I'm Dolly". In 1967, musician and country music entertainer Porter Wagoner invited Parton to join his organization, offering her a regular spot on his weekly syndicated television program "The Porter Wagoner Show" and in his road show.
As documented in her 1994 autobiography, much of Wagoner's audience was initially unhappy that Norma Jean, the performer whom Parton had replaced, had left the show, and was reluctant to accept Parton (sometimes chanting loudly for Norma Jean from the audience). With Wagoner's assistance, however, Parton was eventually accepted. Wagoner convinced his label, RCA Victor, to sign her. RCA decided to protect its investment by releasing her first single as a duet with Wagoner. That song, a remake of Tom Paxton's "The Last Thing on My Mind", released in late 1967, reached the country top 10 in January 1968, launching a six-year streak of virtually uninterrupted top-10 singles for the pair. Parton's first solo single for RCA Victor, "Just Because I'm a Woman", was released in the summer of 1968 and was a moderate chart hit, reaching number 17. For the remainder of the decade, none of her solo efforts – even "In the Good Old Days (When Times Were Bad)", which later became a standard – were as successful as her duets with Wagoner. The duo was named Vocal Group of the Year in 1968 by the Country Music Association, but Parton's solo records were continually ignored. Wagoner had a significant financial stake in her future; as of 1969, he was her co-producer and owned nearly half of Owe-Par, the publishing company Parton had founded with Bill Owens. By 1970, both Parton and Wagoner had grown frustrated by her lack of solo chart success. Wagoner persuaded Parton to record Jimmie Rodgers' "Mule Skinner Blues", a gimmick that worked. The record shot to number three, followed closely, in February 1971, by her first number-one single, "Joshua". For the next two years, she had numerous solo hits – including her signature song "Coat of Many Colors" (number four, 1971) – in addition to her duets. Top-20 singles included "The Right Combination" and "Burning the Midnight Oil" (both duets with Wagoner, 1971); "Lost Forever in Your Kiss" (with Wagoner) and "Touch Your Woman" (1972); and "My Tennessee Mountain Home" and "Travelin' Man" (1973). Although her solo singles and the Wagoner duets were successful, her biggest hit of this period was "Jolene". Released in late 1973, it topped the country chart in February 1974 and reached the lower regions of the Hot 100. (It eventually also charted in the U.K., reaching number seven in 1976, representing Parton's first U.K. success.) Parton, who had always envisioned a solo career, made the decision to leave Wagoner's organization; the pair performed their last duet concert in April 1974, and she stopped appearing on his TV show in mid-1974, although they remained affiliated. He helped produce her records through 1975, and the pair continued to release duet albums, their final release being 1975's "Say Forever You'll Be Mine". In 1974, her song "I Will Always Love You", written about her professional break from Wagoner, went to number one on the country chart. Around the same time, Elvis Presley indicated that he wanted to record the song. Parton was interested until Presley's manager, Colonel Tom Parker, told her that it was standard procedure for the songwriter to sign over half of the publishing rights to any song recorded by Presley. Parton refused. That decision has been credited with helping to make her many millions of dollars in royalties from the song over the years.
Parton had three solo singles reach number one on the country chart in 1974 ("Jolene", "I Will Always Love You", and "Love Is Like a Butterfly"), as well as the duet with Porter Wagoner, "Please Don't Stop Loving Me". In a 2019 episode of the Sky Arts music series "Brian Johnson: A Life on the Road", Parton described finding old cassette tapes and realizing that she had composed both "Jolene" and "I Will Always Love You" in the same songwriting session, telling Johnson, "Buddy, that was a good night." Parton again topped the singles chart in 1975 with "The Bargain Store". Between 1974 and 1980, Parton had a series of country hits, with eight singles reaching number one. Her influence on pop culture is reflected by the many performers covering her songs, including mainstream and crossover artists such as Olivia Newton-John, Emmylou Harris, and Linda Ronstadt. Parton began to embark on a high-profile crossover campaign, attempting to aim her music in a more mainstream direction and increase her visibility outside of the confines of country music. In 1976, she began working closely with Sandy Gallin, who served as her personal manager for the next 25 years. With her 1976 album "All I Can Do", which she co-produced with Porter Wagoner, Parton began taking a more active role in production, and began specifically aiming her music in a more mainstream, pop direction. Her first entirely self-produced effort, "New Harvest...First Gathering" (1977), highlighted her pop sensibilities, both in terms of choice of songs – the album contained covers of the pop and R&B classics "My Girl" and "Higher and Higher" – and production. Though the album was well received and topped the U.S. country albums chart, neither it nor its single "Light of a Clear Blue Morning" made much of an impression on the pop charts. After "New Harvest"'s disappointing crossover performance, Parton turned to high-profile pop producer Gary Klein for her next album. The result, 1977's "Here You Come Again", became her first million-seller, topping the country album chart and reaching number 20 on the pop chart. The Barry Mann-Cynthia Weil-penned title track topped the country singles chart and became Parton's first top-ten single on the pop chart (#3). A second single, the double A-sided "Two Doors Down"/"It's All Wrong, But It's All Right", topped the country chart and crossed over to the pop Top 20. For the remainder of the 1970s and into the early 1980s, many of her subsequent singles moved up on both charts simultaneously. Her albums during this period were developed specifically for pop-crossover success. In 1978, Parton won a Grammy Award for Best Female Country Vocal Performance for her "Here You Come Again" album. She continued to have hits with "Heartbreaker" (1978), "Baby I'm Burning" (1979) and "You're the Only One" (1979), all of which charted in the pop Top 40 and topped the country chart. "Sweet Summer Lovin'" (1979) became the first Parton single in two years not to top the country chart (though it did reach the Top 10). During this period, her visibility continued to increase, with multiple television appearances. A highly publicized candid interview on a "Barbara Walters Special" in 1977 (timed to coincide with "Here You Come Again"'s release) was followed by appearances in 1978 on Cher's ABC television special, and her own joint special with Carol Burnett on CBS, "Carol and Dolly in Nashville".
Parton served as one of three co-hosts (along with Roy Clark and Glen Campbell) of the CBS special "Fifty Years of Country Music". In 1979, Parton hosted the NBC special "The Seventies: An Explosion of Country Music", performed live at the Ford Theatre in Washington, D.C., whose audience included President Jimmy Carter. Her commercial success grew in 1980, with three consecutive country chart number-one hits: the Donna Summer-written "Starting Over Again", "Old Flames Can't Hold a Candle to You", and "9 to 5", which topped the country and pop charts in early 1981. She had another Top 10 single that year with "Making Plans", a single released from a 1980 reunion album with Porter Wagoner. "9 to 5", the theme song to the 1980 feature film of the same name, in which she starred along with Jane Fonda and Lily Tomlin, not only reached number one on the country chart, but also, in February 1981, reached number one on the pop and the adult-contemporary charts, giving her a triple number-one hit. Parton became one of the few female country singers to have a number-one single on the country and pop charts simultaneously. The song also received a nomination for an Academy Award for Best Original Song. Her singles continued to appear consistently in the country Top 10. Between 1981 and 1985, she had 12 Top-10 hits; half of them hit number one. She continued to make inroads on the pop chart as well. A re-recorded version of "I Will Always Love You", from the feature film "The Best Little Whorehouse in Texas" (1982), scraped the Top 50 that year, and her duet with Kenny Rogers, "Islands in the Stream" (written by the Bee Gees and produced by Barry Gibb), spent two weeks at number one in 1983. In the mid-1980s, her record sales were still relatively strong, with "Save the Last Dance for Me", "Downtown", "Tennessee Homesick Blues" (1984), "Real Love" (another duet with Kenny Rogers), "Don't Call It Love" (1985) and "Think About Love" (1986) all reaching the country Top 10 ("Tennessee Homesick Blues" and "Think About Love" reached number one; "Real Love" also reached number one on the country chart and became a modest crossover hit). However, RCA Records did not renew her contract after it expired in 1986, and she signed with Columbia Records in 1987. Along with Emmylou Harris and Linda Ronstadt, she released "Trio" (1987) to critical acclaim. The album revitalized Parton's music career, spending five weeks at number one on "Billboard"'s Country Albums chart, and also reached the top 10 on "Billboard"'s Top-200 Albums chart. It sold several million copies and produced four Top 10 country hits, including Phil Spector's "To Know Him Is to Love Him", which went to number one. "Trio" won the Grammy Award for Best Country Performance by a Duo or Group with Vocal and was nominated for a Grammy Award for Album of the Year. After a further attempt at pop success with "Rainbow" (1987), including the single "The River Unbroken", Parton focused on recording country material. "White Limozeen" (1989) produced two number-one hits in "Why'd You Come in Here Lookin' Like That" and "Yellow Roses". The revival proved brief, however, as the arrival of contemporary country music in the early 1990s moved most veteran artists off the charts.
A duet with Ricky Van Shelton, "Rockin' Years" (1991), reached number one, though Parton's greatest commercial fortune of the decade came when Whitney Houston recorded "I Will Always Love You" for the soundtrack of the feature film "The Bodyguard" (1992). Both the single and the album were massively successful. Parton's soundtrack album from the 1992 film "Straight Talk", however, was less successful, but her 1993 album "Slow Dancing with the Moon" won critical acclaim and did well on the charts, reaching number four on the country albums chart and number 16 on the "Billboard" 200 album chart. She recorded "The Day I Fall in Love" as a duet with James Ingram for the feature film "Beethoven's 2nd" (1993). The songwriters (Ingram, Carole Bayer Sager, and Clif Magness) were nominated for an Academy Award for Best Original Song, and Parton and Ingram performed the song at the awards telecast. Similar to her earlier collaborative album with Harris and Ronstadt, Parton released "Honky Tonk Angels" in the fall of 1993 with Loretta Lynn and Tammy Wynette. It was certified as a gold album by the Recording Industry Association of America and helped revive both Wynette's and Lynn's careers. In 1994, Parton contributed the song "You Gotta Be My Baby" to the AIDS benefit album "Red Hot + Country" produced by the Red Hot Organization. A live acoustic album, "", featuring stripped-down versions of some of her hits, as well as some traditional songs, was released in late 1994. Parton's recorded output during the mid- to late 1990s remained steady and somewhat eclectic. Her 1995 re-recording of "I Will Always Love You" (performed as a duet with Vince Gill), from her album "Something Special", won the Country Music Association's Vocal Event of the Year Award. The following year, "Treasures", an album of covers of 1960s/70s hits, was released, featuring a diverse collection of material by songwriters including Mac Davis, Pete Seeger, Kris Kristofferson, Cat Stevens, and Neil Young. Her recording of Stevens' "Peace Train" was later remixed and released as a dance single, reaching "Billboard"'s dance singles chart. Her 1998 country-rock album "Hungry Again" was made up entirely of her own compositions. Although neither of the album's two singles, "(Why Don't More Women Sing) Honky Tonk Songs" and "Salt in My Tears", charted, videos for both songs received significant airplay on CMT. A second and more contemporary collaboration with Harris and Ronstadt, "Trio II", was released in early 1999. Its cover of Neil Young's song "After the Gold Rush" won a Grammy Award for Best Country Collaboration with Vocals. Parton was also inducted into the Country Music Hall of Fame in 1999. She then recorded a series of bluegrass-inspired albums, beginning with "The Grass Is Blue" (1999), which won the Grammy Award for Best Bluegrass Album, and "Little Sparrow" (2001), whose cover of Collective Soul's "Shine" won the Grammy Award for Best Female Country Vocal Performance. The third, "Halos & Horns" (2002), included a bluegrass version of the Led Zeppelin song "Stairway to Heaven". In 2005, she released "Those Were the Days", consisting of her interpretations of hits from the folk-rock era of the late 1960s and early 1970s, including "Imagine", "Where Do the Children Play?", "Crimson and Clover", and "Where Have All the Flowers Gone?" Parton earned her second Academy Award nomination for Best Original Song for "Travelin' Thru", which she wrote specifically for the feature film "Transamerica" (2005). 
Due to the song's (and film's) acceptance of a transgender woman, Parton received death threats. She returned to number one on the country chart later in 2005 by lending her distinctive harmonies to the Brad Paisley ballad "When I Get Where I'm Going". In September 2007, Parton released the first single from her own record company, Dolly Records, titled "Better Get to Livin'", which eventually peaked at number 48 on "Billboard"'s Hot Country Songs chart. It was followed by the studio album "Backwoods Barbie", which was released on February 26, 2008, and reached number two on the country chart. The album's debut at number 17 on the all-genre "Billboard" 200 albums chart was the highest of her career. "Backwoods Barbie" produced four additional singles, including the title track, written as part of her score for "9 to 5: The Musical", an adaptation of her feature film. After the sudden death of Michael Jackson, whom Parton knew personally, she released a video in which she somberly spoke of her feelings about Jackson and his death. On October 27, 2009, Parton released a four-CD box set, "Dolly", which featured 99 songs and spanned most of her career. She released her second live DVD and album, "Live From London", in October 2009; it was filmed during her sold-out 2008 concerts at London's The O2 Arena. On August 10, 2010, longtime friend Billy Ray Cyrus released the album "Brother Clyde", on which Parton is featured on "The Right Time", a song she co-wrote with Cyrus and Morris Joseph Tancredi. On January 6, 2011, Parton announced that her new album would be titled "Better Day". In February 2011, she announced that she would embark on the Better Day World Tour on July 17, 2011, with shows in northern Europe and the U.S. The album's lead-off single, "Together You and I", was released on May 23, 2011, and "Better Day" followed on June 28, 2011. In 2011, Parton voiced the character Dolly Gnome in the animated film "Gnomeo & Juliet". On February 11, 2012, after the sudden death of Whitney Houston, Dolly Parton stated, "Mine is only one of the millions of hearts broken over the death of Whitney Houston. I will always be grateful and in awe of the wonderful performance she did on my song, and I can truly say from the bottom of my heart, 'Whitney, I will always love you. You will be missed.'" In 2013, Parton joined Lulu Roman for a re-recording of "I Will Always Love You" for Roman's album "At Last". That same year, Parton and Kenny Rogers reunited for the title song of his album "You Can't Make Old Friends"; their performance was nominated for the Grammy Award for Best Country Duo/Group Performance at the 2014 Grammy Awards. In 2014, Parton embarked on the Blue Smoke World Tour in support of her 42nd studio album, "Blue Smoke". The album was first released in Australia and New Zealand on January 31 to coincide with tour dates there in February, and reached the top 10 in both countries. It was released in the United States on May 13 and debuted at number six on the "Billboard" 200 chart, her first top-10 album on that chart and her highest-charting solo album ever; it also reached number two on the U.S. country chart. The album was released in Europe on June 9 and reached number two on the UK album chart. On June 29, 2014, Parton performed for the first time at the UK's Glastonbury Festival, playing songs such as "Jolene", "9 to 5" and "Coat of Many Colors" to a crowd of more than 180,000. 
On March 6, 2016, Parton announced that she would be embarking on a tour in support of her new album, "Pure & Simple". The tour was one of Parton's biggest in the United States in more than 25 years, with 64 dates planned in the United States and Canada, visiting the most-requested markets missed on previous tours. In the fall of 2016, she released "Jolene" as a single with the "a cappella" group Pentatonix and performed on "" with Pentatonix and Miley Cyrus in November 2016. Also in 2016, Parton was one of 30 artists to perform on "Forever Country", a mash-up of the songs "Take Me Home, Country Roads", "On the Road Again" and her own "I Will Always Love You", celebrating 50 years of the CMA Awards. At the ceremony itself, Parton was honored with the Willie Nelson Lifetime Achievement Award, which was presented by Lily Tomlin and preceded by a tribute featuring Jennifer Nettles, Pentatonix, Reba McEntire, Kacey Musgraves, Carrie Underwood and Martina McBride. In 2017, Parton appeared on "Rainbow", the third studio album by Kesha, performing a duet of "Old Flames Can't Hold a Candle to You". The track had been co-written by Kesha's mother, Pebe Sebert, and was previously a hit for Parton, appearing on her 1980 album "Dolly, Dolly, Dolly". She also co-wrote and provided guest vocals on the song "Rainbowland" on "Younger Now", the sixth album by her goddaughter Miley Cyrus. On June 25, 2019, "The New York Times Magazine" listed Parton among hundreds of artists whose material was destroyed in the 2008 Universal fire. In July 2019, Parton made an unannounced appearance at the Newport Folk Festival, performing several songs accompanied by The Highwomen and Linda Perry. In 2020, Parton received worldwide attention after posting four pictures showing how she would present herself on the social media platforms LinkedIn, Facebook, Instagram and Twitter. The original post on Instagram went viral after celebrities posted their own versions of the so-called Dolly Parton challenge on social media. On April 10, 2020, Parton re-released 93 songs from six of her classic albums ("Little Sparrow", "Halos & Horns", "For God and Country", "Better Day", "Those Were the Days", and "Live and Well") for online listening. On May 27, 2020, Parton released a new song, "When Life Is Good Again", intended to lift the spirits of those affected by the COVID-19 pandemic; its music video premiered on Time 100 Talks on May 28, 2020. In 1998, "Nashville Business" ranked her the wealthiest country-music star, and her net worth has been estimated at $500 million. She also made a short cameo on "The Love Boat" in 1977, appearing in episode 13 as the captain's silent wife. Parton is a prolific songwriter, having begun by writing country-music songs with strong elements of folk music, based on her upbringing in humble mountain surroundings and reflecting her family's Christian background. Her songs "Coat of Many Colors", "I Will Always Love You", and "Jolene", among others, have become classics. On November 4, 2003, Parton was honored as a BMI Icon at the 2003 BMI Country Awards; she has earned over 35 BMI Pop and Country Awards. In 2001, she was inducted into the Songwriters Hall of Fame. In a 2009 interview on CNN's "Larry King Live", she said she had written "at least 3,000" songs, having written seriously since the age of seven. 
Parton also said she writes something every day, be it a song or an idea. Parton's songwriting has been featured prominently in several films. In addition to the title song for "9 to 5", she also recorded a second version of "I Will Always Love You" for "The Best Little Whorehouse in Texas" (1982). The second version was a number-one country hit and also reached number 53 on the pop charts. "I Will Always Love You" has been covered by many country artists, including Ronstadt on "Prisoner in Disguise" (1975), Kenny Rogers on "Vote for Love" (1996), and LeAnn Rimes on "" (1997). Whitney Houston performed it on "The Bodyguard" soundtrack, and her version became the best-selling hit both written and performed by a female vocalist, with worldwide sales of over 12 million copies. In addition, the song has been translated into Italian and performed by the Welsh opera singer Katherine Jenkins. As a songwriter, Parton has twice been nominated for an Academy Award for Best Original Song, for "9 to 5" and for "Travelin' Thru" (2005) from the film "Transamerica". "Travelin' Thru" won Best Original Song at the 2005 Phoenix Film Critics Society Awards and was also nominated for both the 2005 Golden Globe Award for Best Original Song and the 2005 Broadcast Film Critics Association Award (also known as the Critics' Choice Awards) for Best Song. A cover of "Love Is Like a Butterfly" by Clare Torry was used as the theme music for the British TV show "Butterflies". Parton wrote the score (and Patricia Resnick the book) for "9 to 5: The Musical", a musical-theater adaptation of Parton's feature film "9 to 5" (1980). The musical ran at the Ahmanson Theatre in Los Angeles in late 2008 and opened on Broadway at the Marquis Theatre in New York City on April 30, 2009, to mixed reviews. The title track of her 2008 album "Backwoods Barbie" was written for the musical's character Doralee. Although her score (as well as the musical debut of actress Allison Janney) was praised, the show struggled, closing on September 6, 2009, after 24 previews and 148 performances. Parton received nominations for the Drama Desk Award for Outstanding Music and the Drama Desk Award for Outstanding Lyrics, as well as a nomination for the Tony Award for Best Original Score. Developing the musical was not a quick process. According to the public-radio program "Studio 360" (October 29, 2005), Parton was then in the midst of composing the songs for a Broadway musical-theater adaptation of the film. In late June 2007, "9 to 5: The Musical" was read for industry presentations, with readings starring Megan Hilty, Allison Janney, Stephanie J. Block, Bebe Neuwirth, and Marc Kudisch. Ambassador Theatre Group announced a 2012 UK tour for "Dolly Parton's 9 to 5: The Musical", commencing at Manchester Opera House on October 12, 2012. Parton has invested much of her earnings in business ventures in her native East Tennessee, notably Pigeon Forge. She is a co-owner of The Dollywood Company, which operates the theme park Dollywood (a former Silver Dollar City), a dinner theater, Dolly Parton's Stampede, the waterpark Dollywood's Splash Country, and the Dream More Resort and Spa, all in Pigeon Forge. Dollywood is the 24th-most-popular theme park in the United States, with 3 million visitors per year. The Dolly Parton's Stampede business has venues in Branson, Missouri, and Myrtle Beach, South Carolina; a former location in Orlando, Florida, closed in January 2008 after the land and building were sold to a developer. 
Starting in June 2011, the Myrtle Beach location became Pirates Voyage Fun, Feast and Adventure; Parton appeared for the opening, and the South Carolina General Assembly declared June 3, 2011, as Dolly Parton Day. On January 19, 2012, Parton's 66th birthday, Gaylord Opryland and Dollywood announced plans to open a $50 million water and snow park in Nashville, a family-friendly destination open all year. On September 29, 2012, Parton officially withdrew her support for the Nashville park due to the restructuring of Gaylord Entertainment Company after its merger with Marriott International. On June 12, 2015, it was announced that the Dollywood Company had purchased the Lumberjack Feud Dinner Show in Pigeon Forge. The show, which opened in June 2011, was owned and operated by Rob Scheer until the close of the 2015 season; the new, renovated show by the Dollywood Company opened in 2016. Parton was a co-owner of Sandollar Productions with Sandy Gallin, her former manager. A film and television production company, it produced the documentary "" (1989), which won an Academy Award for Best Documentary (Feature); the television series "Babes" (1990–91) and "Buffy the Vampire Slayer" (1997–2003); and the feature films "Father of the Bride" (1991), "Father of the Bride: Part II" (1995), "Straight Talk" (1992) (in which Parton starred), and "Sabrina" (1995), among other shows. In a 2009 interview, singer Connie Francis revealed that Parton had been contacting her for years in an attempt to film the singer's life story. Francis turned down Parton's offers, as she was already in negotiations with singer Gloria Estefan to produce the film, a collaboration that has since ended. After the retirement of her partner Sandy Gallin, Parton briefly operated Dolly Parton's Southern Light Productions, and in 2015 she announced that her new production company, Dixie Pixie Productions, would produce the movies of the week in development with NBC Television and Magnolia Hill Productions. In addition to her performing appearances on "The Porter Wagoner Show" in the 1960s and into the 1970s, her two self-titled television variety shows in the 1970s and 1980s, and guest appearances such as on "American Idol" in 2008, Parton has had television roles. In 1979, she received an Emmy Award nomination as Outstanding Supporting Actress in a Variety Program for her guest appearance in a Cher special. During the mid-1970s, Parton had wanted to expand her audience base. Although her first attempt, the television variety show "Dolly!" (1976–77), had high ratings, it lasted only one season, with Parton requesting to be released from her contract because of the stress it was placing on her vocal cords (she later tried a second television variety show, also titled "Dolly" (1987–88); it too lasted only one season). In her first feature film, Parton portrayed a secretary in a leading role alongside Jane Fonda and Lily Tomlin in the comedy film "9 to 5" (1980). The movie highlights discrimination against women in the workplace and created awareness of the National Association of Working Women (9to5). She received nominations for the Golden Globe Award for Best Actress – Motion Picture Musical or Comedy and the Golden Globe Award for New Star of the Year – Actress. Parton wrote and recorded the film's title song, which received nominations for the Academy Award for Best Original Song and the Golden Globe Award for Best Original Song. 
Released as a single, the song won both the Grammy Award for Best Female Country Vocal Performance and the Grammy Award for Best Country Song. It also reached No. 1 on the Hot 100 chart and was ranked No. 78 on the "AFI's 100 Years...100 Songs" list released by the American Film Institute in 2004. "9 to 5" became a major box office success, grossing over $3.9 million in its opening weekend and over $103 million worldwide. Parton was named Top Female Box Office Star by the "Motion Picture Herald" in both 1981 and 1982 due to the film's success. In late 1981, Parton began filming her second film, the musical "The Best Little Whorehouse in Texas" (1982), which earned her a second nomination for the Golden Globe Award for Best Actress – Motion Picture Musical or Comedy. The film was greeted with positive critical reviews and became a commercial success, earning over $69 million worldwide. After a two-year hiatus from films, Parton was teamed with Sylvester Stallone for "Rhinestone" (1984), a comedy about a country music star's efforts to mold an unknown into a music sensation; the film was a critical and financial failure, making just over $21 million on a $28 million budget. In 1989, Parton returned to film acting in "Steel Magnolias" (1989), based on the play of the same name by Robert Harling. The film was popular with critics and audiences, grossing over $95 million in the U.S. She starred in the television movies "A Smoky Mountain Christmas" (1986); "Wild Texas Wind" (1991); "Unlikely Angel" (1996), portraying an angel sent back to earth following a deadly car crash; and "Blue Valley Songbird" (1999), in which her character lives through her music. Parton starred along with James Woods in "Straight Talk" (1992), which received mixed reviews and grossed a modest $21 million at the box office. She made a cameo appearance as herself in "The Beverly Hillbillies" (1993), an adaptation of the long-running TV sitcom of the same name (1962–71). Parton has done voice work for animated television series, playing herself in "Alvin and the Chipmunks" (episode "Urban Chipmunk", 1983) and the character Katrina Eloise "Murph" Murphy (Ms. Frizzle's first cousin) in "The Magic School Bus" (episode "The Family Holiday Special", 1994). She also has guest-starred in several sitcoms, including a 1990 episode of "Designing Women" (episode "The First Day of the Last Decade of the Entire Twentieth Century") as herself, the guardian movie star of Charlene's baby. She made a guest appearance on "Reba" (episode "Reba's Rules of Real Estate") portraying a real-estate agency owner, and on "The Simpsons" (episode "Sunday, Cruddy Sunday", 1999). She appeared as herself in 2000 on the Halloween episode of Bette Midler's short-lived sitcom "Bette", and on episode 14 of "Babes" (produced by Sandollar Productions, Parton and Sandy Gallin's joint production company). She made cameo appearances on the Disney Channel as "Aunt Dolly", visiting Hannah and her family in fellow Tennessean and real-life goddaughter Miley Cyrus's series "Hannah Montana" (episodes "Good Golly, Miss Dolly", 2006, "I Will Always Loathe You", 2007, and "Kiss It All Goodbye", 2010), for which she was nominated for Outstanding Guest Actress in a Comedy Series. Parton appeared as an overprotective mother in the comedy "Frank McKlusky, C.I." (2002), and made a cameo appearance in the comedy film "", starring Sandra Bullock. 
She was featured in "The Book Lady" (2008), a documentary about her campaign for children's literacy. Parton expected to reprise her television role as Hannah's godmother in the musical comedy film "" (2009), but the character was omitted from the screenplay. She had a voice role in the comedy family film "Gnomeo & Juliet" (2011), a computer-animated take on William Shakespeare's "Romeo and Juliet" with garden gnomes. "Dolly Parton's Coat of Many Colors", a made-for-TV film based on Parton's song of the same name, and featuring narration by Parton, aired on NBC in December 2015, with child actress Alyvia Alyn Lind portraying the young Parton; Parton also had a cameo in its sequel, which aired in November 2016. She co-starred with Queen Latifah in the musical film "Joyful Noise" (2012), playing a choir director's widow who joins forces with Latifah's character, a mother of two teens, to save a small Georgia town's gospel choir. In June 2018, Parton announced an eight-part Netflix series based on her music, with herself as executive producer and co-star. The series, called "Dolly Parton's Heartstrings", aired in November 2019. Parton is the subject of the NPR podcast "Dolly Parton's America", hosted by Jad Abumrad, who also hosts Radiolab. In December 2019, the biographical documentary "Here I Am", a co-production of Netflix and the BBC that takes its name from Parton's 1971 song, was added to the catalog of the Netflix streaming service. Parton is the fourth of 12 children; her siblings in order are Willadeene, David, Denver, Bobby, Stella, Cassie, Randy, Larry, Floyd, Frieda and Rachel. On May 30, 1966, Parton and Carl Thomas Dean (born in Nashville, Tennessee) were married in Ringgold, Georgia. Although Parton does not use Dean's surname professionally, she has stated that her passport says "Dolly Parton Dean" and that she sometimes uses Dean when signing contracts. Dean, who is retired from running an asphalt road-paving business in Nashville, has always shunned publicity and rarely accompanies his wife to any events. According to Parton, he has seen her perform only once. She also has said in interviews that, although it appears they spend little time together, it is simply that nobody sees him publicly. She has commented on Dean's romantic side, saying that he does spontaneous things to surprise her and sometimes even writes poems for her. In 2011 Parton said, "We're really proud of our marriage. It's the first for both of us. And the last." On May 6, 2016, Parton announced that she and her husband would renew their vows in honor of their 50th wedding anniversary later in the month. Parton and Dean helped raise several of Parton's younger siblings in Nashville, leading her nieces and nephews to refer to her as "Aunt Granny", a moniker that later lent its name to one of Parton's Dollywood restaurants. Because she suffered from endometriosis, a condition that eventually required her to undergo a hysterectomy, the couple have no children of their own. Parton is the godmother of performer Miley Cyrus. Parton has turned down several offers to pose nude for "Playboy" magazine, but did appear on the cover of the October 1978 issue wearing a Playboy bunny outfit, complete with ears (the issue featured Lawrence Grobel's extensive and candid interview with Parton, representing one of her earliest high-profile interviews with the mainstream press). 
The association of breasts with Parton's public image is illustrated in the naming of Dolly the sheep after her, since the sheep was cloned from a cell taken from an adult ewe's mammary gland. In Mobile, Alabama, the General W.K. Wilson Jr. Bridge is commonly called "the Dolly Parton Bridge" due to its arches resembling her bust; the Hernando de Soto Bridge over the Mississippi River at Memphis is sometimes given the same nickname for the same reason. Parton is known for having undergone considerable plastic surgery. On a 2003 episode of "The Oprah Winfrey Show", Winfrey asked what kind of cosmetic surgery Parton had undergone; Parton replied that cosmetic surgery was imperative in keeping with her famous image. Parton has repeatedly joked about her physical image and surgeries, saying, "It takes a lot of money to look this cheap." Her breasts have garnered her mentions in several songs, including "Dolly Parton's Hits" by Bobby Braddock, "Marty Feldman Eyes" by Bruce Baum (a parody of "Bette Davis Eyes"), "No Show Jones" by George Jones and Merle Haggard, and "Make Me Proud" by Drake featuring Nicki Minaj. When asked about future plastic surgeries, she famously said, "If I see something sagging, bagging or dragging, I'll get it nipped, tucked or sucked." Parton has acknowledged the escapism behind her exaggerated feminine image: "Womanhood was a difficult thing to get a grip on in those hills, unless you were a man." Since the mid-1980s, Parton has supported many charitable efforts, particularly in the area of literacy, primarily through her Dollywood Foundation. Her literacy program, Dolly Parton's Imagination Library, a part of the Dollywood Foundation, mails one book per month to each enrolled child from the time of their birth until they enter kindergarten. Currently, over 1,600 local communities provide the Imagination Library to almost 850,000 children each month across the U.S., Canada, the UK, Australia, and the Republic of Ireland. In 2006, Parton published a cookbook, "Dolly's Dixie Fixin's: Love, Laughter and Lots of Good Food". The Dollywood Foundation, funded from Parton's profits, has been noted for bringing jobs and tax revenues to a previously depressed region. Parton also has worked to raise money for several other causes, including the American Red Cross and HIV/AIDS-related charities. In December 2006, Parton pledged $500,000 toward a proposed $90-million hospital and cancer center to be constructed in Sevierville in the name of Robert F. Thomas, the physician who delivered her, and announced a benefit concert to raise additional funds for the project; the concert played to about 8,000 people. That same year, she and Emmylou Harris allowed their music to be used in a PETA ad campaign that encouraged pet owners to keep their dogs indoors rather than chained outside. In 2003, her efforts to preserve the bald eagle through the American Eagle Foundation's sanctuary at Dollywood earned her the Partnership Award from the U.S. Fish and Wildlife Service. Parton received the Woodrow Wilson Award for Public Service from the Woodrow Wilson International Center for Scholars of the Smithsonian Institution at a ceremony in Nashville on November 8, 2007. In February 2018, Parton donated the Imagination Library's 100 millionth free book, a copy of her children's picture book "Coat of Many Colors", to the Library of Congress in Washington, D.C., which honored her for the milestone. 
For her work in literacy, Parton has received various awards, including the Association of American Publishers Honors Award (2000), the Good Housekeeping Seal of Approval (2001) (the first time the seal had been awarded to a person), the American Association of School Administrators' Galaxy Award (2002), the National State Teachers of the Year Chasing Rainbows Award (2002), and the Parents as Teachers National Center Child and Family Advocacy Award (2003). On May 8, 2009, Parton gave the commencement speech at the graduation ceremony for the University of Tennessee, Knoxville's College of Arts and Sciences, and received an honorary Doctor of Humane Letters from the university. It was only the second honorary degree given by the university, and in presenting it, the university's chancellor, Jimmy G. Cheek, said, "Because of her career not just as a musician and entertainer, but for her role as a cultural ambassador, philanthropist and lifelong advocate for education, it is fitting that she be honored with an honorary degree from the flagship educational institution of her home state." In response to the 2016 Great Smoky Mountains wildfires, Parton was one of a number of country music artists who participated in a telethon, held in Nashville on December 9, to raise money for victims of the fires. In addition, Parton hosted her own telethon for the victims on December 13 and reportedly raised around $9 million. In response to the COVID-19 pandemic, Parton donated $1 million toward research at Vanderbilt University and encouraged those who can afford it to make similar donations. Parton has been a generous donor to VUMC (Vanderbilt University Medical Center); among her gifts was a transformational contribution to the Monroe Carell Jr. Children's Hospital at Vanderbilt pediatric cancer program in honor of Abumrad and her niece, Hannah Dennison, who was successfully treated for leukemia as a child at the Children's Hospital. Parton is one of the most-honored female country performers of all time. The Recording Industry Association of America has certified 25 of her single or album releases as Gold, Platinum or Multi-Platinum. She has had 26 songs reach No. 1 on the "Billboard" country charts, a record for a female artist. She has 42 career top-10 country albums, a record for any artist, and 110 career-charted singles over the past 40 years. All-inclusive sales of singles, albums, collaboration records, compilation usage, and paid digital downloads during Parton's career have reportedly topped 100 million records around the world. Parton has earned nine Grammy Awards (including her 2011 Lifetime Achievement Grammy) from a total of 49 nominations, the second-most of any female artist in the awards' history. At the American Music Awards, she has won three awards out of 18 nominations; at the Country Music Association Awards, 10 awards out of 42 nominations; and at the Academy of Country Music Awards, seven awards out of 39 nominations. She is one of only six female artists (along with Reba McEntire, Barbara Mandrell, Shania Twain, Loretta Lynn, and Taylor Swift) to win the Country Music Association's highest honor, Entertainer of the Year (1978). She also has been nominated for two Academy Awards and a Tony Award, and was nominated for an Emmy Award for her appearance in a 1978 Cher television special. 
She was awarded a star on the Hollywood Walk of Fame for her music in 1984, located at 6712 Hollywood Boulevard in Hollywood, California; a star on the Nashville StarWalk for Grammy winners; and a bronze sculpture on the courthouse lawn in Sevierville. She has called that statue of herself in her hometown "the greatest honor", because it came from the people who knew her. Parton was inducted into the Grand Ole Opry in 1969, and in 1986 was named one of "Ms. Magazine"'s Women of the Year. In 1986, she was inducted into the Nashville Songwriters Hall of Fame. In 1999, Parton received country music's highest honor, induction into the Country Music Hall of Fame. She received an honorary doctorate degree from Carson-Newman College (Jefferson City, Tennessee) in 1990, followed by induction into the National Academy of Popular Music/Songwriters Hall of Fame in 2001. In 2002, she ranked No. 4 in CMT's 40 Greatest Women of Country Music. Her Grammy Award nomination total ties her with Bruce Springsteen, placing her tenth overall among the most-nominated artists. Parton was honored in 2003 with a tribute album called "Just Because I'm a Woman: Songs of Dolly Parton". The artists who recorded versions of Parton's songs included Melissa Etheridge ("I Will Always Love You"), Alison Krauss ("9 to 5"), Shania Twain ("Coat of Many Colors"), Meshell Ndegeocello ("Two Doors Down"), Norah Jones ("The Grass Is Blue"), and Sinéad O'Connor ("Dagger Through the Heart"). Parton herself contributed a re-recording of the title song, originally the title song of her first RCA album in 1968. Parton was awarded the Living Legend Medal by the U.S. Library of Congress on April 14, 2004, for her contributions to the cultural heritage of the United States. She is also the focus of a Library of Congress collection exploring the influences of country music on her life and career; the collection contains images, articles, sheet music, and more. In 2005, she was honored with the National Medal of Arts, the highest honor given by the U.S. government for excellence in the arts, presented by the U.S. President. On December 3, 2006, Parton received the Kennedy Center Honors from the John F. Kennedy Center for the Performing Arts for her lifetime of contributions to the arts. During the show, some of country music's biggest names came to show their admiration: Carrie Underwood performed "Islands in the Stream" with Rogers, Parton's original duet partner; Krauss performed "Jolene" and duetted on "Coat of Many Colors" with Twain; and McEntire and Reese Witherspoon also came to pay tribute. On November 16, 2010, Parton accepted the Liseberg Applause Award, the theme park industry's most prestigious honor, on behalf of the Dollywood theme park during a ceremony held at IAAPA Attractions Expo 2010 in Orlando, Florida. In 2015, a newly discovered species of fungus found growing in the southern Appalachians was named "Japewiella dollypartoniana" in honor of Parton's music and her efforts to bring national and global attention to that region. In 2018, Parton received a second star on the Hollywood Walk of Fame, inducted alongside Linda Ronstadt and Emmylou Harris in recognition of their work as a trio. Parton was also recognized in the Guinness World Records 2018 Edition for holding the records for the most decades with a Top 20 hit on "Billboard"'s Hot Country Songs chart and the most hits on "Billboard"'s Hot Country Songs chart by a female artist. 
In 2020, Parton received a Grammy Award for her collaboration with For KING & COUNTRY on their song "God Only Knows". During her career, Parton has been inducted into numerous halls of fame.
https://en.wikipedia.org/wiki?curid=8716
Dirk Benedict Dirk Benedict (born Dirk Niewoehner on March 1, 1945) is an American movie, television and stage actor and author. He is best known for playing the characters Lieutenant Starbuck in the original "Battlestar Galactica" film and television series and Lieutenant Templeton "Faceman" Peck in "The A-Team" television series. He is the author of "Confessions of a Kamikaze Cowboy" and "And Then We Went Fishing". Benedict was born Dirk Niewoehner in Helena, Montana, the son of Priscilla Mella (née Metzger), an accountant, and George Edward Niewoehner, a lawyer. He grew up in White Sulphur Springs, Montana, and graduated from Whitman College in 1967. Benedict reportedly chose his stage name from a serving of eggs Benedict he had prior to his acting career. He is of German descent. Benedict's film debut was in the 1972 film "Georgia, Georgia". When the New York run of "Butterflies Are Free" ended, he received an offer to repeat his performance in Hawaii, opposite Barbara Rush. While there, he appeared as a guest lead on "Hawaii Five-O". The producers of a horror film called "Sssssss" (1973) saw Benedict's performance in "Hawaii Five-O" and promptly cast him as the lead in that movie. He next played the psychotic wife-beating husband of Twiggy in her American film debut, "W" (1974). Benedict starred in the television series "Chopper One", which aired for one season in 1974. He made two appearances in "Charlie's Angels" and also appeared on the "Donny & Marie" variety show. Benedict's career break came in 1978 when he appeared as Lieutenant Starbuck in the movie and television series "Battlestar Galactica". The same year Benedict starred in the TV movie "Cruise Into Terror", and he appeared in the ensemble movie "Scavenger Hunt" the following year. In 1980, Benedict starred alongside Linda Blair in an action-comedy movie called "Ruckus". In 1983, Benedict gained further popularity as the con man Lieutenant Templeton "Face" Peck in the 1980s action television series "The A-Team". He played "Faceman" for the show's entire production run, although the series didn't air until January 1983, and the final episode wasn't shown until 1987 rebroadcasts. The episode "Steel" includes a scene at Universal Studios where Face looks on bemused as a Cylon walks by him, an in-joke referring to his previous role in "Battlestar Galactica"; the clip is incorporated into the series' opening credit sequence from season 3 onward. In 1986, Benedict starred as low-life band manager Harry Smilac in the movie "Body Slam" along with Lou Albano, Roddy Piper, and cameo appearances by Freddie Blassie, Ric Flair, and Bruno Sammartino. His character Smilac ends up managing the pro wrestler "Quick Rick" Roberts (Piper) and faces opposition from Captain Lou and his wrestling tag team "the Cannibals". In 1987, Benedict took the title role of Shakespeare's "Hamlet" at the Abbey Theatre in Manhattan; both his performance and the entire production were lambasted by critics. Benedict starred in the 1989 TV movie "Trenchcoat in Paradise". In 1991, Benedict starred in "Blue Tornado", playing Alex, call sign Fireball, an Italian Air Force fighter pilot. Benedict published an autobiography, "Confessions of a Kamikaze Cowboy: A True Story of Discovery, Acting, Health, Illness, Recovery, and Life" (Avery Publishing). In 1993, Benedict starred in "Shadow Force". Benedict also appeared as Jake Barnes in the 1996 action-adventure film "Alaska". In 2000, Benedict wrote and directed his first screenplay, "Cahoots". 
Benedict appeared in the 2006 German film "Goldene Zeiten" ("Golden Times") in a dual role, playing an American former TV star as well as a German lookalike who impersonates him. In 2006, he wrote an online essay criticizing the then-airing "Battlestar Galactica" re-imagined series and, especially, its casting of a woman as his character, Starbuck, writing that "the war against masculinity has been won" and that "a television show based on hope, spiritual faith, and family is unimagined and regurgitated as a show of despair, sexual violence and family dysfunction". He appeared as a contestant on the 2007 U.K. series of "Celebrity Big Brother", arriving on launch night in a replica of the "A-Team" van, smoking a cigar and accompanied by the "A-Team" theme tune. In 2010, Benedict starred in a stage production of "Prescription: Murder", playing Lieutenant Columbo for the Middle Ground Theatre Company in the UK. Benedict also made a cameo appearance in the 2010 film adaptation of "The A-Team" as Pensacola Prisoner Milt. In 2019, Benedict took on the role of Jack Strange in the B movie "Space Ninjas", written and directed by Scott McQuaid. Benedict plays an eccentric TV host of a show called "Stranger Than Fiction", a hybrid of "The Twilight Zone" and "The X-Files"; the sci-fi comedy-horror film follows a group of high school students trying to survive the night during a space-ninja invasion. In the 1970s, Benedict survived a prostate tumor believed to have been cancerous. Having rejected conventional medical treatment, he credited his survival to the adoption of a macrobiotic diet recommended to him by actress Gloria Swanson. In 1986, he married Toni Hudson, an actress with whom he has two sons, George and Roland; Hudson had previously appeared as Dana in the fourth-season "A-Team" episode "Blood, Sweat and Cheers". They divorced in 1995. In 1998, Benedict learned that he also has another son, John Talbert (born 1968), from an earlier relationship, who had been placed for adoption. With the help of his adoptive parents, Talbert discovered and contacted his birth parents.
https://en.wikipedia.org/wiki?curid=8718
Doppler effect The Doppler effect (or the Doppler shift) is the change in frequency of a wave in relation to an observer who is moving relative to the wave source. It is named after the Austrian physicist Christian Doppler, who described the phenomenon in 1842. A common example of Doppler shift is the change of pitch heard when a vehicle sounding a horn approaches and recedes from an observer. Compared to the emitted frequency, the received frequency is higher during the approach, identical at the instant of passing by, and lower during the recession. The reason for the Doppler effect is that when the source of the waves is moving towards the observer, each successive wave crest is emitted from a position closer to the observer than the crest of the previous wave. Therefore, each wave takes slightly less time to reach the observer than the previous wave. Hence, the time between the arrivals of successive wave crests at the observer is reduced, causing an increase in the frequency. While they are traveling, the distance between successive wave fronts is reduced, so the waves "bunch together". Conversely, if the source of waves is moving away from the observer, each wave is emitted from a position farther from the observer than the previous wave, so the arrival time between successive waves is increased, reducing the frequency. The distance between successive wave fronts is then increased, so the waves "spread out". For waves that propagate in a medium, such as sound waves, the velocity of the observer and of the source are relative to the medium in which the waves are transmitted. The total Doppler effect may therefore result from motion of the source, motion of the observer, or motion of the medium. Each of these effects is analyzed separately. For waves which do not require a medium, such as light or gravity in general relativity, only the relative difference in velocity between the observer and the source needs to be considered. Doppler first proposed this effect in 1842 in his treatise "Über das farbige Licht der Doppelsterne und einiger anderer Gestirne des Himmels" (On the coloured light of the binary stars and some other stars of the heavens). The hypothesis was tested for sound waves by Buys Ballot in 1845. He confirmed that the sound's pitch was higher than the emitted frequency when the sound source approached him, and lower than the emitted frequency when the sound source receded from him. Hippolyte Fizeau discovered independently the same phenomenon on electromagnetic waves in 1848 (in France, the effect is sometimes called "effet Doppler-Fizeau", but that name was not adopted by the rest of the world, as Fizeau's discovery was six years after Doppler's proposal). In Britain, John Scott Russell made an experimental study of the Doppler effect (1848). In classical physics, where the speeds of the source and the receiver relative to the medium are lower than the velocity of waves in the medium, the relationship between observed frequency f and emitted frequency f_0 is given by f = f_0 (c + v_r)/(c + v_s), where c is the propagation speed of the waves in the medium, v_r is the speed of the receiver relative to the medium (positive if the receiver is moving toward the source), and v_s is the speed of the source relative to the medium (positive if the source is moving away from the receiver). Note this relationship predicts that the frequency will decrease if either source or receiver is moving away from the other, and it assumes that the source is either directly approaching or receding from the observer. If the source approaches the observer at an angle (but still with a constant velocity), the observed frequency that is first heard is higher than the object's emitted frequency. 
Thereafter, there is a monotonic decrease in the observed frequency as it gets closer to the observer, through equality when it is coming from a direction perpendicular to the relative motion (and was emitted at the point of closest approach; but when the wave is received, the source and observer will no longer be at their closest), and a continued monotonic decrease as it recedes from the observer. When the observer is very close to the path of the object, the transition from high to low frequency is very abrupt. When the observer is far from the path of the object, the transition from high to low frequency is gradual. If the speeds v_s and v_r are small compared to the speed of the wave, the relationship between observed frequency f and emitted frequency f_0 is approximately f ≈ (1 + Δv/c) f_0, so that the change in frequency is Δf = (Δv/c) f_0, where Δv = v_r − v_s is the velocity of the receiver relative to the source (positive when the two are approaching each other). To see this, start from f = f_0 (c + v_r)/(c + v_s) and divide the numerator and the denominator by c, giving f = f_0 (1 + v_r/c)/(1 + v_s/c). Since v_s/c is small, we can substitute the geometric expansion 1/(1 + v_s/c) ≈ 1 − v_s/c, so that f ≈ f_0 (1 + v_r/c)(1 − v_s/c) ≈ f_0 (1 + (v_r − v_s)/c) once the second-order term is dropped. With an observer stationary relative to the medium, if a moving source is emitting waves with an actual frequency f_0 (in this case, the wavelength is changed while the transmission velocity of the wave keeps constant; note that the transmission velocity of the wave does not depend on the velocity of the source), then the observer detects waves with a frequency f given by f = f_0 c/(c + v_s). A similar analysis for a moving observer and a stationary source (in this case, the wavelength keeps constant, but due to the motion, the rate at which the observer receives waves, and hence the transmission velocity of the wave with respect to the observer, is changed) yields the observed frequency f = f_0 (c + v_r)/c. A similar analysis for a moving observer and a moving source yields the observed frequency f = f_0 (c + v_r)/(c + v_s). Assuming a stationary observer and a source moving at the speed of sound, the Doppler equation predicts a perceived momentary infinite frequency by an observer in front of a source traveling at the speed of sound. All the peaks are at the same place, so the wavelength is zero and the frequency is infinite. This overlay of all the waves produces a shock wave which for sound waves is known as a sonic boom. When the source moves faster than the wave speed, the source outruns the wave. The equation can then give negative frequency values, but a frequency of −500 Hz is, as far as an observer is concerned, much the same as +500 Hz. Lord Rayleigh predicted the following effect in his classic book on sound: if the source is moving toward the observer at twice the speed of sound, a musical piece emitted by that source would be heard in correct time and tune, but "backwards". The Doppler effect with sound is only clearly heard with objects moving at high speed, as a change in frequency of one musical tone involves a speed of around 40 meters per second, and smaller changes in frequency can easily be confused with changes in the amplitude of the sounds from moving emitters. Neil A. Downie has demonstrated how the Doppler effect can be made much more easily audible by using an ultrasonic (e.g. 40 kHz) emitter on the moving object. The observer then uses a heterodyne frequency converter, as used in many bat detectors, to listen to a band around 40 kHz; a numerical illustration of this setup follows. 
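As a minimal sketch, the classical relation above can be evaluated numerically in Python for exactly such a 40 kHz ultrasonic emitter (the speed of sound of 343 m/s in air is an assumed round value):

    def doppler_observed(f_emit, c, v_receiver=0.0, v_source=0.0):
        # Classical Doppler relation f = f0 * (c + v_r) / (c + v_s), as above.
        # c          : wave speed in the medium (m/s)
        # v_receiver : receiver speed relative to the medium,
        #              positive when moving toward the source (m/s)
        # v_source   : source speed relative to the medium,
        #              positive when moving away from the receiver (m/s)
        return f_emit * (c + v_receiver) / (c + v_source)

    # A 40 kHz emitter approaching a stationary observer at 2 m/s in air:
    shift = doppler_observed(40000.0, c=343.0, v_source=-2.0) - 40000.0
    print(round(shift, 1))  # about 234.6 Hz

After the heterodyne converter shifts the received band down to around 2000 Hz, this shift of roughly 235 Hz is close to the whole-tone change described next.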
In this case, with the bat detector tuned to give a frequency of 2000 Hz for the stationary emitter, the observer will perceive a frequency shift of a whole tone, 240 Hz, if the emitter travels at 2 meters per second. An acoustic Doppler current profiler (ADCP) is a hydroacoustic current meter, similar to a sonar, used to measure water current velocities over a depth range using the Doppler effect of sound waves scattered back from particles within the water column. The term ADCP is a generic term for all acoustic current profilers, although the abbreviation originates from an instrument series introduced by RD Instruments in the 1980s. The working frequencies of ADCPs range from 38 kHz to several megahertz. The device used in the air for wind speed profiling using sound is known as SODAR and works with the same underlying principles. Dynamic real-time path planning in robotics, used to aid the movement of robots in a sophisticated environment with moving obstacles, often takes advantage of the Doppler effect. Such applications are especially common in competitive robotics, where the environment is constantly changing, as in robot soccer. A siren on a passing emergency vehicle will start out higher than its stationary pitch, slide down as it passes, and continue lower than its stationary pitch as it recedes from the observer. Astronomer John Dobson explained the effect thus: if the siren approached the observer directly, the pitch would remain constant, at a higher than stationary pitch, until the vehicle hit him, and then immediately jump to a new lower pitch. Because the vehicle passes by the observer, the radial velocity does not remain constant, but instead varies as a function of the angle between his line of sight and the siren's velocity: v_radial = v_s cos(θ), where θ is the angle between the object's forward velocity and the line of sight from the object to the observer. The Doppler effect for electromagnetic waves such as light is of great use in astronomy and results in either a so-called redshift or blueshift. It has been used to measure the speed at which stars and galaxies are approaching or receding from us; that is, their radial velocities. This may be used to detect if an apparently single star is, in reality, a close binary, to measure the rotational speed of stars and galaxies, or to detect exoplanets. This redshift and blueshift happens on a very small scale: if an object were moving toward Earth, there would not be a noticeable difference in visible light to the unaided eye. Note that redshift is also used to measure the expansion of space, but that this is not truly a Doppler effect. Rather, redshifting due to the expansion of space is known as cosmological redshift, which can be derived purely from the Robertson-Walker metric under the formalism of general relativity. Having said this, it also happens that there "are" detectable Doppler effects on cosmological scales, which, if incorrectly interpreted as cosmological in origin, lead to the observation of redshift-space distortions. The use of the Doppler effect for light in astronomy depends on our knowledge that the spectra of stars are not homogeneous. They exhibit absorption lines at well-defined frequencies that are correlated with the energies required to excite electrons in various elements from one level to another. The Doppler effect is recognizable in the fact that the absorption lines are not always at the frequencies that are obtained from the spectrum of a stationary light source. 
Since blue light has a higher frequency than red light, the spectral lines of an approaching astronomical light source exhibit a blueshift and those of a receding astronomical light source exhibit a redshift. Among the nearby stars, the largest radial velocities with respect to the Sun are +308 km/s (BD-15°4041, also known as LHS 52, 81.7 light-years away) and −260 km/s (Woolley 9722, also known as Wolf 1106 and LHS 64, 78.2 light-years away). Positive radial velocity means the star is receding from the Sun, negative that it is approaching. The Doppler effect is used in some types of radar to measure the velocity of detected objects. A radar beam is fired at a moving target (e.g. a motor car, as police use radar to detect speeding motorists) as it approaches or recedes from the radar source. Each successive radar wave has to travel farther to reach the car, before being reflected and re-detected near the source. As each wave has to move farther, the gap between each wave increases, increasing the wavelength. In some situations, the radar beam is fired at the moving car as it approaches, in which case each successive wave travels a lesser distance, decreasing the wavelength. In either situation, calculations from the Doppler effect accurately determine the car's velocity. Moreover, the proximity fuze, developed during World War II, relies upon Doppler radar to detonate explosives at the correct time, height, distance, etc. Because the Doppler shift affects the wave incident upon the target as well as the wave reflected back to the radar, the change in frequency observed by a radar due to a target moving at relative velocity Δv is twice that from the same target emitting a wave: Δf = (2Δv/c) f_0; a numerical sketch of this factor-of-two relation appears below. An echocardiogram can, within certain limits, produce an accurate assessment of the direction of blood flow and the velocity of blood and cardiac tissue at any arbitrary point using the Doppler effect. One of the limitations is that the ultrasound beam should be as parallel to the blood flow as possible. Velocity measurements allow assessment of cardiac valve areas and function, abnormal communications between the left and right side of the heart, leaking of blood through the valves (valvular regurgitation), and calculation of the cardiac output. Contrast-enhanced ultrasound using gas-filled microbubble contrast media can be used to improve velocity or other flow-related medical measurements. Although "Doppler" has become synonymous with "velocity measurement" in medical imaging, in many cases it is not the frequency shift (Doppler shift) of the received signal that is measured, but the phase shift ("when" the received signal arrives). Velocity measurements of blood flow are also used in other fields of medical ultrasonography, such as obstetric ultrasonography and neurology. Velocity measurement of blood flow in arteries and veins based on the Doppler effect is an effective tool for diagnosis of vascular problems like stenosis. Instruments such as the laser Doppler velocimeter (LDV) and acoustic Doppler velocimeter (ADV) have been developed to measure velocities in a fluid flow. The LDV emits a light beam and the ADV emits an ultrasonic acoustic burst, and they measure the Doppler shift in the wavelengths of reflections from particles moving with the flow. The actual flow is computed as a function of the water velocity and phase. This technique allows non-intrusive flow measurements, at high precision and high frequency. 
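A small Python sketch of the factor-of-two radar relation above (the 24 GHz radar frequency and 30 m/s target speed are illustrative assumptions, not values from the text):

    C_LIGHT = 3.0e8  # approximate speed of light, m/s

    def radar_doppler_shift(f_radar, v_target):
        # Frequency shift seen by a radar from a reflecting target
        # approaching at v_target (m/s). The factor of 2 arises because
        # the shift applies on the way out to the target and again on
        # the reflection back toward the radar.
        return 2.0 * v_target / C_LIGHT * f_radar

    # Illustrative: a 24 GHz traffic radar and a car approaching at 30 m/s
    print(radar_doppler_shift(24.0e9, 30.0))  # 4800.0 Hz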
Developed originally for velocity measurements in medical applications (blood flow), Ultrasonic Doppler Velocimetry (UDV) can measure in real time a complete velocity profile in almost any liquid containing particles in suspension, such as dust, gas bubbles, or emulsions. Flows can be pulsating, oscillating, laminar or turbulent, stationary or transient. This technique is fully non-invasive. Fast-moving satellites can have a Doppler shift of dozens of kilohertz relative to a ground station. The speed, and thus the magnitude of the Doppler effect, changes due to the curvature of the Earth. Dynamic Doppler compensation, where the frequency of a signal is changed progressively during transmission, is used so the satellite receives a constant frequency signal. After realizing that the Doppler shift had not been considered before launch of the Huygens probe of the 2005 Cassini–Huygens mission, the probe trajectory was altered to approach Titan in such a way that its transmissions traveled perpendicular to its direction of motion relative to Cassini, greatly reducing the Doppler shift. The Doppler shift of the direct path can be estimated by the following formula: f_D,dir = (v_mob/λ_c) cos(φ) cos(θ), where v_mob is the velocity of the mobile station, λ_c is the wavelength of the carrier, φ is the elevation angle of the satellite and θ is the driving direction with respect to the satellite. The additional Doppler shift due to the satellite moving can be described as f_D,sat = v_rel,sat/λ_c, where v_rel,sat is the relative speed of the satellite. The Leslie speaker, most commonly associated with and predominantly used with the famous Hammond organ, takes advantage of the Doppler effect by using an electric motor to rotate an acoustic horn around a loudspeaker, sending its sound in a circle. At the listener's ear, this results in rapidly fluctuating frequencies of a keyboard note. A laser Doppler vibrometer (LDV) is a non-contact instrument for measuring vibration. The laser beam from the LDV is directed at the surface of interest, and the vibration amplitude and frequency are extracted from the Doppler shift of the laser beam frequency due to the motion of the surface. During the segmentation of vertebrate embryos, waves of gene expression sweep across the presomitic mesoderm, the tissue from which the precursors of the vertebrae (somites) are formed. A new somite is formed upon arrival of a wave at the anterior end of the presomitic mesoderm. In zebrafish, it has been shown that the shortening of the presomitic mesoderm during segmentation leads to a Doppler effect as the anterior end of the tissue moves into the waves. This Doppler effect contributes to the period of segmentation. Since 1968, scientists such as Victor Veselago have speculated about the possibility of an inverse Doppler effect. The size of the Doppler shift depends on the refractive index of the medium a wave is traveling through, but some materials are capable of negative refraction, which should lead to a Doppler shift that works in a direction opposite that of a conventional Doppler shift. The first experiment that detected this effect was conducted by Nigel Seddon and Trevor Bearpark in Bristol, United Kingdom, in 2003. Later, the inverse Doppler effect was observed in some inhomogeneous materials, and predicted inside the Vavilov–Cherenkov cone.
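For the satellite case, the direct-path estimate reconstructed above can be sketched as follows; all numbers here are illustrative assumptions chosen only to sanity-check the formula, not data from the text:

    import math

    def satellite_doppler_direct(v_mobile, carrier_wavelength,
                                 elevation_deg, heading_deg):
        # Direct-path estimate f_D = (v/lambda) * cos(elevation) * cos(heading):
        # the radial component of the mobile station's velocity toward the
        # satellite, divided by the carrier wavelength.
        return (v_mobile / carrier_wavelength) \
            * math.cos(math.radians(elevation_deg)) \
            * math.cos(math.radians(heading_deg))

    # Illustrative: a vehicle at 30 m/s, a 2 GHz carrier (wavelength ~0.15 m),
    # satellite 45 degrees above the horizon, driving straight toward it:
    print(round(satellite_doppler_direct(30.0, 0.15, 45.0, 0.0)))  # about 141 Hz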
https://en.wikipedia.org/wiki?curid=8724
ΔT In precise timekeeping, ΔT (Delta T, delta-T, deltaT, or DT) is a measure of the cumulative effect of the departure of the Earth's rotation period from the fixed-length day of atomic time. Formally it is the time difference obtained by subtracting Universal Time (UT, defined by the Earth's rotation) from Terrestrial Time (TT, independent of the Earth's rotation): ΔT = TT − UT. The value of ΔT for the start of 1902 was approximately zero; for 2002 it was about 64 seconds. So the Earth's rotations over that century took about 64 seconds longer than would be required for days of atomic time. As well as this long-term drift in the length of the day, there are short-term fluctuations in the length of day, which are dealt with separately. The Earth's rotational speed is ν = 1/T, and a day corresponds to one period T. A rotational acceleration dν/dt gives a rate of change of the period of dT/dt = −(1/ν²)·dν/dt, which is usually expressed as α = ν·dT/dt = −(1/ν)·dν/dt. This has units of 1/time, and is commonly quoted as milliseconds-per-day per century (written as ms/day/cy, understood as (ms/day)/cy). Integrating α gives an expression for ΔT against time. Universal Time is a time scale based on the Earth's rotation, which is somewhat irregular over short periods (days up to a century), thus any time based on it cannot have an accuracy better than 1 in 10⁸. However, a larger, more consistent effect has been observed over many centuries: Earth's rate of rotation is inexorably slowing down. This observed change in the rate of rotation is attributable to two primary forces, one decreasing and one increasing the Earth's rate of rotation. Over the long term, the dominating force is tidal friction, which is slowing the rate of rotation, contributing about +2.3 ms/day/cy or +84 s/cy², which is equal to the very small fractional change +7.3×10⁻¹³ day/day. The most important force acting in the opposite direction, to speed up the rate, is believed to be a result of the melting of continental ice sheets at the end of the last glacial period. This removed their tremendous weight, allowing the land under them to begin to rebound upward in the polar regions, an effect that is still occurring today and will continue until isostatic equilibrium is reached. This "post-glacial rebound" brings mass closer to the rotational axis of the Earth, which makes the Earth spin faster, according to the law of conservation of angular momentum, similar to an ice skater pulling their arms in to spin faster. Models estimate this effect to contribute about −0.6 ms/day/cy. Combining these two effects, the net acceleration (actually a deceleration) of the rotation of the Earth, or the change in the length of the mean solar day (LOD), is +1.7 ms/day/cy or +62 s/cy² or +46.5 ns/day². This matches the average rate derived from astronomical records over the past 27 centuries. Terrestrial Time is a theoretical uniform time scale, defined to provide continuity with the former Ephemeris Time (ET). ET was an independent time-variable, proposed (and its adoption agreed) in the period 1948–52 with the intent of forming a gravitationally uniform time scale as far as was feasible at that time, and depending for its definition on Simon Newcomb's "Tables of the Sun" (1895), interpreted in a new way to accommodate certain observed discrepancies.
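The unit conversions quoted above are straightforward to verify; the following Python sketch derives the +62 s/cy² and +46.5 ns/day² figures from the net +1.7 ms/day/cy rate alone.

```python
# Sanity check of the unit conversions for the net slowing of Earth's
# rotation quoted in the text (+1.7 ms/day per century).

MS, NS = 1e-3, 1e-9       # seconds per millisecond / nanosecond
DAYS_PER_CENTURY = 36525.0

lod_rate = 1.7 * MS       # the day lengthens by 1.7 ms per century

# Sustained over a century, a 1.7 ms/day excess accumulates this much
# clock drift per century, i.e. the +62 s/cy^2 figure:
print(f"{lod_rate * DAYS_PER_CENTURY:.1f} s/cy^2")         # ~62.1

# The same rate expressed per day rather than per century, i.e. the
# +46.5 ns/day^2 figure:
print(f"{lod_rate / DAYS_PER_CENTURY / NS:.1f} ns/day^2")  # ~46.5
```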
Newcomb's tables formed the basis of all astronomical ephemerides of the Sun from 1900 through 1983: they were originally expressed (and published) in terms of Greenwich Mean Time and the mean solar day, but later, in respect of the period 1960–1983, they were treated as expressed in terms of ET, in accordance with the adopted ET proposal of 1948–52. ET, in turn, can now be seen (in light of modern results) as close to the average mean solar time between 1750 and 1890 (centered on 1820), because that was the period during which the observations on which Newcomb's tables were based were performed. While TT is strictly uniform (being based on the SI second, every second is the same as every other second), it is in practice realised by International Atomic Time (TAI) with an accuracy of about 1 part in 10¹⁴. Earth's rate of rotation must be integrated to obtain time, which is Earth's angular position (specifically, the orientation of the meridian of Greenwich relative to the fictitious mean sun). Integrating +1.7 ms/d/cy and centering the resulting parabola on the year 1820 yields (to a first approximation) ΔT ≈ −20 + 32·t² seconds, where t is measured in centuries from 1820. Smoothed historical measurements of ΔT using total solar eclipses are about +17190 s in the year −500 (501 BC), +10580 s in 0 (1 BC), +5710 s in 500, +1570 s in 1000, and +200 s in 1500. After the invention of the telescope, measurements were made by observing occultations of stars by the Moon, which allowed the derivation of more closely spaced and more accurate values for ΔT. ΔT continued to decrease until it reached a plateau of +11 ± 6 s between 1680 and 1866. For about three decades immediately before 1902 it was negative, reaching −6.64 s. Then it increased to +63.83 s in January 2000, +68.97 s in January 2018, and +69.361 s in January 2020, after a slight decrease from 69.358 s in July 2019 to 69.338 s in September and October 2019 and a new increase in November and December 2019. This will require the addition of an ever-greater number of leap seconds to UTC as long as UTC tracks UT1 with one-second adjustments. (The SI second as now used for UTC, when adopted, was already a little shorter than the current value of the second of mean solar time.) Physically, the meridian of Greenwich in Universal Time is almost always to the east of the meridian in Terrestrial Time, both in the past and in the future. +17190 s, or about 4.8 h, corresponds to 71.625°E. This means that in the year −500 (501 BC), Earth's faster rotation would cause a total solar eclipse to occur 71.625° to the east of the location calculated using the uniform TT. All values of ΔT before 1955 depend on observations of the Moon, either via eclipses or occultations. The angular momentum lost by the Earth due to friction induced by the Moon's tidal effect is transferred to the Moon, increasing its angular momentum, which means that its moment arm (approximately its distance from the Earth, i.e. precisely the semi-major axis of the Moon's orbit) is increased (for the time being about +3.8 cm/year), which via Kepler's laws of planetary motion causes the Moon to revolve around the Earth at a slower rate. The cited values of ΔT assume that the lunar acceleration (actually a deceleration, that is a negative acceleration) due to this effect is dn/dt = −26″/cy², where n is the mean sidereal angular motion of the Moon. This is close to the best estimate for dn/dt as of 2002 of −25.858 ± 0.003″/cy², so ΔT need not be recalculated given the uncertainties and smoothing applied to its current values.
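As a rough check, the first-approximation parabola can be compared against the smoothed eclipse-derived values quoted above. This is a minimal sketch of that comparison; real ephemeris work uses piecewise polynomial fits rather than a single parabola.

```python
# Long-term parabolic approximation of Delta-T from the text:
# Delta-T ~ -20 + 32*t^2 seconds, with t in centuries from 1820.

def delta_t_seconds(year: float) -> float:
    t = (year - 1820.0) / 100.0
    return -20.0 + 32.0 * t * t

# Compare with the smoothed historical values quoted in the text.
for year, observed in [(-500, 17190), (0, 10580), (500, 5710),
                       (1000, 1570), (1500, 200), (2000, 63.83)]:
    print(f"{year:5d}: parabola {delta_t_seconds(year):8.1f} s, "
          f"observed {observed} s")
```

The fit is close for the most distant epochs but diverges by a few hundred seconds in the medieval period, which is why tabulated values are preferred for precise work.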
Nowadays, UT is the observed orientation of the Earth relative to an inertial reference frame formed by extra-galactic radio sources, modified by an adopted ratio between sidereal time and solar time. Its measurement by several observatories is coordinated by the International Earth Rotation and Reference Systems Service (IERS). Tidal deceleration rates have varied over the history of the Earth-Moon system. Analysis of layering in fossil mollusc shells from 70 million years ago, in the Late Cretaceous period, shows that there were 372 days a year, and thus that the day was about 23.5 hours long then. Based on geological studies of "tidal rhythmites," the day was 21.9±0.4 hours long 620 million years ago and there were 13.1±0.1 synodic months/year and 400±7 solar days/year. The average recession rate of the Moon between then and now has been 2.17±0.31 cm/year, which is about half the present rate. The present high rate may be due to near resonance between natural ocean frequencies and tidal frequencies.
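The fossil day counts convert to day lengths by simple division, assuming the length of the year has stayed essentially constant; a quick Python check reproduces the figures quoted above.

```python
# Day length implied by a fossil count of solar days per year, assuming
# the year's duration has remained essentially constant.

HOURS_PER_YEAR = 365.25 * 24.0  # ~8766 hours

def day_length_hours(days_per_year: float) -> float:
    return HOURS_PER_YEAR / days_per_year

print(f"{day_length_hours(372):.1f} h")  # Late Cretaceous: ~23.6 h
print(f"{day_length_hours(400):.1f} h")  # ~620 Myr ago:    ~21.9 h
```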
https://en.wikipedia.org/wiki?curid=8727
David Deutsch David Elieser Deutsch (born 18 May 1953) is a British physicist at the University of Oxford. He is a Visiting Professor in the Department of Atomic and Laser Physics at the Centre for Quantum Computation (CQC) in the Clarendon Laboratory of the University of Oxford. He pioneered the field of quantum computation by formulating a description for a quantum Turing machine, as well as specifying an algorithm designed to run on a quantum computer. He has also proposed the use of entangled states and Bell's theorem for quantum key distribution and is a proponent of the many-worlds interpretation of quantum mechanics. In 2009, Deutsch expounded a new criterion for scientific explanation, which is to formulate invariants: 'State an explanation [publicly, so that it can be dated and verified by others later] that remains invariant [in the face of apparent change, new information, or unexpected conditions]'.
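The algorithm mentioned above is commonly known as Deutsch's algorithm: it determines with a single oracle query whether a one-bit function f is constant or balanced. The following NumPy state-vector simulation is an illustrative sketch of the standard textbook circuit, not code associated with Deutsch himself.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
I = np.eye(2)

def oracle(f):
    """U_f |x>|y> = |x>|y XOR f(x)>, built as a 4x4 permutation matrix."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1.0
    return U

def deutsch(f):
    state = np.kron([1.0, 0.0], [0.0, 1.0])  # prepare |0>|1>
    state = np.kron(H, H) @ state            # Hadamard on both qubits
    state = oracle(f) @ state                # single query to the oracle
    state = np.kron(H, I) @ state            # Hadamard on the query qubit
    p1 = np.sum(np.abs(state[2:]) ** 2)      # P(first qubit measures 1)
    return "balanced" if p1 > 0.5 else "constant"

print(deutsch(lambda x: 0))      # constant
print(deutsch(lambda x: x))      # balanced
print(deutsch(lambda x: 1 - x))  # balanced
```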
https://en.wikipedia.org/wiki?curid=8729
Volkssturm The Volkssturm ("people's storm") was a national militia established by Nazi Germany during the last months of World War II. It was not set up by the German Army, the ground component of the combined German "Wehrmacht" armed forces, but by the Nazi Party on the orders of Adolf Hitler, and its existence was only officially announced on 16 October 1944. It was staffed by conscripting males between the ages of 16 and 60 years who were not already serving in some military unit. The "Volkssturm" comprised one of the final components of the total war promulgated by Propaganda Minister Joseph Goebbels, part of a Nazi endeavor to overcome their enemies' military strength through force of will. The new "Volkssturm" drew inspiration from the old Prussian "Landsturm" of 1813–1815, which fought in the wars of liberation against Napoleon, mainly as guerrilla forces. Plans to form a "Landsturm" national militia in eastern Germany as a last resort to boost fighting strength were first proposed in 1944 by General Heinz Guderian, chief of the General Staff. Because the Army did not have enough men to resist the Soviet onslaught, additional categories of men were called into service, including those in non-essential jobs, those previously deemed unfit, over-age, or under-age, and those recovering from wounds. The "Volkssturm" had existed, on paper, since around 1925, but it was only after Hitler ordered Martin Bormann to recruit six million men for this militia that the group became a physical reality. The intended strength of six million was never attained. Joseph Goebbels and other propagandists depicted the "Volkssturm" as an outburst of enthusiasm and the will to resist. While it had some marginal effect on morale, it was undermined by the recruits' visible lack of uniforms and weaponry. Nazi themes of death, transcendence, and commemoration were given full play to encourage the fight. Many German civilians realized that this was a desperate attempt to turn the course of the war. Sardonic old men would remark, "We old monkeys are the Führer's newest weapon" (in German this rhymes: "Wir alten Affen sind des Führers neue Waffen"). A popular joke about the "Volkssturm" went "Why is the Volkssturm Germany's most precious resource? Because its members have silver in their hair, gold in their mouths, and lead in their bones." For these militia units to be effective, they needed not only strength in numbers, but also fanaticism. During the early stages of "Volkssturm" planning, it became apparent that units lacking morale would lack combat effectiveness. To generate fanaticism, "Volkssturm" units were placed under the direct command of the local Nazi party officials, the "Gauleiter" and "Kreisleiter". The new "Volkssturm" was also to become a nationwide organization, with Heinrich Himmler, as Replacement Army commander, responsible for armament and training. Though nominally under party control, "Volkssturm" units were placed under "Wehrmacht" command when engaged in action; Hitler issued an order to that effect towards the end of 1944. With the Nazi Party in charge of organizing the "Volkssturm", each "Gauleiter", or Nazi Party District Leader, was charged with the leadership, enrollment, and organization of the "Volkssturm" in their district. The largest "Volkssturm" unit seems to have corresponded to the next smaller territorial subdivision of the Nazi Party organization—the "Kreis".
The basic unit was a battalion of 642 men. Units were mostly composed of members of the Hitler Youth, invalids, the elderly, or men who had previously been considered unfit for military service. On 12 February 1945, the Nazis conscripted German women and girls into the auxiliaries of the "Volkssturm". Correspondingly, girls as young as 14 were trained in the use of small arms, the "Panzerfaust", machine guns, and hand grenades from December 1944 through May 1945. At the municipal level, each "Gauleiter" and "Kreisleiter" had a "Volkssturm" Chief of Staff. From the militia's inception until the spring of 1945, Himmler and Bormann engaged in a power struggle over jurisdictional control of the "Volkssturm" regarding security and police powers in Germany and the occupied territories; a contest which Himmler and his SS more or less won on one level (police and security), but lost to Bormann on another (mobilizing reserve forces). Historian David Yelton described the situation as two ranking officers at the helm of a sinking ship fighting over command. The "Volkssturm" "uniform" was only a black armband with the German words "Deutscher Volkssturm Wehrmacht". Some variants of this armband were simply red or yellow cloth bands with "Deutscher Volkssturm Wehrmacht" printed on them in black. In addition to the identification armbands, there were black rank patches, either with or without silver pips, worn on the collar of the Volkssturm members' uniform. These were characteristically derived from the rank insignia of the various paramilitary organizations of the Nazi Party, which had control over them, and not of the regular "Wehrmacht". The German government tried to issue as many of its members as possible with military uniforms of all sorts, ranging from field-gray to camouflage types. Most members of the "Volkssturm", especially elderly members, had no uniform and were not supplied with one, so they generally wore either work attire (including that of railway workers, policemen, and firemen) or their civilian clothing, and usually carried their own personal rucksacks, blankets, cooking equipment, and so on; the paramilitary insignia of the "Volkssturm" were correspondingly simple. Typically, members of the "Volkssturm" received only very basic military training. It included a brief indoctrination and training on the use of basic weapons such as the Karabiner 98k rifle and "Panzerfaust". Because of continuous fighting and weapon shortages, weapon training was often minimal. There was also a lack of instructors, meaning that weapons training was sometimes done by World War I veterans drafted into service themselves. Often "Volkssturm" members were only able to familiarize themselves with their weapons when in actual combat. There was no standardization of any kind and units were issued only what equipment was available. This was true of every form of equipment—"Volkssturm" members were required to bring their own uniforms and cooking equipment. This resulted in the units looking very ragged and, instead of boosting civilian morale, it often reminded people of Germany's desperate state. Armament was equally haphazard: though some Karabiner 98ks were on hand, members were also issued older Gewehr 98s, 19th-century Gewehr 71s, and Steyr-Mannlicher M1888s, as well as Dreyse M1907 pistols. In addition there was a plethora of Soviet, British, Belgian, French, Italian, and other weapons that had been captured by German forces during the war.
The Germans had also developed cheap but reasonably effective "Volkssturm" weapons, such as MP 3008 machine pistols and Volkssturmgewehr rifles. These were completely stamped and machine-pressed constructions (by the standards of the 1940s, when industrial processes were much cruder than today, a firearm normally required a great amount of semi-artisanal work to be reliable). The "Volkssturm" troops were nominally supplied when and where possible by both the "Wehrmacht" and the SS. When units had completed their training and received armament, members took a customary oath to Hitler and were then dispatched into combat. Unlike most English-speaking countries, Germany had had universal military service for all young men for several generations, so many of the older members would have had at least basic military training from their service in the German Army, and many would have been veterans of the First World War. "Volkssturm" units were supposed to be used only in their own districts, but many were sent directly to the front lines. Ultimately, it was their charge to confront the overwhelming power of the British, Canadian, Soviet, American, and French armies alongside "Wehrmacht" forces, and either to turn the tide of the war or to set a shining example for future generations of Germans and expunge the defeat of 1918 by fighting to the last, dying before surrendering. It was an apocalyptic goal which some of those assigned to the "Volkssturm" took to heart. Unremittingly fanatical members of the "Volkssturm" refused to abandon the Nazi ethos unto the dying days of Nazi Germany, and in a number of instances took brutal "police actions" against German civilians deemed defeatists or cowards. On some occasions, members of the "Volkssturm" showed tremendous courage and a determined will to resist, more so even than soldiers in the "Wehrmacht". The "Volkssturm" battalion 25/235, for instance, started out with 400 men but fought on until there were only 10 men remaining. Fighting at Küstrin from 30 January to 29 March 1945, militia units made up mostly of the "Volkssturm" resisted for nearly two months. Losses were upwards of 60 percent for the "Volkssturm" at Kolberg, roughly 1,900 of them died at Breslau, and during the Battle of Königsberg (Kaliningrad), another 2,400 members of the "Volkssturm" were killed. At other times, along the western front particularly, "Volkssturm" troops would cast their arms aside and disappear into the chaos. Youthful ardor and fanaticism among Hitler Youth members fighting with the "Volkssturm", or an insatiable sense of duty among old men, sometimes proved tragic, as in an example shared by historian Stephen Fritz. Not every "Volkssturm" unit was suicidal or apocalyptic in outlook as the war drew closer to its end. Many of them lost their enthusiasm for the fight when it became clear that the Allies had won, prompting them to lay down their weapons and surrender – they also feared being captured by Allied forces and tortured or executed as partisans. Duty to their communities, and sparing them from horrors like those at Bad Windsheim, also played a part in their capitulation, as did self-preservation. Their most extensive use was during the Battle of Berlin, where "Volkssturm" units fought in many parts of the city. This battle was particularly devastating to its formations, however; many members fought to the death out of fear of being captured by the Soviets, holding out to the very end, which was in keeping with their covenant.
Nonetheless, a force of over 2.5 million Soviet troops, equipped with 6,250 tanks and over 40,000 artillery pieces, was assigned to capture the city, and the diminished remnants of the "Wehrmacht" were no match for them. Meanwhile, Hitler denounced every perceived "betrayal" to the inhabitants of the "Führerbunker". Not eager to die what seemed a pointless death, many older members of the "Volkssturm" looked for places to hide from the approaching Soviet Army. Juxtaposed against the tragic image of Berlin holding out against all odds was the frequent exodus and capitulation of "Wehrmacht" soldiers and members of the "Volkssturm" in southern and western Germany. In the Battle for Berlin, "Volkssturm" units were used by the German high command in a last-ditch attempt to defend the city. The "Volkssturm" had a strength of about 60,000 in the Berlin area, formed into 92 battalions, of which about 30 battalions of "Volkssturm I" (those with some weapons) were sent to forward positions, while those of "Volkssturm II" (those without weapons) remained in the inner city. One of the few substantive fighting units left to defend Berlin was the LVI Panzer Corps, which occupied the southeastern sector of the town, whereas the remaining parts of the city were being defended by what remained of the SS, the "Volkssturm", and the Hitler Youth formations. One notable and unusual "Volkssturm" unit in the Battle for Berlin was the 3/115 Siemensstadt Battalion. It comprised 770 men, mainly First World War veterans in their 50s who were reasonably fit factory workers, led by experienced officers. Unlike most "Volkssturm" units it was quite well equipped and trained. It was formed into three rifle companies, a support company (with two infantry support guns, four infantry mortars and heavy machine guns), and a heavy weapons company (with four Soviet M-20 howitzers and a French De Bange 220 mm mortar). The battalion first engaged Soviet troops at Friedrichsfelde on 21 April and saw the heaviest fighting over the following two days. It held out until 2 May, by which time it was down to just 50 rifles and two light machine guns. The survivors fell back to join other "Volkssturm" units. Twenty-six men from the battalion were awarded the Iron Cross. Allied bombing had reduced Berlin to rubble; meanwhile the final stand in Berlin dwindled to fighting against highly trained, battle-hardened Soviet troops on the brink of final victory, who viewed resistance fighters like the "Volkssturm" as terrorists in much the same way the "Wehrmacht" had once viewed potential partisans during Operation Barbarossa. Red Army soldiers called the Hitler Youth formations and members of the "Volkssturm" still fighting to the end in Berlin "totals" for being part of Germany's total mobilization effort. While Iron Crosses were being handed out in places like Berlin, other cities and towns like Parchim and Mecklenburg witnessed old elites, acting as military commandants over the Hitler Youth and "Volkssturm", asserting themselves and demanding that the defensive fighting stop so as to spare lives and property. Despite their efforts, the last four months of the war were an exercise in futility for the "Volkssturm", and the Nazi leadership's insistence on continuing the fight to the bitter end contributed to approximately 1.23 million additional deaths, half of them German military personnel and the other half from the "Volkssturm". The figure historian Stephen Fritz puts forward does not match the observations of Richard J.
Evans, who reported 175,000 "Volkssturm" members killed fighting the professional armies of the western Allies and the Soviet Union. Evans' figures are based on the card-indexed members who were officially reported as killed, but Martin Sorge points out that this figure does not include the 30,000 listed as presumed missing or dead in a 1963 report. Interrogated members of the "Volkssturm"—when questioned as to where the regular forces had gone—revealed that German soldiers surrendered to the Americans and British instead of the Red Army for fear of reprisals related to the atrocities they had committed in the Soviet Union.
https://en.wikipedia.org/wiki?curid=8730
Director's cut A director's cut is an edited version of a film (or television episode, music video, commercial, or video game) that is supposed to represent the director's own approved edit. "Cut" explicitly refers to the process of film editing; in preparing a film for release, the director's cut is preceded by the assembly and rough editor's cut and usually followed by the final cut meant for the public film release. Director's cuts of films are not generally released to the public, because on most films the director does not have final cut privilege. The production company, distributors, and/or studio (anybody with money invested in the film) can impose changes that they think will make the film more profitable at the box office. This sometimes means a happier ending or less ambiguity, or excluding scenes that would earn a more audience-restricting rating, but more often means that the film is simply shortened to provide more screenings per day. With the rise of home video, the phrase became more generically used as a marketing term (applied even to things such as comic books and music albums, neither of which actually has a director), and the most commonly seen form of director's cut is one in which extra scenes and characters are added, often making the director's cut considerably longer than the final cut. Traditionally, the "director's cut" is not, by definition, the director's ideal or preferred cut. The editing process of a film is broken into stages: first is the assembly/rough cut, where all selected takes are put together in the order in which they should appear in the film. Next, the editor's cut is reduced from the rough cut; the editor may be guided by their own tastes or follow notes from the director or producers. Last comes the final cut, which is actually released or broadcast. In between the editor's cut and the final cut can come any number of fine cuts, including the director's cut. The director's cut may include unsatisfactory takes, a preliminary soundtrack, a lack of desired pick-up shots, etc., which the director would not like to be shown but uses as placeholders until satisfactory replacements can be inserted. This is still how the term is used within the film industry, as well as in commercials, television, and music videos. The trend of releasing alternate cuts of films for artistic reasons became prominent in the 1970s; in 1974, the "director's cut" of "The Wild Bunch" was shown theatrically in Los Angeles to sold-out audiences. The theatrical release of the film had cut ten minutes to get an R rating, but the restored cut was hailed as superior and has now become the definitive one. Other early examples include George Lucas's first two films being re-released following the success of "Star Wars", in cuts which more closely resembled his vision, and Peter Bogdanovich re-cutting "The Last Picture Show" several times. Charlie Chaplin also re-released all of his films in the 1970s, several of which were re-cut (Chaplin's re-release of "The Gold Rush" in the 1940s is almost certainly the earliest prominent example of a director's re-cut film being released to the public). A theatrical re-release of "Close Encounters of the Third Kind" used the phrase "Special Edition" to describe a cut which was closer to Spielberg's intent but had a compromised ending demanded by the studio. As the home video industry rose in the early 1980s, video releases of director's cuts were sometimes created for the small but dedicated cult fan market.
Los Angeles cable station Z Channel is also cited as significant in the popularization of alternate cuts. Early examples of films released in this manner include Michael Cimino's "Heaven's Gate", where a longer cut was recalled from theaters but subsequently shown on cable and eventually released to home video; James Cameron's "Aliens", where a video release restored 20 minutes the studio had insisted on cutting; James Cameron's "The Abyss", where Cameron voluntarily made cuts to the theatrical version for pacing but restored them for a video release; Shim Hyung-rae's "Yonggary", where Shim made cuts to the theatrical version because of how close it was to the American "Godzilla" and added aliens and another monster, Cyker (that version was released stateside as "Reptilian"); and, most famously, Ridley Scott's "Blade Runner", where an alternate workprint version was released to fan acclaim, ultimately resulting in the 1992 recut, the first film to use the term "Director's Cut" as a marketing description (and the first time it was used to describe a cut that the director was not involved in preparing). Once distributors discovered that consumers would buy alternate versions of films, it became common for films to receive multiple releases. There is no standardization of labelling, leading to so-called "director's cuts" even of films where the director prefers the theatrically released version, or where the director had actual final cut privilege. These were often assembled by simply restoring deleted scenes, sometimes adding as much as a half-hour to the length of the film without regard to pacing and storytelling. As a result, the "director's cut" is often considered a mixed bag, with an equal share of supporters and detractors. Roger Ebert approved of the use of the label for unsuccessful films that had been tampered with by studio executives, such as Sergio Leone's original cut of "Once Upon a Time in America" and the moderately successful theatrical version of "Daredevil", both of which were altered by studio interference for their theatrical release. Other well-received director's cuts include Ridley Scott's "Kingdom of Heaven" (with "Empire" magazine stating: "The added 45 minutes in the Director's Cut are like pieces missing from a beautiful but incomplete puzzle") and Sam Peckinpah's "Pat Garrett and Billy the Kid", where the restored 115-minute cut is closer to the director's intent than the theatrical 105-minute cut (the actual director's cut was 122 minutes; it was never completed to Peckinpah's satisfaction, but was used as a guide for the restoration that was done after his death). However, Ebert considers adding such material to a successful film a waste. Even Ridley Scott stated on the director's commentary track of "Alien" that the original theatrical release was his director's cut, and that the new version was released as a marketing ploy. Director Peter Bogdanovich, no stranger to director's cuts himself, cites "Red River" as an example: "MGM have a version of Howard Hawks's "Red River" that they're calling the Director's Cut and it is absolutely not the director's cut. It's a cut the director didn't want, an earlier cut that was junked. They assume because it was longer that it's a director's cut. Capra cut two reels off "Lost Horizon" because it didn't work and then someone tried to put it back. There are certainly mistakes and stupidities in reconstructing pictures."
In rare instances, such as Peter Weir's "Picnic at Hanging Rock", John Cassavetes's "The Killing of a Chinese Bookie", and Blake Edwards's "Darling Lili", changes made to a director's cut result in a shorter, more compact cut. This generally happens when a distributor insists that a film be completed in order to meet a release date, but sometimes it is the result of removing scenes that the distributor insisted on inserting, as opposed to restoring scenes they insisted on cutting. Another way that released director's cuts can be compromised is when directors were never allowed to even shoot their vision, and thus when the film is re-cut, they must make do with the footage that exists. Examples of this include Terry Zwigoff's "Bad Santa", Brian Helgeland's "Payback", and most notably the Richard Donner re-cut of "Superman II". Donner completed about 75% of the shooting of the sequel during the shooting of the first film but was fired from the project. The Donner cut of the film includes, among other things, screen test footage of stars Christopher Reeve and Margot Kidder, footage used in the first film, and entire scenes that were shot by replacement director Richard Lester, which Donner dislikes but which were required for story purposes. Some directors explicitly dislike the phrase "director's cut" because it implies that they disapprove of the theatrically released cut. James Cameron and Peter Jackson are two directors who publicly reject the label, preferring "extended edition" or "special edition". While Jackson considers the theatrical releases of the "Lord of the Rings" and "Hobbit" trilogies to be a final "director's cut" within the constraints of theatrical exhibition, the extended cuts were produced so that fans of the material could see nearly all of the scenes shot for the script to develop more of J. R. R. Tolkien's world but that were originally cut for running time or other reasons. New music and special effects were also added to the cuts. Cameron specified "what I put into theaters is the Director's cut. Nothing was cut that I didn't want cut. All the extra scenes we've added back in are just a bonus for the fans." (though referring specifically to "Avatar", he has expressed similar feelings on all of his films besides ""). Special editions such as George Lucas's "Star Wars" films and Steven Spielberg's "E.T. the Extra-Terrestrial", in which special effects are redone in addition to a new edit, have also caused controversy. (See "Changes in "Star Wars" re-releases" and "E.T. the Extra-Terrestrial: The 20th Anniversary".) Extended or special editions can also apply to films that have been extended for television, or cut to fill time slots and accommodate long advertisement breaks, against the explicit wishes of the director, such as the TV versions of "Dune" (1984), "The Warriors" (1979), "Superman" (1978) and the "Harry Potter" films. After "" was released (March 25, 2016), an extended cut dubbed the "Ultimate Edition", which features 31 minutes of additional footage, was released digitally on June 28, 2016, and on Blu-ray on July 19, 2016. The film "Caligula" exists in at least 10 different officially released versions, ranging from a sub-90-minute TV-14 (later TV-MA) television edit for cable television to an unrated, fully pornographic version exceeding 3.5 hours. This is believed to be the largest number of distinct versions of any single film.
Among major studio films, the record is believed to be held by "Blade Runner"; the magazine "Video Watchdog" counted no fewer than seven distinct versions in a 1993 issue, before director Ridley Scott later released a "Final Cut" in 2007, bringing the supposed grand total to eight differing versions. When released on DVD and Blu-ray in 2019, "" featured an extended cut with seven minutes of additional footage, the first Harry Potter film since "Harry Potter and the Chamber of Secrets" to have one. The music video for the 2006 Academy Award-nominated song "Listen", performed by Beyoncé, received a director's cut by Diane Martel. This version of the video was later included on Knowles' "B'Day Anthology Video Album" (2007). Linkin Park's music video for "Faint" (directed by Mark Romanek) has a director's cut in which one of the band members spray-paints the words "En Proceso" on a wall; Hoobastank's 2004 video for "The Reason" likewise has one, which omits the woman getting hit by the car. Britney Spears' music video for 2007's "Gimme More" was first released as a director's cut on iTunes, with the official video released three days later. Many other director's cut music videos contain sexual content that cannot be shown on TV, thus creating alternative scenes, such as Thirty Seconds to Mars's "Hurricane", and in some cases alternative videos, as in the case of Spears' 2008 video for "Womanizer". As the trend became more widely recognized, the term "director's cut" became increasingly used as a colloquialism to refer to an expanded version of other things, including video games, music, and comic books. This confusing usage only served to further reduce the artistic value of a "director's cut", and it is currently rarely used in those ways. For video games, these expanded versions, also referred to as "complete editions", have additions to the gameplay or additional game modes and features outside the main portion of the game. As is the case with certain high-profile Japanese-produced games, the game designers may take the liberty of revising their product for the overseas market with additional features during the localization process. These features are later added back to the native market in a re-release of the game, in what is often referred to as the international version. This was the case with the overseas versions of "Final Fantasy VII", "Metal Gear Solid" and "Rogue Galaxy", which contained additional features (such as new difficulty settings for "Metal Gear Solid"), resulting in re-released versions of those respective games in Japan ("Final Fantasy VII International", "" and "Rogue Galaxy: Director's Cut"). In the case of "" and "", the American versions were released first, followed by the Japanese versions and then the European versions, with each regional release offering new content not found in the previous one. All of the added content from the Japanese and European versions of those games was included in the expanded editions titled "" and "". Like films, these releases also occasionally include extra, uncensored, or alternate versions of cutscenes, as was the case with "". In markets with strict censorship, a later relaxing of those laws occasionally results in the game being re-released with a "Special/Uncut Edition" tag added to differentiate between the originally released censored version and the current uncensored edition.
Several of the "Pokémon" games have also received director's cuts and have used the term "extension", though "remake" and "third version" are also often used by many fans. These include "" (Japan only), "Pokémon Yellow" (for "Pokémon Red" and "Green"/"Blue"), "Pokémon Crystal" (for "Pokémon Gold" and "Silver"), "Pokémon Emerald" (for "Pokémon Ruby" and "Sapphire"), "Pokémon Platinum" (for "Pokémon Diamond" and "Pearl"), "Pokémon Black 2" and "Pokémon White 2", and "Pokémon Ultra Sun and Ultra Moon". Director's cuts in music are rarely released. A few exceptions include Guided by Voices' 1994 album "Bee Thousand", which was re-released as a three-disc vinyl LP director's cut in 2004, and Fall Out Boy's 2003 album "Take This to Your Grave", which was re-released as a director's cut in 2005 with two extra tracks. In 2011, British singer Kate Bush released an album titled "Director's Cut". It is made up of songs from her earlier albums "The Sensual World" and "The Red Shoes", remixed and restructured; three of the songs were re-recorded completely.
https://en.wikipedia.org/wiki?curid=8731
Digital video Digital video is an electronic representation of moving visual images (video) in the form of encoded digital data. This is in contrast to analog video, which represents moving visual images with analog signals. Digital video comprises a series of digital images displayed in rapid succession. Digital video was first introduced commercially in 1986 with the Sony D1 format, which recorded an uncompressed standard definition component video signal in digital form. In addition to uncompressed formats, popular compressed digital video formats today include H.264 and MPEG-4. Modern interconnect standards for digital video include HDMI, DisplayPort, Digital Visual Interface (DVI) and serial digital interface (SDI). Digital video can be copied with no degradation in quality. In contrast, when analog sources are copied, they experience generation loss. Digital video can be stored on digital media such as Blu-ray Disc, on computer data storage, or streamed over the Internet to end users who watch content on a desktop computer screen or a digital smart TV. In everyday practice, digital video content such as TV shows and movies also includes a digital audio soundtrack. The basis for digital video cameras is the metal-oxide-semiconductor (MOS) image sensor. The first practical semiconductor image sensor was the charge-coupled device (CCD), invented in 1969, based on MOS capacitor technology. Following the commercialization of CCD sensors during the late 1970s to early 1980s, the entertainment industry slowly began transitioning to digital imaging and digital video over the next two decades. The CCD was followed by the CMOS active-pixel sensor (CMOS sensor), developed in the 1990s. The earliest forms of digital video coding began in the 1970s, with uncompressed pulse-code modulation (PCM) video requiring high bitrates between 45 and 140 Mbps for standard definition (SD) content. Practical digital video coding was eventually made possible with the discrete cosine transform (DCT), a form of lossy compression. DCT compression was first proposed by Nasir Ahmed in 1972, and then developed by Ahmed with T. Natarajan and K. R. Rao at the University of Texas in 1973. The DCT became the standard for digital video compression from the late 1980s onward. The first digital video coding standard was H.120, created by the CCITT (now ITU-T) in 1984. H.120 was not practical, due to weak performance. H.120 was based on differential pulse-code modulation (DPCM), a lossless compression algorithm that was inefficient for video coding. During the late 1980s, a number of companies began experimenting with DCT, a much more efficient form of compression for video coding. The CCITT received 14 proposals for DCT-based video compression formats, in contrast to a single proposal based on vector quantization (VQ) compression. The H.261 standard was developed based on DCT compression, becoming the first practical video coding standard. Since H.261, DCT compression has been adopted by all the major video coding standards that followed. MPEG-1, developed by the Moving Picture Experts Group (MPEG), followed in 1991, and it was designed to compress VHS-quality video. It was succeeded in 1994 by MPEG-2/H.262, which became the standard video format for DVD and SD digital television. It was followed by MPEG-4/H.263 in 1999, and then in 2003 by H.264/MPEG-4 AVC, which has become the most widely used video coding standard.
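The block-transform idea behind DCT-based codecs can be illustrated in a few lines. The sketch below applies a 2-D DCT to one 8×8 block, coarsely quantizes the coefficients, and reconstructs the block; the smooth test block and the quantizer step size are arbitrary demonstration values, and real codecs use perceptually tuned quantization tables.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    """2-D DCT of an 8x8 block (separable: rows, then columns)."""
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(coeffs):
    """Inverse 2-D DCT."""
    return idct(idct(coeffs, axis=0, norm='ortho'), axis=1, norm='ortho')

# A smooth gradient stands in for a typical low-detail pixel block.
block = 16.0 * np.add.outer(np.arange(8.0), np.arange(8.0))

coeffs = dct2(block)
q = 50.0                              # arbitrary coarse quantizer step
quantized = np.round(coeffs / q) * q  # small coefficients become zero

restored = idct2(quantized)
print("nonzero coefficients kept:", np.count_nonzero(quantized), "of 64")
print(f"mean absolute error: {np.abs(block - restored).mean():.1f}")
```

Zeroing the many small coefficients is what makes the quantized block cheap to entropy-code; the rounding step is where the (small) loss comes from.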
Starting in the late 1970s to the early 1980s, several types of video production equipment that were digital in their internal workings were introduced. These included time base correctors (TBC) and digital video effects (DVE) units. They operated by taking a standard analog composite video input and digitizing it internally. This made it easier to either correct or enhance the video signal, as in the case of a TBC, or to manipulate and add effects to the video, in the case of a DVE unit. The digitized and processed video information was then converted back to standard analog video for output. Also during the 1970s, manufacturers of professional video broadcast equipment, such as Bosch (through their Fernseh division) and Ampex, developed prototype digital videotape recorders (VTR) in their research and development labs. Bosch's machine used a modified 1 inch type B videotape transport and recorded an early form of CCIR 601 digital video. Ampex's prototype digital video recorder used a modified 2 inch Quadruplex videotape VTR (an Ampex AVR-3), but fitted with custom digital video electronics and a special "octaplex" 8-head headwheel (regular analog 2" Quad machines only used 4 heads). Like standard 2" Quad, the audio on the Ampex prototype digital machine, nicknamed by its developers "Annie", was still recorded in analog as linear tracks on the tape. None of these machines from these manufacturers was ever marketed commercially. Digital video was first introduced commercially in 1986 with the Sony D1 format, which recorded an uncompressed standard definition component video signal in digital form. Component video connections required 3 cables, and most television facilities were wired for composite NTSC or PAL video using one cable. Due to this incompatibility, and also due to the cost of the recorder, D1 was used primarily by large television networks and other component-video capable video studios. In 1988, Sony and Ampex co-developed and released the D2 digital videocassette format, which recorded video digitally without compression in ITU-601 format, much like D1. But D2 had the major difference of encoding the video in composite form to the NTSC standard, thereby only requiring single-cable composite video connections to and from a D2 VCR, making it a perfect fit for the majority of television facilities at the time. D2 was a successful format in the television broadcast industry throughout the late '80s and the '90s. D2 was also widely used in that era as the master tape format for mastering laserdiscs. D1 and D2 would eventually be replaced by cheaper systems using video compression, most notably Sony's Digital Betacam, that were introduced into network television studios. Other examples of digital video formats utilizing compression were Ampex's DCT (the first to employ such compression when introduced in 1992), the industry-standard DV and MiniDV and its professional variations, Sony's DVCAM and Panasonic's DVCPRO, and Betacam SX, a lower-cost variant of Digital Betacam using MPEG-2 compression. One of the first digital video products to run on personal computers was "PACo: The PICS Animation Compiler" from The Company of Science & Art in Providence, RI, which was developed starting in 1990 and first shipped in May 1991. PACo could stream unlimited-length video with synchronized sound from a single file (with the ".CAV" file extension) on CD-ROM. Creation required a Mac; playback was possible on Macs, PCs, and Sun SPARCstations.
QuickTime, Apple Computer's multimedia framework, appeared in June 1991. Audio Video Interleave from Microsoft followed in 1992. Initial consumer-level content creation tools were crude, requiring an analog video source to be digitized to a computer-readable format. While low-quality at first, consumer digital video increased rapidly in quality, first with the introduction of playback standards such as MPEG-1 and MPEG-2 (adopted for use in television transmission and DVD media), and then with the introduction of the DV tape format, allowing recordings in the format to be transferred directly to digital video files using a FireWire port on an editing computer. This simplified the process, allowing non-linear editing systems (NLE) to be deployed cheaply and widely on desktop computers with no external playback or recording equipment needed. The widespread adoption of digital video and accompanying compression formats has reduced the bandwidth needed for a high-definition video signal (with HDV and AVCHD, as well as several commercial variants such as DVCPRO-HD, all using less bandwidth than a standard definition analog signal). These savings have increased the number of channels available on cable television and direct broadcast satellite systems, created opportunities for spectrum reallocation of terrestrial television broadcast frequencies, and made possible tapeless camcorders based on flash memory, among other innovations and efficiencies. Digital video comprises a series of digital images displayed in rapid succession. In the context of video these images are called frames. The rate at which frames are displayed is known as the frame rate and is measured in frames per second (FPS). Every frame is an orthogonal bitmap digital image and so comprises a raster of pixels. Pixels have only one property, their color. The color of a pixel is represented by a fixed number of bits; the more bits, the more subtle variations of color can be reproduced. This is called the color depth of the video. In interlaced video each "frame" is composed of two halves of an image. The first half contains only the odd-numbered lines of a full frame. The second half contains only the even-numbered lines. Those halves are referred to individually as "fields". Two consecutive fields compose a full frame. If an interlaced video has a frame rate of 30 frames per second, the field rate is 60 fields per second. All the properties discussed here apply equally to interlaced video, but one should be careful not to confuse the fields-per-second rate with the frames-per-second rate. By definition, bit rate is a measure of the rate of information content of the digital video stream. In the case of uncompressed video, bit rate corresponds directly to the quality of the video, as bit rate is proportional to every property that affects the video quality. Bit rate is an important property when transmitting video because the transmission link must be capable of supporting that bit rate. Bit rate is also important when dealing with the storage of video because, as shown above, the video size is proportional to the bit rate and the duration. Video compression is used to greatly reduce the bit rate while having a lesser effect on quality. Bits per pixel (BPP) is a measure of the efficiency of compression. A true-color video with no compression at all may have a BPP of 24 bits/pixel. Chroma subsampling can reduce the BPP to 16 or 12 bits/pixel. Applying JPEG compression to every frame can reduce the BPP to 8 or even 1 bit/pixel.
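The bit-rate arithmetic described above is direct to compute. A minimal sketch, assuming familiar example parameters (1080p at 30 frames per second):

```python
# Uncompressed bit rate: pixels per frame * bits per pixel * frame rate.

def uncompressed_bitrate(width: int, height: int,
                         bits_per_pixel: int, fps: float) -> float:
    """Bits per second for uncompressed video."""
    return width * height * bits_per_pixel * fps

# 1080p true color (24 BPP) at 30 frames per second:
print(f"{uncompressed_bitrate(1920, 1080, 24, 30) / 1e9:.2f} Gbit/s")  # ~1.49

# The same stream with 4:2:0 chroma subsampling (12 BPP):
print(f"{uncompressed_bitrate(1920, 1080, 12, 30) / 1e9:.2f} Gbit/s")  # ~0.75
```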
Applying video compression algorithms like MPEG-1, MPEG-2 or MPEG-4 allows for fractional BPP values. BPP represents the "average" bits per pixel. There are compression algorithms that keep the BPP almost constant throughout the entire duration of the video. In this case, we also get video output with a constant bitrate (CBR). This CBR video is suitable for real-time, non-buffered, fixed-bandwidth video streaming (e.g. in videoconferencing). Because quality is more severely impacted for scenes of high complexity, not all frames can be compressed at the same level, so some algorithms try to constantly adjust the BPP. They keep it high while compressing complex scenes and low for less demanding scenes. This way, one gets the best quality at the smallest average bit rate (and the smallest file size, accordingly). This method produces a variable bitrate because it tracks the variations of the BPP. Standard film stocks typically record at 24 frames per second. For video, there are two frame rate standards: NTSC, at 30/1.001 (about 29.97) frames per second (about 59.94 fields per second), and PAL, at 25 frames per second (50 fields per second). Digital video cameras come in two different image capture formats: interlaced and progressive scan. Interlaced cameras record the image in alternating sets of lines: the odd-numbered lines are scanned, then the even-numbered lines are scanned, then the odd-numbered lines are scanned again, and so on. One set of odd or even lines is referred to as a "field", and a consecutive pairing of two fields of opposite parity is called a "frame". Progressive scan cameras record all lines in each frame as a single unit. Thus, interlaced video samples the scene motion twice as often as progressive video does, for the same frame rate. Progressive scan generally produces a slightly sharper image. However, motion may not be as smooth as in interlaced video. Digital video can be copied with no generation loss, which degrades quality in analog systems. However, a change in parameters like frame size, or a change of the digital format, can decrease the quality of the video due to image scaling and transcoding losses. Digital video can be manipulated and edited on non-linear editing systems, frequently implemented using commodity computer hardware and software. Digital video has a significantly lower cost than 35 mm film. In comparison to the high cost of film stock, the digital media used for digital video recording, such as flash memory or hard disk drives, is very inexpensive. Digital video also allows footage to be viewed on location without the expensive and time-consuming chemical processing required by film. Network transfer of digital video makes physical deliveries of tapes and film reels unnecessary. Digital television (including higher quality HDTV) was introduced in most developed countries in the early 2000s. Digital video is used in modern mobile phones and video conferencing systems. Digital video is used for Internet distribution of media, including streaming video and peer-to-peer movie distribution. Many types of video compression exist for serving digital video over the internet and on optical disks. The file sizes of digital video used for professional editing are generally not practical for these purposes, and the video requires further compression with codecs. The highest resolution demonstrated for digital video generation is 35 megapixels (8192 × 4320).
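The CBR/VBR distinction can be shown with a toy bit-budget allocation; the per-frame complexity weights below are invented numbers, used only to show that a VBR encoder can shift bits toward complex frames while preserving the same average bit rate as a CBR encoder.

```python
# Toy CBR vs VBR budgeting. A VBR encoder spends more bits on complex
# frames and fewer on simple ones while keeping the same average.

complexity = [0.2, 0.3, 1.0, 0.9, 0.1, 0.5]  # invented per-frame complexity
avg_bits = 100_000                           # target average bits per frame

cbr = [avg_bits] * len(complexity)
scale = avg_bits * len(complexity) / sum(complexity)
vbr = [c * scale for c in complexity]

print(f"CBR average: {sum(cbr) / len(cbr):.0f} bits/frame")
print(f"VBR average: {sum(vbr) / len(vbr):.0f} bits/frame")  # identical
print("VBR per-frame budgets:", [round(b) for b in vbr])
```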
The highest speed is attained in industrial and scientific high-speed cameras that are capable of filming 1024×1024 video at up to 1 million frames per second for brief periods of recording. Live digital video consumes bandwidth. Recorded digital video consumes data storage. The amount of bandwidth or storage required is determined by the frame size, color depth and frame rate. Each pixel consumes a number of bits determined by the color depth. The data required to represent a frame is determined by multiplying the color depth by the number of pixels in the image. The bandwidth is determined by multiplying the storage requirement for a frame by the frame rate. The overall storage requirement for a program can then be determined by multiplying bandwidth by the duration of the program. These calculations are accurate for uncompressed video, but because of the relatively high bit rate of uncompressed video, video compression is extensively used. In the case of compressed video, each frame requires only a small percentage of the original bits. Note that it is not necessary for all frames to be compressed by the same percentage; in practice, they are not, so it is useful to consider the "average" factor of compression for "all" the frames taken together. Both purpose-built digital video interfaces and general-purpose interfaces are used to carry digital video. One interface has been designed specifically for carrying MPEG-transport compressed video. Compressed video is also carried using UDP-IP over Ethernet, for which two approaches exist, alongside other methods of carrying video over IP. All current formats are PCM based.
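Those bandwidth and storage rules multiply out as follows; the sketch assumes illustrative parameters (two hours of 1080p30 true-color video) and an arbitrary 100:1 average compression factor.

```python
# Bandwidth and storage from frame size, color depth, frame rate and
# duration, with an optional average compression factor.

def storage_bytes(width: int, height: int, bits_per_pixel: int,
                  fps: float, seconds: float, compression: float = 1.0) -> float:
    bits_per_frame = width * height * bits_per_pixel
    bitrate = bits_per_frame * fps               # bits per second (bandwidth)
    return bitrate * seconds / compression / 8   # bytes of storage

raw = storage_bytes(1920, 1080, 24, 30, 2 * 3600)
packed = storage_bytes(1920, 1080, 24, 30, 2 * 3600, compression=100)
print(f"uncompressed: {raw / 1e12:.2f} TB")                  # ~1.34 TB
print(f"~100:1 average compression: {packed / 1e9:.1f} GB")  # ~13.4 GB
```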
https://en.wikipedia.org/wiki?curid=8733
BIND BIND (or named, pronounced "name-dee" and short for "name daemon") is an implementation of the Domain Name System (DNS) of the Internet. It performs both of the main DNS server roles, acting as an authoritative name server for domains and as a recursive resolver in the network. As of 2015, it is the most widely used domain name server software, and is the "de facto" standard on Unix-like operating systems. The software was originally designed at the University of California, Berkeley (UCB) in the early 1980s. The name originates as an acronym of "Berkeley Internet Name Domain", reflecting the application's use within UCB. The software consists, most prominently, of the DNS server component, called "named", a contracted form of "name daemon". In addition, the suite contains various administration tools and a DNS resolver interface library. The latest version of BIND is BIND 9, first released in 2000. BIND 9 is actively maintained, with new releases issued several times a year. Starting in 2009, the Internet Systems Consortium (ISC) developed a new software suite, initially called BIND10. With release version 1.2.0 the project was renamed "Bundy", ending ISC involvement in the project. BIND 9 is intended to be fully compliant with the IETF DNS standards and draft standards. Important features of BIND 9 include: TSIG, nsupdate, IPv6, RNDC (remote name daemon control), views, multiprocessor support, Response Rate Limiting (RRL), DNSSEC, and broad portability. RNDC enables remote configuration updates, using a shared secret to provide encryption for local and remote terminals during each session. While earlier versions of BIND offered no mechanism to store and retrieve zone data in anything other than flat text files, in 2007 BIND 9.4's DLZ feature provided a compile-time option for zone storage in a variety of database formats, including LDAP, Berkeley DB, PostgreSQL, MySQL, and ODBC. BIND 10 planned to make the data store modular, so that a variety of databases could be connected. In 2016, ISC added support for the 'dyndb' interface, contributed by Red Hat, with BIND version 9.11.0. Security issues that are discovered in BIND 9 are patched and publicly disclosed in keeping with common principles of open source software. A complete list of security defects that have been discovered and disclosed in BIND 9 is maintained by the Internet Systems Consortium, the current authors of the software. The BIND 4 and BIND 8 releases both had serious security vulnerabilities; use of these ancient versions, or of any unmaintained, unsupported version, is strongly discouraged. BIND 9 was a complete rewrite, in part to mitigate these ongoing security issues. The downloads page on the ISC web site clearly shows which versions are currently maintained and which are end of life. Originally written by four graduate students at the Computer Systems Research Group at the University of California, Berkeley (UCB), BIND was first released with Berkeley Software Distribution 4.3BSD. Paul Vixie started maintaining it in 1988 while working for Digital Equipment Corporation. The Internet Systems Consortium now maintains, updates, and writes new versions of BIND. BIND was written by Douglas Terry, Mark Painter, David Riggle and Songnian Zhou in the early 1980s at the University of California, Berkeley as a result of a DARPA grant. The acronym "BIND" stands for "Berkeley Internet Name Domain", from a technical paper published in 1984.
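As a small usage illustration, BIND's recursive-resolver role can be exercised from any DNS client. The sketch below uses the third-party dnspython library (2.x API); the 127.0.0.1 server address stands in for a locally running "named" instance and is an assumption, as is the queried domain.

```python
# Querying a (hypothetical) local BIND 9 resolver with dnspython 2.x.

import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["127.0.0.1"]  # assumed address of a local "named"

# Ask the recursive resolver for the A records of an example domain.
for record in resolver.resolve("example.com", "A"):
    print(record.address)
```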
Versions of BIND through 4.8.3 were maintained by the Computer Systems Research Group (CSRG) at UC Berkeley. In 1988, Paul Vixie of Digital Equipment Corporation (DEC) took over BIND development, releasing versions 4.9 and 4.9.1. Vixie continued to work on BIND after leaving DEC; BIND version 4.9.2 was sponsored by Vixie Enterprises. Vixie eventually founded the ISC, which became the entity responsible for BIND versions starting with 4.9.3. BIND 8 was released by ISC in May 1997. Version 9 was developed by Nominum, Inc. under an ISC outsourcing contract, and the first version was released on 9 October 2000. It was written from scratch, in part to address the architectural difficulties of auditing the earlier BIND code bases, and also to support DNSSEC (DNS Security Extensions). The development of BIND 9 took place under a combination of commercial and military contracts. Most of the features of BIND 9 were funded by UNIX vendors who wanted to ensure that BIND stayed competitive with Microsoft's DNS offerings; the DNSSEC features were funded by the US military, which regarded DNS security as important. In 2009, ISC started an effort to develop a new version of the software suite, called BIND10. In addition to DNS service, the BIND10 suite also included IPv4 and IPv6 DHCP server components. In April 2014, with the BIND10 release 1.2.0, the ISC concluded its development work on the project and renamed it "Bundy", moving the source code repository to GitHub for further development by outside public efforts. ISC discontinued its involvement in the project due to cost-cutting measures. The development of the DHCP components was split off to become the new Kea project.
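As a client-side illustration of the resolver role described above, the following sketch queries a running BIND instance using the third-party dnspython library; the library, the nameserver address, and the queried name are assumptions made for the example and are not part of BIND itself.

    # Querying a DNS server (for example, a local BIND "named" acting as
    # a recursive resolver) with the third-party dnspython library.
    # The nameserver address and domain name are illustrative.
    import dns.resolver

    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["127.0.0.1"]    # a BIND instance listening locally

    answer = resolver.resolve("example.org", "A")  # request IPv4 address records
    for record in answer:
        print(record.address)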
https://en.wikipedia.org/wiki?curid=8735
Djbdns The djbdns software package is a DNS implementation. It was created by Daniel J. Bernstein in response to his frustrations with repeated security holes in the widely used BIND DNS software. As a challenge, Bernstein offered a $1000 prize for the first person to find a security hole in djbdns; the prize was awarded in March 2009 to Matthew Dempsky. For a time, djbdns's tinydns component was the second most popular DNS server in terms of the number of domains for which it was the authoritative server, and the third most popular in terms of the number of DNS hosts running it. djbdns has never been vulnerable to the widespread cache-poisoning vulnerability reported in July 2008, but it was later found to be vulnerable to a related attack. The source code has not been centrally managed since its release in 2001, and it was released into the public domain in 2007. As of March 2009, there are a number of forks, one of which is dbndns (part of the Debian Project), and more than a dozen patches that modify the released version. While djbdns does not directly support DNSSEC, there are third-party patches that add DNSSEC support to djbdns' authoritative-only tinydns component. The djbdns software consists of servers, clients, and miscellaneous configuration tools. In djbdns, different features and services are split into separate programs. For example, zone transfers, zone-file parsing, caching, and recursive resolving are implemented as separate programs. The result of these design decisions is a reduction in the code size and complexity of the daemon program that provides the core function of answering lookup requests. Bernstein asserts that this is true to the spirit of the Unix operating system, and makes security verification much simpler. On December 28, 2007, Bernstein released djbdns into the public domain. Previously, the package was distributed free of charge as license-free software. However, this did not permit the distribution of modified versions of djbdns, which conflicted with one of the core principles of open-source software. Consequently, it was not included in Linux distributions that required all components to be open source.
https://en.wikipedia.org/wiki?curid=8736
Dylan (programming language) Dylan is a multi-paradigm programming language that includes support for functional and object-oriented programming (OOP), and is dynamic and reflective while providing a programming model designed to support generating efficient machine code, including fine-grained control over dynamic and static behaviors. It was created in the early 1990s by a group led by Apple Computer. A concise and thorough overview of the language may be found in the Dylan Reference Manual. Dylan derives from Scheme and Common Lisp and adds an integrated object system derived from the Common Lisp Object System (CLOS). In Dylan, all values (including numbers, characters, functions, and classes) are first-class objects. Dylan supports multiple inheritance, polymorphism, multiple dispatch, keyword arguments, object introspection, pattern-based syntax extension macros, and many other advanced features. Programs can express fine-grained control over dynamism, admitting programs that occupy a continuum between dynamic and static programming and supporting evolutionary development (allowing for rapid prototyping followed by incremental refinement and optimization). Dylan's main design goal is to be a dynamic language well-suited for developing commercial software. Dylan attempts to address potential performance issues by introducing "natural" limits to the full flexibility of Lisp systems, allowing the compiler to clearly understand compilable units, such as libraries. Dylan derives much of its semantics from Scheme and other Lisps; some Dylan implementations were initially built within extant Lisp systems. However, Dylan has an ALGOL-like syntax instead of a Lisp-like prefix syntax. Dylan was created in the early 1990s by a group led by Apple Computer. At one time in its development, it was intended for use with the Apple Newton computer, but the Dylan implementation did not reach sufficient maturity in time, and Newton instead used a mix of C and the NewtonScript developed by Walter Smith. Apple ended their Dylan development effort in 1995, though they made a "technology release" version available (Apple Dylan TR1) that included an advanced integrated development environment (IDE). Two other groups contributed to the design of the language and developed implementations: Harlequin released a commercial IDE for Microsoft Windows and Carnegie Mellon University released an open source compiler for Unix systems called Gwydion Dylan. Both of these implementations are now open source. The Harlequin implementation is now named Open Dylan and is maintained by a group of volunteers, the Dylan Hackers. The Dylan language was code-named Ralph. James Joaquin chose the name Dylan for "DYnamic LANguage." Many of Dylan's syntax features come from its Lisp heritage. Originally, Dylan used a Lisp-like prefix syntax, which was based on s-expressions. By the time the language design was completed, the syntax was changed to an ALGOL-like syntax, with the expectation that it would be more familiar to a wider audience of programmers. The syntax was designed by Michael Kahl. It is described in great detail in the Dylan Reference Manual. Dylan is not case sensitive. Dylan's lexical syntax allows the use of a naming convention where hyphen-minus signs are used to connect the parts of multiple-word identifiers (sometimes called "lisp-case" or "kebab case"). 
This convention is common in Lisp languages, but it cannot be used in programming languages that treat any hyphen-minus that is not part of a numeric literal as a single lexical token denoting subtraction, even when it is not surrounded by whitespace characters. Besides alphanumeric characters and hyphen-minus signs, Dylan allows certain non-alphanumeric characters as part of identifiers. Identifiers may not consist of these non-alphanumeric characters alone, or of numeric characters alone. If there is any ambiguity, whitespace is used. A simple class with several slots:

    define class <point> (<object>)
      slot point-x :: <integer>;
      slot point-y :: <integer>;
    end class <point>;

By convention, classes are named with less-than and greater-than signs used as angle brackets, e.g. the class named <point> in the code example. In end class <point>, both class and <point> are optional. This is true for all end clauses: for example, you may write end if or just end to terminate an if statement. The same class, rewritten in the most minimal way possible:

    define class <point> (<object>)
      slot point-x;
      slot point-y;
    end;

The slots are now both typed as <object>. The slots must be initialized manually. By convention, constant names begin with "$":

    define constant $pi :: <double-float> = 3.1415927d0;

A factorial function:

    define function factorial (n :: <integer>) => (n! :: <integer>)
      if (n <= 1)
        1
      else
        n * factorial(n - 1)
      end
    end;

Here, n! and <integer> are just normal identifiers. There is no explicit return statement; the result of a method or function is the last expression evaluated. It is a common style to leave off the semicolon after an expression in return position. In many object-oriented languages, classes are the main means of encapsulation and modularity; each class defines a namespace and controls which definitions are externally visible. Further, classes in many languages define an indivisible unit that must be used as a whole. For example, using a String concatenation function requires importing and compiling against all of String. Some languages, including Dylan, also include a separate, explicit namespace or module system that performs encapsulation in a more general way. In Dylan, the concepts of compile-unit and import-unit are separated, and classes have nothing specifically to do with either. A "library" defines items that should be compiled and handled together, while a "module" defines a namespace. Classes can be placed together in modules, or cut across them, as the programmer wishes. Often the complete definition for a class does not exist in a single module, but is spread across several that are optionally collected together. Different programs can have different definitions of the same class, including only what they need. For example, consider an add-on library for regex support on String. In some languages, for the functionality to be included in strings, the functionality must be added to the String namespace. As soon as this occurs, the String class becomes larger, and functions that don't need to use regex still must "pay" for it in increased library size. For this reason, these sorts of add-ons are typically placed in their own namespaces and objects. The downside to this approach is that the new functions are no longer a "part of" String; instead, they are isolated in their own set of functions that must be called separately. Instead of a call on the string itself, which would be the natural organization from an OO viewpoint, something like a call on the regex object taking the string as an argument is used, which effectively reverses the ordering.
Under Dylan, many interfaces can be defined for the same code; for instance, the String concatenation method could be placed in both the String interface and a "concat" interface which collects together all of the different concatenation functions from various classes. This is more commonly used in math libraries, where functions tend to be applicable to widely differing object types. A more practical use of the interface construct is to build public and private versions of a module, something that other languages include as a bolt-on feature that invariably causes problems and adds syntax. Under Dylan, every function can simply be placed in the "Private" or "Development" interface, with publicly accessible functions collected in a "Public" interface. Under Java or C++ the visibility of an object is defined in the code, meaning that to support a similar change, a programmer would be forced to rewrite the definitions fully, and could not have two versions at the same time. Classes in Dylan describe slots (data members, fields, ivars, etc.) of objects in a fashion similar to most OO languages. All access to slots is via methods, as in Smalltalk. Default getter and setter methods are automatically generated based on the slot names. In contrast with most other OO languages, other methods applicable to the class are often defined outside of the class, and thus class definitions in Dylan typically include the definition of the storage only. For instance:

    define class <window> (<view>)
      slot title :: <string> = "untitled", init-keyword: title:;
      slot position :: <point>, required-init-keyword: position:;
    end class <window>;

In this example, the class <window> is defined. The syntax is convention only, to make the class names stand out—the angle brackets are merely part of the class name. In contrast, in some languages the convention is to capitalize the first letter of the class name or to prefix the name with a "C" or "T" (for example). <window> inherits from a single class, <view>, and contains two slots: title, holding a string for the window title, and position, holding an X-Y point for a corner of the window. In this example, the title has been given a default value, while the position has not. The optional init-keyword syntax allows the programmer to specify the initial value of the slot when instantiating an object of the class. In languages such as C++ or Java, the class would also define its interface. In this case the definition above has no explicit instructions, so in both languages access to the slots and methods is considered protected, meaning they can be used only by subclasses. To allow unrelated code to use the window instances, they must be declared public. In Dylan, these sorts of visibility rules are not considered part of the code, but of the module/interface system. This adds considerable flexibility. For instance, one interface used during early development could declare everything public, whereas one used in testing and deployment could limit this. With C++ or Java these changes would require changes to the source code, so programmers tend not to make them, whereas in Dylan this is a fully unrelated concept. Although this example does not use it, Dylan also supports multiple inheritance. In Dylan, methods are not intrinsically associated with any specific class; methods can be thought of as existing outside of classes. Like CLOS, Dylan is based on multiple dispatch (multimethods), where the specific method to be called is chosen based on the types of all its arguments.
The method need not be known at compile time, the understanding being that the required function may or may not be available, based on a user's preferences. Under Java the same methods would be isolated in a specific class. To use that functionality the programmer is forced to import that class and refer to it explicitly to call the method. If that class is unavailable, or unknown at compile time, the application simply won't compile. In Dylan, code is isolated from storage in "functions". Many classes have methods that call their own functions, thereby looking and feeling like most other OO languages. However, code may also be located in "generic functions", which are not attached to a specific class and can be called natively by anyone. Adding a method for a specific class to a generic function is accomplished thus:

    define method turn-blue (w :: <window>)
      w.color := $blue;
    end method;

This definition is similar to those in other languages, and would likely be encapsulated within the <window> class. Note the := setter call, which is syntactic sugar for color-setter($blue, w). The utility of generic methods comes into its own when you consider more "generic" examples. For instance, one common function in most languages is to-string, which returns some human-readable form for the object. For instance, a window might return its title and its position in parentheses, while a string would return itself. In Dylan these methods could all be collected into a single module called "to-string", thereby removing this code from the definition of the class itself. If a specific object did not support a to-string, it could easily be added in the to-string module. This whole concept might strike some readers as very odd. The code to handle to-string for a window isn't defined in <window>? This might not make any sense until you consider how Dylan handles the call of to-string. In most languages, when the program is compiled, the to-string for <window> is looked up and replaced with a pointer (more or less) to the method. In Dylan this occurs when the program is first run; the runtime builds a table of method-name/parameter details and looks up methods dynamically via this table. That means that a function for a specific method can be located anywhere, not just in the compile-time unit. In the end the programmer is given considerable flexibility in terms of where to place their code, collecting it along class lines where appropriate and along functional lines where it's not. The implication is that a programmer can add functionality to existing classes by defining functions in a separate file. For instance, you might wish to add spell checking to all <string>s, which in most languages would require access to the source code of the string class—and such basic classes are rarely given out in source form. In Dylan (and other "extensible languages") the spell-checking method could be added in a spell-check module, defining all of the classes on which it can be applied via the define method construct. In this case the actual functionality might be defined in a single generic function, which takes a string and returns the errors. When the spell-check module is compiled into your program, all strings (and other objects) will get the added functionality. Apple Dylan is the implementation of Dylan produced by Apple Computer. It was originally developed for the Apple Newton product.
https://en.wikipedia.org/wiki?curid=8741
Dublin Core The Dublin Core schema is a small set of vocabulary terms that can be used to describe digital resources (video, images, web pages, etc.), as well as physical resources such as books or CDs, and objects like artworks. The full set of Dublin Core metadata terms can be found on the Dublin Core Metadata Initiative (DCMI) website. The original set of 15 classic metadata terms, known as the Dublin Core Metadata Element Set (DCMES), is endorsed in the following standards documents: Dublin Core metadata may be used for multiple purposes, from simple resource description to combining metadata vocabularies of different metadata standards, to providing interoperability for metadata vocabularies in the linked data cloud and Semantic Web implementations. "Dublin" refers to Dublin, Ohio, USA, where the schema originated during the 1995 invitational OCLC/NCSA Metadata Workshop, hosted by the OCLC (known at that time as Online Computer Library Center), a library consortium based in Dublin, and the National Center for Supercomputing Applications (NCSA). "Core" refers to the metadata terms as "broad and generic, being usable for describing a wide range of resources". The semantics of Dublin Core were established and are maintained by an international, cross-disciplinary group of professionals from librarianship, computer science, text encoding, museums, and other related fields of scholarship and practice. Starting in 2000, the Dublin Core community focused on "application profiles" – the idea that metadata records would use Dublin Core together with other specialized vocabularies to meet particular implementation requirements. During that time, the World Wide Web Consortium's work on a generic data model for metadata, the Resource Description Framework (RDF), was maturing. As part of an extended set of DCMI metadata terms, Dublin Core became one of the most popular vocabularies for use with RDF, more recently in the context of the linked data movement. The Dublin Core Metadata Initiative (DCMI) provides an open forum for the development of interoperable online metadata standards for a broad range of purposes and of business models. DCMI's activities include consensus-driven working groups, global conferences and workshops, standards liaison, and educational efforts to promote widespread acceptance of metadata standards and practices. In 2008, DCMI separated from OCLC and incorporated as an independent entity. Currently, any and all changes that are made to the Dublin Core standard are reviewed by a DCMI Usage Board within the context of a DCMI Namespace Policy (DCMI-NAMESPACE). This policy describes how terms are assigned and also sets limits on the amount of editorial changes allowed to the labels, definitions, and usage comments. The Dublin Core standard originally included two levels: Simple and Qualified. "Simple Dublin Core" comprised 15 elements; "Qualified Dublin Core" included three additional elements (Audience, Provenance and RightsHolder), as well as a group of element refinements (also called qualifiers) that could refine the semantics of the elements in ways that may be useful in resource discovery. Since 2012, the two have been incorporated into the "DCMI Metadata Terms" as a single set of terms using the RDF data model. The full set of elements is found under the namespace http://purl.org/dc/terms/.
Because the definition of the terms often contains domains and ranges, which may not be compatible with the pre-RDF definitions used for the original 15 Dublin Core elements, there is a separate namespace for the original 15 elements as previously defined: http://purl.org/dc/elements/1.1/. The original DCMES Version 1.1 consists of 15 metadata elements, defined in the original specification (superseded in 2008 by the DCMI Metadata Terms): Title, Creator, Subject, Description, Publisher, Contributor, Date, Type, Format, Identifier, Source, Language, Relation, Coverage, and Rights. Each Dublin Core element is optional and may be repeated. The DCMI has established standard ways to refine elements and encourages the use of encoding and vocabulary schemes. There is no prescribed order in Dublin Core for presenting or using the elements. The Dublin Core became NISO standard Z39.85 and IETF RFC 5013 in 2007, ISO standard 15836 in 2009, and is used as a base-level data element set for the description of learning resources in ISO/IEC 19788-2 Metadata for learning resources (MLR) – Part 2: Dublin Core elements, prepared by ISO/IEC JTC1 SC36. Full information on element definitions and term relationships can be found in the Dublin Core Metadata Registry. On the "archive form" web page for WebCite it says, in part: "Metadata (optional): These are Dublin Core elements. [...]". Subsequent to the specification of the original 15 elements, an ongoing process to develop exemplary terms extending or refining the DCMES began. The additional terms were identified, generally in working groups of the DCMI, and judged by the DCMI Usage Board to be in conformance with principles of good practice for the qualification of Dublin Core metadata elements. Element refinements make the meaning of an element narrower or more specific. A refined element shares the meaning of the unqualified element, but with a more restricted scope. The guiding principle for the qualification of Dublin Core elements, colloquially known as the "Dumb-Down Principle", states that an application that does not understand a specific element refinement term should be able to ignore the qualifier and treat the metadata value as if it were an unqualified (broader) element. While this may result in some loss of specificity, the remaining element value (without the qualifier) should continue to be generally correct and useful for discovery. In addition to element refinements, Qualified Dublin Core includes a set of recommended encoding schemes, designed to aid in the interpretation of an element value. These schemes include controlled vocabularies and formal notations or parsing rules. A value expressed using an encoding scheme may thus be a token selected from a controlled vocabulary (for example, a term from a classification system or set of subject headings) or a string formatted in accordance with a formal notation, for example, "2000-12-31" as the ISO standard expression of a date. If an encoding scheme is not understood by an application, the value may still be useful to a human reader. Audience, Provenance and RightsHolder are elements, but not part of the Simple Dublin Core 15 elements; they should be used only when using Qualified Dublin Core. DCMI also maintains a small, general vocabulary recommended for use within the element Type. This vocabulary currently consists of 12 terms. The DCMI Metadata Terms lists the current set of the Dublin Core vocabulary. This set includes the fifteen terms of the DCMES, as well as the qualified terms.
Each term has a unique URI in the namespace http://purl.org/dc/terms, and all are defined as RDF properties. Syntax choices for metadata expressed with the Dublin Core elements depend on context. Dublin Core concepts and semantics are designed to be syntax independent and apply to a variety of contexts, as long as the metadata is in a form suitable for interpretation by both machines and people. The Dublin Core Abstract Model provides a reference model against which particular Dublin Core encoding guidelines can be compared, independent of any particular encoding syntax. Such a reference model helps implementers gain a better understanding of the kinds of descriptions they are trying to encode and facilitates the development of better mappings and translations between different syntaxes. One Document Type Definition based on Dublin Core is the Open Source Metadata Framework (OMF) specification. OMF is in turn used by Rarian (superseding ScrollKeeper), which is used by the GNOME desktop and KDE help browsers and the ScrollServer documentation server. PBCore is also based on Dublin Core. The Zope CMF's Metadata products, used by the Plone, ERP5, and Nuxeo CPS content management systems, SimpleDL, and Fedora Commons, also implement Dublin Core. The EPUB e-book format uses Dublin Core metadata in the OPF file. The Australian Government Locator Service (AGLS) metadata standard is an application profile of Dublin Core.
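As an illustration of how the classic elements are used in practice, the following sketch serializes a minimal Simple Dublin Core record as XML with Python's standard library; the enclosing record element and all field values are arbitrary choices for the example, while the namespace is the one for the original 15 elements quoted above.

    # A minimal Simple Dublin Core record serialized as XML. The element
    # names are from the classic 15-element set; the <record> wrapper and
    # the values are illustrative assumptions.
    import xml.etree.ElementTree as ET

    DC = "http://purl.org/dc/elements/1.1/"  # namespace of the original 15 elements
    ET.register_namespace("dc", DC)

    record = ET.Element("record")
    for name, value in [
        ("title", "An Example Resource"),
        ("creator", "Doe, Jane"),
        ("date", "2000-12-31"),   # a date in the ISO notation mentioned above
        ("type", "Text"),         # a term from the DCMI Type vocabulary
    ]:
        ET.SubElement(record, f"{{{DC}}}{name}").text = value

    print(ET.tostring(record, encoding="unicode"))

Because each element is optional and repeatable, any subset of the 15 elements, repeated as needed, forms a valid record.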
https://en.wikipedia.org/wiki?curid=8742
Document Object Model The Document Object Model (DOM) is a cross-platform and language-independent interface that treats an XML or HTML document as a tree structure wherein each node is an object representing a part of the document. The DOM represents a document with a logical tree. Each branch of the tree ends in a node, and each node contains objects. DOM methods allow programmatic access to the tree; with them one can change the structure, style or content of a document. Nodes can have event handlers attached to them; once an event is triggered, the event handlers are executed. The principal standardization of the DOM was handled by the World Wide Web Consortium (W3C), which last developed a recommendation in 2004. The WHATWG took over development of the standard, publishing it as a living document. The W3C now publishes stable snapshots of the WHATWG standard. The history of the Document Object Model is intertwined with the history of the "browser wars" of the late 1990s between Netscape Navigator and Microsoft Internet Explorer, as well as with that of JavaScript and JScript, the first scripting languages to be widely implemented in the JavaScript engines of web browsers. JavaScript was released by Netscape Communications in 1995 within Netscape Navigator 2.0. Netscape's competitor, Microsoft, released Internet Explorer 3.0 the following year with a reimplementation of JavaScript called JScript. JavaScript and JScript let web developers create web pages with client-side interactivity. The limited facilities for detecting user-generated events and modifying the HTML document in the first generation of these languages eventually became known as "DOM Level 0" or "Legacy DOM". No independent standard was developed for DOM Level 0, but it was partly described in the specifications for HTML 4. Legacy DOM was limited in the kinds of elements that could be accessed. Form, link and image elements could be referenced with a hierarchical name that began with the root document object. A hierarchical name could make use of either the names or the sequential index of the traversed elements. For example, a form input element could be accessed either by name, as document.formName.inputName, or by index, as document.forms[0].elements[0]. The Legacy DOM enabled client-side form validation and the popular "rollover" effect. In 1997, Netscape and Microsoft released version 4.0 of Netscape Navigator and Internet Explorer respectively, adding support for Dynamic HTML (DHTML) functionality enabling changes to a loaded HTML document. DHTML required extensions to the rudimentary document object that was available in the Legacy DOM implementations. Although the Legacy DOM implementations were largely compatible, since JScript was based on JavaScript, the DHTML DOM extensions were developed in parallel by each browser maker and remained incompatible. These versions of the DOM became known as the "Intermediate DOM". After the standardization of ECMAScript, the W3C DOM Working Group began drafting a standard DOM specification. The completed specification, known as "DOM Level 1", became a W3C Recommendation in late 1998. By 2005, large parts of W3C DOM were well-supported by common ECMAScript-enabled browsers, including Microsoft Internet Explorer version 6 (from 2001), Opera, Safari and Gecko-based browsers (like Mozilla, Firefox, SeaMonkey and Camino). The W3C DOM Working Group published its final recommendation and subsequently disbanded in 2004. Development efforts migrated to the WHATWG, which continues to maintain a living standard.
In 2009, the Web Applications group reorganized DOM activities at the W3C. In 2013, due to a lack of progress and the impending release of HTML5, the DOM Level 4 specification was reassigned to the HTML Working Group to expedite its completion. Meanwhile, in 2015, the Web Applications group was disbanded and DOM stewardship passed to the Web Platform group. Beginning with the publication of DOM Level 4 in 2015, the W3C creates new recommendations based on snapshots of the WHATWG standard. To render a document such as an HTML page, most web browsers use an internal model similar to the DOM. The nodes of every document are organized in a tree structure, called the "DOM tree", with the topmost node named the "Document object". When an HTML page is rendered in a browser, the browser downloads the HTML into local memory and automatically parses it to display the page on screen. When a web page is loaded, the browser creates a Document Object Model of the page, which is an object-oriented representation of the HTML document that acts as an interface between JavaScript and the document itself. This allows the creation of dynamic web pages, because within a page JavaScript can change the structure, style, and content of the document in response to events. Because the DOM supports navigation in any direction (e.g., parent and previous sibling) and allows for arbitrary modifications, an implementation must at least buffer the document that has been read so far (or some parsed form of it). Web browsers rely on layout engines to parse HTML into a DOM. Some layout engines, such as Trident/MSHTML, are associated primarily or exclusively with a particular browser, such as Internet Explorer. Others, including Blink, WebKit, and Gecko, are shared by a number of browsers, such as Google Chrome, Opera, Safari, and Firefox. The different layout engines implement the DOM standards to varying degrees of compliance. A number of DOM implementations, APIs that expose DOM implementations, and DOM inspection tools exist.
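The same tree model is exposed outside of browsers as well; the following sketch uses xml.dom.minidom from Python's standard library, one DOM implementation, to navigate a parsed document in several directions and then modify it. The document string is an illustrative example.

    # DOM-style navigation and modification with Python's xml.dom.minidom.
    from xml.dom import minidom

    doc = minidom.parseString(
        "<html><body><p id='greet'>Hello</p><p>World</p></body></html>"
    )

    first = doc.getElementsByTagName("p")[0]
    print(first.parentNode.tagName)            # upward navigation: "body"
    print(first.nextSibling.firstChild.data)   # sibling navigation: "World"

    # Arbitrary modification: change existing text and append a new node.
    first.firstChild.data = "Hi"
    new_p = doc.createElement("p")
    new_p.appendChild(doc.createTextNode("Again"))
    first.parentNode.appendChild(new_p)

    print(doc.toxml())

Note that the whole document must be parsed and held in memory before such navigation is possible, which is exactly the buffering requirement described above.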
https://en.wikipedia.org/wiki?curid=8743
Design pattern A design pattern is the re-usable form of a solution to a design problem. The idea was introduced by the architect Christopher Alexander and has been adapted for various other disciplines, notably software engineering. An organized collection of design patterns that relate to a particular field is called a pattern language. This language gives a common terminology for discussing the situations designers are faced with. Documenting a pattern requires explaining why a particular situation causes problems, and how the components of the pattern relate to each other to give the solution. Christopher Alexander describes common design problems as arising from "conflicting forces" — such as the conflict between wanting a room to be sunny and wanting it not to overheat on summer afternoons. A pattern would not tell the designer how many windows to put in the room; instead, it would propose a set of values to guide the designer toward a decision that is best for their particular application. Alexander, for example, suggests that enough windows should be included to direct light all around the room. He considers this a good solution because he believes it increases the enjoyment of the room by its occupants. Other authors might come to different conclusions, if they place higher value on heating costs, or material costs. These values, used by the pattern's author to determine which solution is "best", must also be documented within the pattern. Pattern documentation should also explain when it is applicable. Since two houses may be very different from one another, a design pattern for houses must be broad enough to apply to both of them, but not so vague that it doesn't help the designer make decisions. The range of situations in which a pattern can be used is called its context. Some examples might be "all houses", "all two-story houses", or "all places where people spend time". For instance, in Christopher Alexander's work, bus stops and waiting rooms in a surgery center are both within the context for the pattern "A PLACE TO WAIT". Business models also have design patterns.
https://en.wikipedia.org/wiki?curid=8745
N,N-Dimethyltryptamine "N","N"-Dimethyltryptamine (DMT or "N","N"-DMT) is a chemical substance that occurs in many plants and animals and which is both a derivative and a structural analog of tryptamine. It can be consumed as a psychedelic drug and has historically been prepared by various cultures for ritual purposes as an entheogen. DMT is illegal in most countries. DMT has a rapid onset, intense effects, and a relatively short duration of action. For those reasons, DMT was known as the "business trip" during the 1960s in the United States, as a user could access the full depth of a psychedelic experience in considerably less time than with other substances such as LSD or magic mushrooms. DMT can be inhaled, ingested, or injected, and its effects depend on the dose. When inhaled or injected, the effects last a short period of time: about five to 15 minutes. Effects can last three hours or more when orally ingested along with an MAOI, such as the ayahuasca brew of many native Amazonian tribes. DMT can produce vivid "projections" of mystical experiences involving euphoria and dynamic hallucinations of geometric forms. DMT is a functional analog and structural analog of other psychedelic tryptamines such as "O"-Acetylpsilocin (4-AcO-DMT), 5-MeO-DMT, bufotenin (5-HO-DMT), psilocybin (4-PO-DMT), and psilocin (4-HO-DMT). The structure of DMT occurs within some important biomolecules like serotonin and melatonin, making them structural analogs of DMT. DMT is produced in many species of plants, often in conjunction with its close chemical relatives 5-methoxy-"N","N"-dimethyltryptamine (5-MeO-DMT) and bufotenin (5-HO-DMT). DMT-containing plants are commonly used in indigenous Amazonian shamanic practices. It is usually one of the main active constituents of the drink ayahuasca; however, ayahuasca is sometimes brewed with plants that do not produce DMT. It occurs as the primary psychoactive alkaloid in several plants including "Mimosa tenuiflora", "Diplopterys cabrerana", and "Psychotria viridis". DMT is found as a minor alkaloid in snuff made from Virola bark resin, in which 5-MeO-DMT is the main active alkaloid. DMT is also found as a minor alkaloid in the bark, pods, and beans of "Anadenanthera peregrina" and "Anadenanthera colubrina", used to make Yopo and Vilca snuff, in which bufotenin is the main active alkaloid. Psilocin and its precursor psilocybin, an active chemical in many psilocybin mushrooms, are structurally similar to DMT. The psychotropic effects of DMT were first studied scientifically by the Hungarian chemist and psychologist Stephen Szára, who performed research with volunteers in the mid-1950s. Szára, who later worked for the US National Institutes of Health, had turned his attention to DMT after his order for LSD from the Swiss company Sandoz Laboratories was rejected on the grounds that the powerful psychotropic could be dangerous in the hands of a communist country. DMT is generally not active orally unless it is combined with a monoamine oxidase inhibitor (MAOI) such as a reversible inhibitor of monoamine oxidase A (RIMA), for example, harmaline. Without an MAOI, the body quickly metabolizes orally administered DMT, and it therefore has no hallucinogenic effect unless the dose exceeds monoamine oxidase's metabolic capacity.
Other means of ingestion such as vaporizing, injecting, or insufflating the drug can produce powerful hallucinations for a short time (usually less than half an hour), as the DMT reaches the brain before it can be metabolized by the body's natural monoamine oxidase. Taking an MAOI prior to vaporizing or injecting DMT prolongs and potentiates the effects. Induced DMT experiences can include profound time dilation; visual, auditory, tactile, and proprioceptive distortions and hallucinations; and other experiences that, by most firsthand accounts, defy verbal or visual description. Examples include perceiving hyperbolic geometry or seeing Escher-like impossible objects. Several scientific experimental studies have tried to measure subjective experiences of altered states of consciousness induced by drugs under highly controlled and safe conditions. In the 1990s, Rick Strassman and his colleagues conducted a five-year DMT study at the University of New Mexico. The results provided insight into the quality of subjective psychedelic experiences. In this study participants received DMT intravenously by injection, and the findings suggested that different psychedelic experiences can occur depending on the dose. Lower doses (0.01 and 0.05 mg/kg) produced somaesthetic and emotional responses, but not hallucinogenic experiences (e.g., 0.05 mg/kg had mild mood-elevating and calming properties). In contrast, researchers labeled the responses produced by higher doses (0.2 and 0.4 mg/kg) "hallucinogenic"; these elicited an "intensely colored, rapidly moving display of visual images, formed, abstract or both". Of the sensory modalities, the visual was the most affected. Participants reported visual hallucinations, fewer auditory hallucinations, and specific physical sensations progressing to a sense of bodily dissociation, as well as experiences of euphoria, calm, fear, and anxiety. Strassman also stressed the importance of the context in which the drug is taken. He claimed that DMT has no beneficial effects in itself; rather, the context in which people take it plays an important role. It appears that DMT can induce in a person a state or feeling of being able to "communicate with other intelligent life forms" (see "Machine Elves"). High doses of DMT produce a state that involves a sense of "another intelligence" that people sometimes describe as "super-intelligent" but "emotionally detached". In 1995 Adolf Dittrich and Daniel Lamparter conducted a study in which they found that the DMT-induced altered state of consciousness (ASC) is strongly influenced by habitual rather than situational factors. In the study researchers used three dimensions of the APZ questionnaire (rating scales of ASC) to describe the ASC. First, oceanic boundlessness (OB) refers to dissolution of ego boundaries, mostly associated with positive emotions. Second, anxious ego-dissolution (AED) includes disordered thought and loss of autonomy and self-control. Third, visionary restructuralization (VR) includes auditory and visual illusions as well as hallucinations. Results showed strong effects within the first and third dimensions for all conditions, especially with DMT, and suggested strong intrastability of the elicited reactions independent of the condition for the OB and VR scales. Importantly, the experiment was conducted in a safe laboratory environment. This setting had a certain influence on the results, which might be very different outside the laboratory environment.
Entities perceived during DMT inebriation have been represented in diverse forms of psychedelic art. The term "machine elf" was coined by ethnobotanist Terence McKenna for the entities he encountered in DMT "hyperspace", also using terms like "fractal elves" or "self-transforming machine elves". McKenna first encountered the "machine elves" after smoking DMT in Berkeley in 1965. His subsequent speculations regarding the hyperdimensional space in which they were encountered have inspired a great many artists and musicians, and the meaning of DMT entities has been a subject of considerable debate among participants in a networked cultural underground, enthused by McKenna's effusive accounts of DMT hyperspace. Cliff Pickover has also written about the "machine elf" experience in the book "Sex, Drugs, Einstein, & Elves", while Rick Strassman notes many similarities between self-reports of his DMT study participants' encounters with these "entities" and mythological descriptions of figures such as the Chayot Ha Kodesh in ancient religions, including both angels and demons. Strassman also argues for a similarity between his study participants' descriptions of mechanized wheels, gears, and machinery in these encounters and those described in visions of encounters with the Living Creatures and Ophanim of the Hebrew Bible, noting they may stem from a common neuropsychopharmacological experience. Strassman argues that the more positive of the "external entities" encountered in DMT experiences should be understood as analogous to certain forms of angels. However, Strassman's experimental participants also note that some other entities can subjectively resemble creatures more like insects and aliens. As a result, Strassman writes that these experiences among his experimental participants "also left me feeling confused and concerned about where the spirit molecule was leading us. It was at this point that I began to wonder if I was getting in over my head with this research." Hallucinations of strange creatures had been reported by Stephen Szára in a 1958 study in psychotic patients, in which he described how one of his subjects under the influence of DMT had experienced "strange creatures, dwarves or something" at the beginning of a DMT trip. Other researchers of the entities seemingly encountered by DMT users describe them as "entities" or "beings" in humanoid as well as animal form, with descriptions of "little people" being common (non-human gnomes, elves, imps, etc.). Strassman and others have speculated that this form of hallucination may be the cause of alien abduction and extraterrestrial encounter experiences, which may occur through endogenously occurring DMT. Likening them to descriptions of the rattling and chattering auditory phenomena described in encounters with the Hayyoth in the Book of Ezekiel, Rick Strassman notes that participants in his studies, when reporting encounters with the alleged entities, have also described loud auditory hallucinations, such as one subject reporting typically "the elves laughing or talking at high volume, chattering, twittering". According to a dose-response study in human subjects, dimethyltryptamine administered intravenously slightly elevated blood pressure, heart rate, pupil diameter, and rectal temperature, in addition to elevating blood concentrations of beta-endorphin, corticotropin, cortisol, and prolactin; growth hormone blood levels rose equally in response to all doses of DMT, and melatonin levels were unaffected.
The dependence potential of DMT and the risk of sustained psychological disturbance may be minimal when it is used infrequently, as in religious ceremonies; however, the physiological dependence potential of DMT and ayahuasca remains to be documented convincingly. In the 1950s, the endogenous production of psychoactive agents was considered a potential explanation for the hallucinatory symptoms of some psychiatric diseases; this is known as the transmethylation hypothesis. Several speculative and as yet untested hypotheses suggest that endogenous DMT is produced in the human brain and is involved in certain psychological and neurological states. DMT occurs naturally in small amounts in rat brain, human cerebrospinal fluid, and other tissues of humans and other mammals. In 2011, Nicholas V. Cozzi, of the University of Wisconsin School of Medicine and Public Health, concluded that INMT, an enzyme associated with the biosynthesis of DMT and endogenous hallucinogens, is present in the primate (rhesus macaque) pineal gland, retinal ganglion neurons, and spinal cord. Neurobiologist Andrew Gallimore (2013) suggested that while DMT might not have a modern neural function, it may have been an ancestral neuromodulator, once secreted in psychedelic concentrations during REM sleep, a function now lost. A standard dose for vaporized DMT is 20–60 milligrams. In general, this is inhaled in a few successive breaths. The effects last for a short period of time, usually 5 to 15 minutes, depending on the dose. The onset after inhalation is very fast (less than 45 seconds) and peak effects are reached within a minute. In the 1960s, DMT was known as a "business trip" in the US because of the relatively short duration (and rapid onset) of action when inhaled. DMT can be inhaled using a bong or even an e-cigarette. In a study conducted from 1990 through 1995, University of New Mexico psychiatrist Rick Strassman found that some volunteers injected with high doses of DMT reported experiences with perceived alien entities. Usually, the reported entities were experienced as the inhabitants of a perceived independent reality that the subjects reported visiting while under the influence of DMT. DMT is broken down by the enzyme monoamine oxidase through a process called deamination, and is quickly inactivated orally unless combined with a monoamine oxidase inhibitor (MAOI). The traditional South American beverage ayahuasca, or yage, is derived by boiling the ayahuasca vine ("Banisteriopsis caapi") with leaves of one or more plants containing DMT, such as "Psychotria viridis", "Psychotria carthagenensis", or "Diplopterys cabrerana". The ayahuasca vine contains harmala alkaloids, highly active reversible inhibitors of monoamine oxidase A (RIMAs), rendering the DMT orally active by protecting it from deamination. A variety of different recipes are used to make the brew depending on the purpose of the ayahuasca session or the local availability of ingredients. Two common sources of DMT in the western US are reed canary grass ("Phalaris arundinacea") and Harding grass ("Phalaris aquatica"). These invasive grasses contain low levels of DMT and other alkaloids, but they also contain gramine, which is toxic and difficult to separate. In addition, jurema ("Mimosa tenuiflora") shows evidence of DMT content: the pink layer in the inner root bark of this small tree contains a high concentration of "N","N"-DMT.
Taken orally with an RIMA, DMT produces a long-lasting (over three hours), slow, deep metaphysical experience similar to that of psilocybin mushrooms, but more intense. RIMAs should be used with caution, as they can have fatal interactions with some prescription drugs such as SSRI antidepressants, some over-the-counter sympathomimetic drugs such as ephedrine, certain cough medicines, and even some herbal remedies. DMT has been used in South America since pre-Columbian times. DMT was first synthesized in 1931 by chemist Richard Helmuth Fredrick Manske (born 1901 in Berlin; died 1977). In general, its discovery as a natural product is credited to Brazilian chemist and microbiologist Oswaldo Gonçalves de Lima (1908–1989) who, in 1946, isolated an alkaloid he named "nigerina" (nigerine) from the root bark of "jurema preta", that is, "Mimosa tenuiflora". However, in a careful review of the case, Jonathan Ott shows that the empirical formula for nigerine determined by Gonçalves de Lima, which notably contains an atom of oxygen, can match only a partial, "impure" or "contaminated" form of DMT. It was only in 1959, when Gonçalves de Lima provided American chemists a sample of "Mimosa tenuiflora" roots, that DMT was unequivocally identified in this plant material. Less ambiguous is the case of the isolation and formal identification of DMT in 1955 in seeds and pods of "Anadenanthera peregrina" by a team of American chemists led by Evan Horning (1916–1993). Since 1955, DMT has been found in a host of organisms: in at least fifty plant species belonging to ten families, and in at least four animal species, including one gorgonian and three mammalian species (including humans). In terms of a scientific understanding, the hallucinogenic properties of DMT were not uncovered until 1956, by the Hungarian chemist and psychiatrist Stephen Szára. In his paper "Dimethyltryptamine: its metabolism in man", he references the plant "Mimosa hostilis", from which he prepared an extract that he injected into his own muscle. This is considered to be the link connecting the chemical structure of DMT to its cultural consumption as a psychoactive and religious sacrament. Another historical milestone is the discovery of DMT in plants frequently used by Amazonian natives as additives to the vine "Banisteriopsis caapi" to make ayahuasca decoctions. In 1957, American chemists Francis Hochstein and Anita Paradies identified DMT in an "aqueous extract" of leaves of a plant they named "Prestonia amazonicum" ["sic"] and described as "commonly mixed" with "B. caapi". The lack of a proper botanical identification of "Prestonia amazonica" in this study led American ethnobotanist Richard Evans Schultes (1915–2001) and other scientists to raise serious doubts about the claimed plant identity. The mistake likely led the writer William Burroughs to regard the DMT he experimented with in Tangier in 1961 as "Prestonia". Better evidence was produced in 1965 by French pharmacologist Jacques Poisson, who isolated DMT as the sole alkaloid from leaves provided and used by Aguaruna Indians, identified as having come from the vine "Diplopterys cabrerana" (then known as "Banisteriopsis rusbyana"). Published in 1970, the first identification of DMT in the plant "Psychotria viridis", another common additive of ayahuasca, was made by a team of American researchers led by pharmacologist Ara der Marderosian. Not only did they detect DMT in leaves of "P.
viridis" obtained from Kaxinawá indigenous people, but they also were the first to identify it in a sample of an ayahuasca decoction, prepared by the same indigenous people. Israel – DMT is an illegal substance; production, trade and possession are prosecuted as crimes. In 2017 the Santo Daime Church Céu do Montréal received religious exemption to use Ayahuasca as a sacrament in their rituals. In December 2004, the Supreme Court lifted a stay, thereby allowing the Brazil-based União do Vegetal (UDV) church to use a decoction containing DMT in their Christmas services that year. This decoction is a tea made from boiled leaves and vines, known as hoasca within the UDV, and ayahuasca in different cultures. In "Gonzales v. O Centro Espírita Beneficente União do Vegetal", the Supreme Court heard arguments on November 1, 2005, and unanimously ruled in February 2006 that the U.S. federal government must allow the UDV to import and consume the tea for religious ceremonies under the 1993 Religious Freedom Restoration Act. In September 2008, the three Santo Daime churches filed suit in federal court to gain legal status to import DMT-containing ayahuasca tea. The case, "Church of the Holy Light of the Queen v. Mukasey", presided over by Judge Owen M. Panner, was ruled in favor of the Santo Daime church. As of March 21, 2009, a federal judge says members of the church in Ashland can import, distribute and brew ayahuasca. U.S. District Judge Owen Panner issued a permanent injunction barring the government from prohibiting or penalizing the sacramental use of "Daime tea". Panner's order said activities of The Church of the Holy Light of the Queen are legal and protected under freedom of religion. His order prohibits the federal government from interfering with and prosecuting church members who follow a list of regulations set out in his order. Under the Misuse of Drugs act 1981 6.0 g of DMT is considered enough to determine a court of trial and 2.0 g is considered intent to sell and supply. Between 2011 and 2012, the Australian Federal Government was considering changes to the Australian Criminal Code that would classify any plants containing any amount of DMT as "controlled plants". DMT itself was already controlled under current laws. The proposed changes included other similar blanket bans for other substances, such as a ban on any and all plants containing Mescaline or Ephedrine. The proposal was not pursued after political embarrassment on realisation that this would make the official Floral Emblem of Australia, Acacia pycnantha (Golden Wattle), illegal. The Therapeutic Goods Administration and federal authority had considered a motion to ban the same, but this was withdrawn in May 2012 (as DMT may still hold potential entheogenic value to native and/or religious people). DMT is commonly handled and stored as a fumarate, as other DMT acid salts are extremely hygroscopic and will not readily crystallize. Its freebase form, although less stable than DMT fumarate, is favored by recreational users choosing to vaporize the chemical as it has a lower boiling point. Dimethyltryptamine is an indole alkaloid derived from the shikimate pathway. Its biosynthesis is relatively simple and summarized in the adjacent picture. In plants, the parent amino acid L-tryptophan is produced endogenously where in animals L-tryptophan is an essential amino acid coming from diet. 
No matter the source of L-tryptophan, the biosynthesis begins with its decarboxylation by an aromatic amino acid decarboxylase (AADC) enzyme (step 1). The resulting decarboxylated tryptophan analog is tryptamine. Tryptamine then undergoes a transmethylation (step 2): the enzyme indolethylamine-N-methyltransferase (INMT) catalyzes the transfer of a methyl group from the cofactor S-adenosyl-methionine (SAM), via nucleophilic attack, to tryptamine. This reaction transforms SAM into S-adenosylhomocysteine (SAH), and gives the intermediate product "N"-methyltryptamine (NMT). NMT is in turn transmethylated by the same process (step 3) to form the end product "N","N"-dimethyltryptamine. Tryptamine transmethylation is regulated by two products of the reaction: SAH and DMT, which were shown "ex vivo" to be among the most potent inhibitors of rabbit INMT activity. This transmethylation mechanism has been repeatedly and consistently proven by radiolabeling of the SAM methyl group with carbon-14 ((14C-CH3)SAM). DMT can be synthesized through several possible pathways from different starting materials. The two most commonly encountered synthetic routes are, first, the reaction of indole with oxalyl chloride followed by reaction with dimethylamine and reduction of the carbonyl functionalities with lithium aluminum hydride to form DMT; and, second, the "N","N"-dimethylation of tryptamine using formaldehyde followed by reduction with sodium cyanoborohydride or sodium triacetoxyborohydride. Sodium borohydride is not used, as it reduces the formaldehyde to methanol before it is able to react with the primary amine of tryptamine. In a clandestine setting, DMT is not typically synthesized because of the lack of availability of the starting materials, namely tryptamine and oxalyl chloride. Instead, it is more often extracted from plant sources using a non-polar hydrocarbon solvent such as naphtha or heptane, and a base such as sodium hydroxide. Alternatively, an acid-base extraction is sometimes used instead. A variety of plants contain DMT at levels sufficient to be viable sources, but specific plants such as "Mimosa tenuiflora" and "Acacia confusa" are most often used. The chemicals involved in the extraction are commonly available. The plant material may be illegal to procure in some countries, and the end product (DMT) is illegal in most countries. In a study published in "Science" in 1961, Julius Axelrod found an "N"-methyltransferase enzyme capable of mediating biotransformation of tryptamine into DMT in rabbit lung. This finding initiated a still-ongoing scientific interest in endogenous DMT production in humans and other mammals. From then on, two major complementary lines of evidence have been investigated: localization and further characterization of the "N"-methyltransferase enzyme, and analytical studies looking for endogenously produced DMT in body fluids and tissues. In 2013, researchers reported DMT in the pineal gland microdialysate of rodents. A study published in 2014 reported the biosynthesis of "N","N"-dimethyltryptamine (DMT) in the human melanoma cell line SK-Mel-147, including details on its metabolism by peroxidases. In 2014, researchers demonstrated the immunomodulatory potential of DMT and 5-MeO-DMT through the Sigma-1 receptor of human immune cells. This immunomodulatory activity may contribute to significant anti-inflammatory effects and tissue regeneration. The first claimed detection of mammalian endogenous DMT was published in June 1965: German researchers F. Franzen and H.
Gross reported having detected and quantified DMT, along with its structural analog bufotenin (5-HO-DMT), in human blood and urine. In an article published four months later, the method used in their study was strongly criticized, and the credibility of their results challenged. Few of the analytical methods used prior to 2001 to measure levels of endogenously formed DMT had enough sensitivity and selectivity to produce reliable results. Gas chromatography, preferably coupled to mass spectrometry (GC-MS), is considered a minimum requirement. A study published in 2005 implemented the most sensitive and selective method used to date to measure endogenous DMT: liquid chromatography-tandem mass spectrometry with electrospray ionization (LC-ESI-MS/MS) allows limits of detection (LODs) 12- to 200-fold lower than those attained by the best methods employed in the 1970s. The data summarized in the table below are from studies conforming to the abovementioned requirements (abbreviations used: CSF = cerebrospinal fluid; LOD = limit of detection; n = number of samples; ng/L and ng/kg = nanograms (10⁻⁹ g) per litre and nanograms per kilogram, respectively). A 2013 study found DMT in microdialysate obtained from a rat's pineal gland, providing evidence of endogenous DMT in the mammalian brain. In 2019, experiments showed that the rat brain is capable of synthesizing and releasing DMT. These results raise the possibility that this phenomenon may occur similarly in human brains. DMT may be measured in blood, plasma or urine using chromatographic techniques as a diagnostic tool in clinical poisoning situations or to aid in the medicolegal investigation of suspicious deaths. In general, blood or plasma DMT levels in recreational users of the drug are in the 10–30 μg/L range during the first several hours post-ingestion. Less than 0.1% of an oral dose is eliminated unchanged in the 24-hour urine of humans. Before techniques of molecular biology were used to localize indolethylamine "N"-methyltransferase (INMT), characterization and localization went hand in hand: samples of the biological material where INMT is hypothesized to be active are subjected to an enzyme assay. Those enzyme assays are performed either with a radiolabeled methyl donor like (14C-CH3)SAM, to which known amounts of unlabeled substrates like tryptamine are added, or with the addition of a radiolabeled substrate like (14C)NMT to demonstrate in vivo formation. As qualitative determination of the radioactively tagged product of the enzymatic reaction is sufficient to characterize INMT existence and activity (or lack thereof), the analytical methods used in INMT assays are not required to be as sensitive as those needed to directly detect and quantify the minute amounts of endogenously formed DMT. The essentially qualitative method of thin layer chromatography (TLC) was thus used in the vast majority of studies. Robust evidence that INMT can catalyze transmethylation of tryptamine into NMT and DMT was provided during the early 1970s by reverse isotope dilution analysis coupled to mass spectrometry, for rabbit and human lung. Selectivity rather than sensitivity proved to be an Achilles' heel for some TLC methods, with the discovery in 1974–1975 that incubating rat blood cells or brain tissue with (14C-CH3)SAM and NMT as substrate mostly yields tetrahydro-β-carboline derivatives, and negligible amounts of DMT in brain tissue.
It was simultaneously realized that the TLC methods used thus far in almost all published studies on INMT and DMT biosynthesis were incapable of resolving DMT from those tetrahydro-β-carbolines. These findings dealt a blow to all previous claims of evidence of INMT activity and DMT biosynthesis in avian and mammalian brain, including in vivo, as they all relied upon the problematic TLC methods: their validity was doubted in replication studies that made use of improved TLC methods and failed to evidence DMT-producing INMT activity in rat and human brain tissues. Published in 1978, the last study attempting to evidence in vivo INMT activity and DMT production in (rat) brain with TLC methods found biotransformation of radiolabeled tryptamine into DMT to be real but "insignificant". The capability of the method used in this latter study to resolve DMT from tetrahydro-β-carbolines was later questioned. To localize INMT, a qualitative leap was accomplished with the use of modern techniques of molecular biology and of immunohistochemistry. In humans, a gene encoding INMT was determined to be located on chromosome 7. Northern blot analyses revealed INMT messenger RNA (mRNA) to be highly expressed in rabbit lung, and in human thyroid, adrenal gland, and lung. Intermediate levels of expression were found in human heart, skeletal muscle, trachea, stomach, small intestine, pancreas, testis, prostate, placenta, lymph node, and spinal cord. Low to very low levels of expression were noted in rabbit brain, and in human thymus, liver, spleen, kidney, colon, ovary, and bone marrow. INMT mRNA expression was absent in human peripheral blood leukocytes, whole brain, and in tissue from seven specific brain regions (thalamus, subthalamic nucleus, caudate nucleus, hippocampus, amygdala, substantia nigra, and corpus callosum). Immunohistochemistry showed INMT to be present in large amounts in glandular epithelial cells of the small and large intestines. In 2011, immunohistochemistry revealed the presence of INMT in primate nervous tissue including retina, spinal cord motor neurons, and pineal gland. DMT peak concentrations ("C"max) measured in whole blood after intramuscular (IM) injection (0.7 mg/kg, n = 11) and in plasma following intravenous (IV) administration (0.4 mg/kg, n = 10) of fully psychedelic doses are in the range of ≈14 to 154 μg/L and 32 to 204 μg/L, respectively. The corresponding molar concentrations of DMT are therefore in the range of 0.074–0.818 μM in whole blood and 0.170–1.08 μM in plasma. However, several studies have described active transport and accumulation of DMT into rat and dog brain following peripheral administration. Similar active transport and accumulation processes likely occur in human brain and may concentrate DMT in brain by several-fold or more (relative to blood), resulting in local concentrations in the micromolar or higher range. Such concentrations would be commensurate with serotonin brain tissue concentrations, which have been consistently determined to be in the 1.5–4 μM range. Closely coinciding with peak psychedelic effects, mean time to reach peak concentrations ("T"max) was determined to be 10–15 minutes in whole blood after IM injection, and 2 minutes in plasma after IV administration. When taken orally mixed in an ayahuasca decoction, or in freeze-dried ayahuasca gel caps, DMT "T"max is considerably delayed: 107.59 ± 32.5 minutes and 90–120 minutes, respectively. The pharmacokinetics of vaporized DMT have not been studied or reported.
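The mass-to-molar conversion used above is simple arithmetic; the following minimal Python sketch reproduces it, assuming a molar mass for DMT of about 188.27 g/mol (C12H16N2), a figure not stated in the text. The function name is illustrative.

```python
# Converting the quoted mass concentrations (ug/L) into molar
# concentrations (uM), assuming DMT's molar mass of ~188.27 g/mol.

DMT_MOLAR_MASS = 188.27  # g/mol (C12H16N2); assumed, not given in the text

def ug_per_litre_to_micromolar(conc_ug_per_l: float) -> float:
    """Convert ug/L to umol/L; the 10**-6 prefixes cancel."""
    return conc_ug_per_l / DMT_MOLAR_MASS

# Whole-blood Cmax range after IM injection (~14-154 ug/L):
print(ug_per_litre_to_micromolar(14), ug_per_litre_to_micromolar(154))
# -> approximately 0.074 and 0.818 uM
# Plasma Cmax range after IV administration (32-204 ug/L):
print(ug_per_litre_to_micromolar(32), ug_per_litre_to_micromolar(204))
# -> approximately 0.170 and 1.084 uM
```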
DMT binds non-selectively, with affinities < 0.6 μM, to the following serotonin receptors: 5-HT1A, 5-HT1B, 5-HT1D, 5-HT2A, 5-HT2B, 5-HT2C, 5-HT6, and 5-HT7. An agonist action has been determined at 5-HT1A, 5-HT2A and 5-HT2C. Its efficacies at other serotonin receptors remain to be determined. Of special interest is the determination of its efficacy at the human 5-HT2B receptor, as two "in vitro" assays evidenced DMT's high affinity for this receptor: 0.108 μM and 0.184 μM. This may be of importance because chronic or frequent use of serotonergic drugs showing preferential high affinity and clear agonism at the 5-HT2B receptor has been causally linked to valvular heart disease. It has also been shown to possess affinity for the dopamine D1, α1-adrenergic, α2-adrenergic, imidazoline-1, and σ1 receptors. Converging lines of evidence established activation of the σ1 receptor at concentrations of 50–100 μM. Its efficacies at the other receptor binding sites are unclear. It has also been shown "in vitro" to be a substrate for the cell-surface serotonin transporter (SERT) and the intracellular vesicular monoamine transporter 2 (VMAT2), inhibiting SERT-mediated serotonin uptake in human platelets at an average concentration of 4.00 ± 0.70 μM, and VMAT2-mediated serotonin uptake in vesicles (of armyworm Sf9 cells) expressing rat VMAT2 at an average concentration of 93 ± 6.8 μM. As with other so-called "classical hallucinogens", a large part of DMT's psychedelic effects can be attributed to a functionally selective activation of the 5-HT2A receptor. DMT concentrations eliciting 50% of its maximal effect (half maximal effective concentration = EC50, or Kact) at the human 5-HT2A receptor "in vitro" are in the 0.118–0.983 μM range. This range of values coincides well with the range of concentrations measured in blood and plasma after administration of a fully psychedelic dose (see Pharmacokinetics). As DMT has been shown to have a slightly lower EC50 at the human serotonin 2C receptor than at the 2A receptor, 5-HT2C is also likely implicated in DMT's overall effects. Other receptors, such as 5-HT1A and σ1, may also play a role. In 2009, it was hypothesized that DMT may be an endogenous ligand for the σ1 receptor. The concentration of DMT needed for σ1 activation "in vitro" (50–100 μM) is similar to the behaviorally active concentration measured in mouse brain, approximately 106 μM. This is minimally 4 orders of magnitude higher than the average concentrations measured in rat brain tissue or human plasma under basal conditions (see Endogenous DMT), so σ1 receptors are likely to be activated only under conditions of high local DMT concentrations. If DMT is stored in synaptic vesicles, such concentrations might occur during vesicular release. To illustrate, while the "average" concentration of serotonin in brain tissue is in the 1.5–4 μM range, the concentration of serotonin in synaptic vesicles has been measured at 270 mM. Following vesicular release, the resulting concentration of serotonin in the synaptic cleft, to which serotonin receptors are exposed, is estimated to be about 300 μM. Thus, while "in vitro" receptor binding affinities, efficacies, and average concentrations in tissue or plasma are useful, they are not likely to predict DMT concentrations in the vesicles or at synaptic or intracellular receptors.
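To make the quantitative comparison concrete, here is a minimal Python sketch with values taken from the text; the interval-overlap helper and all names are illustrative, not part of any published analysis.

```python
# Comparing measured plasma DMT levels with the in vitro activation
# ranges quoted above (all values in uM, taken from the text).

PLASMA_CMAX = (0.170, 1.08)        # plasma range after IV psychedelic doses
EC50_5HT2A = (0.118, 0.983)        # human 5-HT2A EC50/Kact range in vitro
SIGMA1_ACTIVATION = (50.0, 100.0)  # sigma-1 activation range in vitro

def overlaps(a: tuple, b: tuple) -> bool:
    """True if the closed intervals a and b share at least one point."""
    return max(a[0], b[0]) <= min(a[1], b[1])

print(overlaps(PLASMA_CMAX, EC50_5HT2A))         # True: ranges coincide well
print(overlaps(PLASMA_CMAX, SIGMA1_ACTIVATION))  # False: ~50-fold below sigma-1
```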
Under these conditions, notions of receptor selectivity are moot, and it seems probable that most of the receptors identified as targets for DMT (see above) participate in producing its psychedelic effects. Separately, electronic cigarette cartridges filled with DMT began to be sold on the black market in 2018.
https://en.wikipedia.org/wiki?curid=8748
Dominatrix A dominatrix (plural dominatrices) is a woman who takes the dominant role in BDSM activities. A dominatrix might be of any sexual orientation, and her orientation does not necessarily limit the genders of her submissive partners. The role of a dominatrix may not even involve physical pain toward the submissive; her domination can be verbal, involving humiliating tasks, or servitude. A dominatrix is typically a paid professional ("pro-domme"), as the term "dominatrix" is little used within the non-professional BDSM scene. The term "domme" is a coined pseudo-French female variation of the slang "dom" (short for "dominant"). The use of "domme", "dominatrix", "dom", or "dominant" by any woman in a dominant role is chosen mostly by personal preference and the conventions of the local BDSM scene. The term mistress or dominant mistress is sometimes also used. Female dominance, female domination or femdom refer to BDSM activities in which the dominant partner is female. As fetish culture becomes increasingly prevalent in Western media, depictions of dominatrices in film and television have become more common. "Dominatrix" is the feminine form of the Latin "dominator", a ruler or lord, and was originally used in a non-sexual sense. Its use in English dates back to at least 1561. Its earliest recorded use in the prevalent modern sense, as a female dominant in S&M, dates to 1961. It was initially coined to describe a woman who provides punishment-for-pay as one of the case studies within Bruce Roger's pulp paperback "The Bizarre Lovemakers". The term was taken up shortly after by the Myron Kosloff title "Dominatrix" (with art by Eric Stanton) in 1968, and entered more popular mainstream knowledge following the 1976 film "Dominatrix Without Mercy". Although the term "dominatrix" was not used, the classic example in literature of the female dominant-male submissive relationship is portrayed in the 1870 novella "Venus in Furs" by the Austrian writer Leopold von Sacher-Masoch. The term "masochism" was later derived from the author's name by Richard von Krafft-Ebing in the latter's 1886 forensic study "Psychopathia Sexualis". The history of the dominatrix is argued to date back to rituals of the goddess Inanna (or Ishtar, as she was known in Akkadian) in ancient Mesopotamia. Ancient cuneiform texts consisting of "Hymns to Inanna" have been cited as examples of the archetype of a powerful, sexual female displaying dominating behaviors and forcing gods and men into submission to her. Archaeologist and historian Anne O. Nomis notes that Inanna's rituals included cross-dressing of cult personnel, and rituals "imbued with pain and ecstasy, bringing about initiation and journeys of altered consciousness; punishment, moaning, ecstasy, lament and song, participants exhausting themselves with weeping and grief." The tale of Phyllis and Aristotle, which became popular and gained numerous versions from the 12th century onwards, tells the story of a dominant woman who seduced and dominated the intellect of the greatest philosopher. In the story, Phyllis forces Aristotle to kneel on the ground so that she can ride on his back while whipping and verbally humiliating him. The profession appears to have originated as a specialization within brothels before becoming its own unique craft. As far back as the 1590s, flagellation within an erotic setting is recorded. The profession features in erotic prints of the era, such as the British Museum mezzotint "The Cully Flaug'd" (c.
1674–1702), and in accounts of forbidden books which record the flogging schools and the activities practised. Within the 18th century, female "Birch Disciplinarians" advertised their services in a book masked as a collection of lectures or theatrical plays, entitled "Fashionable Lectures" (c. 1761). This included the names of 57 women, some actresses and courtesans, who catered to birch discipline fantasies, keeping a room with rods and cat o' nine tails and charging their clients a guinea for a "lecture". The 19th century is characterised by what historian Anne O. Nomis calls the "Golden Age of the Governess". No fewer than twenty establishments were documented as having existed by the 1840s, supported entirely by flagellation practices and known as "Houses of Discipline", distinct from brothels. Amongst the well-known "dominatrix governesses" were Mrs Chalmers, Mrs Noyeau, the late Mrs Jones of Hertford Street and London Street, the late Mrs Theresa Berkley, Bessy Burgess of York Square and Mrs Pyree of Burton Cres. The most famous of these governess "female flagellants" was Theresa Berkley, who operated her establishment on Charlotte Street in the central London district of Marylebone. She is recorded to have used implements such as whips, canes and birches to chastise and punish her male clients, as well as the Berkley Horse, a specially designed flogging machine, and a pulley suspension system for lifting them off the floor. Such historical use of corporal punishment and suspension, in a setting of domination roleplay, connects very closely to the practices of modern-day professional dominatrices. The "bizarre style" (as it came to be called) of leather catsuits, claws, tail whips, and latex rubber only came about in the 20th century, initially within commercial fetish photography, and was taken up by dominatrices. Within the mid-20th century, dominatrices operated in a very discreet and underground manner, which has made them difficult to trace within the historical record. A few photographs still exist of the women who ran their domination businesses in London, New York, The Hague and Hamburg's Herbertstraße, predominantly in sepia and black-and-white photographs and scans from magazine articles, copied and re-copied. Amongst these was Miss Doreen of London, who was acquainted with John Sutcliffe of "AtomAge" fame and whose clients reportedly included Britain's top politicians and businessmen. In New York, the dominatrix Anne Laurence was known within an underground circle of acquaintances during the 1950s, with Monique Von Cleef arriving in the early 1960s and hitting national headlines when her home was raided by police detectives on 22 December 1965. Von Cleef went on to set up her "House of Pain" in The Hague in the 1970s, which became one of the world capitals for dominatrices, reportedly visited by lawyers, ambassadors, diplomats and politicians. Domenica Niehoff worked as a dominatrix in Hamburg and appeared on talk shows on German television from the 1970s onwards, campaigning for sex workers' rights. Mistress Raven, founder and manager of Pandora's Box, one of New York's best-known BDSM studios, was featured in Nick Broomfield's 1996 documentary film "Fetishes". The term "dominatrix" is mostly used to describe a female professional dominant (or "pro-domme") who is paid to engage in BDSM play with a submissive. Professional dominatrices are not prostitutes, despite the sensual and erotic interactions they have.
An appointment or roleplay is referred to as a "session", and is often conducted in a dedicated professional play space set up with specialist equipment, known as a "dungeon". Sessions may also be conducted remotely by letter or telephone, or, in the contemporary era of technological connectivity, by email or online chat. Most, but not all, clients of female professional dominants are men. Male professional dominants also exist, catering predominantly to the gay male market. Women who engage in female domination typically promote and title themselves under the terms "dominatrix", "mistress", "lady", "madame", "herrin" or "goddess". In a study of German dominatrices, Andrew Wilson said that the trend for dominatrices to choose such names aimed at creating and maintaining an atmosphere in which class, femininity and mystery are key elements of their self-constructed identity. Some professional dominatrices set minimum age limits for their clients. Popular requests from clients are for dungeon play, including bondage, spanking and cock and ball torture, or for medical play using hoods, gas masks and urethral sounding. Verbal erotic humiliation is also popular. It is not unusual for a dominatrix to consider her profession different from that of an escort and not perform tie and tease or "happy endings". Typically, professional dominatrices do not have sexual intercourse with their clients, do not become naked with their clients and do not allow their clients to touch them. The Canadian dominatrix Terri-Jean Bedford, one of three women who initiated an application in the Ontario Superior Court seeking invalidation of Canada's laws regarding brothels, sought to make clear to the media that her occupation was that of a dominatrix rather than a prostitute, due to the public's frequent misunderstanding and conflation of the two terms. While dominatrices come from many different backgrounds, it has been shown that a considerable number are well-educated. Research into US dominatrices published in 2012 indicated that 39% of the sample studied had received some sort of graduate training. A 1985 study suggested that about 30 percent of participants in the BDSM subculture were female. A 1994 report indicated that around a quarter of the women who took part in the BDSM subculture did so professionally. In a 1995 study of Internet discussion group messages, a preference for the dominant-initiator role was expressed in 11% of messages by heterosexual women, compared to 71% of messages by heterosexual men. Professional dominatrices can be seen advertising their services online and in print publications which carry erotic services advertising, such as contact magazines and fetish magazines that specialise in female domination. The precise number of women actively offering professional domination services is unknown. Most professional dominatrices practice in large metropolitan cities such as New York, Los Angeles, and London, with as many as 200 women working as dominatrices in Los Angeles. Professional dominatrices may take pride in, and differentiate themselves by, their psychological insight into their clients' fetishes and desires, as well as their technical ability to perform complex BDSM practices, such as Japanese shibari, head-scissoring, and other forms of bondage, suspension, torture roleplay, and corporal punishment, which require a high degree of knowledge and competency to safely oversee.
From a sociological point of view, Danielle Lindemann has described the "embattled purity regime" in which many pro-dommes emphasise their specialist knowledge and professional skills, while distancing themselves from economic criteria for success, in a way comparable to avant-garde artists. Some dominatrices practice financial domination, or findom, a fetish in which a submissive is aroused by sending money or gifts to a dominatrix at her instruction. In some cases the dominatrix is given control of the submissive's finances, or a "blackmail" scenario is acted out. In the majority of cases the dominatrix and the submissive do not physically meet; the interactions are typically performed using the Internet, which is also where such services are advertised. Findom was originally a niche service that a traditional dominatrix would offer, but it has become popular with less-experienced online practitioners. To differentiate women who identify as a dominatrix but do not offer paid services, non-professional dominants are occasionally referred to as a "lifestyle" dominatrix or mistress. The use of "lifestyle" to signify BDSM is occasionally a contentious topic in the BDSM community, and some dominatrices may dislike the term. Some professional dominatrices are also "lifestyle" dominatrices; that is, in addition to paid sessions with submissive clients, they engage in unpaid recreational sessions or may incorporate power exchange within their own private lives and relationships. However, the term has fallen out of general usage with respect to women who are dominant in their private relationships and has increasingly taken on the connotation of "professional". Catherine Robbe-Grillet is a lifestyle dominatrix. Born in Paris on September 24, 1930, she became France's most famous dominatrix. She is also a writer and actress, and the widow of nouveau roman pioneer and sadist Alain Robbe-Grillet. She currently lives with Beverly Charpentier, a 51-year-old South African woman who is her submissive companion. Despite her fame as a dominatrix, Robbe-Grillet has never accepted payment for her "ceremonies". She is quoted as saying, "If someone pays, then they are in charge. I need to remain free. It is important that everyone involved knows that I do it solely for my pleasure." "Catherine is my secret garden," Charpentier says. "I have given myself to her, body and soul. She does whatever she wants, whenever she wants, with either or both, according to her pleasure, and her pleasure is also my pleasure." Robbe-Grillet's novels have been heavily censored for their S/M stories. She identifies as a "pro-sex feminist" and "the kind of feminist who supports the right of any man or woman to work as a prostitute, if it is their free choice." The dominatrix is a female archetype which operates on a symbolic mode of representation, associated with particular attire and props that are drawn on within popular culture to signify her role as a strong, dominant, sexualised woman, linked to but distinct from images of sexual fetish. During the twentieth century, the imagery associated with dominatrices was developed by the work of a number of artists, including the costume designer and photographer Charles Guyette, the publisher and film director Irving Klaw, and the illustrators Eric Stanton and Gene Bilbrew, who drew for the fetish magazine Exotique. One of the garments commonly associated with the dominatrix is the catsuit.
Historically, the black leather female catsuit entered dominant fetish culture in the 1950s with the "AtomAge" magazine and its connections to fetish fashion designer John Sutcliffe. The spill-over into mainstream culture occurred with catsuits being worn by strong female protagonists in popular 1960s TV programs like "The Avengers", and by comic super-heroines such as Catwoman, for whom the catsuit represented the independent woman capable of "kick-ass" moves and antics, enabling complete freedom of movement. On another level, the one-piece catsuit accentuated and exaggerated the sexualized female form, providing visual access to a woman's body while simultaneously obstructing physical penetrative access. "You can look but you can't touch" is the mechanism of this operation, which plays upon the BDSM practice known as "tease and denial". Common footwear signifying the dominatrix includes thigh-high boots in leather or shiny PVC, which have long held a fetishistic status and are sometimes called kinky boots, along with the very high stiletto heel. Fishnet stockings, seamed hosiery, and stockings and garter belts (suspenders) are also popular accents in the representation and attire of dominatrices, emphasizing the form and length of their legs with erotic connotation. Tight leather corsets are another staple garment of the dominatrix. Gloves, whether long opera gloves or fingerless gloves, are often a further accessory to emphasize the feminine role. Neck corsets may also be worn. Dominatrices often wear clothing made from fetish fashion materials, including PVC clothing, latex clothing and garments drawn from the leather subculture. In some cases elements of dominatrix attire, such as leather boots and peaked cap, are drawn from Nazi chic, particularly the black SS officer's uniform, which has been widely adopted and fetishized by underground gay and BDSM lifestyle groups to satisfy a uniform fetish. The dominatrix is frequently represented using strong, dominant body language, comparable to dominant posturing in the animal world. In conventional representation, she brandishes props that strongly signify her role, such as a flogger, whip or riding crop, as illustrated in the artwork of Bruno Zach in the early 20th century. Literature on the dominatrix dates back to the 10th century. The canoness Hroswitha, in her manuscript "Maria", uses the word "dominatrix" for the main character, who is portrayed as an unattainable woman, too good for any of the men who are in love with her. The theme of "the unattainable woman" was also used extensively in medieval literature, although it differs from that of the dominatrix: medieval treatments of the unattainable woman concerned issues of social class and structure, with chivalry being a prime part of a relationship between a man and woman. There are some exceptions to this trend in the period. In Cervantes' "Don Quixote" (1605), Celadon is imprisoned by Galatea and complains that his "mistress . . . Galatea keeps me on such a short leash". In 1648, Robert Herrick published "Hesperides", which contains three revealing poems, "An Hymne to Love", "The Dream", and "To Love", showcasing a masculine longing for domination, restraint and discipline. In "Ulysses" by James Joyce, the character Bloom has many fantasies of submission to a lady and of receiving whippings from her.
Female domination has been explored in literature dating back as far as the 10th century. In more recent popular fiction, female domination is depicted in the novels "Love Story in the Style of Femdom" (2017) and "Our Wild Sex in Malindi" (2020) by the Russian writer Andrei Gusev.
https://en.wikipedia.org/wiki?curid=8751
Flag of Denmark The flag of Denmark (Danish: "Dannebrog") is red with a white Scandinavian cross that extends to the edges of the flag; the vertical part of the cross is shifted to the hoist side. A banner with a white-on-red cross is attested as having been used by the kings of Denmark since the 14th century. An origin legend with considerable impact on Danish national historiography connects the introduction of the flag to the Battle of Lindanise of 1219. The elongated Nordic cross reflects its use as a maritime flag in the 18th century. The flag became popular as a national flag in the early 19th century. Its private use was outlawed in 1834, and again permitted in a regulation of 1854. The flag holds the world record of being the oldest continuously used national flag. In 1748, a regulation defined the correct lengths of the two last fields in the flag as 6/4 of the length of the square fields. In May 1893, a new regulation to all chiefs of police stated that the police should not intervene if the two last fields in the flag were longer than 6/4, as long as they did not exceed 7/4 of the length of the square fields, and provided that this was the only rule violated. This regulation is still in effect today, and thus the legal proportions of the national flag are 3:1:3 in width and anywhere between 3:1:4.5 and 3:1:5.25 in length. No official definition of "Dannebrog rød" exists. The private company Dansk Standard, regulation number 359 (2005), defines the red colour of the flag as Pantone 186c. A tradition recorded in the 16th century traces the origin of the flag to the campaigns of Valdemar II of Denmark (r. 1202–1241). The oldest of these accounts is in Christiern Pedersen's "Danske Krønike", a sequel to Saxo's Gesta Danorum written 1520–23. Here, the flag falls from the sky during a Russian campaign of Valdemar's. Pedersen also states that the very same flag was taken into exile by Eric of Pomerania in 1440. The second source is the writing of the Franciscan friar Petrus Olai (Peder Olsen) of Roskilde (died c. 1570). This record describes a battle in 1208 near Fellin during the Estonia campaign of King Valdemar II: the Danes were all but defeated when a lamb-skin banner depicting a white cross fell from the sky and miraculously led to a Danish victory. In a third account, also by Petrus Olai, in "Danmarks Tolv Herligheder" ("Twelve Splendours of Denmark"), in splendour number nine, the same story is re-told almost verbatim, with a paragraph inserted correcting the year to 1219. Now the flag falls from the sky in the Battle of Lindanise, also known as the Battle of Valdemar (Danish: "Volmerslaget"), near Lindanise (Tallinn) in Estonia, on 15 June 1219. It is this third account that has been the most influential, and some historians have treated it as the primary account, taken from a (lost) source dating to the first half of the 15th century. In Olai's account, the battle was going badly, and defeat seemed imminent. However, the Danish bishop Anders Sunesen, on top of a hill overlooking the battle, prayed to God with his arms raised, and the Danes moved closer to victory the more he prayed. When he raised his arms the Danes surged forward, but when his arms grew tired and he let them fall, the Estonians turned the Danes back. Attendants rushed forward to raise his arms once again, and the Danes surged forward again. Eventually he was so tired that he dropped his arms, and the Danes then lost the advantage and were moving closer to defeat.
He needed two soldiers to keep his hands up, and when the Danes were about to lose, the Dannebrog miraculously fell from the sky. The King took it and showed it to the troops, their hearts were filled with courage, and the Danes won the battle. The possible historical nucleus behind this origin legend was extensively discussed by Danish historians in the 19th and 20th centuries. Jørgensen (1875) argues that Bishop Theoderich was the original instigator of the 1218 inquiry from Bishop Albert of Buxhoeveden to King Valdemar II which led to the Danish participation in the Baltic crusades. Jørgensen speculates that Bishop Theoderich might have carried the Knights Hospitaller's banner in the 1219 battle, and that the enemy thought this was the King's symbol and mistakenly stormed Bishop Theoderich's tent; he claims that the legend of the falling flag originates from this confusion in the battle. The Danish church historian L. P. Fabricius (1934) ascribes the origin to the 1208 Battle of Fellin, not the Battle of Lindanise in 1219, based on the earliest source available about the story. Fabricius speculated that it might have been Archbishop Andreas Sunesøn's personal ecclesiastical banner, or perhaps even the flag of Archbishop Absalon, under whose initiative and supervision several smaller crusades had already been conducted in Estonia. The banner would then already be known in Estonia. Fabricius repeats Jørgensen's idea about the flag being planted in front of Bishop Theoderich's tent, which the enemy mistakenly attacked believing it to be the tent of the King. A different theory is briefly discussed by Fabricius and elaborated more fully by Helge Bruhn (1949). Bruhn interprets the story in the context of the widespread tradition of the miraculous appearance of crosses in the sky in Christian legend, specifically comparing it to such an event attributed to a battle of 10 September 1217 near Alcazar, where it is said that a golden cross on white appeared in the sky to bring victory to the Christians. In Swedish national historiography of the 18th century, there is a tale paralleling the Danish legend, in which a golden cross appears in the blue sky during a Swedish battle in Finland in 1157. The white-on-red cross emblem originates in the age of the Crusades; in the 12th century, it was also used as a war flag by the Holy Roman Empire. In the "Gelre Armorial", dated 1340–1370, such a banner is shown alongside the coat of arms of the king of Denmark. This is the earliest known undisputed colour rendering of the Dannebrog. At about the same time, Valdemar IV of Denmark displays a cross in his coat of arms on his "Danælog" seal ("Rettertingsseglet", dated 1356). The image from the Gelre Armorial is nearly identical to an image found in a 15th-century coats-of-arms book now located in the National Archives of Sweden ("Riksarkivet"). The seal of Eric of Pomerania (1398) as king of the Kalmar Union displays the arms of Denmark chief dexter: three lions. In this version, the lions are holding a Dannebrog banner. Why the kings of Denmark began displaying the cross banner in their coats of arms in the 14th century is unknown. Caspar Paludan-Müller (1873) suggested that it may reflect a banner sent by the pope to the Danish king in support of the Baltic crusades. Adolf Ditlev Jørgensen (1875) identifies the banner as that of the Knights Hospitaller, which order had a presence in Denmark from the later 12th century.
Several coins, seals and images exist, both foreign and domestic, from the 13th to 15th centuries and even earlier, showing heraldic designs similar to the Dannebrog alongside the royal coat of arms (three blue lions on a golden shield). There is a record suggesting that the Danish army had a "chief banner" ("hoffuitbanner") in the early 16th century. Such a banner is mentioned in 1570 by Niels Hemmingsøn, in the context of a 1520 battle between Danes and Swedes near Uppsala, as having been nearly captured by the Swedes but saved by the heroic actions of the banner-carrier Mogens Gyldenstierne and of Peder Skram. The legend attributing the miraculous origin of the flag to the campaigns of Valdemar II of Denmark (r. 1202–1241) was recorded by Christiern Pedersen and Petrus Olai in the 1520s. Hans Svaning's "History of King Hans" from 1558–1559 and Johan Rantzau's "History about the Last Dithmarschen War" from 1569 record the further fate of the Danish "hoffuitbanner": according to this tradition, the original flag from the Battle of Lindanise was used in the small campaign of 1500, when King Hans tried to conquer Dithmarschen (in western Holstein in north Germany). The flag was lost in a devastating defeat at the Battle of Hemmingstedt on 17 February 1500. In 1559, King Frederik II recaptured it during his own Dithmarschen campaign. In 1576, Johan Rantzau's son, Henrik Rantzau, also wrote about the war and the fate of the flag, noting that the flag was in a poor condition when returned. He records that the flag, after its return to Denmark, was placed in the cathedral in Slesvig. The Slesvig historian Ulrik Petersen (1656–1735) confirms the presence of such a banner in the cathedral in the early 17th century, and records that it had crumbled away by about 1660. Contemporary records describing the Battle of Hemmingstedt make no reference to the loss of the original Dannebrog, although the capitulation states that all Danish banners lost in 1500 were to be returned. In a letter dated 22 February 1500 to Oluf Stigsøn, King John describes the battle but does not mention the loss of an important flag; in fact, the entire letter gives the impression that the lost battle was of limited importance. In 1598, Neocorus wrote that the banner captured in 1500 was brought to the church in Wöhrden and hung there for the next 59 years, until it was returned to the Danes as part of the peace settlement in 1559. Used as a maritime flag since the 16th century, the Dannebrog was introduced as a regimental flag in the Danish army in 1785, and for the militia ("landeværn") in 1801. From 1842, it was used as the flag of the entire army. In parallel with the development of Romantic nationalism in other European countries, the military flag increasingly came to be seen as representing the nation itself during the first half of the 19th century. Poems of this period invoking the Dannebrog were written by B.S. Ingemann, N.F.S. Grundtvig, Oehlenschläger, Chr. Winther and H.C. Andersen. By the 1830s, the military flag had become so popular as an unofficial national flag that its use by private citizens was outlawed in a circular enacted on 7 January 1834. In the national enthusiasm sparked by the First Schleswig War of 1848–1850, the flag was still very widely displayed, and the prohibition of private use was repealed in a regulation of 7 July 1854, for the first time allowing Danish citizens to display the Dannebrog (but not the swallow-tailed "Splitflag" variant).
Special permission to use the "Splitflag" was given to individual institutions and private companies, especially after 1870. On 10 April 1915, the hoisting of any "other" flag on Danish soil was prohibited. In 1886, the war ministry introduced a regulation indicating that the flag should be flown from military buildings on thirteen specified days, including royal birthdays, the date of the signing of the Constitution of 5 June 1849 and on days of remembrance for military battles. In 1913, the naval ministry issued its own list of flag days. From 1939 until 2012, the yearbook "Hvem-Hvad-Hvor" included a list of flag days. As of 2019 flag days can be viewed at the "Ministry of Justice (Justitsministeriet)" as well as "The Denmark Society (Danmarks-Samfundet)". The size and shape of the civil ensign (""Koffardiflaget"") for merchant ships is given in the regulation of 11 June 1748, which says: "A red flag with a white cross with no split end. The white cross must be of the flag's height. The two first fields must be square in form and the two outer fields must be lengths of those". The proportions are thus: 3:1:3 vertically and 3:1:4.5 horizontally. This definition are the absolute proportions for the Danish national flag to this day, for both the civil version of the flag (""Stutflaget""), as well as the merchant flag (""Handelsflaget""). The civil flag and the merchant flag are identical in colour and design. A regulation passed in 1758 required Danish ships sailing in the Mediterranean to carry the royal cypher in the center of the flag in order to distinguish them from Maltese ships, due to the similarity of the flag of the Sovereign Military Order of Malta. According to the regulation of 11 June 1748 the colour was simply red, which is common known today as "Dannebrog rød" (""Dannebrog red""). The only available red fabric dye in 1748 was made of madder root, which can be processed to produce a brilliant red dye (used historically for British soldiers' jackets). A regulation of 4 May 1927 once again states that Danish merchant ships have to fly flags according to the regulation of 1748. The first regulation regarding the "Splitflag" dates from 27 March 1630, in which King Christian IV orders that Norwegian "Defensionskibe" (armed merchants ships) may only use the "Splitflag" if they are in Danish war service. In 1685 an order, distributed to a number of cities in Slesvig, states that all ships must carry the Danish flag, and in 1690 all merchant ships are forbidden to use the "Splitflag", with the exception of ships sailing in the East Indies, West Indies and along the coast of Africa. In 1741 it is confirmed that the regulation of 1690 is still very much in effect; that merchant ships may not use the "Splitflag". At the same time the Danish East India Company is allowed to fly the "Splitflag" when past the equator. Some confusion must have existed regarding the "Splitflag". In 1696 the Admiralty presented the King with a proposal for a standard regulating both size and shape of the "Splitflag". In the same year a royal resolution defines the proportions of the "Splitflag", which in this resolution is called "Kongeflaget" (the King's flag), as follows: "The cross must be of the flags height. The two first fields must be square in form with the sides three times the cross width. The two outer fields are rectangular and the length of the square fields. The tails are the length of the flag". 
These numbers are the basis for the "Splitflag", or "Orlogsflag", today, though the numbers have been slightly altered. The term "Orlogsflag" dates from 1806 and denotes use in the Danish Navy. From about 1750 to the early 19th century, a number of ships and companies in which the government had an interest received approval to use the "Splitflag". In the royal resolution of 25 October 1939 for the Danish Navy, it is stated that the "Orlogsflag" is a "Splitflag" with a deep red ("dybrød") or madder red ("kraprød") colour. As for the national flag, no exact nuance is given, but in modern days it is given as Pantone 195U. Furthermore, the size and shape are corrected in this resolution to be: "The cross must be 1/7 of the flag's height. The two first fields must be square in form with the height of 3/7 of the flag's height. The two outer fields are rectangular and the length of the square fields. The tails are the length of the rectangular fields". Thus, compared to the standard of 1696, both the rectangular fields and the tails have decreased in size. The "Splitflag" and "Orlogsflag" have similar shapes but different sizes and shades of red. Legally, they are two different flags: the "Splitflag" is a Danish flag ending in a swallow-tail, is "Dannebrog red", and is used on land; the "Orlogsflag" is an elongated "Splitflag" with a deeper red colour and is only used at sea. The "Orlogsflag" with no markings may only be used by the Royal Danish Navy. There are, though, a few exceptions: a few institutions have been allowed to fly the clean "Orlogsflag", and the same flag with markings has been approved for a few dozen companies and institutions over the years. Furthermore, the "Orlogsflag" is only described as such if it has no additional markings; any swallow-tail flag, no matter the colour, is called a "Splitflag" provided it bears additional markings. The current version of the royal standard was introduced on 16 November 1972, when the Queen adopted a new version of her personal coat of arms. The royal standard is the flag of Denmark with a swallow-tail, charged with the monarch's coat of arms set in a white square. The centre square is 32 parts in a flag with the ratio 56:107. Greenland and the Faroe Islands are additional autonomous territories.
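To make the stated proportions concrete, here is a minimal Python sketch computing the geometry of the national flag from the 3:1:3 vertical and 3:1:4.5 to 3:1:5.25 horizontal proportions given above; the function and field names are illustrative.

```python
# Geometry of the Danish national flag from the legal proportions above:
# vertically 3:1:3 (field, cross, field); horizontally 3:1:L, with the
# last fields L anywhere between 4.5 and 5.25 units.

def dannebrog_dimensions(hoist_cm: float, last_fields: float = 4.5) -> dict:
    """Return the flag's measurements for a given hoist (height)."""
    if not 4.5 <= last_fields <= 5.25:
        raise ValueError("last fields outside the legal range")
    unit = hoist_cm / 7.0  # the hoist divides into 3 + 1 + 3 parts
    return {
        "cross_width": unit,                      # 1/7 of the hoist
        "square_field_side": 3 * unit,            # hoist-side fields
        "last_field_length": last_fields * unit,  # fly-side fields
        "total_length": (3 + 1 + last_fields) * unit,
    }

# A 70 cm hoist gives a legal length between 85 cm and 92.5 cm:
print(dannebrog_dimensions(70)["total_length"])                    # 85.0
print(dannebrog_dimensions(70, last_fields=5.25)["total_length"])  # 92.5
```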
https://en.wikipedia.org/wiki?curid=8752
Daniel Dennett Daniel Clement Dennett III (born March 28, 1942) is an American philosopher, writer, and cognitive scientist whose research centers on the philosophy of mind, philosophy of science, and philosophy of biology, particularly as those fields relate to evolutionary biology and cognitive science. As of 2017, he is the co-director of the Center for Cognitive Studies and the Austin B. Fletcher Professor of Philosophy at Tufts University. Dennett is an atheist and secularist, a member of the Secular Coalition for America advisory board, and a member of the Committee for Skeptical Inquiry, as well as an outspoken supporter of the Brights movement. Dennett is referred to as one of the "Four Horsemen of New Atheism", along with Richard Dawkins, Sam Harris, and the late Christopher Hitchens. Dennett is a member of the editorial board for "The Rutherford Journal". Dennett was born on March 28, 1942, in Boston, Massachusetts, the son of Ruth Marjorie (née Leck) and Daniel Clement Dennett Jr. Dennett spent part of his childhood in Lebanon, where, during World War II, his father was a covert counter-intelligence agent with the Office of Strategic Services posing as a cultural attaché to the American Embassy in Beirut. When he was five, his mother took him back to Massachusetts after his father died in an unexplained plane crash. Dennett's sister is the investigative journalist Charlotte Dennett. Dennett says that he was first introduced to the notion of philosophy while attending summer camp at age 11, when a camp counselor said to him, "You know what you are, Daniel? You're a philosopher." Dennett graduated from Phillips Exeter Academy in 1959, and spent one year at Wesleyan University before receiving his Bachelor of Arts in philosophy at Harvard University in 1963. At Harvard University he was a student of W. V. Quine. In 1965, he received his Doctor of Philosophy in philosophy at the University of Oxford, where he studied under Gilbert Ryle and was a member of Hertford College. His dissertation was entitled "The Mind and the Brain: Introspective Description in the Light of Neurological Findings; Intentionality". Dennett describes himself as "an autodidact—or, more properly, the beneficiary of hundreds of hours of informal tutorials on all the fields that interest me, from some of the world's leading scientists". He is the recipient of a Fulbright Fellowship, two Guggenheim Fellowships, and a Fellowship at the Center for Advanced Study in the Behavioral Sciences. He is a Fellow of the Committee for Skeptical Inquiry and a Humanist Laureate of the International Academy of Humanism. He was named 2004 Humanist of the Year by the American Humanist Association. In February 2010, he was named to the Freedom From Religion Foundation's Honorary Board of distinguished achievers. In 2012, he was awarded the Erasmus Prize, an annual award for a person who has made an exceptional contribution to European culture, society or social science, "for his ability to translate the cultural significance of science and technology to a broad audience." In 2018, he was awarded an honorary degree by Radboud University, located in Nijmegen, Netherlands, for his contributions to and influence on cross-disciplinary science. While he is a confirmed compatibilist on free will, in "On Giving Libertarians What They Say They Want"—chapter 15 of his 1978 book "Brainstorms"—Dennett articulated the case for a two-stage model of decision making in contrast to libertarian views. 
While other philosophers have developed two-stage models, including William James, Henri Poincaré, Arthur Compton, and Henry Margenau, Dennett defends this model on several grounds. Leading libertarian philosophers such as Robert Kane have rejected Dennett's model, specifically the idea that random chance is directly involved in a decision, on the basis that they believe this eliminates the agent's motives and reasons, character and values, and feelings and desires. They claim that, if chance is the primary cause of decisions, then agents cannot be liable for resultant actions. Dennett has remarked in several places (such as "Self-portrait", in "Brainchildren") that his overall philosophical project has remained largely the same since his time at Oxford. He is primarily concerned with providing a philosophy of mind that is grounded in empirical research. In his original dissertation, "Content and Consciousness", he broke up the problem of explaining the mind into the need for a theory of content and a theory of consciousness. His approach to this project has stayed true to this distinction. Just as "Content and Consciousness" has a bipartite structure, he similarly divided "Brainstorms" into two sections. He would later collect several essays on content in "The Intentional Stance" and synthesize his views on consciousness into a unified theory in "Consciousness Explained". These volumes respectively form the most extensive development of his views. In chapter 5 of "Consciousness Explained", Dennett describes his multiple drafts model of consciousness. He states that "all varieties of perception—indeed all varieties of thought or mental activity—are accomplished in the brain by parallel, multitrack processes of interpretation and elaboration of sensory inputs. Information entering the nervous system is under continuous 'editorial revision.'" (p. 111). Later he asserts, "These yield, over the course of time, something "rather like" a narrative stream or sequence, which can be thought of as subject to continual editing by many processes distributed around the brain, ..." (p. 135, emphasis in the original). In this work, Dennett's interest in the ability of evolution to explain some of the content-producing features of consciousness is already apparent, and this has since become an integral part of his program. He defends a theory known by some as Neural Darwinism. He also presents an argument against qualia: he argues that the concept is so confused that it cannot be put to any use or understood in any non-contradictory way, and therefore does not constitute a valid refutation of physicalism. His strategy mirrors his teacher Ryle's approach of redefining first-person phenomena in third-person terms, and denying the coherence of the concepts which this approach struggles with. Dennett self-identifies with a few terms. In "Consciousness Explained", he affirms "I am a sort of 'teleofunctionalist', of course, perhaps the original teleofunctionalist", and goes on to say, "I am ready to come out of the closet as some sort of verificationist" (pp. 460–461). Much of Dennett's work since the 1990s has been concerned with fleshing out his previous ideas by addressing the same topics from an evolutionary standpoint, from what distinguishes human minds from animal minds ("Kinds of Minds") to how free will is compatible with a naturalist view of the world ("Freedom Evolves").
Dennett sees evolution by natural selection as an algorithmic process (though he spells out that algorithms as simple as long division often incorporate a significant degree of randomness). This idea is in conflict with the evolutionary philosophy of paleontologist Stephen Jay Gould, who preferred to stress the "pluralism" of evolution (i.e., its dependence on many crucial factors, of which natural selection is only one). Dennett's views on evolution are identified as being strongly adaptationist, in line with his theory of the intentional stance and the evolutionary views of biologist Richard Dawkins. In "Darwin's Dangerous Idea", Dennett showed himself even more willing than Dawkins to defend adaptationism in print, devoting an entire chapter to a criticism of the ideas of Gould. This stems from Gould's long-running public debate with E. O. Wilson and other evolutionary biologists over human sociobiology and its descendant evolutionary psychology, which Gould and Richard Lewontin opposed, but which Dennett advocated, together with Dawkins and Steven Pinker. Gould argued that Dennett overstated his claims and misrepresented Gould's, to reinforce what Gould describes as Dennett's "Darwinian fundamentalism". Dennett's theories have had a significant influence on the work of evolutionary psychologist Geoffrey Miller. In "Darwin's Dangerous Idea", Dennett writes that evolution can account for the origin of morality. He rejects the idea of the naturalistic fallacy as the idea that ethics is in some free-floating realm, writing that the fallacy is to rush from facts to values. In his 2006 book "Breaking the Spell: Religion as a Natural Phenomenon", Dennett attempts to account for religious belief naturalistically, explaining possible evolutionary reasons for the phenomenon of religious adherence. In this book he declares himself to be "a bright", and defends the term. He has been doing research into clerics who are secretly atheists and how they rationalize their work. He found what he called a "don't ask, don't tell" conspiracy, because believers did not want to hear of loss of faith. That made unbelieving preachers feel isolated, but they did not want to lose their jobs and sometimes their church-supplied lodgings; they generally consoled themselves that they were doing good in their pastoral roles by providing comfort and required ritual. The research, with Linda LaScola, was further extended to include other denominations and non-Christian clerics. The research and stories Dennett and LaScola accumulated during this project were published in their 2013 co-authored book, "Caught in the Pulpit: Leaving Belief Behind". He has also written about and advocated the notion of memetics as a philosophically useful tool, most recently in his "Brains, Computers, and Minds", a three-part presentation through Harvard's MBB 2009 Distinguished Lecture Series. Dennett has been critical of postmodernism, having said: "Postmodernism, the school of 'thought' that proclaimed 'There are no truths, only interpretations' has largely played itself out in absurdity, but it has left behind a generation of academics in the humanities disabled by their distrust of the very idea of truth and their disrespect for evidence, settling for 'conversations' in which nobody is wrong and nothing can be confirmed, only asserted with whatever style you can muster." Dennett adopted and somewhat redefined the term "deepity", originally coined by Miriam Weizenbaum.
Dennett used "deepity" for a statement that is apparently profound, but is actually trivial on one level and meaningless on another. Generally, a deepity has two (or more) meanings: one that is true but trivial, and another that sounds profound and would be important if true, but is actually false or meaningless. Examples are "Que sera sera!", "Beauty is only skin deep!", "The power of intention can transform your life." The term many times. While approving of the increase in efficiency that humans reap by using resources such as expert systems in medicine or GPS in navigation, Dennett sees a danger in machines performing an ever-increasing proportion of basic tasks in perception, memory, and algorithmic computation because people may tend to anthropomorphize such systems and attribute intellectual powers to them that they do not possess. He believes the relevant danger from AI is that people will misunderstand the nature of basically "parasitic" AI systems, rather than employing them constructively to challenge and develop the human user's powers of comprehension. As given in his most recent book, "From Bacteria to Bach and Back", Dennett's views are contrary to those of Nick Bostrom. Although acknowledging that it is "possible in principle" to create AI with human-like comprehension and agency, Dennett maintains that the difficulties of any such "strong AI" project would be orders of magnitude greater than those raising concerns have realized. According to Dennett, the prospect of superintelligence (AI massively exceeding the cognitive performance of humans in all domains) is at least 50 years away, and of far less pressing significance than other problems the world faces. Dennett married Susan Bell in 1962. They live in North Andover, Massachusetts, and have a daughter, a son, and five grandchildren. Dennett is an avid sailor.
https://en.wikipedia.org/wiki?curid=8756
Darwin's Dangerous Idea Darwin's Dangerous Idea: Evolution and the Meanings of Life is a 1995 book by the philosopher Daniel Dennett, in which the author looks at some of the repercussions of Darwinian theory. The crux of the argument is that, whether or not Darwin's theories are overturned, there is no going back from the dangerous idea that design (purpose or what something is for) might not need a designer. Dennett makes this case on the basis that natural selection is a blind process, which is nevertheless sufficiently powerful to explain the evolution of life. Darwin's discovery was that the generation of life works algorithmically: the processes behind it operate in such a way that, given those processes, the results they tend toward must follow. Dennett says, for example, that by claiming that minds cannot be reduced to purely algorithmic processes, many of his eminent contemporaries are claiming that miracles can occur. These assertions have generated a great deal of debate and discussion in the general public. The book was a finalist for the 1995 National Book Award in non-fiction and the 1996 Pulitzer Prize for General Non-Fiction. Dennett's previous book was "Consciousness Explained" (1991). Dennett noted discomfort with Darwinism among not only lay people but even academics, and decided it was time to write a book dealing with the subject. "Darwin's Dangerous Idea" is not meant to be a work of science, but rather an interdisciplinary book; Dennett admits that he does not understand all of the scientific details himself. He goes into a moderate level of detail, but leaves it for the reader to go into greater depth if desired, providing references to this end. In writing the book, Dennett wanted to "get thinkers in other disciplines to take evolutionary theory seriously, to show them how they have been underestimating it, and to show them why they have been listening to the wrong sirens". To do this he tells a story, one that is mainly original but includes some material from his previous work. Dennett taught an undergraduate seminar at Tufts University on Darwin and philosophy, which included most of the ideas in the book. He also had the help of fellow staff and other academics, some of whom read drafts of the book. It is dedicated to W. V. O. Quine, "teacher and friend". "Starting in the Middle", Part I of "Darwin's Dangerous Idea", gets its name from a quote by Willard Van Orman Quine: "Analyze theory-building how we will, we all must start in the middle. Our conceptual firsts are middle-sized, middle-distance objects, and our introduction to them and to everything comes midway in the cultural evolution of the race." The first chapter, "Tell Me Why", is named after a song. Before Charles Darwin, and still today, a majority of people see God as the ultimate cause of all design, or the ultimate answer to 'why?' questions. John Locke argued for the primacy of mind before matter, and David Hume, while exposing problems with Locke's view, could not see any alternative. Darwin provided just such an alternative: evolution. Besides providing evidence of common descent, he introduced a mechanism to explain it: natural selection. According to Dennett, natural selection is a mindless, mechanical and algorithmic process—Darwin's dangerous idea. The third chapter introduces the concept of "skyhooks" and "cranes" (see below). He suggests that resistance to Darwinism is based on a desire for skyhooks, which do not really exist. 
According to Dennett, good reductionists explain apparent design without skyhooks; greedy reductionists try to explain it without cranes. Chapter 4 looks at the tree of life, such as how it can be visualized and some crucial events in life's history. The next chapter concerns the possible and the actual, using the 'Library of Mendel' (the space of all logically possible genomes) as a conceptual aid. In the last chapter of part I, Dennett treats human artifacts and culture as a branch of a unified Design Space. Descent or homology can be detected by shared design features that would be unlikely to appear independently. However, there are also "Forced Moves" or "Good Tricks" that will be discovered repeatedly, either by natural selection (see convergent evolution) or human investigation. The first chapter of part II, "Darwinian Thinking in Biology", asserts that life originated without any skyhooks, and that the orderly world we know is the result of a blind and undirected shuffle through chaos. The eighth chapter's message is conveyed by its title, "Biology is Engineering"; biology is the study of design, function, construction and operation. However, there are some important differences between biology and engineering. Related to the engineering concept of optimization, the next chapter deals with adaptationism, which Dennett endorses, calling Gould and Lewontin's "refutation" of it an illusion. Dennett thinks adaptationism is, in fact, the best way of uncovering constraints. The tenth chapter, entitled "Bully for Brontosaurus", is an extended critique of Stephen Jay Gould, who Dennett feels has created a distorted view of evolution with his popular writings, his "self-styled revolutions" against adaptationism, gradualism and other orthodox Darwinism all being false alarms. The final chapter of part II dismisses directed mutation, the inheritance of acquired traits and Teilhard's "Omega Point", and insists that other controversies and hypotheses (like the unit of selection and panspermia) have no dire consequences for orthodox Darwinism. "Mind, Meaning, Mathematics and Morality" is the name of Part III, which begins with a quote from Nietzsche. Chapter 12, "The Cranes of Culture", discusses cultural evolution. It asserts that the meme has a role to play in our understanding of culture, and that it allows humans, alone among animals, to "transcend" our selfish genes. "Losing Our Minds to Darwin" follows, a chapter about the evolution of brains, minds and language. Dennett criticizes Noam Chomsky's perceived resistance to the evolution of language, its modeling by artificial intelligence, and reverse engineering. The evolution of meaning is then discussed, and Dennett uses a series of thought experiments to persuade the reader that meaning is the product of meaningless, algorithmic processes. Chapter 15 asserts that Gödel's Theorem does not make certain sorts of artificial intelligence impossible. Dennett extends his criticism to Roger Penrose. The subject then moves on to the origin and evolution of morality, beginning with Thomas Hobbes (whom Dennett calls "the first sociobiologist") and Friedrich Nietzsche. He concludes that only an evolutionary analysis of ethics makes sense, though he cautions against some varieties of 'greedy ethical reductionism'. Before moving to the next chapter, he discusses some sociobiology controversies. The penultimate chapter, entitled "Redesigning Morality", begins by asking if ethics can be 'naturalized'. 
Dennett does not believe there is much hope of discovering an algorithm for doing the right thing, but expresses optimism in our ability to design and redesign our approach to moral problems. In "The Future of an Idea", the book's last chapter, Dennett praises biodiversity, including cultural diversity. In closing, he uses "Beauty and the Beast" as an analogy; although Darwin's idea may seem dangerous, it is actually quite beautiful. Dennett believes there is little or no principled difference between the naturally generated products of evolution and the man-made artifacts of human creativity and culture. For this reason he indicates deliberately that the complex fruits of the tree of life are in a very meaningful sense "designed"—even though he does not believe evolution was guided by a higher intelligence. Dennett supports using the notion of memes to better understand cultural evolution. He also believes even human creativity might operate by the Darwinian mechanism. This leads him to propose that the "space" describing biological "design" is connected with the space describing human culture and technology. A precise mathematical definition of Design Space is not given in "Darwin's Dangerous Idea". Dennett acknowledges this and admits he is offering a philosophical idea rather than a scientific formulation. Dennett describes natural selection as a substrate-neutral, mindless algorithm for moving through Design Space. Dennett writes about the fantasy of a "universal acid" as a liquid that is so corrosive that it would eat through anything that it came into contact with, even a potential container. Such a powerful substance would transform everything it was applied to, leaving something very different in its wake. This is where Dennett draws a parallel between the "universal acid" and Darwin's idea: "it eats through just about every traditional concept, and leaves in its wake a revolutionized world-view, with most of the old landmarks still recognizable, but transformed in fundamental ways." While there are people who would like to see Darwin's idea contained within the field of biology, Dennett asserts that this dangerous idea inevitably "leaks" out to transform other fields as well. Dennett uses the term "skyhook" to describe a source of design complexity that does not build on lower, simpler layers—in simple terms, a miracle. In philosophical arguments concerning the reducibility (or otherwise) of the human mind, Dennett's concept pokes fun at the idea of intelligent design emanating from on high, either originating from one or more gods, or providing its own grounds in an absurd, Munchausen-like bootstrapping manner. Dennett also accuses various competing neo-Darwinian ideas of making use of such supposedly unscientific skyhooks in explaining evolution, coming down particularly hard on the ideas of Stephen Jay Gould. Dennett contrasts theories of complexity that require such miracles with those based on "cranes", structures that permit the construction of entities of greater complexity but are themselves founded solidly "on the ground" of physical science. In "The New York Review of Books", John Maynard Smith praised "Darwin's Dangerous Idea": It is therefore a pleasure to meet a philosopher who understands what Darwinism is about, and approves of it. Dennett goes well beyond biology. He sees Darwinism as a corrosive acid, capable of dissolving our earlier belief and forcing a reconsideration of much of sociology and philosophy. Although modestly written, this is not a modest book. 
Dennett argues that, if we understand "Darwin's dangerous idea", we are forced to reject or modify much of our current intellectual baggage... Writing in the same publication, Stephen Jay Gould criticized "Darwin's Dangerous Idea" for being an "influential but misguided ultra-Darwinian manifesto": Daniel Dennett devotes the longest chapter in "Darwin's Dangerous Idea" to an excoriating caricature of my ideas, all in order to bolster his defense of Darwinian fundamentalism. If an argued case can be discerned at all amid the slurs and sneers, it would have to be described as an effort to claim that I have, thanks to some literary skill, tried to raise a few piddling, insignificant, and basically conventional ideas to "revolutionary" status, challenging what he takes to be the true Darwinian scripture. Since Dennett shows so little understanding of evolutionary theory beyond natural selection, his critique of my work amounts to little more than sniping at false targets of his own construction. He never deals with my ideas as such, but proceeds by hint, innuendo, false attribution, and error. Gould was also a harsh critic of Dennett's idea of the "universal acid" of natural selection and of his subscription to the idea of memetics; Dennett responded, and the exchange between Dennett, Gould, and Robert Wright was printed in the "New York Review of Books". Biologist H. Allen Orr wrote a critical review emphasizing similar points in the "Boston Review". The book has also provoked a negative reaction from creationists; Frederick Crews writes that "Darwin's Dangerous Idea" "rivals Richard Dawkins's "The Blind Watchmaker" as the creationists' most cordially hated text."
https://en.wikipedia.org/wiki?curid=8757
Douglas Hofstadter Douglas Richard Hofstadter (born February 15, 1945) is an American scholar of cognitive science, physics, and comparative literature whose research includes concepts such as the sense of self in relation to the external world, consciousness, analogy-making, artistic creation, literary translation, and discovery in mathematics and physics. His 1979 book "Gödel, Escher, Bach: An Eternal Golden Braid" won both the Pulitzer Prize for general nonfiction and a National Book Award (at that time called The American Book Award) for Science. His 2007 book "I Am a Strange Loop" won the Los Angeles Times Book Prize for Science and Technology. Hofstadter was born in New York City to Jewish parents: Nobel Prize-winning physicist Robert Hofstadter and Nancy Givan Hofstadter. He grew up on the campus of Stanford University, where his father was a professor, and attended the International School of Geneva in 1958–59. He graduated with Distinction in mathematics from Stanford University in 1965, and received his Ph.D. in physics from the University of Oregon in 1975, where his study of the energy levels of Bloch electrons in a magnetic field led to his discovery of the fractal known as Hofstadter's butterfly. Since 1988, Hofstadter has been the College of Arts and Sciences Distinguished Professor of Cognitive Science and Comparative Literature at Indiana University in Bloomington, where he directs the Center for Research on Concepts and Cognition, which consists of himself and his graduate students, forming the "Fluid Analogies Research Group" (FARG). He was initially appointed to Indiana University's Computer Science Department faculty in 1977, and at that time he launched his research program in computer modeling of mental processes (which he called "artificial intelligence research", a label he has since dropped in favor of "cognitive science research"). In 1984, he moved to the University of Michigan in Ann Arbor, where he was hired as a professor of psychology and was also appointed to the Walgreen Chair for the Study of Human Understanding. In 1988 he returned to Bloomington as "College of Arts and Sciences Professor" in both cognitive science and computer science. He was also appointed adjunct professor of history and philosophy of science, philosophy, comparative literature, and psychology, but has said that his involvement with most of those departments is nominal. In 1988 Hofstadter received the "In Praise of Reason" award, the Committee for Skeptical Inquiry's highest honor. In April 2009 he was elected a Fellow of the American Academy of Arts and Sciences and a member of the American Philosophical Society. In 2010 he was elected a member of the Royal Society of Sciences in Uppsala, Sweden. Hofstadter's many interests include music, visual art, the mind, creativity, consciousness, self-reference, translation and mathematics. At the University of Michigan and Indiana University, he and Melanie Mitchell coauthored a computational model of "high-level perception"—Copycat—and several other models of analogy-making and cognition, including the Tabletop project, co-developed with Robert M. French. Hofstadter's doctoral student James Marshall subsequently extended the Copycat project under the name "Metacat". The Letter Spirit project, implemented by Gary McGraw and John Rehling, aims to model artistic creativity by designing stylistically uniform "gridfonts" (typefaces limited to a grid). 
Other more recent models include Phaeaco (implemented by Harry Foundalis) and SeqSee (Abhijit Mahabal), which model high-level perception and analogy-making in the microdomains of Bongard problems and number sequences, respectively, as well as George (Francisco Lara-Dammer), which models the processes of perception and discovery in triangle geometry. The pursuit of beauty has driven Hofstadter both inside and outside his professional work. He seeks beautiful mathematical patterns, beautiful explanations, beautiful typefaces, beautiful sonic patterns in poetry, etc. Hofstadter has said of himself, "I'm someone who has one foot in the world of humanities and arts, and the other foot in the world of science." He has had several exhibitions of his artwork in various university galleries. These shows have featured large collections of his gridfonts, his ambigrams (pieces of calligraphy created with two readings, either of which is usually obtained from the other by rotating or reflecting the ambigram, but sometimes simply by "oscillation", like the Necker Cube or the rabbit/duck figure of Joseph Jastrow), and his "Whirly Art" (music-inspired visual patterns realized using shapes based on various alphabets from India). Hofstadter invented the term "ambigram" in 1984; many ambigrammists have since taken up the concept. Hofstadter collects and studies cognitive errors (largely, but not solely, speech errors), "bon mots" (spontaneous humorous quips), and analogies of all sorts, and his longtime observation of these diverse products of cognition, and his theories about the mechanisms that underlie them, have exerted a powerful influence on the architectures of the computational models he and FARG members have developed. All FARG computational models share certain key principles, and they share an overarching philosophy that all cognition is built from the making of analogies. The computational architectures that share these precepts are called "active symbols" architectures. Hofstadter's thesis about consciousness, first expressed in "Gödel, Escher, Bach" ("GEB") but also present in several of his later books, is that it is an emergent consequence of seething lower-level activity in the brain. In "GEB" he draws an analogy between the social organization of a colony of ants and the mind seen as a coherent "colony" of neurons. In particular, Hofstadter claims that our sense of having (or being) an "I" comes from the abstract pattern he terms a "strange loop", an abstract cousin of such concrete phenomena as audio and video feedback, which Hofstadter has defined as "a level-crossing feedback loop". The prototypical example of a strange loop is the self-referential structure at the core of Gödel's incompleteness theorems. Hofstadter's 2007 book "I Am a Strange Loop" carries his vision of consciousness considerably further, including the idea that each human "I" is distributed over numerous brains, rather than being limited to one. Hofstadter's writing is characterized by an intense interaction between form and content, as exemplified by the 20 dialogues in "GEB", many of which simultaneously discuss and imitate strict musical forms used by Bach, such as canons and fugues. Most of Hofstadter's books feature some kind of structural alternation: in "GEB" between dialogues and chapters, in "The Mind's I" between selections and reflections, in "Metamagical Themas" between chapters and postscripts, and so forth. 
In both his writing and his teaching, Hofstadter stresses the concrete, constantly using examples and analogies, and avoids the abstract. Typical of the courses he teaches is his seminar "Group Theory and Galois Theory Visualized", in which abstract mathematical ideas are rendered as concretely as possible. He puts great effort into making ideas clear and visual, and asserts that when he teaches, if his students do not understand something, it is never their fault but always his own. Hofstadter is passionate about languages. In addition to English, his mother tongue, he speaks French and Italian fluently (the language spoken at home with his children is Italian). At various times in his life, he has studied (in descending order of level of fluency reached) German, Russian, Spanish, Swedish, Mandarin, Dutch, Polish, and Hindi. His love of sounds pushes him to strive to minimize, and ideally get rid of, any foreign accent. "Le Ton beau de Marot: In Praise of the Music of Language" is a long book devoted to language and translation, especially poetry translation, and one of its leitmotifs is a set of 88 translations of "Ma Mignonne", a highly constrained poem by the 16th-century French poet Clément Marot. In this book, Hofstadter jokingly describes himself as "pilingual" (meaning that the sum total of the varying degrees of mastery of all the languages that he has studied comes to 3.14159...), as well as an "oligoglot" (someone who speaks "a few" languages). In 1999, the bicentennial year of the Russian poet and writer Alexander Pushkin, Hofstadter published a verse translation of Pushkin's classic novel-in-verse "Eugene Onegin". He has translated many other poems too (always respecting their formal constraints), and two novels (in prose): "La Chamade" ("That Mad Ache") by French writer Françoise Sagan, and "La Scoperta dell'Alba" ("The Discovery of Dawn") by Walter Veltroni, the then head of the Partito Democratico in Italy. "The Discovery of Dawn" was published in 2007, and "That Mad Ache" was published in 2009, bound together with Hofstadter's essay "Translator, Trader: An Essay on the Pleasantly Pervasive Paradoxes of Translation". Hofstadter's Law is "It always takes longer than you expect, even when you take into account Hofstadter's Law." The law is stated in "GEB". Hofstadter has supervised numerous Ph.D. students. Hofstadter has said that he feels "uncomfortable with the nerd culture that centers on computers". He admits that "a large fraction [of his audience] seems to be those who are fascinated by technology", but when it was suggested that his work "has inspired many students to begin careers in computing and artificial intelligence" he replied that he was pleased about that, but that he himself has "no interest in computers". In that interview he also mentioned a course he has twice given at Indiana University, in which he took a "skeptical look at a number of highly-touted AI projects and overall approaches". For example, upon the defeat of Garry Kasparov by Deep Blue, he commented that "It was a watershed event, but it doesn't have to do with computers becoming intelligent". Hofstadter's disinterest in computers is analogous to an astronomer's disinterest in telescopes: the instrument matters less to him than what it can reveal. Yet in his book "Metamagical Themas", he says that "in this day and age, how can anyone fascinated by creativity and beauty fail to see in computers the ultimate tool for exploring their essence?". 
Provoked by predictions of a technological singularity (a hypothetical moment in the future of humanity when a self-reinforcing, runaway development of artificial intelligence causes a radical change in technology and culture), Hofstadter has both organized and participated in several public discussions of the topic. At Indiana University in 1999 he organized such a symposium, and in April 2000, he organized a larger symposium titled "Spiritual Robots" at Stanford University, in which he moderated a panel consisting of Ray Kurzweil, Hans Moravec, Kevin Kelly, Ralph Merkle, Bill Joy, Frank Drake, John Holland and John Koza. Hofstadter was also an invited panelist at the first Singularity Summit, held at Stanford in May 2006. Hofstadter expressed doubt that the singularity will occur in the foreseeable future. In 1988 Dutch director Piet Hoenderdos created a docudrama about Hofstadter and his ideas, "Victim of the Brain", based on "The Mind's I". It includes interviews with Hofstadter about his work. When Martin Gardner retired from writing his "Mathematical Games" column for "Scientific American" magazine, Hofstadter succeeded him in 1981–83 with a column titled "Metamagical Themas" (an anagram of "Mathematical Games"). An idea he introduced in one of these columns was the concept of "Reviews of This Book", a book containing nothing but cross-referenced reviews of itself, which has an online implementation. One of Hofstadter's columns in "Scientific American" concerned the damaging effects of sexist language, and two chapters of his book "Metamagical Themas" are devoted to that topic, one of which is a biting analogy-based satire, "A Person Paper on Purity in Language" (1985), in which the reader's presumed revulsion at racism and racist language is used as a lever to motivate an analogous revulsion at sexism and sexist language; Hofstadter published it under the pseudonym William Satire, an allusion to William Safire. Another column reported on the discoveries made by University of Michigan professor Robert Axelrod in his computer tournament pitting many iterated prisoner's dilemma strategies against each other, and a follow-up column discussed a similar tournament that Hofstadter and his graduate student Marek Lugowski organized. The "Metamagical Themas" columns ranged over many themes, including patterns in Frédéric Chopin's piano music (particularly his études), the concept of superrationality (choosing to cooperate when the other party/adversary is assumed to be as intelligent as oneself), and the self-modifying game of Nomic, based on the way in which the legal system modifies itself, and developed by philosopher Peter Suber. Hofstadter was married to Carol Ann Brush until her death. They met in Bloomington, and married in Ann Arbor in 1985. They had two children, Danny and Monica. Carol died in 1993 from the sudden onset of a brain tumor—glioblastoma multiforme—when their children were five and two. The Carol Ann Brush Hofstadter Memorial Scholarship for Bologna-bound Indiana University students was established in 1996 in her name. Hofstadter's book "Le Ton beau de Marot" is dedicated to their two children and its dedication reads "To M. & D., living sparks of their Mommy's soul". In the fall of 2010, Hofstadter met Baofen Lin in a cha-cha class, and they married in Bloomington in September 2012. Hofstadter has composed numerous pieces for piano, and a few for piano and voice. 
He created an audio CD, "DRH/JJ", which includes all these compositions, performed mostly by pianist Jane Jackson, with a few performed by Brian Jones, Dafna Barenboim, Gitanjali Mathur and Hofstadter. The dedication for "I Am a Strange Loop" is: "To my sister Laura, who can understand, and to our sister Molly, who cannot." Hofstadter explains in the preface that his younger sister Molly never developed the ability to speak or understand language. As a consequence of his attitudes about consciousness and empathy, Hofstadter has been a vegan for roughly half his life. In the 1982 novel "2010: Odyssey Two", Arthur C. Clarke's first sequel to "2001: A Space Odyssey", HAL 9000 is described by Dr. Chandra as being caught in a "Hofstadter–Möbius loop". The 1984 film adaptation uses the term "H. Möbius loop". On April 3, 1995, Hofstadter's book "Fluid Concepts & Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought" was the first book ever sold by Amazon.com. Hofstadter has published numerous books and papers, including over 50 papers that were published through the Center for Research on Concepts and Cognition; he has also written forewords for or edited books by other authors, as well as translations.
https://en.wikipedia.org/wiki?curid=8758
Dahomey The Kingdom of Dahomey () was an African kingdom (located within the area of the present-day country of Benin) that existed from about 1600 until 1904, when the last king, Béhanzin, was defeated by the French and the country was annexed into the French colonial empire. Dahomey developed on the Abomey Plateau amongst the Fon people in the early 17th century and became a regional power in the 18th century by conquering key cities on the Atlantic coast. For much of the 18th and 19th centuries, the Kingdom of Dahomey was a key regional state, eventually ending its tributary status to the Oyo Empire. The Kingdom of Dahomey was an important regional power that had an organized domestic economy built on conquest and slave labor, significant international trade with Europeans, a centralized administration, taxation systems, and an organized military. Notable in the kingdom were significant artwork, an all-female military unit called the Dahomey Amazons by European observers, and the elaborate religious practices of Vodun, with the large festival of the Annual Customs of Dahomey, which involved large-scale human sacrifice. They traded prisoners, whom they captured during wars and raids, and exchanged them with Europeans for goods such as knives, bayonets, firearms, fabrics, and spirits. The Kingdom of Dahomey was referred to by many different names and has been written in a variety of ways, including "Danxome", "Danhome", and "Fon". The name "Fon" relates to the dominant ethnic and language group, the Fon people, of the royal families of the kingdom and is how the kingdom first became known to Europeans. The names "Dahomey", "Danxome", and "Danhome" all have a similar origin story, which historian Edna Bay says may be a false etymology. The story goes that Dakodonu, considered the second king in modern kings lists, was granted permission by the Gedevi chiefs, the local rulers, to settle in the Abomey plateau. Dakodonu requested additional land from a prominent chief named Dan (or Da), to which the chief responded sarcastically "Should I open up my belly and build you a house in it?" For this insult, Dakodonu killed Dan and began the construction of his palace on the spot. The name of the kingdom was derived from the incident: "Dan" (the chief Dan), "xo" (belly), and "me" (inside of), that is, "in the belly of Dan". The Kingdom of Dahomey was established around 1600 by the Fon people who had recently settled in the area (or were possibly a result of intermarriage between the Aja people and the local Gedevi). The foundational king for Dahomey is often considered to be Houegbadja (c. 1645–1685), who built the Royal Palaces of Abomey and began raiding and taking over towns outside of the Abomey plateau. King Agaja, Houegbadja's grandson, came to the throne in 1708 and began a significant expansion of the Kingdom of Dahomey. This expansion was made possible by the superior military force of King Agaja's Dahomey. In contrast to surrounding regions, Dahomey employed a professional standing army numbering around ten thousand. What the Dahomey lacked in numbers, they made up for in discipline and superior arms. In 1724, Agaja conquered Allada, the origin for the royal family according to oral tradition, and in 1727 he conquered Whydah. This increased size, particularly along the Atlantic coast, made Dahomey into a regional power. The result was near-constant warfare with the main regional state, the Oyo Empire, from 1728 until 1740. 
The warfare with the Oyo Empire resulted in Dahomey assuming a tributary status to the Oyo Empire. Tegbesu, also spelled Tegbessou, was King of Dahomey, in present-day Benin, from 1740 until 1774. Tegbesu was not the oldest son of King Agaja (1718–1740), but was selected following his father's death after winning a succession struggle with a brother. King Agaja had significantly expanded the Kingdom of Dahomey during his reign, notably conquering Whydah in 1727. This increased the size of the kingdom and increased both domestic dissent and regional opposition. Tegbessou ruled over Dahomey at a point where it needed to increase its legitimacy over those whom it had recently conquered. As a result, Tegbesu is often credited with a number of administrative changes designed to establish the legitimacy of the kingdom. The slave trade increased significantly during Tegbessou's reign and began to provide the largest part of the income for the king. In addition, Tegbesu's rule is the first with a significant "kpojito", or mother of the leopard, with Hwanjile in that role. The "kpojito" became a prominent and important person in Dahomey royalty. Hwanjile, in particular, is said to have dramatically changed the religious practices of Dahomey by creating two new deities and more closely tying worship to that of the king. According to one oral tradition, as part of the tribute owed by Dahomey to Oyo, Agaja had to give to Oyo one of his sons. The story claims that only Hwanjile, of all of Agaja's wives, was willing to allow her son to go to Oyo. According to the oral tradition, this act of sacrifice made Tegbesu favored by Agaja. Agaja reportedly told Tegbesu that he was the future king, but his brother Zinga was still the official heir. The kingdom fought the First Franco-Dahomean War and Second Franco-Dahomean War with France. The kingdom was reduced and made a French protectorate in 1894. In 1904 the area became part of a French colony, French Dahomey. In 1958 French Dahomey became the self-governing colony called the Republic of Dahomey and gained full independence in 1960. It was renamed the People's Republic of Benin in 1975 and the Republic of Benin in 1991. The Dahomey kingship exists as a ceremonial role to this day. Early writings, predominantly by European slave traders, often presented the kingdom as an absolute monarchy led by a despotic king. However, these depictions were often deployed as arguments by different sides in the slave trade debates, mainly in the United Kingdom, and as such were probably exaggerations. Recent historical work has emphasized the limits of monarchical power in the Kingdom of Dahomey. Historian John C. Yoder, with attention to the Great Council in the kingdom, argued that its activities do not "imply that Dahomey's government was democratic or even that her politics approximated those of nineteenth-century European monarchies. However, such evidence does support the thesis that governmental decisions were molded by conscious responses to internal political pressures as well as by executive fiat." The primary political divisions revolved around villages with chiefs and administrative posts appointed by the king and acting as his representatives to adjudicate disputes in the village. The King of Dahomey ("Ahosu" in the Fon language) was the sovereign power of the kingdom. All of the kings claimed to be part of the "Alladaxonou" dynasty, tracing descent from the royal family in Allada. 
Much of the succession rules and administrative structures were created early by Houegbadja, Akaba, and Agaja. Succession through the male members of the line was the norm, typically going to the oldest son, but not always. The king was selected largely through discussion and decision in the meetings of the Great Council, although how this operated was not always clear. The Great Council brought together a host of different dignitaries from throughout the kingdom yearly to meet at the Annual Customs of Dahomey. Discussions would be lengthy and included members, both men and women, from throughout the kingdom. At the end of the discussions, the king would declare the consensus for the group. Key positions in the King's court included the "migan", the "mehu", the "yovogan", the "kpojito" (or queen-mother), and later the "chacha" (or viceroy) of Whydah. The "migan" (a combination of mi, "our", and gan, "chief") was a primary consul for the king, a key judicial figure, and served as the head executioner. The "mehu" was similarly a key administrative officer who managed the palaces and the affairs of the royal family, economic matters, and the areas to the south of Allada (making the position key to contact with Europeans). The relations between Dahomey and other countries were complex and heavily impacted by the slave trade. The Oyo Empire engaged in regular warfare with the kingdom of Dahomey, and Dahomey was a tributary to Oyo from 1732 until 1823. The city-state of Porto-Novo, under the protection of Oyo, and Dahomey had a long-standing rivalry, largely over control of the slave trade along the coast. The rise of Abeokuta in the 1840s created another power rivaling Dahomey, largely by creating a safe haven for people fleeing the slave trade. The last known slave ship that sailed to the United States of America, making port in Mobile, Alabama, brought a group of 110 slaves from the Dahomey Kingdom, purchased long after the abolition of the international slave trade. Thomas Jefferson had signed the Act Prohibiting Importation of Slaves into law on March 2, 1807, effective January 1, 1808. The story was mentioned in the newspaper "The Tarboro Southerner" on July 14, 1860. On July 9, 1860, a schooner called "Clotilda", captained by William Foster, arrived in the bay of Mobile, Alabama, carrying the last known shipment of slaves to the US from the Dahomey Kingdom. In 1858, a man known as Timothy Meaher made a wager with acquaintances that, despite the law banning the slave trade, he could safely bring a load of slaves from Africa. Describing how he came into possession of the slaves, Captain William Foster wrote in his journal in 1860, "from thence I went to see the King of Dahomey. Having agreeably transacted affairs with the Prince we went to the warehouse where they had in confinement four thousand captives in a state of nudity from which they gave me liberty to select one hundred and twenty-five as mine offering to brand them for me, from which I preemptorily ["sic"] forbid; commenced taking on cargo of negroes ["sic"], successfully securing on board one hundred and ten." A notable descendant of a slave from this ship is Ahmir Khalib Thompson, the American music artist known as Questlove, whose story is depicted in the PBS television show "Finding Your Roots" (Season 4, Episode 9). 
The Kingdom of Dahomey also sent a diplomatic mission to Brazil in 1750, while that country was still under Portuguese rule, in order to strengthen diplomatic relations after an incident in 1743 which had led to the expulsion of Portuguese-Brazilian diplomatic authorities. The interest in maintaining these relations was economic, due to the slave trade. Dahomey was also the second country, after Portugal, to recognize the independence of Brazil in 1822. The military of the Kingdom of Dahomey was divided into two units: the right and the left. The right was controlled by the "migan" and the left was controlled by the "mehu". At least by the time of Agaja, the kingdom had developed a standing army that remained encamped wherever the king was. Soldiers in the army were recruited as young as seven or eight years old, initially serving as shield carriers for regular soldiers. After years of apprenticeship and military experience, they were allowed to join the army as regular soldiers. To further incentivize the soldiers, each soldier received bonuses paid in cowry shells for each enemy they killed or captured in battle. This combination of lifelong military experience and monetary incentives resulted in a cohesive, well-disciplined military. One European said Agaja's standing army consisted of "elite troops, brave and well-disciplined, led by a prince full of valor and prudence, supported by a staff of experienced officers". In addition to being well trained, the Dahomey army under Agaja was also very well armed. The Dahomey army favored imported European weapons over traditional weapons. For example, they used European flintlock muskets in long-range combat and imported steel swords and cutlasses in close combat. The Dahomey army also possessed twenty-five cannons. When going into battle, the king would take a secondary position to the field commander, the reason given being that if any spirit were to punish the commander for his decisions, it should not be the king. Unlike other regional powers, the military of Dahomey did not have a significant cavalry (like the Oyo Empire) or naval power (which prevented expansion along the coast). The Dahomey Amazons, a unit of all-female soldiers, are one of the most unusual aspects of the military of the kingdom. The Dahomean state became widely known for its corps of female soldiers. Their origins are debated; they may have formed from a palace guard or from gbetos (female hunting teams). They were organized around 1729 to fill out the army and make it look larger in battle, armed only with banners. The women reportedly behaved so courageously that they became a permanent corps. In the beginning, the soldiers were criminals pressed into service rather than being executed. Eventually, however, the corps became respected enough that King Ghezo ordered every family to send him their daughters, with the fittest being chosen as soldiers. The economic structure of the kingdom was highly intertwined with the political and religious systems, and these developed together significantly. The main currency was cowry shells. The domestic economy largely focused on agriculture and crafts for local consumption. Until the development of palm oil, very few agricultural or craft goods were traded outside of the kingdom. Markets served a key role in the kingdom and were organized around a rotating cycle of four days, with a different market each day (the market type for the day was religiously sanctioned). 
Agricultural work was largely decentralized and done by most families. However, with the expansion of the kingdom, plantations began to be a common agricultural method. Craftwork was largely dominated by a formal guild system. Herskovits recounts a complex tax system in the kingdom, in which officials who represented the king, the "tokpe", gathered data from each village regarding their harvest. The king then set a tax based upon the level of production and the village population. In addition, the king's own land and production were taxed. After significant road construction undertaken by the kingdom, toll booths were also established that collected yearly taxes based on the goods people carried and their occupation. Officials also sometimes imposed fines for public nuisance before allowing people to pass. The Kingdom of Dahomey shared many religious rituals with surrounding populations; however, it also developed unique ceremonies, beliefs, and religious stories for the kingdom. These included royal ancestor worship and the specific vodun practices of the kingdom. Early kings established clear worship of royal ancestors and centralized their ceremonies in the Annual Customs of Dahomey. The spirits of the kings had an exalted position in the land of the dead, and it was necessary to get their permission for many activities on earth. Ancestor worship pre-existed the kingdom of Dahomey; however, under King Agaja, a cycle of ritual was created, centered on first celebrating the ancestors of the king and then celebrating a family lineage. The Annual Customs of Dahomey ("xwetanu" or "huetanu" in Fon) involved multiple elaborate components, and some aspects may have been added in the 19th century. In general, the celebration involved distribution of gifts, human sacrifice, military parades, and political councils. Its main religious aspect was to offer thanks and gain the approval of the ancestors of the royal lineage. However, the custom also included military parades, public discussions, gift giving (the distribution of money to and from the king), and human sacrifice and the spilling of blood. Human sacrifice was an important part of the practice. During the Annual Custom, 500 prisoners would be sacrificed. In addition, when a ruler died, hundreds to thousands of prisoners would be sacrificed. As many as 4,000 were reported killed in one of these ceremonies in 1727. Dahomey had a unique form of West African Vodun that linked together preexisting animist traditions with vodun practices. Oral history recounted that Hwanjile, a wife of Agaja and mother of Tegbessou, brought Vodun to the kingdom and ensured its spread. The primary deity is the combined Mawu-Lisa (Mawu having female characteristics and Lisa having male characteristics), and it is claimed that this god took over the world that was created by their mother Nana-Buluku. Mawu-Lisa governs the sky and heads the highest pantheon of gods, but other gods exist in the earth and in thunder. Religious practice organized different priesthoods and shrines for each different god and each different pantheon (sky, earth or thunder). Women made up a significant portion of the priest class, and the chief priest was always a descendant of Dakodonou. The arts in Dahomey were unique and distinct from the artistic traditions elsewhere in Africa. The arts were substantially supported by the king and his family, had non-religious traditions, assembled multiple different materials, and borrowed widely from other peoples in the region. 
Common art forms included wood and ivory carving, metalwork (including silver, iron and brass), appliqué cloth, and clay bas-reliefs. Kings were key in supporting the arts, and many of them provided significant sums to artists, resulting in the development of a non-religious artistic tradition that was unique in the region. Artists did not belong to a specific class; both royalty and commoners made important artistic contributions. Kings were often depicted in large zoomorphic forms, with each king resembling a particular animal in multiple representations. Suzanne Blier identifies two unique aspects of art in Dahomey: the assemblage of different components and the borrowing from other states. Assemblage, the combination of multiple components (often of different materials) in a single piece of art, was common in all forms and was the result of the various kings promoting finished products rather than particular styles. This assembling may have been a result of the second feature, which involved the wide borrowing of styles and techniques from other cultures and states. Clothing, cloth work, architecture, and the other forms of art all resemble other artistic representation from around the region. Much of the artwork revolved around the royalty. Each of the palaces at the Royal Palaces of Abomey contained elaborate bas-reliefs ("noundidė" in Fon) providing a record of the king's accomplishments. Each king had his own palace within the palace complex, and within the outer walls of his personal palace was a series of clay reliefs designed specific to that king. These were not solely designed for royalty; chiefs, temples, and other important buildings had similar reliefs. The reliefs often presented Dahomey's kings in military battles against the Oyo or Mahi tribes to the north of Dahomey, with their opponents shown in various negative depictions (the king of Oyo is depicted in one as a baboon eating a cob of corn). Historical themes dominated the representations, and characters were simply designed and often assembled on top of each other or in close proximity, creating an ensemble effect. In addition to the royal depictions in the reliefs, royal members were depicted in power sculptures known as "bocio", which incorporated mixed materials (including metal, wood, beads, cloth, fur, feathers, and bone) onto a base forming a standing figure. The bocio are religiously designed to bring different forces together in order to unlock their power. In addition, the cloth appliqué of Dahomey depicted royalty, often in similar zoomorphic representation, and dealt with matters similar to the reliefs, often the kings leading during warfare. Dahomey had a distinctive tradition of casting small brass figures of animals or people, which were worn as jewellery or displayed in the homes of the relatively well-off. These figures, which continue to be made for the tourist trade, were relatively unusual in traditional African art in having no religious aspect, being purely decorative, as well as indicative of some wealth. Also unusual, by being so early and clearly provenanced, is a carved wooden tray (not dissimilar to much more recent examples) in Ulm, Germany, which was brought to Europe before 1659, when it was described in a printed catalogue. The Kingdom of Dahomey has been depicted in a number of different literary works of fiction or creative nonfiction. 
"In Dahomey" (1903) was a successful Broadway musical, the first full-length Broadway musical written entirely by African Americans, in the early 20th century. Novelist Paul Hazoumé's first novel "Doguicimi" (1938) was based on decades of research into the oral traditions of the Kingdom of Dahomey during the reign of King Ghezo. The anthropologist Judith Gleason wrote a novel, "Agõtĩme: Her Legend" (1970), centered on one of the wives of a king of Dahomey in the late 18th century, who offends her husband who sells her to slavery in Brazil; she makes a bargain with a "vodu" (deity), putting her son on the throne of Dahomey and bringing her home. Another novel tracing the background of a slave, this time in the United States, was "The Dahomean", or "The Man from Dahomey" (1971), by the African-American novelist Frank Yerby; its hero is an aristocratic warrior. In the third of George McDonald Fraser's Flashman novels, "Flash for Freedom" (1971), Flashman dabbles in the slave trade and visits Dahomey. "The Viceroy of Ouidah" (1980) by Bruce Chatwin is the story of a Brazilian who, hoping to make his fortune from slave trading, sails to Dahomey in 1812, befriending its unbalanced king and coming to a bad end. The book was later adapted into the film "Cobra Verde" (1987) directed by Werner Herzog. The main character of one of the two parallel stories in "Will Do Magic for Small Change" (2016) by Andrea Hairston is Kehinde, a Yoruba woman forced into the Dahomean army; she struggles with divided loyalty, and after the fall of Behanzin, joins a French entertainment troupe who intend to exhibit her as an Amazon at the Chicago World's Fair. Behanzin's resistance to the French attempt to end slave trading and human sacrifice has been central to a number of works. Jean Pliya's first play "Kondo le requin" (1967), winner of the Grand Prize for African History Literature, tells the story of Behanzin's struggle to maintain the old order. Maryse Condé's novel "The Last of the African Kings" (1992) similarly focuses on Behanzin's resistance and his exile to the Caribbean. The novel "Thread of Gold Beads" (2012) by Nike Campbell-Fatoki centers on a daughter of Behanzin; through her eyes, the end of his reign is observed.
https://en.wikipedia.org/wiki?curid=8765
Dragoon Dragoons originally were a class of mounted infantry, who used horses for mobility but dismounted to fight on foot. From the early 17th century onward, dragoons were increasingly also employed as conventional cavalry, trained for combat with swords from horseback. Dragoon regiments were established in most European armies during the late 17th and early 18th centuries. The name is derived from a type of firearm, called a "dragon", which was a handgun version of a blunderbuss, carried by dragoons of the French Army. The title has been retained in modern times by a number of armoured or ceremonial mounted regiments. The establishment of dragoons evolved from the practice of sometimes transporting infantry by horse when speed of movement was needed. In 1552, Prince Alexander of Parma mounted several companies of infantry on pack horses to achieve surprise. Another early instance was ordered by Louis of Nassau in 1572, during operations near Mons in Hainaut, when 500 infantry were transported this way. It is also suggested that the first dragoons were raised by the Marshal de Brissac in 1600. According to old German literature, dragoons were invented by Count Ernst von Mansfeld, one of the greatest German military commanders, in the early 1620s. There are other instances of mounted infantry predating this. However, Mansfeld, who had learned his profession in Hungary and the Netherlands, often used horses to make his foot troops more mobile, creating what was called an "armée volante" (French for "flying army"). In the 16th-century Spanish civil wars in Peru, conquistadors fought on horseback with arquebuses, prefiguring European dragoons. The name possibly derives from an early weapon, a short wheellock called a "dragon", because the first dragoons raised in France had their carbines' muzzles decorated with a dragon's head. The practice comes from a time when all gunpowder weapons had distinctive names, including the culverin, serpentine, falcon, falconet, etc. It is also sometimes claimed that a galloping infantryman, with his loose coat and burning match, resembled a dragon. It has also been suggested that the name derives from the German "tragen" or the Dutch "dragen", both being the verb "to carry" in their respective languages. The filmmaker Howard Reid claims that the name and role descend from the Latin Draconarius. Dragoon is occasionally used as a verb to mean to subjugate or persecute by the imposition of troops, and by extension to compel by any violent measures or threats. The term dates from 1689, at a time when dragoons were being used by the French monarchy to persecute Protestants, particularly by forcing Protestants to lodge a dragoon in their house to watch over them, at the householder's expense. Early dragoons were not organized in squadrons or troops as cavalry were, but in companies like the infantry; their officers and non-commissioned officers bore infantry ranks. Dragoon regiments used drummers, not buglers, to communicate orders on the battlefield. The flexibility of mounted infantry made dragoons a useful arm, especially when employed for what would now be termed "internal security" against smugglers or civil unrest, and on line-of-communication security duties. During the English Civil War dragoons were used for a variety of tasks: providing outposts, holding defiles or bridges in the front or rear of the main army, lining hedges or holding enclosures, and providing dismounted musketeers to support regular cavalry. 
In the closing stages of the Battle of Naseby, Okey's Dragoons, who had started the action as dismounted musketeers, mounted their horses and charged, possibly the first time this was done. Supplied with inferior horses and more basic equipment, the dragoon regiments were cheaper to recruit and maintain than the expensive regiments of cavalry. When Gustav II Adolf introduced dragoons into the Swedish Army in the 17th century, he provided them with a sabre, an axe and a matchlock musket, utilizing them as "labourers on horseback". Many European armies henceforth imitated this all-purpose set of weaponry. A non-military use of dragoons was the 1681 Dragonnades, a policy instituted by Louis XIV to intimidate Huguenot families into either leaving France or re-converting to Catholicism by billeting ill-disciplined dragoons in Protestant households. While other categories of infantry and cavalry were also used, the mobility, flexibility and available numbers of the dragoon regiments made them particularly suitable for repressive work of this nature over a wide area. In the Spanish Army, Pedro de la Puente organized a body of dragoons in Innsbruck in 1635. In 1640, a tercio of a thousand dragoons armed with the arquebus was created in Spain. By the end of the 17th century, the Spanish Army had three tercios of dragoons in Spain, plus three in the Netherlands and three more in Milan. In 1704, the Spanish dragoons were reorganised into regiments by Philip V, as were the rest of the tercios. Towards the end of 1776, George Washington realized the need for a mounted branch of the American military. In January 1777 four regiments of light dragoons were raised. Short-term enlistments were abandoned and the dragoons joined for three years, or "the war". They participated in most of the major engagements of the American War of Independence, including the Battles of White Plains, Trenton, Princeton, Brandywine, Germantown, Saratoga, Cowpens, and Monmouth, as well as the Yorktown campaign. Dragoons were at a disadvantage when engaged against true cavalry, and constantly sought to improve their horsemanship, armament and social status. By the Seven Years' War the primary role of dragoons in most European armies had progressed from that of mounted infantry to that of heavy cavalry. Earlier dragoon responsibilities for scouting and picket duty had passed to hussars and similar light cavalry corps in the French, Austrian, Prussian, and other armies. In the Imperial Russian Army, due to the availability of the Cossack troops, the dragoons were retained in their original role for much longer. An exception to the rule was the British Army. To reduce military budgets, all horse (cavalry) regiments were gradually demoted to dragoons from 1746 onward — which meant they were paid on a lower scale. When this was completed in 1788, the heavy cavalry regiments had become either Dragoon Guards or Heavy Dragoons (depending on their precedence). The designation of Dragoon Guards did not mean that these regiments (the former 2nd to 8th Horse) had become Household Troops, but simply that they had been given a more dignified title to compensate for the loss of pay and prestige. Starting in 1756, seven regiments of Light Dragoons were raised. These Light Dragoons were trained in reconnaissance, skirmishing and other work requiring endurance, in accordance with contemporary standards of light cavalry performance. 
The success of this new class of cavalry was such that eight regular Dragoon regiments were converted to Light Dragoons between 1768 and 1783. During the Napoleonic Wars, dragoons generally assumed a cavalry role, though remaining a lighter class of mounted troops than the armored cuirassiers. Dragoons rode larger horses than the light cavalry and wielded straight rather than curved swords. Emperor Napoleon often formed complete divisions out of his 30 dragoon regiments and used them as battle cavalry to break the enemy's main resistance. In 1809, French dragoons scored notable successes against Spanish armies at the Battle of Ocaña and the Battle of Alba de Tormes. British heavy dragoons made devastating charges against French infantry at the Battle of Salamanca in 1812 and at the Battle of Waterloo in 1815. Thirty-one regiments were in existence at the height of the Napoleonic Wars: seven Dragoon Guards regiments and 24 cavalry of the line regiments. The Dragoon Guards and Dragoon regiments were the heavy cavalry regiments of the British Army, although by continental standards they were not the heaviest type of cavalry since they carried no armour (unlike cuirassiers). While some of the cavalry regiments of the line were simply designated as regiments of "dragoons", the lighter cavalry regiments, which were particularly mobile, became regiments of "Light Dragoons", employing the 1796-pattern light cavalry sabres. From 1805 four regiments of Light Dragoons were designated Hussars (the 7th, 10th, 15th and 18th Regiments), differentiated by uniform and the wearing of mustaches. After the end of the Napoleonic Wars (starting in 1816), some regiments became "lancers", identified by the lances that they carried. The creation of a unified German state in 1871 brought together the dragoon regiments of Prussia, Bavaria, Saxony, Mecklenburg, Oldenburg, Baden, Hesse and Württemberg in a single numbered sequence, although historic distinctions of insignia and uniform were largely preserved. Two regiments of the Imperial Guard were designated as dragoons. The Austrian (later Austro-Hungarian) Army of the 19th century included six regiments of dragoons in 1836, classed as heavy cavalry for shock action but in practice used as medium troops with a variety of roles. After 1859 all but two Austrian dragoon regiments were converted to cuirassiers or disbanded. From 1868 to 1918 the Austro-Hungarian dragoons numbered 15 regiments. During the 18th century several regiments of dragoons were created in Spain's American viceroyalties to protect the northern provinces and borders of New Spain in the present-day states of California, Nevada, Colorado, Texas, Kansas, Arizona, Montana, North Dakota and South Dakota. Some of these functioned as a police force. In 1803, the regiments of dragoons began to be called light cavalry ("cazadores") and dragoons disappeared from the Spanish Army shortly after 1815. In New Spain, soon to be México, dragoons were important and elite units of the Royal Army. A number of dragoons became important military and political figures, among them Ignacio Allende and Juan Aldama, members of the Queen's Regiment of Dragoons who defected and then initiated the independence movement in México in 1810. Another important dragoon was Agustín de Iturbide, who would ultimately achieve Mexican independence in 1821. 
He was known as the greatest horseman in México and became so renowned in battle during his youth that he acquired the nickname "El Dragón de Hierro" or "The Iron Dragon" (the Spanish word "dragón" means both "dragon" and "dragoon"). He would go on to become Agustín I, after being elected Emperor of México. The political importance of dragoons during this period in the nascent country cannot be overstated. Prior to the War of 1812, the U.S. organized the Regiment of Light Dragoons. For the war a second regiment was activated; that regiment was consolidated with the original regiment in 1814. The original regiment was consolidated with the Corps of Artillery in June 1815. The 1st United States Dragoons explored Iowa after the Black Hawk Purchase put the area under U.S. control. In the summer of 1835, the regiment blazed a trail along the Des Moines River and established outposts from present-day Des Moines to Fort Dodge. In 1933, the State of Iowa opened the Dragoon Trail, a scenic and historic drive that follows the path of the 1st United States Dragoons on their historic march. In 1861 the two existing U.S. Dragoon regiments were re-designated as the 1st and 2nd Cavalry. This reorganization did not affect their role or equipment, although the traditional orange uniform braiding of the dragoons was replaced by the standard yellow of the Cavalry branch. This marked the official end of dragoons in the U.S. Army, although certain modern units trace their origins back to the historic dragoon regiments. In several stages between 1816 and 1861, the 21 existing Light Dragoon regiments in the British Army were disbanded or converted to lancers or hussars. Between 1881 and 1907 all Russian cavalry (other than Cossacks and Imperial Guard regiments) were designated as dragoons, reflecting an emphasis in their training on dismounted action as well as new cavalry tactics, and a growing acceptance that historical cavalry tactics were impractical against modern firepower. Upon the reinstatement of Uhlan and Hussar regiments in 1907, their training pattern, as well as that of the Cuirassiers of the Guard, remained unchanged until the collapse of the Russian Imperial Army. In Japan, in the late 19th and early 20th centuries, dragoons were deployed in the same way as in other armies, but were dressed as hussars. In 1914 there were still dragoon regiments in the British, French, German, Russian, Austro-Hungarian, Peruvian, Norwegian, Swedish, Danish and Spanish armies. Their uniforms varied greatly, lacking the characteristic features of hussar or lancer regiments. There were occasional reminders of the mounted infantry origins of this class of soldier. Thus the 28 dragoon regiments of the Imperial German Army wore the Pickelhaube (spiked helmet) of the same design as those of the infantry, and the British dragoons wore scarlet tunics for full dress while hussars and all but one of the lancer regiments wore dark blue. In other respects, however, dragoons had adopted the same tactics, roles and equipment as other branches of the cavalry, and the distinction had become simply one of traditional titles. Weaponry had ceased to have a historic connection, with both the French and German dragoon regiments carrying lances during the early stages of World War I. The historic German, Russian and Austro-Hungarian dragoon regiments ceased to exist as distinct branches following the overthrow of the respective imperial regimes of these countries during 1917–18. 
The Spanish dragoons, which dated back to 1640, were reclassified as numbered cavalry regiments in 1931 as part of the army modernization policies of the new republic. The Australian Light Horse were similar to 18th-century dragoon regiments in some respects, being mounted infantry which normally fought on foot, their horses' purpose being transportation. They served during the Second Boer War and World War I. The Australian 4th Light Horse Brigade became famous for the Battle of Beersheba in 1917, where they charged on horseback using rifle bayonets, since neither sabres nor lances were part of their equipment. Later in the Palestine campaign, Pattern 1908 cavalry swords were issued and used in the campaign leading to the fall of Damascus. Probably the last use of real dragoons (infantry on horseback) in combat was made by the Portuguese Army in the war in Angola during the 1960s and 1970s. In 1966, the Portuguese created an experimental horse platoon to operate against the guerrillas in the high-grass region of eastern Angola, in which each soldier was armed with a G3 assault rifle for combat on foot and with an automatic pistol to fire from horseback. The troops on horseback were able to operate in difficult terrain unsuited to motor vehicles and had the advantage of being able to control the area around them, with a clear view over the grass that foot troops did not have. Moreover, these unconventional troops created a psychological impact on an enemy that was not used to facing horse troops, and thus had no training or strategy to deal with them. The experimental horse platoon was so successful that its entire parent battalion was transformed from an armored reconnaissance unit to a three-squadron horse battalion known as the "Dragoons of Angola". One of the typical operations carried out by the Dragoons of Angola, in cooperation with airmobile forces, consisted of the dragoons chasing the guerrillas and pushing them in one direction, with the airmobile troops being launched from helicopters in the enemy rear, trapping the enemy between the two forces. Until 1918, "Dragoner" (English: dragoon) was the designation given to the lowest rank in the dragoon regiments of the Austro-Hungarian and Imperial German Armies. The "Dragoner" rank, together with the private ranks of the other branches of service, belonged to the so-called "Gemeine" rank group. The Brazilian president's honor guard is provided (amongst other units) by a regiment of dragoons: the 1st Guards Cavalry Regiment of the Brazilian Army. This regiment is known as the "Dragões da Independência" (Independence Dragoons). The name was given in 1927 and refers to the fact that a detachment of dragoons escorted the Prince Royal of Portugal, Pedro I, at the time when he declared Brazilian independence from Portugal, on September 7, 1822. The Independence Dragoons wear 19th-century dress uniforms similar to those of the earlier Imperial Honor Guard, which have been used as the regimental full dress uniform since 1927. The uniform was designed by Debret, in white and red, with plumed bronze helmets. The colors and pattern were influenced by the Austrian dragoons of the period, as the Brazilian Empress Consort was an Austrian Archduchess. The color of the plumes varies according to rank. The Independence Dragoons are armed with lances and sabres, the latter only for the officers and the colour guard. 
The regiment was established in 1808 by the Prince Regent and future king of Portugal, John VI, with the duty of protecting the Portuguese royal family, which had sought refuge in Brazil during the Napoleonic Wars. However, dragoons had existed in Portugal since at least the early 18th century and, in 1719, units of this type of cavalry were sent to Brazil, initially to escort shipments of gold and diamonds and to guard the Viceroy, who resided in Rio de Janeiro (1st Cavalry Regiment – Vice-Roy Guard Squadron). Later, they were also sent to the south to serve against the Spanish during frontier clashes. After the proclamation of Brazilian independence, the title of the regiment was changed to that of the Imperial Honor Guard, with the role of protecting the Imperial Family. The Guard was later disbanded by Emperor Pedro II and would be recreated only later, in the republican era. At the time of the proclamation of the Republic in 1889, horse #6 of the Imperial Honor Guard was ridden by the officer making the declaration of the end of Imperial rule, Second Lieutenant Eduardo José Barbosa. This is commemorated by the custom under which the horse bearing this number is used only by the commander of the modern regiment. There are three dragoon regiments in the Canadian Forces: the Royal Canadian Dragoons and two reserve regiments, the British Columbia Dragoons and the Saskatchewan Dragoons. The Royal Canadian Dragoons is the senior armoured regiment in the Canadian Forces. The current role of the Royal Canadian Dragoons is to provide armoured reconnaissance support to 2 Canadian Mechanized Brigade Group (2 CMBG) operations. The Royal Canadian Mounted Police were accorded the formal status of a regiment of dragoons in 1921; however, the modern RCMP does not retain any military status. Chile's national police trace their origin to a dragoon unit founded as the "Dragones de la Reina" (Queen's Dragoons) in 1758, renamed the Dragoons of Chile in 1812, and reorganized as the Carabineros de Chile, the national police of Chile, in 1903. The military counterpart, the former 15th Reinforced Regiment "Dragoons", has since 2010 served as the 6th Armored Cavalry Squadron "Dragoons" of the 4th Armored Brigade "Chorrillos", based in Punta Arenas and forming part of the 5th Army Division. The Royal Danish Army includes amongst its historic regiments the Jutish Dragoon Regiment, which was raised in 1670. The modern French Army retains three dragoon regiments from the thirty-two in existence at the beginning of World War I: the 2nd, a nuclear, biological and chemical protection regiment; the 5th, an experimental combined-arms regiment; and the 13th, a special reconnaissance regiment. Beginning in the 17th century, the mercenary army of the Grand Duchy of Lithuania included dragoon units. In the middle of the 17th century there were 1,660 dragoons in an army totaling 8,000 men. By the 18th century there were four regiments of dragoons. Lithuanian cavalrymen served in dragoon regiments of both the Russian and Prussian armies after the Partitions of the Polish–Lithuanian Commonwealth. Between 1920–1924 and 1935–1940 the Lithuanian Army included the Third Dragoon "Iron Wolf" Regiment. The dragoons were the equivalent of the present-day Volunteer Forces. In modern Lithuania the Grand Duke Butigeidis Battalion (Lithuanian: "didžiojo kunigaikščio Butigeidžio dragūnų batalionas") is designated as dragoons, with a motorized infantry role. 
In the Norwegian Army during the early part of the 20th century, dragoons served in part as mounted troops and in part on skis or bicycles ("hjulryttere", meaning "wheel-riders"). Dragoons fought on horses, bicycles and skis against the German invasion in 1940. After World War II the dragoon regiments were reorganized as armoured reconnaissance units. "Dragon" is the rank of a compulsory-service private cavalryman, while enlisted (regular) cavalrymen have the same rank as infantrymen: "Grenader". The Presidential Escort Life Guard Dragoons Regiment "Field Marshal Domingo Nieto", named after Field Marshal Domingo Nieto, was the traditional guard of the President of the Republic and of the Government Palace of Perú until its disbandment on March 5, 1987. However, by Ministerial Resolution No 139-2012/DE/EP of February 2, 2012, the restoration of the Cavalry Regiment "Marshal Domingo Nieto" as the official escort of the President of the Republic of Peru was announced. The main mission of the reestablished regiment was to guarantee the security of the President of the Republic and of the Government Palace. This regiment of dragoons was created in 1904 following the suggestion of a French military mission which undertook the reorganization of the Peruvian Army in 1896. The initial title of the unit was Cavalry Squadron "President's Escort". It was modelled on the French dragoons of the period. The unit was later renamed as the Cavalry Regiment "President's Escort" before receiving its current title in 1949. The Peruvian Dragoon Guard has throughout its existence worn French-style uniforms of black tunic and red breeches in winter and white coat and red breeches in summer, with red-and-white plumed bronze helmets bearing the coat of arms of Peru, and golden or red epaulettes depending on rank. They retain their original armament of lances and sabres; until the 1980s, rifles were also used for dismounted drill. At 13:00 hours every day, the main esplanade in front of the Government Palace of Perú, fronting Lima's Main Square, serves as the stage for the changing of the guard, undertaken by members of the Presidential Life Guard Escort Dragoons, mounted or dismounted. While the dismounted changing is held on Mondays and Fridays, the mounted ceremony is held twice a month on a Sunday. The Portuguese Army still maintains two units which are descended from former regiments of dragoons. These are the 3rd Regiment of Cavalry (the former "Olivença Dragoons") and the 6th Regiment of Cavalry (the former "Chaves Dragoons"). Both regiments are presently armoured units. The Portuguese Rapid Reaction Brigade's Armoured Reconnaissance Squadron – a unit from the 3rd Regiment of Cavalry – is known as the "Paratroopers Dragoons". During the Portuguese Colonial War in the 1960s and 1970s, the Portuguese Army created an experimental horse platoon to combat the guerrillas in eastern Angola. This unit was soon augmented, becoming a group of three squadrons known as the "Angola Dragoons". The Angola Dragoons operated as mounted infantry – like the original dragoons – each soldier being armed with a pistol to fire when on horseback and with an automatic rifle to use when dismounted. A unit of the same type was being created in Mozambique when the war ended in 1974. The Spanish Army began the training of a dragoon corps in 1635 under the direction of Pedro de la Puente at Innsbruck. In 1640 the first dragoon "tercio" was created, equipped with arquebuses and maces. 
The number of dragoon tercios had increased to nine by the end of the 17th century: three garrisoned in Spain, another three in the Netherlands, and the remainder in Milan. The "tercios" were converted into a regimental system beginning in 1704. Philip V created several additional dragoon regiments to perform the functions of a police corps in the New World. Notable amongst those units were the leather-clad "dragones de cuera". In 1803 the dragoon regiments were renamed as "caballería ligera" (light cavalry). By 1815 these units had been disbanded. Spain recreated its dragoons in the late nineteenth century. In 1930, three Spanish dragoon regiments were still in existence. In the Swedish Army, dragoons comprise the Military Police and Military Police Rangers. They also form the 13th Battalion of the Life Guards, which is a military police unit. The 13th (Dragoons) Battalion has roots that go back as far as 1523, making it one of the world's oldest military units still in service. Today, the only mounted units still retained by the Swedish Army are the two dragoon squadrons of the King's Guards Battalion of the Life Guards. Horses are used for ceremonial purposes only, most often when the dragoons take part in the changing of the guard at the Royal Palace in Stockholm. "Livdragon" is the rank of a private cavalryman. In the Swiss Army, mounted dragoons existed until the early 1970s, when they were converted into armoured grenadier units. A "Dragoner" had to prove he was able to keep a horse at home before entering the army. At the end of basic training, dragoons had to buy their horse at a reduced price from the army and take it home together with equipment, uniform and weapon. In the yearly repetition course the dragoons served with their horses, often riding from home to the meeting point. The abolition of the dragoon units, believed to be the last non-ceremonial horse cavalry in Europe, was a contentious issue in Switzerland. On 5 December 1972 the Swiss National Council approved the abolition by 91 votes against 71 for retention. In the present-day regular British Army, four regiments are designated as dragoons: the 1st The Queen's Dragoon Guards, the Royal Scots Dragoon Guards, the Royal Dragoon Guards, and the Light Dragoons. In the Territorial Army, one of the five squadrons of the Royal Yeomanry, the Westminster Dragoons, also has the title of dragoons. The 1st and 2nd Battalions, 48th Infantry, were mechanized infantry units assigned to the 3rd Armored Division (3AD) in West Germany during the Cold War. The unit crest of the 48th Infantry designated the unit as Dragoons, purely a traditional designation. The 1st Dragoons was reformed in the Vietnam War era as the 1st Squadron, 1st U.S. Cavalry. It served in the Iraq War and remains the oldest cavalry unit, as well as the most decorated one, in the U.S. Army. Today's 1–1 Cavalry is a scout/attack unit equipped with MRAPs, M3A3 Bradley CFVs, and Strykers. Another modern United States Army unit, informally known as the 2nd Dragoons, is the 2nd Cavalry Regiment. This unit was originally organized as the Second Regiment of Dragoons in 1836 and was renamed the Second Cavalry Regiment in 1861, before being redesignated as the 2nd Armored Cavalry Regiment in 1948. The regiment is currently equipped with the Stryker family of wheeled fighting vehicles and was redesignated as the 2nd Stryker Cavalry Regiment in 2006. In 2011 the regiment was redesignated once more as the 2nd Cavalry Regiment. 
The 2nd Cavalry Regiment has the distinction of being the longest continuously serving regiment in the United States Army. The 113th Army Band at Fort Knox is also officially nicknamed "The Dragoons". This derives from its formation as the Band, First Regiment of Dragoons, on July 8, 1840. Company D, 3rd Light Armored Reconnaissance Battalion of the United States Marine Corps is nicknamed the "Dragoons". Their combat history includes Operation Iraqi Freedom and Operation Enduring Freedom from 2002 to 2013.
https://en.wikipedia.org/wiki?curid=8767
Dutch West India Company The Dutch West India Company (Dutch: "Geoctrooieerde Westindische Compagnie", known as the "GWC") was a chartered company of Dutch merchants as well as foreign investors. Among its founders were Willem Usselincx (1567–1647) and Jessé de Forest (1576–1624). On 3 June 1621, it was granted a charter for a trade monopoly in the Dutch West Indies by the Republic of the Seven United Netherlands and given jurisdiction over Dutch participation in the Atlantic slave trade, Brazil, the Caribbean, and North America. The area where the company could operate consisted of West Africa (between the Tropic of Cancer and the Cape of Good Hope) and the Americas, which included the Pacific Ocean and the eastern part of New Guinea. The intended purpose of the charter was to eliminate competition, particularly Spanish or Portuguese, between the various trading posts established by the merchants. The company became instrumental in the largely ephemeral Dutch colonization of the Americas (including New Netherland) in the seventeenth century. From 1624 to 1654, in the context of the Dutch-Portuguese War, the GWC held Portuguese territory in northeast Brazil, but was ousted from Dutch Brazil following fierce resistance. After several reversals, the GWC reorganized and a new charter was granted in 1675, largely on the strength of the Atlantic slave trade. This "New" version lasted for more than a century, until after the Fourth Anglo-Dutch War, during which it lost most of its assets. When the Dutch East India Company (VOC) was founded in 1602, some traders in Amsterdam did not agree with its monopolistic policies. With help from Petrus Plancius, a Dutch-Flemish astronomer, cartographer and clergyman, they sought a northeastern or northwestern route to Asia to circumvent the VOC monopoly. In 1609, the English explorer Henry Hudson, in the employment of the VOC, landed on the coast of New England and sailed up what is now known as the Hudson River in his quest for the Northwest Passage to Asia. However, he failed to find a passage. Consequently, in 1615 Isaac Le Maire and Samuel Blommaert, assisted by others, focused on finding a south-westerly route around South America's Tierra del Fuego archipelago in order to circumvent the monopoly of the VOC. One of the first sailors who focused on trade with Africa was Balthazar de Moucheron. The trade with Africa offered several possibilities to set up trading posts or factories, an important starting point for negotiations. Blommaert stated that in 1600 eight companies were already sailing to the coast of Africa, competing with each other for the supply of copper from the Kingdom of Loango. Pieter van den Broecke was employed by one of these companies. In 1612, a Dutch fortress was built in Mouree (present-day Ghana), along the Dutch Gold Coast. Trade with the Caribbean, for salt, sugar and tobacco, was hampered by Spain and delayed because of peace negotiations. Spain offered peace on the condition that the Dutch Republic would withdraw from trading with Asia and America, and refused to sign the peace treaty if a West India Company were established. At this time the Dutch War of Independence (1568–1648) between Spain and the Dutch Republic was still being fought. Grand Pensionary Johan van Oldenbarnevelt offered to suspend only trade with the West in exchange for the Twelve Years' Truce. The result was that for a few years the company sailed under a foreign flag in South America. 
However, ten years later, Stadtholder Maurice of Orange proposed to continue the war with Spain, while diverting Spanish attention away from the Republic. In 1619 his opponent Johan van Oldenbarnevelt was beheaded, and when the truce expired two years later, the West India Company was established. The West India Company received its charter from the States-General in 1621, but its foundation had been suggested much earlier in the 17th century, only to be delayed by the conclusion of the Twelve Years' Truce between Spain and the United Provinces in 1609. The Dutch West India Company was organized similarly to the Dutch East India Company (VOC). Like the VOC, the GWC had five offices, called chambers ("kamers"), in Amsterdam, Rotterdam, Hoorn, Middelburg and Groningen, of which the chambers in Amsterdam and Middelburg contributed most to the company. The board consisted of 19 members, known as the Heeren XIX (the Nineteen Gentlemen). The institutional structure of the GWC followed the federal structure of the Republic, which entailed extensive discussion for any decision, with regional representation: eight directors from Amsterdam; four from Zeeland; two each from the Northern Quarter (Hoorn and Enkhuizen), the Maas (Rotterdam and Dordrecht), and the region of Groningen; and one representative from the States General. Each region had its own chamber and board of directors. The validity of the charter was set at 24 years. Only in 1623 was funding arranged, after several bidders were put under pressure. The States General of the Netherlands and the VOC pledged one million guilders in the form of capital and subsidy. Although Iberian writers said that crypto-Jews or Marranos played an important role in the formation of both the VOC and the GWC, research has shown that they initially played a minor role, one which expanded during the period of the Dutch in Brazil. Emigrant Calvinists from the Spanish Netherlands did make significant investments in the GWC. Investors did not rush to put their money in the company in 1621, but the States-General urged municipalities and other institutions to invest. Explanations for the slow investment by individuals were that shareholders had "no control over the directors' policy and the handling of ordinary investors' money," and that the company was a "racket" to provide "cushy posts for the directors and their relatives, at the expense of ordinary shareholders." The VOC directors invested money in the GWC without consulting their shareholders, causing dissent among a number of shareholders. In order to attract foreign shareholders, the GWC offered foreign investors equal standing with Dutch ones, resulting in shareholders from France, Switzerland, and Venice. A translation of the original 1621 charter appeared in English as "Orders and Articles granted by the High and Mightie Lords the States General of the United Provinces concerning the erecting of a West-Indies Companie, Anno Dom. MDCXXI". By 1623, the capital of the GWC, at 2.8 million florins, was not as great as the VOC's original capitalization of 6.5 million, but it was still a substantial sum. The GWC had 15 ships to carry trade and plied the West African coast and Brazil. Unlike the VOC, the GWC had no right to deploy military troops. When the Twelve Years' Truce expired in 1621, the Republic had a free hand to re-wage war with Spain. A "Groot Desseyn" ("grand design") was devised to seize the Portuguese colonies in Africa and the Americas, so as to dominate the sugar and slave trade. 
When this plan failed, privateering became one of the major goals within the GWC. The arming of merchant ships with guns and soldiers to defend themselves against Spanish ships was of great importance. On almost all ships in 1623, 40 to 50 soldiers were stationed, possibly to assist in the hijacking of enemy ships. It is unclear whether the first expedition was that of Jacques l'Hermite to the coast of Chile, Peru and Bolivia, set up by Stadtholder Maurice with the support of the States General and the VOC. The company was initially a dismal failure, in terms of its expensive early projects, and its directors shifted emphasis from conquering territory to plundering shipping. The most spectacular success for the GWC was Piet Heyn's seizure of the Spanish silver fleet, which carried silver from the Spanish colonies to Spain. He had also seized a consignment of sugar from Brazil and a galleon from Honduras with cacao, indigo, and other valuable goods. Privateering was the company's most profitable activity in the late 1620s. Despite Heyn's success at plunder, the company's directors realized that it was not a basis on which to build long-term profit, leading them to renew their attempts to seize Iberian territory in the Americas. They decided their target was Brazil. There were conflicts between directors from different areas of the Netherlands, with Amsterdam less supportive of the company. Non-maritime cities, including Haarlem, Leiden, and Gouda, along with Enkhuizen and Hoorn, were enthusiastic about seizing territory. They sent a fleet to Brazil, capturing Olinda and Pernambuco in 1630 in their initial foray to create a Dutch Brazil, but could not hold them due to strong Portuguese resistance. Company ships continued privateering in the Caribbean, as well as seizing vital land resources, particularly salt pans. The company's general lack of success saw its shares plummet, and the Dutch and the Spanish renewed truce talks in 1633. In 1629 the GWC gave permission to a number of investors in New Netherland to found patroonships, enabled by the Charter of Freedoms and Exemptions, which was ratified by the Dutch States-General on June 7, 1629. The patroonships were created to help populate the colony by granting investors land in exchange for settling approximately 50 people "upwards of 15 years old" per grant, mainly in the region of New Netherland. Patroon investors could expand the size of their land grants to as much as 4 miles "along the shore or along one bank of a navigable river..." Rensselaerswyck was the most successful Dutch West India Company patroonship. The New Netherland area, which included New Amsterdam, covered parts of present-day New York, Connecticut, Delaware, and New Jersey. Other settlements were established on the Netherlands Antilles, and in South America, in Dutch Brazil, Suriname and Guyana. In Africa, posts were established on the Gold Coast (now Ghana), the Slave Coast (now Benin), and briefly in Angola. It was a neo-feudal system, in which patroons were permitted considerable power to control the overseas colony. In the Americas, fur (North America) and sugar (South America) were the most important trade goods, while African settlements traded enslaved people (mainly destined for the plantations on the Antilles and in Suriname), gold, and ivory. In North America, the patroons Albert Burgh, Samuel Blommaert, Samuel Godijn and Johannes de Laet had little success in populating the colony of New Netherland or in defending their settlements against local Amerindians. 
Only Kiliaen Van Rensselaer managed to maintain his settlement in the north, along the Hudson. Samuel Blommaert secretly tried to secure his interests through the founding of the colony of New Sweden, on behalf of Sweden, on the Delaware in the south. The main focus of the GWC now went to Brazil. Only in 1630 did the West India Company manage to conquer a part of Brazil, founding the colony of New Holland (capital Mauritsstad, present-day Recife) and taking over Portuguese possessions there. In the meantime, the war demanded so many of its forces that the company had to operate under a permanent threat of bankruptcy. In fact, the GWC went bankrupt in 1636, and all attempts at rehabilitation were doomed to failure. Because of the ongoing war in Brazil, the situation for the GWC in 1645, at the end of the charter, was very bad. An attempt to compensate for the losses of the GWC with the profits of the VOC failed because the directors of the VOC refused. Merging the two companies was not feasible. Amsterdam was not willing to help out, because it had too much interest in peace and healthy trade relations with Portugal. This indifferent attitude of Amsterdam was the main cause of the slow, half-hearted policy which would eventually lead to the loss of the colony. In 1647 the company made a restart using 1.5 million guilders of VOC capital. The States General took responsibility for the warfare in Brazil. Due to the Peace of Westphalia, the seizing of Spanish ships was no longer allowed. Many merchants from Amsterdam and Zeeland decided to work with mariners and merchants from Hamburg, Glückstadt (then Danish), England and other countries. In 1649, the GWC obtained a monopoly on gold and enslaved Africans in the kingdom of Accra (present-day Ghana). In 1662 there were contracts with the holders of the Asiento, who were obliged to deliver 24,000 enslaved Africans. In 1663 and 1664 the GWC sold more enslaved Africans than the Portuguese and English together. The first West India Company suffered a long agony, and its end in 1674 was painless. That the GWC could drag on for twenty years was due to its valuable West African possessions and its position in the slave trade. When the GWC could not repay its debts in 1674, the company was dissolved. But because of the high demand for trade with the West (mainly the slave trade), and the fact that many colonies still existed, it was decided to establish the Second Chartered West India Company (also called the New West India Company) in 1675. This new company had the same trade area as the first. All ships, fortresses, etc., were taken over by the new company. The number of directors was reduced from 19 to 10, and the number of governors from 74 to 50. The new GWC had a capital that was slightly more than guilders around 1679, which was largely supplied by the Amsterdam Chamber. From 1694 until 1700, the GWC waged a long conflict against the Eguafo Kingdom along the Gold Coast, in present-day Ghana. The Komenda Wars drew in significant numbers of neighbouring African kingdoms and led to the replacement of the gold trade with the trade in enslaved Africans. After the Fourth Anglo-Dutch War, it became apparent that the Dutch West India Company was no longer capable of defending its own colonies, as Sint Eustatius, Berbice, Essequibo, Demerara, and some forts on the Dutch Gold Coast were rapidly taken by the British. 
In 1791, the company's stock was bought by the Dutch government, and on 1 January 1792 all territories previously held by the Dutch West India Company reverted to the rule of the States General of the Dutch Republic. Around 1800 there was an unsuccessful attempt to create a third West India Company.
https://en.wikipedia.org/wiki?curid=8769
Bakelite Bakelite (sometimes spelled Baekelite), or polyoxybenzylmethylenglycolanhydride, was the first plastic made from synthetic components. It is a thermosetting phenol formaldehyde resin, formed from a condensation reaction of phenol with formaldehyde. It was developed by the Belgian-American chemist Leo Baekeland in Yonkers, New York, in 1907. Bakelite was patented on December 7, 1909. The creation of a synthetic plastic was revolutionary: its electrical nonconductivity and heat resistance suited it for electrical insulators, radio and telephone casings, and such diverse products as kitchenware, jewelry, pipe stems, children's toys, and firearms. In recent years the "retro" appeal of old Bakelite products has made them collectible. Bakelite was designated a National Historic Chemical Landmark on November 9, 1993, by the American Chemical Society in recognition of its significance as the world's first synthetic plastic. Baekeland was already wealthy due to his invention of Velox photographic paper when he began to investigate the reactions of phenol and formaldehyde in his home laboratory. Chemists had begun to recognize that many natural resins and fibers were polymers. Baekeland's initial intent was to find a replacement for shellac, a material in limited supply because it was made naturally from the excretion of lac insects (specifically "Kerria lacca"). Baekeland produced a soluble phenol-formaldehyde shellac called "Novolak", but it was not a market success. Baekeland then began experimenting with strengthening wood by impregnating it with a synthetic resin, rather than coating it. By controlling the pressure and temperature applied to phenol and formaldehyde, Baekeland produced a hard moldable material that he named "Bakelite", after himself. It was the first synthetic thermosetting plastic produced, and Baekeland speculated on "the thousand and one ... articles" it could be used to make. Baekeland considered the possibilities of using a wide variety of filling materials, including cotton, powdered bronze, and slate dust, but was most successful with wood and asbestos fibers. Baekeland filed a substantial number of patents in the area. Bakelite, his "method of making insoluble products of phenol and formaldehyde," was filed on July 13, 1907, and granted on December 7, 1909. Baekeland also filed for patent protection in other countries, including Belgium, Canada, Denmark, Hungary, Japan, Mexico, Russia, and Spain. He announced his invention at a meeting of the American Chemical Society on February 5, 1909. Baekeland started semi-commercial production of his new material in his home laboratory, marketing it as a material for electrical insulators. By 1910, he was producing enough material to justify expansion. He formed the General Bakelite Company as a U.S. company to manufacture and market his new industrial material. He also made overseas connections to produce materials in other countries. Bijker gives a detailed discussion of the development of Bakelite and the Bakelite company's production of various applications of the material. As of 1911, the company's main focus was laminating varnish, whose sales volume vastly outperformed both molding material and cast resin. By 1912, molding material was gaining ground, but its sales volume for the company did not exceed that of laminating varnish until the 1930s. As the sales figures also show, the Bakelite company produced "transparent" cast resin (which did not include filler) for a small ongoing market during the 1910s and 1920s. 
Blocks or rods of cast resin, also known as "artificial amber", were machined and carved to create items such as pipe stems, cigarette holders and jewelry. However, the demand for molded plastics led the Bakelite company to concentrate on molding rather than on cast solid resins. The Bakelite Corporation was formed in 1922, after patent litigation favorable to Baekeland, from a merger of three companies: Baekeland's General Bakelite Company; the Condensite Company, founded by J.W. Aylesworth; and the Redmanol Chemical Products Company, founded by Lawrence V. Redman. Under director of advertising and public relations Allan Brown, who came to Bakelite from Condensite, Bakelite was aggressively marketed as "the material of a thousand uses". A filing for a trademark featuring the letter B above the mathematical symbol for infinity was made on August 25, 1925, and claimed the mark was in use as of December 1, 1924. A wide variety of uses were listed in their trademark applications. The first issue of "Plastics" magazine, October 1925, featured Bakelite on its cover and included the article "Bakelite – What It Is" by Allan Brown. The range of colors available included "black, brown, red, yellow, green, gray, blue, and blends of two or more of these". The article emphasized that Bakelite came in various forms: "Bakelite is manufactured in several forms to suit varying requirements. In all these forms the fundamental basis is the initial Bakelite resin. This variety includes clear material, for jewelry, smokers' articles, etc.; cement, using in sealing electric light bulbs in metal bases; varnishes, for impregnating electric coils, etc.; lacquers, for protecting the surface of hardware; enamels, for giving resistive coating to industrial equipment; Laminated Bakelite, used for silent gears and insulation; and molding material, from which are formed innumerable articles of utility and beauty. The molding material is prepared ordinarily by the impregnation of cellulose substances with the initial 'uncured' resin." In a 1925 report, the United States Tariff Commission hailed the commercial manufacture of synthetic phenolic resin as "distinctly an American achievement", and noted that "the publication of figures, however, would be a virtual disclosure of the production of an individual company". In England, Bakelite Limited, a merger of three British phenol formaldehyde resin suppliers (the Damard Lacquer Company Limited of Birmingham, Mouldensite Limited of Darley Dale, and the Redmanol Chemical Products Company of London), was formed in 1926. A new Bakelite factory opened in Tyseley, Birmingham, around 1928. It was demolished in 1998. A new factory opened in Bound Brook, New Jersey, in 1931. In 1939, the companies were acquired by the Union Carbide and Carbon Corporation. In 2005, Union Carbide's phenolic resin business, including the Bakelite and Bakelit registered trademarks, was assigned to Hexion Inc. On April 1, 2019, Hexion filed for Chapter 11 bankruptcy. In addition to the original Bakelite material, these companies eventually made a wide range of other products, many of which were marketed under the brand name "Bakelite plastics". These included other types of cast phenolic resins similar to Catalin, and urea-formaldehyde resins, which could be made in brighter colors than polyoxybenzylmethylenglycolanhydride. Once Baekeland's heat and pressure patents expired in 1927, the Bakelite Corporation faced serious competition from other companies. 
Because molded Bakelite incorporated fillers to give it strength, it tended to be made in concealing dark colors. In 1927, beads, bangles and earrings were produced by the Catalin company through a different process which enabled them to introduce 15 new colors. Translucent jewelry, poker chips and other items made of phenolic resins were introduced in the 1930s or 1940s by the Catalin company under the Prystal name. The creation of marbled phenolic resins may also be attributable to the Catalin company. Making Bakelite was a multi-stage process. It began with the heating of phenol and formaldehyde in the presence of a catalyst such as hydrochloric acid, zinc chloride, or the base ammonia. This created a liquid condensation product, referred to as "Bakelite A", which was soluble in alcohol, acetone, or additional phenol. Heated further, the product became partially soluble and could still be softened by heat. Sustained heating resulted in an "insoluble hard gum". However, the high temperatures required to create this tended to cause violent foaming of the mixture, which resulted in the cooled material being porous and breakable. Baekeland's innovative step was to put his "last condensation product" into an egg-shaped "Bakelizer". By heating it under pressure, Baekeland was able to suppress the foaming that would otherwise occur. The resulting substance was extremely hard and both infusible and insoluble. Molded Bakelite forms in a condensation reaction of phenol and formaldehyde, with wood flour or asbestos fiber as a filler, under high pressure and heat, curing in a time frame of a few minutes. The result is a hard plastic material. Bakelite's molding process had a number of advantages. Bakelite resin could be provided either as powder or as preformed, partially cured slugs, increasing the speed of the casting. Thermosetting resins such as Bakelite required heat and pressure during the molding cycle, but could be removed from the molding process without being cooled, again making the molding process faster. Also, because of the smooth polished surface that resulted, Bakelite objects required less finishing. Millions of parts could be duplicated quickly and relatively cheaply. Another market for Bakelite resin was the creation of phenolic sheet materials. Phenolic sheet is a hard, dense material made by applying heat and pressure to layers of paper or glass cloth impregnated with synthetic resin. Paper, cotton fabrics, synthetic fabrics, glass fabrics and unwoven fabrics are all possible materials used in lamination. When heat and pressure are applied, polymerization transforms the layers into thermosetting industrial laminated plastic. Bakelite phenolic sheet is produced in many commercial grades, with various additives to meet diverse mechanical, electrical and thermal requirements. Bakelite has a number of important properties. It can be molded very quickly, decreasing production time. Moldings are smooth, retain their shape and are resistant to heat, scratches, and destructive solvents. It is also an electrical insulator, prized for its low conductivity. It is not flexible. Phenolic resin products may swell slightly under conditions of extreme humidity or perpetual dampness. When rubbed or burnt, Bakelite has a distinctive, acrid, sickly-sweet or fishy odor. The characteristics of Bakelite made it particularly suitable as a molding compound, an adhesive or binding agent, a varnish, and a protective coating. 
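The condensation described above can be summarized by the idealized overall scheme below. This is a minimal sketch, not the structure of the cured material: it assumes a 1:1 phenol-to-formaldehyde ratio and shows only a linear, methylene-bridged repeat unit, whereas fully cured Bakelite is a three-dimensional cross-linked network.
% Idealized overall condensation of phenol with formaldehyde
% (novolac-type, 1:1 stoichiometry); mass balance: C6H6O + CH2O -> C7H6O + H2O
\[
  n\,\mathrm{C_6H_5OH} + n\,\mathrm{CH_2O}
  \;\xrightarrow{\ \text{catalyst, heat, pressure}\ }\;
  \left[\mathrm{C_6H_3(OH)CH_2}\right]_n + n\,\mathrm{H_2O}
\]
Each mole of formaldehyde incorporated releases one mole of water, which is consistent with the foaming problem Baekeland solved by curing the resin under pressure.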
Bakelite was particularly suitable for the emerging electrical and automobile industries because of its extraordinarily high resistance to electricity, heat, and chemical action. The earliest commercial use of Bakelite in the electrical industry was the molding of tiny insulating bushings, made in 1908 for the Weston Electrical Instrument Corporation by Richard W. Seabury of the Boonton Rubber Company. Bakelite was soon used for the non-conducting parts of telephones, radios and other electrical devices, including bases and sockets for light bulbs and electron tubes (vacuum tubes), supports for all types of electrical components, automobile distributor caps and other insulators. By 1912, it was being used to make billiard balls, since its elasticity and the sound it made were similar to ivory. During World War I, Bakelite was used widely, particularly in electrical systems. Important projects included the Liberty airplane engine, the wireless telephone and radio phone, and the use of Micarta-Bakelite propellers in the NBS-1 bomber and the DH-4B aeroplane. Bakelite's availability and ease and speed of molding helped to lower costs and increase product availability, so that telephones and radios became common household consumer goods. It was also very important to the developing automobile industry. It was soon found in myriad other consumer products ranging from pipe stems and buttons to saxophone mouthpieces, cameras, early machine guns, and appliance casings. Bakelite was also very commonly used in making molded grip panels (stocks) on handguns, submachine guns and machine guns, as well as numerous knife handles and "scales", through the first half of the 20th century. Beginning in the 1920s, it became a popular material for jewelry. Designer Coco Chanel included Bakelite bracelets in her costume jewelry collections. Designers such as Elsa Schiaparelli used it for jewelry and also for specially designed dress buttons. Later, Diana Vreeland, editor of "Vogue", was enthusiastic about Bakelite. Bakelite was also used to make presentation boxes for Breitling watches. Jewelry designers such as Jorge Caicedo Montes De Oca still use vintage Bakelite materials to make designer jewelry. By 1930, designer Paul T. Frankl considered Bakelite a "Materia Nova", "expressive of our own age". By the 1930s, Bakelite was used for game pieces like chessmen, poker chips, dominoes and mahjong sets. Kitchenware made with Bakelite, including canisters and tableware, was promoted for its resistance to heat and to chipping. In the mid-1930s, Northland marketed a line of skis with a black "Ebonite" base, a coating of Bakelite. By 1935, it was used in solid-body electric guitars. Performers such as Jerry Byrd loved the tone of Bakelite guitars but found them difficult to keep in tune. The British children's construction toy Bayko, launched in 1933, originally used Bakelite for many of its parts, and took its name from the material. During World War II, Bakelite was used in a variety of wartime equipment, including pilots' goggles and field telephones. It was also used for patriotic wartime jewelry. In 1943, the thermosetting phenolic resin was even considered for the manufacture of coins, due to a shortage of traditional material. Bakelite and other non-metal materials were tested for use in the one-cent coin in the US before the Mint settled on zinc-coated steel. During World War II, Bakelite buttons were part of British uniforms. 
These buttons were sometimes modified for Survival, Evasion, Resistance and Escape purposes in case of capture: "Following the introduction of BD (Battle Dress), MI9 was forced to adapt to meet the challenge, and a number of different compass solutions were devised, both covert and overt. These included Bakelite buttons used in both Army (brown colored) and RAF (black) BD uniforms." In 1947, Dutch art forger Han van Meegeren was convicted of forgery, after chemist and curator Paul B. Coremans proved that a purported Vermeer contained Bakelite, which van Meegeren had used as a paint hardener. Bakelite was sometimes used as a substitute for metal in the magazine, pistol grip, fore grip, hand guard, and butt stock of firearms. The AKM and some early AK-74 rifles are frequently mistakenly identified as using Bakelite, but most were made with AG-S4. By the late 1940s, newer materials were superseding Bakelite in many areas. Phenolics are less frequently used in general consumer products today due to their cost and complexity of production and their brittle nature. They still appear in some applications where their specific properties are required, such as small precision-shaped components, molded disc brake cylinders, saucepan handles, electrical plugs, switches and parts for electrical irons, as well as in the area of inexpensive board and tabletop games produced in China, Hong Kong and India. Items such as billiard balls, dominoes and pieces for board games such as chess, checkers, and backgammon are constructed of Bakelite for its look, durability, fine polish, weight, and sound. Common dice are sometimes made of Bakelite for weight and sound, but the majority are made of a thermoplastic polymer such as acrylonitrile butadiene styrene (ABS). Bakelite continues to be used for wire insulation, brake pads and related automotive components, and industrial electrical-related applications. Bakelite stock is still manufactured and produced in sheet, rod and tube form for industrial applications in the electronics, power generation and aerospace industries, and under a variety of commercial brand names. Phenolic resins have been commonly used in ablative heat shields. Soviet heat shields for ICBM warheads and spacecraft reentry consisted of asbestos textolite impregnated with Bakelite. Bakelite is also used in the mounting of metal samples in metallography. Bakelite items, particularly jewelry and radios, have become a popular collectible. The term "Bakelite" is sometimes used in the resale market to indicate various types of early plastics, including Catalin and Faturan, which may be brightly colored, as well as items made of Bakelite material. The United States Patent and Trademark Office granted Baekeland a patent for a "Method of making insoluble products of phenol and formaldehyde" on December 7, 1909. Producing hard, compact, insoluble and infusible condensation products of phenols and formaldehyde marked the beginning of the modern plastics industry.
https://en.wikipedia.org/wiki?curid=4485
Bean A bean is the seed of one of several genera of the flowering plant family Fabaceae, which are used as vegetables for human or animal food. They can be cooked in many different ways, including boiling, frying, and baking, and are used in several traditional dishes throughout the world. The word "bean" and its Germanic cognates (e.g. German "Bohne") have existed in common use in West Germanic languages since before the 12th century, referring to broad beans, chickpeas, and other pod-borne seeds. This was long before the New World genus "Phaseolus" was known in Europe. After Columbian-era contact between Europe and the Americas, use of the word was extended to pod-borne seeds of "Phaseolus", such as the common bean and the runner bean, and the related genus "Vigna". The term has long been applied generally to many other seeds of similar form, such as Old World soybeans, peas, other vetches, and lupins, and even to those with slighter resemblances, such as coffee beans, vanilla beans, castor beans, and cocoa beans. Thus the term "bean" in general usage can refer to a host of different species. Seeds called "beans" are often included among the crops called "pulses" (legumes), although a narrower prescribed sense of "pulses" reserves the word for leguminous crops harvested for their dry grain. The term "bean" usually excludes legumes with tiny seeds that are used exclusively for forage, hay, and silage purposes (such as clover and alfalfa). The United Nations Food and Agriculture Organization defines "BEANS, DRY" (item code 176) as applicable only to species of "Phaseolus". However, in the past several species, including "Vigna angularis" (adzuki bean), "V. mungo" (black gram), "V. radiata" (green gram), and "V. aconitifolia" (moth bean), were classified as "Phaseolus" and later reclassified, and general usage is not governed by that definition. Unlike the closely related pea, beans are a summer crop that needs warm temperatures to grow. Maturity is typically 55–60 days from planting to harvest. As the bean pods mature, they turn yellow and dry up, and the beans inside change from green to their mature colour. As vines, bean plants need external support, which may take the form of special "bean cages" or poles. Native Americans customarily grew them along with corn and squash (the so-called Three Sisters), with the tall cornstalks acting as support for the beans. In more recent times, the so-called "bush bean" has been developed, which does not require support and has all its pods develop simultaneously (as opposed to pole beans, which develop gradually). This makes the bush bean more practical for commercial production. Beans are among the longest-cultivated plants. Broad beans, also called fava beans, which in their wild state were the size of a small fingernail, were gathered in Afghanistan and the Himalayan foothills. In a form improved from naturally occurring types, they were grown in Thailand from the early seventh millennium BCE, predating ceramics. They were deposited with the dead in ancient Egypt. Not until the second millennium BCE did cultivated, large-seeded broad beans appear in the Aegean, Iberia and transalpine Europe. In the "Iliad" (8th century BCE) there is a passing mention of beans and chickpeas cast on the threshing floor. Beans were an important source of protein throughout Old and New World history, and still are today. 
The oldest-known domesticated beans in the Americas were found in Guitarrero Cave, an archaeological site in Peru, and dated to around the second millennium BCE. However, genetic analyses of the common bean "Phaseolus" show that it originated in Mesoamerica and subsequently spread southward, along with maize and squash, traditional companion crops. Most of the kinds commonly eaten fresh or dried, those of the genus "Phaseolus", come originally from the Americas, being first seen by a European when Christopher Columbus, while exploring what may have been the Bahamas, found them growing in fields. Five kinds of "Phaseolus" beans were domesticated by pre-Columbian peoples: common beans ("P. vulgaris"), grown from Chile to the northern part of what is now the United States; lima and sieva beans ("P. lunatus"); and the less widely distributed teparies ("P. acutifolius"), scarlet runner beans ("P. coccineus") and polyanthus beans ("P. polyanthus"). One especially famous use of beans by pre-Columbian people as far north as the Atlantic seaboard is the "Three Sisters" method of companion plant cultivation, in which beans were grown together with corn and squash. Dry beans come from both Old World varieties of broad beans (fava beans) and New World varieties (kidney, black, cranberry, pinto, navy/haricot). Beans are a heliotropic plant, meaning that the leaves tilt throughout the day to face the sun. At night, they go into a folded "sleep" position. Currently, the world's genebanks hold about 40,000 bean varieties, although only a fraction are mass-produced for regular consumption. Some kinds of raw beans contain a harmful, tasteless toxin, the lectin phytohaemagglutinin, which must be removed by cooking. Red kidney beans are particularly toxic, but other types also pose risks of food poisoning. A recommended method is to boil the beans for at least ten minutes; undercooked beans may be more toxic than raw beans. Cooking beans in a slow cooker, without bringing them to a boil, at a temperature well below boiling, may not destroy the toxin. A case of poisoning by butter beans used to make falafel was reported; the beans were used instead of traditional broad beans or chickpeas, soaked and ground without boiling, made into patties, and shallow-fried. Bean poisoning is not well known in the medical community, and many cases may be misdiagnosed or never reported; figures appear not to be available. In the case of the UK National Poisons Information Service, available only to health professionals, the dangers of beans other than red beans were not flagged. Fermentation is used in some parts of Africa to improve the nutritional value of beans by removing toxins. Inexpensive fermentation improves the nutritional impact of flour from dry beans and improves digestibility, according to research co-authored by Emire Shimelis of the Food Engineering Program at Addis Ababa University. Beans are a major source of dietary protein in Kenya, Malawi, Tanzania, Uganda and Zambia. It is common to make beansprouts by letting some types of bean, often mung beans, germinate in moist and warm conditions; beansprouts may be used as ingredients in cooked dishes, or eaten raw or lightly cooked. There have been many outbreaks of disease from bacterial contamination, often by "Salmonella", "Listeria", and "Escherichia coli", of beansprouts not thoroughly cooked, some causing significant mortality. Many types of bean contain significant amounts of antinutrients that inhibit some enzyme processes in the body. 
Phytic acid and phytates, present in grains, nuts, seeds and beans, interfere with bone growth and interrupt vitamin D metabolism. Pioneering work on the effect of phytic acid was done by Edward Mellanby from 1939. Beans are high in protein, complex carbohydrates, folate, and iron. Beans also have significant amounts of fiber, including soluble fiber, with one cup of cooked beans providing between 9 and 13 grams of fiber. Soluble fiber can help lower blood cholesterol. Adults are recommended to have up to two servings (for women) or three servings (for men); 3/4 cup of cooked beans provides one serving. Many edible beans, including broad beans, navy beans, kidney beans and soybeans, contain oligosaccharides (particularly raffinose and stachyose), a type of sugar molecule also found in cabbage. An anti-oligosaccharide enzyme is necessary to properly digest these sugar molecules. As a normal human digestive tract does not contain any anti-oligosaccharide enzymes, consumed oligosaccharides are typically digested by bacteria in the large intestine. This digestion process produces gases such as methane as a byproduct, which are then released as flatulence. The production data for legumes are published by the FAO in three categories. The following is a summary of the FAO data. The main crops of "Pulses, Total (dry)" are "Beans, dry [176]" at 26.83 million tons, "Peas, dry [187]" at 14.36 million tons, "Chick peas [191]" at 12.09 million tons, "Cow peas [195]" at 6.99 million tons, "Lentils [201]" at 6.32 million tons, "Pigeon peas [197]" at 4.49 million tons, and "Broad beans, horse beans [181]" at 4.46 million tons. In general, the consumption of pulses per capita has been decreasing since 1961; exceptions are lentils and cowpeas. The world leader in production of dry beans ("Phaseolus" spp.) is Myanmar (Burma), followed by India and Brazil. In Africa, the most important producer is Tanzania. "Source: UN Food and Agriculture Organization (FAO)"
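As a quick check on the figures above, the following short Python sketch (illustrative only; the crop names, FAO item codes, and tonnages are taken verbatim from the preceding paragraph) totals the listed pulse crops and computes each crop's share of that total:

# Illustrative sketch: totals and shares for the FAO pulse figures
# quoted above (values in millions of tons; bracketed numbers are
# FAO item codes).
fao_pulses_mt = {
    "Beans, dry [176]": 26.83,
    "Peas, dry [187]": 14.36,
    "Chick peas [191]": 12.09,
    "Cow peas [195]": 6.99,
    "Lentils [201]": 6.32,
    "Pigeon peas [197]": 4.49,
    "Broad beans, horse beans [181]": 4.46,
}

total = sum(fao_pulses_mt.values())  # 75.54 million tons across the listed crops
print(f"Total of listed pulse crops: {total:.2f} million tons")
for crop, tons in sorted(fao_pulses_mt.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{crop:32s} {tons:6.2f}  ({tons / total:5.1%})")

Dry beans alone account for roughly 35% of the listed pulse total, consistent with their position above as the largest single category.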
https://en.wikipedia.org/wiki?curid=4487
Breast The breast is one of two prominences located on the upper ventral region of the torso of primates. In females, it serves as the mammary gland, which produces and secretes milk to feed infants. Both females and males develop breasts from the same embryological tissues. At puberty, estrogens, in conjunction with growth hormone, cause breast development in female humans and to a much lesser extent in other primates. Breast development in other primate females generally only occurs with pregnancy. Subcutaneous fat covers and envelops a network of ducts that converge on the nipple, and these tissues give the breast its size and shape. At the ends of the ducts are lobules, or clusters of alveoli, where milk is produced and stored in response to hormonal signals. During pregnancy, the breast responds to a complex interaction of hormones, including estrogens, progesterone, and prolactin, that mediate the completion of its development, namely lobuloalveolar maturation, in preparation for lactation and breastfeeding. Along with their major function in providing nutrition for infants, female breasts have social and sexual characteristics. Breasts have been featured in ancient and modern sculpture, art, and photography. They can figure prominently in the perception of a woman's body and sexual attractiveness. A number of cultures associate breasts with sexuality and tend to regard bare breasts in public as immodest or indecent. Breasts, especially the nipples, are an erogenous zone. The English word "breast" derives from the Old English word "brēost" (breast, bosom) from Proto-Germanic "breustam" (breast), from the Proto-Indo-European base bhreus– (to swell, to sprout). The "breast" spelling conforms to the Scottish and North English dialectal pronunciations. The "Merriam-Webster Dictionary" states that "Middle English brest, [comes] from Old English brēost; akin to Old High German brust..., Old Irish brú [belly], [and] Russian bryukho"; the first known usage of the term was before the 12th century. A large number of colloquial terms for breasts are used in English, ranging from fairly polite to vulgar or slang terms. Some vulgar slang expressions may be considered derogatory or sexist. In women, the breasts overlie the pectoralis major muscles and usually extend from the level of the second rib to the level of the sixth rib in the front of the human rib cage; thus, the breasts cover much of the chest area and the chest walls. At the front of the chest, the breast tissue can extend from the clavicle (collarbone) to the middle of the sternum (breastbone). At the sides of the chest, the breast tissue can extend into the axilla (armpit), and can reach as far to the back as the latissimus dorsi muscle, extending from the lower back to the humerus bone (the bone of the upper arm). As a mammary gland, the breast is composed of differing layers of tissue, predominantly of two types: adipose tissue and glandular tissue, which affects the lactation functions of the breasts. Morphologically the breast is tear-shaped. The superficial tissue layer (superficial fascia) is separated from the skin by 0.5–2.5 cm of subcutaneous fat (adipose tissue). The suspensory Cooper's ligaments are fibrous-tissue prolongations that radiate from the superficial fascia to the skin envelope. The female adult breast contains 14–18 irregular lactiferous lobes that converge at the nipple. The 2.0–4.5 mm milk ducts are immediately surrounded by dense connective tissue that supports the glands. 
Milk exits the breast through the nipple, which is surrounded by a pigmented area of skin called the areola. The size of the areola can vary widely among women. The areola contains modified sweat glands known as Montgomery's glands. These glands secrete an oily fluid that lubricates and protects the nipple during breastfeeding. Volatile compounds in these secretions may also serve as an olfactory stimulus for the newborn's appetite. The dimensions and weight of the breast vary widely among women. A small-to-medium-sized breast weighs 500 grams (1.1 pounds) or less, and a large breast can weigh approximately 750 to 1,000 grams (1.7 to 2.2 pounds) or more. The tissue composition ratios of the breast also vary among women: some women's breasts have a higher proportion of glandular tissue than of adipose or connective tissue. The fat-to-connective-tissue ratio determines the density or firmness of the breast. During a woman's life, her breasts change size, shape, and weight due to hormonal changes during puberty, the menstrual cycle, pregnancy, breastfeeding, and menopause. The breast is an apocrine gland that produces the milk used to feed an infant. The nipple of the breast is surrounded by the areola (nipple-areola complex). The areola has many sebaceous glands, and the skin color varies from pink to dark brown. The basic units of the breast are the terminal duct lobular units (TDLUs), which produce the fatty breast milk. They give the breast its offspring-feeding functions as a mammary gland. They are distributed throughout the body of the breast. Approximately two-thirds of the lactiferous tissue is within 30 mm of the base of the nipple. The terminal lactiferous ducts drain the milk from the TDLUs into 4–18 lactiferous ducts, which drain to the nipple. The milk-glands-to-fat ratio is 2:1 in a lactating woman, and 1:1 in a non-lactating woman. In addition to the milk glands, the breast is also composed of connective tissues (collagen, elastin), white fat, and the suspensory Cooper's ligaments. Sensation in the breast is provided by the peripheral nervous system by means of the front (anterior) and side (lateral) cutaneous branches of the fourth, fifth, and sixth intercostal nerves. The T-4 nerve (thoracic spinal nerve 4), which innervates the dermatomic area, supplies sensation to the nipple-areola complex. Approximately 75% of the lymph from the breast travels to the axillary lymph nodes on the same side of the body, whilst 25% of the lymph travels to the parasternal nodes (beside the sternum bone). A small amount of remaining lymph travels to the other breast and to the abdominal lymph nodes. The axillary lymph nodes include the pectoral (chest), subscapular (under the scapula), and humeral (humerus-bone area) lymph-node groups, which drain to the central axillary lymph nodes and to the apical axillary lymph nodes. The lymphatic drainage of the breasts is especially relevant to oncology because breast cancer is common to the mammary gland, and cancer cells can metastasize (break away) from a tumour and be dispersed to other parts of the body by means of the lymphatic system. The morphologic variations in the size, shape, volume, tissue density, pectoral locale, and spacing of the breasts determine their natural shape, appearance, and position on a woman's chest. Breast size and other characteristics do not predict the fat-to-milk-gland ratio or the potential for the woman to nurse an infant. 
The size and the shape of the breasts are influenced by normal-life hormonal changes (thelarche, menstruation, pregnancy, menopause) and medical conditions (e.g. virginal breast hypertrophy). The shape of the breasts is naturally determined by the support of the suspensory Cooper's ligaments, the underlying muscle and bone structures of the chest, and by the skin envelope. The suspensory ligaments sustain the breast from the clavicle (collarbone) and the clavico-pectoral fascia (collarbone and chest) by traversing and encompassing the fat and milk-gland tissues. The breast is positioned, affixed to, and supported upon the chest wall, while its shape is established and maintained by the skin envelope. In most women, one breast is slightly larger than the other. More obvious and persistent asymmetry in breast size occurs in up to 25% of women. While it has been a common belief that breastfeeding causes breasts to sag, researchers have found that a woman's breasts sag due to four key factors: cigarette smoking, number of pregnancies, gravity, and weight loss or gain. The base of each breast is attached to the chest by the deep fascia over the pectoralis major muscles. The space between the breast and the pectoralis major muscle, called the retromammary space, gives mobility to the breast. The chest (thoracic cavity) progressively slopes outwards from the thoracic inlet (atop the breastbone) and above to the lowest ribs that support the breasts. The inframammary fold (IMF), where the lower portion of the breast meets the chest, is an anatomic feature created by the adherence of the breast skin and the underlying connective tissues of the chest; the IMF is the lower-most extent of the anatomic breast. Normal breast tissue typically has a texture that feels nodular or granular, to an extent that varies considerably from woman to woman. The study "The Evolution of the Human Breast" (2001) proposed that the rounded shape of a woman's breast evolved to prevent the sucking infant offspring from suffocating while feeding at the teat; that is, because of the human infant's small jaw, which did not project from the face to reach the nipple, he or she might block the nostrils against the mother's breast if it were of a flatter form (cf. common chimpanzee). Theoretically, as the human jaw receded into the face, the woman's body compensated with round breasts. The breasts are principally composed of adipose, glandular, and connective tissues. Because these tissues have hormone receptors, their sizes and volumes fluctuate according to the hormonal changes particular to thelarche (sprouting of breasts), menstruation, pregnancy (reproduction), lactation (feeding of offspring), and menopause (end of menstruation). The morphological structure of the human breast is identical in males and females until puberty. For pubescent girls in thelarche (the breast-development stage), the female sex hormones (principally estrogens) in conjunction with growth hormone promote the sprouting, growth, and development of the breasts. During this time, the mammary glands grow in size and volume and begin resting on the chest. These development stages of secondary sex characteristics (breasts, pubic hair, etc.) are illustrated in the five-stage Tanner Scale. During thelarche the developing breasts are sometimes of unequal size, and usually the left breast is slightly larger. This condition of asymmetry is transitory and statistically normal in female physical and sexual development. 
Medical conditions can cause overdevelopment (e.g., virginal breast hypertrophy, macromastia) or underdevelopment (e.g., tuberous breast deformity, micromastia) in girls and women. Approximately two years after the onset of puberty, around the time of a girl's first menstrual cycle, estrogen and growth hormone stimulate the development and growth of the glandular, fat, and suspensory tissues that compose the breast. This continues for approximately four years until the final shape of the breast (size, volume, density) is established at about the age of 21. Mammoplasia (breast enlargement) in girls begins at puberty, unlike in all other primates, in which breasts enlarge only during lactation. During the menstrual cycle, the breasts are enlarged by premenstrual water retention and temporary growth. The breasts reach full maturity only when a woman's first pregnancy occurs. Changes to the breasts are among the very first signs of pregnancy. The breasts become larger, the nipple-areola complex becomes larger and darker, the Montgomery's glands enlarge, and veins sometimes become more visible. Breast tenderness during pregnancy is common, especially during the first trimester. By mid-pregnancy, the breast is physiologically capable of lactation and some women can express colostrum, a form of breast milk. Pregnancy causes elevated levels of the hormone prolactin, which has a key role in the production of milk. However, milk production is blocked by the hormones progesterone and estrogen until after delivery, when progesterone and estrogen levels plummet. At menopause, breast atrophy occurs: the breasts can decrease in size when the levels of circulating estrogen decline, and the adipose tissue and milk glands begin to wither. The breasts can also become enlarged as an adverse side effect of combined oral contraceptive pills. The size of the breasts can also increase and decrease in response to weight fluctuations. Physical changes to the breasts are often recorded in the stretch marks of the skin envelope; they can serve as historical indicators of the increments and the decrements of the size and volume of a woman's breasts throughout the course of her life. The primary function of the breasts, as mammary glands, is the nourishing of an infant with breast milk. Milk is produced in milk-secreting cells in the alveoli. When the breasts are stimulated by the suckling of her baby, the mother's brain secretes oxytocin. High levels of oxytocin trigger the contraction of muscle cells surrounding the alveoli, causing milk to flow along the ducts that connect the alveoli to the nipple. Full-term newborns have an instinct and a need to suck on a nipple, and breastfed babies nurse for both nutrition and comfort. Breast milk provides all necessary nutrients for the first six months of life, and then remains an important source of nutrition, alongside solid foods, until at least one or two years of age. The breast is susceptible to numerous benign and malignant conditions. The most frequent benign conditions are puerperal mastitis, fibrocystic breast changes, and mastalgia. Lactation unrelated to pregnancy is known as galactorrhea. It can be caused by certain drugs (such as antipsychotic medications), extreme physical stress, or endocrine disorders. Lactation in newborns is caused by hormones from the mother that crossed into the baby's bloodstream during pregnancy. Breast cancer is the most common cause of cancer death among women, and one of the leading causes of death among women overall. 
Factors that appear to decrease the risk of breast cancer include regular breast examinations by health care professionals, regular mammograms, self-examination of the breasts, a healthy diet, exercise to decrease excess body fat, and breastfeeding. Both females and males develop breasts from the same embryological tissues. Normally, males produce lower levels of estrogens and higher levels of androgens, namely testosterone, which suppress the effects of estrogens in developing excessive breast tissue. In boys and men, abnormal breast development is manifested as gynecomastia, the consequence of a biochemical imbalance between the normal levels of estrogen and testosterone in the male body. Around 70% of boys temporarily develop breast tissue during adolescence. The condition usually resolves by itself within two years. When male lactation occurs, it is considered a symptom of a disorder of the pituitary gland. Plastic surgery can be performed to augment or reduce the size of breasts, or to reconstruct the breast in cases of deformative disease, such as breast cancer. Breast augmentation and breast lift (mastopexy) procedures are done only for cosmetic reasons, whereas breast reduction is sometimes medically indicated. In cases where a woman's breasts are severely asymmetrical, surgery can be performed to either enlarge the smaller breast, reduce the size of the larger breast, or both. Breast augmentation surgery generally does not interfere with future ability to breastfeed. Breast reduction surgery more frequently leads to decreased sensation in the nipple-areola complex, and to low milk supply in women who choose to breastfeed. Implants can interfere with mammography (breast X-ray images). In Christian iconography, some works of art depict women with their breasts in their hands or on a platter, signifying that they died as a martyr by having their breasts severed; one example of this is Saint Agatha of Sicily. Femen is a feminist activist group which uses topless protests as part of its campaigns against sex tourism, religious institutions, sexism, and homophobia, and to "defend [women's] right to abortion". Femen activists have been regularly detained by police in response to their protests. There is a long history of female breasts being used by comedians as comedy fodder (e.g., British comic Benny Hill's burlesque/slapstick routines). In European pre-historic societies, sculptures of female figures with pronounced or highly exaggerated breasts were common. A typical example is the so-called Venus of Willendorf, one of many Paleolithic Venus figurines with ample hips and bosom. Artifacts such as bowls, rock carvings and sacred statues with breasts have been recorded from 15,000 BC up to late antiquity all across Europe, North Africa and the Middle East. Many female deities representing love and fertility were associated with breasts and breast milk. Figures of the Phoenician goddess Astarte were represented as pillars studded with breasts. Isis, an Egyptian goddess who represented, among many other things, ideal motherhood, was often portrayed as suckling pharaohs, thereby confirming their divine status as rulers. Even certain male deities representing regeneration and fertility were occasionally depicted with breast-like appendices, such as the river god Hapy, who was considered to be responsible for the annual overflowing of the Nile. Female breasts were also prominent in the Minoan civilization in the form of the famous Snake Goddess statuettes. 
In Ancient Greece there were several cults worshipping the "Kourotrophos", the suckling mother, represented by goddesses such as Gaia, Hera and Artemis. The worship of deities symbolized by the female breast in Greece became less common during the first millennium BC. The popular adoration of female goddesses decreased significantly during the rise of the Greek city states, a legacy which was passed on to the later Roman Empire. During the middle of the first millennium BC, Greek culture experienced a gradual change in the perception of female breasts. Women in art were covered in clothing from the neck down, including female goddesses like Athena, the patron of Athens who represented heroic endeavor. There were exceptions: Aphrodite, the goddess of love, was more frequently portrayed fully nude, though in postures that were intended to portray shyness or modesty, a portrayal that has been compared to modern pin-ups by historian Marilyn Yalom. Although nude men were depicted standing upright, most depictions of female nudity in Greek art occurred "usually with drapery near at hand and with a forward-bending, self-protecting posture". A popular legend at the time was of the Amazons, a tribe of fierce female warriors who socialized with men only for procreation and even removed one breast to become better warriors (the idea being that the right breast would interfere with the operation of a bow and arrow). The legend was a popular motif in art during Greek and Roman antiquity and served as an antithetical cautionary tale. Many women regard their breasts as important to their sexual attractiveness, as a sign of femininity that is important to their sense of self. A woman with smaller breasts may regard her breasts as less attractive. Because breasts are mostly fatty tissue, their shape can, within limits, be molded by clothing, such as foundation garments. Bras are worn by about 90% of Western women, often for support. The social norm in most Western cultures is to cover breasts in public, though the extent of coverage varies depending on the social context. Some religions ascribe a special status to the female breast, either in formal teachings or through symbolism. Islam forbids women from exposing their breasts in public. Many cultures, including Western cultures in North America, associate breasts with sexuality and tend to regard bare breasts as immodest or indecent. In some cultures, such as the Himba of northern Namibia, going bare-breasted is normal. In some African cultures, for example, the thigh is regarded as highly sexualised and never exposed in public, but breast exposure is not taboo. In a few Western countries and regions female toplessness at a beach is acceptable, although it may not be acceptable in the town center. Social attitudes and laws regarding breastfeeding in public vary widely. In many countries, breastfeeding in public is common, legally protected, and generally not regarded as an issue. However, even though the practice may be legal or socially accepted, some mothers may nevertheless be reluctant to expose a breast in public to breastfeed due to actual or potential objections by other people, negative comments, or harassment. It is estimated that around 63% of mothers across the world have publicly breast-fed. Going bare-breasted is legal and culturally acceptable at public beaches in Australia and much of Europe. 
Filmmaker Lina Esco made a film entitled "Free the Nipple", which is about "...laws against female toplessness or restrictions on images of female, but not male, nipples", which Esco states is an example of sexism in society. In some cultures, breasts play a role in human sexual activity. In Western culture, breasts have a "...hallowed sexual status, arguably more fetishized than either sex's genitalia". Breasts, and especially the nipples, are among the various human erogenous zones. They are sensitive to the touch, as they have many nerve endings, and it is common to press or massage them with hands or orally before or during sexual activity. During sexual arousal, breast size increases, venous patterns across the breasts become more visible, and nipples harden. Compared to other primates, human breasts are proportionately large throughout adult females' lives. Some writers have suggested that they may have evolved as a visual signal of sexual maturity and fertility. Many people regard bare female breasts to be aesthetically pleasing or erotic, and they can elicit heightened sexual desires in men in many cultures. In the ancient Indian work the "Kama Sutra", light scratching of the breasts with nails and biting with teeth are considered erotic. Some people show a sexual interest in female breasts distinct from that of the person, which may be regarded as a breast fetish. A number of Western fashions include clothing which accentuates the breasts, such as the use of push-up bras and décolleté (plunging neckline) gowns and blouses which show cleavage. While U.S. culture prefers breasts that are youthful and upright, some cultures venerate women with drooping breasts, indicating mothering and the wisdom of experience. Research conducted at the Victoria University of Wellington showed that breasts are often the first thing men look at, and for a longer time than other body parts. The writers of the study had initially speculated that the reason for this was endocrinological, with larger breasts indicating higher levels of estrogen and greater fertility, but the researchers said that "Men may be looking more often at the breasts because they are simply aesthetically pleasing, regardless of the size." Some women report achieving an orgasm from nipple stimulation, but this is rare. Research suggests that these orgasms are genital orgasms, and may also be directly linked to "the genital area of the brain". In these cases, it seems that sensation from the nipples travels to the same part of the brain as sensations from the vagina, clitoris and cervix. Nipple stimulation may trigger uterine contractions, which then produce a sensation in the genital area of the brain. There are many mountains named after the breast because they resemble it in appearance and so are objects of religious and ancestral veneration as a fertility symbol and of well-being. In Asia, there was "Breast Mountain", which had a cave where the Buddhist monk Bodhidharma (Da Mo) spent much time in meditation. 
Other such breast mountains are Mount Elgon on the Uganda-Kenya border, Beinn Chìochan and the Maiden Paps in Scotland, the "Bundok ng Susong Dalaga" (Maiden's breast mountains) in Talim Island, Philippines, the twin hills known as the Paps of Anu ("Dá Chích Anann" or "the breasts of Anu") near Killarney in Ireland, the 2,086 m high "Tetica de Bacares" or "La Tetica" in the Sierra de Los Filabres, Spain, Khao Nom Sao in Thailand, Cerro Las Tetas in Puerto Rico, and the Breasts of Aphrodite in Mykonos, among many others. In the United States, the Teton Range is named after the French word for "breast".
https://en.wikipedia.org/wiki?curid=4489
Baghdad Baghdad is the capital of Iraq and the second-largest city in the Arab world after Cairo. Located along the Tigris River, the city was founded in the 8th century and became the capital of the Abbasid Caliphate. Within a short time of its inception, Baghdad evolved into a significant cultural, commercial, and intellectual center of the Muslim world. This, in addition to housing several key academic institutions, including the House of Wisdom, as well as hosting a multiethnic and multireligious environment, garnered the city a worldwide reputation as the "Centre of Learning". Baghdad was the largest city in the world for much of the Abbasid era during the Islamic Golden Age, peaking at a population of more than a million. The city was largely destroyed at the hands of the Mongol Empire in 1258, resulting in a decline that would linger through many centuries due to frequent plagues and multiple successive empires. With the recognition of Iraq as an independent state (formerly the British Mandate of Mesopotamia) in 1932, Baghdad gradually regained some of its former prominence as a significant center of Arabic culture, with a population variously estimated at 6 million to over 7 million. In contemporary times, the city has often faced severe infrastructural damage, most recently due to the United States-led 2003 invasion of Iraq, and the subsequent Iraq War that lasted until December 2011. In recent years, the city has been frequently subjected to insurgent attacks, resulting in a substantial loss of cultural heritage and historical artifacts. Baghdad has been listed as one of the least hospitable places in the world to live, ranked by Mercer as the worst major city for quality of life. The name Baghdad is pre-Islamic, and its origin is disputed. The site where the city of Baghdad developed has been populated for millennia. By the 8th century AD, several villages had developed there, including a Persian hamlet called "Baghdad", the name which would come to be used for the Abbasid metropolis. Arab authors, realizing the pre-Islamic origins of Baghdad's name, generally looked for its roots in Middle Persian. They suggested various meanings, the most common of which was "bestowed by God". Modern scholars generally tend to favor this etymology, which views the word as a compound of "bagh" ("god") and "dād" ("given"). In Old Persian the first element can be traced to "boghu" and is related to Slavic "bog" ("god"). A similar term in Middle Persian is the name "Mithradāt" ("Mihrdād" in New Persian), known in English by its Hellenistic form Mithridates, meaning "gift of Mithra" ("dāt" is the more archaic form of "dād", related to Latin "dat" and English "donor"). There are a number of other locations in the wider region whose names are compounds of the word "bagh", including Baghlan and Bagram in Afghanistan, Baghshan in Iran, and Baghdati in Georgia, which likely share the same etymological origins. A few authors have suggested older origins for the name, in particular the name "Bagdadu" or "Hudadu" that existed in Old Babylonian (spelled with a sign that can represent both "bag" and "hu"), and the Babylonian Talmudic name of a place called "Baghdatha". Some scholars have suggested Aramaic derivations. When the Abbasid caliph al-Mansur founded a completely new city for his capital, he chose the name Madinat al-Salaam, or "City of Peace". This was the official name on coins, weights, and other official usage, although the common people continued to use the old name. 
By the 11th century, "Baghdad" had become almost the exclusive name for the world-renowned metropolis. After the fall of the Umayyads, the first Muslim dynasty, the victorious Abbasid rulers wanted their own capital from which they could rule. They chose a site north of the Sassanid capital of Ctesiphon (and also just north of where ancient Babylon had once stood), and on 30 July 762 the caliph Al-Mansur commissioned the construction of the city. It was built under the supervision of the Barmakids. Mansur believed that Baghdad was the perfect city to be the capital of the Islamic empire under the Abbasids. Mansur loved the site so much he is quoted as saying: "This is indeed the city that I am to found, where I am to live, and where my descendants will reign afterward". The city's growth was helped by its excellent location, based on at least two factors: it had control over strategic and trading routes along the Tigris, and it had an abundance of water in a dry climate. Water exists on both the north and south ends of the city, allowing all households to have a plentiful supply, which was very uncommon during this time. The city of Baghdad soon became so large that it had to be divided into three judicial districts: Madinat al-Mansur (the Round City), al-Sharqiyya (Karkh) and Askar al-Mahdi (on the West Bank). Baghdad eclipsed Ctesiphon, the capital of the Sassanians, which was located to the southeast. Today, all that remains of Ctesiphon is the shrine town of Salman Pak, just to the south of Greater Baghdad. Ctesiphon itself had replaced and absorbed Seleucia, the first capital of the Seleucid Empire, which had earlier replaced the city of Babylon. According to the traveler Ibn Battuta, Baghdad was one of the largest cities of its time, despite the damage it had received; its residents were mostly Hanbalis. Baghdad is also home to the grave of Abu Hanifa, above which there is a cell and a mosque. The Sultan of Baghdad at the time, Abu Said Bahadur Khan, was a Tatar king who had embraced Islam. In its early years, the city's name was a deliberate reminder of an expression in the Qur'an referring to Paradise. It took four years to build (764–768). Mansur assembled engineers, surveyors, and art constructionists from around the world to come together and draw up plans for the city. Over 100,000 construction workers came to survey the plans; many were paid salaries to start the building of the city. July was chosen as the starting time because two astrologers, Naubakht Ahvazi and Mashallah, believed that the city should be built under the sign of the lion, Leo. Leo is associated with fire and symbolises productivity, pride, and expansion. The bricks used to make the city measured the same on all four sides. Abu Hanifah counted the bricks, and he developed a canal which brought water to the work site for both human consumption and the manufacture of the bricks. Marble was also used to make buildings throughout the city, and marble steps led down to the river's edge. The basic framework of the city consisted of two large semicircles. The city was designed as a circle, leading it to be known as the "Round City". The original design shows a single ring of residential and commercial structures along the inside of the city walls, but the final construction added another ring inside the first. Within the city there were many parks, gardens, villas, and promenades. In the center of the city lay the mosque, as well as headquarters for the guards. 
The purpose or use of the remaining space in the center is unknown. The circular design of the city was a direct reflection of traditional Persian Sasanian urban design. The Sasanian city of Gur in Fars, built 500 years before Baghdad, is nearly identical in its general circular design, radiating avenues, and the government buildings and temples at the centre of the city. This style of urban planning contrasted with Ancient Greek and Roman urban planning, in which cities are designed as squares or rectangles with streets intersecting each other at right angles. The four surrounding walls of Baghdad were named Kufa, Basra, Khurasan, and Syria, because their gates pointed in the directions of these destinations. Each gate had double doors that were made of iron; the doors were so heavy it took several men to open and close them. The wall itself was about 44 m thick at the base and about 12 m thick at the top, and it stood 30 m high, including merlons, the solid parts of an embattled parapet usually pierced by embrasures. This wall was surrounded by another wall with a thickness of 50 m. The second wall had towers and rounded merlons, which surrounded the towers. This outer wall was protected by a solid glacis, made out of bricks and quicklime. Beyond the outer wall was a water-filled moat. The Golden Gate Palace, the residence of the caliph and his family, was in the middle of Baghdad, in the central square. In the central part of the building was a green dome that was 39 m high. Surrounding the palace were an esplanade and a waterside building, in which only the caliph could come riding on horseback. In addition, the palace was near other mansions and officers' residences. Near the Gate of Syria, a building served as the home for the guards. It was made of brick and marble. The palace governor lived in the rear part of the building and the commander of the guards in the front. In 813, after the death of caliph Al-Amin, the palace was no longer used as the home for the caliph and his family. The city's roundness has also been said to point to the fact that its design was based on Arabic script.
https://en.wikipedia.org/wiki?curid=4492
Outline of biology Biology – The natural science that involves the study of life and living organisms, including their structure, function, growth, origin, evolution, distribution, and taxonomy. Branch of biology – subdiscipline of biology, also referred to as a biological science (note that biology and all its branches are also life sciences). Related outlines: Outline of ecology, Outline of evolution, Outline of cell biology, Outline of biochemistry, Outline of genetics.
https://en.wikipedia.org/wiki?curid=4493