Guru Meditation The Guru Meditation is an error notice displayed by early versions of the Commodore Amiga computer when they crashed. It is analogous to the "Blue Screen of Death" in Microsoft Windows operating systems, or a kernel panic in Unix. It was later used as a message for unrecoverable errors in software such as Varnish and VirtualBox. When a Guru Meditation is displayed, the options are to reboot by pressing the left mouse button, or to invoke ROMWack by pressing the right mouse button. (ROMWack is a minimalist debugger built into the operating system which is accessible by connecting a 9600 bit/s terminal to the serial port.) The alert itself appears as a black rectangular box located in the upper portion of the screen. Its border and text are red for a normal Guru Meditation, or green/yellow for a Recoverable Alert, another kind of Guru Meditation. The screen goes black, and the power and disk-activity LEDs may blink immediately before the alert appears. In AmigaOS 1.x, programmed in ROMs known as Kickstart 1.1, 1.2 and 1.3, the errors are always red. In AmigaOS 2.x and 3.x, recoverable alerts are yellow, except for some very early versions of 2.x where they were green. Dead-end alerts are always red and terminal in all OS versions, except in one rare sequence of events: when a program written for a deprecated Kickstart (for example, 1.1) conditionally boots from disk on a more advanced Amiga with a Kickstart 3.x ROM running in compatibility mode (therefore eschewing the on-disk OS) and crashes with a red Guru Meditation, pressing the left mouse button may nonetheless restore the system, because the newer Kickstart recognizes the ill-advised low-level chipset call with which the older program pokes the hardware directly, and handles it. The alert occurred when there was a fatal problem with the system. If the system had no means of recovery, it could display the alert, even in systems with numerous critical flaws. In extreme cases, the alert could even be displayed if the system's memory was completely exhausted. The error is displayed as two fields, separated by a period. The format is #0000000x.yyyyyyyy in the case of a CPU error, or #aabbcccc.dddddddd in the case of a system software error. The first field is either the Motorola 68000 exception number that occurred (if a CPU error occurs) or an internal error identifier (such as an 'Out of Memory' code), in the case of a system software error. The second can be the address of a "Task" structure, or the address of a memory block whose allocation or deallocation failed. It is never the address of the code that caused the error. If the cause of the crash is uncertain, this number is rendered as 48454C50, which stands for "HELP" in hexadecimal ASCII characters (48=H, 45=E, 4C=L, 50=P). The text of the alert messages was completely baffling to most users. Only highly technically adept Amiga users would know, for example, that exception 3 was an address error, meaning the program had accessed a word on an unaligned boundary. Users without this specialized knowledge would have no recourse but to look for a "Guru" or to simply reboot the machine and hope for the best. There was a commercially available error handler for AmigaOS, before version 2.04, called GOMF (Get Outta My Face) made by Hypertek/Silicon Springs Development Corp. It was able to deal with many kinds of errors and gave the user a choice to either remove the offending process and associated screen, or allow the machine to show the Guru Meditation.
In many cases, removal of the offending process gave one the choice to save one's data and exit running programs before rebooting the system. When the damage was not extensive, one was able to continue using the machine. However, it did not save the user from all errors, and one might still have seen this error occasionally. Recoverable Alerts are non-critical crashes in the computer system. In most cases, it is possible to resume work and save files after a Recoverable Alert, while a normal, red Guru Meditation always results in an immediate reboot. It is, however, still recommended to reboot as soon as possible after encountering a Recoverable Alert, because the system may be in an unpredictable state that can cause data corruption. In the error code itself, the first byte specifies the area of the system affected, and its top bit is set if the error is a dead-end alert. The term "Guru Meditation Error" originated as an in-house joke in Amiga's early days. The company had a product called the "Joyboard", a game controller much like a joystick but operated by the feet, similar to the Wii Balance Board. Early in the development of the Amiga computer operating system, the company's developers became so frustrated with the system's frequent crashes that, as a relaxation technique, a game was developed where a person would sit cross-legged on the Joyboard, resembling an Indian guru. The player tried to remain extremely still; the winner of the game was the one who stayed still the longest. If the player moved too much, a "guru meditation" error occurred. The final unlockable balance activity in "Wii Fit" represents a similar game. The same activity is unlocked from the start in "Wii Fit Plus".
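The two-field layout described above can be illustrated with a short C sketch. This is a minimal illustration, not AmigaOS code: the sample code string is hypothetical, and only the documented facts are used (two eight-digit hexadecimal fields separated by a period; the first byte of the first field names the affected area; its top bit marks a dead-end alert).

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Hypothetical sample of a system software error code. */
    const char *guru = "8100000C.00C0FFEE";
    unsigned long first, second;

    /* The code is two 8-digit hex fields separated by a period. */
    if (sscanf(guru, "%8lx.%8lx", &first, &second) != 2) {
        fprintf(stderr, "not a #aabbcccc.dddddddd code\n");
        return EXIT_FAILURE;
    }

    unsigned area = (first >> 24) & 0x7F;        /* first byte: affected area   */
    int dead_end  = (int)((first >> 31) & 1);    /* top bit: dead-end alert     */

    printf("area 0x%02X, %s alert, second field (Task or memory block) 0x%08lX\n",
           area, dead_end ? "dead-end" : "recoverable", second);
    return EXIT_SUCCESS;
}

Run on the sample string, this would report area 0x01 and a dead-end alert; the second field is printed as-is, since, as noted above, it is an address rather than a code location.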
https://en.wikipedia.org/wiki?curid=13050
Gnumeric Gnumeric is a spreadsheet program that is part of the GNOME Free Software Desktop Project. Gnumeric version 1.0 was released on 31 December 2001. Gnumeric is distributed as free software under the GNU GPL; it is intended to replace proprietary spreadsheet programs like Microsoft Excel. Gnumeric was created and developed by Miguel de Icaza, but he has since moved on to other projects. The maintainer was Jody Goldberg. Gnumeric has the ability to import and export data in several file formats, including CSV, Microsoft Excel (write support for the more recent .xlsx format is incomplete), Microsoft Works spreadsheets (.wks), HTML, LaTeX, Lotus 1-2-3, OpenDocument and Quattro Pro; its native format is the "Gnumeric file format" (.gnumeric or .gnm), an XML file compressed with gzip. It includes all of the spreadsheet functions of the North American edition of Microsoft Excel and many functions unique to Gnumeric. Pivot tables and Visual Basic for Applications macros are not yet supported. Gnumeric's accuracy has helped it to establish a niche for statistical analysis and other scientific tasks. To improve the accuracy of Gnumeric, the developers are cooperating with the R Project. Gnumeric's interface for the creation and editing of graphs differs from that of other spreadsheet software: for editing a graph, Gnumeric displays a window where all the elements of the graph are listed, whereas other spreadsheet programs typically require the user to select the individual elements of the graph in the graph itself in order to edit them. Gnumeric releases were ported to Microsoft Windows until August 2014 (the latest versions were 1.10.16 and 1.12.17). Running a current version of Gnumeric on Windows is possible with MSYS2, though it requires the know-how of an experienced Linux/Unix user. After GTK+ 2.24.10 and 3.6.4, development of the Windows version was discontinued by GNOME; creating the Windows version had been complicated by bugs in old Windows versions of GTK+. Installing MSYS2 on Windows is a good way to use current GTK software, and GTK+ 2.24.10 and 3.6.4 are available online. Versions of GTK for 64-bit Windows are prepared by Tom Schoonjans (current examples are 2.24.32 and 3.24.12); this could also be a new start for a native 64-bit Windows version of Gnumeric. Another route is the Windows Subsystem for Linux (WSL) on Windows 10 Release 1709 and later: after installing a Linux distribution like Ubuntu, Debian or SUSE from the Microsoft Store, and with an X server like Xming, it is possible to run thousands of applications, including Gnumeric, directly; on Ubuntu, Gnumeric can then be installed from the distribution's package repositories.
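Because the native format is just gzip-compressed XML, a saved sheet can be inspected with any gzip-capable tool. The following minimal C sketch, with a hypothetical file name, uses zlib to stream the decompressed Gnumeric XML to standard output:

#include <stdio.h>
#include <zlib.h>

int main(void)
{
    /* Hypothetical input file in the native Gnumeric format. */
    gzFile f = gzopen("sheet.gnumeric", "rb");
    if (f == NULL) {
        perror("gzopen");
        return 1;
    }
    char line[4096];
    /* gzgets transparently decompresses, yielding the raw XML. */
    while (gzgets(f, line, sizeof line) != NULL)
        fputs(line, stdout);
    gzclose(f);
    return 0;
}

Compile with zlib linked in (for example, gcc peek.c -lz, where peek.c is a hypothetical file name); the same inspection can be done from a shell with gzip -dc.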
https://en.wikipedia.org/wiki?curid=13051
GNU Debugger The GNU Debugger (GDB) is a portable debugger that runs on many Unix-like systems and works for many programming languages, including Ada, C, C++, Objective-C, Free Pascal, Fortran and Go, with partial support for others. GDB was first written by Richard Stallman in 1986 as part of his GNU system, after his GNU Emacs was "reasonably stable". GDB is free software released under the GNU General Public License (GPL). It was modeled after the DBX debugger, which came with Berkeley Unix distributions. From 1990 to 1993 it was maintained by John Gilmore. Now it is maintained by the GDB Steering Committee, which is appointed by the Free Software Foundation. GDB offers extensive facilities for tracing and altering the execution of computer programs. The user can monitor and modify the values of programs' internal variables, and even call functions independently of the program's normal behavior. GDB target processors (as of 2003) include: Alpha, ARM, AVR, H8/300, Altera Nios/Nios II, System/370, System/390, x86 and its 64-bit extension x86-64, IA-64 "Itanium", Motorola 68000, MIPS, PA-RISC, PowerPC, SuperH, SPARC, and VAX. Lesser-known target processors supported in the standard release have included A29K, ARC, ETRAX CRIS, D10V, D30V, FR-30, FR-V, Intel i960, 68HC11, Motorola 88000, MCORE, MN10200, MN10300, NS32K, Stormy16, and Z8000. (Newer releases will likely not support some of these.) GDB has compiled-in simulators for even lesser-known target processors such as the M32R and V850. GDB is still actively developed. As of version 7.0, new features include support for Python scripting, and as of version 7.8, GNU Guile scripting as well. Since version 7.0, support for "reversible debugging" (allowing a debugging session to step backward, much like rewinding a crashed program to see what happened) is available. GDB offers a "remote" mode often used when debugging embedded systems. Remote operation is when GDB runs on one machine and the program being debugged runs on another. GDB can communicate with a remote "stub" that understands the GDB protocol through a serial device or TCP/IP. A stub program can be created by linking to the appropriate stub files provided with GDB, which implement the target side of the communication protocol. Alternatively, gdbserver can be used to remotely debug the program without needing to change it in any way. The same mode is also used by KGDB for debugging a running Linux kernel at the source level with GDB. With KGDB, kernel developers can debug a kernel in much the same way as they debug application programs. It makes it possible to place breakpoints in kernel code, step through the code, and observe variables. On architectures where hardware debugging registers are available, watchpoints can be set which trigger breakpoints when specified memory addresses are executed or accessed. KGDB requires an additional machine which is connected to the machine to be debugged using a serial cable or Ethernet. On FreeBSD, it is also possible to debug using FireWire direct memory access (DMA). The debugger does not contain its own graphical user interface, and defaults to a command-line interface, although it does contain a text user interface. Several front-ends have been built for it, such as UltraGDB, Xxgdb, Data Display Debugger (DDD), Nemiver, KDbg, the Xcode debugger, GDBtk/Insight, and HP Wildebeest Debugger GUI (WDB GUI).
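As a sketch of the remote mode described earlier (the port number, host name, program, and printed process id are hypothetical), gdbserver is started alongside the program on the target machine, and GDB connects to it from the development host:

On the target:
$ gdbserver :2345 ./example
Process ./example created; pid = 1234
Listening on port 2345

On the host:
$ gdb ./example
(gdb) target remote targethost:2345

From that point on, the usual GDB commands (breakpoints, stepping, printing variables) act on the process running on the target machine.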
IDEs such as Codelite, Dev-C++, Geany, GNAT Programming Studio (GPS), KDevelop, Qt Creator, Lazarus, MonoDevelop, Eclipse, NetBeans, and Visual Studio can interface with GDB. GNU Emacs has a "GUD mode", and tools for Vim exist (e.g. clewn). These offer facilities similar to debuggers found in IDEs. Some other debugging tools have been designed to work with GDB, such as memory leak detectors. Consider the following source code written in C (the exact function bodies are illustrative; what matters is that foo_len calls strlen on its argument, and main passes it a null pointer):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

size_t
foo_len( const char *s )
{
    return strlen (s);
}

int
main( int argc, char *argv[] )
{
    const char *a = NULL;
    printf( "size of a = %d\n", (int) foo_len (a) );
    return EXIT_SUCCESS;
}

Using the GCC compiler on Linux, the code above must be compiled using the -g flag in order to include appropriate debug information in the binary generated, thus making it possible to inspect it using GDB. Assuming that the file containing the code above is named example.c, the command for the compilation could be:

$ gcc example.c -Og -g -o example

And the binary can now be run:

$ ./example
Segmentation fault

Since the example code, when executed, generates a segmentation fault, GDB can be used to inspect the problem.

$ gdb ./example
GNU gdb (GDB) Fedora (7.3.50.20110722-13.fc16)
Copyright (C) 2011 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see: ...
Reading symbols from /path/example...done.
(gdb) run
Starting program: /path/example

Program received signal SIGSEGV, Segmentation fault.
0x0000000000400527 in foo_len (s=0x0) at example.c:8
8           return strlen (s);
(gdb) print s
$1 = 0x0

The problem is present in line 8, and occurs when calling the function strlen (because its argument, s, is NULL). Depending on the implementation of strlen (inline or not), the output can be different, e.g.:

GNU gdb (GDB) 7.3.1
Copyright (C) 2011 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "i686-pc-linux-gnu".
For bug reporting instructions, please see: ...
Reading symbols from /tmp/gdb/example...done.
(gdb) run
Starting program: /tmp/gdb/example

Program received signal SIGSEGV, Segmentation fault.
0xb7ee94f3 in strlen () from /lib/i686/cmov/libc.so.6
(gdb) bt

To fix the problem, the variable a (in the function main) must contain a valid string. Here is a fixed version of the code:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

size_t
foo_len( const char *s )
{
    return strlen (s);
}

int
main( int argc, char *argv[] )
{
    const char *a = "This is a test string";
    printf( "size of a = %d\n", (int) foo_len (a) );
    return EXIT_SUCCESS;
}

Recompiling and running the executable again inside GDB now gives a correct result: GDB prints the output of printf on the screen, and then informs the user that the program exited normally.
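With the illustrative 21-character string above, running the fixed binary inside GDB would look something like the following (the path and process id are hypothetical):

$ gcc example.c -Og -g -o example
$ gdb ./example
(gdb) run
Starting program: /path/example
size of a = 21
[Inferior 1 (process 1234) exited normally]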
https://en.wikipedia.org/wiki?curid=13052
Galeon Galeon is a discontinued Gecko-based web browser that was created by Marco Pesenti Gritti with the goal of delivering a consistent browsing experience to the GNOME desktop environment. It gained some popularity in the early 2000s due to its speed, its flexibility in configuration, and its features. A disagreement over the future of Galeon split the development team in 2002, which resulted in the departure of the browser's initial author and several other developers. This event marked the beginning of the browser's decline in popularity, which led to its discontinuation in September 2008. Some of Galeon's features were subsequently ported to Epiphany (now called Web), the descendant of Galeon. Galeon made use of Gecko's features, including its configuration options and standards support. Apart from that, Galeon had several features that were uncommon in browsers at that time. The project was started by Marco Pesenti Gritti with the goal of creating a web browser that would be fast and consistent with the GNOME desktop environment. The first public version (Galeon 0.6) was released in June 2000. The first releases of Galeon were criticised for lacking such basic features as cookie and proxy support, though the browser added some features with every release. Version 1.2 of Galeon introduced many new features that drew the attention of the general public. At the time of Galeon's creation, the most popular Linux browsers, including Netscape and Mozilla, were large multi-functional programs. This made them slow to start and often impractical due to their high memory usage and processor requirements. Opera was somewhat faster, but it was proprietary software distributed in trialware and adware versions, both of which lacked some of the functionality of the Microsoft Windows version. Galeon was widely seen as one of the best Linux browsers available. Polls revealed a substantial usage share for Galeon, though its popularity was regarded as owing to the lack of stability evident in Mozilla's browsers. With the release of a new version of the GTK+ widget toolkit, which was used to construct the user interface of Galeon, the team decided to write a new version of Galeon from scratch. At the same time the GNOME project adopted its new human interface guidelines, which promoted simplicity and uniform design. The Galeon team had differing opinions on the new guidelines. The author and lead developer, Marco Pesenti Gritti, endorsed them and saw the rewrite as an opportunity to make Galeon simpler. Many other developers believed that reducing the number of preferences and simplifying the user interface would harm the project. In November 2002, as the result of several discussions on the topic, Gritti decided to cease his work on Galeon, fork the project, and start development of a HIG-compliant web browser he called "Epiphany" (now known as Web). As Gritti no longer controlled the development of Galeon, the previous functionality was restored in subsequent releases and some new features were added, though development slowed after the split. At the same time the rising popularity of Firefox, its status as the default browser in major distributions, and the overwhelming number of its extensions led to a decline in Galeon's user base. Eventually the Galeon developers announced plans to halt development of Galeon, saying "the current approach is unsustainable" regarding the resources required to maintain it. Instead, they planned to develop a set of extensions for Epiphany to provide similar functionality.
Even after development ceased in September 2008, the browser remained popular, and in December 2011 it was still available in some Linux distributions' repositories, such as Debian 6 Squeeze, although it was not part of Debian 7 Wheezy. Galeon was praised for its customizability and speed compared to Netscape Navigator and Firefox, though Konqueror and Opera were still faster on older hardware. Galeon was also noted for its session handling and crash recovery. In November 2002, OSNews conducted a poll to determine the most popular Gecko-based browser, which included several browsers for Microsoft Windows, Mac OS X and Linux, but did not include Netscape Navigator or the Mozilla Suite. The Linux-only Galeon was the second most popular, after the cross-platform Firefox, at that time known as Phoenix. Critics noted Galeon's tricky plugin installation.
https://en.wikipedia.org/wiki?curid=13055
Gatling gun The Gatling gun is one of the best-known early rapid-fire, spring-loaded, hand-cranked weapons, and a forerunner of the modern machine gun and rotary cannon. Invented by Richard Gatling, it saw occasional use by the Union forces during the American Civil War in the 1860s, which was the first time it was employed in combat. It was later used in numerous military conflicts, including the Boshin War, the Anglo-Zulu War, and the assault on San Juan Hill during the Spanish–American War. It was also used by the Pennsylvania militia in episodes of the Great Railroad Strike of 1877, specifically in Pittsburgh. The Gatling gun's operation centered on a cyclic multi-barrel design which facilitated cooling and synchronized the firing-reloading sequence. Each barrel fired a single shot when it reached a certain point in the cycle, after which it ejected the spent cartridge, loaded a new round, and, in the process, allowed the barrel to cool. This configuration allowed higher rates of fire to be achieved without the barrels overheating. The Gatling gun was an early form of rotary cannon, and today modern rotary cannons are often referred to as "Gatling guns". The Gatling gun was designed by the American inventor Dr. Richard J. Gatling in 1861 and patented on November 4, 1862. Gatling wrote that he created it to reduce the size of armies and so reduce the number of deaths by combat and disease, and to show how futile war is. Although the first Gatling gun was capable of firing continuously, it required a person to crank it; therefore it was not a true automatic weapon. The Maxim gun, invented and patented in 1883, was the first true fully automatic weapon, making use of the fired projectile's recoil force to reload the weapon. Nonetheless, the Gatling gun represented a huge leap in firearm technology. Prior to the Gatling gun, the only weapons available to military forces capable of firing many projectiles in a short space of time were mass-firing volley weapons, like the Belgian and French mitrailleuse of the 1860s and 1870s, and field cannons firing canister shot, much like an upsized shotgun. The latter were widely used during and after the Napoleonic Wars. Although the maximum rate of fire was increased by firing multiple projectiles simultaneously, these weapons still needed to be reloaded after each discharge, which for multi-barrel systems like the "mitrailleuse" was cumbersome and time-consuming. This negated much of the advantage of their high rate of fire per discharge, making them much less powerful on the battlefield. In comparison, the Gatling gun offered a rapid and continuous rate of fire without having to be manually reloaded by opening the breech. The original Gatling gun was a field weapon which used multiple rotating barrels turned by a hand crank, firing loose (no links or belt) metal cartridge ammunition fed by gravity from a hopper. The Gatling gun's innovation lay in the use of multiple barrels to limit overheating, a rotating mechanism, and a gravity-fed reloading system, which allowed unskilled operators to achieve a relatively high rate of fire of 200 rounds per minute. The US Army adopted Gatling guns in several calibers, including .42 caliber, .45-70, .50 caliber, 1 inch, and (M1893 and later) .30 Army, with conversions of M1900 weapons to .30-03 and .30-06. The .45-70 weapon was also mounted on some US Navy ships of the 1880s and 1890s.
British manufacturer James George Accles, previously employed by Colt 1867–1886, developed a modified Gatling gun circa 1888 known as the Accles Machine Gun. Circa 1895 the American Ordnance Company acquired the rights to manufacture and distribute this weapon in the Americas. It was trialed by the US Navy in December 1895, and was said to be the only weapon to complete the trial out of five competing weapons, but was apparently not adopted by US forces. The Gatling gun was first used in warfare during the American Civil War. Twelve of the guns were purchased personally by Union commanders and used in the trenches during the Siege of Petersburg, Virginia (June 1864 – April 1865). Eight other Gatling guns were fitted on gunboats. The gun was not accepted by the American Army until 1866, when a sales representative of the manufacturing company demonstrated it in combat. On July 17, 1863, Gatling guns were purportedly used against New York anti-draft rioters. Two were brought by a Pennsylvania National Guard unit from Philadelphia to use against strikers in Pittsburgh. Gatling guns were famously "not" used at the Battle of the Little Bighorn, also known as "Custer's Last Stand", when Gen. George Armstrong Custer chose not to bring Gatling guns with his main force. In April 1867, a Gatling gun was purchased for the Argentine Army by minister Domingo F. Sarmiento under instructions from president Bartolomé Mitre. Captain Luis Germán Astete of the Peruvian Navy took with him dozens of Gatling guns from the United States to Peru in December 1879, during the War of the Pacific between Peru and Chile. Gatling guns were used by the Peruvian Navy and Army, especially in the Battle of Tacna (May 1880) and the Battle of San Juan (January 1881) against the invading Chilean Army. Lieutenant A.L. Howard of the Connecticut National Guard had an interest in the company manufacturing Gatling guns, and took a personally owned Gatling gun to Saskatchewan, Canada, in 1885 for use with the Canadian military against Métis rebels during Louis Riel's North-West Rebellion. Early multi-barrel guns were approximately the size and weight of artillery pieces, and were often perceived as a replacement for cannons firing grapeshot or canister shot. Gatling guns were even mounted aboard ships. Compared with earlier weapons such as the "mitrailleuse," which required manual reloading, the Gatling gun was more reliable and easier to operate, and had a lower, but continuous, rate of fire. The large wheels required to move these guns around required a high firing position, which increased the vulnerability of their crews. Sustained firing of gunpowder cartridges generated a cloud of smoke, making concealment impossible until smokeless powder became available in the late 19th century. Operators firing Gatling guns against troops of industrialized nations were themselves at risk, vulnerable to artillery they could not reach and snipers they could not see. The Gatling gun was used most successfully to expand European colonial empires by defeating indigenous warriors mounting massed attacks, including the Zulu, the Bedouin, and the Mahdists. Imperial Russia purchased 400 Gatling guns and used them against Turkmen cavalry and other nomads of central Asia. The British Army first deployed the Gatling gun in 1873–74 during the Anglo-Ashanti wars, and extensively during the latter actions of the 1879 Anglo-Zulu war. The Royal Navy used Gatling guns during the 1882 Anglo-Egyptian War.
Because of infighting within Army ordnance, the U.S. Army was still using Gatling guns during the Spanish–American War. A four-gun battery of Model 1895 ten-barrel Gatling guns in .30 Army, made by Colt's Arms Company, was formed into a separate detachment led by Lt. John "Gatling Gun" Parker. The detachment proved very effective, supporting the advance of American forces at the Battle of San Juan Hill. Three of the Gatlings with swivel mountings were used with great success against the Spanish defenders. During the American charge up San Juan and Kettle hills, the three guns fired a total of 18,000 .30 Army rounds in 8 1/2 minutes (an average of over 700 rounds per minute per gun of continuous fire) against Spanish troop positions along the crest of both hills, wreaking terrible carnage. Despite this remarkable achievement, the Gatling's weight and cumbersome artillery carriage hindered its ability to keep up with infantry forces over difficult ground, particularly in Cuba, where roads were often little more than jungle footpaths. By this time, the U.S. Marines had been issued the modern tripod-mounted M1895 Colt–Browning machine gun using the 6mm Lee Navy round, which they employed to defeat the Spanish infantry at the battle of Cuzco Wells. Gatling guns were used by the U.S. Army during the Philippine–American War. One such instance was the Battle of San Jacinto, fought on November 11, 1899 in San Jacinto in the Philippines, between Philippine Republican Army soldiers and American troops. The Gatling's weight and artillery carriage hindered its ability to keep up with American troops over uneven terrain, particularly in the Philippines, where outside the cities there were heavily foliaged forests and steep mountain paths. The Gatling gun operated by a hand-crank mechanism, with six barrels revolving around a central shaft (although some models had as many as ten). Each barrel fired once per revolution, at about the same position. The barrels, a carrier, and a lock cylinder were separate and all mounted on a solid plate revolving around a central shaft, mounted on an oblong fixed frame. Turning the crank rotated the shaft. The carrier was grooved and the lock cylinder was drilled with holes corresponding to the barrels. The casing was partitioned, and through this opening the barrel shaft was journaled. In front of the casing was a cam with spiral surfaces. The cam imparted a reciprocating motion to the locks when the gun rotated. Also in the casing was a cocking ring with projections to cock and fire the gun. Each barrel had a single lock, working in the lock cylinder on a line with the barrel. The lock cylinder was encased and joined to the frame. Early models had a fibrous matting stuffed in among the barrels, which could be soaked with water to cool the barrels down. Later models eliminated the matting as unnecessary. Cartridges, held in a hopper, dropped individually into the grooves of the carrier. The lock was simultaneously forced by the cam to move forward and load the cartridge, and when the cam was at its highest point, the cocking ring freed the lock and fired the cartridge. After the cartridge was fired, the continuing action of the cam drew back the lock, bringing with it the spent casing, which then dropped to the ground. The grouped barrel concept had been explored by inventors since the 18th century, but poor engineering and the lack of a unitary cartridge made previous designs unsuccessful.
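The hand-crank cycle described above lends itself to a simple illustration. The following C sketch is a hypothetical model, not a historical specification: each of six barrels passes through six stations (load, chamber, cock, fire, extract, eject) once per revolution of the crank, so at every crank position exactly one barrel is firing while the others load, eject, or cool.

#include <stdio.h>

#define BARRELS 6  /* the classic configuration; some models had ten */

int main(void)
{
    /* Illustrative station names for one revolution of the crank. */
    const char *station[BARRELS] = {
        "load cartridge from hopper",
        "chamber round (lock forced forward by cam)",
        "cock (cocking ring)",
        "FIRE (cam at its highest point)",
        "extract spent case",
        "eject case / barrel cools"
    };

    for (int step = 0; step < BARRELS; ++step) {   /* crank positions */
        printf("crank position %d:\n", step);
        for (int b = 0; b < BARRELS; ++b)          /* each barrel's station */
            printf("  barrel %d: %s\n", b + 1, station[(b + step) % BARRELS]);
    }
    return 0;
}

As a side check, the San Juan Hill figure quoted earlier is internally consistent: 18,000 rounds divided among 3 guns over 8.5 minutes gives 18,000 / 3 / 8.5 ≈ 706 rounds per minute per gun, matching the stated average of over 700.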
The initial Gatling gun design used self-contained, reloadable steel cylinders with a chamber holding a ball and black-powder charge, and a percussion cap on one end. As the barrels rotated, these steel cylinders dropped into place, were fired, and were then ejected from the gun. The innovative features of the Gatling gun were its independent firing mechanism for each barrel and the simultaneous action of the locks, barrels, carrier and breech. The ammunition that Gatling eventually implemented was a paper cartridge charged with black powder and primed with a percussion cap, because self-contained brass cartridges were not yet fully developed and available. The shells were gravity-fed into the breech through a hopper or simple box "magazine" with an unsprung gravity follower on top of the gun. Each barrel had its own firing mechanism. Despite self-contained brass cartridges replacing the paper cartridge in the 1860s, it was not until the Model 1881 that Gatling switched to the 'Bruce'-style feed system (U.S. Patents 247,158 and 343,532), which accepted two rows of .45-70 cartridges. While one row was being fed into the gun, the other could be reloaded, thus allowing sustained fire. The final gun required four operators. By 1886, the gun was capable of firing more than 400 rounds per minute. The smallest-caliber gun also had a Broadwell drum feed in place of the curved box of the other guns. The drum, named after L. W. Broadwell, an agent for Gatling's company, comprised twenty stacks of rounds arranged around a central axis, like the spokes of a wheel, each holding twenty cartridges with the bullet noses oriented toward the central axis. This invention was patented as U.S. Patent 110,338. As each stack emptied, the drum was manually rotated to bring a new stack into use until all 400 rounds had been fired. A more common variant had 240 rounds in twenty stands of twelve. By 1893, the Gatling was adapted to take the new .30 Army smokeless cartridge. The new M1893 guns featured six barrels, later increased to ten barrels, and were capable of a maximum (initial) rate of fire of 800–900 rounds per minute, though 600 rpm was recommended for continuous fire. Dr. Gatling later used examples of the M1893 powered by electric motor and belt to drive the crank. Tests demonstrated the electric Gatling could fire bursts of up to 1,500 rpm. The M1893, with minor revisions, became the M1895, and 94 guns were produced for the U.S. Army by Colt. Four M1895 Gatlings under Lt. John H. Parker saw considerable combat during the Santiago campaign in Cuba in 1898. The M1895 was designed to accept only the Bruce feeder. All previous models were unpainted, but the M1895 was painted olive drab (O.D.) green, with some parts left blued. The Model 1900 was very similar to the Model 1895, but with only a few components finished in O.D. green. The U.S. Army purchased a quantity of M1900s. All Gatling Models 1895–1903 could be mounted on an armored field carriage. In 1903, the Army converted its M1900 guns in .30 Army to fit the new .30-03 cartridge (standardized for the M1903 Springfield rifle) as the M1903. The later M1903-'06 was an M1903 converted to .30-06. This conversion was principally carried out at the Army's Springfield Armory arsenal repair shops. All models of Gatling guns were declared obsolete by the U.S. military in 1911, after 45 years of service.
After the Gatling gun was replaced in service by newer recoil- and gas-operated weapons, the approach of using multiple externally powered rotating barrels fell into disuse for many decades. Some examples were developed during the interwar years, but they existed only as prototypes or were rarely used. The concept resurfaced after World War II with the development of the Minigun and the M61 Vulcan. Many other versions of the Gatling gun were built from the late 20th century to the present, the largest of these being the 30mm GAU-8 Avenger autocannon. Current usage favors mounted guns, either vehicular or emplaced, where the rate of fire necessitates multiple barrels to space out the use of each and avoid melting a single barrel under fully automatic fire. These guns cannot be hand-fired by a person, and attempting to do so can be fatal: the rotational force of a modern minigun's extremely rapid rotation will throw an unsecured gun at the user.
https://en.wikipedia.org/wiki?curid=13057
East Germany East Germany, officially the German Democratic Republic (GDR; German: "Deutsche Demokratische Republik", DDR), was a state that existed from 1949 to 1990, the period when the eastern portion of Germany was part of the Eastern Bloc during the Cold War. Commonly described as a communist state in English usage, it described itself as a socialist "workers' and peasants' state". It consisted of territory that was administered and occupied by Soviet forces following the end of World War II: the Soviet occupation zone of the Potsdam Agreement, bounded on the east by the Oder–Neisse line. The Soviet zone surrounded West Berlin but did not include it; West Berlin remained outside the jurisdiction of the GDR. The GDR was established in the Soviet zone, while the Federal Republic of Germany, commonly referred to as West Germany, was established in the three western zones. A satellite state of the Soviet Union, the GDR began to function as a state on 7 October 1949, after Soviet occupation authorities had begun transferring administrative responsibility to German communist leaders in 1948. Soviet forces, however, remained in the country throughout the Cold War. Until 1989, the GDR was governed by the Socialist Unity Party of Germany (SED), although other parties nominally participated in its alliance organisation, the National Front of the German Democratic Republic. The SED made the teaching of Marxism–Leninism and the Russian language compulsory in schools. The economy was centrally planned and increasingly state-owned. Prices of housing, basic goods and services were heavily subsidised and set by central government planners rather than rising and falling through supply and demand. Although the GDR had to pay substantial war reparations to the Soviets, it became the most successful economy in the Eastern Bloc. Emigration to the West was a significant problem: many of the emigrants were well-educated young people, and their departure weakened the state economically. The government fortified its western borders and built the Berlin Wall in 1961. Many people attempting to flee were killed by border guards or by booby traps such as landmines. Many others spent large amounts of time imprisoned for attempting to escape. In 1989, numerous social, economic and political forces in the GDR and abroad led to the fall of the Berlin Wall and the establishment of a government committed to liberalisation. The following year, free and fair elections were held, and international negotiations led to the signing of the Final Settlement treaty on the status and borders of Germany. The GDR dissolved itself and Germany was reunified on 3 October 1990, its territory becoming part of a fully sovereign, reunified Federal Republic of Germany. Several of the GDR's leaders, notably its last communist leader Egon Krenz, were prosecuted after reunification for crimes committed during the Cold War. Geographically, the GDR bordered the Baltic Sea to the north, Poland to the east, Czechoslovakia to the southeast, and West Germany to the southwest and west. Internally, the GDR also bordered the Soviet sector of Allied-occupied Berlin, known as East Berlin, which it administered as its "de facto" capital. It also bordered the three sectors occupied by the United States, United Kingdom and France, known collectively as West Berlin. The three sectors occupied by the Western nations were sealed off from the GDR by the Berlin Wall from its construction in 1961 until it was brought down in 1989.
The official name was "Deutsche Demokratische Republik" (German Democratic Republic), usually abbreviated to "DDR" (GDR). Both terms were used in East Germany, with increasing usage of the abbreviated form, especially since East Germany considered West Germans and West Berliners to be foreigners following the promulgation of its second constitution in 1968. West Germans, the western media and statesmen initially avoided the official name and its abbreviation, instead using terms like "Ostzone" (Eastern Zone), "Sowjetische Besatzungszone" (Soviet Occupation Zone; often abbreviated to "SBZ") and "sogenannte DDR" or "so-called GDR". The centre of political power in East Berlin was referred to as "Pankow" (the seat of command of the Soviet forces in East Germany was referred to as Karlshorst). Over time, however, the abbreviation "DDR" was also increasingly used colloquially by West Germans and West German media. When used by West Germans, "Westdeutschland" (West Germany) was a term almost always in reference to the geographic region of Western Germany and not to the area within the boundaries of the Federal Republic of Germany. However, this use was not always consistent, and West Berliners frequently used the term "Westdeutschland" to denote the Federal Republic. Before World War II, "Ostdeutschland" (eastern Germany) was used to describe all the territories east of the Elbe (East Elbia), as reflected in the works of sociologist Max Weber and political theorist Carl Schmitt. Explaining the internal impact of the GDR regime from the perspective of German history in the long term, historian Gerhard A. Ritter (2002) has argued that the East German state was defined by two dominant forces: Soviet communism on the one hand, and German traditions filtered through the interwar experiences of German communists on the other. It was always constrained by the powerful example of the increasingly prosperous West, to which East Germans compared their nation. The changes wrought by the communists were most apparent in ending capitalism and transforming industry and agriculture, in the militarization of society, and in the political thrust of the educational system and the media. On the other hand, there was relatively little change made in the historically independent domains of the sciences, the engineering professions, the Protestant churches, and in many bourgeois lifestyles. Social policy, says Ritter, became a critical legitimization tool in the last decades and mixed socialist and traditional elements about equally. At the Yalta Conference during World War II, the Allies (the US, the UK, and the Soviet Union) agreed on dividing a defeated Nazi Germany into occupation zones, and on dividing Berlin, the German capital, among the Allied powers as well. Initially this meant the creation of three zones of occupation, i.e., American, British, and Soviet. Later, a French zone was carved out of the US and British zones. The ruling party of the new East German state, the SED, was formed in 1946 from the merger of the Communist Party of Germany (KPD) and the Social Democratic Party of Germany (SPD) in the Soviet zone. The two former parties had been notorious rivals when they were active before the Nazis consolidated all power and criminalised them. The unification of the two parties was symbolic of the new friendship of German socialists in defeating their common enemy; however, the communists, who held a majority, had virtually total control over policy. The SED was the ruling party for the entire duration of the East German state.
It had close ties with the Soviets, who maintained military forces in East Germany until the dissolution of the Soviet Union in 1991 (the Russian Federation continued to maintain forces in what had been East Germany until 1994), with the stated purpose of countering NATO bases in West Germany. As West Germany was reorganized and gained independence from its occupiers, the GDR was established in East Germany in 1949. The creation of the two states solidified the 1945 division of Germany. On 10 March 1952 (in what would become known as the "Stalin Note"), Stalin put forth a proposal to reunify Germany with a policy of neutrality, with no conditions on economic policies and with guarantees for "the rights of man and basic freedoms, including freedom of speech, press, religious persuasion, political conviction, and assembly" and free activity of democratic parties and organizations. This was turned down; reunification was not a priority for the leadership of West Germany, and the NATO powers declined the proposal, asserting that Germany should be able to join NATO and that such a negotiation with the Soviet Union would be seen as a capitulation. There have been several debates about whether a real chance for reunification was missed in 1952. In 1949, the Soviets turned control of East Germany over to the SED, headed by Wilhelm Pieck (1876–1960), who became president of the GDR and held the office until his death, while most executive authority was assumed by SED General Secretary Walter Ulbricht. Socialist leader Otto Grotewohl (1894–1964) became prime minister and served until his death. The government of East Germany denounced West German failures in accomplishing denazification and renounced ties to the Nazi past, imprisoning many former Nazis and preventing them from holding government positions. The SED set a primary goal of ridding East Germany of all traces of Nazism. At the Yalta and Potsdam conferences, the Allies established their joint military occupation and administration of Germany via the Allied Control Council (ACC), a four-power (US, UK, USSR, France) military government effective until the restoration of German sovereignty. In eastern Germany, the Soviet occupation zone (SBZ, "Sowjetische Besatzungszone") comprised the five states ("Länder") of Mecklenburg-Vorpommern, Brandenburg, Saxony, Saxony-Anhalt, and Thuringia. Disagreements over the policies to be followed in the occupied zones quickly led to a breakdown in cooperation between the four powers, and the Soviets administered their zone without regard to the policies implemented in the other zones. The Soviets withdrew from the ACC in 1948; subsequently, as the other three zones were increasingly unified and granted self-government, the Soviet administration instituted a separate socialist government in its zone. Yet, seven years after the Allies' Potsdam Agreement on a unified Germany, the USSR, via the Stalin Note (10 March 1952), proposed German reunification and superpower disengagement from Central Europe, which the three Western Allies (the United States, France, the United Kingdom) rejected. Soviet leader Joseph Stalin, a communist proponent of reunification, died in early March 1953. Similarly, Lavrenty Beria, the First Deputy Prime Minister of the USSR, pursued German reunification, but he was removed from power that same year before he could act on the matter. His successor, Nikita Khrushchev, rejected reunification as equivalent to returning East Germany for annexation to the West; hence reunification went unconsidered until 1989.
East Germany considered East Berlin to be its capital, and the Soviet Union and the rest of the Eastern Bloc diplomatically recognized East Berlin as the capital. However, the Western Allies disputed this recognition, considering the entire city of Berlin to be occupied territory governed by the Allied Control Council. According to Margarete Feinstein, East Berlin's status as the capital was largely unrecognized by the West and most Third World countries. In practice, the ACC's authority was rendered moot by the Cold War, East Berlin's status as occupied territory largely became a legal fiction, and the Soviet sector became fully integrated into the GDR. The deepening Cold War conflict between the Western Powers and the Soviet Union over the unresolved status of West Berlin led to the Berlin Blockade (24 June 1948 – 12 May 1949). The Soviet army initiated the blockade by halting all Allied rail, road, and water traffic to and from West Berlin. The Allies countered the Soviets with the Berlin Airlift (1948–49) of food, fuel, and supplies to West Berlin. On 21 April 1946, the Communist Party of Germany (KPD) and the part of the Social Democratic Party of Germany (SPD) in the Soviet zone merged to form the Socialist Unity Party of Germany (SED), which then won the elections of 1946. The SED's government nationalised infrastructure and industrial plants. In 1948, the German Economic Commission ("Deutsche Wirtschaftskommission", DWK) under its chairman Heinrich Rau assumed administrative authority in the Soviet occupation zone, thus becoming the predecessor of an East German government. On 7 October 1949, the SED established the "Deutsche Demokratische Republik" (German Democratic Republic, GDR), based on a socialist political constitution establishing its control of the Anti-Fascist National Front of the German Democratic Republic (NF), an omnibus alliance of every party and mass organisation in East Germany. The NF was established to stand for election to the "Volkskammer" (People's Chamber), the East German parliament. The first and only president of the German Democratic Republic was Wilhelm Pieck. However, after 1950, political power in East Germany was held by the First Secretary of the SED, Walter Ulbricht. On 16 June 1953, workers constructing the new Stalinallee boulevard in East Berlin in accordance with The Sixteen Principles of Urban Design rioted against a 10% production quota increase. Initially a labour protest, it soon included the general populace, and on 17 June similar protests occurred throughout the GDR, with more than a million people striking in some 700 cities and towns. Fearing anti-communist counter-revolution, on 18 June 1953 the government of the GDR enlisted the Soviet Occupation Forces to aid the police in ending the riot; some fifty people were killed and 10,000 were jailed. (See Uprising of 1953 in East Germany.) The German war reparations owed to the Soviets impoverished the Soviet zone of occupation and severely weakened the East German economy. In the 1945–46 period, the Soviets confiscated and transported to the USSR approximately 33% of the industrial plant, and by the early 1950s had extracted some US$10 billion in reparations in the form of agricultural and industrial products. The poverty of East Germany induced by reparations provoked the "Republikflucht" ("desertion from the republic") to West Germany, further weakening the GDR's economy. Western economic opportunities induced a brain drain. In response, the GDR closed the Inner German Border, and on the night of 12 August 1961, East German soldiers began erecting the Berlin Wall.
In 1971, Soviet leader Leonid Brezhnev had Ulbricht removed; Erich Honecker replaced him. While the Ulbricht government had experimented with liberal reforms, the Honecker government reversed them. The new government introduced a new East German constitution which defined the German Democratic Republic as a "republic of workers and peasants". Initially, East Germany claimed an exclusive mandate for all of Germany, a claim supported by most of the Communist bloc. It claimed that West Germany was an illegally-constituted NATO puppet state. However, from the 1960s onward, East Germany began recognizing itself as a separate country from West Germany and shared the legacy of the united German state of 1871–1945. This was formalized in 1974, when the reunification clause was removed from the revised East German constitution. West Germany, in contrast, maintained that it was the only legitimate government of Germany. From 1949 to the early 1970s, West Germany maintained that East Germany was an illegally constituted state. It argued that the GDR was a Soviet puppet state, and frequently referred to it as the "Soviet occupation zone". This position was shared by West Germany's allies as well until 1973. East Germany was recognized primarily by Communist countries and the Arab bloc, along with some "scattered sympathizers". According to the Hallstein Doctrine (1955), West Germany also did not establish (formal) diplomatic ties with any country (except the Soviets) that recognized East German sovereignty. In the early 1970s, the "Ostpolitik" ("Eastern Policy") of "Change Through Rapprochement" of the pragmatic government of FRG Chancellor Willy Brandt established normal diplomatic relations with the East Bloc states. This policy saw the Treaty of Moscow (August 1970), the Treaty of Warsaw (December 1970), the Four Power Agreement on Berlin (September 1971), the Transit Agreement (May 1972), and the Basic Treaty (December 1972), which relinquished any claims to an exclusive mandate over Germany as a whole and established normal relations between the two Germanys. Both countries were admitted into the United Nations on 18 September 1973. This also increased the number of countries recognizing East Germany to 55, including the US, UK and France, though these three still refused to recognize East Berlin as the capital and insisted on a specific provision in the UN resolution accepting the two Germanys into the UN to that effect. Following the Ostpolitik, the West German view was that East Germany was a "de facto" government within a single German nation and a "de jure" state organisation of parts of Germany outside the Federal Republic. The Federal Republic continued to maintain that it could not within its own structures recognize the GDR "de jure" as a sovereign state under international law, but it fully acknowledged that, within the structures of international law, the GDR was an independent sovereign state. By distinction, West Germany then viewed itself as being, within its own boundaries, not only the "de facto" and "de jure" government, but also the sole "de jure" legitimate representative of a dormant "Germany as a whole". The two Germanys relinquished any claim to represent the other internationally, which they acknowledged as necessarily implying a mutual recognition of each other as both capable of representing their own populations "de jure" in participating in international bodies and agreements, such as the United Nations and the Helsinki Final Act.
This assessment of the Basic Treaty was confirmed in a decision of the Federal Constitutional Court in 1973. Travel between the GDR and Poland, Czechoslovakia, and Hungary became visa-free from 1972. From the beginning, the newly formed GDR tried to establish its own separate identity. Because of the imperial and military legacy of Prussia, the SED repudiated continuity between Prussia and the GDR. The SED destroyed a number of symbolic relics of the former Prussian aristocracy: the Junker manor houses were torn down, the Berliner Stadtschloß was razed, and the equestrian statue of Frederick the Great was removed from East Berlin. Instead the SED focused on the progressive heritage of German history, including Thomas Müntzer's role in the German Peasants' War and the role played by the heroes of the class struggle during Prussia's industrialization. Especially after the Ninth Party Congress in 1976, East Germany upheld historical reformers such as Karl Freiherr vom Stein, Karl August von Hardenberg, Wilhelm von Humboldt, and Gerhard von Scharnhorst as examples and role models. In May 1989, following widespread public anger over the faking of results of local government elections, many citizens applied for exit visas or left the country contrary to GDR laws. The impetus for this exodus of East Germans was the removal of the electrified fence along Hungary's border with Austria on 2 May. Although formally the Hungarian frontier was still closed, many East Germans took the opportunity to enter the country via Czechoslovakia, and then make the illegal crossing from Hungary into Austria and West Germany beyond. By July, 25,000 East Germans had crossed into Hungary; most of them did not attempt the risky crossing into Austria but remained instead in Hungary or claimed asylum in West German embassies in Prague or Budapest. The opening of a border gate between Austria and Hungary at the Pan-European Picnic on August 19, 1989 then set in motion a chain reaction, at the end of which there was no longer a GDR and the Eastern Bloc had disintegrated. It was the largest escape movement from East Germany since the Berlin Wall was built in 1961. The idea of opening the border at a ceremony came from Otto von Habsburg, who proposed it to Miklós Németh, the then Hungarian Prime Minister, who promoted the idea. The patrons of the picnic, Habsburg and the Hungarian Minister of State Imre Pozsgay, who were not present at the event, saw the planned event as an opportunity to test Mikhail Gorbachev's reaction to an opening of the border in the Iron Curtain. In particular, it was examined whether Moscow would give the Soviet troops stationed in Hungary the command to intervene. The planned picnic was advertised extensively through posters and flyers among GDR holidaymakers in Hungary. The Austrian branch of the Paneuropean Union, which was then headed by Karl von Habsburg, distributed thousands of brochures inviting GDR holidaymakers to a picnic near the border at Sopron. The local Sopron organizers knew nothing of possible GDR refugees, expecting instead a local party with Austrian and Hungarian participation. But with the mass exodus at the Pan-European Picnic, the subsequent hesitant behaviour of the Socialist Unity Party of East Germany, and the non-intervention of the Soviet Union, the dam had broken: the clamp that held the Eastern Bloc together was failing.
Erich Honecker's reaction, published in the "Daily Mirror" of August 19, 1989, came too late and showed his loss of power: "Habsburg distributed leaflets far into Poland, on which the East German holidaymakers were invited to a picnic. When they came to the picnic, they were given gifts, food and Deutsche Mark, and then they were persuaded to come to the West." Tens of thousands of East Germans, alerted by the media, now made their way to Hungary, which was no longer ready to keep its borders completely closed or to oblige its border troops to use force of arms. The leadership of the GDR in East Berlin did not dare to completely lock the borders of their own country. The next major turning point in the exodus came on 10 September, when the Hungarian Foreign Minister Gyula Horn announced that his country would no longer restrict movement from Hungary into Austria. Within two days, 22,000 East Germans crossed into Austria, with tens of thousands following in the coming weeks. Many others demonstrated against the ruling party, especially in the city of Leipzig. The Leipzig demonstrations became a weekly occurrence, with a turnout of 10,000 people at the first demonstration on 2 October, peaking at an estimated 300,000 by the end of the month. The protests were surpassed in East Berlin, where half a million demonstrators turned out against the regime on 4 November. Kurt Masur, the conductor of the Leipzig Gewandhaus Orchestra, led local negotiations with the government and held town meetings in the concert hall. The demonstrations eventually led Erich Honecker to resign in October; he was replaced by a slightly more moderate communist, Egon Krenz. The massive demonstration in East Berlin on 4 November coincided with Czechoslovakia formally opening its border into West Germany. With the West more accessible than ever before, 30,000 East Germans made the crossing via Czechoslovakia in the first two days alone. To try to stem the outward flow of the population, the SED proposed a concessionary law loosening restrictions on travel. When this was rejected in the "Volkskammer" on 5 November, the Cabinet and the Politburo of the GDR resigned. This left only one avenue open for Krenz and the SED: completely abolishing travel restrictions between East and West. On 9 November 1989, a few sections of the Berlin Wall were opened, resulting in thousands of East Germans crossing freely into West Berlin and West Germany for the first time in nearly 30 years. Krenz resigned a month later, and the SED opened negotiations with the leaders of the incipient democratic movement, Neues Forum, to schedule free elections and begin the process of democratization. As part of this, the SED eliminated the clause in the East German constitution guaranteeing the Communists the leadership of the state. This change was approved in the "Volkskammer" on 1 December 1989 by a vote of 420 to 0. East Germany held its last elections in March 1990. The winner was a coalition headed by the East German branch of West Germany's Christian Democratic Union, which advocated speedy reunification. Negotiations (the 2+4 Talks) were held involving the two German states and the former Allies, which led to agreement on the conditions for German unification. By a two-thirds vote in the "Volkskammer" on 23 August 1990, the German Democratic Republic declared its accession to the Federal Republic of Germany. The five original East German states that had been abolished in the 1952 redistricting were restored.
On 3 October 1990, the five states officially joined the Federal Republic of Germany, while East and West Berlin united as a third city-state (in the same manner as Bremen and Hamburg). On 1 July, a currency union had preceded the political union: the "Ostmark" was abolished, and the West German "Deutsche Mark" became the common currency. Although the Volkskammer's declaration of accession to the Federal Republic had initiated the process of reunification, the act of reunification itself (with its many specific terms, conditions and qualifications, some of which involved amendments to the West German Basic Law) was achieved constitutionally by the subsequent Unification Treaty of 31 August 1990: a binding agreement between the former Democratic Republic and the Federal Republic, now recognising each other as separate sovereign states in international law. The treaty was then voted into effect before the agreed date for unification by both the Volkskammer and the Bundestag, by the constitutionally required two-thirds majorities, effecting on the one hand the extinction of the GDR and on the other the agreed amendments to the Basic Law of the Federal Republic. The great economic and socio-political inequalities between the two former German states required government subsidies for the full integration of the German Democratic Republic into the Federal Republic of Germany. Because of the resulting deindustrialization in the former East Germany, the causes of the failure of this integration continue to be debated. Some western commentators claim that the depressed eastern economy is a natural after-effect of a demonstrably inefficient command economy, but many East German critics contend that the shock-therapy style of privatization, the artificially high rate of exchange offered for the Ostmark, and the speed with which the entire process was implemented did not leave room for East German enterprises to adapt. East German political history can be divided into four periods: 1949–61, which saw the building of socialism; 1961–70, a period of stability and consolidation after the Berlin Wall closed off escape; 1971–85, termed the Honecker Era, which saw closer ties with West Germany; and 1985–89, which saw the decline and extinction of East Germany. The ruling political party in East Germany was the "Sozialistische Einheitspartei Deutschlands" (Socialist Unity Party of Germany, SED). It was created in 1946 through the Soviet-directed merger of the Communist Party of Germany (KPD) and the Social Democratic Party of Germany (SPD) in the Soviet-controlled zone. However, the SED quickly transformed into a full-fledged Communist party as the more independent-minded Social Democrats were pushed out. The Potsdam Agreement committed the Soviets to supporting a democratic form of government in Germany, though the Soviets' understanding of democracy was radically different from that of the West. As in other Soviet-bloc countries, non-communist political parties were allowed. Nevertheless, every political party in the GDR was forced to join the National Front of Democratic Germany, a broad coalition of parties and mass political organisations. The member parties were almost completely subservient to the SED and had to accept its "leading role" as a condition of their existence. However, the parties did have representation in the Volkskammer and received some posts in the government.
The Volkskammer also included representatives from the "mass organisations", such as the Free German Youth ("Freie Deutsche Jugend" or "FDJ") and the Free German Trade Union Federation. The Democratic Women's Federation of Germany also held seats in the Volkskammer. Important non-parliamentary mass organisations in East German society included the German Gymnastics and Sports Association ("Deutscher Turn- und Sportbund" or "DTSB") and People's Solidarity ("Volkssolidarität"), an organisation for the elderly. Another society of note was the Society for German-Soviet Friendship. After the fall of Communism, the SED was renamed the "Party of Democratic Socialism" (PDS), which continued for a decade after reunification before merging with the West German WASG to form the Left Party ("Die Linke"). The Left Party continues to be a political force in many parts of Germany, albeit drastically less powerful than the SED. The East German population declined by three million people over its forty-one-year history, from 19 million in 1948 to 16 million in 1990; of the 1948 population, some 4 million had been expelled from the lands east of the Oder-Neisse line, where the homes of millions of Germans had become part of Poland and the Soviet Union. This was in stark contrast to Poland, whose population increased during that period from 24 million in 1950 (a little more than East Germany's) to 38 million (more than twice East Germany's). The decline was primarily a result of emigration: about one quarter of East Germans left the country before the Berlin Wall was completed in 1961. After that time, East Germany had very low birth rates, except for a recovery in the 1980s, when its birth rate was considerably higher than West Germany's. Until 1952, East Germany comprised the capital, East Berlin (though legally it was not fully part of the GDR's territory), and the five German states of Mecklenburg-Vorpommern (renamed Mecklenburg in 1947), Brandenburg, Saxony-Anhalt, Thuringia, and Saxony, their post-war territorial demarcations approximating the pre-war German demarcations of the Middle German "Länder" (states) and "Provinzen" (provinces of Prussia). The western parts of two provinces, Pomerania and Lower Silesia, the remainder of which were annexed by Poland, remained in the GDR and were attached to Mecklenburg and Saxony, respectively. The East German Administrative Reform of 1952 established 14 "Bezirke" (districts) and "de facto" disestablished the five "Länder". The new "Bezirke", named after their district centres, were as follows: (i) Rostock, (ii) Neubrandenburg, and (iii) Schwerin created from the "Land" (state) of Mecklenburg; (iv) Potsdam, (v) Frankfurt (Oder), and (vii) Cottbus from Brandenburg; (vi) Magdeburg and (viii) Halle from Saxony-Anhalt; (ix) Leipzig, (xi) Dresden, and (xii) Karl-Marx-Stadt (Chemnitz until 1953 and again from 1990) from Saxony; and (x) Erfurt, (xiii) Gera, and (xiv) Suhl from Thuringia. East Berlin was made the country's 15th "Bezirk" in 1961 but retained special legal status until 1968, when the residents approved the new (draft) constitution. Although the city as a whole was legally under the control of the Allied Control Council, and despite the diplomatic objections of the Allied governments, the GDR administered the "Bezirk" of Berlin as part of its territory. The government of East Germany had control over a large number of military and paramilitary organisations through various ministries. Chief among these was the Ministry of National Defence.
Because of East Germany's proximity to the West during the Cold War (1945–92), its military forces were among the most advanced of the Warsaw Pact. Defining what was a military force and what was not is a matter of some dispute. The Nationale Volksarmee (NVA) was the largest military organisation in East Germany. It was formed in 1956 from the Kasernierte Volkspolizei (Barracked People's Police), the military units of the regular police (Volkspolizei), when East Germany joined the Warsaw Pact. From its creation, it was controlled by the Ministry of National Defence. It was an all-volunteer force until an eighteen-month conscription period was introduced in 1962. It was regarded by NATO officers as the best military in the Warsaw Pact. The NVA comprised ground, naval, and air branches. The border troops of the Eastern sector were originally organised as a police force, the Deutsche Grenzpolizei, similar to the Bundesgrenzschutz in West Germany, and were controlled by the Ministry of the Interior. Following the remilitarisation of East Germany in 1956, the Deutsche Grenzpolizei was transformed into a military force in 1961, modeled after the Soviet Border Troops, and transferred to the Ministry of National Defence as part of the National People's Army. In 1973, it was separated from the NVA, but it remained under the same ministry. At its peak, it numbered approximately 47,000 men. After the NVA was separated from the Volkspolizei in 1956, the Ministry of the Interior maintained its own public-order barracked reserve, known as the Volkspolizei-Bereitschaften (VPB). These units were, like the Kasernierte Volkspolizei, equipped as motorised infantry, and they numbered between 12,000 and 15,000 men. The Ministry of State Security (Stasi) included the Felix Dzerzhinsky Guards Regiment, which was mainly involved with facilities security and plain-clothes event security. They were the only part of the feared Stasi that was visible to the public, and so were very unpopular among the population. The Stasi numbered around 90,000 men, the Guards Regiment around 11,000–12,000 men. The "Kampfgruppen der Arbeiterklasse" (combat groups of the working class) numbered around 400,000 for much of their existence and were organised around factories. The KdA was the political-military instrument of the SED; it was essentially a "party army". All KdA directives and decisions were made by the Central Committee's "Politbüro". The KdA received their training from the Volkspolizei and the Ministry of the Interior. Membership was voluntary, but SED members were required to join as part of their membership obligation. Every man was required to serve eighteen months of compulsory military service; for the medically unfit and conscientious objectors, there were the "Baueinheiten" (construction units), established in 1964, two years after the introduction of conscription, in response to political pressure from the national Lutheran Protestant Church upon the GDR's government. In the 1970s, East German leaders acknowledged that former construction soldiers were at a disadvantage when they re-entered civilian life. The East German state promoted an "anti-imperialist" line that was reflected in all its media and all its schools. This line followed Lenin's theory of imperialism as the highest and last stage of capitalism, and Dimitrov's theory of fascism as the dictatorship of the most reactionary elements of financial capitalism.
Popular reaction to these measures was mixed, and Western media penetrated the country through cross-border television and radio broadcasts from West Germany and from the U.S. propaganda network Radio Free Europe. Dissidents, particularly professionals, sometimes fled to West Germany, which was relatively easy before the construction of the Berlin Wall in 1961. After receiving wider international diplomatic recognition in 1972–73, the GDR began active cooperation with Third World socialist governments and national liberation movements. While the USSR was in control of the overall strategy and Cuban armed forces were involved in the actual combat (mostly in the People's Republic of Angola and socialist Ethiopia), the GDR provided experts for military hardware maintenance and personnel training, and oversaw the creation of secret security agencies based on its own Stasi model. Contacts had already been established in the 1960s with Angola's MPLA, Mozambique's FRELIMO and the PAIGC in Guinea-Bissau and Cape Verde. In the 1970s, official cooperation was established with other self-proclaimed socialist governments and people's republics: the People's Republic of the Congo, the People's Democratic Republic of Yemen, the Somali Democratic Republic, Libya, and the People's Republic of Benin. The first military agreement was signed in 1973 with the People's Republic of the Congo. In 1979, friendship treaties were signed with Angola, Mozambique and Ethiopia. It was estimated that altogether 2,000–4,000 GDR military and security experts were dispatched to Africa. In addition, representatives from African and Arab countries and liberation movements underwent military training in the GDR. East Germany pursued an anti-Zionist policy; Jeffrey Herf argues that East Germany was waging an undeclared war on Israel. According to Herf, "the Middle East was one of the crucial battlefields of the global Cold War between the Soviet Union and the West; it was also a region in which East Germany played a salient role in the Soviet bloc's antagonism toward Israel." While East Germany saw itself as an "anti-fascist state", it regarded Israel as a "fascist state", and it strongly supported the Palestine Liberation Organization in its armed struggle against Israel. In 1974, the GDR government recognized the PLO as the "sole legitimate representative of the Palestinian people". The PLO declared the Palestinian state on 15 November 1988 during the First Intifada, and the GDR recognized the state prior to reunification. After becoming a member of the UN, East Germany "made excellent use of the UN to wage political warfare against Israel [and was] an enthusiastic, high-profile, and vigorous member" of the anti-Israeli majority of the General Assembly. The East German economy began poorly because of the devastation caused by the Second World War: the loss of so many young soldiers, the disruption of business and transportation, the Allied bombing campaigns that decimated cities, and the reparations owed to the USSR. The Red Army dismantled and transported to Russia the infrastructure and industrial plants of the Soviet Zone of Occupation. By the early 1950s, the reparations were being paid in agricultural and industrial products, and Lower Silesia, with its coal mines, and Szczecin, an important natural port, had been given to Poland by the decision of Stalin in accordance with the Potsdam Agreement. The socialist centrally planned economy of the German Democratic Republic was modelled on that of the USSR. In 1950, the GDR joined the COMECON trade bloc.
In 1985, collective (state) enterprises earned 96.7% of the net national income. To ensure stable prices for goods and services, the state paid 80% of basic supply costs. The estimated 1984 per capita income was $9,800 ($22,600 in 2015 dollars). In 1976, the average annual growth of GDP was approximately five percent. This made the East German economy the richest in the Soviet Bloc until reunification in 1990. Notable East German exports were photographic cameras, under the Praktica brand; automobiles, under the Trabant, Wartburg, and IFA brands; hunting rifles, sextants, typewriters and wristwatches. Until the 1960s, East Germans endured shortages of basic foodstuffs such as sugar and coffee. East Germans with friends or relatives in the West (or with any access to a hard currency) and the necessary Staatsbank foreign currency account could afford Western products and export-quality East German products via Intershop. Consumer goods were also available by post from the Danish Jauerfood and Genex companies. The government used money and prices as political devices, providing highly subsidised prices for a wide range of basic goods and services, in what was known as "the second pay packet". At the production level, artificial prices made for a system of semi-barter and resource hoarding. For the consumer, they led to the substitution of GDR money with time, barter, and hard currencies. The socialist economy became steadily more dependent on financial infusions from hard-currency loans from West Germany. East Germans, meanwhile, came to see their soft currency as worthless relative to the Deutsche Mark (DM). Economic problems also persisted in eastern Germany after reunification. In his book "The Shortest History of Germany", James Hawes quotes the Federal Office of Political Education (23 June 2009): "In 1991 alone, 153 billion Deutschmarks had to be transferred to eastern Germany to secure incomes, support businesses and improve infrastructure... by 1999 the total had amounted to 1.634 trillion Marks net... The sums were so large that public debt in Germany more than doubled." Many western commentators have maintained that loyalty to the SED was a primary criterion for getting a good job, and that professionalism was secondary to political criteria in personnel recruitment and development. Beginning in 1963, with a series of secret international agreements, East Germany recruited workers from Poland, Hungary, Cuba, Albania, Mozambique, Angola and North Vietnam. They numbered more than 100,000 by 1989. Many, such as future politician Zeca Schall (who emigrated from Angola in 1988 as a contract worker), stayed in Germany after the Wende. Religion became contested ground in the GDR, with the governing Communists promoting state atheism, although some people remained loyal to Christian communities. In 1957, the state authorities established a State Secretariat for Church Affairs to handle the government's contact with churches and religious groups; the SED remained officially atheist. In 1950, 85% of GDR citizens were Protestants, while 10% were Catholics. In 1961, the renowned philosophical theologian Paul Tillich claimed that the Protestant population in East Germany had the most admirable Church in Protestantism, because the Communists there had not been able to win a spiritual victory over them. By 1989, membership in the Christian churches had dropped significantly: Protestants constituted 25% of the population, Catholics 5%.
The share of people who considered themselves non-religious rose from 5% in 1950 to 70% in 1989. When it first came to power, the Communist party asserted the compatibility of Christianity and Marxism and sought Christian participation in the building of socialism. At first, the promotion of Marxist-Leninist atheism received little official attention. In the mid-1950s, as the Cold War heated up, atheism became a topic of major interest for the state, in both domestic and foreign contexts. University chairs and departments devoted to the study of scientific atheism were founded, and much literature (scholarly and popular) on the subject was produced. This activity subsided in the late 1960s amid perceptions that it had started to become counterproductive. Official and scholarly attention to atheism renewed beginning in 1973, though this time with more emphasis on scholarship and on the training of cadres than on propaganda. Throughout, the attention paid to atheism in East Germany was never intended to jeopardise the cooperation that was desired from those East Germans who were religious. East Germany was historically majority Protestant (primarily Lutheran) from the early stages of the Protestant Reformation onwards. In 1948, freed from the influence of the Nazi-oriented German Christians, Lutheran, Reformed and United churches from most parts of Germany came together as the Evangelical Church in Germany (EKD) at the Conference of Eisenach ("Kirchenversammlung von Eisenach"). In 1969, the regional Protestant churches in East Germany and East Berlin broke away from the EKD and formed the Federation of Protestant Churches in the German Democratic Republic ("Bund der Evangelischen Kirchen in der DDR", BEK), which the Moravian "Herrnhuter Brüdergemeine" also joined in 1970. In June 1991, following German reunification, the BEK churches merged back into the EKD. Between 1956 and 1971, the leadership of the East German Lutheran churches gradually changed its relations with the state from hostility to cooperation. From the founding of the GDR in 1949, the Socialist Unity Party sought to weaken the influence of the church on the rising generation. The church adopted an attitude of confrontation and distance toward the state. Around 1956 this began to develop into a more neutral stance accommodating conditional loyalty. The government was no longer regarded as illegitimate; instead, the church leaders started viewing the authorities as installed by God and, therefore, deserving of obedience by Christians. But on matters where the state demanded something which the churches felt was not in accordance with the will of God, the churches reserved their right to say no. There were both structural and intentional causes behind this development. Structural causes included the hardening of Cold War tensions in Europe in the mid-1950s, which made it clear that the East German state was not temporary. The loss of church members also made it clear to the leaders of the church that they had to come into some kind of dialogue with the state. The intentions behind the change of attitude varied from a traditional liberal Lutheran acceptance of secular power to a positive attitude toward socialist ideas. Manfred Stolpe became a lawyer for the Brandenburg Protestant Church in 1959 before taking up a position at church headquarters in Berlin. In 1969 he helped found the "Bund der Evangelischen Kirchen in der DDR" (BEK), where he negotiated with the government while at the same time working within the institutions of this Protestant body.
He won the regional elections for the Brandenburg state assembly at the head of the SPD list in 1990 and remained in the Brandenburg government until he joined the federal government in 2002. Apart from the Protestant state churches united in the EKD/BEK and the Catholic Church, there were a number of smaller Protestant bodies, including several Protestant free churches united in their own federations. The Moravian Church also had a presence as the "Herrnhuter Brüdergemeine", and there were other Protestants such as Methodists, Adventists, Mennonites and Quakers. The smaller Catholic Church in eastern Germany had a fully functioning episcopal hierarchy that was in full accord with the Vatican. During the early postwar years, tensions were high. The Catholic Church as a whole (and particularly the bishops) resisted both the East German state and Marxist ideology. The state allowed the bishops to lodge protests, which they did on issues such as abortion. After 1945, the Church did fairly well in integrating Catholic exiles from the lands to the east (which mostly became part of Poland) and in adjusting its institutional structures to meet the needs of a church within an officially atheist society. This meant an increasingly hierarchical church structure, whereas in the areas of religious education, press, and youth organisations a system of temporary staff was developed, one that took into account the special situation of Caritas, a Catholic charity organisation. By 1950, therefore, there existed a Catholic subsociety that was well adjusted to the prevailing conditions and capable of maintaining Catholic identity. With a generational change in the episcopacy taking place in the early 1980s, the state hoped for better relations with the new bishops, but the new bishops instead began holding unauthorised mass meetings, promoting international ties in discussions with theologians abroad, and hosting ecumenical conferences. The new bishops became less politically oriented and more involved in pastoral care and attention to spiritual concerns. The government responded by limiting international contacts for bishops. East Germany's culture was strongly influenced by communist thought and was marked by an attempt to define itself in opposition to the West, particularly West Germany and the United States. Critics of the East German state have claimed that the state's commitment to Communism was a hollow and cynical tool, Machiavellian in nature, but this assertion has been challenged by studies that have found the East German leadership genuinely committed to the advance of scientific knowledge, economic development, and social progress. However, Pence and Betts argue, the majority of East Germans over time increasingly came to regard the state's ideals as hollow, though a substantial number of East Germans regarded their culture as having a healthier, more authentic mentality than that of West Germany. GDR culture and politics were constrained by harsh censorship. The Puhdys and Karat were among the most popular mainstream bands in East Germany. Like most mainstream acts, they appeared in popular youth magazines such as "Neues Leben" and "Magazin". Other popular rock bands were City, Silly and Pankow. Most of these artists recorded on the state-owned AMIGA label. Schlager, which was very popular in the West, also gained a foothold early on in East Germany, and numerous schlager musicians gained national fame.
From 1962 to 1976, an international schlager festival was held in Rostock, with participants from between 18 and 22 countries each year. The city of Dresden held a similar international festival for schlager musicians from 1971 until shortly before reunification, and a national schlager contest was hosted yearly in Magdeburg from 1966 to 1971. Bands and singers from other Communist countries were popular, such as Czerwone Gitary from Poland, known as the "Rote Gitarren". The Czech singer Karel Gott, the "Golden Voice from Prague", was beloved in both German states. The Hungarian band Omega performed in both German states, and the Yugoslav band Korni Grupa toured East Germany in the 1970s. West German television and radio could be received in many parts of the East. The Western influence led to the formation of more "underground" groups with a decidedly Western-oriented sound, the so-called "Die anderen Bands" ("the other bands"), among them Die Skeptiker and Feeling B. Additionally, hip-hop culture reached the ears of East German youth. With videos such as "Beat Street" and "Wild Style", young East Germans were able to develop a hip-hop culture of their own. East Germans accepted hip hop as more than just a musical form: the entire street culture surrounding rap entered the region and became an outlet for oppressed youth. The government of the GDR was invested both in promoting the tradition of German classical music and in supporting composers writing new works in that tradition. Notable East German composers include Hanns Eisler, Paul Dessau, Ernst Hermann Meyer, Rudolf Wagner-Régeny, and Kurt Schwaen. Eisenach, the birthplace of Johann Sebastian Bach (1685–1750), housed a museum dedicated to him, featuring more than three hundred instruments; in 1980 it received some 70,000 visitors. In Leipzig, the Bach archive contains his compositions and correspondence as well as recordings of his music. Governmental support of classical music maintained some fifty symphony orchestras and ensembles, such as the Gewandhausorchester and Thomanerchor in Leipzig, the Sächsische Staatskapelle in Dresden, and the Berliner Sinfonie-Orchester and Staatsoper Unter den Linden in Berlin. Kurt Masur was among their most prominent conductors. East German theatre was originally dominated by Bertolt Brecht, who brought many artists back from exile and reopened the "Theater am Schiffbauerdamm" with his Berliner Ensemble. Other movements, meanwhile, tried to establish a "working-class theatre", played for the working class by the working class. After Brecht's death, conflicts began to arise about his legacy between his family (around Helene Weigel) and other artists, including Slatan Dudow, Erwin Geschonneck, Erwin Strittmatter, Peter Hacks, Benno Besson, Peter Palitzsch and Ekkehard Schall.
https://en.wikipedia.org/wiki?curid=13058
Granville, New South Wales Granville is a suburb in western Sydney, in the state of New South Wales, Australia. Granville is located west of the Sydney central business district, split between the local government areas of Cumberland Council and the City of Parramatta. South Granville is a separate suburb; Lisgar, Redfern, Heath and Mona Streets form the approximate border between Granville and South Granville. The Duck River provides a boundary with Auburn, to the east. In 1855, the Granville area was known as Parramatta Junction, named after the final stop of the first railway line in New South Wales. The Sydney–Parramatta line ran from the Sydney terminus, just south of today's Central railway station, to the area, which led to its development and attracted speculators and some local industries. In the early days of European settlement, timber was harvested to fuel the steam engines in Sydney and Parramatta. By the 1860s, the supply of timber was exhausted; what remained was taken by scavengers who made a living collecting firewood. Wattle bark found use with tanners, and the bark of stringybark trees was used for roofing huts. In 1862, a major estate, "Drainville", became subject to a mortgagee sale and was subdivided for villa homes and small farms. At the end of the decade, a tweed mill was established, steam-powered using water from the Duck River. In 1878, the locality received its own post office, which was then part of the stationmaster's house. The name 'Parramatta Junction' remained until 1880, when two public meetings voted that the name be changed. Some very strange names were suggested, including "Drainwell", "Vauxhall", "Nobbsville", and "Swagsville", but finally Granville was chosen, in honour of the British Foreign Secretary, Granville Leveson-Gower, 2nd Earl Granville. Even then a voice of protest was raised declaring the name "too French", but the dissenter was ignored. At this time, the place had a population of 372, of whom 176 were male and 196 female. In this era, some German settlers, Joseph Klein and P W Merkell, tried to establish vineyards in the area, but eventually found the land was not suited to this type of agriculture. Other farmers discovered the limitations of the local soils, and fruit growers complained about damage from flying foxes. Thus, the only practical use for the grasslands that replaced the original bushland was grazing dairy cattle. The Granville Municipality was formed in 1885, and the council carried on the local government of the area until 1948, when it became part of an enlarged City of Parramatta. On Anzac Day 1974, Granville was partially cut off by flooding of the Duck Creek stormwater channel after torrential rain fell over the area: 135 millimetres of rain fell between 11.30 pm and 12.30 am at Guildford, and the ensuing flood did major damage through Granville. The nearby RSL club was damaged, and many of the club's old photographs and honour boards were destroyed. Granville is also the location of the Granville railway disaster, which occurred on 18 January 1977 when a commuter train derailed just before the Bold Street overpass and hit a stanchion, causing the bridge to collapse. 83 people perished, making it the worst rail disaster in Australian history. Granville has a number of heritage-listed sites. The suburb has a mixture of residential, commercial and industrial developments.
The commercial and residential developments are mostly around Granville railway station and Parramatta Road. Granville is primarily dominated by freestanding weatherboard, fibro and unrendered brick buildings. The area is no longer exactly "typical" quarter-acre-block territory, but such blocks are still reasonably common. Terraced houses are rare but increasing in number. Apartment blocks, generally three to four storeys in height, are also becoming more common in the vicinity of the railway station. One building that deserves attention is the "Crest" building on the corner of Blaxcell and Redfern Streets, built by Hoyts in 1948 as a movie theatre and used for screening films until 1963. The structure of the building is of a Quonset hut design, while the facade and interior are of a post-Art Deco and post-Moderne eclectic style, influenced by the "Picture Palace" architecture popularly used for movie theatres. It is now used as a function hall. The Crest Theatre is listed on the NSW State Heritage Register as being of "State significance", as one of the few cinemas built in Australia in the 1940s. Externally and internally the building remains largely intact, though the signage on the external decorative pier now reads "B-L-O-U-Z-A" rather than the original "H-O-Y-T-S" (later it was "B-I-N-G-O"). Granville railway station, located on the Main Suburban line, is a major station on the T1 Northern and Western Lines and the T2 Inner West & Leppington Line of the Sydney Trains network, and it is wheelchair accessible. Granville's newly built bus interchange and a car park are located adjacent to the train station, with bike racks and lockers nearby and taxi ranks just south of the station. Granville is served by Transdev NSW, which operates three bus routes via Granville railway station; the station is also served by one NightRide (night bus) route. Parramatta Road has been an important thoroughfare for Sydney from its earliest days, and from Parramatta the major western road for the state is the Great Western Highway. The M4 Western Motorway, running parallel to the Great Western Highway, has taken much of the traffic away from these roads, with entrance and exit ramps close to Parramatta. Granville has a major college of Technical and Further Education, which is part of the South Western Sydney Institute of TAFE. Schools include Granville Boys High School, founded in 1926, Delany College, Granville Public School, Granville East Public School, Blaxcell Street Public School and Holy Family Catholic School. The suburb is also home to a Cumberland Council branch library. The suburb boasts four pubs: the Royal Hotel and the Granville Hotel are located south and north of the railway line respectively, the Rosehill Hotel is on the northern side of Parramatta Road, and the Vauxhall Inn is on the same street on the western edge of Granville, on the corner of Woodville Road. Granville is also home to a sub-branch club of the RSL, known as Granville Diggers, whose attractions include live music, bingo and karaoke. Granville has an Olympic-size pool and a football facility. Historic Garside Park is home to the State Super League and Super Youth League club Granville Rage. Sydney Speedway is a dirt-track speedway which opened in 1977 at the old Granville Showground as the Parramatta Speedway.
The clay surface caters mainly to sprintcars and has been home to some of Australia's greatest drivers, including ten-time Australian Sprintcar Champion Garry Rush and multiple title holders George Tatnell, his son Brooke Tatnell, and Max Dumesny. The speedway is also the only venue outside North and Central America to have hosted a round of the famous World of Outlaws sprintcar series. Granville Magpies Soccer Club entered the Sydney competition in the early 1880s and competed with distinction until the early 2000s. The club originally played matches at a paddock behind Hudson Brothers' Works in Clyde before relocating to Macarthur Park, known nowadays as F.S. Garside Park. At the 2011 census, there were 13,989 residents in Granville. More than half of the residents were born outside Australia, with the top countries of birth being India, China and Lebanon. Three-quarters of residents spoke a language other than English at home; the most common other languages were Arabic 18.1%, Cantonese 5.5%, Mandarin 4.8%, Turkish 2.3% and Tongan 2.0%. The housing in Granville was evenly spread between detached houses and higher-density units or apartments. 46.3% of residents were renting their homes, higher than the national average of 29.6%. Data from the 2016 census shows that the population of Granville was 15,332.
https://en.wikipedia.org/wiki?curid=13060
Georg Philipp Telemann (14 March 1681 – 25 June 1767) was a German Baroque composer and multi-instrumentalist. Almost completely self-taught in music, he became a composer against his family's wishes. After studying in Magdeburg, Zellerfeld, and Hildesheim, Telemann entered the University of Leipzig to study law, but eventually settled on a career in music. He held important positions in Leipzig, Sorau, Eisenach, and Frankfurt before settling in Hamburg in 1721, where he became musical director of that city's five main churches. While Telemann's career prospered, his personal life was always troubled: his first wife died less than two years after their marriage, and his second wife had extramarital affairs and accumulated a large gambling debt before leaving him. Telemann is one of the most prolific composers in history (at least in terms of surviving oeuvre) and was considered by his contemporaries to be one of the leading German composers of the time. He was compared favorably both to his friend Johann Sebastian Bach, who made Telemann the godfather and namesake of his son Carl Philipp Emanuel, and to George Frideric Handel, whom Telemann also knew personally. As part of his duties, he wrote a considerable amount of music for educating organists under his direction, including 48 chorale preludes and 20 small (modal) fugues to accompany his chorale harmonizations for 500 hymns. His music incorporates French, Italian, and German national styles, and he was at times even influenced by Polish popular music. He remained at the forefront of all new musical tendencies, and his music stands as an important link between the late Baroque and early Classical styles. The Telemann Museum in Hamburg is dedicated to him. Telemann was born in Magdeburg, then the capital of the Duchy of Magdeburg, Brandenburg-Prussia. His father Heinrich, deacon at the Church of the Holy Spirit ("Heilige-Geist-Kirche"), died when Telemann was four. The future composer received his first music lessons at 10 from a local organist and became immensely interested in music in general, and composition in particular. Despite opposition from his mother and relatives, who forbade any musical activities, Telemann found it possible to study and compose in secret, even creating an opera at age 12. In 1697, after studies at the Domschule in Magdeburg and at a school in Zellerfeld, Telemann was sent to the famous Gymnasium Andreanum at Hildesheim, where his musical talent flourished, supported by school authorities, including the rector himself. Telemann became equally adept at composing and performing, teaching himself flute, oboe, violin, viola da gamba, recorder, double bass, and other instruments. In 1701, he graduated from the Gymnasium and went to Leipzig to become a student at Leipzig University, where he intended to study law. He ended up becoming a professional musician, regularly composing works for the Nikolaikirche and even the Thomaskirche (St. Thomas Church). In 1702, he became director of the municipal opera house "Opernhaus auf dem Brühl", and later music director at the Neukirche. Prodigiously productive, Telemann supplied a wealth of new music for Leipzig, including several operas, one of which was his first major opera, "Germanicus". However, he became engaged in a conflict with the cantor of the Thomaskirche, Johann Kuhnau. The conflict intensified when Telemann started employing numerous students for his projects, including Kuhnau's own students from the Thomasschule.
Telemann left Leipzig in 1705 at the age of 24, after receiving an invitation to become "Kapellmeister" at the court of Count Erdmann II of Promnitz at Sorau (now Żary, in Poland). His career there was cut short in early 1706 by the hostilities of the Great Northern War, and after a short period of travels he entered the service of Duke Johann Wilhelm in Eisenach, the town where Johann Sebastian Bach was born. He became "Konzertmeister" on 24 December 1708 and Secretary and "Kapellmeister" in August 1709. During his tenure at Eisenach, Telemann wrote a great deal of music: at least four annual cycles of church cantatas, dozens of sonatas and concertos, and other works. In 1709, he married Amalie Louise Juliane Eberlin, lady-in-waiting to the Countess of Promnitz and daughter of the musician Daniel Eberlin. Their daughter was born in January 1711, but the mother died soon afterwards, leaving Telemann depressed and distraught. After less than a year he sought another position, and he moved to Frankfurt on 18 March 1712, at the age of 31, to become city music director and "Kapellmeister" at the Barfüßerkirche and St. Catherine's Church. In Frankfurt he fully developed his mature personal style. Here, as in Leipzig, he was a powerful force in the city's musical life, creating music for two major churches, civic ceremonies, and various ensembles and musicians. By 1720, he had adopted the da capo aria, a form already cultivated by composers such as Domenico Scarlatti. Operas such as "Narciso", brought to Frankfurt in 1719 and written in the Italian idiom, also left a mark on Telemann's output. On 28 August 1714, three years after his first wife had died, Telemann married his second wife, Maria Catharina Textor, daughter of a Frankfurt council clerk. They eventually had nine children together. The marriage was a source of much personal happiness and helped stimulate his composing. Telemann continued to be extraordinarily productive and successful, even augmenting his income by working for Eisenach employers as a "Kapellmeister" "von Haus aus", that is, regularly sending new music while not actually living in Eisenach. Telemann's first published works also appeared during the Frankfurt period. His output increased rapidly, for he fervently composed overture-suites and chamber music, most of which remains underappreciated. In the latter half of the Frankfurt period, he composed an innovative work, his Viola Concerto in G major, which is twice the length of his violin concertos. It was also here that he composed his first choral masterpiece, the "Brockes Passion", in 1716. The composer, however, was still ambitious and wished for a better post, so in 1721 he accepted the invitation to work in Hamburg as "Kantor" of the Johanneum Lateinschule and music director of the five largest churches. Soon after his arrival, Telemann encountered some opposition from church officials who found his secular music and activities to be too much of a distraction for both Telemann himself and the townsfolk. The next year, when Johann Kuhnau died and the city of Leipzig was looking for a new "Thomaskantor", Telemann applied for the job and was approved, yet declined after the Hamburg authorities agreed to give him a suitable raise. After another candidate, Christoph Graupner, declined, the post went to Johann Sebastian Bach. Telemann took a few short trips outside Germany at this time; later in the Hamburg period, however, he traveled to Paris and stayed for eight months, from 1737 into 1738.
He heard and was impressed by "Castor et Pollux", an opera by the French composer Jean-Philippe Rameau. From then on, he incorporated the French operatic style into his vocal works; before then, his influences had been primarily Italian and German. Apart from that trip, Telemann remained in Hamburg for the rest of his life. A vocal masterpiece of this period is his "St Luke Passion" of 1728, a prime example of his fully matured vocal style. His first years in Hamburg were plagued by marital troubles: his wife's infidelity, and her gambling debts, which amounted to a sum larger than Telemann's annual income. The composer was saved from bankruptcy by the efforts of his friends and by the numerous successful music and poetry publications Telemann made during the years 1725 to 1740. By 1736, husband and wife were no longer living together because of their financial disagreements. Although still active and fulfilling the many duties of his job, Telemann became less productive in the 1740s, for he was in his 60s. He took up theoretical studies, as well as hobbies such as gardening and cultivating exotic plants, something of a fad in Hamburg at that time and a hobby shared by Handel. Most of the music of the 1750s appears to have been parodied from earlier works. Telemann's eldest son Andreas died in 1755, and Andreas' son Georg Michael Telemann was raised by the aging composer. Troubled by health problems and failing eyesight in his last years, Telemann was still composing into the 1760s. He died on the evening of 25 June 1767 from what was recorded at the time as a "chest ailment". He was succeeded at his Hamburg post by his godson, Johann Sebastian Bach's second son Carl Philipp Emanuel Bach. Telemann was one of the most prolific major composers of all time: his all-encompassing oeuvre comprises more than 3,000 compositions, half of which have been lost, and most of which have not been performed since the 18th century. From 1708 to 1750, Telemann composed 1,043 sacred cantatas and 600 overture-suites, as well as concertos for combinations of instruments that no other composer of the time employed. The first accurate estimate of the number of his works was provided by musicologists only during the 1980s and 1990s, when extensive thematic catalogues were published. During his lifetime and the latter half of the 18th century, Telemann was very highly regarded by colleagues and critics alike. Numerous theorists (Marpurg, Mattheson, Quantz, and Scheibe, among others) cited his works as models, and major composers such as J. S. Bach and Handel bought and studied his published works. He was immensely popular not only in Germany but also in the rest of Europe: orders for editions of Telemann's music came from France, Italy, the Netherlands, Belgium, the Scandinavian countries, Switzerland, and Spain. It was only in the early 19th century that his popularity came to a sudden halt. Most lexicographers started dismissing him as a "polygraph" who composed too many works, a "Vielschreiber" for whom quantity came before quality. Such views were influenced by an account of Telemann's music by Christoph Daniel Ebeling, a late-18th-century critic who in fact praised Telemann's music and made only passing critical remarks about his productivity. After the Bach revival, Telemann's works were judged as inferior to Bach's and lacking in deep religious feeling.
For example, by 1911 the "Encyclopædia Britannica" lacked an article about Telemann, and in one of its few mentions of him referred to "the vastly inferior work of lesser composers such as Telemann" in comparison to Handel and Bach. Particularly striking examples of such judgements were produced by the noted Bach biographers Philipp Spitta and Albert Schweitzer, who criticized Telemann's cantatas and then praised works they thought were composed by Bach, but which were in fact composed by Telemann. The last performance of a substantial work by Telemann ("Der Tod Jesu") occurred in 1832, and it was not until the 20th century that his music started being performed again. The revival of interest in Telemann began in the first decades of the 20th century and culminated in the Bärenreiter critical edition of the 1950s. Today each of Telemann's works is usually given a TWV number, which stands for "Telemann-Werke-Verzeichnis" (Telemann Works Catalogue). Telemann's music was one of the driving forces behind the late Baroque and the early Classical styles. Starting in the 1710s, he became one of the creators and foremost exponents of the so-called German mixed style, an amalgam of German, French, Italian and Polish styles. Over the years, his music gradually changed and started incorporating more and more elements of the galant style, but he never completely adopted the ideals of the nascent Classical era: Telemann's style remained contrapuntally and harmonically complex, and as early as 1751 he dismissed much contemporary music as too simplistic. Composers he influenced musically included pupils of J. S. Bach in Leipzig, such as Wilhelm Friedemann Bach, Carl Philipp Emanuel Bach and Johann Friedrich Agricola, as well as the composers who performed under his direction in Leipzig (Christoph Graupner, Johann David Heinichen and Johann Georg Pisendel), composers of the Berlin "lieder" school, and, finally, his numerous pupils, none of whom, however, became major composers. Equally important for the history of music were Telemann's publishing activities. By pursuing exclusive publication rights for his works, he set one of the most important early precedents for regarding music as the intellectual property of the composer. The same attitude informed his public concerts, where Telemann would frequently perform music originally composed for ceremonies attended only by a select few members of the upper class.
https://en.wikipedia.org/wiki?curid=13062
Granville rail disaster The Granville rail disaster occurred on Tuesday 18 January 1977 at Granville, New South Wales, a western suburb of Sydney, when a crowded commuter train derailed, running into the supports of a road bridge that collapsed onto two of the train's passenger carriages. It remains the worst rail disaster in Australian history and the greatest post-war loss of life in a confined area: 84 people died, more than 213 were injured, and some 1,300 were affected. The official enquiry found the primary cause of the crash to be poor fastening of the track, which caused the track to spread, derailing the locomotive. The train involved in the disaster consisted of eight passenger carriages hauled by 46 class electric locomotive 4620, and had commenced its journey towards Sydney from Mount Victoria in the Blue Mountains at 6:09 am. At approximately 8:10 am, it was approaching Granville railway station when the locomotive derailed and struck one of the steel-and-concrete pillars supporting the bridge carrying Bold Street over the railway cutting. The derailed locomotive and the first two carriages passed the bridge. The first carriage broke free from the other carriages and was torn open when it collided with a severed mast beside the track, killing eight passengers. The remaining carriages came to a halt with the second carriage clear of the bridge; the rear half of the third carriage and the forward half of the fourth carriage came to rest under the weakened bridge. Within seconds, with all its supports demolished, the bridge and several motor cars on top of it crashed onto the carriages, crushing them and the passengers inside. Of the passengers travelling in the third and fourth carriages, half were killed instantly when the bridge collapsed, crushing them in their seats. Several injured passengers were trapped in the train for hours after the accident, with part of the bridge crushing a limb or torso. Some had been conscious and lucid, talking to rescuers, but died of crush syndrome soon after the weight was removed from their bodies; this resulted in changes to rescue procedures for these kinds of accidents. Rescuers also faced difficulties because the weight of the bridge was still crushing the affected carriages, reducing the space in which they could work to free survivors; eventually it was declared that no one was to attempt further entry until the bridge had been lifted. Soon after, the bridge settled a further two inches onto the train, trapping two rescuers and crushing a portable generator "like butter". Another danger came from gas: LPG cylinders were kept year-round on board the train to be used in winter for heating, and several people were overcome by gas leaking from ruptured cylinders. The leaking gas also prevented the immediate use of powered rescue tools. The NSW Fire Brigade provided ventilation equipment to dispel the gas, and a constant film of water was sprayed over the accident site to prevent the possibility of the gas igniting. The train driver, the assistant crewman (the "second man"), and the motorists driving on the fallen bridge all survived. The rescue operation lasted from 8:12 am on the Tuesday until 6:00 am on the Thursday. Ultimately, 84 people were killed in the accident, a toll that included an unborn child. The bridge was rebuilt as a single span without any intermediate support piers, and other bridges of similar design had their piers reinforced.
The original inquiry into the accident found that the primary cause of the crash was "the very unsatisfactory condition of the permanent way": the poor fastening of the track caused the track to spread, allowing the left front wheel of the locomotive to come off the rail. Another contributing factor was the structure of the bridge itself. When the bridge was built, the base of its deck was found to be one metre lower than the road, so concrete was added on top to build the surface up level with the road; this additional weight contributed significantly to the destruction of the wooden train carriages. The disaster prompted substantial increases in rail-maintenance expenditure. The train driver, Edward Olencewicz, was exonerated by the inquiry. For 39 years, those affected by the disaster had little public voice, until the Granville Train Disaster Association Inc. was formed to represent the concerns of survivors, relatives and friends. Through Barry J Gobbe OAM JP and Meredith Knight JP, the association approached the Minister for Transport, Andrew Constance, and the New South Wales Premier, Gladys Berejiklian, requesting an apology for the way the people of the disaster had been treated by the Wran government of the day. On 4 May 2017, Berejiklian gave a formal apology to the victims of the disaster in New South Wales Parliament House. Shortly after the disaster, a voluntary group had collected donations to erect the memorial wall used for ongoing memorial services. Families and friends of the victims and survivors gather annually with surviving members of the rescue crews; the ceremony ends with the throwing of 84 roses onto the tracks to mark the number of passengers killed. In 2007, a plaque was placed on the bridge to mark the efforts of railway workers who assisted in rescuing survivors from the train. The original group, known as 'the trust', made submissions on rail safety issues, including recommending that fines for safety breaches be dedicated to rail safety improvements, and campaigning for the establishment of an independent railway safety ombudsman.
https://en.wikipedia.org/wiki?curid=13064
George Gershwin George Gershwin (born Jacob Bruskin Gershowitz; September 26, 1898 – July 11, 1937) was an American composer and pianist whose compositions spanned both popular and classical genres. Among his best-known works are the orchestral compositions "Rhapsody in Blue" (1924) and "An American in Paris" (1928), the songs "Swanee" (1919) and "Fascinating Rhythm" (1924), the jazz standard "I Got Rhythm" (1930), and the opera "Porgy and Bess" (1935), which spawned the hit "Summertime". Gershwin studied piano under Charles Hambitzer and composition with Rubin Goldmark, Henry Cowell, and Joseph Brody. He began his career as a song plugger but soon started composing Broadway theater works with his brother Ira Gershwin and with Buddy DeSylva. He moved to Paris intending to study with Nadia Boulanger, but she refused him. He subsequently composed "An American in Paris", returned to New York City, and wrote "Porgy and Bess" with Ira and DuBose Heyward. Initially a commercial failure, it came to be considered one of the most important American operas of the twentieth century and an American cultural classic. Gershwin moved to Hollywood and composed numerous film scores. He died in 1937 of a malignant brain tumor. His compositions have been adapted for use in film and television, with several becoming jazz standards recorded and covered in many variations. Gershwin was of Russian-Jewish and Lithuanian-Jewish ancestry. His grandfather, Jakov Gershowitz, was born in Odessa and had served for 25 years as a mechanic for the Imperial Russian Army to earn the right of free travel and residence as a Jew, finally retiring near Saint Petersburg. His teenage son, Moishe Gershowitz, worked as a leather cutter for women's shoes. Moishe Gershowitz met and fell in love with Roza Bruskina, the teenage daughter of a furrier in Vilnius. She and her family moved to New York because of increasing anti-Jewish sentiment in Russia, and she changed her first name to Rose. Moishe, faced with compulsory military service if he remained in Russia, moved to America as soon as he could afford to. Once in New York, he changed his first name to Morris. Gershowitz lived with a maternal uncle in Brooklyn, working as a foreman in a women's shoe factory. He married Rose on July 21, 1895, and soon Americanized his name to Gershwine. Their first child, Ira Gershwin, was born on December 6, 1896, after which the family moved into a second-floor apartment on Brooklyn's Snediker Avenue. On September 26, 1898, George was born as the second son of Morris and Rose Bruskin Gershwin in their second-floor apartment at 242 Snediker Avenue in Brooklyn. His birth certificate identifies him as Jacob Gershwin, with the surname pronounced 'Gersh-vin' in the Russian and Yiddish immigrant community. He had just one given name, contrary to the American practice of giving children both a first and a middle name. He was named after his grandfather, the army mechanic. He soon became known as George, and changed the spelling of his surname to 'Gershwin' around the time he became a professional musician; other family members followed suit. After Ira and George, another boy, Arthur Gershwin (1900–1981), and a girl, Frances Gershwin (1906–1999), were born into the family. The family lived in many different residences, as their father changed dwellings with each new enterprise in which he became involved. They grew up mostly in the Yiddish Theater District.
George and Ira frequented the local Yiddish theaters, with George occasionally appearing onstage as an extra. George lived a boyhood not unusual in New York tenements, which included running around with his friends, roller-skating and misbehaving in the streets. Until 1908, he cared nothing about music. Then, as a ten-year-old, he was intrigued upon hearing his friend Maxie Rosenzweig's violin recital. The sound, and the way his friend played, captivated him. At about the same time, George's parents had bought a piano for his older brother Ira. To his parents' surprise, though, and to Ira's relief, it was George who spent more time playing it. Although his younger sister Frances was the first in the family to make a living through her musical talents, she married young and devoted herself to being a mother and housewife, which precluded spending any serious time on musical endeavors. Having given up her performing career, she settled upon painting as a creative outlet, a hobby George also briefly pursued. Arthur Gershwin followed in the paths of George and Ira, also becoming a composer of songs, musicals, and short piano works. With a degree of frustration, George tried various piano teachers for about two years (circa 1911) before finally being introduced (circa 1913) to Charles Hambitzer by Jack Miller, the pianist in the Beethoven Symphony Orchestra. Until his death in 1918, Hambitzer remained Gershwin's musical mentor: he taught him conventional piano technique, introduced him to music of the European classical tradition, and encouraged him to attend orchestral concerts. In 1913, Gershwin left school at the age of 15 and found his first job as a "song plugger" for Jerome H. Remick and Company, a Detroit-based publishing firm with a branch office on New York City's Tin Pan Alley, earning $15 a week. His first published song was "When You Want 'Em, You Can't Get 'Em, When You've Got 'Em, You Don't Want 'Em" (1916), written when Gershwin was only 17 years old; it earned him 50 cents. In 1916, Gershwin started working for Aeolian Company and Standard Music Rolls in New York, recording and arranging. He produced dozens, if not hundreds, of rolls under his own and assumed names (pseudonyms attributed to Gershwin include Fred Murtha and Bert Wynn). He also recorded rolls of his own compositions for the Duo-Art and Welte-Mignon reproducing pianos. As well as recording piano rolls, Gershwin made a brief foray into vaudeville, accompanying both Nora Bayes and Louise Dresser on the piano. His 1917 novelty ragtime "Rialto Ripples" was a commercial success. In 1919 he scored his first big national hit with his song "Swanee", with words by Irving Caesar. Al Jolson, a famous Broadway singer of the day, heard Gershwin perform "Swanee" at a party and decided to sing it in one of his shows. In the late 1910s, Gershwin met songwriter and music director William Daly. The two collaborated on the Broadway musicals "Piccadilly to Broadway" (1920) and "For Goodness' Sake" (1922), and jointly composed the score for "Our Nell" (1923). This was the beginning of a long friendship; Daly was a frequent arranger, orchestrator and conductor of Gershwin's music, and Gershwin periodically turned to him for musical advice. In 1924, Gershwin composed his first major classical work, "Rhapsody in Blue", for orchestra and piano. It was orchestrated by Ferde Grofé and premiered by Paul Whiteman's concert band in New York.
It went on to become his most popular work, and it established Gershwin's signature style and his genius for blending vastly different musical styles in revolutionary ways. Since the early 1920s, Gershwin had frequently worked with the lyricist Buddy DeSylva. Together they created the experimental one-act jazz opera "Blue Monday", set in Harlem. It is widely regarded as a forerunner to the groundbreaking "Porgy and Bess". In 1924, George and Ira Gershwin collaborated on the stage musical comedy "Lady Be Good", which included such future standards as "Fascinating Rhythm" and "Oh, Lady Be Good!". They followed this with "Oh, Kay!" (1926), "Funny Face" (1927) and "Strike Up the Band" (1927 and 1930); Gershwin allowed the latter song, with a modified title, to be used as a football fight song, "Strike Up The Band for UCLA". In the mid-1920s, Gershwin stayed in Paris for a short period, during which he applied to study composition with the noted Nadia Boulanger, who, along with several other prospective tutors such as Maurice Ravel, turned him down, afraid that rigorous classical study would ruin his jazz-influenced style. Maurice Ravel's rejection letter told him, "Why become a second-rate Ravel when you're already a first-rate Gershwin?" While there, Gershwin wrote "An American in Paris". This work received mixed reviews upon its first performance at Carnegie Hall on December 13, 1928, but it quickly became part of the standard repertoire in Europe and the United States. In 1929, the Gershwin brothers created "Show Girl"; the following year brought "Girl Crazy", which introduced the standards "Embraceable You", debuted by Ginger Rogers, and "I Got Rhythm". 1931's "Of Thee I Sing" became the first musical comedy to win the Pulitzer Prize for Drama; the winners were George S. Kaufman, Morrie Ryskind, and Ira Gershwin. Gershwin spent the summer of 1934 on Folly Island in South Carolina after he was invited to visit by DuBose Heyward, the author of the novel "Porgy", and he was inspired to write the music to his opera "Porgy and Bess" while on this working vacation. "Porgy and Bess" came to be regarded as another American classic from the composer of "Rhapsody in Blue", even if critics could not quite figure out how to evaluate it, or decide whether it was opera or simply an ambitious Broadway musical. "It crossed the barriers," per theater historian Robert Kimball. "It wasn't a musical work per se, and it wasn't a drama per se – it elicited response from both music and drama critics. But the work has sort of always been outside category." After the commercial failure of "Porgy and Bess", Gershwin moved to Hollywood, California. In 1936, he was commissioned by RKO Pictures to write the music for the film "Shall We Dance", starring Fred Astaire and Ginger Rogers. Gershwin's extended score, which would marry ballet with jazz in a new way, runs over an hour in length and took him several months to compose and orchestrate. Gershwin had a ten-year affair with the composer Kay Swift, whom he frequently consulted about his music. The two never married, although she eventually divorced her husband James Warburg in order to commit to the relationship. Swift's granddaughter, Katharine Weber, has suggested that the pair did not marry because George's mother Rose was "unhappy that Kay Swift wasn't Jewish". The Gershwins' 1926 musical "Oh, Kay!" was named for her.
After Gershwin's death, Swift arranged some of his music, transcribed several of his recordings, and collaborated with his brother Ira on several projects. Early in 1937, Gershwin began to complain of blinding headaches and a recurring impression that he smelled burning rubber. On February 11, 1937, he performed his Piano Concerto in F in a special concert of his music with the San Francisco Symphony Orchestra under the direction of French maestro Pierre Monteux. Gershwin, normally a superb pianist in his own compositions, suffered coordination problems and blackouts during the performance. At the time he was working on other Hollywood film projects while living with Ira and Ira's wife Leonore in their rented house in Beverly Hills. Leonore Gershwin began to be disturbed by George's mood swings and his seeming inability to eat without spilling food at the dinner table. She suspected mental illness and insisted he be moved out of their house to lyricist Yip Harburg's empty quarters nearby, where he was placed in the care of his valet, Paul Mueller. The headaches and olfactory hallucinations continued. On the night of July 9, 1937, Gershwin collapsed in Harburg's house, where he had been working on the score of "The Goldwyn Follies". He was rushed to Cedars of Lebanon Hospital, where he fell into a coma. Only then did his doctors come to believe that he was suffering from a brain tumor. Leonore called George's close friend Emil Mosbacher and explained the dire need to find a neurosurgeon. Mosbacher immediately called the pioneering neurosurgeon Harvey Cushing in Boston, who, retired for several years by then, recommended Dr. Walter Dandy, who was on a boat fishing in Chesapeake Bay with the governor of Maryland. Mosbacher called the White House and had a Coast Guard cutter sent to find the governor's yacht and bring Dandy quickly to shore. Mosbacher then chartered a plane and flew Dandy to Newark Airport, where he was to catch a plane to Los Angeles; however, by that time, Gershwin's condition was critical and the need for surgery was immediate. In the early hours of July 11, doctors at Cedars removed a large brain tumor, believed to have been a glioblastoma, but Gershwin died on the morning of Sunday, July 11, 1937, at the age of 38. The fact that he had suddenly collapsed and become comatose after he stood up on July 9 has been interpreted as brain herniation with Duret haemorrhages. Gershwin's friends and fans were shocked and devastated. John O'Hara remarked: "George Gershwin died on July 11, 1937, but I don't have to believe it if I don't want to." He was interred at Westchester Hills Cemetery in Hastings-on-Hudson, New York. A memorial concert was held at the Hollywood Bowl on September 8, 1937, at which Otto Klemperer conducted his own orchestration of the second of Gershwin's "Three Preludes". Gershwin was influenced by French composers of the early twentieth century. In turn, Maurice Ravel was impressed with Gershwin's abilities, commenting, "Personally I find jazz most interesting: the rhythms, the way the melodies are handled, the melodies themselves. I have heard of George Gershwin's works and I find them intriguing." The orchestrations in Gershwin's symphonic works often seem similar to those of Ravel; likewise, Ravel's two piano concertos evince an influence of Gershwin. George Gershwin asked to study with Ravel; when Ravel heard how much Gershwin earned, he replied with words to the effect of, "You should give me lessons."
(Some versions of this story feature Igor Stravinsky rather than Ravel as the composer; however, Stravinsky confirmed that he originally heard the story from Ravel.) Gershwin's own "Concerto in F" was criticized for being closer to the work of Claude Debussy than to the expected jazz style. The comparison did not deter him from continuing to explore French styles. The title of "An American in Paris" reflects the very journey that he had consciously taken as a composer: "The opening part will be developed in typical French style, in the manner of Debussy and "Les Six", though the tunes are original." Gershwin was intrigued by the works of Alban Berg, Dmitri Shostakovich, Igor Stravinsky, Darius Milhaud, and Arnold Schoenberg. He also asked Schoenberg for composition lessons. Schoenberg refused, saying "I would only make you a bad Schoenberg, and you're such a good Gershwin already." (This quote is similar to one credited to Maurice Ravel during Gershwin's 1928 visit to France – "Why be a second-rate Ravel, when you are a first-rate Gershwin?") Gershwin was particularly impressed by the music of Berg, who gave him a score of the "Lyric Suite". He attended the American premiere of "Wozzeck", conducted by Leopold Stokowski in 1931, and was "thrilled and deeply impressed". The Russian-born Joseph Schillinger, Gershwin's teacher of composition from 1932 to 1936, had a substantial influence in providing him with a method of composition. There has been some disagreement about the nature of Schillinger's influence on Gershwin. After the posthumous success of "Porgy and Bess", Schillinger claimed he had a large and direct influence in overseeing the creation of the opera; Ira completely denied that his brother had any such assistance for this work. A third account of Gershwin's musical relationship with his teacher was written by Gershwin's close friend Vernon Duke, also a Schillinger student, in an article for "The Musical Quarterly" in 1947. What set Gershwin apart was his ability to manipulate forms of music into his own unique voice. He took the jazz he discovered on Tin Pan Alley into the mainstream by splicing its rhythms and tonality with those of the popular songs of his era. Although George Gershwin would seldom make grand statements about his music, he believed that "true music must reflect the thought and aspirations of the people and time. My people are Americans. My time is today." In 2007, the Library of Congress named its Prize for Popular Song after George and Ira Gershwin. Recognizing the profound and positive effect of popular music on culture, the prize is given annually to a composer or performer whose lifetime contributions exemplify the standard of excellence associated with the Gershwins. On March 1, 2007, the first Gershwin Prize was awarded to Paul Simon. Early in his career, under both his own name and pseudonyms, Gershwin recorded more than one hundred and forty player piano rolls, which were a main source of his income. The majority were of popular music of the period, and a smaller proportion were of his own works. Once his musical theatre-writing income became substantial, his regular roll-recording career became superfluous. He did record additional rolls throughout the 1920s of his main hits for the Aeolian Company's reproducing piano, including a complete version of his "Rhapsody in Blue". Compared to the piano rolls, there are few accessible audio recordings of Gershwin's playing. His first recording was his own "Swanee" with the Fred Van Eps Trio in 1919.
The recorded balance highlights the banjo playing of Van Eps, and the piano is overshadowed. The recording took place before "Swanee" became famous as an Al Jolson specialty in early 1920. Gershwin recorded an abridged version of "Rhapsody in Blue" with Paul Whiteman and his orchestra for the Victor Talking Machine Company in 1924, soon after the world premiere. Gershwin and the same orchestra made an electrical recording of the abridged version for Victor in 1927. However, a dispute in the studio over interpretation angered Whiteman, and he left; the conductor's baton was taken over by Victor's staff conductor Nathaniel Shilkret. Gershwin made a number of solo piano recordings of tunes from his musicals, some including the vocals of Fred and Adele Astaire, as well as his "Three Preludes" for piano. In 1929, Gershwin "supervised" the world premiere recording of "An American in Paris" with Nathaniel Shilkret and the Victor Symphony Orchestra. Gershwin's role in the recording was rather limited, particularly because Shilkret was conducting and had his own ideas about the music. When it was realized that no one had been hired to play the brief celeste solo, Gershwin was asked if he could and would play the instrument, and he agreed. Gershwin can be heard, rather briefly, on the recording during the slow section. Gershwin appeared on several radio programs, including Rudy Vallee's, and played some of his compositions. This included the third movement of the "Concerto in F" with Vallee conducting the studio orchestra. Some of these performances were preserved on transcription discs and have been released on LP and CD. In 1934, in an effort to earn money to finance his planned folk opera, Gershwin hosted his own radio program titled "Music by Gershwin". The show was broadcast on the NBC Blue Network from February to May and again from September through the final show on December 23, 1934. He presented his own work as well as the work of other composers. Recordings from this and other radio broadcasts include his "Variations on I Got Rhythm", portions of the "Concerto in F", and numerous songs from his musical comedies. He also recorded a run-through of his "Second Rhapsody", conducting the orchestra and playing the piano solos. Gershwin recorded excerpts from "Porgy and Bess" with members of the original cast, conducting the orchestra from the keyboard; he even announced the selections and the names of the performers. In 1935, RCA Victor asked him to supervise recordings of highlights from "Porgy and Bess"; these were his last recordings. A 74-second newsreel film clip of Gershwin playing "I Got Rhythm" has survived, filmed at the opening of the Manhattan Theater (now The Ed Sullivan Theater) in August 1931. There are also silent home movies of Gershwin, some of them shot on Kodachrome color film stock, which have been featured in tributes to the composer. In addition, there is newsreel footage of Gershwin playing "Mademoiselle from New Rochelle" and "Strike Up the Band" on the piano during a Broadway rehearsal of the 1930 production of "Strike Up the Band"; the comedy team of Clark and McCullough are seen conversing with Gershwin, then singing as he plays. In 1945, the film biography "Rhapsody in Blue" was made, starring Robert Alda as George Gershwin.
The film contains many factual errors about Gershwin's life, but also features many examples of his music, including an almost complete performance of "Rhapsody in Blue". In 1965, Movietone Records released an album, MTM 1009, of Gershwin's piano rolls, titled "George Gershwin plays RHAPSODY IN BLUE and his other favorite compositions". The B-side of the LP featured nine other recordings. In 1975, Columbia Records released an album featuring Gershwin's piano rolls of "Rhapsody In Blue", accompanied by the Columbia Jazz Band playing the original jazz band accompaniment, conducted by Michael Tilson Thomas. The B-side of the Columbia Masterworks release features Tilson Thomas leading the New York Philharmonic in "An American in Paris". In 1976, RCA Records, as part of its "Victrola Americana" line, released a collection of Gershwin recordings taken from 78s recorded in the 1920s, called "Gershwin plays Gershwin, Historic First Recordings" (RCA Victrola AVM1-1740). Included were recordings of "Rhapsody in Blue" with the Paul Whiteman Orchestra and Gershwin on piano; "An American in Paris", from 1929 with Gershwin on celesta; and "Three Preludes", "Clap Yo' Hands" and "Someone to Watch Over Me", among others. There are a total of ten recordings on the album. At the opening ceremony of the 1984 Olympic Games in Los Angeles, "Rhapsody in Blue" was performed in spectacular fashion by many pianists. The soundtrack to Woody Allen's 1979 film "Manhattan" is composed entirely of Gershwin's compositions, including "Rhapsody in Blue", "Love is Sweeping the Country", and "But Not for Me", performed by both the New York Philharmonic under Zubin Mehta and the Buffalo Philharmonic under Michael Tilson Thomas. The film begins with a monologue by Allen: "He adored New York City ... To him, no matter what the season was, this was still a town that existed in black and white and pulsated to the great tunes of George Gershwin." In 1993, two audio CDs of piano rolls recorded by Gershwin were issued by Nonesuch Records through the efforts of Artis Wodehouse, entitled "Gershwin Plays Gershwin: The Piano Rolls". In October 2009, "Rolling Stone" reported that Brian Wilson was completing two unfinished compositions by George Gershwin; the results were released as "Brian Wilson Reimagines Gershwin" on August 17, 2010, consisting of ten George and Ira Gershwin songs, bookended by passages from "Rhapsody in Blue", with two new songs completed from unfinished Gershwin fragments by Wilson and band member Scott Bennett. Gershwin died intestate, and his estate passed to his mother. The estate continues to collect significant royalties from licensing the copyrights on his work. The estate supported the Sonny Bono Copyright Term Extension Act because its 1923 cutoff date was shortly before Gershwin had begun to create his most popular works. The copyrights on all Gershwin's solo works expired at the end of 2007 in the European Union, based on its life-plus-70-years rule. In 2005, "The Guardian" determined, using "estimates of earnings accrued in a composer's lifetime", that George Gershwin was the wealthiest composer of all time. The George and Ira Gershwin Collection, much of which was donated by Ira and the Gershwin family estates, resides at the Library of Congress.
In September 2013, a partnership between the estates of Ira and George Gershwin and the University of Michigan was created to provide the university's School of Music, Theatre, and Dance with access to Gershwin's entire body of work, including all of Gershwin's papers, compositional drafts, and scores. This direct access to all of his works will provide opportunities for musicians, composers, and scholars to analyze and reinterpret his work, with the goal of accurately reflecting the composer's vision in order to preserve his legacy. The first fascicles of "The Gershwin Critical Edition", edited by Mark Clague, are expected in 2017; they will cover the 1924 jazz band version of "Rhapsody in Blue", "An American in Paris" and "Porgy and Bess".
https://en.wikipedia.org/wiki?curid=13066
LGBT social movements Lesbian, gay, bisexual, and transgender (LGBT) social movements are social movements that advocate for LGBT people in society. Social movements may focus on equal rights, such as the 2000s movement for marriage equality, or they may focus on liberation, as in the gay liberation movement of the 1960s and 1970s. Earlier movements focused on self-help and self-acceptance, such as the homophile movement of the 1950s. Although there is not a primary or an overarching central organization that represents all LGBT people and their interests, numerous LGBT rights organizations are active worldwide. The earliest organizations to support LGBT rights were formed in the early 20th century. A commonly stated goal among these movements is social equality for LGBT people, but full LGBT rights are still denied in many parts of the world. Some movements have also focused on building LGBT communities or have worked towards liberating the broader society from biphobia, homophobia, and transphobia. The struggle for LGBT rights continues today: LGBT movements encompass a wide range of political activism and cultural activity, including lobbying, street marches, social groups, media, art, and research. Sociologist Mary Bernstein writes: "For the lesbian and gay movement, then, cultural goals include (but are not limited to) challenging dominant constructions of masculinity and femininity, homophobia, and the primacy of the gendered heterosexual nuclear family (heteronormativity). Political goals include changing laws and policies in order to gain new rights, benefits, and protections from harm." Bernstein emphasizes that activists seek both types of goals in both the civil and political spheres. As with other social movements, there is also conflict within and between LGBT movements, especially about strategies for change and debates over exactly who represents the constituency of these movements; these debates extend to education as well. There is debate over the extent to which lesbians, gays, bisexuals, transgender people, intersex people, and others share common interests and a need to work together. Leaders of the lesbian and gay movement of the 1970s, 1980s, and 1990s often attempted to hide masculine lesbians, feminine gay men, transgender people, and bisexuals from the public eye, creating internal divisions within LGBT communities. Roffee and Waling (2016) documented that LGBT people experience microaggressions, bullying and anti-social behaviors from other people within the LGBT community, owing to misconceptions and conflicting views as to what "LGBT" entails. For example, transgender people found that other members of the community were not understanding of their own individual, specific needs and would instead make ignorant assumptions, which can create health risks. Additionally, bisexual people found that lesbian or gay people were not understanding or appreciative of bisexuality. Even though most of these people would say that they stand for the same values as the majority of the community, inconsistencies evidently remain within the LGBT community. LGBT movements have often adopted a kind of identity politics that sees gay, bisexual, and transgender people as a fixed class of people: a minority group or groups. Those using this approach aspire to liberal political goals of freedom and equal opportunity, and aim to join the political mainstream on the same level as other groups in society.
In arguing that sexual orientation and gender identity are innate and cannot be consciously changed, attempts to change gay, lesbian, and bisexual people into heterosexuals ("conversion therapy") are generally opposed by the LGBT community. Such attempts are often based in religious beliefs that perceive gay, lesbian, and bisexual activity as immoral. However, others within LGBT movements have criticized identity politics as limited and flawed; elements of the queer movement have argued that the categories of gay and lesbian are restrictive, and have attempted to deconstruct those categories, which are seen to "reinforce rather than challenge a cultural system that will always mark the non-heterosexual as inferior." After the French Revolution, the anticlerical feeling in Catholic countries, coupled with the liberalizing effect of the Napoleonic Code, made it possible to sweep away sodomy laws. However, in Protestant countries, where the church was less severe, there was no general reaction against statutes that were religious in origin. As a result, many of those countries retained their statutes on sodomy until late in the 20th century, and some countries retain them still; for example, in 2008 a case in India's high court was judged under a 150-year-old law punishing sodomy. In eighteenth- and nineteenth-century Europe, same-sex sexual behavior and cross-dressing were widely considered to be socially unacceptable, and were serious crimes under sodomy and sumptuary laws. There were, however, some exceptions. For example, in the 17th century cross-dressing was common in plays, as evident in the content of many of William Shakespeare's plays and by the actors in actual performance (since female roles in Elizabethan theater were always performed by males, usually prepubescent boys). Thomas Cannon wrote what may be the earliest published defense of homosexuality in English, "Ancient and Modern Pederasty Investigated and Exemplify'd" (1749). Although only fragments of his work have survived, it was a humorous anthology of homosexual advocacy, written with an obvious enthusiasm for its subject. It contains the argument: "Unnatural Desire is a Contradiction in Terms; downright Nonsense. Desire is an amatory Impulse of the inmost human Parts: Are not they, however constructed, and consequently impelling, Nature?" Social reformer Jeremy Bentham wrote the first known argument for homosexual law reform in England around 1785, at a time when the legal penalty for buggery was death by hanging. His advocacy stemmed from his utilitarian philosophy, in which the morality of an action is determined by the net consequence of that action on human well-being. He argued that homosexuality was a victimless crime, and therefore not deserving of social disapprobation or criminal charges. He regarded popular negative attitudes against homosexuality as an irrational prejudice, fanned and perpetuated by religious teachings. However, he did not publicize his views, as he feared reprisal; his powerful essay was not published until 1978. The emerging currents of secular humanist thought which had inspired Bentham also informed the French Revolution, and when the newly formed National Constituent Assembly began drafting the policies and laws of the new republic in 1792, groups of militant "sodomite-citizens" in Paris petitioned the Assemblée nationale, the governing body of the French Revolution, for freedom and recognition.
In 1791, France became the first nation to decriminalize homosexuality, probably thanks in part to Jean Jacques Régis de Cambacérès, who was one of the authors of the Napoleonic Code. With the introduction of the Napoleonic Code in 1808, the Duchy of Warsaw also decriminalized homosexuality. In 1830, the new Penal Code of the Brazilian Empire did not repeat the title XIII of the fifth book of the "Ordenações Philipinas", which had made sodomy a crime. In 1833, an anonymous English-language writer wrote a poetic defense of Captain Nicholas Nicholls, who had been sentenced to death in London for sodomy: "Whence spring these inclinations, rank and strong? / And harming no one, wherefore call them wrong?" Three years later in Switzerland, Heinrich Hoessli published the first volume of "Eros: Die Männerliebe der Griechen" (English: "Eros: The Male Love of the Greeks"), another defense of same-sex love. In many ways, social attitudes to homosexuality became more hostile during the late Victorian era. In 1885, the Labouchere Amendment was included in the Criminal Law Amendment Act, which criminalized 'any act of gross indecency with another male person', a charge that was successfully invoked to convict playwright Oscar Wilde in 1895 with the most severe sentence possible under the Act. From the 1870s, social reformers began to defend homosexuality, but due to the controversial nature of their advocacy, kept their identities secret. A secret British society called the "Order of Chaeronea" campaigned for the legalization of homosexuality, and counted playwright Oscar Wilde among its members in the last decades of the 19th century. The society, regarded as the first homosexual rights group, was founded in 1897 by George Cecil Ives, one of the earliest gay rights campaigners, who had been working for the end of oppression of homosexuals, what he called the "Cause". Ives met Oscar Wilde at the Authors' Club in London in 1892. Wilde was taken by his boyish looks and persuaded him to shave off his moustache, and once kissed him passionately in the Travellers' Club. In 1893, Lord Alfred Douglas, with whom Ives had a brief affair, introduced him to several Oxford poets whom Ives also tried to recruit. Members of the Order of Chaeronea included Charles Kains Jackson, Samuel Elsworth Cottam, Montague Summers, and John Gambril Nicholson. John Addington Symonds was a poet and an early advocate of male love. In 1873, he wrote "A Problem in Greek Ethics", a work of what would later be called "gay history." Although the "Oxford English Dictionary" credits the medical writer C.G. Chaddock with introducing "homosexual" into the English language in 1892, Symonds had already used the word in "A Problem in Greek Ethics". Symonds also translated classical poetry on homoerotic themes, and wrote poems drawing on ancient Greek imagery and language such as "Eudiades", which has been called "the most famous of his homoerotic poems". While the taboos of Victorian England prevented Symonds from speaking openly about homosexuality, his works published for a general audience contained strong implications and some of the first direct references to male-male sexual love in English literature. By the end of his life, Symonds' homosexuality had become an open secret in Victorian literary and cultural circles. In particular, Symonds' memoirs, written over a four-year period from 1889 to 1893, form one of the earliest known works of self-conscious homosexual autobiography in English.
The recently decoded autobiographies of Anne Lister are an earlier example in English. Another friend of Ives was the English socialist poet Edward Carpenter. Carpenter thought that homosexuality was an innate and natural human characteristic and that it should not be regarded as a sin or a criminal offense. In the 1890s, Carpenter began a concerted effort to campaign against discrimination on the grounds of sexual orientation, possibly in response to the recent death of Symonds, whom he viewed as his campaigning inspiration. His 1908 book on the subject, "The Intermediate Sex", would become a foundational text of the LGBT movements of the 20th century. Scottish anarchist John Henry Mackay also wrote in defense of same-sex love and androgyny. English sexologist Havelock Ellis wrote the first objective scientific study of homosexuality in 1897, in which he treated it as a neutral sexual condition. Called "Sexual Inversion", it was first printed in German and then translated into English a year later. In the book, Ellis argued that same-sex relationships could not be characterized as a pathology or a crime and that their importance rose above the arbitrary restrictions imposed by society. He also studied what he called 'inter-generational relationships', noting that these likewise broke societal taboos, in their case on age difference in sexual relationships. The book was so controversial at the time that one bookseller was charged in court for holding copies of the work. It is claimed that Ellis coined the term 'homosexual', but in fact he disliked the word due to its conflation of Greek and Latin. These early proponents of LGBT rights, such as Carpenter, were often aligned with a broader socio-political movement known as 'free love', a critique of Victorian sexual morality and the traditional institutions of family and marriage that were seen to enslave women. Some advocates of free love in the early 20th century, including Russian anarchist and feminist Emma Goldman, also spoke in defence of same-sex love and challenged repressive legislation. An early LGBT movement also began in Germany at the turn of the 20th century, centering on the doctor and writer Magnus Hirschfeld. In 1897, he formed the Scientific-Humanitarian Committee to campaign publicly against the notorious law "Paragraph 175", which made sex between men illegal. Adolf Brand later broke away from the group, disagreeing with Hirschfeld's medical view of the "intermediate sex" and seeing male-male sex as merely an aspect of manly virility and male social bonding. Brand was the first to use "outing" as a political strategy, claiming that German Chancellor Bernhard von Bülow engaged in homosexual activity. The 1901 book "Sind es Frauen? Roman über das dritte Geschlecht" (English: "Are These Women? Novel about the Third Sex") by Aimée Duc was as much a political treatise as a novel, criticising pathological theories of homosexuality and gender inversion in women. Anna Rüling, delivering a public speech in 1904 at the request of Hirschfeld, became the first female Uranian activist. Rüling, who also saw "men, women, and homosexuals" as three distinct genders, called for an alliance between the women's and sexual reform movements, but this speech is her only known contribution to the cause. Women only began to join the previously male-dominated sexual reform movement around 1910, when the German government tried to expand Paragraph 175 to outlaw sex between women. Heterosexual feminist leader Helene Stöcker became a prominent figure in the movement.
Friedrich Radszuweit published LGBT literature and magazines in Berlin (e.g., "Die Freundin"). Hirschfeld, whose life was dedicated to social progress for people who were transsexual, transvestite and homosexual, formed the Institut für Sexualwissenschaft (Institute for Sexology) in 1919. The institute conducted an enormous amount of research, saw thousands of transgender and homosexual clients at consultations, and championed a broad range of sexual reforms including sex education, contraception and women's rights. However, the gains made in Germany would soon be drastically reversed with the rise of Nazism, and the institute and its library were destroyed in 1933. The Swiss journal "Der Kreis" was the only part of the movement to continue through the Nazi era. The USSR's Criminal Code of 1922 decriminalized homosexuality, a remarkable step at the time in a country that was economically and socially backward and where many conservative attitudes towards sexuality prevailed. The step was part of a larger project of freeing sexual relationships and expanding women's rights, including legalizing abortion, granting divorce on demand, establishing equal rights for women, and attempting to socialize housework. During the Stalin era, however, the USSR reversed these progressive measures, re-criminalizing homosexuality, imprisoning gay men, and banning abortion. In 1928, English writer Radclyffe Hall published a novel titled "The Well of Loneliness". Its plot centers on Stephen Gordon, a woman who identifies herself as an invert after reading Krafft-Ebing's "Psychopathia Sexualis", and lives within the homosexual subculture of Paris. The novel included a foreword by Havelock Ellis and was intended to be a call for tolerance for inverts by publicizing their disadvantages and the accidents of being born inverted. Hall subscribed to Ellis and Krafft-Ebing's theories and rejected the (conservatively understood) Freudian theory that same-sex attraction was caused by childhood trauma and was curable. In the United States, several secret or semi-secret groups were formed explicitly to advance the rights of homosexuals as early as the turn of the 20th century, but little is known about them. A better-documented group is Henry Gerber's Society for Human Rights, formed in Chicago in 1924, which was quickly suppressed. Immediately following World War II, a number of homosexual rights groups came into being or were revived across the Western world, in Britain, France, Germany, the Netherlands, the Scandinavian countries and the United States. These groups usually preferred the term "homophile" to "homosexual", emphasizing love over sex. The homophile movement began in the late 1940s with groups in the Netherlands and Denmark, and continued throughout the 1950s and 1960s with groups in Sweden, Norway, the United States, France, Britain and elsewhere. ONE, Inc., the first public homosexual organization in the U.S., was bankrolled by the wealthy transsexual man Reed Erickson. A U.S. transgender rights journal, "Transvestia: The Journal of the American Society for Equality in Dress", also published two issues in 1952. The homophile movement lobbied political establishments for social acceptability. Radicals of the 1970s would later disparage the homophile groups for being assimilationist; any demonstrations they held were orderly and polite.
By 1969, there were dozens of homophile organizations and publications in the U.S., and a national organization had been formed, but they were largely ignored by the media. A 1965 gay march held in front of Independence Hall in Philadelphia, according to some historians, marked the beginning of the modern gay rights movement. Meanwhile, in San Francisco, the LGBT youth organization Vanguard was formed by Adrian Ravarour to demonstrate for equality, and Vanguard members protested for equal rights during the months of April–July 1966. This was followed by the August 1966 Compton's Cafeteria riot, in which transgender street prostitutes in the poor Tenderloin neighborhood rioted against police harassment at a popular all-night restaurant, Gene Compton's Cafeteria. The Wolfenden Report was published in Britain on 4 September 1957, after publicized convictions for homosexuality of well-known men, including Lord Montagu. Disregarding the conventional ideas of the day, the committee recommended that "homosexual behaviour between consenting adults in private should no longer be a criminal offence". All but James Adair were in favor of this and, contrary to some medical and psychiatric witnesses' evidence at that time, found that "homosexuality cannot legitimately be regarded as a disease, because in many cases it is the only symptom and is compatible with full mental health in other respects." The report added, "The law's function is to preserve public order and decency, to protect the citizen from what is offensive or injurious, and to provide sufficient safeguards against exploitation and corruption of others … It is not, in our view, the function of the law to intervene in the private life of citizens, or to seek to enforce any particular pattern of behaviour." The report eventually led to the introduction of the Sexual Offences Bill 1967, supported by Roy Jenkins, then the Labour Home Secretary. When passed, the Sexual Offences Act decriminalised homosexual acts between two men over 21 years of age "in private" in England and Wales. The seemingly innocuous phrase 'in private' led to the prosecution of participants in sex acts involving three or more men, e.g. the Bolton 7, who were convicted as recently as 1998. Bisexual activism became more visible toward the end of the 1960s in the United States. In 1966, bisexual activist Robert A. Martin (a.k.a. Donny the Punk) founded the Student Homophile League at Columbia University and New York University. In 1967, Columbia University officially recognized this group, making it the first college in the United States to officially recognize a gay student group. Activism on behalf of bisexuals in particular also began to grow, especially in San Francisco. One of the earliest organizations for bisexuals, the Sexual Freedom League in San Francisco, was facilitated by Margo Rila and Frank Esposito beginning in 1967. Two years later, during a staff meeting at a San Francisco mental health facility serving LGBT people, nurse Maggi Rubenstein came out as bisexual. Due to this, bisexuals began to be included in the facility's programs for the first time. The American Psychiatric Association removed homosexuality from its diagnostic manual of mental disorders in 1973.
The new social movements of the sixties, such as the Black Power and anti-Vietnam War movements in the US, the May 1968 insurrection in France, and Women's Liberation throughout the Western world, inspired many LGBT activists to become more radical, and the Gay Liberation movement emerged towards the end of the decade. This new radicalism is often attributed to the Stonewall riots of 1969, when a group of gay men, transgender women, lesbians, and drag queens at a bar in New York resisted a police raid. Immediately after Stonewall, such groups as the Gay Liberation Front (GLF) and the Gay Activists' Alliance (GAA) were formed. Their use of the word "gay" represented a new unapologetic defiance: as an antonym for "straight" ("respectable sexual behavior"), it encompassed a range of non-normative sexuality and sought ultimately to free the bisexual potential in everyone, rendering obsolete the categories of homosexual and heterosexual. According to Gay Lib writer Toby Marotta, "their Gay political outlooks were not homophile but liberationist". "Out, loud and proud," they engaged in colorful street theater. The GLF's "A Gay Manifesto" set out the aims for the fledgling gay liberation movement, and influential intellectual Paul Goodman published "The Politics of Being Queer" (1969). Chapters of the GLF were established across the U.S. and in other parts of the Western world. The Front Homosexuel d'Action Révolutionnaire was formed in 1971 by lesbians who split from the Mouvement Homophile de France. The Gay Liberation movement overall, like the gay community generally and historically, has had varying degrees of gender nonconformity and assimilationist platforms among its members. Early marches by the Mattachine Society and the Daughters of Bilitis stressed looking "respectable" and mainstream, and after the Stonewall Uprising the Mattachine Society posted a sign in the window of the Stonewall Inn calling for peace. Gender nonconformity has always been a primary way of signaling homosexuality and bisexuality, and by the late 1960s mainstream fashion was increasingly incorporating what by the 1970s would be considered "unisex" fashions. In 1970, the drag queen caucus of the GLF, including Marsha P. Johnson and Sylvia Rivera, formed the group Street Transvestite Action Revolutionaries (STAR), which focused on providing support for gay prisoners, housing for homeless gay youth and street people, especially other young "street queens". In 1969, Lee Brewster and Bunny Eisenhower formed the Queens Liberation Front (QLF), partly in protest at the treatment of the drag queens at the first Christopher Street Liberation Day March. Bisexual activist Brenda Howard is known as the "Mother of Pride" for her work in coordinating the march, which occurred in 1970 in New York City, and she also originated the idea for a week-long series of events around Pride Day, which became the genesis of the annual LGBT Pride celebrations that are now held around the world every June. Additionally, Howard, along with the bisexual activist Robert A. Martin (aka Donny the Punk) and gay activist L. Craig Schoonmaker, is credited with popularizing the word "Pride" to describe these festivities. Bisexual activist Tom Limoncelli later stated, "The next time someone asks you why LGBT Pride marches exist or why [LGBT] Pride Month is June tell them 'A bisexual woman named Brenda Howard thought it should be.'" One of the core values of the movement was gay pride.
Within weeks of the Stonewall Riots, Craig Rodwell, proprietor of the Oscar Wilde Memorial Bookshop in lower Manhattan, persuaded the Eastern Regional Conference of Homophile Organizations (ERCHO) to replace the Fourth of July Annual Reminder at Independence Hall in Philadelphia with a first commemoration of the Stonewall Riots. Liberation groups, including the Gay Liberation Front, Queens, the Gay Activists Alliance, Radicalesbians, and Street Transvestites Action Revolutionaries (STAR), all took part in the first Gay Pride Week. Los Angeles held a big parade on the first Gay Pride Day. Smaller demonstrations were held in San Francisco, Chicago, and Boston. In the United Kingdom, the GLF had its first meeting in the basement of the London School of Economics on 13 October 1970. Bob Mellors and Aubrey Walter had seen the effect of the GLF in the United States and created a parallel movement based on revolutionary politics and alternative lifestyle. By 1971, the UK GLF was recognized as a political movement in the national press, holding weekly meetings of 200 to 300 people. The GLF Manifesto was published, and a series of high-profile direct actions were carried out. The disruption of the opening of the 1971 Festival of Light was the best organized of the GLF's actions. The Festival of Light, whose leading figures included Mary Whitehouse, met at Methodist Central Hall. Groups of GLF members in drag invaded and spontaneously kissed each other; others released mice, sounded horns, and unveiled banners, and a contingent dressed as workmen obtained access to the basement and shut off the lights. In 1971, the gay liberation movement in Germany and Switzerland started with Rosa von Praunheim's movie "It Is Not the Homosexual Who Is Perverse, But the Society in Which He Lives". Easter 1972 saw the Gay Lib annual conference held in the Guild of Undergraduates Union (students' union) building at the University of Birmingham. By 1974, internal disagreements had led to the movement's splintering. Organizations that spun off from the movement included the London Lesbian and Gay Switchboard, "Gay News", and Icebreakers. The GLF Information Service continued for a few further years, providing gay-related resources. GLF branches had been set up in some provincial British towns (e.g., Bradford, Bristol, Leeds, and Leicester) and some survived for a few years longer. The Leicester group, founded by Jeff Martin, was noted for its involvement in the setting up of the local "Gayline", which is still active today and has received funding from the National Lottery. They also carried out a high-profile campaign against the local paper, the "Leicester Mercury", which refused to advertise Gayline's services at the time. From 1970, activists protested the classification of homosexuality as a mental illness by the American Psychiatric Association in their Diagnostic and Statistical Manual of Mental Disorders; in 1974 it was replaced with the category "sexual orientation disturbance", then "ego-dystonic homosexuality", which was in turn also deleted, although "gender identity disorder" (a term used for gender dysphoria) remains. In 1972, Sweden became the first country in the world to allow transsexual people to legally change their sex, and it provided free hormone replacement therapy. Sweden also set the age of consent for same-sex partners at 15, making it equal to that for heterosexual couples. In Japan, LGBT groups were established in the 1970s; in 1971, the openly gay activist Tōgō Ken ran for the Upper House election.
Bisexuals became more visible in the LGBT rights movement in the 1970s. In 1972, a Quaker group, the Committee of Friends on Bisexuality, issued the "Ithaca Statement on Bisexuality" supporting bisexuals. In that same year, the National Bisexual Liberation Group formed in New York. In 1976, the San Francisco Bisexual Center opened. From the anarchist Gay Liberation movement of the early 1970s arose a more reformist and single-issue Gay Rights movement, which portrayed gays and lesbians as a minority group and used the language of civil rights, in many respects continuing the work of the homophile period. In Berlin, for example, radical groups were eclipsed by more reformist organizations. Gay and lesbian rights advocates argued that one's sexual orientation does not reflect on one's gender; that is, "you can be a man and desire a man... without any implications for your gender identity as a man," and the same is true if you are a woman. Gays and lesbians were presented as identical to heterosexuals in all ways but private sexual practices, and butch "bar dykes" and flamboyant "street queens" were seen as negative stereotypes of lesbians and gays. Veteran activists such as Sylvia Rivera and Beth Elliot were sidelined or expelled because they were transgender. In 1974, Maureen Colquhoun came out as the first lesbian MP for the Labour Party in the UK; when elected, she was in a heterosexual marriage. In 1975, "The Naked Civil Servant", a groundbreaking film portraying the life of the gay icon Quentin Crisp, was transmitted by Thames Television on the British television channel ITV. The British journal "Gay Left" also began publication. After British Home Stores sacked an openly gay trainee, Tony Whitehead, a national campaign subsequently picketed their stores in protest. In 1977, Harvey Milk was elected to the San Francisco Board of Supervisors, becoming the first openly gay man in the United States elected to public office. Milk was assassinated by former city supervisor Dan White in 1978. In 1977, Anita Bryant, a former Miss America contestant and orange juice spokesperson, began the "Save Our Children" campaign in Dade County, Florida (greater Miami), which proved to be a major setback for the Gay Liberation movement. She established an organization that put forth an amendment to the county's laws, which resulted in the firing of many public school teachers on the suspicion that they were homosexual. In 1979, a number of people in Sweden called in sick with a case of "being homosexual", in protest of homosexuality being classified as an illness. This was followed by an activist occupation of the main office of the National Board of Health and Welfare. Within a few months, Sweden became the first country in the world to remove homosexuality's classification as an illness. Lesbian feminism, which was most influential from the mid-1970s to the mid-1980s, encouraged women to direct their energies toward other women rather than men, and advocated lesbianism as the logical result of feminism. As with Gay Liberation, this understanding of the lesbian potential in all women was at odds with the minority-rights framework of the Gay Rights movement. Many women of the Gay Liberation movement felt frustrated at the domination of the movement by men and formed separate organisations; some who felt gender differences between men and women could not be resolved developed "lesbian separatism", influenced by writings such as Jill Johnston's 1973 book "Lesbian Nation". Organizers at the time focused on this issue.
Diane Felix, also known as DJ Chili D in the Bay Area club scene, is a Latina American lesbian who joined the Latino American queer organization GALA. She was known for creating entertainment spaces specifically for queer women, especially in the Latino American community; these included gay bars in San Francisco such as A Little More and Colors. Disagreements between different political philosophies were, at times, extremely heated, and became known as the lesbian sex wars, clashing in particular over views on sadomasochism, prostitution and transsexuality. The term "gay" came to be more strongly associated with homosexual males. In Canada, the coming into effect of Section 15 of the Canadian Charter of Rights and Freedoms in 1985 saw a shift in the gay rights movement in Canada, as Canadian gays and lesbians moved from liberation to litigious strategies. Premised on Charter protections and on the notion of the immutability of homosexuality, judicial rulings rapidly advanced rights, including those that compelled the Canadian government to legalize same-sex marriage. It has been argued that while this strategy was extremely effective in advancing the safety, dignity and equality of Canadian homosexuals, its emphasis on sameness came at the expense of difference and may have undermined opportunities for more meaningful change. Mark Segal, often referred to as the dean of American gay journalism, disrupted the CBS evening news with Walter Cronkite in 1973, an event covered in newspapers across the country and viewed by 60% of American households, many seeing or hearing about homosexuality for the first time. Another setback in the United States occurred in 1986, when the US Supreme Court upheld a Georgia anti-sodomy law in the case "Bowers v. Hardwick". (This ruling would be overturned two decades later in "Lawrence v. Texas".) Some historians posit that a new era of the gay rights movement began in the 1980s with the emergence of AIDS, which decimated the leadership and shifted the focus for many. This era saw a resurgence of militancy with direct-action groups like AIDS Coalition to Unleash Power (ACT UP), formed in 1987, as well as its offshoots Queer Nation (1990) and the Lesbian Avengers (1992). Some younger activists, seeing "gay and lesbian" as increasingly normative and politically conservative, began using "queer" as a defiant statement of all sexual minorities and gender-variant people, just as the earlier liberationists had done with "gay". Less confrontational terms that attempt to reunite the interests of lesbian, gay, bisexual, and transgender people also became prominent, including various acronyms like "LGBT", "LGBTQ", and "LGBTI", where the "Q" and "I" stand for "queer" or "questioning" and "intersex", respectively. A 1987 essay titled "The Overhauling of Straight America", by Marshall Kirk and Hunter Madsen (writing as Erastes Pill) and first published in "Guide" magazine, lays out a six-point plan for a campaign. They argued that gays must portray themselves in a positive way to straight America, and that the main aim of making homosexuality acceptable could be achieved by getting Americans "to think that it is just another thing, with a shrug of their shoulders". Then "your battle for legal and social rights is virtually won". The pair developed their argument in the 1989 book "After the Ball: How America Will Conquer Its Fear and Hatred of Gays in the '90s". The book outlined a public relations strategy for the LGBT movement.
It argues that after the gay liberation phase of the 1970s and 1980s, gay rights groups should adopt more professional public relations techniques to convey their message. After its publication, Kirk appeared in the pages of "Newsweek", "Time" and "The Washington Post". The book is often critically described by social conservatives such as Focus on the Family as important to the success of the LGBT movement in the 1990s and as part of an alleged "homosexual agenda". A "War Conference" of 200 gay leaders was held in Warrenton, Virginia, in 1988. The closing statement of the conference set out a plan for a media campaign and also called for an annual planning conference "to help set and modify our national agenda." The Human Rights Campaign lists this event as a milestone in gay history and identifies it as the origin of National Coming Out Day. On June 24, 1994, the first Gay Pride march in Asia was celebrated in the Philippines. In the Middle East, LGBT organizations remain illegal, and LGBT rights activists face extreme opposition from the state. The 1990s also saw the emergence of many LGBT youth movements and organizations such as LGBT youth centers, gay-straight alliances in high schools, and youth-specific activism, such as the National Day of Silence. Colleges also became places of LGBT activism and support for activists and LGBT people in general, with many colleges opening LGBT centers. The decade also saw a rapid push of the transgender movement, while at the same time a "sidelining of the identity of those who are transsexual." In the English-speaking world, Leslie Feinberg published "Transgender Liberation: A Movement Whose Time Has Come" in 1992. Gender-variant peoples across the globe also formed minority rights movements: Hijra activists campaigned for recognition as a third sex in India, and Travesti groups began to organize against police brutality across Latin America, while activists in the United States formed direct-confrontation groups such as the Transexual Menace. The Netherlands was the first country to allow same-sex marriage, in 2001; it was followed by Belgium in 2003 and by Spain and Canada in 2005. Same-sex marriages have since also been recognized in South Africa, Norway, Sweden, Portugal, Iceland, Argentina, Mexico, Denmark, Brazil, France, Uruguay, New Zealand, the United Kingdom, Luxembourg, Ireland, the United States, Colombia, Finland, Germany, Malta, Australia, Austria, Taiwan, Ecuador and Costa Rica. During this same period, some municipalities have been enacting laws against homosexuality. For example, Rhea County, Tennessee, unsuccessfully tried to "ban homosexuals" in 2006. In 2003, in the case "Lawrence v. Texas", the Supreme Court of the United States struck down sodomy laws in fourteen states, making consensual homosexual sex legal in all 50 states, a significant step forward in LGBT activism and one that had been fought for by activists since the inception of modern LGBT social movements. From 6 to 9 November 2006, an international meeting of 29 specialists convened in Yogyakarta by the International Commission of Jurists and the International Service for Human Rights adopted the Yogyakarta Principles on the application of international human rights law in relation to sexual orientation and gender identity.
The UN declaration on sexual orientation and gender identity gathered 66 signatures in the United Nations General Assembly on 13 December 2008. On 22 October 2009, the assembly of the Church of Sweden voted strongly in favour of giving its blessing to homosexual couples, including the use of the term marriage ("matrimony"). On 11 June 2010, Iceland became the first country in the world to legalize same-sex marriage through a unanimous parliamentary vote of 49–0. A month later, Argentina became the first country in Latin America to legalize same-sex marriage. In South Africa, despite the increased tolerance signalled by the 2006 legalization of same-sex marriage, so-called corrective rapes have become prevalent, primarily targeting the poorer women who live in townships and those who have no recourse in responding to the crimes because of the notable lack of police presence and the prejudice they may face for reporting assaults. The 1993 "Don't ask, don't tell" law, forbidding homosexual people from serving openly in the United States military, was repealed in 2010. This meant that gays and lesbians could now serve openly in the military without any fear of being discharged because of their sexual orientation. In 2012, the United States Department of Housing and Urban Development's Office of Fair Housing and Equal Opportunity issued a regulation to prohibit discrimination in federally assisted housing programs. The new regulations ensure that the Department's core housing programs are open to all eligible persons, regardless of sexual orientation or gender identity. In early 2014, a series of protests organized by Add The Words, Idaho and former state senator Nicole LeFavour, some including civil disobedience and concomitant arrests, took place in Boise, Idaho, advocating the addition of the words "sexual orientation" and "gender identity" to the state's Human Rights Act. On June 26, 2015, in "Obergefell v. Hodges", the U.S. Supreme Court ruled 5–4 that the Constitution requires that same-sex couples be allowed to marry no matter where they live in the United States. With this ruling, the United States became the 17th country to legalize same-sex marriage nationwide. Between September 12 and November 7, 2017, Australia held a national survey on the subject of same-sex marriage; 61.6% of respondents supported legally recognizing same-sex marriage nationwide. This cleared the way for a private member's bill to be debated in the federal parliament. On 6 September 2018, consensual gay sex was legalised in India by its Supreme Court. LGBT movements are opposed by a variety of individuals and organizations, who may have personal, political or religious objections to gay rights, homosexual relations or gay people. Opponents say same-sex relationships are not marriages, that legalization of same-sex marriage will open the door for the legalization of polygamy, that it is unnatural and that it encourages unhealthy behavior. Some social conservatives believe that all sexual relationships with people other than an opposite-sex spouse undermine the traditional family and that children should be reared in homes with both a father and a mother. 
As society has become more accepting of homosexuality, groups that desire to end homosexuality have also emerged; one of the best-known of these, the ex-gay movement, was established during the 1990s. Some people worry that gay rights conflict with individuals' freedom of speech, religious freedoms in the workplace, and the ability to run churches, charitable organizations and other religious organizations that hold opposing social and cultural views to LGBT rights. There is also concern that religious organizations might be forced to accept and perform same-sex marriages or risk losing their tax-exempt status. Eric Rofes, author of "A Radical Rethinking of Sexuality and Schooling: Status Quo or Status Queer?", argues that the inclusion of teachings on homosexuality in public schools will play an important role in transforming public ideas about lesbian and gay individuals. A former teacher in the public school system, Rofes recounts how he was fired from his teaching position after making the decision to come out as gay. As a result of the stigma he faced as a gay teacher, he emphasizes the necessity of radical approaches to making significant changes in public attitudes about homosexuality. According to Rofes, radical approaches are grounded in the belief that "something fundamental needs to be transformed for authentic and sweeping changes to occur." The radical approaches proposed by Rofes have been met with strong opposition from anti-gay-rights activists such as John Briggs. A former California state senator, Briggs proposed Proposition 6, a ballot initiative that would have required all California state public schools to fire any gay or lesbian teachers or counselors, along with any faculty who displayed support for gay rights, in an effort to prevent what he believed to be "the corruption of the children's minds". The exclusion of homosexuality from the sexual education curriculum, in addition to the absence of sexual counseling programs in public schools, has resulted in increased feelings of isolation and alienation for gay and lesbian students who desire gay counseling programs that will help them come to terms with their sexual orientation. Rofes, founder of youth homosexual programs such as Out There and the Committee for Gay Youth, stresses the importance of having support programs that help youth learn to identify with their sexual orientation. David Campos, author of the book "Sex, Youth, and Sex Education: A Reference Handbook", illuminates the argument proposed by proponents of sexual education programs in public schools. Many gay rights supporters argue that teachings about the diverse sexual orientations that exist outside of heterosexuality are pertinent to creating students who are well informed about the world around them. However, Campos also acknowledges that the sex education curriculum alone cannot teach youth about factors associated with sexual orientation; instead he suggests that schools implement policies that create safe school learning environments and foster support for LGBT youth. It is his belief that schools that provide unbiased, factual information about sexual orientation, along with supportive counseling programs for homosexual youth, will transform the way society treats homosexuality. 
Many opponents of LGBT social movements attribute their antipathy toward homosexuality to the immoral values that, they argue, it may instill in children who are exposed to homosexual individuals. Consistent with this claim, many opponents of increased education about homosexuality suggest that educators should refrain from teaching about sexuality in schools entirely. In her book "Gay and Lesbian Movement", Margaret Cruickshank provides statistical data from the Harris and Yankelovich polls, which confirmed that over 80% of American adults believe that students should be educated about sexuality within their public school. In addition, the polls also found that 75% of parents believe that homosexuality and abortion should be included in the curriculum as well. An assessment conducted on California public school systems discovered that only 2% of all parents actually disapproved of their child being taught about sexuality in school. It has been suggested that education has a positive impact on support for same-sex marriage. African Americans statistically have lower rates of educational achievement; however, the education level of African Americans does not have as much significance on their attitude towards same-sex marriage as it does on white attitudes. Educational attainment among whites has a significant positive effect on support for same-sex marriage, whereas the direct effect of education among African Americans is less significant. The income levels of whites have a direct and positive correlation with support for same-sex marriage, but African American income level is not significantly associated with attitudes toward same-sex marriage. Location also affects ideas towards same-sex marriage; residents of rural and southern areas are significantly more opposed to same-sex marriage in comparison to residents elsewhere. Gays and lesbians who live in rural areas face many challenges, including sparse populations, the traditional culture held closely by the small population of most rural areas, generally hostile social climates towards gays relative to urban areas, and less social and institutional support and access compared to urban areas. To combat this problem, social networks and apps such as Moovs have been created for "LGBT individuals with like-minds" that are "enabled to connect, share, and feel the heartbeat of the community as one." In a study conducted by Darren E. Sherkat, Kylan M. de Vries, and Stacia Creek at Southern Illinois University Carbondale, researchers found that women tend to be more consistently supportive of LGBT rights than men, and that individuals who are divorced or have never married are also more likely to grant marital rights to same-sex couples than married or widowed individuals. They also claimed that white women are significantly more supportive than white men, but there are no gender discrepancies among African Americans. The year in which one was born was also found to be a strong indicator of attitude towards same-sex marriage—generations born after 1946 are considerably more supportive of same-sex marriage than older generations. Finally, the study reported that statistically African Americans are more opposed to same-sex marriage than any other ethnicity. Studies show that non-Protestant Christians are much more likely to support same-sex unions than Protestants; 63% of African Americans claim that they are Baptist or Protestant, whereas only 30% of white Americans are. 
Religion, as measured by individuals' religious affiliations, behaviors, and beliefs, strongly shapes attitudes toward same-sex unions and consistently influences opinions about homosexuality. The most liberal attitudes are generally reflected by Jews, liberal Protestants, and people who are not affiliated with religion, because many of their religious traditions have not "systematically condemned homosexual behaviors" in recent years. Moderate and tolerant attitudes are generally reflected by Catholics and moderate Protestants, while the most conservative views are held by evangelical Protestants. Moreover, people whose social networks are strongly tied to a religious congregation tend to be less tolerant of homosexuality. Organized religion, especially Protestant and Baptist denominations, espouses conservative views that traditionally denounce same-sex unions, and members of these congregations are therefore more likely to hear messages of this nature. Polls have also indicated that the amount and level of personal contact that individuals have with homosexual individuals, as well as their views of traditional morality, affect attitudes toward same-sex marriage and homosexuality. Fiction also plays a growing role in shaping people's attitudes towards same-sex marriage. An original idea appears in Rafael Grugman's dystopian novel "Nontraditional Love" (2008), which describes an inverted world in which mixed-sex marriages are forbidden. In this world, intimacy between the opposite sexes is rejected, and world history and the classics of world literature have been falsified in order to support the ideology of the homosexual world, in which same-sex love is the traditional form. At the heart of the novel is a love story between a man and a woman who were born heterosexual in a homosexual world and are forced to hide their feelings and their sexual orientation, since for that society love between a man and a woman is non-traditional. Underlying this story is the idea that society should be tolerant and accepting and respect the right of every person to be themselves; it is an unusual approach that supports the human rights of all people, as well as same-sex marriage.
https://en.wikipedia.org/wiki?curid=13070
Great Victoria Desert The Great Victoria Desert is a sparsely populated desert ecoregion and interim Australian bioregion in Western Australia and South Australia. The Great Victoria is the largest desert in Australia and consists of many small sandhills, grassland plains, areas with a closely packed surface of pebbles (called desert pavement or gibber plains) and salt lakes. It is over 700 kilometres wide (from west to east) and covers an area of about 350,000 square kilometres, from the Eastern Goldfields region of Western Australia to the Gawler Ranges in South Australia. The Western Australian mulga shrublands ecoregion lies to the west, the Little Sandy Desert to the northwest, the Gibson Desert and the Central Ranges xeric shrublands to the north, the Tirari-Sturt stony desert to the east, while the Nullarbor Plain to the south separates it from the Southern Ocean. Average annual rainfall is low and irregular, ranging from about 200 to 250 mm per year. Thunderstorms are relatively common in the Great Victoria Desert, with an average of 15–20 thunderstorms per annum. Summer daytime temperatures range from 32 to 40 °C, while in winter this falls to 18 to 23 °C. The Great Victoria Desert is a World Wildlife Fund ecoregion and an Interim Biogeographic Regionalisation for Australia (IBRA) region of the same name. The majority of people living in the region are Indigenous Australians from different groups including the Kogara, the Mirning and the Pitjantjatjara. Aboriginal populations have been increasing in this region. Young Indigenous adults from the Great Victoria Desert region work in the Wilurarra Creative programs to maintain and develop their culture. Despite its isolated location, the Great Victoria is bisected by very rough tracks including the Connie Sue Highway and the Anne Beadell Highway. Human activity has included some mining and nuclear weapons testing. In 1875, British explorer Ernest Giles became the first European to cross the desert. He named the desert after the then-reigning British monarch, Queen Victoria. In 1891, David Lindsay's expedition traveled across this area from north to south. Frank Hann was looking for gold in this area between 1903 and 1908. Len Beadell explored the area in the 1960s. Only the hardiest of plants can survive in much of this environment. Between the sand ridges there are areas of wooded steppe consisting of "Eucalyptus gongylocarpa", "Eucalyptus youngiana" and mulga "(Acacia aneura)" shrubs scattered over areas of resilient spinifex grasses, particularly "Triodia basedowii". Wildlife adapted to these harsh conditions includes few large birds or mammals. However, the desert does sustain many types of lizard, including the vulnerable great desert skink ("Egernia kintorei") and the Central Ranges taipan (discovered in 2007), and a number of small marsupials, including the endangered sandhill dunnart "(Sminthopsis psammophila)" and the crest-tailed mulgara "(Dasycercus cristicauda)". One way to survive here is to burrow into the sands, as a number of the desert's animals, including the southern marsupial mole "(Notoryctes typhlops)" and the water-holding frog, do. Birds include the chestnut-breasted whiteface ("Aphelocephala pectoralis"), found on the eastern edge of the desert, and the malleefowl of Mamungari Conservation Park. Predators of the desert include the dingo (as the desert is north of the Dingo Fence) and two large monitor lizards, the perentie "(Varanus giganteus)" and the sand goanna "(Varanus gouldii)". 
As this area has had very limited use for agriculture, habitats remain largely undisturbed. Parts of the desert are protected areas, including Mamungari Conservation Park (formerly known as Unnamed Conservation Park) in South Australia, a large area of pristine arid-zone wilderness which possesses cultural significance and is one of the fourteen World Biosphere Reserves in Australia. Habitat is also preserved in the large Aboriginal local government area of Anangu Pitjantjatjara Yankunytjatjara in South Australia and in the Great Victoria Desert Nature Reserve of Western Australia. The nuclear weapons trials carried out by the United Kingdom at Maralinga and Emu Field in the 1950s and early 1960s have left areas contaminated with plutonium-239 and other radioactive material.
https://en.wikipedia.org/wiki?curid=13072
GNU Lesser General Public License The GNU Lesser General Public License (LGPL) is a free-software license published by the Free Software Foundation (FSF). The license allows developers and companies to use and integrate a software component released under the LGPL into their own (even proprietary) software without being required by the terms of a strong copyleft license to release the source code of their own components. However, any developer who modifies an LGPL-covered component is required to make their modified version available under the same LGPL license. For proprietary software, code under the LGPL is usually used in the form of a shared library, so that there is a clear separation between the proprietary and LGPL components. The LGPL is primarily used for software libraries, although it is also used by some stand-alone applications. The LGPL was developed as a compromise between the strong copyleft of the GNU General Public License (GPL) and more permissive licenses such as the BSD licenses and the MIT License. The word "Lesser" in the title shows that the LGPL does not guarantee the end user's complete freedom in the use of software; it only guarantees the freedom of modification for components licensed under the LGPL, but not for any proprietary components. The license was originally called the GNU Library General Public License and was first published in 1991, adopting the version number 2 for parity with GPL version 2. The LGPL was revised in minor ways in the 2.1 point release, published in 1999, when it was renamed the GNU Lesser General Public License to reflect the FSF's position that not all libraries should use it. Version 3 of the LGPL was published in 2007 as a list of additional permissions applied to GPL version 3. In addition to the GPL's term "work based on the Program", LGPL version 2 introduced two additional clarifying terms: a "work based on the library" and a "work that uses the library". LGPL version 3 partially dropped these terms. The main difference between the GPL and the LGPL is that the latter allows the work to be linked with (in the case of a library, "used by") a non-(L)GPLed program, regardless of whether it is free software or proprietary software. In LGPL 2.1, the non-(L)GPLed program can then be distributed under any terms if it is not a derivative work. If it is a derivative work, then the program's terms must allow for "modification of the work for the customer's own use and reverse engineering for debugging such modifications." Whether a work that uses an LGPL program is a derivative work or not is a legal issue. A standalone executable that dynamically links to a library through a .so, .dll, or similar medium is generally accepted as not being a derivative work as defined by the LGPL; it would fall under the definition of a "work that uses the Library", which is addressed in paragraph 5 of the LGPL version 2.1. Essentially, if it is a "work that uses the library", then it must be possible for the software to be linked with a newer version of the LGPL-covered program. The most commonly used method for doing so is to use "a suitable shared library mechanism for linking". Alternatively, a statically linked library is allowed if either source code or linkable object files are provided. One feature of the LGPL is the permission to relicense under the GPL any piece of software which is received under the LGPL (see section 3 of the LGPL version 2.1, and section 2 option b of the LGPL version 3). 
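To make the shared-library arrangement concrete, the following minimal C sketch shows how a proprietary program typically consumes an LGPL-covered library. The library name (liblgplmath), its header, and the function lm_add are hypothetical placeholders invented for illustration; this is a sketch of common practice, not legal advice.

/* main.c -- proprietary application code.
 * It calls the LGPL library only through its public interface, so under
 * the common reading of LGPL 2.1 it is a "work that uses the Library"
 * rather than a derivative work of the library itself. */
#include <stdio.h>
#include "lgplmath.h"   /* public header of the hypothetical LGPL library */

int main(void) {
    /* lm_add() is assumed to be exported by the shared library liblgplmath.so */
    printf("2 + 3 = %d\n", lm_add(2, 3));
    return 0;
}

/* Build by dynamically linking against the shared library:
 *     cc main.c -llgplmath -o app
 * Because liblgplmath.so remains a separate, replaceable shared object,
 * users can relink against a newer or modified LGPL version of the library,
 * which is the "suitable shared library mechanism for linking" the license
 * refers to; the proprietary source of main.c itself need not be released. */

By contrast, statically linking the same library into the executable would, as noted above, oblige the distributor to provide either source code or linkable object files so that users can still relink against a modified library.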
This feature allows for direct reuse of LGPLed code in GPLed libraries and applications. Version 3 of the LGPL is not inherently compatible with version 2 of the GPL. However, works using the latter that have given permission to use a later version of the GPL are compatible: a work released under the GPLv2 "or any later version" may be combined with code from an LGPL version 3 library, with the combined work as a whole falling under the terms of the GPLv3. The former name "GNU Library General Public License" gave some the impression that the FSF recommended that software libraries use the LGPL and that programs use the GPL. In February 1999, GNU Project leader Richard Stallman wrote the essay "Why you shouldn't use the Lesser GPL for your next library", explaining that the LGPL had not been deprecated, but that one should not "necessarily" use the LGPL for all libraries. Stallman and the FSF sometimes advocate licenses even less restrictive than the LGPL as a matter of strategy; a prominent example was Stallman's endorsement of the use of a BSD-style license by the Vorbis project for its libraries. The license uses terminology which is mainly intended for applications written in the C programming language or its family. Franz Inc. published its own preamble to the license to clarify terminology in the Lisp context; the LGPL with this preamble is sometimes referred to as the LLGPL. In addition, Ada has a special feature, generics, which may prompt the use of the GNAT Modified General Public License: it allows code to link against or instantiate GMGPL-covered units without the code itself becoming covered by the GPL. C++ templates and header-only libraries have the same problem as Ada generics. Version 3 of the LGPL addresses such cases in section 3. Some concern has arisen about the suitability of object-oriented classes in LGPL'd software being inherited by non-(L)GPL code; clarification is given on the official GNU website.
https://en.wikipedia.org/wiki?curid=13073
Gosford Gosford is a New South Wales suburb located in the heart of the Central Coast Region, about 76 km north of the Sydney CBD. The suburb is situated at the northern extremity of Brisbane Water, an extensive northern branch of the Hawkesbury River estuary and Broken Bay. The suburb is the administrative centre and CBD of the Central Coast region, which is the third largest urban area in New South Wales after Sydney and Newcastle. Following its formation from the combination of the previous Gosford City and Wyong Shire Councils, Gosford has been earmarked as a vital CBD spine under the NSW Metropolitan Strategy. The population of the suburb was 3,499 in the 2016 census, but there were 169,053 people in the wider Gosford area in 2016. Until white settlement, the area around Gosford was inhabited by the Guringai peoples, who were principally coastal dwellers, and the Darkinjung people, who inhabited the hinterland. Along with the other land around the Hawkesbury River estuary, the Brisbane Water district was explored during the early stages of the settlement of New South Wales. Gosford itself was explored by Governor Phillip between 1788 and 1789. The area was difficult to access and settlement began around 1823. By the late 19th century the agriculture in the region was diversifying, with market gardens and citrus orchards occupying the rich soil left after the timber harvest. As late as 1850, the road between Hawkesbury (near Pittwater) and Brisbane Water was a cart-wheel track. Typical of early Colonial settlement, convicts lived and worked in the Gosford area; in 1825, Gosford's population reached 100, of whom 50% were convicts. East Gosford was the first centre of settlement. Gosford was named in 1839 after Archibald Acheson, 2nd Earl of Gosford – a friend of the then Governor of New South Wales, George Gipps. Acheson's title derives its name from Gosford, a townland (sub-division) of Markethill in County Armagh in Northern Ireland. In 1887, the rail link to Sydney was completed, requiring a bridge over the Hawkesbury River and a tunnel through the sandstone ridge west of Woy Woy. The introduction of this transport link and then the Pacific Highway in 1930 accelerated the development of the region. Gosford became a town in 1885 and was declared a municipality in 1886. At the 2016 census, there were 3,499 people in Gosford. 59.6% of people were born in Australia; the next most common countries of birth were India (4.5%) and England (2.9%). 65.2% of people spoke only English at home; other languages spoken at home included Mandarin (3.7%). The most common responses for religion were No Religion (33.9%) and Catholic (18.2%). Gosford has a humid subtropical climate (Köppen climate classification: Cfa) with warm summers and mild winters. In summer, temperatures average about 27–28 °C in the day with high humidity and about 17–18 °C at night. Winters are mild with cool overnight temperatures and mild to occasionally warm daytime temperatures with lower humidity. Average rainfall is 1,333 mm, much of which falls in the late summer and autumn. The record maximum temperature was set on 18 January 2013 and the record minimum on 16 July 1970. Gosford proper is located in a valley with President's Hill on the city's western border, Rumbalara Reserve on its eastern border, and Brisbane Water to the city's south. Mann Street, Gosford's main street and part of the Pacific Highway, runs north–south and contains the frontage for much of the commercial district. 
In the centre of Gosford is a shopping and community precinct, including Kibble Park, William Street Mall, Gosford City Library, the Imperial Shopping Centre and a full range of shops, cafes, banks and services. A renewed period of optimism has followed the demolition of several derelict buildings and a number of infrastructure investment projects, including the full fibre-optic telecommunications rollout of the National Broadband Network in the city's CBD in 2012, as well as the so-called Kibbleplex project, announced in 2013, which plans to house the new regional library, tertiary teaching rooms and associated organisations. The Gosford Classic Car Museum opened in 2016 in the nearby suburb of West Gosford. Recent residential apartments have been built in various areas of the Gosford central business district. Gosford is situated along an identified business growth corridor between Erina, the West Gosford light industrial zone and Somersby. Connectivity of main roads and rail travel times between Sydney, the Central Coast, Lake Macquarie and the city of Newcastle are key issues for corporate business relocation to the region. Aged and personal care and retail are major employers in Gosford. As an entertainment hub, Mann Street enjoys relatively good public transport links and is one of the Central Coast's most popular spots for pubs and clubs, in close proximity to cultural and sporting events. Yacht and other boat building has been undertaken by East Coast Yachts since 1964 in West Gosford. Local media include the Gosford Community News, published fortnightly by Ducks Crossing Publications since 2010, and the Express Advocate, a free weekly suburban-style newspaper published by News Limited in the News Local group, as well as a number of radio stations. The Central Coast Highway cuts through Gosford's waterfront area, while its predecessor, the Pacific Highway, takes on several names through the CBD itself. Mann Street contains the main public transport links for Gosford, including Gosford railway station, with hourly trains to Sydney Central Station and to Newcastle Interchange. Outside Gosford station there is also a terminal for several bus routes linking Gosford to the rest of the Central Coast. St Joseph's Catholic College, East Gosford, is an all-girls school.
https://en.wikipedia.org/wiki?curid=13075
Gordon Brown James Gordon Brown (born 20 February 1951) is a British politician who was Prime Minister of the United Kingdom and Leader of the Labour Party from 2007 to 2010. He served as Chancellor of the Exchequer from 1997 to 2007. Brown was a Member of Parliament (MP) from 1983 to 2015, first for Dunfermline East and later for Kirkcaldy and Cowdenbeath. A doctoral graduate, Brown read history at the University of Edinburgh, where he was elected Rector in 1972. He spent his early career working as both a lecturer at a further education college and a television journalist. He entered Parliament in 1983 as the MP for Dunfermline East. He joined the Shadow Cabinet in 1989 as Shadow Secretary of State for Trade, and was later promoted to become Shadow Chancellor of the Exchequer in 1992. After Labour's victory in 1997, he was appointed Chancellor of the Exchequer, becoming the longest-serving holder of that office in modern history. Brown's tenure as Chancellor was marked by major reform of Britain's monetary and fiscal policy architecture: he transferred interest-rate-setting powers to the Bank of England, widely extended the powers of the Treasury to cover much domestic policy, and transferred responsibility for banking supervision to the Financial Services Authority. Controversial moves included the abolition of advance corporation tax (ACT) relief in his first budget, and the removal in his final budget of the 10% "starting rate" of personal income tax which he had introduced in 1999. Tony Blair resigned as Prime Minister and Labour Leader in 2007, and Brown was chosen to replace him in an uncontested election. After an initial rise in the opinion polls following Brown's arrival as Prime Minister, Labour's popularity declined with the onset of a recession in 2008, leading to poor results in the local and European elections in 2009. A year later, Labour lost 91 seats in the House of Commons at the 2010 general election, the party's biggest loss of seats in a single general election since 1931, making the Conservatives the largest party in a hung parliament. Brown remained in office as Labour negotiated to form a coalition government with the Liberal Democrats. On 10 May 2010, Brown announced he would stand down as leader of the Labour Party, and instructed the party to put into motion the processes to elect a new leader. Labour's attempts to retain power failed, and on 11 May he officially resigned as Prime Minister and Leader of the Labour Party. He was succeeded as Prime Minister by David Cameron, and as Leader of the Labour Party by Ed Miliband. Brown later played a prominent role in the campaign to maintain the union during the 2014 Scottish independence referendum. Brown was born at the Orchard Maternity Nursing Home in Giffnock, Renfrewshire, Scotland. His father was John Ebenezer Brown (1914–1998), a minister of the Church of Scotland and a strong influence on Brown. His mother was Jessie Elizabeth "Bunty" Brown ("née" Souter; 1918–2004), the daughter of John Souter, a timber merchant. The family moved to Kirkcaldy – then the largest town in Fife, across the Firth of Forth from Edinburgh – when Gordon was three. Brown was brought up there in a manse with his elder brother John and younger brother Andrew; he is therefore often referred to as a "son of the manse", an idiomatic Scottish phrase. 
Brown was educated first at Kirkcaldy West Primary School, where he was selected for an experimental fast-stream education programme that took him, two years early, to Kirkcaldy High School for an academic hothouse education taught in separate classes. Aged 16, he wrote that he loathed and resented this "ludicrous" experiment on young lives. He was accepted by the University of Edinburgh to study history at the same early age of 16. During an end-of-term rugby union match at his old school, he received a kick to the head and suffered a retinal detachment. This left him blind in his left eye, despite treatment including several operations and weeks spent lying in a darkened room. Later at Edinburgh, while playing tennis, he noticed the same symptoms in his right eye. Brown underwent experimental surgery at the Edinburgh Royal Infirmary and his right eye was saved by a young eye surgeon, Hector Chawla. Brown graduated from Edinburgh with a First-Class Honours MA degree in history in 1972, and stayed on to complete a PhD in history, titled "The Labour Party and Political Change in Scotland 1918–29", which he was awarded ten years later in 1982. In his youth at the University of Edinburgh, Brown was involved in a romantic relationship with Margarita, Crown Princess of Romania. Margarita said about it: "It was a very solid and romantic story. I never stopped loving him but one day it didn't seem right any more, it was politics, politics, politics, and I needed nurturing." An unnamed friend of those years is quoted by Paul Routledge in his biography of Brown as recalling: "She was sweet and gentle and obviously cut out to make somebody a very good wife. She was bright, too, though not like him, but they seemed made for each other." In 1972, while still a student, Brown was elected Rector of the University of Edinburgh, the convener of the University Court. He served as Rector until 1975, and also edited the document "The Red Paper on Scotland". From 1976 to 1980 Brown was employed as a lecturer in politics at Glasgow College of Technology. He also worked as a tutor for the Open University. In the 1979 general election, Brown stood for the Edinburgh South constituency, losing to the Conservative candidate, Michael Ancram. From 1980, he worked as a journalist at Scottish Television, later serving as current affairs editor until his election to Parliament in 1983. Brown was elected to Parliament as a Labour MP at his second attempt, for Dunfermline East in the 1983 general election. His first Westminster office mate was a newly elected MP from the Sedgefield constituency, Tony Blair. Brown became an opposition spokesman on Trade and Industry in 1985. In 1986, he published a biography of the Independent Labour Party politician James Maxton, the subject of his doctoral thesis. Brown was Shadow Chief Secretary to the Treasury from 1987 to 1989 and then Shadow Secretary of State for Trade and Industry, before becoming Shadow Chancellor in 1992. During the 1979 Scottish devolution referendum, Brown led the "Labour Movement Yes" campaign, refusing to join the cross-party "Yes for Scotland" campaign, while other senior Labour politicians – including Robin Cook, Tam Dalyell and Brian Wilson – campaigned for a "No" vote. Brown was subsequently a key participant in the Scottish Constitutional Convention, signing the Claim of Right for Scotland in 1989. Labour leader John Smith died suddenly in May 1994. 
Brown did not contest the leadership after Tony Blair became the favourite, deciding to make way for Blair to avoid splitting the pro-modernising vote in the leadership ballot. It has long been rumoured that a deal was struck between Blair and Brown at the former Granita restaurant in Islington, in which Blair promised to give Brown control of economic policy in return for Brown not standing against him in the leadership election. Whether or not this is true, the relationship between Blair and Brown was central to the fortunes of New Labour, and they mostly remained united in public, despite reported serious private rifts. As Shadow Chancellor and Chancellor-in-waiting, Brown was seen as a good choice by business and the middle class. While he was Chancellor, inflation sometimes exceeded the 2% target; each time it exceeded three per cent, the Governor of the Bank of England was required to write a letter to the Chancellor. Following a reorganisation of Westminster constituencies in Scotland in 2005, Brown became MP for Kirkcaldy and Cowdenbeath at the general election. In the 1997 general election, Labour defeated the Conservatives by a landslide to end their 18-year exile from government, and when Tony Blair, the new Prime Minister, announced his ministerial team on 2 May 1997, he appointed Brown as Chancellor of the Exchequer. Brown would remain in this role for 10 years and two months, making him the longest-serving Chancellor in modern history. The Prime Minister's website highlights some achievements from Brown's decade as Chancellor: making the Bank of England independent and delivering an agreement on poverty and climate change at the G8 summit in 2005. On taking office as Chancellor of the Exchequer, Brown gave the Bank of England operational independence in monetary policy, and thus responsibility for setting interest rates through the Bank's Monetary Policy Committee. At the same time, he also changed the inflation measure from the Retail Price Index to the Consumer Price Index and transferred responsibility for banking supervision to the Financial Services Authority. Some commentators have argued that this division of responsibilities exacerbated the severity of the 2007 global banking crisis in Britain. In the 1997 election and subsequently, Brown pledged not to increase the basic or higher rates of income tax. Over his Chancellorship, he reduced the basic rate from 23% to 20%. However, in all but his final budget, Brown increased the tax thresholds in line with inflation, rather than earnings, resulting in fiscal drag. Corporation tax fell under Brown, from a main rate of 33% to 28%, and from 24% to 19% for small businesses. In 1999, he introduced a lower tax band of 10%. He abolished this 10% tax band in his last budget in 2007 in order to reduce the basic rate from 22% to 20%, increasing tax for 5 million people and, according to the calculations of the Institute for Fiscal Studies, leaving those earning between £5,000 and £18,000 as the biggest losers. To backbench cheers, Brown had described the measure in his last Budget thus: "Having put in place more focused ways of incentivising work and directly supporting children and pensioners at a cost of £3bn a year, I can now return income tax to just two rates by removing the 10p band on non-savings income". According to the OECD, UK taxation increased from a 39.3% share of gross domestic product in 1997 to 42.4% in 2006, going to a higher level than that of Germany. 
This increase has mainly been attributed to active government policy, and not simply to the growing economy. Conservatives have accused Brown of imposing "stealth taxes". A commonly reported example resulted in 1997 from a technical change in the way corporation tax is collected, the indirect effect of which was that dividends on stock investments held within pensions were taxed, lowering pension returns and contributing to the demise of most of the final salary pension funds in the UK. The Treasury contends that this tax change was crucial to long-term economic growth. Brown's 2000 Spending Review outlined a major expansion of government spending, particularly on health and education. In his April 2002 budget, Brown increased National Insurance to pay for health spending. He also introduced working tax credits, and in his last budget as Chancellor, Brown gave an extra £3 billion in pension allowances, an increase in the child tax credit, and an increase in the working tax credit. These increases were followed by another £1 billion of support for increases in the child tax credit. Under Brown, the tax code, the standard guide to tax, doubled in length to 17,000 pages. In October 1997, Brown announced that the Treasury would set five economic tests to determine whether the economic case had been made for the United Kingdom to adopt the European single currency; the Treasury indicated in June 2003 that the tests had not been passed. In 2000, Brown was accused of starting a political row about higher education (referred to as the Laura Spence Affair) when he accused the University of Oxford of elitism in its admissions procedures, describing its decision not to offer a place to state-school pupil Laura Spence as "absolutely outrageous". Lord Jenkins, then Oxford Chancellor and himself a former Labour Chancellor of the Exchequer, said "nearly every fact he used was false." Between 1999 and 2002, Brown sold 60% of the UK's gold reserves shortly before gold entered a protracted bull market, a sale since nicknamed by dealers the Brown Bottom or Brown's Bottom. The official reason for selling the gold reserves was to reduce the portfolio risk of the UK's reserves by diversifying away from gold. The UK eventually sold about 395 tons of gold over 17 auctions from July 1999 to March 2002, at an average price of about US$275 per ounce, raising approximately US$3.5 billion. By 2011, that quantity of gold was worth over $19 billion, and Brown's decision to sell it was widely criticised. As Chancellor, Brown argued against renationalising the railways, saying at the Labour conference in 2004 that it would cost £22 billion. During his time as Chancellor, Brown reportedly believed that it was appropriate to remove most, but not all, of the unpayable Third World debt. On 20 April 2006, in a speech to the United Nations Ambassadors, Brown outlined a "Green" view of global development. In October 2004, Tony Blair announced he would not lead the party into a fourth general election, but would serve a full third term. Political comment over the relationship between Brown and Blair continued up to and beyond the 2005 election, which Labour won with a reduced majority and reduced vote share. Blair announced on 7 September 2006 that he would step down within a year. Brown was the clear favourite to succeed Blair; he was the only candidate spoken of seriously in Westminster. 
Appearances and news coverage leading up to the handover were interpreted as preparing the ground for Brown to become Prime Minister, in part by creating the impression of a statesman with a vision for leadership and global change. This enabled Brown to signal the most significant priorities for his agenda as Prime Minister; speaking at a Fabian Society conference on 'The Next Decade' in January 2007, he stressed education, international development, narrowing inequalities (to pursue 'equality of opportunity and fairness of outcome'), renewing Britishness, restoring trust in politics, and winning hearts and minds in the war on terror as key priorities. Brown ceased to be Chancellor and became Prime Minister of the United Kingdom on 27 June 2007. Like all modern Prime Ministers, Brown concurrently served as the First Lord of the Treasury and the Minister for the Civil Service, and was a member of the Cabinet of the United Kingdom. Until his resignation from the post in May 2010 he was Leader of the Labour Party, and he was Member of Parliament for the constituency of Kirkcaldy and Cowdenbeath until he stepped down in 2015. He was the sixth post-war Prime Minister, of a total of 13, to assume the role without having won a general election, and the first Prime Minister from a Scottish constituency since the Conservative Sir Alec Douglas-Home in 1964. Not all British prime ministers have been university graduates, but, of those who were, Brown was one of only five who had not attended either Oxford or Cambridge. He proposed moving some traditional prime ministerial powers conferred by royal prerogative to the realm of Parliament, such as the power to declare war and approve appointments to senior positions. Brown wanted Parliament to gain the right to ratify treaties and to have more oversight of the intelligence services. He also proposed moving some powers from Parliament to citizens, including the right to form "citizens' juries", easily petition Parliament for new laws, and rally outside Westminster. He asserted that the attorney general should not have the right to decide whether to prosecute in individual cases, such as in the loans-for-peerages scandal. There was speculation during September and early October 2007 about whether Brown would call a snap general election, and the party launched the Not Flash, Just Gordon advertising campaign, seen largely as pre-election promotion of Brown as Prime Minister. However, Brown announced on 6 October that there would be no election any time soon, despite opinion polls showing that he was capable of winning an election should he call one. This proved to be a costly mistake, as during 2008 his party slid behind the Conservatives (led by David Cameron) in the polls. Disputes over political donations, a string of losses in local elections, and by-election losses in Crewe and Glasgow did him and the government no favours either. His political opponents accused him of being indecisive, which Brown denied. In July 2008 Brown supported a new bill extending the pre-charge detention period to 42 days. The bill was met with opposition on both sides of the House and a backbench rebellion, and in the end it passed the Commons by just nine votes. The House of Lords then defeated the bill, with peers characterising it as "fatally flawed, ill thought through and unnecessary" and stating that "it seeks to further erode fundamental legal and civil rights". Brown was mentioned by the press during the expenses scandal for claiming for the payment of his cleaner. 
However, no wrongdoing was found and the Commons Authority did not pursue Brown over the claim. Meanwhile, the Commons Fees Office stated that a double payment for a £153 plumbing repair bill was a mistake on their part and that Brown had repaid it in full. During his Labour leadership campaign, Brown proposed some policy initiatives, which he called the "manifesto for change". The manifesto included a clampdown on corruption and a new Ministerial Code, which set out clear standards of behaviour for ministers. He also stated in a speech when announcing his bid that he wanted a "better constitution" that is "clear about the rights and responsibilities of being a citizen in Britain today". He planned to set up an all-party convention to look at new powers for Parliament and at rebalancing powers between Whitehall and local government. Brown said he would give Parliament the final say on whether British troops were sent into action in future. He said he wanted to release more land and ease access to home ownership with shared equity schemes. He backed a proposal to build new eco-towns, each housing between 10,000 and 20,000 home-owners – up to 100,000 new homes in total. Brown also said he wanted to have doctors' surgeries open at the weekends, and GPs on call in the evenings. Doctors had been given the right to opt out of out-of-hours care in 2007, under a controversial pay deal signed by then-Health Secretary John Reid, which awarded them a 22 per cent pay rise in 2006. Brown also stated in the manifesto that the NHS was his top priority. On 5 June 2007, just three weeks before he was due to take the post of Prime Minister, Brown made a speech promising "British jobs for British workers". Brown reiterated that promise at the Labour Party's annual conference in September, which caused controversy as he coupled it with a commitment to crack down on migrant workers. The Conservative Party, led by David Cameron, promptly pointed out that such a commitment was illegal under EU law. Brown was committed to the Iraq War, but said in a speech in June 2007 that he would "learn the lessons" from the mistakes made in Iraq. Brown said in a letter published on 17 March 2008 that the United Kingdom would hold an inquiry into the war. Brown went to great lengths to empathise with those who lost family members in the Iraq and Afghanistan conflicts, often saying "War is tragic", echoing Blair's "War is horrible". Nonetheless, in November 2007 Brown was accused by some senior military figures of not adhering to the Military Covenant, a convention within British politics ensuring adequate safeguards, rewards and compensation for military personnel who risk their lives in obedience to orders derived from the policy of the elected government. Brown did not attend the opening ceremony of the 2008 Summer Olympics on 8 August 2008 in Beijing, instead attending the closing ceremony on 24 August 2008. Brown had been under intense pressure from human rights campaigners to send a message to China concerning the 2008 Tibetan unrest, but his decision not to attend the opening ceremony, made several weeks in advance, was not an act of protest or intended as a stand on principle. In a speech in July 2007, Brown clarified his position regarding Britain's relationship with the USA: "We will not allow people to separate us from the United States of America in dealing with the common challenges that we face around the world. 
I think people have got to remember that the special relationship between a British prime minister and an American president is built on the things that we share, the same enduring values about the importance of liberty, opportunity, the dignity of the individual. I will continue to work, as Tony Blair did, very closely with the American administration." Brown and the Labour Party had pledged to allow a referendum on the EU Treaty of Lisbon. On 13 December 2007, Foreign Secretary David Miliband stood in for the Prime Minister at the official signing ceremony in Lisbon of the EU Reform Treaty. Brown's opponents on both sides of the House, and in the press, suggested that ratification by Parliament was not enough and that a referendum should also be held. Labour's 2005 manifesto had pledged to give the British public a referendum on the original EU Constitution. Brown argued that the Treaty significantly differed from the Constitution and as such did not require a referendum. He also responded with plans for a lengthy debate on the topic, and stated that he believed the document to be too complex to be decided by referendum. During Brown's premiership, in October 2008, the Advisory Council on the Misuse of Drugs (ACMD) recommended to the then Home Secretary, Jacqui Smith, that cannabis remain classified as a Class C drug. Acting against the advice of the Council, she chose to reclassify it as Class B. After Professor David Nutt, the chair of the ACMD, criticised this move in a lecture in 2009, he was asked to step down by then Home Secretary Alan Johnson. Following his resignation, Professor Nutt said Brown had "made up his mind" to reclassify cannabis despite evidence to the contrary. Brown had argued, "I don't think that the previous studies took into account that so much of the cannabis on the streets is now of a lethal quality and we really have got to send out a message to young people—this is not acceptable". Professor Nutt's predecessor at the ACMD, Sir Michael Rawlins, later said, "Governments may well have good reasons for taking an alternative view ... When that happens, then the government should explain why it's ignoring the particular advice". Brown's premiership coincided with the global recession, during which he called for fiscal action in an attempt to stimulate aggregate demand. Domestically, his administration introduced measures including a bank rescue package worth around £500 billion (approximately $850 billion), a temporary 2.5-percentage-point cut in value-added tax and a "car scrappage" scheme. In mid-2008, Brown's leadership was challenged as some MPs openly called for him to resign. This event was dubbed the 'Lancashire Plot', as two backbenchers from (pre-1974) Lancashire urged him to step down and a third questioned his chances of holding on to the Labour Party leadership. Several MPs argued that if Brown did not recover in the polls by early 2009, he should call a leadership contest. However, certain prominent MPs, such as Jacqui Smith and Bill Rammell, suggested that Brown was the right person to lead Britain through its economic crisis. In the autumn, Siobhain McDonagh, an MP and junior government whip who during her time in office had never voted against the government, spoke of the need for discussion over Brown's position. While she did not state that she wanted Brown deposed, she implored the Labour Party to hold a leadership election. McDonagh was sacked from her role shortly afterward, on 12 September. 
She was supported in making clear her desire for a contest by Joan Ryan (who applied, as McDonagh had, for leadership nomination papers, and became the second rebel to be fired from her job), Jim Dowd, Greg Pope, and a string of others who had previously held positions in government. In the face of this speculation over Brown's future, his ministers backed him to lead the party, and Harriet Harman and David Miliband denied that they were preparing leadership bids. After Labour lost the Glasgow East by-election in July, Harman, the deputy leader of the party, said that Brown was the "solution", not the "problem"; Home Secretary Smith, Justice Secretary Jack Straw, Schools Secretary Ed Balls and Cabinet Office Minister Ed Miliband all re-affirmed their support for Brown. The deputy Prime Minister under Blair, John Prescott, also pledged his support. Foreign Secretary David Miliband then denied that he was plotting a leadership bid when, on 30 July, an article he had written in "The Guardian" was interpreted by much of the media as an attempt to undermine Brown. In the article, Miliband outlined the party's future but neglected to mention the Prime Minister. Miliband responded by saying that he was confident Brown could lead Labour to victory in the next general election, and that his article was an attack on the fatalism in the party since the loss of Glasgow East. Miliband continued to show his support for Brown in the face of the challenge that emerged in September, as did Business Secretary John Hutton, Environment Secretary Hilary Benn, and Chief Whip Geoff Hoon. On 6 January 2010, Patricia Hewitt and Geoff Hoon jointly called for a secret ballot on the future of Brown's leadership. The call received little support, and the following day Hoon said that it appeared to have failed and was "over". Brown later referred to the call for a secret ballot as a "form of silliness". In the local elections on 1 May 2008, Labour suffered its worst results in 40 years, finishing in third place with a projected 24% share of the national vote. Subsequently, the party lost by-elections in Nantwich and Crewe and in Henley, and slumped in the polls. A by-election in Glasgow East, triggered by the resignation of David Marshall, saw Labour struggle to appoint a candidate, eventually settling on Margaret Curran, a sitting MSP in the Scottish Parliament. The SNP, Conservatives and Liberal Democrats all derided Labour for their disorganised nature, with Alex Salmond commenting, "This is their 'lost weekend'—they don't have a leader in Scotland, they don't have a candidate in Glasgow East, and they have a prime minister who refuses to come to the constituency". Labour lost the constituency to the Scottish National Party's John Mason, who took 11,277 votes, with Labour just 365 behind; the seat experienced a swing of 22.54%. In the European elections, Labour polled 16% of the vote, finishing in third place behind the Conservatives and the UK Independence Party (UKIP). Voter apathy was reflected in the historically low turnout of around thirty-three per cent; in Scotland voter turnout was only twenty-eight per cent. In the local elections, Labour polled 23% of the vote, finishing in third place behind the Conservatives and Liberal Democrats, and losing control of the four councils it had held prior to the election. 
In a vote widely considered to be a reaction to the expenses scandal, the share of the vote was down for all the major parties: Labour was down one per cent, and the Conservative share was down five per cent. The beneficiaries of the public backlash were generally seen to be the minor parties, including the Green Party and UKIP. These results were Labour's worst since World War II. Brown was quoted in the press as having said that the results were "a painful defeat for Labour", and that "too many good people doing so much good for their communities and their constituencies have lost through no fault of their own." In April 2010, Brown asked the Queen to dissolve Parliament. The general election campaign included the first televised leadership debates in Britain. The result of the election on 6 May was a hung parliament. Brown was re-elected as MP for Kirkcaldy and Cowdenbeath with 29,559 votes. Brown announced on 10 May 2010 that he would stand down as Labour Leader, with a view to a successor being chosen before the next Labour Party Conference in September 2010. The following day, negotiations between the Labour Party and the Liberal Democrats to form a coalition government failed. During the evening, Brown visited Buckingham Palace to tender his resignation as Prime Minister to Queen Elizabeth II and to recommend that she invite the Leader of the Opposition, David Cameron, to form a government. He resigned as leader of the Labour Party with immediate effect. On 13 May 2010, in his first public appearance since leaving 10 Downing Street two days earlier, Brown confirmed he intended to stay on in Parliament as a Labour backbencher, serving the people of his Kirkcaldy and Cowdenbeath constituency. Towards the end of May 2010, Brown began writing "Beyond the Crash", completing it after 14 weeks. The book discusses the 2007–08 financial crisis and Brown's recommendations for future co-ordinated global action. He played a prominent role in the lead-up to, and the aftermath of, the 2014 Scottish independence referendum, campaigning for Scotland to stay in the United Kingdom. On 1 December 2014, Brown announced that he would not be seeking re-election to Parliament, and he stood down at the general election in May 2015. In April 2011, media reports linked Brown with the role of the next managing director of the International Monetary Fund, following the scheduled retirement of Dominique Strauss-Kahn. Brown's successor as Labour leader and Leader of the Opposition, Ed Miliband, supported Brown for the role, while the Prime Minister, David Cameron, voiced opposition. Following the arrest of Strauss-Kahn for alleged sexual assault in May 2011, and his subsequent resignation, these reports re-surfaced. Support for Brown among economists was mixed, but British Government backing for his candidature was not forthcoming; the government instead supported Christine Lagarde, the eventual successful candidate, for the post. Sir Tim Berners-Lee, who had worked with the government during Brown's premiership to publish government data on the internet in the data.gov.uk project, subsequently invited Brown to become a board director of the World Wide Web Foundation to "advise the Web Foundation on ways to involve disadvantaged communities and global leaders in the development of sustainable programs that connect humanity and affect positive change". On 22 April 2011 it was announced that Brown would be taking on an unpaid advisory role at the World Economic Forum. 
Brown was also appointed as the inaugural 'Distinguished Leader in Residence' by New York University, and has taken part in discussions and lectures relating to the global financial crisis and globalisation. In July 2012, Brown was named by Secretary-General Ban Ki-moon as a United Nations Special Envoy on Global Education, and he chaired the International Commission on Financing Global Education Opportunity; the position is unpaid. In December 2015, Brown took his first large-scale role in the private sector since standing down as prime minister in 2010, becoming an advisor to PIMCO. Any money earned from the role is to go to the Gordon and Sarah Brown Foundation to support charitable work. Brown is concerned about child poverty and poverty in general. In 2018, he said: "It makes me angry. I'm seeing poverty I didn't think I would ever see again in my lifetime. Slum housing was a feature of my childhood in the 1950s and 1960s, and I thought we had finally got over the worst of child poverty. Tax credits were the key to that." He also said, "You can't solve child poverty with the existing system. Universal credit is completely underfunded and every time they move people on to it you see more poverty. Minimum wage jobs don't pay enough to keep a family with two or three children out of poverty." Brown said further, "The government is trying to analyse this as people who are too lazy to get into work. But the problem is that people can't earn enough to stay out of poverty." "The Deal", a TV movie from 2003, followed Tony Blair's rise to power and his friendship and rivalry with Brown, played by David Morrissey. In "The Trial of Tony Blair" (2007), Brown was played by Peter Mullan, and in the Channel 4 television film "Coalition" (2015) he was portrayed by Ian Grieve. Brown's early girlfriends included journalist Sheena McDonald and Princess Margarita, the eldest daughter of exiled King Michael of Romania. At the age of 49, Brown married Sarah Macaulay in a private ceremony at his home in North Queensferry, Fife, on 3 August 2000. A daughter, Jennifer Jane, was born prematurely on 28 December 2001; she died on 7 January 2002, one day after suffering a brain haemorrhage. The couple have two sons, John Macaulay (born 17 October 2003) and (James) Fraser (born 18 July 2006). In November 2006, Fraser was diagnosed with cystic fibrosis. "The Sun" had learned of the situation in 2006 and published the story. In 2011, Brown stated that he had wanted the details of his son's condition kept private and that the publication had left him "in tears"; "The Sun" said that it had approached Brown and that discussions had taken place with his colleagues, who provided quotes used in the article. Sarah Brown rarely made official appearances, whether with or without her husband. She is patron of several charities and has written articles for national newspapers related to this work. At the 2008 Labour Party Conference, Sarah caused surprise by taking to the stage to introduce her husband for his keynote address; since then her public profile has increased. Brown has two brothers, John Brown and Andrew Brown. Andrew has been Head of Media Relations in the UK for the French-owned utility company EDF Energy since 2004. Brown is also the brother-in-law of environmental journalist Clare Rewcastle Brown; he wrote a piece for "The Independent" supporting her environmental efforts on behalf of Sarawak. Whilst Prime Minister, Brown spent some of his spare time at Chequers, the house often being filled with friends.
The Browns have entertained local dignitaries such as Sir Leonard Figg. Brown is also a friend of Harry Potter author J. K. Rowling, who says of Brown: "I know him as affable, funny and gregarious, a great listener, a kind and loyal friend." Brown is a strong supporter of the NHS, partly due to both the experimental surgery that saved the sight in his right eye after his retina became detached, and the care he and Sarah Brown received when their premature firstborn baby died. Blindness in his left eye, and the resulting lack of peripheral vision, contributed to Brown's supposed antisocial nature and awkward public manner. For example, both on a podium and before a camera, while reading "he needs to look slightly to one side of the paper to focus; when speaking to an audience or into a camera lens, he must remember to correct what would normally be an automatic tendency to look slightly askew to see clearly with his good eye". Brown's papers were prepared in capital letters and in extremely large type, resulting in his stack of papers at the dispatch box being noticeably bulky due to fewer words per page. Former staffers often attributed Brown's outbursts of temper in Downing Street to his frustration with his physical limitations. Brown is a noted supporter of Kirkcaldy-based football club Raith Rovers and has written articles about his relationship with the club. The son of a Church of Scotland minister, Brown has talked about what he calls his "moral compass" and of his parents being his "inspiration". He has, at least ostensibly, been keen to keep his religion a private matter. According to "The Guardian", he is a member of the Church of Scotland. In March 2009, Brown was named World Statesman of the Year by the Appeal of Conscience Foundation, an American organisation 'dedicated to promoting peace, human rights and understanding between religious faiths'. The award was presented by Rabbi Arthur Schneier, who praised Brown's "compassionate leadership in dealing with the challenging issues facing humanity, his commitment to freedom, human dignity and the environment, and for the major role he has played in helping to stabilise the world's financial system".
https://en.wikipedia.org/wiki?curid=13076
Galileo (spacecraft) Galileo was an American uncrewed spacecraft that studied the planet Jupiter and its moons, as well as several other Solar System bodies. Named after the Italian astronomer Galileo Galilei, it consisted of an orbiter and an entry probe. It was delivered into Earth orbit on October 18, 1989, by Space Shuttle Atlantis. "Galileo" arrived at Jupiter on December 7, 1995, after gravitational assist flybys of Venus and Earth, and became the first spacecraft to orbit Jupiter. It launched the first probe into Jupiter, directly measuring its atmosphere. Despite suffering major antenna problems, "Galileo" achieved the first asteroid flyby, of 951 Gaspra, and discovered the first asteroid moon, Dactyl, around 243 Ida. In 1994, "Galileo" observed Comet Shoemaker–Levy 9's collision with Jupiter. Jupiter's atmospheric composition and ammonia clouds were recorded, the clouds possibly created by outflows from the lower depths of the atmosphere. Io's volcanism and plasma interactions with Jupiter's atmosphere were also recorded. The data "Galileo" collected supported the theory of a liquid ocean under the icy surface of Europa, and there were indications of similar liquid-saltwater layers under the surfaces of Ganymede and Callisto. Ganymede was shown to possess a magnetic field, and the spacecraft found new evidence for exospheres around Europa, Ganymede, and Callisto. "Galileo" also discovered that Jupiter's faint ring system consists of dust from impacts on the four small inner moons. The extent and structure of Jupiter's magnetosphere were also mapped. On September 21, 2003, after 14 years in space and 8 years in the Jovian system, the "Galileo" mission was terminated by sending the spacecraft into Jupiter's atmosphere at a speed of over , eliminating the possibility of contaminating local moons with terrestrial bacteria. Jupiter was rated as the number one priority in the Planetary Science Decadal Survey published in the summer of 1968. In the early 1970s, the first flybys of Jupiter were achieved by "Pioneer 10" and "Pioneer 11", and before the decade was out it was also visited by the more advanced "Voyager 1" and "Voyager 2" spacecraft. Work on the spacecraft began at the Jet Propulsion Laboratory in 1977, while the "Voyager 1" and "2" missions were still being prepared for launch. Early plans called for a launch in January 1982 on what was then codenamed STS-23, but delays in the development of the Space Shuttle allowed more time for development of the probe. As the shuttle program got underway, "Galileo" was scheduled for launch in 1984, but this later slipped to 1985 and then to 1986. The mission was initially called the "Jupiter Orbiter Probe"; it was christened "Galileo" in 1978. Once the spacecraft was complete, its launch was scheduled for 1986 on the STS-61-G mission aboard Atlantis. The Inertial Upper Stage booster was to be used at first, but this changed to the Centaur-G booster, a liquid hydrogen-fueled stage that allowed a direct trajectory to Jupiter. The mission was then delayed further by the hiatus in launches that followed the Space Shuttle "Challenger" disaster, and new safety protocols introduced as a result of the disaster prohibited the use of the Centaur-G stage on the Shuttle, forcing "Galileo" back to a lower-powered Inertial Upper Stage solid-fuel booster.
The mission was re-profiled in 1987 to use several gravitational slingshots, referred to as Venus–Earth–Earth Gravity Assist (VEEGA) maneuvers, to provide the additional velocity required to reach its destination. It was finally launched on October 18, 1989, by Space Shuttle Atlantis on the STS-34 mission. "Galileo" flew by Venus at 05:58:48 UTC on February 10, 1990, at a range of . Having gained in speed, the spacecraft flew by Earth twice, the first time at a range of at 20:34:34 UTC on December 8, 1990, before approaching the S-type asteroid 951 Gaspra to a distance of at 22:37 UTC on October 29, 1991. "Galileo" then performed a second flyby of Earth at at 15:09:25 UTC on December 8, 1992, adding to its cumulative speed. "Galileo" performed close observations of a second asteroid, 243 Ida, at 16:51:59 UTC on August 28, 1993, at a range of . The spacecraft discovered that Ida has a moon, Dactyl, the first discovery of a natural satellite orbiting an asteroid. In 1994, "Galileo" was perfectly positioned to watch the fragments of Comet Shoemaker–Levy 9 crash into Jupiter, whereas terrestrial telescopes had to wait to see the impact sites as they rotated into view. After releasing its atmospheric probe on July 13, 1995, the "Galileo" orbiter became the first artificial satellite of Jupiter at 01:16 UTC on December 8, 1995, after it fired its main engine to enter a 198-day parking orbit. "Galileo"'s prime mission was a two-year study of the Jovian system. The spacecraft traveled around Jupiter in elongated ellipses, each orbit lasting about two months. The differing distances from Jupiter afforded by these orbits allowed "Galileo" to sample different parts of the planet's extensive magnetosphere. The orbits were designed for close-up flybys of Jupiter's largest moons. Once the prime mission concluded, an extended mission started on December 7, 1997; the spacecraft made several flybys of Europa and Io. The closest approach was on October 15, 2001. The radiation environment near Io was very unhealthy for "Galileo"'s systems, so these flybys were saved for the extended mission, when loss of the spacecraft would be more acceptable. "Galileo"'s cameras were deactivated on January 17, 2002, after they had sustained irreparable radiation damage. NASA engineers were able to recover the damaged tape recorder electronics, and "Galileo" continued to return scientific data until it was deorbited in 2003, performing one last scientific experiment: a measurement of the moon Amalthea's mass as the spacecraft swung by it. On December 11, 2013, NASA reported, based on results from the "Galileo" mission, the detection of "clay-like minerals" (specifically, phyllosilicates), often associated with organic materials, on the icy crust of Europa. The presence of the minerals may have been the result of a collision with an asteroid or comet, according to the scientists. The Jet Propulsion Laboratory built the "Galileo" spacecraft and managed the "Galileo" mission for NASA. West Germany's Messerschmitt-Bölkow-Blohm supplied the propulsion module. NASA's Ames Research Center managed the atmospheric probe, which was built by Hughes Aircraft Company. At launch, the orbiter and probe together had a mass of and stood tall. One section of the spacecraft rotated at three rpm, keeping "Galileo" stable and holding six instruments that gathered data from many different directions, including the fields and particles instruments.
The other section of the spacecraft was the 4.8-meter (16-foot)-wide, umbrella-like high-gain antenna, through which data were to be periodically transmitted to Earth. Back on the ground, the mission operations team used software containing 650,000 lines of programming code in the orbit sequence design process, 1,615,000 lines in the telemetry interpretation, and 550,000 lines of code in navigation. The command and data handling (CDH) subsystem was actively redundant, with two parallel data system buses running at all times. Each data system bus (a.k.a. string) was composed of the same functional elements, consisting of multiplexers (MUX), high-level modules (HLM), low-level modules (LLM), power converters (PC), bulk memory (BUM), data management subsystem bulk memory (DBUM), timing chains (TC), phase locked loops (PLL), Golay coders (GC), hardware command decoders (HCD) and critical controllers (CRC). The CDH subsystem was responsible for maintaining a number of essential spacecraft functions. The spacecraft was controlled by six RCA 1802 COSMAC microprocessor CPUs: four on the spun side and two on the despun side. Each CPU was clocked at about 1.6 MHz and fabricated on sapphire (silicon on sapphire), a radiation- and static-hardened material ideal for spacecraft operation. This microprocessor was the first low-power CMOS processor chip, roughly on a par with the 8-bit 6502 that was being built into the Apple II desktop computer at that time. The Galileo Attitude and Articulation Control System (AACSE) was controlled by two Itek Advanced Technology Airborne Computers (ATAC), built using radiation-hardened 2901s. The AACSE could be reprogrammed in flight by sending the new program through the Command and Data Subsystem. "Galileo"'s attitude control system software was written in the HAL/S programming language, also used in the Space Shuttle program. Memory capacity provided by each BUM was 16K of RAM, while the DBUMs each provided 8K of RAM. There were two BUMs and two DBUMs in the CDH subsystem, and all of them resided on the spun side of the spacecraft. The BUMs and DBUMs provided storage for sequences and contained various buffers for telemetry data and interbus communication. Every HLM and LLM was built up around a single 1802 microprocessor and 32K of RAM (for HLMs) or 16K of RAM (for LLMs). Two HLMs and two LLMs resided on the spun side while two LLMs were on the despun side. Thus, the total memory capacity available to the CDH subsystem was 176K of RAM: 144K allocated to the spun side and 32K to the despun side; a short cross-check of this tally appears below. Each HLM and each LLM was responsible for its own fixed set of functions. The HCD received command data from the modulation/demodulation subsystem, decoded these data and transferred them to the HLMs and CRCs. The CRC controlled the configuration of CDH subsystem elements; it also controlled access to the two data system buses by other spacecraft subsystems and supplied signals to enable certain critical events (e.g. probe separation). The GCs provided Golay encoding of data via hardware. The TCs and PLLs established timing within the CDH subsystem.
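The memory arithmetic in that tally is fully determined by the module counts and sizes quoted above, so it can be verified mechanically. A minimal sketch in Python (module names and figures taken from this section; this is an illustration, not flight software):

```python
# Tally of CDH memory, using the module sizes and counts given above.
# Each tuple: (module, size in K of RAM, count on spun side, count on despun side)
modules = [
    ("BUM",  16, 2, 0),   # bulk memory, both on the spun side
    ("DBUM",  8, 2, 0),   # data management subsystem bulk memory, spun side
    ("HLM",  32, 2, 0),   # high-level modules, spun side
    ("LLM",  16, 2, 2),   # low-level modules, two spun and two despun
]

spun = sum(size * n_spun for _, size, n_spun, _ in modules)
despun = sum(size * n_despun for _, size, _, n_despun in modules)

print(f"spun side:   {spun}K")            # 144K
print(f"despun side: {despun}K")          # 32K
print(f"total:       {spun + despun}K")   # 176K
```

The printed totals match the 144K/32K/176K split stated above.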
The propulsion subsystem consisted of a 400 N main engine and twelve 10 N thrusters, together with propellant, storage and pressurizing tanks and associated plumbing. The 10 N thrusters were mounted in groups of six on two 2-meter booms. The fuel for the system was of monomethylhydrazine and nitrogen tetroxide. Two separate tanks held another of helium pressurant. The propulsion subsystem was developed and built by Messerschmitt-Bölkow-Blohm and provided by West Germany, the major international partner in Project "Galileo". At the time, solar panels were not practical at Jupiter's distance from the Sun; the spacecraft would have needed a minimum of of panels. Chemical batteries would likewise be prohibitively large due to technological limitations. The solution was two radioisotope thermoelectric generators (RTGs), which powered the spacecraft through the radioactive decay of plutonium-238. The heat emitted by this decay was converted into electricity through the solid-state Seebeck effect. This provided a reliable and long-lasting source of electricity, unaffected by the cold environment and high-radiation fields of the Jovian system. Each GPHS-RTG, mounted on a boom, carried of . Each RTG contained 18 separate heat source modules, and each module encased four pellets of plutonium(IV) oxide, a ceramic material resistant to fracturing. The modules were designed to survive a range of potential accidents: launch vehicle explosion or fire, re-entry into the atmosphere followed by land or water impact, and post-impact situations. An outer covering of graphite provided protection against the structural, thermal, and eroding environments of a potential re-entry. Additional graphite components provided impact protection, while iridium cladding of the fuel cells provided post-impact containment. The RTGs produced about 570 watts at launch. The power output initially decreased at the rate of 0.6 watts per month and was down to 493 watts when "Galileo" arrived at Jupiter (a short check of these figures appears below). As the launch of "Galileo" neared, anti-nuclear groups, concerned over what they perceived as an unacceptable risk to the public's safety from "Galileo"'s RTGs, sought a court injunction prohibiting the launch. RTGs had been used for years in planetary exploration without mishap: the Lincoln Experimental Satellites 8/9, launched by the U.S. Department of Defense, had 7% more plutonium on board than "Galileo", and the two "Voyager" spacecraft each carried 80% as much plutonium as "Galileo". Activists remembered the messy crash of the Soviet Union's nuclear-powered Kosmos 954 satellite in Canada in 1978, and the 1986 "Challenger" accident, which did not involve nuclear fuel but raised public awareness about spacecraft failures. In addition, no RTGs had ever done a non-orbital swing past the Earth at close range and high speed, as "Galileo"'s Venus–Earth–Earth gravity assist trajectory required them to do. This created a novel mission failure modality that might plausibly have entailed total dispersal of "Galileo"'s plutonium in the Earth's atmosphere. Scientist Carl Sagan, for example, a strong supporter of the "Galileo" mission, said in 1989 that "there is nothing absurd about either side of this argument." After the "Challenger" accident, a study considered additional shielding but rejected it, in part because such a design significantly increased the overall risk of mission failure and only shifted the other risks around; for example, had a failure on orbit occurred, additional shielding would have significantly increased the consequences of a ground impact.
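The quoted power figures imply that 0.6 watts per month was only the initial rate of decline. A naive linear extrapolation over the roughly 74 months between launch (October 1989) and arrival at Jupiter (December 1995) predicts about 526 watts, and plutonium-238 decay alone (half-life about 87.7 years) predicts even more, so the actual 493 watts reflects additional losses, commonly attributed to gradual degradation of the thermoelectric couples. A minimal sketch of that comparison, with the month count an approximation:

```python
P0 = 570.0      # RTG output in watts at launch (October 1989)
months = 73.6   # approximate time from launch to Jupiter arrival (December 1995)

# Linear extrapolation of the quoted initial decline rate of 0.6 W/month.
linear_model = P0 - 0.6 * months                 # ~526 W

# Output if only Pu-238 radioactive decay mattered (half-life ~87.7 years).
pu238_half_life_years = 87.7
decay_only = P0 * 2 ** (-(months / 12) / pu238_half_life_years)   # ~543 W

print(f"linear extrapolation: {linear_model:.0f} W")
print(f"Pu-238 decay alone:   {decay_only:.0f} W")
print("actual output at Jupiter arrival: 493 W")
```

The gap below both simple models illustrates why the quoted 493 watts cannot be explained by fuel decay alone.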
Scientific instruments to measure fields and particles were mounted on the spinning section of the spacecraft, together with the main antenna, the power supply, the propulsion module and most of "Galileo"'s computers and control electronics. The sixteen instruments, weighing altogether, included magnetometer sensors mounted on an 11-meter boom to minimize interference from the spacecraft; a plasma instrument for detecting low-energy charged particles and a plasma-wave detector to study waves generated by the particles; a high-energy particle detector; and a detector of cosmic and Jovian dust. It also carried the Heavy Ion Counter, an engineering experiment to assess the potentially hazardous charged-particle environments the spacecraft flew through, and an extreme ultraviolet detector associated with the UV spectrometer on the scan platform. The despun section's instruments included the camera system; the near-infrared mapping spectrometer to make multi-spectral images for atmospheric and moon surface chemical analysis; the ultraviolet spectrometer to study gases; and the photopolarimeter-radiometer to measure radiant and reflected energy. The camera system was designed to obtain images of Jupiter's satellites at resolutions 20 to 1,000 times better than "Voyager"'s best, because "Galileo" flew closer to the planet and its inner moons, and because the more modern CCD sensor in "Galileo"'s camera was more sensitive and had a broader color detection band than the vidicons of "Voyager". The following information was taken directly from NASA's "Galileo" legacy site. The SSI was an 800-by-800-pixel solid-state camera consisting of an array of silicon sensors called a charge-coupled device (CCD). "Galileo" was one of the first spacecraft to be equipped with a CCD camera. The optical portion of the camera was built as a Cassegrain telescope. Light was collected by the primary mirror and directed to a smaller secondary mirror that channeled it through a hole in the center of the primary mirror and onto the CCD. The CCD sensor was shielded from radiation, a particular problem within the harsh Jovian magnetosphere. The shielding was accomplished by means of a thick layer of tantalum surrounding the CCD except where the light enters the system. An eight-position filter wheel was used to obtain images at specific wavelengths. The images were then combined electronically on Earth to produce color images (a toy sketch of this compositing step appears below). The spectral response of the SSI ranged from about 400 to 1100 nm. The SSI weighed and consumed, on average, 15 watts of power. The NIMS instrument was sensitive to 0.7-to-5.2-micrometer-wavelength infrared light, overlapping the wavelength range of the SSI. The telescope associated with NIMS was all-reflective (using only mirrors and no lenses) with an aperture of . The spectrometer of NIMS used a grating to disperse the light collected by the telescope. The dispersed spectrum of light was focused on detectors of indium antimonide and silicon. The NIMS weighed and used 12 watts of power on average. The Cassegrain telescope of the UVS had a aperture and collected light from the observation target. Both the UVS and EUV instruments used a ruled grating to disperse this light for spectral analysis. This light then passed through an exit slit into photomultiplier tubes that produced pulses or "sprays" of electrons. These electron pulses were counted, and these count numbers constituted the data that were sent to Earth. The UVS was mounted on "Galileo"'s scan platform and could be pointed to an object in inertial space. The EUV was mounted on the spun section; as "Galileo" rotated, the EUV observed a narrow ribbon of space perpendicular to the spin axis. The two instruments combined weighed about and used 5.9 watts of power.
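The color-compositing step mentioned for the SSI is conceptually simple: each filter-wheel position yields a monochrome frame, and frames taken through different filters are registered and stacked into the channels of a color image on the ground. A toy sketch of the idea follows; the filter-to-channel mapping and the random stand-in data are purely illustrative, not the actual SSI processing pipeline:

```python
import numpy as np

# Stand-ins for three 800x800 monochrome SSI frames, each taken through a
# different filter-wheel position (random data here; real frames in practice).
rng = np.random.default_rng(0)
frames = {
    "red":    rng.integers(0, 256, size=(800, 800), dtype=np.uint8),
    "green":  rng.integers(0, 256, size=(800, 800), dtype=np.uint8),
    "violet": rng.integers(0, 256, size=(800, 800), dtype=np.uint8),
}

# Combine the single-filter frames into one RGB composite, mapping the
# violet frame onto the blue channel -- an illustrative choice only.
composite = np.dstack([frames["red"], frames["green"], frames["violet"]])
print(composite.shape)  # (800, 800, 3)
```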
The PPR had seven radiometry bands. One of these used no filters and observed all incoming radiation, both solar and thermal. Another band allowed only solar radiation through. The difference between the solar-plus-thermal and the solar-only channels gave the total thermal radiation emitted. The PPR also measured in five broadband channels that spanned the spectral range from 17 to 110 micrometers. The radiometer provided data on the temperatures of Jupiter's atmosphere and satellites. The design of the instrument was based on that of an instrument flown on the "Pioneer Venus" spacecraft. A aperture reflecting telescope collected light and directed it to a series of filters, and, from there, measurements were performed by the detectors of the PPR. The PPR weighed and consumed about 5 watts of power. The Dust Detector Subsystem (DDS) was used to measure the mass, electric charge, and velocity of incoming particles. The masses of dust particles that the DDS could detect went from to grams. The speed of these small particles could be measured over the range of . The instrument could measure impact rates from 1 particle per 115 days (about 10 megaseconds) to 100 particles per second. Such data was used to help determine dust origin and dynamics within the magnetosphere. The DDS weighed and used an average of 5.4 watts of power. The Energetic Particles Detector (EPD) was designed to measure the numbers and energies of ions and electrons whose energies exceeded about . The EPD could also measure the direction of travel of such particles and, in the case of ions, could determine their composition (whether the ion is oxygen or sulfur, for example). The EPD used silicon solid-state detectors and a time-of-flight detector system to measure changes in the energetic particle population at Jupiter as a function of position and time. These measurements helped determine how the particles got their energy and how they were transported through Jupiter's magnetosphere. The EPD weighed and used 10.1 watts of power on average. The HIC was, in effect, a repackaged and updated version of some parts of the flight spare of the "Voyager" Cosmic Ray System. The HIC detected heavy ions using stacks of single-crystal silicon wafers, and could measure heavy ions with energies as low as and as high as per nucleon. This range included all elements between carbon and nickel. The HIC and the EUV shared a communications link and, therefore, had to share observing time. The HIC weighed and used an average of 2.8 watts of power. The magnetometer (MAG) used two sets of three sensors, allowing the three orthogonal components of the magnetic field to be measured. One set was located at the end of the magnetometer boom and, in that position, was about from the spin axis of the spacecraft. The second set, designed to detect stronger fields, was from the spin axis. The boom was used to remove the MAG from the immediate vicinity of "Galileo" to minimize magnetic effects from the spacecraft. However, not all of these effects could be eliminated by distancing the instrument; the rotation of the spacecraft was used to separate natural magnetic fields from engineering-induced fields. Another source of potential error in measurement came from the bending and twisting of the long magnetometer boom. To account for these motions, a calibration coil was mounted rigidly on the spacecraft to generate a reference magnetic field during calibrations. The magnetic field at the surface of the Earth has a strength of about 50,000 nT.
At Jupiter, the outboard (11 m) set of sensors could measure magnetic field strengths in the range from ±32 to ±512 nT, while the inboard (6.7 m) set was active in the range from ±512 to ±16,384 nT. The MAG experiment weighed and used 3.9 watts of power. The PLS used seven fields of view to collect charged particles for energy and mass analysis. These fields of view covered most angles from 0 to 180 degrees, fanning out from the spin axis, and the rotation of the spacecraft carried each field of view through a full circle. The PLS measured particles in the energy range from . The PLS weighed and used an average of 10.7 watts of power. An electric dipole antenna was used to study the electric fields of plasmas, while two search-coil magnetic antennas studied the magnetic fields. The electric dipole antenna was mounted at the tip of the magnetometer boom, while the search-coil magnetic antennas were mounted on the high-gain antenna feed. Nearly simultaneous measurements of the electric and magnetic field spectrum allowed electrostatic waves to be distinguished from electromagnetic waves. The PWS weighed and used an average of 9.8 watts. The "Galileo" Probe was an atmospheric-entry probe carried by the main "Galileo" spacecraft on its way to Jupiter. It separated from the main spacecraft on July 13, 1995, five months before its rendezvous with the planet on December 7. After a rough deceleration, the Descent Module started to return data to the main spacecraft orbiting high above Jupiter. The probe was built by Hughes Aircraft Company at its El Segundo, California plant and measured about across. Inside the probe's heat shield, the Descent Module with its scientific instruments was protected from extreme heat and pressure during its high-speed journey into the Jovian atmosphere, entering at . During its 57 minutes of data collecting, the "Galileo" Probe returned data on Jupiter's atmospheric conditions and composition and achieved some new discoveries. After arriving on December 8, 1995 (UTC), and completing 35 orbits around Jupiter over a nearly eight-year mission, the "Galileo" orbiter was destroyed during a controlled impact with Jupiter on September 21, 2003. In the intervening time, "Galileo" changed the way scientists saw Jupiter and provided a wealth of information on the moons orbiting the planet that will be studied for years to come. The top orbiter scientific results, culled from NASA's press kit, are the findings summarized in the introduction above. The astronomer Carl Sagan, pondering the question of whether life on Earth could be easily detected from space, devised a set of experiments in the late 1980s using "Galileo"'s remote sensing instruments during the mission's first Earth flyby in December 1990. After data acquisition and processing, Sagan "et al." published a paper in "Nature" in 1993 detailing the results of the experiment. "Galileo" had indeed found what are now referred to as the "Sagan criteria for life". These included strong absorption of light at the red end of the visible spectrum (especially over continents), caused by absorption by chlorophyll in photosynthesizing plants; absorption bands of molecular oxygen, also a result of plant activity; infrared absorption bands caused by the ~1 micromole per mole (µmol/mol) of methane in Earth's atmosphere (a gas which must be replenished by either volcanic or biological activity); and modulated narrowband radio wave transmissions uncharacteristic of any known natural source.
"Galileo" experiments were thus the first ever controls in the newborn science of astrobiological remote sensing. In December 1992, during "Galileo" second gravity-assist planetary flyby of Earth, another groundbreaking experiment was performed. Optical communications in space were assessed by detecting light pulses from powerful lasers with "Galileo" CCD. The experiment, dubbed "Galileo" Optical Experiment or GOPEX, used two separate sites to beam laser pulses to the spacecraft, one at Table Mountain Observatory in California and the other at the Starfire Optical Range in New Mexico. The Table Mountain site used a frequency doubled neodymium-yttrium-aluminium garnet () laser operating at 532 nm with a repetition rate of ~15 to 30 Hz and a pulse power (FWHM) in the tens of megawatts range, which was coupled to a Cassegrain telescope for transmission to "Galileo". The Starfire range site used a similar setup with a larger, , transmitting telescope. Long exposure (~0.1 to 0.8 s) images using "Galileo" 560 nm centered green filter produced images of Earth clearly showing the laser pulses even at distances of up to . Adverse weather conditions, restrictions placed on laser transmissions by the U.S. Space Defense Operations Center (SPADOC) and a pointing error caused by the scan platform acceleration on the spacecraft being slower than expected (which prevented laser detection on all frames with less than 400 ms exposure times) all contributed to the reduction of the number of successful detections of the laser transmission to 48 of the total 159 frames taken. Nonetheless, the experiment was considered a resounding success and the data acquired will likely be used in the future to design laser "downlinks" which will send large volumes of data very quickly from spacecraft to Earth. The scheme was already being studied (as of 2004) for a data link to a future Mars orbiting spacecraft. "Galileo" star scanner was a small optical telescope that provided an absolute attitude reference. It also made several scientific discoveries serendipitously. In the prime mission, it was found that the star scanner was able to detect high-energy particles as a noise signal. This data was eventually calibrated to show the particles were predominantly > electrons that were trapped in the Jovian magnetic belts, and released to the Planetary Data System. A second discovery occurred in 2000. The star scanner was observing a set of stars which included the second magnitude star Delta Velorum. At one point, this star dimmed for 8 hours below the star scanner's detection threshold. Subsequent analysis of "Galileo" data and work by amateur and professional astronomers showed that Delta Velorum is the brightest known eclipsing binary, brighter at maximum than even Algol. It has a primary period of 45 days and the dimming is just visible with the naked eye. A final discovery occurred during the last two orbits of the mission. When the spacecraft passed the orbit of Jupiter's moon Amalthea, the star scanner detected unexpected flashes of light that were reflections from moonlets. None of the individual moonlets were reliably sighted twice, hence no orbits were determined and the moonlets did not meet the International Astronomical Union requirements to receive designations. It is believed that these moonlets most likely are debris ejected from Amalthea and form a tenuous, and perhaps temporary, ring. 
On October 29, 1991, two months after entering the asteroid belt, "Galileo" performed the first asteroid encounter by a spacecraft, passing approximately from 951 Gaspra at a relative speed of about . Several pictures of Gaspra were taken, along with measurements using the NIMS instrument to indicate composition and physical properties. The last two images were relayed back to Earth in November 1991 and June 1992. The imagery revealed a cratered and very irregular body, measuring about . The remainder of the data taken, including low-resolution images of more of the surface, was transmitted in late November 1992. On August 28, 1993, "Galileo" flew within of the asteroid 243 Ida. The probe discovered that Ida had a small moon, dubbed Dactyl, measuring around in diameter; this was the first asteroid moon discovered. Measurements were taken using "Galileo"'s Solid State Imager, magnetometer, and NIMS instrument. From subsequent analysis of these data, Dactyl appears to be an SII-subtype S-type asteroid, spectrally different from 243 Ida. It is hypothesized that Dactyl may have been produced by partial melting within a Koronis parent body, while the 243 Ida region escaped such igneous processing. Some of the mission challenges that had to be overcome included intense radiation at Jupiter and hardware wear-and-tear, as well as dealing with unexpected technical difficulties. Jupiter's uniquely harsh radiation environment caused over 20 anomalies over the course of "Galileo"'s mission, in addition to the incidents expanded upon below. Despite having exceeded its radiation design limit by at least a factor of three, the spacecraft survived all these anomalies: workarounds were eventually found for all of these problems, and "Galileo" was never rendered entirely non-functional by Jupiter's radiation. The radiation limits for "Galileo"'s computers were based on data returned from "Pioneer 10" and "11", since much of the design work was underway before the two "Voyagers" arrived at Jupiter in 1979. A typical effect of the radiation was that several of the science instruments suffered increased noise while within about of Jupiter. The SSI camera began producing totally white images when the spacecraft was hit by the exceptional "Bastille Day" coronal mass ejection in 2000, and did so again on subsequent close approaches to Jupiter. The quartz crystal used as the frequency reference for the radio suffered permanent frequency shifts with each Jupiter approach. A spin detector failed, and the spacecraft's gyro output was biased by the radiation environment. The most severe effect of the radiation was current leakage somewhere in the spacecraft's power bus, most likely across brushes at a spin bearing connecting the rotor and stator sections of the orbiter. These current leakages triggered a reset of the onboard computer and caused it to go into safe mode. The resets occurred when the spacecraft was either close to Jupiter or in the region of space magnetically downstream of Jupiter. A change to the software was made in April 1999 that allowed the onboard computer to detect these resets and autonomously recover, so as to avoid safe mode. "Galileo"'s high-gain antenna failed to fully deploy after its first flyby of Earth. The antenna had 18 ribs, like an umbrella; when the driver motor started and put pressure on the ribs, they were supposed to pop out of the cup their tips were held in. Only 15 popped out, leaving the antenna looking like a lop-sided, half-open umbrella.
Investigators concluded that during the 4.5 years that "Galileo" spent in storage after the 1986 "Challenger" disaster, the lubricants between the tips of the ribs and the cup were eroded and worn away by vibration during three cross-country journeys by truck between California and Florida. The failed ribs were those closest to the flat-bed trailers carrying "Galileo" on these trips, which were used instead of air transport to cut costs. The antenna lubricants were not checked or replaced before launch. To fix this malfunction, engineers tried thermal-cycling the antenna, rotating the spacecraft up to its maximum spin rate of 10.5 rpm, and "hammering" the antenna deployment motor—turning it on and off repeatedly—over 13,000 times, but all attempts failed to open the high-gain antenna. Mission managers also faced an associated problem: if one stuck rib popped free, there would be increased pressure on the remaining two, and if one of those then popped out, the last would be under so much pressure that it would never release. The second part of the problem was due to "Galileo"'s revised flight plan. The spacecraft had never been intended to approach the Sun any closer than the orbit of Earth, but sending it to Venus would expose it to temperatures at least 50 degrees higher than at Earth distance. The spacecraft therefore had to be protected from that extra heat, which in part involved adapting some of the computer functions. Forty-one device drivers had been programmed into the computer; with no room for any more, the mission planners had to decide which driver to give up to make room for the heat protection. They chose the antenna motor reverse driver. Fortunately, "Galileo" possessed an additional low-gain antenna that was capable of transmitting information back to Earth, although, since it transmitted its signal isotropically, the low-gain antenna's bandwidth was significantly less than what the high-gain antenna's would have been; the high-gain antenna was to have transmitted at 134 kilobits per second, whereas the low-gain antenna was only intended to transmit at about 8 to 16 bits per second. "Galileo"'s low-gain antenna transmitted with a power of about 15 to 20 watts, which, by the time it reached Earth and had been collected by one of the large-aperture (70 m) NASA Deep Space Network antennas, amounted to a total power of about −170 dBm, or 10 zeptowatts (10^−20 watts). Through the implementation of sophisticated technologies, the arraying of several Deep Space Network antennas, and sensitivity upgrades to the receivers used to listen to "Galileo"'s signal, data throughput was increased to a maximum of 160 bits per second. By further using data compression, the effective data rate could be raised to 1,000 bits per second (a short sketch reproducing these link-budget figures follows below). The data collected on Jupiter and its moons were stored in the spacecraft's onboard tape recorder and transmitted back to Earth during the long apoapsis portion of the probe's orbit using the low-gain antenna; at the same time, measurements were made of Jupiter's magnetosphere and transmitted back to Earth. The reduction in available bandwidth reduced the total amount of data transmitted throughout the mission, although 70% of "Galileo"'s science goals could still be met.
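The received-power figure above follows directly from the standard dBm conversion, P = 10^((dBm − 30)/10) watts, and the data-rate figures put the loss in perspective. A minimal sketch reproducing both from the numbers quoted in this section:

```python
# Received signal strength quoted above, converted from dBm to watts.
dbm = -170.0
watts = 10 ** ((dbm - 30) / 10)
print(f"received power: {watts:.0e} W")   # 1e-20 W, i.e. 10 zeptowatts

# Data rates from the same paragraph, in bits per second.
high_gain_design   = 134_000   # what the high-gain antenna would have provided
low_gain_initial   = 16        # upper end of the 8-16 bit/s initially expected
after_dsn_arraying = 160       # after antenna arraying and receiver upgrades
with_compression   = 1_000     # effective rate once data compression is added

print(f"upgrades over initial low-gain rate: {after_dsn_arraying / low_gain_initial:.0f}x")
print(f"still short of the design rate by:   {high_gain_design / with_compression:.0f}x")
```

Even after arraying and compression, the effective rate fell short of the high-gain design figure by more than two orders of magnitude, which is why the tape recorder discussed next was so critical.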
The failure of "Galileo"'s high-gain antenna meant that data storage to the tape recorder for later compression and playback was absolutely crucial in order to obtain any substantial information from the flybys of Jupiter and its moons. In October 1995, "Galileo"'s four-track, 114-megabyte digital tape recorder, which was manufactured by Odetics Corporation, remained stuck in rewind mode for 15 hours before engineers learned what had happened and were able to send commands to shut it off. Though the recorder itself was still in working order, the malfunction had possibly damaged a length of tape at the end of the reel. This section of tape was subsequently declared "off limits" to any future data recording and was covered with 25 more turns of tape to secure the section and reduce any further stresses, which could tear it. Because it happened only weeks before "Galileo" entered orbit around Jupiter, the anomaly prompted engineers to sacrifice data acquisition from almost all of the Io and Europa observations during the orbit insertion phase, in order to focus solely on recording data sent from the Jupiter probe descent. In November 2002, after the completion of the mission's only encounter with Jupiter's moon Amalthea, problems with playback of the tape recorder again plagued "Galileo". About 10 minutes after the closest approach of the Amalthea flyby, "Galileo" stopped collecting data, shut down all of its instruments, and went into safe mode, apparently as a result of exposure to Jupiter's intense radiation environment. Though most of the Amalthea data had already been written to tape, it was found that the recorder refused to respond to commands telling it to play back data. After weeks of troubleshooting on an identical flight spare of the recorder on the ground, it was determined that the cause of the malfunction was a reduction of light output in three infrared Optek OP133 light-emitting diodes (LEDs) located in the drive electronics of the recorder's motor encoder wheel. The GaAs LEDs had been particularly sensitive to proton-irradiation-induced atomic lattice displacement defects, which greatly decreased their effective light output and caused the drive motor's electronics to falsely believe the motor encoder wheel was incorrectly positioned. "Galileo"'s flight team then began a series of "annealing" sessions, in which current was passed through the LEDs for hours at a time to heat them to a point where some of the crystalline lattice defects would be shifted back into place, thus increasing the LEDs' light output. After about 100 hours of annealing and playback cycles, the recorder was able to operate for up to an hour at a time, and after many subsequent playback and cooling cycles, the complete transmission back to Earth of all the recorded Amalthea flyby data was successful. In a separate anomaly, the atmospheric probe had deployed its parachute fifty-three seconds later than anticipated, resulting in a small loss of upper-atmospheric readings; this was attributed to wiring problems with an accelerometer that determined when to begin the parachute deployment sequence. Years of Jupiter's intense radiation took their toll on the spacecraft's systems, and its fuel supply was running low in the early 2000s. "Galileo" had not been sterilized prior to launch and could have carried bacteria from Earth, so a plan was formulated to send the probe directly into Jupiter in an intentional crash, eliminating the possibility of an impact with Jupiter's moons and preventing forward contamination. "Galileo" flew by Amalthea on November 5, 2002, during its 34th orbit, allowing a measurement of the moon's mass as it passed within of its surface.
On April 14, 2003, "Galileo" reached its greatest orbital distance from Jupiter for the entire mission since orbital insertion, , before plunging back towards the gas giant for its final impact. At the completion of its 35th and final circuit around the Jovian system, "Galileo" impacted the gas giant in darkness just south of the equator on September 21, 2003, at 18:57 UTC. Its impact speed was approximately . The total mission cost was about . While "Galileo" was operating, "Cassini–Huygens" coasted by the planet in 2000 en route to Saturn, and it also collected data on Jupiter. "Ulysses" passed by Jupiter in 1992 and 2004 on its mission to study the Sun's polar regions, and "New Horizons" passed close by Jupiter in 2007 for a gravity assist en route to Pluto, likewise collecting data on the planet. The next mission to orbit Jupiter was NASA's "Juno" spacecraft, launched in 2011 and planned for a two-year tour of the Jovian system, which successfully completed Jupiter orbital insertion on July 4, 2016. There was a spare "Galileo" spacecraft that was considered by the NASA–ESA Outer Planets Study Team in 1983 for a mission to Saturn, but it was passed over in favor of a newer design, which became "Cassini–Huygens". Even before "Galileo" concluded, NASA considered the Europa Orbiter, a mission to Jupiter's moon Europa, but it was canceled in 2002. Following that cancellation, a lower-cost version was studied, which led to the "Europa Clipper" being approved in 2015; it is currently planned for launch in the mid-2020s. A lander concept, simply called Europa Lander, is being assessed by the Jet Propulsion Laboratory; as of 2019, this lander mission to Europa remains a concept, and some funds have been released for instrument development and maturation. ESA is also planning to return to the Jovian system with the Jupiter Icy Moons Explorer (JUICE), which is designed to orbit Ganymede in the 2020s. There have been several other proposed missions dedicated to, or including, the Jupiter system that did not make it out of the planning stages. Investigators and researchers on the mission came from many institutions and included famous scientists such as Carl Sagan and James Van Allen.
https://en.wikipedia.org/wiki?curid=13077
Glottis The glottis is the opening between the vocal folds (the rima glottidis). As the vocal folds vibrate, the resulting vibration produces a "buzzing" quality in the speech, called voice or voicing or phonation. Sound production that involves moving the vocal folds close together is called "glottal". English has a voiceless glottal transition spelled "h". This sound is produced by keeping the vocal folds spread somewhat, resulting in non-turbulent airflow through the glottis. In many accents of English the glottal stop (made by pressing the folds together) is used as a variant allophone of the phoneme (and in some dialects, occasionally of and ); in some languages, this sound is a phoneme of its own. The vibration produced is an essential component of "voiced" consonants as well as vowels; if the vocal folds are drawn apart, air flows between them causing no vibration, as in the production of voiceless consonants. Skilled players of the Australian didgeridoo restrict their glottal opening in order to produce the full range of timbres available on the instrument. The glottis is also important in the Valsalva maneuver.
https://en.wikipedia.org/wiki?curid=13079
Geneva College Geneva College is a Christian liberal arts college in Beaver Falls, Pennsylvania. Founded in 1848 in Northwood, Ohio, the college moved to its present location in 1880, where it continues to educate a student body of about 1,400 traditional undergraduates in over 30 majors, as well as graduate students in a handful of master's programs. The only undergraduate institution affiliated with the Reformed Presbyterian Church of North America (RPCNA), the college has an undergraduate core curriculum that emphasizes the humanities and the formation of a Reformed Christian worldview. Geneva College was founded in 1848 in Northwood, Ohio, by John Black Johnston, a minister of the RPCNA. The college was founded as "Geneva Hall" and was named after Geneva, the Swiss center of the Reformed movement. After briefly closing during the American Civil War, the college continued operating in Northwood until 1880. By that time, the college leadership had begun a search for alternative locations closer to urban areas. After considering several locations in the Midwest, the denomination chose the College Hill neighborhood of Beaver Falls, Pennsylvania. The college constructed its current campus on land donated by the Harmony Society. Old Main, the oldest building on campus, was completed in 1881; the Rapp Technical Design Center was completed in 2002. A major project to reroute Pennsylvania Route 18, which runs through the campus, was completed in November 2007, and improvements to Reeves Stadium and the construction of a campus entrance and pedestrian mall were completed in time for the fall semester in 2009. Two bodies oversee the administration of the college, the Board of Corporators and the Board of Trustees; while the Corporators are the official legal owners of the college, in practice most authority is delegated to the Trustees, who are elected by the Corporators. Both Boards drafted the philosophical basis on which the college rests, known as the Foundational Concepts of Higher Education. The RPCNA still takes an active sponsorship and oversight role in the college: the college president, chaplain, and chairman of the Department of Biblical Studies must be members of the RPCNA, and all members of the Board of Corporators and the majority of the Board of Trustees must be RPCNA members. All professors and lecturers in the Department of Biblical Studies must subscribe to the Westminster Confession of Faith, and all full-time faculty and staff members must submit a written statement confessing faith in Jesus Christ and the Christian religion. Geneva offers undergraduate degree programs in the arts and sciences, such as elementary education, business, engineering, student ministry, biology, and psychology. In 2006, the Educational Testing Service (ETS) rated the Business and Accounting undergraduates in the 95th percentile amongst American colleges. Geneva offers a Degree Completion Program (DCP) for degrees in Human Resource Management, Community Ministry or Organizational Development for adult students, mainly at off-campus locations. Geneva also established the Center for Urban Theological Studies in Philadelphia and has sister colleges in Taiwan (Christ College) and South Korea (Chong Shin College and Theological Seminary). Geneva also offers graduate studies in several fields, including a Master of Business Administration, a Master of Science in Organizational Leadership, Master of Education degrees in Reading or Special Education, and Master of Arts degrees in Counseling or Higher Education.
Geneva established the Center for Technology Development in 1986 to provide research, prototyping and technical support to local industries and entrepreneurs. The Center was awarded first prize in the Consolidated Natural Gas Company's Annual Award of Excellence competition in 1990. Geneva College is a member institution of the Council for Christian Colleges and Universities, the Council of Independent Colleges, and the National Association of Independent Colleges and Universities. Accreditations include the Commission on Higher Education of the Middle States Association of Colleges and Schools, the Accreditation Board for Engineering and Technology, the Association of Collegiate Business Schools and Programs, the American Chemical Society and the Council for Accreditation of Counseling and Related Educational Programs. Geneva's sports teams are called the Golden Tornadoes. The college is a dual member of the National Collegiate Athletic Association (NCAA) Division III and the National Christian College Athletic Association (NCCAA) Division I, and the Golden Tornadoes compete as a member of the Presidents' Athletic Conference. Geneva was a member of the National Association of Intercollegiate Athletics (NAIA) for many years and competed in the now-defunct American Mideast Conference. Geneva joined the NCAA as a provisional member in 2007 and, during the transition process, was not eligible for postseason play or conference Player of the Week honors until gaining membership in July 2011. The school offers a range of men's and women's varsity sports, including football, baseball, softball, basketball, volleyball, track and field, cross country, tennis, and soccer. Geneva has also offered rugby as a club sport since 1994. Football competition began in 1890 under head coach William McCracken, and over the years the football team has amassed an all-time record of 496 wins, 437 losses, and 48 ties, with five appearances in the Victory Bowl. The current football coach is Geno DeMarco. Students must attend a designated number of weekly college-sponsored chapels to qualify for graduation. Alcohol is banned from the campus, tobacco use is prohibited across the entire campus, and Greek-letter fraternities and sororities are not permitted. One of the earliest college basketball games in the United States occurred at Geneva College on April 8, 1893, when the Geneva College Covenanters defeated the New Brighton YMCA. Geneva commemorates this event through the athletic slogan of "The Birthplace of College Basketball", and it has one of the oldest basketball courts in collegiate sports in the Johnson Gymnasium. Geneva was founded by Scottish and Scots-Irish immigrants, and many campus buildings and areas bear Scottish names. Geneva sports teams were nicknamed the "Covenanters" until the 1950s; members of the RPCNA are sometimes referred to as Covenanters because the denomination traces its roots to the Covenanting tradition of Reformation-era Scotland. The modern sports nickname of "Golden Tornadoes" commemorates the "Golden Tornado" of May 11, 1914, when a major tornado struck the college, most notably taking the gold-colored roof from the top of Old Main, the origin of the associated color. Although the storm caused significant damage to the campus, there were no serious injuries, and college students and faculty rejoiced at what they believed was a sign of God's mercy. Geneva's traditional sports rivalry is with Westminster College in nearby New Wilmington, Pennsylvania.
Full-time undergraduate students between the ages of 17 and 23 are required to live in college housing, with the exception of commuters and some seniors. Six dormitories — Clarke, Geneva Arms, McKee, Memorial, Pearce, and Young — house resident students. Geneva Arms and Young are apartment-style options divided into men's and women's wings, and the college also operates six smaller houses, primarily for upperclassmen. Several other structures are owned by the college but are not currently used for any activities or events. On December 15, 2006, the college filed a federal lawsuit against the Commonwealth of Pennsylvania, alleging that a decision by the state to block the college from participating in the state-sponsored CareerLink job service amounted to a violation of the college's First Amendment rights. Although the state argued that the college's requirement that faculty and staff members subscribe to the Christian religion amounted to discrimination, the lawsuit was settled: Geneva's right of access to CareerLink was restored, and the college retains a statement on its employment applications stating "Compliance with Geneva's Christian views is considered a bona fide occupational qualification ... and will have a direct impact on employment consideration." In 2012, the college sued the federal government over the Patient Protection and Affordable Care Act ("Obamacare") contraceptive mandate, which requires employers to provide health insurance coverage for their employees that includes contraception; Geneva College objects on religious grounds to methods it "considers abortion, abortifacients and embryo-harming pharmaceuticals". The college, represented by Alliance Defending Freedom in the litigation, prevailed in its case, obtaining a permanent injunction in 2018.
https://en.wikipedia.org/wiki?curid=13082
Gorillaz Gorillaz are a British virtual band created in 1998 by musician Damon Albarn and artist Jamie Hewlett. The band primarily consists of four animated members: Stuart "2-D" Pot, Murdoc Niccals, Noodle, and Russel Hobbs. Their fictional universe is presented in music videos, interviews and short cartoons. In reality, Albarn is the only permanent musical contributor, and he often collaborates with other musicians. Remi Kabaka Jr. became producer for the band in 2016 after several years providing the voice of Russel Hobbs, and was listed as an official member alongside Albarn and Hewlett in the 2019 Gorillaz documentary "Gorillaz: Reject False Icons". With Gorillaz, Albarn departed from the distinct Britpop of his band Blur, exploring hip hop, electronic music, and world music with an "eccentrically postmodern" approach. The band's 2001 debut album "Gorillaz" went triple platinum in the UK and double platinum in Europe, and earned the group an entry in the "Guinness Book of World Records" as the "Most Successful Virtual Band". It was nominated for the Mercury Prize, but the nomination was withdrawn at the band's request. Their second studio album, "Demon Days" (2005), went six times platinum in the UK and double platinum in the US. The third album, "Plastic Beach", was released on March 3, 2010; their fourth, "The Fall", on April 18, 2011; the fifth, "Humanz", after a six-year hiatus on April 28, 2017; and the sixth, "The Now Now", on June 29, 2018. Gorillaz are currently working on a project named Song Machine, which releases new songs every month. Gorillaz have won a Grammy Award, two MTV Video Music Awards, an NME Award and three MTV Europe Music Awards; they have also been nominated for 10 Brit Awards and won Best British Group at the 2018 Brit Awards. By 2010, Gorillaz had sold over 20 million records worldwide. Musician Damon Albarn and comic creator Jamie Hewlett met in 1990, when guitarist Graham Coxon, a fan of Hewlett's work, asked him to interview Blur, a band Albarn and Coxon had recently formed. The interview was published in "Deadline" magazine, home of Hewlett's comic strip "Tank Girl". Hewlett initially thought Albarn was "arsey, a wanker"; despite becoming acquainted with the band, the two often did not get on, especially after Hewlett began seeing Coxon's ex-girlfriend Jane Olliver. Despite this, Albarn and Hewlett started sharing a flat on Westbourne Grove in London in 1997. Hewlett had recently broken up with Olliver, and Albarn was at the end of his highly publicised relationship with Justine Frischmann of Elastica. The idea to create Gorillaz came about when Albarn and Hewlett were watching MTV. Hewlett said: "If you watch MTV for too long, it's a bit like hell – there's nothing of substance there. So we got this idea for a virtual band, something that would be a comment on that." The band originally identified themselves as "Gorilla", and the first song they recorded was "Ghost Train", later released as a B-side on their single "Rock the House" and on the B-side compilation "G Sides". The musicians behind Gorillaz' first incarnation included Albarn, Del the Funky Homosapien, Dan the Automator and Kid Koala, who had previously worked together on the track "Time Keeps on Slipping" for Deltron 3030's eponymous debut album. Although it was not released under the Gorillaz name, Albarn has said that Blur's 1997 single "On Your Own", from their fifth studio album "Blur", was "one of the first ever Gorillaz tunes".
The band's first release was the EP "Tomorrow Comes Today", released on 27 November 2000. The band's first single, "Clint Eastwood", was released on 5 March 2001. It was produced by hip hop producer Dan the Automator and originally featured UK rap group Phi Life Cypher, but the version that appears on the album features American rapper Del the Funky Homosapien, known on the album as "Del tha' Ghost Rapper", a spirit in the band's drummer Russel Hobbs. The Phi Life Cypher version of "Clint Eastwood" appears on the B-side album "G Sides". On 26 March 2001, their first full-length album, the self-titled "Gorillaz", was released, producing four singles: "Clint Eastwood", "19-2000", "Rock the House", and "Tomorrow Comes Today". In June 2001, a remix of "19-2000" was used as the title theme for the EA Sports video game "FIFA Football 2002". On 7 December, the song "911" was released, a collaboration between Gorillaz, the hip hop group D12 (without Eminem) and Terry Hall, about the September 11 attacks. Meanwhile, "G Sides", a compilation of the B-sides from the "Tomorrow Comes Today" EP and first three singles, was released in Japan on 12 December. Gorillaz performed at the 2002 Brit Awards in London on 22 February, appearing in 3D animation on four large screens along with rap accompaniment by Phi Life Cypher. The band were nominated for four Brit Awards, including Best British Group, Best British Album and British Breakthrough Act, but did not win any awards. On 1 July 2002, a remix album titled "Laika Come Home" by Spacemonkeyz vs. Gorillaz was released. It contains most of the songs from Gorillaz' first album, "Gorillaz", remixed in dub and reggae style. On 18 November, a DVD titled "Phase One: Celebrity Take Down" was released. The DVD contains four promotional videos, the abandoned video for "5/4", the "Charts of Darkness" documentary, the five Gorilla Bitez (comedic shorts starring the virtual characters), a tour of the website by the MEL 9000 server and more. The DVD's menu was designed much like the band's website and depicts an abandoned Kong Studios. Rumours were circulating at this time that the Gorillaz team were busy preparing a film, but Hewlett said that the film project had been abandoned: "We lost all interest in doing it as soon as we started meeting with studios and talking to these Hollywood executive types, we just weren't on the same page. We said, fuck it, we'll sit on the idea until we can do it ourselves, and maybe even raise the money ourselves." The album "Demon Days" was released on 11 May 2005 and debuted at No. 1 on the UK Albums Chart. Its first two singles were "Feel Good Inc." (featuring De La Soul) and "Dare". The third single was "Dirty Harry", which had been released as a promotional single earlier that year; it was released in the United Kingdom on 21 November. The fourth and final single was a double A-side, "Kids with Guns" / "El Mañana", released in the UK on 10 April 2006. By December 2005, "Demon Days" had sold over a million copies in the UK, making it the UK's fifth best selling album of 2005. "Demon Days" has since gone six times platinum in the UK, double platinum in the United States, triple platinum in Australia and has sold over 8 million copies worldwide. At the 2005 MTV Video Music Awards in Miami on 28 August, Gorillaz won two awards for "Feel Good Inc." featuring De La Soul, including the award for Breakthrough Video. Gorillaz performed "Dirty Harry" at the 2006 Brit Awards in London, and the band were nominated for Best British Group and Best British Album ("Demon Days"). 
Plans were unveiled for Gorillaz to go on a "holographic" world tour in 2007 and 2008. The virtual members would be shown on stage using Musion Eyeliner technology, giving them a lifelike appearance. The virtual characters were first used at the 2005 MTV Europe Music Awards on 3 November, and again, with the addition of Madonna, at the 2006 Grammy Awards on 8 February 2006, where the band played a pre-recorded version of "Feel Good Inc." Between 16 October and 2 November, a set of Gorillaz figures was released by Kidrobot to coincide with the release of "Demon Days". Two variations of the set were released, known as the Red and Black editions, and a limited edition Noodle figure from the music video for "Dare" was also released. Three new sets of Gorillaz vinyl figures were released in 2006: the basic set, limited to 60,000; the two-tone set, limited to 1,000; and the white edition, limited to 4,000. On 26 October, the Gorillaz autobiography "Rise of the Ogre", published by Riverhead Books, was released in the UK, followed by a US release on 2 November 2006. On 30 October, the "Phase Two: Slowboat to Hades" DVD was released in the UK. It contains most of the material released by Gorillaz from 2004 to 2006. Also included are the Gorillaz' "MTV Cribs" episode, the Phase Two Gorilla Bitez, a new Kong Studios guide, a gallery, and short interviews. On 2 June 2006, hopes for a Gorillaz film were revived when Hewlett stated that they would be producing the film on their own. Film producer and The Weinstein Company co-chairman Harvey Weinstein was said to be in talks with Albarn and Hewlett. In a September 2006 interview with "Uncut" magazine, Albarn said that the band "has been a fantastic journey which isn't over, because we're making a film. We've got Terry Gilliam involved. But as far as being in a big band and putting pop music out there, it's finished. We won't be doing that any more." On 19 November 2007, a compilation album titled "D-Sides" was released. It contains B-sides and remixes from singles and bonus tracks for the band's second studio album "Demon Days", as well as previously unreleased tracks recorded during the same sessions. Only one video had been released from these tracks: "Rockit" (or "Rock It"), on 8 December 2004. "Bananaz", a documentary film directed by Ceri Levy, documents the previous seven years of the band. The film was released online on the Babelgum website on 20 April 2009, followed by a PAL/Region 0 DVD release on 1 June 2009. In late 2007, Albarn and Hewlett began working on "Carousel", a new Gorillaz project which eventually evolved into the band's third studio album "Plastic Beach". Albarn said, "I'm making this the biggest and most pop record I've ever made in many ways, but with all my experience to try and at least present something that has got depth." The album features guest performances by Snoop Dogg, Lou Reed, Mos Def, Bobby Womack, Gruff Rhys, Mark E. Smith, Mick Jones, Paul Simonon, Kano, Bashy, De La Soul, Little Dragon, Hypnotic Brass Ensemble, sinfonia ViVA, and the Lebanese National Orchestra for Oriental Arabic Music. On 18 January 2010, it was announced that Gorillaz would be headlining the final night of the Coachella Valley Music and Arts Festival on 18 April 2010. The first single from the album, "Stylo", featuring Bobby Womack and Mos Def, was made available for download on 26 January 2010. 
In October 2010, Albarn announced to the media that he would not let the cast of "Glee" cover the band's songs, claiming that the music on the Fox network's TV show is a "very poor substitute for the real thing". This statement led most people to believe that Gorillaz had been asked by "Glee" producers to lend their music to the show, which was not the case. Albarn responded to the confusion with a laugh and said, "and now they definitely won't." On 5 October 2010, Gorillaz announced their new single "Doncamatic", featuring Daley. On 8 December 2010, Albarn confirmed that a Gorillaz album recorded on the American leg of the Escape to Plastic Beach tour would be released as a free download exclusively for paying fan club members from the Gorillaz website on Christmas Day, 25 December 2010. The video for "Phoner to Arizona" was released on Gorillaz' website for free on 24 December and, a day later, the new album, entitled "The Fall", was released. On 18 April 2011, Gorillaz announced the release of their own version of Korg's iPad app iElectribe, which features loops and samples taken from "The Fall" as well as other samples. The new version features a Gorillaz-designed and styled interface; it is customised to generate Gorillaz samples from their album "The Fall" and includes 128 new sounds created by the band and 64 ready-to-use pre-programmed patterns from Gorillaz, Stephen Sedgwick (Gorillaz' engineer) and Korg. The app was based on Korg's Electribe·R. On 5 October 2011, Gorillaz released their first greatest hits compilation, "The Singles Collection 2001–2011". On 9 February 2012, Gorillaz announced "DoYaThing", a single to promote the Gorillaz-branded Converse shoes that were soon to be released. The song was part of Converse's "Three Artists, One Song" project, with the two collaborators being James Murphy of LCD Soundsystem and André 3000 of Outkast. An explicit, 13-minute-long version of the song became available for listening shortly after on Gorillaz.com. Hewlett returned to direct the single's music video, featuring animated versions of the two collaborators on the track. In April 2012, Albarn told "The Guardian" that he and Hewlett had fallen out and that future Gorillaz projects were "unlikely". Tension between the two had been building, partly due to a belief held by Hewlett that his contributions to Gorillaz were being diminished. Speaking to "The Guardian" in April 2017, Hewlett explained: "Damon had half the Clash on stage, and Bobby Womack and Mos Def and De La Soul, and fucking Hypnotic Brass Ensemble and Bashy and everyone else. It was the greatest band ever. And the screen on stage behind them seemed to get smaller every day. I'd say, 'Have we got a new screen?' and the tour manager was like, 'No, it's the same screen.' Because it seemed to me like it was getting smaller." On 25 April 2012, in an interview with Metro, Albarn was more optimistic about Gorillaz' future, saying that once he had worked out his differences with Hewlett, he was sure that they would make another record. On 24 June 2013, Hewlett stated that he and Albarn planned to someday record a follow-up to their 2010 album "Plastic Beach". In April 2014, Albarn told the "National Post" that he "wouldn't mind having another stab at a Gorillaz record." Two months later he reported that he had "been writing quite a lot of songs on the road for Gorillaz". On 19 October 2014, Albarn told "The Sydney Morning Herald" that he was planning to release new Gorillaz material in 2016. 
Albarn has described the music that he has written for the next Gorillaz album as very upbeat, humorous, and positive, stating that he plans on giving the tracks "a benchmark of 125 bpm and nothing underneath that", while also suggesting that it may once again feature many collaborations. On 16 July 2015, Albarn stated during an interview for ABC's "7.30" in Australia that he would begin work on the next Gorillaz album: "I'm starting recording in September for a new Gorillaz record, I've just been really, really busy so I haven't had a chance. I'd love to just get back in to that routine of being at home and coming to the studio five days a week." Speaking about his relationship with Hewlett, Albarn said that the pair's well-publicized falling-out had helped their relationship in the long term. In October 2015, Albarn revealed to "Rolling Stone" that he and Hewlett were working on a new Gorillaz album. In April 2016, Hewlett uploaded two video clips onto his Instagram showing the continued work on the album. The first clip featured Liam Bailey and the rumoured executive producer on the album, The Twilite Tone. The second clip was a time-lapse video featuring Albarn, Bailey, The Twilite Tone and Jean-Michel Jarre. On 17 May 2016, Gorillaz were in the studio with Chicago-based hip hop artist Vic Mensa. On 20 September 2016, Gorillaz began posting an abridged retrospective timeline of Gorillaz releases since 2000. On 3 October 2016, Gorillaz began posting to their social media profiles a series of interactive multimedia stories revolving around the fictional lives of each Gorillaz character since their hiatus, beginning with "The Book of Noodle" (she ended up in Japan and tracked down a demon crime boss), then "The Book of Russel" (he was still a giant from the storyline of "Plastic Beach", where he washed up on the shores of North Korea and starved so much he shrank back to normal size), then "The Book of Murdoc" (he was captured by the band's record label EMI at sea and told to make another album), and finishing with "The Book of 2-D" (he was swallowed by a whale named Massive Dick on Plastic Beach, and washed up on the shores of Mexico where, to survive, he had to eat the whale's blubber, but it turned out he was just at an empty part of Cabo San Lucas). On 8 October 2016, Noodle was given her own Instagram page and was announced as the Global Ambassador of Jaguar Racing. On 19 January 2017, a new song from the band entitled "Hallelujah Money", featuring Benjamin Clementine, was released. On 6 March, Gorillaz announced the launch of their own festival, called Demon Dayz Festival, which took place on 10 June 2017 at Dreamland Margate in Margate, Kent, England, with the band headlining. On 17 March 2017, the tracklist of the forthcoming album was leaked online, showing guest features from a variety of artists including usual collaborators De La Soul, as well as new collaborators such as Grace Jones, Vince Staples, Pusha T, Rag'n'Bone Man, Anthony Hamilton, Kilo Kish and Kali Uchis. On 23 March 2017, Gorillaz announced via Instagram that the new album would be entitled "Humanz", with a scheduled release date of 28 April 2017. On the official Gorillaz YouTube page, two new music videos were released for their track "Saturnz Barz", one of which was in 360° view. The track features vocals from Jamaican dancehall artist Popcaan. The band also released an art video for the track "Andromeda", featuring an animated planet in a galaxy. 
The track features the American rapper D.R.A.M. Two more art videos were released: "Ascension" (featuring American rapper Vince Staples) and "We Got the Power" (featuring Jehnny Beth of the English rock band Savages and Noel Gallagher of Oasis). On 6 April 2017, the fifth single from "Humanz", "Let Me Out" (featuring Mavis Staples and Pusha T), was released, followed by a performance of the song on "The Late Show with Stephen Colbert" on 27 April. On 10 April 2017, a Gorillaz-themed augmented reality app created in collaboration with Electronic Beats was released. On 24 April 2017, four days before the "Humanz" release date, another promotional single was uploaded, titled "The Apprentice". It is the only new song taken from the deluxe edition of "Humanz". On 28 April 2017, "Humanz" was released worldwide. On 8 June 2017, the non-album single "Sleeping Powder" was released, along with an accompanying music video. On 10 June 2017, the band headlined the Demon Dayz Festival in Margate, England. On 4 August 2017, the band released "Strobelite" as a single with an accompanying music video. On 31 October 2017, "Garage Palace" was released as a single from the "Super Deluxe" edition of "Humanz", which includes 14 additional songs and was released on 3 November 2017. In December 2017, the band released a "Humanz"-themed, in-universe magazine called G Magazine. On 21 February 2018, the band received the Brit Award for British Group for their work on "Humanz". During their acceptance speech, a short video of Murdoc was played, showing him being imprisoned for an unknown reason. In an interview with "Q Magazine" in September 2017, Albarn hinted at another potential Gorillaz album being in production. He mentioned enjoying the spontaneity of recording and debuting music while on tour, similarly to the band's 2010 release "The Fall", but expressed a desire to make a comparatively more "complete" record, adding that "If we're going to do more with Gorillaz we don't want to wait seven years because, y'know, we're getting on a bit now". At the end of that month, on 30 September 2017, while touring for "Humanz", the band debuted a new song called "Idaho" in Seattle. Hewlett confirmed in December 2017 that the band planned to release a follow-up album to "Humanz" in 2018, citing a desire to keep the band going rather than take any prolonged breaks as the band had usually done with previous projects. Hewlett described several of the demos and new material as a "new direction" for the band, stating that he hopes to move the band's artwork in a similar direction. During a performance in Chile on the final leg of the Humanz Tour, Albarn confirmed that the new album was coming "very soon" and premiered a new song called "Hollywood" featuring Jamie Principle and Snoop Dogg. On 26 May 2018, the album was officially announced to be titled "The Now Now", co-produced by James Ford. On 31 May, the music video for the single "Humility", featuring George Benson, was released alongside "Lake Zurich". From 7 to 21 June, the band released the singles "Sorcererz", "Fire Flies", and "Hollywood". "The Now Now" was released on 29 June 2018. On 13 September, "Tranz" was released as a single along with a music video. All of the songs from the album have a visualizer except "Humility". 
In the fictional Gorillaz storyline, the band introduced Ace from Cartoon Network's animated series "The Powerpuff Girls" as a temporary bassist, filling in for the imprisoned Murdoc Niccals (as seen in the band's Brit Awards acceptance speech for "Humanz"). From 4 June to 26 October 2018, the band ran a bi-weekly text-adventure ARG called "Free Murdoc", in which the player assists Murdoc as he attempts to escape from prison. Murdoc was reunited with the band on 20 September, in time to join them on the final leg of The Now Now Tour. On 25 October, the band announced they would be partnering with G-Shock to create a line of Gorillaz watches. To promote the watches, the band released a monthly web series called "Mission M101". On 21 November 2019, the band announced a documentary covering the production of "Humanz" and "The Now Now" and their accompanying tours, titled "Gorillaz: Reject False Icons". It was released on 16 December via a worldwide theatrical screening. Eight days later, an edited version of the documentary subtitled the "Director's Cut" was uploaded to the official Gorillaz YouTube page in three parts: "Humanz", "Humanz World Tour" and "The Now Now". On 29 January 2020, the band announced its new project "Song Machine". Eschewing the typical album format of releasing music, the band would instead release one new song a month (labeled "episodes"), with 13 tracks released throughout 2020 to comprise the first "season" of Song Machine. Elaborating on the idea behind "Song Machine" in a radio interview shortly after the announcement of the project, Albarn explained: "We no longer kind of see ourselves as constrained to making albums. We can now make episodes and seasons." Each episode features previously unannounced guest musicians on new Gorillaz material, the first being "Momentary Bliss", which was released on 31 January and features both British rapper Slowthai and the Kent-based punk rock duo Slaves. Upon the premiere of "Momentary Bliss", Albarn revealed that the group had been in the studio with ScHoolboy Q and Sampa the Great, among others, although he said that these songs were likely to be saved for future seasons of "Song Machine". The group also teased a possible collaboration with Australian band Tame Impala on Instagram. On 27 February, the band released the second episode of "Song Machine", entitled "Désolé", featuring Malian singer Fatoumata Diawara. The third track, "Aries", was released on 9 April and featured Peter Hook and Georgia. The fourth track, "How Far?", featuring Tony Allen and Skepta, was released on 2 May. This song was released without an accompanying music video as a tribute to Allen, who died on 30 April. On 26 May, Gorillaz officially announced via their social media pages a new annual book titled "Gorillaz Almanac". It comes in three editions: standard, deluxe and super deluxe, all set for release on 16 October, with a physical release of season one of "Song Machine" included with each copy. On 9 June, the band released "Friday 13th", the fourth episode of "Song Machine"; the track features French-British rapper Octavian. Writers and critics have variously described Gorillaz as art pop,
https://en.wikipedia.org/wiki?curid=13084
GW-BASIC GW-BASIC is a dialect of the BASIC programming language developed by Microsoft from IBM BASICA. It is functionally identical to BASICA, but is a fully self-contained executable and does not need the Cassette BASIC ROM. It was bundled with MS-DOS operating systems on IBM PC compatibles by Microsoft. Microsoft also sold a BASIC compiler, BASCOM, compatible with GW-BASIC, for programs needing more speed. The language is suitable for simple games, business programs and the like. Since it was included with most versions of MS-DOS, it was also a low-cost way for many aspiring programmers to learn the fundamentals of computer programming. With the release of MS-DOS 5.0, GW-BASIC's place was eventually taken by QBasic, the interpreter part of the separately available QuickBASIC compiler. On May 21, 2020, Microsoft released the 8088 assembler source code for GW-BASIC 1.0 on GitHub under the MIT License. IBM BASICA and GW-BASIC are largely ports of MBASIC version 5.x, but with added features specifically for the IBM PC hardware. Common features of BASIC-80 5.x and BASICA/GW-BASIC include WHILE/WEND loops, variable names of up to 40 characters, the OPTION BASE statement for setting the starting index of array variables, and the CHAIN and MERGE commands. The ability to "crunch" program lines by omitting spaces, a common feature of earlier Microsoft BASIC implementations, was removed from BASIC-80 5.x and BASICA/GW-BASIC. BASIC-80 programs not using PEEK/POKE statements run under GW-BASIC. BASICA adds many features for the IBM PC such as sound, graphics, and memory commands. Features not present in BASIC-80 include the ability to execute the RND function with no parameters and the ability to save programs in a "protected" format, preventing them from being LISTed. BASICA also allows double-precision numbers to be used with mathematical and trigonometric functions such as COS, SIN, and ATN, which was not allowed in 8-bit versions of BASIC. This feature was not normally enabled and required the optional parameter /D at startup, i.e., GWBASIC /D; BASIC's memory footprint was slightly increased if it was used. Microsoft did not offer a generic version of MS-DOS until v3.20 in 1986; before then, all variants of the operating system were OEM versions. Depending on the OEM, BASIC was distributed as either BASICA.EXE or GWBASIC.EXE. The former should not be confused with IBM BASICA, which always came as a .COM file. Some variants of BASIC have extra features to support a particular machine. For example, the AT&T and Tandy versions of DOS include a special GW-BASIC that supports their enhanced sound and graphics capabilities. The initial version of GW-BASIC was the one included with Compaq DOS 1.13, released with the Compaq Portable in 1983, and was analogous to IBM BASICA 1.10. It uses the CP/M-derived file control blocks for disk access and does not support subdirectories. Later versions support subdirectories, improved graphics, and other capabilities. GW-BASIC 3.20 (1986) adds EGA graphics support (no version of BASICA or GW-BASIC had VGA support) and is the last major new version released before it was superseded by QBasic. Buyers of Hercules Graphics Cards received a special version of GW-BASIC on the card's utility disk, called HBASIC, which adds support for its 720×348 monochrome graphics. Other versions of BASICA/GW-BASIC do not support Hercules graphics and can only display graphics on that card through the use of third-party CGA emulation, such as SIMCGA. GW-BASIC has a command line-based integrated development environment (IDE) based on Dartmouth BASIC. Using the cursor movement keys, any line displayed on the screen can be edited. 
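As a brief illustration of the /D switch described above, the following session is a hypothetical sketch rather than an excerpt from any manual; it uses only documented GW-BASIC behaviour (the GWBASIC /D invocation, the # double-precision suffix, and the "Ok" prompt), though the exact output formatting may vary by version:

C:\>GWBASIC /D
10 ' /D enables double precision for SIN, COS, ATN and similar functions
20 PI# = ATN(1#) * 4#    ' the # suffix marks a double-precision variable
30 PRINT PI#
RUN
 3.141592653589793
Ok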
It also includes function key shortcuts at the bottom of the screen. Like other early microcomputer versions of BASIC, GW-BASIC lacks many of the structures needed for structured programming, such as local variables, and GW-BASIC programs executed relatively slowly because it was an interpreted language. All program lines must be numbered; all non-numbered lines are considered to be commands in direct mode to be executed immediately. Program source files are normally saved in binary compressed format with tokens replacing keywords, with an option to save in ASCII text form. The GW-BASIC command-line environment has commands to RUN the current program, or quit to the operating SYSTEM; these commands can also be used as program statements. There is little support for structured programming in GW-BASIC. All IF/THEN/ELSE conditional statements must be written on one line, although WHILE/WEND statements may group multiple lines. Functions can only be defined using the single-line DEF FN statement (e.g., DEF FNLOG(base,number)=LOG(number)/LOG(base)). The data type of variables can be specified with a character at the end of the variable name: A$ is a string of characters, A% is an integer, etc. Groups of variables can also be set to default types based on the initial letter of their name by use of the DEFINT, DEFSTR, etc., statements. The default type for undeclared variables not identified by such typing statements is single-precision floating point (32-bit MBF). GW-BASIC allows use of joystick and light pen input devices. GW-BASIC can read from and write to files and COM ports; it can also do event trapping for ports. Since the cassette tape port interface of the original IBM PC was never implemented on compatibles, cassette operations are not supported. GW-BASIC can play simple music using the PLAY statement, which takes a string of notes represented in a music macro language, e.g., PLAY "edcdeee2dfedc4". More low-level control is possible with the SOUND statement, which takes the arguments of a frequency in hertz and a length in clock ticks for the standard internal PC speaker in IBM machines. Consequently, sound is limited to single-channel beeps and whistles, as befits a 'business' machine. Home-based PCs like the Tandy 1000 allow up to three channels of sound for the SOUND and PLAY commands. There are several theories on what the initials "GW" stand for. Greg Whitten, an early Microsoft employee who developed the standards in the company's BASIC compiler line, says Bill Gates picked the name GW-BASIC. Whitten refers to it as "Gee-Whiz" BASIC and is unsure if Gates named the program after him. The "Microsoft User Manual" from Microsoft Press also refers to it by this name. It may have also been nicknamed "Gee-Whiz" because of its numerous graphics commands. Other common theories as to the initials' origins include "Graphics and Windows", "Gates, William" (Microsoft's president at the time), or "Gates-Whitten" (the two main designers of the program).
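The conventions described above (mandatory line numbers, the single-line DEF FN, type-suffix characters, one-line IF/THEN, and the PLAY and SOUND statements) can be tied together in a short program. The listing below is an illustrative sketch written for this article, not an example from GW-BASIC's documentation:

10 ' Every stored program line carries a number; unnumbered input runs immediately
20 DEF FNLOG(B, N) = LOG(N) / LOG(B)   ' single-line function definition
30 A$ = "GW-BASIC"                     ' $ suffix: string variable
40 I% = 8                              ' % suffix: integer variable
50 IF I% > 0 THEN PRINT A$; " log2("; I%; ") ="; FNLOG(2, I%)
60 PLAY "edcdeee2dfedc4"               ' music macro language string
70 SOUND 440, 18                       ' 440 Hz for 18 clock ticks (roughly one second)
80 SYSTEM                              ' exit to DOS

Typing RUN at the Ok prompt executes the program; line 50 illustrates why control flow in GW-BASIC tends to stay on one line, since an IF, its condition, and its action must all fit on a single numbered line.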
https://en.wikipedia.org/wiki?curid=13087
Granite Granite is a common type of felsic intrusive igneous rock that is granular and phaneritic in texture. Granites can be predominantly white, pink, or gray in color, depending on their mineralogy. The word "granite" comes from the Latin "granum", a grain, in reference to the coarse-grained structure of such a completely crystalline rock. Strictly speaking, granite is an igneous rock with between 20% and 60% quartz by volume, and at least 35% of the total feldspar consisting of alkali feldspar, although commonly the term "granite" is used to refer to a wider range of coarse-grained igneous rocks containing quartz and feldspar. The term "granitic" means granite-like and is applied to granite and a group of intrusive igneous rocks with similar textures and slight variations in composition and origin. These rocks mainly consist of feldspar, quartz, mica, and amphibole minerals, which form an interlocking, somewhat equigranular matrix of feldspar and quartz with scattered darker biotite mica and amphibole (often hornblende) peppering the lighter color minerals. Occasionally some individual crystals (phenocrysts) are larger than the groundmass, in which case the texture is known as porphyritic. A granitic rock with a porphyritic texture is known as a granite porphyry. Granitoid is a general, descriptive field term for lighter-colored, coarse-grained igneous rocks. Petrographic examination is required for identification of specific types of granitoids. The extrusive igneous rock equivalent of granite is rhyolite. Granite is nearly always massive (i.e., lacking any internal structures), hard, and tough. These properties have made granite a widespread construction stone throughout human history. The average density of granite is between 2.65 and 2.75 g/cm3, its compressive strength usually lies above 200 MPa, and its viscosity near STP is 3–6·10^19 Pa·s. The melting temperature of dry granite at ambient pressure is 1215–1260 °C; it is strongly reduced in the presence of water, down to 650 °C at a few kbar of pressure. Granite has poor primary permeability overall, but strong secondary permeability through cracks and fractures if they are present. Granite is classified according to the QAPF diagram for coarse-grained plutonic rocks and is named according to the percentage of quartz, alkali feldspar (orthoclase, sanidine, or microcline) and plagioclase feldspar on the A-Q-P half of the diagram. True granite (according to modern petrologic convention) contains both plagioclase and alkali feldspars. When a granitoid is devoid or nearly devoid of plagioclase, the rock is referred to as alkali feldspar granite. When a granitoid contains less than 10% orthoclase, it is called tonalite; pyroxene and amphibole are common in tonalite. A granite containing both muscovite and biotite micas is called a binary or "two-mica" granite. Two-mica granites are typically high in potassium and low in plagioclase, and are usually S-type granites or A-type granites. A worldwide average of the chemical composition of granite, by weight percent, based on 2485 analyses, is: SiO2 72.04%, Al2O3 14.42%, K2O 4.12%, Na2O 3.69%, CaO 1.82%, FeO 1.68%, Fe2O3 1.22%, MgO 0.71%, TiO2 0.30%, P2O5 0.12%, MnO 0.05%. Granite-containing rock is widely distributed throughout the continental crust. Much of it was intruded during the Precambrian age; it is the most abundant basement rock that underlies the relatively thin sedimentary veneer of the continents. Outcrops of granite tend to form tors, domes or bornhardts, and rounded massifs. Granites sometimes occur in circular depressions surrounded by a range of hills, formed by the metamorphic aureole or hornfels. 
Granite often occurs as relatively small, less than 100 km² stock masses (stocks) and in batholiths that are often associated with orogenic mountain ranges. Small dikes of granitic composition called aplites are often associated with the margins of granitic intrusions. In some locations, very coarse-grained pegmatite masses occur with granite. Granite has a felsic composition and is more common in continental crust than in oceanic crust. Granites crystallize from felsic melts, which are less dense than mafic rocks and thus tend to ascend toward the surface. In contrast, mafic rocks, either basalts or gabbros, once metamorphosed at eclogite facies, tend to sink into the mantle beneath the Moho. Granitoids have crystallized from felsic magmas that have compositions at or near a eutectic point (or a temperature minimum on a cotectic curve). Magmas are composed of melts and minerals in variable abundances. Traditionally, magmatic minerals are crystallized from melts that have completely separated from their parental rocks and thus are highly evolved because of igneous differentiation. If a granite cools slowly, it has the potential to form larger crystals. There are also peritectic and residual minerals in granitic magmas. Peritectic minerals are generated through peritectic reactions, whereas residual minerals are inherited from parental rocks. In either case, magmas will evolve to the eutectic for crystallization upon cooling. Anatectic melts are also produced by peritectic reactions, but they are much less evolved than magmatic melts because they have not separated from their parental rocks. Nevertheless, the composition of anatectic melts may change toward that of magmatic melts through high-degree fractional crystallization. Fractional crystallisation serves to reduce a melt in iron, magnesium, titanium, calcium and sodium, and to enrich the melt in potassium and silicon – alkali feldspar (rich in potassium) and quartz (SiO2) are two of the defining constituents of granite. This process operates regardless of the origin of parental magmas to granites, and regardless of their chemistry. The composition and origin of any magma that differentiates into granite leave certain petrological evidence as to what the granite's parental rock was. The final texture and composition of a granite are generally distinctive as to its parental rock. For instance, a granite that is derived from partial melting of metasedimentary rocks may have more alkali feldspar, whereas a granite derived from partial melting of metaigneous rocks may be richer in plagioclase. It is on this basis that the modern "alphabet" classification schemes are based. The letter-based Chappell & White classification system was proposed initially to divide granites into I-type (igneous source) and S-type (sedimentary source) granites. Both types are produced by partial melting of crustal rocks, either metaigneous rocks or metasedimentary rocks. M-type granite was later proposed to cover those granites that were clearly sourced from crystallized mafic magmas, generally sourced from the mantle. However, this proposal has been rejected by studies of experimental petrology, which demonstrate that partial melting of mantle peridotite cannot produce granitic melts in any case. Although the fractional crystallisation of basaltic melts can yield small amounts of granites, such granites must occur together with large amounts of basaltic rocks. 
A-type granites were defined by their occurrence in anorogenic settings and their alkaline and anhydrous compositions. They show a peculiar mineralogy and geochemistry, with particularly high silicon and potassium at the expense of calcium and magnesium. These granites are produced by partial melting of refractory lithology such as granulites in the lower continental crust at high thermal gradients. This leads to significant extraction of hydrous felsic melts from granulite-facies restites. A-type granites occur in the Koettlitz Glacier Alkaline Province in the Royal Society Range, Antarctica. The rhyolites of the Yellowstone Caldera are examples of volcanic equivalents of A-type granite. H-type granites were suggested for hybrid granites, which were hypothesized to form by mixing between mafic and felsic magmas from different sources, e.g. M-type and S-type. However, the big difference in rheology between mafic and felsic magmas makes this process unlikely in nature. An old and largely discounted hypothesis, granitization, states that granite is formed in place through extreme metasomatism by fluids bringing in elements, e.g. potassium, and removing others, e.g. calcium, to transform a metamorphic rock into a granite. This was supposed to occur across a migrating front. After more than 50 years of study, it has become clear that granitic magmas have separated from their sources and experienced fractional crystallization during their ascent toward the surface. On the other hand, granitic melts can be produced in place through the partial melting of metamorphic rocks by extracting melt-mobile elements such as potassium and silicon into the melts but leaving others such as calcium and iron in granulite residues. Once a metamorphic rock is melted, it becomes a kind of migmatite, composed of leucosome and melanosome. In nature, metamorphic rocks may undergo partial melting to transform into migmatites through peritectic reactions, with anatectic melts crystallizing as leucosomes. As soon as the anatectic melts have separated from their sources and become highly evolved through fractional crystallization during their ascent toward the surface, they become the magmatic melts and minerals of granitic composition. After the extraction of anatectic melts, the migmatites become a kind of granulite. In all cases, the partial melting of solid rocks requires high temperatures, and also water or other volatiles which act as a catalyst by lowering the solidus temperature of these rocks. The production of granite at crustal depths requires high heat flow, which cannot be provided by heat-producing elements in the crust. Furthermore, high heat flow is necessary to produce granulite-facies metamorphic rocks in orogens, indicating extreme metamorphism at high thermal gradients. In-situ granitization by such extreme metamorphism is possible if crustal rocks are heated by the asthenospheric mantle in rifting orogens, where collision-thickened orogenic lithosphere is first thinned and then undergoes extensional tectonism for active rifting. The ascent and emplacement of large volumes of granite within the upper continental crust is a source of much debate amongst geologists. There is a lack of field evidence for any proposed mechanisms, so hypotheses are predominantly based upon experimental data. There are two major hypotheses for the ascent of magma through the crust: Stokes diapirism and fracture propagation. Of these two mechanisms, Stokes diapirism was favoured for many years in the absence of a reasonable alternative. 
The basic idea is that magma will rise through the crust as a single mass through buoyancy. As it rises, it heats the wall rocks, causing them to behave as a power-law fluid and thus flow around the pluton, allowing it to pass rapidly and without major heat loss. This is entirely feasible in the warm, ductile lower crust where rocks are easily deformed, but runs into problems in the upper crust, which is far colder and more brittle. Rocks there do not deform so easily: for magma to rise as a pluton it would expend far too much energy in heating wall rocks, thus cooling and solidifying before reaching higher levels within the crust. Fracture propagation is the mechanism preferred by many geologists as it largely eliminates the major problems of moving a huge mass of magma through cold brittle crust. Magma rises instead in small channels along self-propagating dykes which form along new or pre-existing fracture or fault systems and networks of active shear zones. As these narrow conduits open, the first magma to enter solidifies and provides a form of insulation for later magma. Granitic magma must make room for itself or be intruded into other rocks in order to form an intrusion, and several mechanisms have been proposed to explain how large batholiths have been emplaced, including stoping (in which the rising granite fractures the wall rock and engulfs sinking blocks of it), assimilation (in which the granite melts its way up through the crust) and inflation (in which the granite body inflates under pressure and is injected into position). Most geologists today accept that a combination of these phenomena can be used to explain granite intrusions, and that not all granites can be explained entirely by one or another mechanism. Physical weathering occurs on a large scale in the form of exfoliation joints, which are the result of granite's expanding and fracturing as pressure is relieved when overlying material is removed by erosion or other processes. Chemical weathering of granite occurs when dilute carbonic acid, and other acids present in rain and soil waters, alter feldspar in a process called hydrolysis. As demonstrated in the following reaction, this causes potassium feldspar to form kaolinite, with potassium ions, bicarbonate, and silica in solution as byproducts: 2 KAlSi3O8 + 2 H2CO3 + 9 H2O → Al2Si2O5(OH)4 + 4 H4SiO4 + 2 K+ + 2 HCO3−. An end product of granite weathering is grus, which is often made up of coarse-grained fragments of disintegrated granite. Climatic variations also influence the weathering rate of granites. For about two thousand years, the relief engravings on Cleopatra's Needle obelisk had survived the arid conditions of its origin before its transfer to London. Within two hundred years, the red granite has drastically deteriorated in the damp and polluted air there. Soil development on granite reflects the rock's high quartz content and dearth of available bases, with the base-poor status predisposing the soil to acidification and podzolization in cool humid climates, as the weather-resistant quartz yields much sand. Feldspars also weather slowly in cool climes, allowing sand to dominate the fine-earth fraction. In warm humid regions, the weathering of feldspar as described above is accelerated so as to allow a much higher proportion of clay, with the Cecil soil series a prime example of the consequent Ultisol great soil group. Granite is a natural source of radiation, like most natural stones. Potassium-40 is a radioactive isotope of weak emission, and a constituent of alkali feldspar, which in turn is a common component of granitic rocks, more abundant in alkali feldspar granite and syenites. Some granites contain around 10 to 20 parts per million (ppm) of uranium. 
By contrast, more mafic rocks, such as tonalite, gabbro and diorite, have 1 to 5 ppm uranium, and limestones and sedimentary rocks usually have equally low amounts. Many large granite plutons are sources for palaeochannel-hosted or roll front uranium ore deposits, where the uranium washes into the sediments from the granite uplands and associated, often highly radioactive pegmatites. Cellars and basements built into soils over granite can become a trap for radon gas, which is formed by the decay of uranium. Radon gas poses significant health concerns and is the number two cause of lung cancer in the US behind smoking. Thorium occurs in all granites. Conway granite has been noted for its relatively high thorium concentration of 56±6 ppm. There is some concern that some granite sold as countertops or building material may be hazardous to health. Dan Steck of St. John's University has stated that approximately 5% of all granite is of concern, with the caveat that only a tiny percentage of the tens of thousands of granite slab types have been tested. Various resources from national geological survey organizations are accessible online to assist in assessing the risk factors in granite country and design rules relating, in particular, to preventing accumulation of radon gas in enclosed basements and dwellings. A study of granite countertops, initiated and paid for by the Marble Institute of America, was done in November 2008 by National Health and Engineering Inc. In this test, all of the 39 full-size granite slabs that were measured for the study showed radiation levels well below the European Union safety standards (section 4.1.1.1 of the National Health and Engineering study) and radon emission levels well below the average outdoor radon concentrations in the US. Granite and related marble industries are considered one of the oldest industries in the world, existing as far back as Ancient Egypt. Major modern exporters of granite include China, India, Italy, Brazil, Canada, Germany, Sweden, Spain and the United States. The Red Pyramid of Egypt (circa 2590 BC), named for the light crimson hue of its exposed limestone surfaces, is the third largest of the Egyptian pyramids. The Pyramid of Menkaure, likely dating to 2510 BC, was constructed of limestone and granite blocks. The Great Pyramid of Giza (c. 2580 BC) contains a huge granite sarcophagus fashioned of "Red Aswan Granite". The mostly ruined Black Pyramid, dating from the reign of Amenemhat III, once had a polished granite pyramidion or capstone, which is now on display in the main hall of the Egyptian Museum in Cairo (see Dahshur). Other uses in Ancient Egypt include columns, door lintels, sills, jambs, and wall and floor veneer. How the Egyptians worked the solid granite is still a matter of debate. Patrick Hunt has postulated that the Egyptians used emery, which has greater hardness on the Mohs scale. Rajaraja Chola I of the Chola Dynasty in South India built the world's first temple entirely of granite in the 11th century AD in Tanjore, India. The Brihadeeswarar Temple, dedicated to Lord Shiva, was built in 1010. The massive Gopuram (ornate, upper section of shrine) is believed to have a mass of around 81 tonnes. It was the tallest temple in south India. Imperial Roman granite was quarried mainly in Egypt, and also in Turkey and on the islands of Elba and Giglio. Granite became "an integral part of the Roman language of monumental architecture". The quarrying ceased around the third century AD. 
Beginning in Late Antiquity the granite was reused, a practice that since at least the early 16th century has been known as spolia. Through the process of case-hardening, granite becomes harder with age. The technology required to make tempered steel chisels was largely forgotten during the Middle Ages. As a result, Medieval stoneworkers were forced to use saws or emery to shorten ancient columns or hack them into discs. Giorgio Vasari noted in the 16th century that granite in quarries was "far softer and easier to work than after it has lain exposed", while ancient columns, because of their "hardness and solidity have nothing to fear from fire or sword, and time itself, that drives everything to ruin, not only has not destroyed them but has not even altered their colour." In some areas, granite is used for gravestones and memorials. Granite is a hard stone and requires skill to carve by hand. Until the early 18th century, in the Western world, granite could be carved only by hand tools, with generally poor results. A key breakthrough was the invention of steam-powered cutting and dressing tools by Alexander MacDonald of Aberdeen, inspired by seeing ancient Egyptian granite carvings. In 1832, the first polished tombstone of Aberdeen granite to be erected in an English cemetery was installed at Kensal Green Cemetery. It caused a sensation in the London monumental trade and for some years all polished granite ordered came from MacDonald's. As a result of the work of sculptor William Leslie, and later Sidney Field, granite memorials became a major status symbol in Victorian Britain. The royal sarcophagus at Frogmore was probably the pinnacle of this work and, at 30 tons, one of the largest. It was not until the 1880s that rival machinery and works could compete with the MacDonald works. Modern methods of carving include using computer-controlled rotary bits and sandblasting over a rubber stencil. Leaving the letters, numbers, and emblems exposed on the stone, the blaster can create virtually any kind of artwork or epitaph. The stone known as "black granite" is usually gabbro, which has a completely different chemical composition. Granite has been extensively used as a dimension stone and as flooring tiles in public and commercial buildings and monuments. Aberdeen in Scotland, which is constructed principally from local granite, is known as "The Granite City". Because of its abundance in New England, granite was commonly used to build foundations for homes there. The Granite Railway, America's first railroad, was built to haul granite from the quarries in Quincy, Massachusetts, to the Neponset River in the 1820s. Engineers have traditionally used polished granite surface plates to establish a plane of reference, since they are relatively impervious, inflexible, and maintain good dimensional stability. Sandblasted concrete with a heavy aggregate content has an appearance similar to rough granite, and is often used as a substitute when use of real granite is impractical. Granite tables are used extensively as bases or even as the entire structural body of optical instruments, CMMs, and very high precision CNC machines because of granite's rigidity, high dimensional stability, and excellent vibration characteristics. A most unusual use of granite was as the material of the tracks of the Haytor Granite Tramway, Devon, England, in 1820. Granite block is usually processed into slabs, which can be cut and shaped by a cutting center. 
In military engineering, Finland planted granite boulders along its Mannerheim Line to block invasion by Russian tanks in the Winter War of 1940. Curling stones are traditionally fashioned of Ailsa Craig granite. The first stones were made in the 1750s, the original source being Ailsa Craig in Scotland. Because of the rarity of this granite, the best stones can cost as much as US$1,500. Between 60 and 70 percent of the stones used today are made from Ailsa Craig granite; although the island is now a wildlife reserve, it is still quarried under license for Ailsa granite by Kays of Scotland for curling stones. Granite is one of the rocks most prized by climbers, for its steepness, soundness, crack systems, and friction. Well-known venues for granite climbing include the Yosemite Valley, the Bugaboos, the Mont Blanc massif (and peaks such as the Aiguille du Dru, the Aiguille du Midi and the Grandes Jorasses), the Mourne Mountains, the Adamello-Presanella Alps, the Bregaglia, Corsica, parts of the Karakoram (especially the Trango Towers), the Fitzroy Massif, Patagonia, Baffin Island, Ogawayama, the Cornish coast, the Cairngorms, Sugarloaf Mountain in Rio de Janeiro, Brazil, and the Stawamus Chief, British Columbia, Canada. Granite rock climbing is so popular that many of the artificial rock climbing walls found in gyms and theme parks are made to look and feel like granite.
https://en.wikipedia.org/wiki?curid=13088
Global Climate Coalition The Global Climate Coalition (GCC) (1989–2001) was an international lobbyist group of businesses that opposed action to reduce greenhouse gas emissions and publicly challenged the science behind global warming. The GCC was the largest industry group active in climate policy and the most prominent industry advocate in international climate negotiations. The GCC was involved in opposition to the Kyoto Protocol, and played a role in blocking ratification by the United States. The coalition knew it could not deny the scientific consensus, but sought to sow doubt over the scientific consensus on climate change and create manufactured controversy. The GCC dissolved in 2001 after membership declined in the face of improved understanding of the role of greenhouse gases in climate change and of public criticism. The Global Climate Coalition (GCC) was formed in 1989 as a project under the auspices of the National Association of Manufacturers. The GCC was formed to represent the interests of the major producers and users of fossil fuels, to oppose regulation to mitigate global warming, and to challenge the science behind global warming. Context for the founding of the GCC included the 1988 establishment of the Intergovernmental Panel on Climate Change (IPCC) and NASA climatologist James Hansen's congressional testimony that year that climate change was occurring. The government affairs offices of five or six corporations recognized that they had been inadequately organized for the Montreal Protocol, the international treaty that phased out ozone-depleting chlorofluorocarbons, and the Clean Air Act in the United States, and recognized that fossil fuels would be targeted for regulation. According to GCC's mission statement on the home page of its website, GCC was established "to coordinate business participation in the international policy debate on the issue of global climate change and global warming," and GCC's executive director in a 1993 press release said GCC was organized "as the leading voice for industry on the global climate change issue." GCC reorganized independently in 1992, with the first chairman of the board of directors being the director of government relations for the Phillips Petroleum Company. Exxon was a founding member, and a founding member of the GCC's board of directors. Exxon, and later ExxonMobil, had a leadership role in the coalition. The American Petroleum Institute (API) was a leading member of the coalition. API's executive vice president was a chairman of the coalition's board of directors. Other GCC founding members included the National Coal Association, United States Chamber of Commerce, American Forest & Paper Association, and Edison Electric Institute. GCC's executive director John Shlaes was previously the director of government relations at the Edison Electric Institute. GCC was run by Ruder Finn, a public relations firm. GCC was the largest industry group active in climate policy. About 40 companies and industry associations were GCC members. Counting member corporations, member trade associations, and businesses represented by member trade associations, GCC represented over 230,000 businesses. Industry sectors represented included: aluminium, paper, transportation, power generation, petroleum, chemical, and small businesses. All the major oil companies were members. GCC members were from industries that would have been adversely affected by limitations on fossil fuel consumption. GCC was funded by membership dues. 
GCC was one of the most powerful lobbyist groups against action to mitigate global warming. It was the most prominent industry advocate in international climate negotiations, and led a campaign opposed to policies to reduce greenhouse gas emissions. The GCC was one of the most powerful non-governmental organizations representing business interests in climate policy, according to Kal Raustiala, professor at the UCLA School of Law. GCC's advocacy activities included lobbying government officials, grassroots lobbying through press releases and advertising, participation in international climate conferences, criticism of the processes of international climate organizations, critiques of climate models, and personal attacks on scientists and environmentalists. Policy positions advocated by the coalition included denial of anthropogenic climate change, emphasizing the uncertainty in climatology, advocating for additional research, highlighting the benefits and downplaying the risks of climate change, stressing the priority of economic development, defending national sovereignty, and opposition to the regulation of greenhouse gas emissions. GCC sent delegations to all of the major international climate conventions. Only nations and non-profits may send official delegates to the United Nations Climate Change conferences. GCC registered with the United Nations Framework Convention on Climate Change as a non-governmental organization, and executives from GCC members attended official UN conferences as GCC delegates. In 1990, after U.S. President George H. W. Bush addressed the IPCC, urging caution in responding to global warming and offering no new proposals, GCC said Bush's speech was "very strong" and concurred with the priorities of economic development and additional research. GCC sent 30 attendees to the 1992 Earth Summit in Rio de Janeiro, where it lobbied to keep targets and timetables out of the Framework Convention on Climate Change. In December 1992, GCC's executive director wrote in a letter to "The New York Times": "...there is considerable debate on whether or not man-made greenhouse gases (produced primarily by burning fossil fuels) are triggering a dangerous 'global warming' trend." In 1992, GCC distributed a half-hour video entitled "The Greening of Planet Earth" to hundreds of journalists, the White House, and several Middle Eastern oil-producing countries, which suggested that increasing atmospheric carbon dioxide could boost crop yields and solve world hunger. In 1993, after U.S. President Bill Clinton pledged "to reducing our emissions of greenhouse gases to their 1990 levels by the year 2000," GCC's executive director said the pledge "could jeopardize the economic health of the nation." GCC's lobbying was key to the defeat in the United States Senate of Clinton's 1993 BTU tax proposal. In 1994, after United States Secretary of Energy Hazel R. O'Leary said the 1992 United Nations Framework Convention on Climate Change needed to be strengthened, and that voluntary carbon dioxide reductions may not be enough, GCC said it was "disturbed by the implication that the President's voluntary climate action plan, which is just getting under way, will be inadequate and that more stringent measures may be needed domestically." GCC did not fund original scientific research and its climate claims relied largely on the "World Climate Review" and its successor, the "World Climate Report", edited by Patrick Michaels and funded by the Western Fuels Association. 
GCC promoted the views of climate deniers such as Michaels, Fred Singer, and Richard Lindzen. In 1996, GCC published a report entitled "Global warming and extreme weather: fact vs. fiction", written by Robert E. Davis. GCC members questioned the efficacy of outright climate change denial and shifted their message to highlighting the economic costs of proposed greenhouse gas emission regulations and the limited effectiveness of proposals exempting developing nations. In 1995, after the United Nations Climate Change conference in Berlin agreed to negotiate greenhouse gas emission limits, GCC's executive director said the agreement gave "developing countries like China, India and Mexico a free ride" and would "change the relations between sovereign countries and the United Nations. This could have very significant implications. It could be a way of capping our economy." At a Washington, D.C. press conference on the eve of the second United Nations Climate Change conference in Geneva, GCC's executive director said, "The time for decision is not yet now." At the conference in Geneva, GCC issued a statement that said it was too early to determine the causes of global warming. GCC representatives lobbied scientists at the September 1996 IPCC conference in Mexico City. After actor Leonardo DiCaprio, chairman of Earth Day 2000, interviewed Clinton for ABC News, GCC sent out an e-mail that said that DiCaprio's first car was a Jeep Grand Cherokee and that his current car was a Chevrolet Tahoe. In 1995, GCC assembled an advisory committee of scientific and technical experts to compile an internal-only, 17-page report on climate science entitled "Predicting Future Climate Change: A Primer", which said: "The scientific basis for the Greenhouse Effect and the potential impact of human emissions of greenhouse gases such as CO2 on climate is well established and cannot be denied." In early 1996, GCC's operating committee asked the advisory committee to redact the sections that rebutted contrarian arguments; it then accepted the redacted report and distributed it to members. The draft document was disclosed in a 2007 lawsuit filed by the auto industry against California's efforts to regulate automotive greenhouse gas emissions. According to "The New York Times", the primer demonstrated that "even as the coalition worked to sway opinion, its own scientific and technical experts were advising that the science backing the role of greenhouse gases in global warming could not be refuted." According to the Union of Concerned Scientists in 2015, the primer was "remarkable for indisputably showing that, while some fossil fuel companies' deception about climate science has continued to the present day, at least two decades ago the companies' own scientific experts were internally alerting them about the realities and implications of climate change." GCC was an industry participant in the review process of the IPCC Second Assessment Report. In 1996, prior to the publication of the Second Assessment Report, GCC distributed a report entitled "The IPCC: Institutionalized Scientific Cleansing" to reporters, US Congressmen, and scientists. The coalition report said that Benjamin D. Santer, the lead author of Chapter 8 of the assessment, entitled "Detection of Climate Change and Attribution of Causes," had altered the text after acceptance by the Working Group, and without approval of the authors, to strike content characterizing the uncertainty of the science. 
Frederick Seitz repeated GCC's charges in a letter to the "Wall Street Journal" published June 12, 1996. The coalition ran newspaper advertisements that said: "unless the management of the IPCC promptly undertakes to republish the printed versions...the IPCC's credibility will have been lost." Santer and his co-authors said the edits were integrations of comments from peer review as per agreed IPCC processes. GCC was the main industry group in the United States opposed to the Kyoto Protocol, which committed signatories to reduce greenhouse gas emissions. The coalition "was the leading industry group working in opposition to the Kyoto Protocol," according to Greenpeace, and led opposition to the Kyoto Protocol, according to the "Los Angeles Times". Prior to 1997, GCC spent about $1 million annually lobbying against limits on emissions; before Kyoto, GCC annual revenue peaked around $1.5 million; GCC spent $13 million on advertising in opposition to the Kyoto treaty. The coalition funded the Global Climate Information Project and hired the advertising firm that produced the 1993–1994 Harry and Louise advertising campaign which opposed Clinton's health care initiative. The advertisements said, "the UN Climate Treaty isn't Global...and it won't work" and "Americans will pay the price...50 cents more for every gallon of gasoline." GCC opposed the signing of the Kyoto Protocol by Clinton, and was influential in the withdrawal from the Kyoto Protocol by the administration of President George W. Bush. According to briefing notes prepared by the United States Department of State for the under-secretary of state, Bush's rejection of the Kyoto Protocol was "in part based on input from" GCC. GCC lobbying was key to the July 1997 unanimous passage in the United States Senate of the Byrd-Hagel Resolution, which reflected the coalition's position that restrictions on greenhouse gas emissions must include developing countries. GCC's chairman told a US congressional committee that mandatory greenhouse gas emissions limits were "an unjustified rush to judgement." The coalition sent 50 delegates to the third Conference of the Parties to the United Nations Climate Change Conference in Kyoto. On December 11, 1997, the day the Kyoto delegates reached agreement on legally binding limits on greenhouse gas emissions, GCC's chairman said the agreement would be defeated by the US Senate. In 2001, GCC's executive director compared the Kyoto Protocol to the "RMS Titanic". GCC's challenge to science prompted a backlash from environmental groups. Environmentalists described GCC as a "club for polluters" and called for members to withdraw their support. "Abandonment of the Global Climate Coalition by leading companies is partly in response to the mounting evidence that the world is indeed getting warmer," according to environmentalist Lester R. Brown. In 1998, Green Party delegates to the European Parliament introduced an unsuccessful proposal that the World Meteorological Organization name hurricanes after GCC members. Defections weakened the coalition. In 1996, British Petroleum resigned and later announced support for the Kyoto Protocol and commitment to greenhouse gas emission reductions. In 1997, Royal Dutch Shell withdrew after criticism from European environmental groups. In 1999, Ford Motor Company was the first US company to withdraw; "The New York Times" described the departure as "the latest sign of divisions within heavy industry over how to respond to global warming."
DuPont left the coalition in 1997 and Shell Oil (US) left in 1998. In 2000, GCC corporate members were the targets of a national student-run university divestiture campaign. Between December 1999 and early March 2000, Texaco, the Southern Company, General Motors and DaimlerChrysler withdrew. Some former coalition members joined the Business Environmental Leadership Council within the Pew Center on Global Climate Change, which represented diverse stakeholders, including business interests, with a commitment to peer-reviewed scientific research and accepted the need for emissions restrictions to address climate change. In 2000, GCC restructured as an association of trade associations; membership was limited to trade associations, and individual corporations were represented through their trade association. Brown called the restructuring "a thinly veiled effort to conceal the real issue - the loss of so many key corporate members." In 2001, after US President George W. Bush withdrew the US from the Kyoto process, GCC disbanded. Absent the participation of the US, the effectiveness of the Kyoto process was limited. GCC said on its website that its mission had been successfully achieved, writing "At this point, both Congress and the Administration agree that the U.S. should not accept the mandatory cuts in emissions required by the protocol." In 2015, the Union of Concerned Scientists compared GCC's role in the public policy debate on climate change to the roles in the public policy debate on tobacco safety of the Tobacco Institute, the tobacco industry's lobbyist group, and the Council for Tobacco Research, which promoted misleading science. Environmentalist Bill McKibben said that, by promoting doubt about the science, "throughout the 1990s, even as other nations took action, the fossil fuel industry's Global Climate Coalition managed to make American journalists treat the accelerating warming as a he-said-she-said story." According to the "Los Angeles Times", GCC members integrated projections from climate models into their operational planning while publicly criticizing the models.
https://en.wikipedia.org/wiki?curid=13089
Gotham City Gotham City, or simply Gotham, is a fictional city appearing in American comic books published by DC Comics, best known as the home of Batman. The city was first identified as Batman's place of residence in "Batman" #4 (December 1940) and has since been the primary setting for stories featuring the character. Gotham City is traditionally depicted as being located in the U.S. state of New Jersey. Over the years, Gotham's look and atmosphere have been influenced by cities such as New York City and Chicago. Locations used as inspiration or filming locations for Gotham City in the live-action "Batman" films and television series have included New York City, New Jersey, Chicago, Vancouver, Detroit, Pittsburgh, Los Angeles, London, Toronto, and Hong Kong. Writer Bill Finger, on the naming of the city and the reason for changing Batman's locale from New York City to a fictional city, said, "Originally I was going to call Gotham City 'Civic City.' Then I tried 'Capital City,' then 'Coast City.' Then I flipped through the New York City phone book and spotted the name 'Gotham Jewelers' and said, 'That's it,' Gotham City. We didn't call it New York because we wanted anybody in any city to identify with it." "Gotham" has been a nickname for New York City that first became popular in the nineteenth century; Washington Irving had first attached it to New York in the November 11, 1807 edition of his "Salmagundi", a periodical which lampooned New York culture and politics. Irving took the name from the village of Gotham, Nottinghamshire, England: a place inhabited, according to folklore, by fools. Gotham City, like other cities in the DC Universe, has varied in its portrayals over the decades, but the city's location is traditionally depicted as being in the state of New Jersey. In "Amazing World of DC Comics" #14 (March 1977), publisher Mark Gruenwald discusses the history of the Justice League and indicates that Gotham City is located in New Jersey. In the "World's Greatest Super Heroes" (August 1978) comic strip, a map is shown placing Gotham City in New Jersey and Metropolis in Delaware. "World's Finest Comics" #259 (November 1979) also confirms that Gotham is in New Jersey. "New Adventures of Superboy" #22 (October 1981) and the 1990 "Atlas of the DC Universe" both show maps of Gotham City in New Jersey and Metropolis in the state of Delaware. "Detective Comics" #503 (June 1983) includes several references suggesting Gotham City is in New Jersey. A location on the Jersey Shore is described as "twenty miles north of Gotham". Within the same issue, Robin and Batgirl drive from a "secret New Jersey airfield" to Gotham City and then drive on the "Hudson County Highway"; Hudson County is the name of an actual county in New Jersey. "Batman: Shadow of the Bat" Annual #1 (June 1993) further establishes that Gotham City is in New Jersey: Sal E. Jordan's driver's license in the comic shows his address as "72 Faxcol Dr, Gotham City, NJ 12345". The 2016 film "Suicide Squad" reveals Gotham City to be in the state of New Jersey within the DC Extended Universe. The 2019 film "Joker" takes place in a dystopic Gotham City and was filmed in Jersey City, New Jersey and New York City. Gotham City is the home of Batman, just as Metropolis is home to Superman, and the two heroes often work together in both cities. In comic book depictions, the exact distance between Gotham and Metropolis has varied over the years, with the cities usually being within driving distance of each other.
The two cities are sometimes portrayed as twin cities on opposite sides of the Delaware Bay, with Gotham in New Jersey and Metropolis in Delaware. "The Atlas of the DC Universe" from the 1990s places Metropolis in Delaware and Gotham City in New Jersey. In popular culture, New York City has also garnered the nickname "Metropolis", describing the city in the daytime, contrasting with "Gotham", sometimes used to describe New York City at night. During the Bronze Age of Comic Books, the Metro-Narrows Bridge was depicted as the main route connecting the twin cities of Metropolis and Gotham City; it has been described as the longest suspension bridge in the world. A map appeared in "The New Adventures of Superboy" #22 (October 1981) that showed Smallville within driving distance of both Metropolis and Gotham City; Smallville was relocated to Kansas in post-Crisis continuity. A map of the United States in "The Secret Files & Origins Guide to the DC Universe 2000" #1 (March 2000) depicts Metropolis and Gotham City as being somewhere in the Tri-state Area alongside Blüdhaven. Within the DC Extended Universe, the 2016 film "Batman v Superman: Dawn of Justice" depicts Gotham City as being located across a bay from Metropolis. A Norwegian mercenary, Captain Jon Logerquist, founded Gotham City in 1635, and the British later took it over, a story that parallels the founding of New York by the Dutch (as New Amsterdam) and its later takeover by the British. During the American Revolutionary War, Gotham City was the site of a major battle (paralleling the Battle of Brooklyn in the American Revolution). This was detailed in Rick Veitch's "Swamp Thing" #85 featuring Tomahawk. Rumors held it to be the site of various occult rites. The 2011 comic book series "Batman: Gates of Gotham" details a history of Gotham City in which Alan Wayne (Bruce Wayne's ancestor), Theodore Cobblepot (Oswald Cobblepot's ancestor), and Edward Elliot (Thomas Elliot's ancestor) are considered the founding fathers of Gotham. In 1881, they constructed three bridges called the Gates of Gotham, each bearing one of their last names. Edward Elliot became increasingly jealous of the Wayne family's popularity and wealth during this period, a jealousy that would spread to his great-great-grandson, Thomas Elliot, also known as Hush. The occult origins of Gotham are further delved into by Peter Milligan's 1990 story arc "Dark Knight, Dark City", which reveals that some of the American Founding Fathers were involved in summoning a bat-demon which becomes trapped beneath old "Gotham Town", its dark influence spreading as Gotham City evolves. A similar trend is followed in 2005's "Shadowpact" #5 by Bill Willingham, which expands upon Gotham's occult heritage by revealing a being who has slept for 40,000 years beneath the land upon which Gotham City was built. Strega, the being's servant, says that the "dark and often cursed character" of the city was influenced by the being, who now uses the name "Doctor Gotham". During the American Civil War, Gotham was defended by Col. Nathan Cobblepot, an ancestor of the Penguin fighting for the Union Army, in the legendary Battle of Gotham Heights. In "Gotham Underground" #2 by Frank Tieri, Tobias Whale claims that 19th-century Gotham was run by five rival gangs, until the first "masks" appeared, eventually forming a gang of their own. It is not clear whether these were vigilantes or costumed criminals. In contemporary times, Batman is considered the protector of Gotham, as he is fiercely protective of his home city.
While other masked vigilantes also operate in Gotham City, they do so with Batman's approval, since he is considered the best and most knowledgeable crime-fighter in the city. Many storylines have added more events to Gotham's history, at the same time greatly affecting the city and its people. Perhaps the most far-reaching was a long set of serial storylines that started with Ra's al Ghul releasing a debilitating virus called the "Clench" during the "Contagion" storyline. As that arc concluded, the city was beginning to recover, only to suffer an earthquake measuring 7.6 on the Richter scale in the 1998 "Cataclysm" storyline. This resulted in the federal government cutting Gotham off from the rest of the United States in the 1999 storyline "No Man's Land", with the city's remaining residents forced to engage in gang warfare, either as active participants or by paying for protection from groups ranging from the GCPD to the Penguin, just to stay alive. Eventually, Gotham was rebuilt and returned to the U.S. as part of a campaign mounted by Lex Luthor, who used the positive publicity of his role to make a successful bid for the presidency of the United States. For a time, the city faces various complications from gang warfare and escalating vigilante actions, due to such events as Spoiler unintentionally triggering a gang war, the return of Jason Todd as the Red Hood, and Bruce Wayne's disappearance during the war against Darkseid. Although Dick Grayson takes on the role of Batman for a time, matters become worse when a complex conspiracy initiated by the Cluemaster results in multiple villains attacking all areas of Batman's life, ruining the reputation of Wayne Enterprises and seeing Commissioner Gordon framed for causing a mass train accident. After the destruction caused by the Joker's latest rampage, new villain Mr. Bloom sets out to destroy the city so that a new form can "grow" from it, but Bruce Wayne returns as Batman in time to defeat Bloom and reaffirm his role as Batman. Suggestions of other Gotham City histories include a founding date of 1820 seen in a city seal. "Batman" writer and editor Dennis O'Neil has said that, figuratively, Batman's Gotham City is akin to "Manhattan below Fourteenth Street at eleven minutes past midnight on the coldest night in November." Batman artist Neal Adams has long believed that Chicago has been the basis for Gotham, stating "one of the things about Chicago is Chicago has alleys (which are virtually nonexistent in New York). Back alleys, that's where Batman fights all the bad guys." The statement "Metropolis is New York in the daytime; Gotham City is New York at night" has been variously attributed to comics creators Frank Miller and John Byrne. In designing "Batman: The Animated Series", creators Bruce Timm and Eric Radomski emulated the Tim Burton films' "otherworldly timelessness," incorporating period features such as black-and-white title cards, police airships (although no such airships existed, Timm has stated that he found them to fit the show's style), and a "vintage" color scheme with film noir flourishes. Police airships have since been incorporated into Batman comic books and are a recurring element in Gotham City. Concerning the evolution of Gotham throughout the years, Paul Levitz, "Batman" editor and former DC Comics president, has stated "each guy adds their own vision. That's the fun of comics, rebuilding a city each time."
In the Batman comics, Judge Solomon Wayne, Bruce Wayne's ancestor, is cited as an influential figure in promoting the unique architecture of Gotham City during the 19th century. His campaign to reform Gotham came to a head when he met a young architect named Cyrus Pinkney. Wayne commissioned Pinkney to design and build the first "Gotham Style" structures in what became the center of the city's financial district. The writers' "Gotham Style" matches parts of the Gothic Revival in both style and timing. In the storyline of "Batman: Gothic", Gotham Cathedral plays a central role, having been built by Mr. Whisper, the story's antagonist. In a 1992 storyline, a man obsessed with Pinkney's architecture blew up several Gotham buildings in order to reveal the Pinkney structures they had hidden; the editorial purpose behind this was to transform the city depicted in the comics to resemble the designs created by Anton Furst for the 1989 "Batman" film. Alan Wayne expanded upon his father's ideas and built a bridge to extend the city. Edward Elliot and Theodore Cobblepot also each had a bridge named for them. "Batman Begins" features a CGI-augmented version of Chicago, while "The Dark Knight" more directly features Chicago infrastructure and architecture, such as Navy Pier. However, "The Dark Knight Rises" abandoned Chicago, instead shooting in Pittsburgh, Los Angeles, New York City, Newark, New Jersey, London and Glasgow. Gotham Academy has appeared in different comics and shows as the most prestigious private school in Gotham; Richard Grayson and Damian Wayne have both attended the school. Gotham City University is a major college located in the city. In the DC Extended Universe, Victor Stone attended the university prior to becoming Cyborg. Bruce Wayne's place of residence is Wayne Manor, which is located on the outskirts of the city. His butler, Alfred Pennyworth, aids Bruce in his crusade to fight crime in Gotham. Over the years, in various Bat titles in the chronological DC Comics continuity, the caped crusader enlists the help of numerous characters, the first being his trusty sidekick, Robin. Although it is a singular title, many have donned the mantle of the Boy Wonder over the years: the first later became Nightwing, followed by Red Hood, Red Robin, and finally Batman's son Damian Wayne. In addition to the Robins or former Robins, there are also Catwoman, Batgirl, and Huntress. Other DC characters have also been depicted as living in Gotham, such as mercenary Tommy Monaghan and renowned demonologist Jason Blood. Within modern DC Universe continuity, Batman is not the first hero in Gotham. Stories featuring Alan Scott, the Golden Age Green Lantern, set before and during World War II depict Scott living in Gotham, and later depictions show him running his Gotham Broadcasting Corporation. The original Golden Age Spectre and his sidekick, Percival Popp, also live in Gotham City, as do Starman and the Gay Ghost. DC's 2011 reboot of "All Star Western" takes place in an Old West-styled Gotham. Jonah Hex and Amadeus Arkham are among this version of Gotham's inhabitants. Apart from Gotham's superhero residents, the residents of the city feature in a back-up series in "Detective Comics" called "Tales of Gotham City" and in two limited series called "Gotham Nights". Additionally, the Gotham City Police Department is the focus of the series "Gotham Central", as well as mini-series including "Gordon's Law" and "Bullock's Law".
Due to the volatile nature of Gotham politics, turnover of mayors and other city officials is high. The first Gotham mayor depicted in comics was unnamed, but appeared as a caricature of New York mayor Fiorello La Guardia. Theodore Cobblepot, great-grandfather of the Penguin, was mayor in the late nineteenth century. An unnamed mayor ran afoul of the Court of Owls in 1914 and was killed by them. Archibald Brewster was mayor during the Great Depression. Mayor Thorndike was killed by the Made of Wood killer in 1948. Mayor Aubrey James, a contemporary of Thomas Wayne, was stabbed to death. Mayor Jessop was in office shortly after the Wayne murders. A man named Falcone was purportedly mayor during the earliest days of Batman's career. Shortly after, Mayor Wilson Klass directed the GCPD to turn a blind eye to Batman's activities after Batman saved his daughter. Mayor Hill was in office when the Joker debuted, and a man named Gill was mayor early in Batman's career, as was former police commissioner Grogan. An unnamed bald mayor was killed by a villain known as Midnight. Men named Carfax, Bradley Stokes, Sheppard, Taylor, and Hayes all served as mayor. Mayor Charles Chesterfield was killed by a sentient fat-eating blob of grease. Hamilton Hill became mayor through the backing of crime boss Rupert Thorne but was ultimately ousted from office and replaced by George P. Skowcroft. An unnamed mayor is killed by Deacon Blackfire's followers and replaced by Donald Webster. Mayor Julius Lieberman is killed by a Predator. Mayor Goode served briefly before being replaced by an African American man. Armand Krol became mayor and died of the Clench virus after leaving office. A woman, Marion Grange, became mayor with the backing of Bruce Wayne but was assassinated in Washington, D.C. while trying to secure federal aid for Gotham after an earthquake. In the wake of "No Man's Land", Daniel Danforth Dickerson III served as mayor only to be killed by a sniper, after which he was replaced by David Hull. Seamus McGreevy served as mayor in the midst of a criminal conspiracy known as "The Body". An unnamed woman was mayor when Batman returned to Gotham a year after the Infinite Crisis. Sebastian Hady was a corrupt mayor who was eventually killed by the League of Shadows. Councilwoman Muir served as interim mayor when the city was in the grip of a virus that only affected men. Michael Akins, former commissioner of police, was appointed mayor, and later replaced by a man named Atkins. In the wake of Bane's takeover of the city, the current mayor in DC Comics continuity is a man named Dunch. The 1960s live-action "Batman" television series never specified Gotham's location, though there are hints it actually represents New York City, including a city map and its location across the 'West River' from 'Guernsey City' in 'New Guernsey'. Fictional residents Mayor Linseed (portrayed by Byron Keith) and Governor Stonefellow are also direct allusions to real-life Mayor John Lindsay and Governor Nelson Rockefeller. The related theatrical movie showed Batman flying over suburban Los Angeles, the Hollywood Hills, palm trees, a harbor, a beach and a view of the Los Angeles City Hall. Gotham City is featured in "Batman: The Animated Series". When describing Gotham City, Paul Dini, a writer and director of the show, stated "in my mind, it was sort of like what if the 1939 World's Fair had gone on another 60 years or so". In one episode, a driver's license lists a Gotham area resident's hometown as "Gotham Estates, NY".
In one episode, when Bruce Wayne leaves for England, a map shows Gotham City at the junction of Long Island and the Hudson River. Another episode shows a character's address in a police file indicating that Gotham City is located in New York state. One episode, however, implies that Gotham resides in a state of the same name; a prison workshop is shown stamping license plates that read "Gotham: The Dark Deco State" (as a reference to the artistic style of the series). Yet another episode states that Gotham City has a population of approximately 10 million people. The live-action TV series "Gotham" was filmed in New York City, which was an important requirement of the show's creative team. According to executive producer Danny Cannon, its atmosphere is inspired by the look of the city itself in the 1970s films of Sidney Lumet and William Friedkin. Clues to this include signs showing phone numbers bearing the area code 212. Donal Logue, who portrays Harvey Bullock in the series "Gotham", described different aspects of that series' design of Gotham City as exhibiting different sensibilities, explaining, "for me, you can step into things that almost feel like the roaring 20s, and then there's this other really kind of heavy "Blade Runner" vibe floating around. There are elements of it that are completely contemporary and there are pieces of it that are very old-fashioned...There were a couple of examples of modern technology, but maybe an antiquated version of it, that gave me a little bit of sense that it's certainly not the 50s and the 60s...But it's not high tech and it's not futuristic, by any means." In the TV series "Smallville", Gotham City is mentioned by the character Linda Lake, who jokes in the episode "Hydro" that she can see Gotham from her view. It is also mentioned in "Reunion", where one of Oliver Queen's friends mentions having to get back to Gotham. The fifth episode of "Young Justice", entitled "Schooled", indicates that Gotham is located in Connecticut, near Bridgeport. Gotham City was first shown in the Arrowverse as part of "Elseworlds", a 2018 crossover storyline among the shows, which introduced Batwoman, although it had been referenced several times previously. For the TV series "Batwoman", both Vancouver and Chicago were used for Gotham City. In this show, the Crows have helped to defend the city from crime ever since Batman went missing three years before the events of the series. In "The Flash" episode "Marathon", a map places Gotham City in place of Chicago, Illinois. For "Batman" (1989), director Tim Burton wanted a timeless alternative to New York and described Gotham as a city where "hell burst through the pavement and grew". The look of Gotham was overseen by production designer Anton Furst, who won an Oscar for supervising the art department. Furst stated "Batman" was "definitely based in many ways on the worst aspects of New York City" and was inspired by Andreas Feininger's photographs of 1940s New York. Furst's draftsman Nigel Phelps created numerous charcoal drawings of the buildings and interior sets for the production. Following the death of Furst, Burton tapped Bo Welch to oversee production design for "Batman Returns" (1992). Burton wanted Welch to re-imagine Gotham, stating ""Batman" didn't feel big to me – it didn't have the power an old American city has". Welch wanted to expand on the same basic concept for the sequel but moved away from European influences to show more American Art Deco/world's fair elements.
When asked what inspired his interpretation of Gotham, Welch stated "[H]ow can I create a visual expression of corruption and greed? That got me thinking about the fascistic architecture employed at world's fairs... That feels corrupt because it's evocative of oppressive bureaucracies and dictatorships... So I looked at a lot of [Third Reich] art and images from world's fairs". To make the city seem darker, he designed a tall, "oppressively overbuilt" cityscape that physically blocked out light. When Joel Schumacher took over directing the "Batman" film series from Tim Burton, Barbara Ling handled the production design for both of Schumacher's films, "Batman Forever" (1995) and 1997's "Batman & Robin". Ling's vision of Gotham City was a luminous and outlandish evocation of Modern expressionism and Constructivism. Its futuristic concepts (to a certain extent, akin to the 1982 film "Blade Runner") appeared to be a cross between 1930s Manhattan and the "Neo-Tokyo" of "Akira". Ling admitted her influences for the Gotham City design came from "neon-ridden Tokyo and the Machine Age. Gotham is like a World's Fair on ecstasy." When Batman is pursuing Two-Face in "Batman Forever", the chase ends at Lady Gotham, the fictional equivalent of the Statue of Liberty. During Mr. Freeze's attempt to freeze Gotham in the film "Batman & Robin", the targeting screen for his giant laser locates it somewhere on the New England shoreline, possibly as far north as Maine. The soundtrack for "Batman & Robin" features a song named after the city and sung by R. Kelly, later included on international editions of his 1998 double album "R." Director Christopher Nolan has stated that Chicago is the basis of his portrayal of Gotham, and the majority of both "Batman Begins" (2005) and "The Dark Knight" (2008) were filmed there. However, the city itself seems to take many cues from New York City: police cars use a paint job that was used by the NYPD in the 1990s, as do garbage trucks, and the "Gotham Post" seems to have the same heading font as "The New York Post". In "Batman Begins", Nolan wanted Gotham to appear as a large, modern city that nonetheless reflected a variety of architectural styles and periods, as well as different socioeconomic strata. The production's approach depicted Gotham as an exaggeration of New York City, with elements taken from Chicago, the elevated freeways and monorails of Tokyo, and the Kowloon Walled City in Hong Kong, which was the basis for the slum in the film known as The Narrows. In the animated "Batman: Gotham Knight" (2008), which takes place between "Batman Begins" and "The Dark Knight", The Narrows was converted into an expansion of Arkham Asylum. In "The Dark Knight", more Chicago and New York influences were observed. On filming in Chicago, James McAllister, key location manager, stated, "visually it's that look like you would see in the comic books." Nolan also stated "there's all these different boroughs, with rivers to interconnect. I think it's hard to get away from that, because Gotham is based on New York." In the movie, it is revealed that downtown Gotham, or much of the city, is on an island, similar to New York City's Manhattan Island, as suggested by the "Gotham Island Ferry". However, while Gordon is discussing evacuation plans with the Mayor, land routes to the east are mentioned. In conversation with Harvey Dent, Bruce Wayne indicates that the Palisades of the Wayne Manor estate are within the city limits.
In terms of population, Lucius Fox says that the city houses "30 million people." The film indicates that the city's area code is 735, which in real life is an unused code. Compared to the previous film, less CGI was used in Gotham's skyline, resulting in plenty of shots of a digitally unaltered Chicago skyline. For "The Dark Knight Rises" (2012), the production utilized Pittsburgh, Los Angeles, New York City, Newark, New Jersey, London and Glasgow for shots of Gotham City. Within the DC Extended Universe, Gotham City is located in Gotham County, New Jersey. In "Batman v Superman: Dawn of Justice", paperwork mentions that the city is in "Gotham County," and Amanda Waller's files on Deadshot and Harley Quinn in "Suicide Squad" reveal Gotham City to be located in the state of New Jersey. Zack Snyder confirmed that Metropolis and Gotham City are in close geographical proximity to each other, with Gotham City being located on the edge of New Jersey, separated from the federal district of Metropolis by Delaware Bay. In "Justice League" it is revealed that there is a tunnel connecting the two cities, constructed as part of the abandoned 'Metropolis Project' in 1929. There are also multiple islands located in the bay, one of them named Braxton Island. Senator Debbie Stabenow makes a cameo appearance as the state's governor. The "Boston Globe" compared the close proximity of Gotham and Metropolis to that of Jersey City, New Jersey and Manhattan, New York. A television spot for Turkish Airlines premiering during the 2016 Super Bowl featured Bruce Wayne (played by the film's star, Ben Affleck) promoting Gotham as a tourist destination. To create the DC Extended Universe's Gotham, the creative team "decided to recreate and combine large sections of existing selected city sections and adapt the architecture and layout to fit Gotham's. Thousands of photographs were put through MPC's photogrammetry pipeline to create geometry and textures for each city section." "Joker" director and producer Todd Phillips imagined his version of Gotham as "the pre-'80s boom New York, or urban northeastern center, but not the iconic New York." When asked how he re-imagined the city, production designer Mark Friedberg stated "our version of Gotham was what groomed him. It was both an appreciation for how severe things got in the city, but also for the world of possibility that lived in the version of that city." In the direct-to-video film "Batman & Mr. Freeze: SubZero" (1998), a computer screen displaying Barbara Gordon's personal information refers to her location as "Gotham City, NY", and also displays her area code as being 212, a Manhattan area code. "Batman Beyond" (1999–2001) envisions a Gotham City in 2039, referred to as "Neo-Gotham". The 2008 direct-to-DVD film "Batman: Gotham Knight" shows Gotham as a large city with many skyscrapers and a bustling population. Gotham City appears in several video games, including "Batman Begins", "DC Universe Online", and "Mortal Kombat vs. DC Universe". The city makes another appearance in a fighting game in which the player can fight in front of and inside Wayne Manor, on top of a building, and in an alley as well. Gotham also appears in "Lego Dimensions", where it is a playable stage. In the "Batman: Arkham" universe, "Batman: Arkham Asylum" (2009) opens with Batman driving the Joker from Gotham City to Arkham Asylum, and the Joker also threatens to detonate bombs across Gotham. In "Batman: Arkham City" (2011), the slums of Old Gotham City (the northern island) were converted into Arkham City.
Inside the prison walls, this part of Gotham contains various landmarks throughout the story, such as the Penguin's Iceberg Lounge, the Ace Chemical Plant, the Sionis Steel Mill, the Old Gotham City Police Department building, and the Monarch Theatre, with the Wayne murder scene in Crime Alley. Most of these locations host major events in the story. "Batman: Arkham Origins" (2013) depicts an earlier, younger version of the city than that seen in the other games in the series. In addition to the northern island, this installment in the series lets players explore a new southern island, connected to the former by the Pioneer's Bridge. The setting of "Batman: Arkham Knight" (2015), Central Gotham City, is five times larger than Old Gotham.
https://en.wikipedia.org/wiki?curid=13090
Charles Goren Charles Henry Goren (March 4, 1901 – April 3, 1991) was an American bridge player and writer who significantly developed and popularized the game. He was the leading American bridge personality in the 1950s and 1960s (or, as "Mr. Bridge", the 1940s and 1950s), as Ely Culbertson had been in the 1930s. Culbertson, Goren, and Harold Vanderbilt were the three people named when "The Bridge World" inaugurated a bridge "hall of fame" in 1964, and they were made founding members of the ACBL Hall of Fame in 1995. According to "New York Times" bridge columnist Alan Truscott, more than 10 million copies of Goren's books were sold. Among them, "Point-Count Bidding" (1949) "pushed the great mass of bridge players into abandoning Ely Culbertson's clumsy and inaccurate honor-trick method of valuation." Goren was born in Philadelphia, Pennsylvania, to Russian Jewish immigrants. He earned a law degree at McGill University in Montreal in 1923. While he was attending McGill, a girlfriend (or "a young hostess") laughed at his ineptness at the game of bridge, motivating him to immerse himself in a study of existing bridge materials. (The young hostess laughed in 1922. The game was auction bridge, "which became contract bridge later in the decade".) When he graduated, he was admitted to the Pennsylvania bar, and he practiced law for 13 years in Philadelphia. The growing fame of Ely Culbertson, however, prompted Goren to abandon his original career choice to pursue bridge competitions, where he attracted the attention of Milton Work. Work hired Goren to help with his bridge articles and columns, and eventually Goren began ghostwriting Work's material. Work was one of numerous strong bridge players based in Philadelphia around the 1920s. He became an extraordinarily successful lecturer and writer on the game and perhaps the first who came to be called its "Grand Old Man". From 1928, he had popularized the 4–3–2–1 point count system for evaluating balanced hands (now sometimes called the Work count, Work point count, or Work points). His chief assistant Olive Peterson and young Goren established a partnership as players. Work was the greatest authority on auction bridge, which was generally replaced by contract bridge during the late 1920s. Goren "became Mr. Work's technical assistant at the end of the decade". As a player, Goren's "breakthrough" was the 1937 Board-a-Match Teams championship (now the Reisinger), won with three other Philadelphia players: John Crawford, Charles Solomon, and Sally Young. His breakthrough as a writer may have come when Culbertson moved his newspaper bridge column from one syndicate to another; the "Chicago Tribune" and the "Daily News" of New York picked up Goren. After Milton Work died in 1934, Goren began his own bridge writing career and published the first of his many books on playing bridge, "Winning Bridge Made Easy", in 1936. Drawing on his experience with Work's system, Goren quickly became popular as an instructor and lecturer. His subsequent lifetime of contributions to the game have made him one of the most important figures in the history of bridge. Goren became world champion at the Bermuda Bowl in 1950. Goren's books have sold millions of copies (especially "Winning Bridge Made Easy" and "Contract Bridge Complete"); by 1958 his daily bridge column was appearing in 194 American newspapers. He also had a monthly column in "McCall's" and a weekly column in "Sports Illustrated".
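The Work count that Goren built on, and the distributional additions he later layered on top of it, amount to simple arithmetic that can be illustrated in a few lines of code. The Python sketch below is illustrative only: it assumes the standard 4-3-2-1 high-card values and the commonly taught bonuses of one point for a doubleton, two for a singleton, and three for a void, and the function name and example hand are invented for this illustration rather than taken from Goren's books.

```python
# Work count: ace = 4, king = 3, queen = 2, jack = 1.
HCP = {"A": 4, "K": 3, "Q": 2, "J": 1}

# Commonly taught Goren-style distributional bonuses for short suits
# (assumed here for illustration): void = 3, singleton = 2, doubleton = 1.
SHORTNESS_BONUS = {0: 3, 1: 2, 2: 1}

def evaluate_hand(hand):
    """Return total points for a hand given as suit -> string of card ranks."""
    high_card_points = sum(
        HCP.get(rank, 0) for cards in hand.values() for rank in cards
    )
    distribution_points = sum(
        SHORTNESS_BONUS.get(len(cards), 0) for cards in hand.values()
    )
    return high_card_points + distribution_points

# Example: 13 high-card points plus 1 for the doubleton heart = 14 total,
# comfortably enough to open the bidding in a Goren-style system.
hand = {"spades": "AKQ72", "hearts": "95", "diamonds": "KJ4", "clubs": "T83"}
print(evaluate_hand(hand))  # 14
```

Counting points on a single additive scale like this, rather than tallying Culbertson's honor tricks, is what made the method easy enough for beginners to apply at the table.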
His television program, "Championship Bridge with Charles Goren", was broadcast from 1959 to 1964 on the ABC network. It featured numerous appearances by top players and segments with celebrity guests such as Chico Marx, Alfred Drake, and Forest Evashevski, among others. On December 10, 1961, Goren appeared as a mystery guest on the CBS program "What's My Line?", as did Michael Redgrave; John Daly hosted, with panelists Arlene Francis, Johnny Carson, Dorothy Kilgallen, and Bennett Cerf. Goren's longest partnership was with Helen Sobel, but he also famously partnered actor Omar Sharif. Sharif also wrote introductions to or co-authored several of Goren's bridge books, and was co-author of Goren's newspaper column, eventually taking it over in collaboration with Tannah Hirsch. As he continued writing, Goren began to develop his point count system, based on the Milton Work point count, as an improvement over the existing system of counting "honor tricks". Goren, with assistance, formulated a method of combining the Work count, which was based entirely on high cards, with various distributional features. This may well have improved the bidding of intermediate players and beginners almost immediately. Goren also worked to continue the practice of opening four-card suits, with an occasional three-card club suit when the only four-card suit was weak. In this, he was following the practice established by Ely Culbertson in the early 1930s. Later on, he continued this practice, resisting the well-known "five-card majors" approach that has become a major feature of modern Standard American bidding. Opening a four-card suit can improve the chances of the partnership identifying a four-four trump fit, and the four-card approach is still used by experts today, notably by most Acol players. It is claimed that the drawback of the four-card approach is that the Law of Total Tricks is more difficult to apply in cases where it is used. However, the five-card majors approach became popular before the Law of Total Tricks was propounded. In addition to his pioneering work in bringing simple and effective bridge to everyday players, Goren also worked to popularize the Precision bidding method, which is one of many so-called big club or strong club systems (which use an opening bid of one club to indicate a strong hand). Tribune Content Agency distributes the daily column Goren Bridge, written by Bob Jones, using the Goren method. Goren died on April 3, 1991, in Encino, California, at the age of 90. He had lived with his nephew Marvin Goren for 19 years. While few players "play Goren" exactly today, the point-count approach he popularized remains the foundation for most bidding systems. During the month of Goren's death, Truscott followed his obituary with a bridge column entitled "Goren leaves behind many fans and a column with an international flavor". His business interests had been "managed by others" since his retirement "a quarter of a century ago", according to Truscott. "The Goren syndicated column now has an international flavor: It carries the bylines of the movie star Omar Sharif, an Egyptian who lives in Paris, and an entrepreneur, Tannah Hirsch, a South African who came to the United States via Israel."
https://en.wikipedia.org/wiki?curid=13092
Galactus Galactus is a fictional character appearing in American comic books published by Marvel Comics. Formerly a mortal man, Galactus is a cosmic entity who originally consumed planets to sustain his life force, and serves a functional role in the upkeep of the primary Marvel continuity. Galactus was created by Stan Lee and Jack Kirby and first appeared in the comic book "Fantastic Four" #48, published in March 1966. Lee and Kirby wanted to introduce a character that broke away from the archetype of the standard villain. In the character's first appearance, Galactus was depicted as a god-like figure who feeds by draining living planets of their energy, and operates without regard to the morality and judgments of mortal beings. Galactus's initial origin was that of a space explorer named Galan who gained cosmic abilities by passing near a star, but writer Mark Gruenwald further developed the origin of the character, revealing that Galan lived during the previous universe that existed prior to the Big Bang which began the current universe. As Galan's universe came to an end, Galan merged with the "Sentience of the Universe" to become Galactus, an entity that wielded such cosmic power as to require devouring entire planets to sustain his existence. Additional material written by John Byrne, Jim Starlin, and Louise Simonson explored Galactus's role and purpose in the Marvel Universe, and examined the actions of the character through themes of genocide, manifest destiny, ethics, and natural/necessary existence. Frequently accompanied by a herald (such as the Silver Surfer), the character has appeared as both antagonist and protagonist in central and supporting roles. Since debuting in the Silver Age of Comic Books, Galactus has played a role in over five decades of Marvel continuity. The character has been featured in other Marvel media, such as arcade games, video games, animated television series, and the 2007 film "Fantastic Four: Rise of the Silver Surfer". In 2009, Galactus ranked 5th on IGN's list of "Top 100 Comic Book Villains", which cited the character's "larger-than-life presence" as making him one of the more important villains ever created. IGN also noted "Galactus is one of the few villains on our list to really defy the definition of an evil-doer", as the character is compelled to destroy worlds because of his hunger. Created by writer-editor Stan Lee and artist-coplotter Jack Kirby, the character debuted in "The Fantastic Four" #48 (March 1966), the first of a three-issue story later known as "The Galactus Trilogy". In 1966, nearly five years after launching Marvel Comics' flagship superhero title, "Fantastic Four", creators Stan Lee and Jack Kirby collaborated on an antagonist designed to break the supervillain mold: a tyrant with god-like stature and power. As Lee recalled in 1993, Kirby described his biblical inspirations for Galactus and an accompanying character, an angelic herald whom Lee called the Silver Surfer. Kirby elaborated: "Galactus in actuality is a sort of god. He is beyond reproach, beyond anyone's opinion. In a way he is kind of a Zeus, who fathered Hercules. He is his own legend, and of course, he and the Silver Surfer are sort of modern legends, and they are designed that way." Writer Mike Conroy expanded on Lee and Kirby's explanation: "In five short years from the launch of the "Fantastic Four", the Lee/Kirby duo ... had introduced a whole host of alien races or their representatives ...
there were the Skrulls, the Watcher and the Stranger, all of whom Lee and Kirby used in the foundations of the universe they were constructing, one where all things were possible but only if they did not flout the 'natural laws' of this cosmology. In the nascent Marvel Universe, characters acted consistently, whatever comic they were appearing in. Their actions reverberated through every title. It was pure soap opera but on a cosmic scale, and Galactus epitomized its epic sweep." This led to the introduction of Galactus in "Fantastic Four" #48–50 (March–May 1966), which fans began calling "The Galactus Trilogy". Kirby did not intend for Galactus to reappear, in order to preserve the character's awe-inspiring presence. Fan popularity, however, prompted Lee to ask Kirby for Galactus's reappearance, and the character became a mainstay of the Marvel Universe. To preserve the character's mystique, his next two appearances were nonspeaking cameos in "Thor" #134 (Nov. 1966) and "Daredevil" #37 (Feb. 1968), respectively. Numerous requests from fans prompted the character to be featured heavily in "Fantastic Four" #72–77 (March–Aug. 1968). After a flashback appearance in "Silver Surfer" #1 (Aug. 1968), the character returned to Earth in "Thor" #160–162 (Jan. – March 1969). Galactus's origin was eventually revealed in "Thor" #168–169 (Sept. – Oct. 1969). The character made appearances in "Fantastic Four" #120–123 (March – June 1972) and "Thor" #225–228 (July – Oct. 1974); these two storylines introduced two new heralds for Galactus. Galactus also featured in "Fantastic Four" #172–175 (July – Oct. 1976) and #208–213 (July – Dec. 1979). Stan Lee and Jack Kirby reunited for the origin of the Silver Surfer and Galactus in the one-shot graphic novel "The Silver Surfer: The Ultimate Cosmic Experience!" in 1978. This Marvel Fireside Book, published by Simon & Schuster, was an out-of-continuity retelling of the origin story without the Fantastic Four. The full Lee-and-Kirby origin story was reprinted in the one-volume "Super-Villain Classics: Galactus the Origin" #1 (May 1983), inked by Vince Colletta and George Klein, lettered by John Morelli and colored by Andy Yanchus. While nearly identical to the previous origin, this story featured supplemental material, edits, and deletions by writer Mark Gruenwald, pencillers John Byrne and Ron Wilson and inker Jack Abel. Rather than traveling into a dying star, the character enters the core of the collapsing universe before the Big Bang; the story was later reprinted as "Origin of Galactus" #1 (Feb. 1996). The character guest-starred in "Rom" #26–27 (Jan. – Feb. 1982). Galactus featured in two related storylines, "Fantastic Four" #242–244 (May – July 1982) and later #257 (August 1983), in which writer-artist John Byrne introduced the conceit of Galactus feeling remorse for his actions and the weight of his genocides. In the latter issue, Death assures Galactus of his role and purpose as one of shepherd and weeder in guiding the universe to its proper purpose, and Galactus remains resolute. Byrne further elaborated on this concept in "Fantastic Four" #262 (Jan. 1984), which sparked controversy. At the end of the story, Eternity, an abstract entity in the Marvel Universe, appears to validate the existence of Galactus as necessary for the natural order and essential to prevent an even more catastrophic fate; Howard University professor of literature Marc Singer criticized this, accusing the writer-artist of using the character to "justify planetary-scale genocide."
Byrne and Stan Lee also collaborated on a one-shot Silver Surfer story (June 1982) in which it is revealed that, after the Surfer's rebellion against Galactus, Galactus returned to Zenn-La, the Surfer's homeworld, and drained it of energy after allowing the populace to flee. Writer-penciller John Byrne and inker Terry Austin produced "The Last Galactus Story" as a serial in the anthology comic-magazine "Epic Illustrated" #26–34 (October 1984 – February 1986). Nine of a scheduled 10 installments appeared, each six pages long with the exception of the eighth installment (12 pages). The magazine was canceled with issue #34, leaving the last chapter unpublished and the story unfinished; however, Byrne later published the conclusion on his website. Galactus played a pivotal role in the limited series "Secret Wars" #1–12 (May 1984 – April 1985), and became a recurring character in the third volume of "Silver Surfer", beginning with issue #1 (July 1987). Stan Lee and artist John Buscema also produced the 64-page hardcover "Silver Surfer: Judgment Day" (Oct. 1988), in which Galactus clashes with the demonic entity Mephisto. Galactus was featured in the miniseries "Infinity Gauntlet" #1–6 (July – Dec. 1991), "Infinity War" #1–6 (June – Nov. 1992) and "Cosmic Powers" #1–6 (March – Aug. 1994). The character starred in the six-issue miniseries "Galactus the Devourer" (September 1999 – March 2000), written by Louise Simonson and illustrated by John Buscema, which climaxed with Galactus's death. Simonson originally conceived that the story arc would occur in "Silver Surfer" (vol. 3), but the title was canceled due to dwindling sales. She proposed a separate limited series, and was initially doubtful that Marvel would approve what she considered a "radical" idea concerning "why the very existence of the universe depends on the health and well-being of Galactus." The consequences of Galactus's death are explored in "Fantastic Four Annual 2001" and "Fantastic Four" #46–49 (Oct. 2001 – Jan. 2002), written by Jeph Loeb, and culminate in Galactus's revival, bringing resolution to Simonson's cliffhanger from the "Devourer" story arc. The character features in the first six issues of the series "Thanos" (Dec. 2003 – May 2004), written by Jim Starlin. Issues #7–12, written by Keith Giffen, introduce Galactus's first herald (the Fallen One). Galactus's origin is re-examined in "Fantastic Four" #520–523 (Oct. 2004 – April 2005), in which the character is temporarily reverted to his mortal form. After appearing in the limited series "Stormbreaker: The Saga of Beta Ray Bill" #1–6 (March – Aug. 2005), Galactus was a central character in the "Annihilation" storyline, appearing in the limited series "Annihilation: Silver Surfer" #1–4 (June – Sept. 2006), "Annihilation" #1–6 (Oct. 2006 – March 2007) and the epilogue, "Annihilation: Heralds of Galactus" #1–2 (Feb. – April 2007). Galactus was an antagonist in "Fantastic Four" #545–546 (June – July 2007), where he tried to devour fellow cosmic function Epoch. In "Nova" (vol. 4) #13–15 (May – July 2008), the character had no dialogue. Author Andy Lanning said that he and co-writer Dan Abnett were "treating Galactus like a force of nature; an inevitable, planetary catastrophe that there is no reasoning with, no bargaining with and no escaping." Galactus also appeared in the limited series "Beta Ray Bill: Godhunter" #1–3 (June – Aug. 2009), a sequel to "Stormbreaker: The Saga of Beta Ray Bill".
Galactus and the Silver Surfer appeared as antagonists in "Skaar: Son of Hulk" #9–11, and as protagonists in the limited series "The Thanos Imperative" (June – Nov. 2010). Galactus was a member of the God Squad in the miniseries "Chaos War" #2–5 (Dec. 2010 – March 2011). After an appearance in "Fantastic Four" #583–587 (Nov. 2010 – March 2011), the character returned to Earth in "Silver Surfer" (vol. 6) #1–5 (Jan. – May 2011) and was the antagonist in "The Mighty Thor" #1–6 (April – Sept. 2011). Galactus played a supporting role in the storyline "Forever", featured in "Fantastic Four" #600–604 (Nov. 2011 – Mar. 2012) and "FF" #16 (Mar. 2012) by Jonathan Hickman, where Hickman introduced the concept of a shared destiny between Galactus and Franklin Richards; writer Mark Waid would subsequently develop this concept (see below). The character played a central role as antagonist in "Hunger" #1–4 (2013), in which the mainstream Galactus of the primary Marvel continuity merges with his counterpart from the Ultimate Marvel publication imprint, Gah Lak Tus. Writer Joshua Hale Fialkov commented that his intent was to use Galactus as a means to place the characters from the Ultimate Marvel imprint into a completely unexpected crisis: "What I hope comes across is the sense of wonder that's being brought into the Ultimate Universe... with the smart, modern tone Brian has established." Following his appearance in "Hunger", Galactus was a major supporting character in "Ultimates" (vol. 2) #1–6 (Jan. – June 2016), where writer Al Ewing fundamentally changed the nature of Galactus's character. During the events of the story, Galactus is transformed into "The Lifebringer," a being who is compelled to infuse dead planets with life-sustaining energies, thus altering the character's primary motive for the first time since Galactus's debut in 1966. Elaborating on what inspired the change, Ewing explained: "What inspired it -- a mixture of wanting someone big on or allied with the team -- originally, we thought about Odin, but he's a bit busy -- and my usual preoccupations with atonement, redemption, growth and change. So what can [Galactus] do now? Well, whereas before he was taking in vast amounts of energy, now he's putting out vast amounts of energy -- pure life energy. He always said he was going to give back more than he took out of the universe -- now he's making good on that, one dead world at a time." The themes of redemption and change were received well by columnist Mark Peters, who described Ewing's work on "Ultimates" as "one of the best Galactus stories ever." Galactus featured prominently in a direct sequel series to "Ultimates", titled "Ultimates 2" #1–10 (Aug. 2016 – Nov. 2017), which focused on the Lifebringer Galactus as the de facto leader of the Ultimates. Galactus was reverted to his "Devourer of Worlds" persona by writer Gerry Duggan in "Infinity Countdown" #4 (June 2018). Set at the end of the primary Marvel continuity, the limited series "History of the Marvel Universe" #1–6 (July – Dec. 2019) by Mark Waid depicted Galactus as the in-story narrator. The story features Galactus recounting all the major events that have occurred in Marvel continuity to Franklin Richards as the universe experiences its final moments. Confirming the series as occurring within the primary Marvel continuity, Waid elaborated that "[t]here is a framing device, yes. We wanted it to be a story, not just a long Wikipedia entry.
As established in Jonathan Hickman's "Fantastic Four" run, there comes a point when Galactus and Franklin Richards stand together at the end of time, and now we get to see exactly what they were doing there." Galactus was originally the explorer Galan of the planet Taa, which existed in the prime pre-Big Bang universe. When an unknown cosmic cataclysm gradually begins killing off all of the other life in his universe, Galan and other survivors leave Taa on a spacecraft and are engulfed in the Big Crunch. Galan, however, does not die: after bonding with the Sentience of the Universe, he changes and gestates for billions of years in an egg made of the debris of his ship, within the current universe that formed after the Big Bang. He emerges as Galactus, and though a Watcher observes Galactus's birth and recognizes his destructive nature, the Watcher chooses not to kill him. Starving for sustenance, Galactus consumes the nearby planet of Archeopia, the first of many planets he would destroy to maintain his existence. Subsequently, in memory of his dead home world Taa and of Archeopia, the first planet to fall prey to his hunger, Galactus constructs a new "home world": the Möbius strip-shaped space station called "Taa II". Galactus becomes involved in a civil war among the "Proemial Gods", who had come into being during the universe's infancy. When a faction of the gods led by Diableri of Chaos attempts to remake the universe in their own image, Galactus kills Diableri and imprisons three others (Antiphon, Tenebrous, and Aegis) in the prison called the Kyln. Galactus then decides to create a herald to locate worlds for sustenance, but fails when the first, Tyrant, rebels, and the second, the Fallen One, is dismissed for his bloodthirsty attitude. When approaching the planet of Zenn-La, Galactus accepts the offer of Norrin Radd to become his herald, the Silver Surfer, in exchange for sparing his world. Eventually locating Earth, Galactus is driven off by the Fantastic Four, Uatu the Watcher, and the rebellious Silver Surfer after the Human Torch, with the Watcher's assistance, retrieves the Ultimate Nullifier from Taa II. Although Galactus leaves Earth, vowing that he will never try to consume it again, he banishes the Surfer to Earth for betraying him. Galactus later returns for his former herald, but the Surfer is unrepentant and chooses to remain on Earth. Thor learns of Galactus's origin when the entity comes into conflict with Ego the Living Planet. Returning to Earth, Galactus unsuccessfully tries to re-enlist the Silver Surfer. After the Fantastic Four and the Surfer defeat Galactus's new herald, the Air-Walker, Mr. Fantastic reprograms Galactus's ship to travel to the Negative Zone, which contains many uninhabited worlds that could potentially be consumed. Thor and his Olympian ally Hercules encounter Galactus when his next herald, Firelord, travels to Earth to be free of his master. Galactus frees Firelord when Thor presents Galactus with the Asgardian Destroyer to animate and use as a herald. Galactus comes into conflict with the High Evolutionary when attempting to devour Counter-Earth, but he is temporarily transformed into harmless energy after attempting to devour the planet Poppup. After returning to normal form, Galactus is sought by the Fantastic Four to help stop a new cosmic threat, the Sphinx. Mr. Fantastic offers to release Galactus from his vow to avoid Earth if he helps defeat the Sphinx. Galactus agrees if the Fantastic Four first recruit a being called Tyros as a new herald.
The quartet succeed, and the newly empowered and renamed Terrax the Tamer leads his master to Earth. Galactus locates and defeats the Sphinx in Egypt, but is confronted by Mr. Fantastic, who, unbeknownst to Galactus, wields a fake Ultimate Nullifier. Unable to read Richards's mind (which is protected by the Watcher), Galactus retreats. Galactus empowers and uses the superheroine Dazzler to locate a missing Terrax, who is in fact hiding from his master inside a black hole. Dazzler defeats and retrieves Terrax, and forces Galactus to return her to Earth. Galactus is fooled by the Galadorian Spaceknight Rom into trying to devour the Black Nebula, home of the alien Dire Wraiths, but he is repelled by the Wraiths' Dark Sun. A weakened Galactus pursues the rebellious Terrax to Earth and strips him of his power. Near death, Galactus is saved by the Fantastic Four and the Avengers while also acquiring another herald: Nova. Galactus destroys the Skrull homeworld, and discusses his role in the universe with the fellow cosmic entity Death. Mr. Fantastic is captured by the Shi'ar for saving Galactus's life, and is tried by aliens who survived the annihilation of their worlds by Galactus. During the trial, the cosmic entity Eternity—the sentient embodiment of space and reality of the Marvel Universe—intervenes, allowing all beings present to momentarily become one with the universe and thereby understand that Galactus is a necessary part of the cosmic order. During the Secret Wars, Galactus attempts to consume Battleworld in order to force the Beyonder to remove his hunger, but his plan is foiled by Doctor Doom. Galactus grants clemency to the Surfer, who aids his former master against the Elders of the Universe and the In-Betweener. The entity also rescues the Surfer and Nova from Mephisto's realm, and aids the cosmic hierarchy in a war against the mad Eternal Thanos, who wields the Infinity Gauntlet. When Nova is conscience-stricken at causing the death of billions of aliens, Galactus takes on a new herald, the bloodthirsty Morg the Executioner. Tyrant eventually returns, and Morg sacrifices himself to stop the entity by using the Ultimate Nullifier. Galactus then decides, with help from new herald Red Shift, to devour only the energy of living beings, which brings him into conflict with alien races and Earth's heroes. During a final confrontation near the home world of the Shi'ar, the Silver Surfer turns Galactus's siphoning machines against him. A starving Galactus dies, taking the form of a star. The death of Galactus allows the entity Abraxas (a metaphysical embodiment of destruction and the antithesis of the cosmic entity Eternity) to emerge from imprisonment. Abraxas wreaks havoc across thousands of alternate universes, killing various incarnations of Galactus, before the children of Reed Richards—Franklin Richards and Valeria Von Doom—exhaust their powers to restore the original Galactus. Galactus then provides Mr. Fantastic with the Ultimate Nullifier, which he uses to reset reality and prevent Abraxas's initial escape and destruction. Conscience-stricken, Galactus tries to rid himself of his hunger by feeding on the power of the Infinity Gems, but is tricked into releasing the Hunger, a being which feeds on entire galaxies. The Hunger is destroyed when Thanos orchestrates a final battle with Galactus. 
When an alien race develops a technology to make planets invisible to Galactus, he empowers the Human Torch (who has traded powers with the Invisible Woman) and uses the hero as an unwilling herald to locate the planets. The Fantastic Four and Quasar free the Torch by changing Galactus back into the humanoid Galan, who chooses to exile himself to an energy-rich alternate dimension before he can transform back into Galactus, so that he can feed on that reality without endangering planets. Galactus consumes Beta Ray Bill's Korbinite home world with the aid of new herald Stardust. When the Negative Zone villain Annihilus declares war on the universe, his forces attack and destroy the Kyln, freeing former Galactus foes Tenebrous and Aegis. Sensing their release, Galactus temporarily releases Stardust from service and re-employs the Silver Surfer as his herald, owing to the Surfer's familiarity with their old foes. Aegis and Tenebrous, however, find and defeat the Silver Surfer and Galactus and deliver them to Annihilus. Annihilus intends to use Galactus as a weapon to destroy all life in the universe, but is thwarted when the entity is freed by Drax the Destroyer. Galactus retaliates and destroys most of Annihilus's forces. Seeking a final confrontation with Aegis and Tenebrous, Galactus sends the Silver Surfer to locate them. The Surfer eventually draws the pair into the barrier between the universe and the Negative Zone, which destroys both. After an encounter with Epoch, Galactus consumes the planet Orbucen. When a distraught Beta Ray Bill seeks vengeance for the destruction of the Korbinite home world, Galactus relents and creates a female Korbinite as a companion for Bill. Galactus also consumes the planet Sakaar, earning the enmity of Skaar and Hiro-Kala. The Silver Surfer finds the body of a future Galactus under New York City, and summons the present Galactus to Earth. Mr. Fantastic explains that in the distant future, the heroes of a dying Earth had killed Galactus and then escaped to the present via time travel. When Galactus discovers these heroes now live on a planet called Nu-Earth, he destroys it and its inhabitants in retribution. A tear in the fabric of space caused by the Annihilation Wave and other interstellar conflicts allows the extra-universal forces of the Cancerverse (a universe without death) to invade. Galactus, the Celestials, and the resurrected Tenebrous and Aegis combat the powerful Cancerverse weapon, the Galactus Engine (constructed from the corpse of the Cancerverse's counterpart to Galactus). During the events of the Chaos War, Galactus is teleported to Earth by the demigod Hercules to help fight the Chaos King (Amatsu-Mikaboshi), a metaphysical embodiment of oblivion and another antithesis of Eternity. While the Hulk and his allies (the God Squad, Alpha Flight, and several Avengers) fight the Chaos King's forces, Hulk ally Amadeus Cho and Galactus develop a machine intended to move Earth to a safe location in a sealed-off continuum, but adapt the plan to instead trap Amatsu-Mikaboshi in that dimension. After an encounter with the High Evolutionary, Galactus invades Asgard, home of the Norse gods, seeking an Asgardian artifact that could sate his hunger and spare future civilizations. Odin, ruler of the Norse gods, contends that Galactus wishes to ensure that he is not replaced in the next universe. To avoid a protracted battle, the Silver Surfer offers to remain on Earth to guard the artifact, on the proviso that Galactus may have it once Asgard eventually passes. 
Galactus recruits a preacher, whom he names Praeter, to be his new herald. Later, when the Mad Celestials from Earth-4280 invade, Galactus destroys one before being struck down by the others. Revived by Franklin Richards, Galactus joins Franklin in vanquishing the remaining Celestials and preventing the destruction of Earth. In the aftermath, Galactus learns that he will no longer face the eventual end of the universe alone; he and Franklin will witness it together. Galactus is then pulled through a hole in space-time to an alternate universe and meets another version of himself: a space-faring mechanical hive-mind called "Gah Lak Tus". After the two merge with one another, Galactus makes his way towards this universe's Earth in an attempt to consume it. The heroes of the alternate Earth travel to Earth-616 to acquire information on Galactus and eventually manage to send him to the Negative Zone, reasoning that he will starve to death there because the region is composed of anti-matter. A comatose Galactus is found by the Eternals and Aarkus, who hope to use him in their war on the alien Kree. Galactus returns to the universe, and after an encounter with Squirrel Girl, is forced into his own incubator by the Ultimates, who are determined to end his threat. Galactus re-emerges as a Lifebringer instead of a Devourer of Worlds, his first act being to restore Archeopia, the first world that he ever consumed. The entity later rescues the team at the behest of Eternity, and learns that the latter has been imprisoned by an unknown force. Galactus also comes into conflict with the fellow cosmic entities Lord Chaos and Master Order, who, with the Molecule Man, wish for Galactus to return to his former role as a Devourer of Worlds and thereby restore order to the universe. Galactus locates the hero Anti-Man outside the multiverse and, after transforming him into a Herald of Life, sends him to recruit the recently disbanded Ultimates to help discover the identity of Eternity's captor, who is later revealed to be the First Firmament, the first iteration of the cosmos. Lord Chaos and Master Order bring Galactus to trial before the Living Tribunal, still seeking to restore Galactus to his former state for the sake of the cosmic balance. Although Galactus successfully argues that the balance of the new Multiverse is different and that his old role is obsolete, the Tribunal is destroyed by a Firmament-influenced Master Order and Lord Chaos. After a brief battle, Master Order decides to create a new cosmic order, which it and Lord Chaos control. Their former servant, the In-Betweener, is forcibly merged with them into a new cosmic being called "Logos". After destroying several Celestials, Logos forcibly transforms Galactus back into the Devourer of Worlds. The process is reversed when Anti-Man sacrifices his life to restore Galactus as the Lifebringer. Galactus then swears to free the imprisoned Eternity. During the "Infinity Countdown" storyline, the Silver Surfer requests Galactus's aid in defeating Ultron/Hank Pym by consuming the planet Saiph, which has been overrun by Ultron drones. Galactus reluctantly agrees. After consuming Saiph, Galactus's hunger returns, and the Silver Surfer becomes his herald again as he takes Galactus to find an uninhabited planet. Returning to Earth, Galactus has a confrontation with Doctor Doom and the Fantastic Four, with Doom revealing his plans to harness Galactus's power as an energy source. 
Banished to the mystical realms by an alien sorcerer, Galactus becomes entangled in the schemes of Dormammu and Mephisto. Galactus begins to consume mystical energy, eventually absorbing Dormammu and other mystical entities, and in so doing grows mad and destabilizes reality. Doctor Strange intervenes and, with the aid of Eternity and the Living Tribunal, is able to undo the damage wrought by Galactus. An injured Galactus crash-lands on Asgard, seemingly seeking asylum from the Black Winter, the cause of his universe's destruction. Approached by All-Father Thor, several of his heralds, and an Asgardian army, Galactus reveals that he did not come to Asgard for asylum and that he has had a vision of Thor being responsible for his death. Wanting to keep Thor close and to defeat the Black Winter, Galactus turns him into his Herald of Thunder. The first living entity in the universe, Galactus was created during the union of the Sentience of the (previous) Universe and Galan of Taa, and is described as "the physical, metamorphosed embodiment of a cosmos." Although not an abstract, non-corporeal entity, his true form cannot be perceived by most beings; each species sees Galactus in a form it can comprehend, similar to a member of its own race or a deity of its religion. Galactus can also appear as a humanoid star when addressing fellow members of the cosmic hierarchy. By consuming planets, Galactus embodies a living force of nature whose existence is necessary to correct the imbalance between the conceptual entities Eternity and Death, as well as to serve as a cosmic test of survival for civilizations. Additionally, the continued existence of Galactus ensures the confinement of the cosmic entity Abraxas. As Galactus requires planets with the potential to support life, his existence also causes the extinction of entire extraterrestrial civilizations. Consuming planets maintains Galactus's ability to use his powers. To facilitate consumption, he can employ the Elemental Converter, which efficiently converts matter into energy. Alternatively, Galactus can absorb energy directly from cosmic beings and even mystical entities, though with unpredictable results. Processing this cosmic energy allows Galactus to wield a force known as the Power Cosmic, with which he has demonstrated universal cosmic awareness, telepathy, telekinesis, energy projection, size alteration, transmutation of matter, teleportation of objects across space, creation of force fields and interdimensional portals, creation of life, resurrection, manipulation of souls, memories, and emotions, and mass-scale feats such as recreating dead worlds in every detail (including illusions of their entire populations) and destroying multiple solar systems simultaneously. To aid in his search for suitable planets, Galactus frequently appoints an individual as his "herald", granting each in turn a small fraction of the Power Cosmic. This power replaces the aura (or soul) of the recipient, and each wielder's physical form adapts to store the energy and manipulate it for feats such as energy projection. Galactus is also capable of removing the Power Cosmic from a herald. Galactus has on occasion been severely weakened by a lack of sustenance, and was once defeated while in this state by the combined Fantastic Four and Avengers. 
In this weakened condition, Galactus has also shown susceptibility to Ikonn's spell, which forces him to remember all of the beings that he has destroyed through his feeding. Galactus also employs incredibly advanced science, capable of producing objects such as the Punisher robots, the Ultimate Nullifier (a weapon capable of destroying and remaking the multiverse), and his space station Taa II. Reed Richards has speculated that Taa II may be the greatest source of energy in the universe. Galactus has employed numerous heralds in the main continuity of the Marvel Universe. Numerous versions of Galactus also exist in alternate universes. The final issue of "The Adventures of the X-Men" reveals that the previous universe from which Galan originates was Earth-92131, which was being destroyed when the Dweller-In-Darkness used the M'Kraan Crystal to feed on the energies of the dying universe. Galan's rebirth as Galactus is depicted as being observed by the Living Tribunal and the Brothers from "DC vs. Marvel". In the Amalgam Comics universe that combines Marvel and DC characters, Galactus is combined with DC's Brainiac to create Galactiac, a being that consumes planetary energy but leaves some of each world intact for his own personal study. In the five-issue miniseries "Bullet Points" (Jan. – May 2007), Galactus arrives on Earth with the Silver Surfer and kills most of Earth's heroes. Their sacrifice inspires the Surfer to turn on Galactus, who subsequently flees Earth. The limited series "The Thanos Imperative" features the huge Galactus Engine. In the limited series "Earth X", Galactus is one of three entities in the universe responsible for keeping the cosmic entities known as the Celestials in check. By destroying planets (the "eggs" of the Celestials), Galactus prevents the beings from overpopulating the universe. Franklin Richards eventually adopts Galactus's identity. The series "Exiles" features a version of Galactus that restores rather than destroys worlds, and empowers the being Sabretooth to defeat a renegade Silver Surfer. In the alternate future of Earth-691, the original Guardians of the Galaxy witness the formation of a symbiotic relationship between Galactus and the former Silver Surfer, now known as the Keeper. Having been named a Protector of the Universe by Eon and further empowered with the Quantum Bands, the Keeper possesses sufficient power to constantly supply Galactus with energy, ending his need to consume worlds. The second volume of the "Fantastic Four" features a pocket universe created by Franklin Richards after the events of the "Onslaught" saga, and includes a version of Galactus with five heralds, all of whom are worshiped by the Inhumans. This Galactus appears as a gigantic, planet-sized life form—complete with a single, massive eye and tentacles—covered with a number of life forms ("Galactus spores"), which aid its digestion. The limited series "Marvel Zombies" features the Earth-2149 universe, which is infected by a virus changing sentient beings into flesh-eating zombies. Galactus's power is absorbed when he is consumed by the infected Avengers. The MC2 title "Last Planet Standing" features a future version of Galactus that eventually merges with the Silver Surfer and vows to repair rather than destroy worlds. The "Ultimate Galactus Trilogy" introduced the Ultimate Marvel imprint's version of the threat, Gah Lak Tus. First mentioned by the robot Vision and subsequently by the Kree, Gah Lak Tus is a group mind of city-sized robotic drones. 
To prepare for its arrival, the drones send telepathic broadcasts of "fear", then use envoys (similar to the Silver Surfer), who introduce a flesh-eating virus to target planets. Gah Lak Tus is also involved in the "Chitauri-Kree" War, and temporarily merges with Galactus after a temporal rift sends the latter to the Ultimate Marvel universe. Mahr Vehl stated that the Gah Lak Tus swarm was originally built by the ancient Kree eons ago to eliminate all foes and "purify" the universe, but subsequently escaped their control and evolved into its current form. In the timeline of an aged future King Thor, Galactus comes to a deserted Earth to finally consume it. The entity eventually bonds with All-Black the Necrosword and becomes "Galactus the World Butcher", devouring multiple planets. Galactus is finally consumed by an All-Black-empowered Ego the Living Planet. In another alternate universe, the Silver Surfer uses the remnants of Ultron to resurrect Galactus, creating a fusion of Ultron and Galactus. The Galactus of Earth-TR666 creates a new herald, the Cosmic Ghost Rider, in a failed bid to stop Thanos. A version of Galactus—called "Gah-Lak-Tus" in the novelization—appears in the 2007 film "Fantastic Four: Rise of the Silver Surfer" as a cosmic, hurricane-like cloud. Fox apparently wished for the character to remain "discreet", hence the altered appearance. Visual effects studio Weta Digital convinced Fox to add hints of the character's comic-book appearance, including a shadow and a fiery mass inside the cosmic cloud resembling Galactus's signature helmet. Director Tim Story said he depicted Galactus as a cosmic cloud so that a future "Silver Surfer" spin-off film could be unique in introducing the character's comic-book form, which had yet to appear on screen. Film writer J. Michael Straczynski stated, "You don't want to sort of blow out something that big and massive for one quick shot in the first movie."
https://en.wikipedia.org/wiki?curid=13095
Ivy League The Ivy League is an American collegiate athletic conference comprising eight private universities in the Northeastern United States. The term "Ivy League" is typically used beyond the sports context to refer to the eight schools as a group of elite colleges with connotations of academic excellence, selectivity in admissions, and social elitism. Its members in alphabetical order are Brown University, Columbia University, Cornell University, Dartmouth College, Harvard University, the University of Pennsylvania, Princeton University, and Yale University. While the term was in use as early as 1933, it became official only after the formation of the NCAA Division I athletic conference in 1954. Seven of the eight schools were founded during the colonial period (all except Cornell, which was founded in 1865) and thus account for seven of the nine Colonial Colleges chartered before the American Revolution. The other two colonial colleges, Rutgers University and the College of William & Mary, became public institutions instead. Ivy League schools are viewed as some of the most prestigious universities in the world. All eight universities place in the top seventeen of the 2020 "U.S. News & World Report" national undergraduate university rankings, including four Ivies in the top three positions (Columbia and Yale are tied for 3rd). "U.S. News" has named a member of the Ivy League as the best national undergraduate program in each of the past 18 years ending with the 2018 rankings: Princeton eleven times, Harvard twice, and the two schools tied for first five times. In the 2019 "U.S. News & World Report" global university rankings, three Ivies rank in the top ten (Harvard 1st, Columbia 7th, and Princeton 8th) and six in the top twenty-three. Undergraduate enrollments range from about 4,500 to about 15,000, larger than those of most liberal arts colleges and smaller than those of most state universities. Total enrollment, which includes graduate students, ranges from approximately 6,600 at Dartmouth to over 20,000 at Columbia, Cornell, Harvard, and Penn. Ivy League financial endowments range from Brown's $4.2 billion to Harvard's $40.9 billion, the largest financial endowment of any academic institution in the world. The Ivy League has drawn many comparisons to other elite groupings of universities in other nations, such as Oxbridge in the United Kingdom, the C9 League in China, and the Imperial Universities in Japan. These counterparts are often referred to in the American media as the "Ivy League" of their respective nations. Ivy League universities have some of the largest university financial endowments in the world, allowing the universities to provide abundant resources for their academic programs, financial aid, and research endeavors. By one recent count, Harvard University had an endowment of $38.3 billion, the largest of any educational institution. Each university attracts millions of dollars in annual research funding from both the federal government and private sources. Students have long revered the ivied walls of older colleges. "Planting the ivy" was a customary class day ceremony at many colleges in the 1800s. In 1893, an alumnus told "The Harvard Crimson", "In 1850, class day was placed upon the University Calendar. ... the custom of planting the ivy, while the ivy oration was delivered, arose about this time." At Penn, graduating seniors started the custom of planting ivy at a university building each spring in 1873, and that practice was formally designated as "Ivy Day" in 1874. 
Ivy planting ceremonies are reported for Yale, Simmons, Bryn Mawr, and many others. Princeton's "Ivy Club" was founded in 1879. The first usage of "Ivy" in reference to a group of colleges comes from sportswriter Stanley Woodward (1895–1965). The first known instance of the term "Ivy League" appeared in "The Christian Science Monitor" on February 7, 1935. Several sportswriters and other journalists used the term shortly thereafter to refer to the older colleges along the northeastern seaboard of the United States, chiefly the nine institutions with origins dating from the colonial era, together with the United States Military Academy (West Point), the United States Naval Academy, and a few others. These schools were known for their long-standing traditions in intercollegiate athletics, often being the first schools to participate in such activities. However, at this time, none of these institutions had made efforts to form an athletic league. A common folk etymology attributes the name to the Roman numeral for four (IV), asserting that there was originally a sports league with four members. The "Morris Dictionary of Word and Phrase Origins" helped to perpetuate this belief. The supposed "IV League" was said to have been formed over a century ago and to have consisted of Harvard, Yale, Princeton, and a fourth school that varies depending on who is telling the story. What is clear is that Harvard, Princeton, Yale, and Columbia met on November 23, 1876, at the so-called Massasoit Convention to decide on uniform rules for the emerging game of American football, which rapidly spread. Seven of the eight Ivy League schools were founded before the American Revolution; Cornell was founded just after the American Civil War. These seven were the primary colleges in the Northern and Middle Colonies, and their early faculties and founding boards were largely drawn from other Ivy League institutions. There were also some British graduates of the University of Cambridge, the University of Oxford, the University of St. Andrews, the University of Edinburgh, and elsewhere on their boards. Similarly, the founder of the College of William & Mary, in 1693, was a British graduate of the University of Aberdeen and the University of Edinburgh. Cornell provided Stanford University with its first president. The influence of these institutions on the founding of other colleges and universities is notable. This included the Southern public college movement, which blossomed in the decades surrounding the turn of the 19th century, when Georgia, South Carolina, North Carolina, and Virginia established what became the flagship universities for each of these states. In 1801, a majority of the first board of trustees for what became the University of South Carolina were Princeton alumni. They appointed Jonathan Maxcy, a Brown graduate, as the university's first president. Thomas Cooper, an Oxford alumnus and University of Pennsylvania faculty member, became the second president of the South Carolina college. The founders of the University of California, Berkeley came from Yale; hence the school's colors are Yale Blue and California Gold. Some of the Ivy League schools have identifiable Protestant roots, while others were founded as non-sectarian schools. King's College, a Church of England institution, broke up during the Revolution and was re-formed as the public, nonsectarian Columbia College. 
In the early nineteenth century, the specific purpose of training Calvinist ministers was handed off to theological seminaries, but a denominational tone and such relics as compulsory chapel often lasted well into the twentieth century. Penn and Brown were officially founded as nonsectarian schools. Brown's charter promised no religious tests and "full liberty of conscience", but placed control in the hands of a board of twenty-two Baptists, five Quakers, four Congregationalists, and five Episcopalians. Cornell has been strongly nonsectarian from its founding. "Ivy League" is sometimes used as a way of referring to an elite class, even though institutions such as Cornell University were among the first in the United States to reject racial and gender discrimination in their admissions policies. This usage dates back to at least 1935. Novels and memoirs attest to this sense of the term as denoting a social elite, to some degree independent of the actual schools. After the Second World War, the present Ivy League institutions slowly widened their selection of students. They had always had distinguished faculties; some of the first Americans with doctorates had taught for them. But they now decided that they could not both be world-class research institutions and be competitive in the highest ranks of American college sport; in addition, the schools had experienced the scandals typical of other big-time football programs, although more quietly. The first formal athletic league involving eventual Ivy League schools (or any US colleges, for that matter) was created in 1870 with the formation of the Rowing Association of American Colleges. The RAAC hosted a de facto national championship in rowing during the period 1870–1894. In 1895, Cornell, Columbia, and Penn founded the Intercollegiate Rowing Association, which remains the oldest collegiate athletic organizing body in the US. To this day, the IRA Championship Regatta determines the national champion in rowing, and all of the Ivies are regularly invited to compete. A basketball league followed in 1902, when Columbia, Cornell, Harvard, Yale, and Princeton formed the Eastern Intercollegiate Basketball League; they were later joined by Penn and Dartmouth. In 1906, the organization that eventually became the National Collegiate Athletic Association was formed, primarily to formalize rules for the emerging sport of football. But of the 39 original member colleges in the NCAA, only two (Dartmouth and Penn) later became Ivies. In February 1903, intercollegiate wrestling began when Yale accepted a challenge from Columbia, published in the Yale News. The dual meet took place prior to a basketball game hosted by Columbia and resulted in a tie. Two years later, Penn and Princeton also added wrestling teams, leading to the formation of the student-run Intercollegiate Wrestling Association, now the Eastern Intercollegiate Wrestling Association (EIWA), the first and oldest collegiate wrestling league in the US. In 1930, Columbia, Cornell, Dartmouth, Penn, Princeton, and Yale formed the Eastern Intercollegiate Baseball League; they were later joined by Harvard, Brown, Army, and Navy. Before the formal establishment of the Ivy League, there was an "unwritten and unspoken agreement among certain Eastern colleges on athletic relations". The earliest reference to the "Ivy colleges" came in 1933, when Stanley Woodward of the "New York Herald Tribune" used it to refer to the eight current members plus Army. 
In 1935, the Associated Press reported on an example of collaboration between the schools. Despite such collaboration, the universities did not seem to consider the formation of the league as imminent; Romeyn Berry, Cornell's manager of athletics, described the situation in those terms in January 1936. Within a year of this statement, and after month-long discussions about the proposal, on December 3, 1936, the idea of "the formation of an Ivy League" gained enough traction among the undergraduate bodies of the universities that the "Columbia Daily Spectator", "The Cornell Daily Sun", "The Dartmouth", "The Harvard Crimson", "The Daily Pennsylvanian", "The Daily Princetonian", and the "Yale Daily News" simultaneously ran an editorial entitled "Now Is the Time", encouraging the seven universities to form the league in an effort to preserve the ideals of athletics. The Ivies have been competing in sports as long as intercollegiate sports have existed in the United States. Rowing teams from Harvard and Yale met in the first sporting event held between students of two U.S. colleges on Lake Winnipesaukee, New Hampshire, on August 3, 1852. Harvard's team, "The Oneida", won the race and was presented with trophy black walnut oars by then-presidential nominee General Franklin Pierce. The editorial proposal did not immediately succeed—on January 11, 1937, the athletic authorities at the schools rejected the "possibility of a heptagonal league in football such as these institutions maintain in basketball, baseball and track." However, they noted that the league "has such promising possibilities that it may not be dismissed and must be the subject of further consideration." In 1945 the presidents of the eight schools signed the first "Ivy Group Agreement", which set academic, financial, and athletic standards for the football teams. The principles established reiterated those put forward in the Harvard-Yale-Princeton Presidents' Agreement of 1916. The Ivy Group Agreement established the core tenet that an applicant's ability to play on a team would not influence admissions decisions. In 1954, the presidents extended the Ivy Group Agreement to all intercollegiate sports, effective with the 1955–56 basketball season. This is generally reckoned as the formal formation of the Ivy League. As part of the transition, Brown, the only Ivy that had not joined the EIBL, did so for the 1954–55 season. A year later, the Ivy League absorbed the EIBL. The Ivy League claims the EIBL's history as its own; through the EIBL, it is the oldest basketball conference in Division I. As late as the 1960s, many of the Ivy League universities' undergraduate programs remained open only to men, with Cornell the only one to have been coeducational from its founding (1865) and Columbia the last (1983) to become coeducational. Before they became coeducational, many of the Ivy schools maintained extensive social ties with nearby Seven Sisters women's colleges, including weekend visits, dances, and parties inviting Ivy and Seven Sisters students to mingle. This was the case not only at Barnard College and Radcliffe College, which are adjacent to Columbia and Harvard, but at more distant institutions as well. The movie "Animal House" includes a satiric version of the formerly common visits by Dartmouth men to Massachusetts to meet Smith and Mount Holyoke women, a drive of more than two hours. 
As noted by Irene Harwarth, Mindi Maline, and Elizabeth DeBra, "The 'Seven Sisters' was the name given to Barnard, Smith, Mount Holyoke, Vassar, Bryn Mawr, Wellesley, and Radcliffe, because of their parallel to the Ivy League men's colleges." In 1982 the Ivy League considered adding two members, with Army, Navy, and Northwestern as the most likely candidates; had it done so, the league could probably have avoided being moved into the recently created Division I-AA (now Division I FCS) for football. In 1983, following the admission of women to Columbia College, Columbia University and Barnard College entered into an athletic consortium agreement by which students from both schools compete together on Columbia University women's athletic teams, which replaced the women's teams previously sponsored by Barnard. When Army and Navy departed the Eastern Intercollegiate Baseball League in 1992, nearly all intercollegiate competition involving the eight schools became united under the Ivy League banner. The two major exceptions are wrestling and hockey: the Ivies that sponsor wrestling (all except Dartmouth and Yale) compete in the EIWA, and the Ivies that sponsor hockey (all except Penn and Columbia) compete in ECAC Hockey. The Ivy League schools are highly selective, with acceptance rates of approximately 10% or less at all of the universities. Admitted students come from around the world, although students from the Northeastern United States make up a significant proportion. In 2018, seven of the eight Ivy League schools reported record-high application numbers; seven also reported record-low acceptance rates. Members of the League have been highly ranked by various university rankings. All of the Ivy League schools are consistently ranked within the top 20 national universities by "U.S. News & World Report". "The Wall Street Journal" rankings place all eight of the universities within the top 15 in the country. Further, Ivy League members have produced many Nobel laureates, including winners of the Nobel Prize and the Nobel Memorial Prize in Economic Sciences. Collaboration between the member schools is illustrated by the student-led Ivy Council, which meets in the fall and spring of each year with representatives from every Ivy League school. The governing body of the Ivy League is the Council of Ivy Group Presidents, composed of each university president. During meetings, the presidents discuss common procedures and initiatives for their universities. The universities collaborate academically through the IvyPlus Exchange Scholar Program, which allows students to cross-register at one of the Ivies or another eligible school such as the University of California at Berkeley, the University of Chicago, the Massachusetts Institute of Technology, and Stanford University. Distinct fashion trends have emerged from Ivy League campuses over time; the "Ivy League" and "preppy" styles are both closely associated with the league and its culture. Ivy League style is a style of men's dress, popular during the late 1950s, believed to have originated on Ivy League campuses. The clothing stores J. Press and Brooks Brothers represent perhaps the quintessential Ivy League manner of dress. The Ivy League style is said to be the predecessor to the preppy style of dress, which developed out of the Ivy League look from around 1912 through the late 1940s and 1950s. 
J. Press represents the quintessential preppy clothing brand, stemming from the collegiate traditions that shaped the preppy subculture. In the mid-twentieth century, J. Press and Brooks Brothers, both pioneers of preppy fashion, had stores on Ivy League school campuses, including Harvard, Princeton, and Yale. Some typical preppy styles also reflect traditional upper-class New England leisure activities, such as equestrian sports, sailing or yachting, hunting, fencing, rowing, lacrosse, tennis, golf, and rugby. Longtime New England outdoor outfitters, such as L.L. Bean, became part of conventional preppy style. This can be seen in sport stripes and colors, equestrian clothing, plaid shirts, field jackets, and nautical-themed accessories. Vacationing in Palm Beach, Florida, long popular with the East Coast upper class, led to the emergence of bright color combinations in leisure wear, seen in brands such as Lilly Pulitzer. By the 1980s, other brands such as Lacoste, Izod, and Dooney & Bourke became associated with preppy style. Today, these styles continue to be popular on Ivy League campuses, throughout the U.S., and abroad, and are often labeled "Classic American style" or "Traditional American style". The Ivy League is often associated with the upper-class White Anglo-Saxon Protestant community of the Northeast, Old Money, or, more generally, the American upper middle and upper classes. Although most Ivy League students come from upper middle- and upper-class families, the student body has become increasingly diverse, both economically and ethnically. The universities provide significant financial aid to help increase the enrollment of lower-income and middle-class students. Several reports suggest, however, that the proportion of students from less-affluent families remains low. Phrases such as "Ivy League snobbery" are ubiquitous in nonfiction and fiction writing of the early and mid-twentieth century. A Louis Auchincloss character dreads "the aridity of snobbery which he knew infected the Ivy League colleges". A business writer, warning in 2001 against discriminatory hiring, presented a cautionary example of an attitude to avoid. The phrase "Ivy League" historically has been perceived as connected not only with academic excellence but also with social elitism. In 1936, sportswriter John Kieran noted that student editors at Harvard, Yale, Princeton, Cornell, Columbia, Dartmouth, and Penn were advocating the formation of an athletic association, and urged them to consider "Army and Navy and Georgetown and Fordham and Syracuse and Brown and Pitt" as candidates for membership. Aspects of Ivy stereotyping were illustrated during the 1988 presidential election, when George H. W. Bush (Yale '48) derided Michael Dukakis (a graduate of Harvard Law School) for having "foreign-policy views born in Harvard Yard's boutique." "New York Times" columnist Maureen Dowd asked, "Wasn't this a case of the pot calling the kettle elite?" Bush explained, however, that, unlike Harvard, Yale's reputation was "so diffuse, there isn't a symbol, I don't think, in the Yale situation, any symbolism in it. ... Harvard boutique to me has the connotation of liberalism and elitism" and said "Harvard" in his remark was intended to represent "a philosophical enclave" and not a statement about class. Columnist Russell Baker opined that "Voters inclined to loathe and fear elite Ivy League schools rarely make fine distinctions between Yale and Harvard. 
All they know is that both are full of rich, fancy, stuck-up and possibly dangerous intellectuals who never sit down to supper in their undershirt no matter how hot the weather gets." Still, the last five presidents have all attended Ivy League schools for at least part of their education: George H. W. Bush (Yale undergraduate), Bill Clinton (Yale Law School), George W. Bush (Yale undergraduate, Harvard Business School), Barack Obama (Columbia undergraduate, Harvard Law School), and Donald Trump (Penn undergraduate). Of the 45 men who have served as President of the United States, 16 have graduated from an Ivy League university. Of them, eight have degrees from Harvard, five from Yale, three from Columbia, two from Princeton, and one from Penn. Twelve presidents have earned Ivy undergraduate degrees. Three of these were transfer students: Donald Trump transferred from Fordham University, Barack Obama transferred from Occidental College, and John F. Kennedy transferred from Princeton to Harvard. John Adams was the first president to graduate from college, graduating from Harvard in 1755. Students of the Ivy League largely hail from the Northeast, particularly from the New York City, Boston, and Philadelphia areas. As all eight Ivy League universities are within the Northeast, it is no surprise that most graduates end up working and residing in the Northeast after graduation. An unscientific survey of Harvard seniors from the Class of 2013 found that 42% hailed from the Northeast and that 55% overall were planning on working and residing in the Northeast. Boston and New York City are traditionally where many Ivy League graduates end up living. Students of the Ivy League, both graduate and undergraduate, come primarily from upper middle- and upper-class families. In recent years, however, the universities have looked towards increasing socioeconomic and class diversity by providing greater financial aid packages to applicants from lower-, working-, and lower middle-class American families. In 2013, 46% of Harvard undergraduate students came from families in the top 3.8% of all American households (i.e., those with over $200,000 in annual income). In 2012, the bottom 25% of the American income distribution accounted for only 3–4% of students at Brown, a figure that had remained unchanged since 1992. In 2014, 69% of incoming freshmen at Yale College came from families with annual incomes of over $120,000, putting most Yale College students in the upper middle or upper class. (The median household income in the U.S. in 2013 was $52,700.) In the 2011–2012 academic year, students qualifying for Pell Grants (federally funded scholarships on the basis of need) comprised 20% at Harvard, 18% at Cornell, 17% at Penn, 16% at Columbia, 15% at Dartmouth and Brown, 14% at Yale, and 12% at Princeton. Nationally, 35% of American university students qualify for a Pell Grant. Ivy champions are recognized in sixteen men's and sixteen women's sports. In some sports, Ivy teams actually compete as members of another league, the Ivy championship being decided by isolating the members' records in play against each other; for example, the six league members who participate in ice hockey do so as members of ECAC Hockey, but an Ivy champion is extrapolated each year. In one sport, rowing, the Ivies recognize team champions for each sex in both heavyweight and lightweight divisions. 
While the Intercollegiate Rowing Association governs all four sex- and bodyweight-based divisions of rowing, the only one sanctioned by the NCAA is women's heavyweight. The Ivy League was the last Division I basketball conference to institute a conference postseason tournament; the first tournaments for men and women were held at the end of the 2016–17 season. The tournaments only award the Ivy League's automatic bids for the NCAA Division I Men's and Women's Basketball Tournaments; the official conference championships continue to be awarded based solely on regular-season results. Before the 2016–17 season, the automatic bids were based solely on regular-season record, with a one-game playoff (or a series of one-game playoffs if more than two teams were tied) held to determine the automatic bid. The Ivy League is one of only two Division I conferences which award their official basketball championships solely on regular-season results; the other is the Southeastern Conference. No Ivy League school has yet won either the men's or women's Division I NCAA Basketball Tournament. On average, each Ivy school fields more than 35 varsity teams. All eight are in the top 20 for number of sports offered for both men and women among Division I schools. Unlike most Division I athletic conferences, the Ivy League prohibits the granting of athletic scholarships; all scholarships awarded are need-based (financial aid). In addition, the Ivies have a rigid policy against redshirting, even for medical reasons; an athlete loses a year of eligibility for every year enrolled at an Ivy institution. Additionally, the Ivies prohibit graduate students from participating in intercollegiate athletics, even if they have remaining athletic eligibility. Ivy League teams' non-league games are often against the members of the Patriot League, which have similar academic standards and athletic scholarship policies (although, unlike the Ivies, the Patriot League allows both redshirting and play by eligible graduate students). In the era before recruiting for college sports became dominated by those offering athletic scholarships and lowered academic standards for athletes, the Ivy League was successful in many sports relative to other universities in the country. In particular, Princeton won 26 recognized national championships in college football (the last in 1935), and Yale won 18 (the last in 1927). Both of these totals are considerably higher than those of other historically strong programs such as Alabama, which has won 15, Notre Dame, which claims 11 but is credited by many sources with 13, and USC, which has won 11. Yale, whose coach Walter Camp was the "Father of American Football," held on to its place as the all-time wins leader in college football throughout the entire 20th century, but was finally passed by Michigan on November 10, 2001. Harvard, Yale, Princeton, and Penn each have over a dozen former scholar-athletes enshrined in the College Football Hall of Fame. Dartmouth currently holds the record for most Ivy League football titles, with 18, followed closely by Harvard and Penn, each with 17. In addition, the Ivy League has produced Super Bowl winners Kevin Boothe (Cornell), two-time Pro Bowler Zak DeOssie (Brown), Sean Morey (Brown), All-Pro selection Matt Birk (Harvard), Calvin Hill (Yale), Derrick Harmon (Cornell), and 1999 "Mr. Irrelevant" Jim Finn (Penn). 
Beginning with the 1982 football season, the Ivy League has competed in Division I-AA (since renamed FCS). Ivy League teams are eligible for the FCS tournament held to determine the national champion: the league champion is eligible for an automatic bid, and any other team may qualify for an at-large selection from the NCAA. However, since its inception in 1956, the Ivy League has not played any postseason football games, due to concerns about the extended December schedule's effects on academics. (The last postseason game for a member was the 1934 Rose Bowl, won by Columbia.) For this reason, any Ivy League team invited to the FCS playoffs turns down the bid. The Ivy League plays a strict 10-game schedule, compared to other FCS members' schedules of 11 (or, in some seasons, 12) regular-season games, plus a post-season that expanded in 2013 to five rounds with 24 teams, with a bye week for the top eight teams. Football is the only sport in which the Ivy League declines to compete for a national title. In addition to varsity football, Penn, Princeton, and Cornell also field teams in the 10-team Collegiate Sprint Football League, in which all players must weigh 178 pounds or less. Penn and Princeton are the last remaining founding members of the league from its 1934 debut, and Cornell is the next-oldest, having joined in 1937. Yale and Columbia previously fielded teams in the league but no longer do so. The Ivy League is home to some of the oldest college rugby teams in the United States. Although these teams are not "varsity" sports, they compete annually in the Ivy Rugby Conference. Counting team championships from the beginning of official Ivy League competition (the 1956–57 academic year) through 2016–17, Princeton and Harvard have on occasion won ten or more Ivy League titles in a year, an achievement accomplished 10 times by Harvard and 24 times by Princeton, including a conference-record 15 championships in 2010–11. Only once has one of the other six schools earned more than eight titles in a single academic year (Cornell, with nine in 2005–06). In the 38 academic years beginning 1979–80, Princeton has averaged 10 championships per year, one-third of the conference total of 33 sponsored sports. In the 12 academic years beginning 2005–06, Princeton has won championships in 31 different sports, all except wrestling and men's tennis. Rivalries run deep in the Ivy League. For instance, Princeton and Penn are longstanding men's basketball rivals; "Puck Frinceton" T-shirts are worn by Quaker fans at games. In only 11 instances in the history of Ivy League basketball, and in only seven seasons since Yale's 1962 title, has neither Penn nor Princeton won at least a share of the Ivy League basketball title; Princeton has been champion or co-champion 26 times and Penn 25 times. Penn has won 21 titles outright and Princeton 19. Princeton has been a co-champion 7 times, sharing 4 of those titles with Penn (these 4 seasons represent the only times Penn has been co-champion). Harvard won its first title of either variety in 2011, losing a dramatic play-off game to Princeton for the NCAA tournament bid, then rebounded to win outright championships in 2012, 2013, and 2014. Harvard also won the 2013 Great Alaska Shootout, defeating TCU to become the only Ivy League school to win the now-defunct tournament. 
Rivalries exist between other Ivy League teams in other sports, including Cornell and Harvard in hockey, Harvard and Princeton in swimming, and Harvard and Penn in football (Penn and Harvard have won 28 Ivy League football championships between them since 1982: Penn 16, Harvard 12). During that time, Penn has had eight undefeated Ivy League championship seasons and Harvard six. In men's lacrosse, Cornell and Princeton are perennial rivals, and they are two of the three Ivy League teams to have won the NCAA tournament. In 2009, the Big Red and the Tigers met for their 70th game, in that year's NCAA tournament. No team other than Harvard or Princeton has won the men's swimming conference title outright since 1972, although Yale, Columbia, and Cornell have shared the title with Harvard and Princeton during this time. Similarly, no program other than Princeton and Harvard has won the women's swimming championship since Brown's 1999 title. Princeton or Cornell has won every indoor and outdoor track and field championship, both men's and women's, every year since 2002–03, with one exception (the Columbia women won the indoor championship in 2012). Harvard and Yale are football and crew rivals, although the competition has become unbalanced; Harvard has won all but one of the last 15 football games and all but one of the last 13 crew races. The Yale–Princeton series is the nation's second-longest by games played, exceeded only by "The Rivalry" between Lehigh and Lafayette, which began later, in 1884, but included two or three games in each of 17 early seasons. For the first three decades of the Yale–Princeton rivalry, the two played their season-ending game at a neutral site, usually New York City, and with one exception (1890: Harvard), the winner of the game also won at least a share of the national championship that year, covering the period 1869 through 1903. This phenomenon of a finale contest at a neutral site for the national title created a social occasion for the society elite of the metropolitan area akin to a Super Bowl in the era prior to the establishment of the NFL in 1920. These football games were also financially profitable for the two universities, so much so that they began to play baseball games in New York City as well, drawing record crowds for that sport too, largely from the same social demographic. In a period when the only professional team sports were fledgling baseball leagues, these high-profile early contests between Princeton and Yale played a role in popularizing spectator sports, demonstrating their financial potential and raising public awareness of Ivy universities at a time when few people attended college. The league's list of national championships, current through July 1, 2015, includes NCAA championships and women's AIAW championships (one each for Yale and Dartmouth); excluded are all other national championships earned outside the scope of NCAA competition, including football titles and retroactive Helms Foundation titles. The term "Ivy" is sometimes used to connote a positive comparison to or an association with the Ivy League, often along academic lines. The term has been used to describe the Little Ivies, a grouping of small liberal arts colleges in the Northeastern United States. Other uses include the Southern Ivies, Hidden Ivies, and the Public Ivies. In the 2007 edition of "Newsweek's How to Get Into College Now", the editors designated 25 schools as "New Ivies." 
The term "Ivy Plus" is sometimes used to refer to the Ancient Eight plus several other schools for purposes of alumni associations, university consortia, or endowment comparisons. In his book "Untangling the Ivy League", Zawel writes, "The inclusion of non-Ivy League schools under this term is commonplace for some schools and extremely rare for others. Among these other schools, Massachusetts Institute of Technology and Stanford University are almost always included. The University of Chicago and Duke University are often included as well." Johns Hopkins University is sometimes included in this Ivy Plus group.
https://en.wikipedia.org/wiki?curid=14975
Ithaca Hours The Ithaca HOUR is a local currency used in Ithaca, New York, and is the oldest and largest local currency system in the United States that is still operating, although its use is reported to be in decline. It has inspired other similar systems in Madison, Wisconsin; Santa Barbara, California; Corvallis, Oregon; and a proposed system in the Lehigh Valley, Pennsylvania. One Ithaca HOUR is valued at US$10 and is generally recommended to be used as payment for one hour's work, although the rate is negotiable. Ithaca HOURS are not backed by national currency and cannot be freely converted to national currency, although some businesses may agree to buy them. HOURS are printed on high-quality paper and use faint graphics that would be difficult to reproduce, and each bill is stamped with a serial number in order to discourage counterfeiting. In 2002, a one-tenth HOUR bill was introduced, partly due to encouragement and funding from Alternatives Federal Credit Union (AFCU) and partly due to feedback from retailers who had complained about the awkwardness of having only larger denominations to work with; the bills bear the signatures of both HOURS president Steve Burke and the president of AFCU. While the Ithaca HOUR continues to exist, in recent years it has fallen into disuse. Media accounts from 2011 indicate that the number of businesses accepting HOURS has declined. Several reasons have been suggested. The first is the departure from town of the founder, Paul Glover. While in Ithaca, Glover had acted as an evangelist and networker for HOURS, helping spread their use and helping businesses find ways to spend HOURS they had received. The second is a general shift away from cash transactions towards electronic transfers with debit or credit cards. Glover has emphasized that every local currency needs at least one full-time networker to "promote, facilitate and troubleshoot" currency circulation. Ithaca HOURS were started by Paul Glover in November 1991. The system has historical roots in scrip and the alternative and local currencies that proliferated in America during the Great Depression. While doing research into local economics during 1989, Glover had seen an "Hour" note that 19th-century British industrialist Robert Owen issued to his workers for spending at his company store. After Ithaca HOURS began, he discovered that Owen's Hours were based on Josiah Warren's "Time Store" notes of 1827. In May 1991, local student Patrice Jennings interviewed Glover about the Ithaca LETS enterprise. This conversation strongly reinforced his interest in trade systems. Jennings's research on the Ithaca LETS and its failure was integral to the development of the HOUR currency; conversations between Jennings and Glover helped ensure that HOURS drew on knowledge of what had not worked with the LETS system. Within a few days, he had designs for the HOUR and Half HOUR notes. He established that each HOUR would be worth the equivalent of $10, which was about the average hourly amount that workers earned in surrounding Tompkins County, although the exact rate of exchange for any given transaction was to be decided by the parties themselves. At GreenStar Cooperative Market, a local food co-op, Glover approached Gary Fine, a local massage therapist, with photocopied samples. Fine became the first person to sign a list formally agreeing to accept HOURS in exchange for services. Soon after, Jim Rohrrsen, the proprietor of a local toy store, became the first retailer to sign up to accept Ithaca HOURS in exchange for merchandise. 
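The denomination arithmetic described above is simple enough to capture in a few lines. The following is a minimal sketch in Python, offered purely as illustration: the US$10-per-HOUR recommended valuation and the HOUR, Half HOUR, and one-tenth HOUR notes come from the account above, while the variable and function names are hypothetical and not part of any real Ithaca HOURS software.

USD_PER_HOUR = 10.00  # recommended (and negotiable) valuation of one HOUR

# Denominations as fractions of one HOUR; the tenth-HOUR note was added in 2002.
DENOMINATIONS = {
    "HOUR": 1.0,
    "Half HOUR": 0.5,
    "Tenth HOUR": 0.1,
}

def usd_value(denomination: str, count: int = 1) -> float:
    """Dollar equivalent of `count` notes at the recommended US$10/HOUR rate."""
    return DENOMINATIONS[denomination] * USD_PER_HOUR * count

# Example: paying for an hour and a half of work with one HOUR note
# and one Half HOUR note comes to US$15 at the recommended rate.
print(usd_value("HOUR") + usd_value("Half HOUR"))  # 15.0

Because the rate is negotiable, the fixed USD_PER_HOUR constant is only the recommended starting point; actual transactions could settle on whatever figure the parties agree to.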
When the system was first started, 90 people agreed to accept HOURS as pay for their services. They all agreed to accept HOURS despite the lack of a business plan or guarantee. Glover then began to ask for small donations to help pay for printing HOURS. Fine Line Printing completed the first run of 1,500 HOURS and 1,500 Half HOURS in October 1991. These notes, the first modern local currency, were nearly twice as large as the current Ithaca HOURS. Because they did not fit well in people's wallets, almost all of the original notes have been removed from circulation. The first issue of Ithaca Money was printed at Our Press, a printing shop in Chenango Bridge, New York, on October 16, 1991. The next day Glover issued 10 HOURS to Ithaca Hours, the organization he founded to run the system, as the first of four reimbursements for the cost of printing HOURS. The day after that, October 18, 1991, 382 HOURS were disbursed and prepared for mailing to the first 93 pioneers. On October 19, 1991, Glover bought a samosa from Catherine Martinez at the Farmers' Market with Half HOUR #751—the first use of an HOUR. Several other Market vendors enrolled that day. Over the next years, more than a thousand individuals, plus 500 businesses, enrolled to accept HOURS. Stacks of the Ithaca Money newspaper were distributed all over town with an invitation to "join the fun." A Barter Potluck was held at GIAC on November 12, 1991, the first of many monthly gatherings where food and skills were exchanged, acquaintances made, and friendships renewed. In 1996, Glover was running the Ithaca Hours system from his home, and the system had an advisory board and a governing board called the "Barter Potluck". The board and Glover put forth the idea that economic interactions should be based on harmony rather than on more Hobbesian forms of competition. In one interview, Glover stated that "There's a growing movement called "ecological economics" and Ithaca HOURS is part of that cosmos. Last year I wrote an article which discusses moving us toward the provision of food, fuel, clothing, housing, transportation, [and other] necessities in ways which are healing of nature, or which are less depleting at least and which bring people together on the basis of their shared pride, not arrogance." Thus one underlying principle of the local currency movement is to create "fair trade" with a minimum of conflict or exploitation of either people or natural resources. The Advisory Board incorporated the Ithaca HOUR system as Ithaca Hours, Inc. in October 1998, and hosted the first elections for its Board of Directors in March 1999. The first Board of Directors included Monica Hargraves, Dan Cogan, Margaret McCasland, Erica Van Etten, Greg Spence Wolf, Bob LeRoy, LeGrace Benson, Wally Woods, Jennifer Elges, and Donald Stephenson. In May 1999 Glover turned the administration of Ithaca HOURS over to the newly elected Board of Directors. Glover has continued to support Ithaca Hours through community outreach to the present, most notably through the Ithaca Health Fund (now incorporated as part of the Ithaca Health Alliance) and Ithaca Community News. The current Board of Directors (2014–2015) includes Erik Lehmann (Chair), Danielle Klock, and Bob LeRoy. 
Several million dollars' worth of HOURS have been traded since 1991 among thousands of residents and over 500 area businesses, including the Cayuga Medical Center, Alternatives Federal Credit Union, the public library, many local farmers, movie theatres, restaurants, healers, plumbers, carpenters, electricians, and landlords. One of the primary functions of the Ithaca HOURS system is to promote local economic development. Businesses that receive HOURS must spend them on local goods and services, thus building a network of inter-supporting local businesses. While non-local businesses are welcome to accept HOURS, those businesses need to spend them on local goods and services to make accepting them economically sustainable. In their mission to promote local economic development, the Board of Directors also makes interest-free loans of Ithaca HOURS to local businesses and grants to local non-profit organizations.
https://en.wikipedia.org/wiki?curid=14976
Interstellar cloud An interstellar cloud is generally an accumulation of gas, plasma, and dust in our own and other galaxies. Put differently, an interstellar cloud is a denser-than-average region of the interstellar medium (ISM), the matter and radiation that exists in the space between the star systems in a galaxy. Depending on the density, size, and temperature of a given cloud, its hydrogen can be neutral, making an H I region; ionized plasma, making it an H II region; or molecular, in which case it is referred to simply as a molecular cloud, or sometimes a dense cloud. Neutral and ionized clouds are sometimes also called "diffuse clouds". An interstellar cloud is formed by the gas and dust particles from a red giant in its later life. The chemical composition of interstellar clouds is determined by studying the electromagnetic (EM) radiation that they emanate and we receive – from radio waves through visible light, to gamma rays on the electromagnetic spectrum. Large radio telescopes scan the intensity in the sky of particular frequencies of electromagnetic radiation which are characteristic of certain molecules' spectra. Some interstellar clouds are cold and tend to give out EM radiation of long wavelengths. A map of the abundance of these molecules can be made, enabling an understanding of the varying composition of the clouds. In hot clouds, there are often ions of many elements, whose spectra can be seen in visible and ultraviolet light. Radio telescopes can also scan over the frequencies from one point in the map, recording the intensities of each type of molecule. Peaks of frequencies mean that an abundance of that molecule or atom is present in the cloud. The height of the peak is proportional to the relative percentage that it makes up (a short sketch of this procedure appears at the end of this article). Until recently the rates of reactions in interstellar clouds were expected to be very slow, with minimal products being produced due to the low temperature and density of the clouds. However, organic molecules were observed in the spectra that scientists would not have expected to find under these conditions, such as formaldehyde, methanol, and vinyl alcohol. The reactions needed to create such substances are familiar to scientists only at the much higher temperatures and pressures of Earth and Earth-based laboratories. The fact that they were found indicates that these chemical reactions in interstellar clouds take place faster than suspected, likely in gas-phase reactions unfamiliar to organic chemistry as observed on Earth. These reactions are studied in the CRESU experiment. Interstellar clouds also provide a medium to study the presence and proportions of metals in space. The presence and ratios of these elements may help develop theories on the means of their production, especially when their proportions are inconsistent with those expected to arise from stars as a result of fusion and thereby suggest alternate means, such as cosmic ray spallation. Some interstellar clouds, known as high-velocity clouds, possess a velocity higher than can be explained by the rotation of the Milky Way. By definition, these clouds must have a v_LSR greater than 90 km/s, where v_LSR is the velocity relative to the local standard of rest. They are detected primarily in the 21 cm line of neutral hydrogen, and typically have a lower proportion of heavy elements than is normal for interstellar clouds in the Milky Way. 
Theories intended to explain these unusual clouds include materials left over from the formation of the galaxy, or tidally displaced matter drawn away from other galaxies or members of the Local Group. An example of the latter is the Magellanic Stream. To narrow down the origin of these clouds, a better understanding of their distances and metallicity is needed. High-velocity clouds are identified with an HVC prefix, as with HVC 127-41-330.
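The peak-finding procedure described above lends itself to a compact illustration. The following Python sketch finds emission peaks in a sampled spectrum, matches them against a small table of catalogued rest frequencies, and applies the 90 km/s v_LSR test for high-velocity clouds; the spectrum values and the three-line catalogue are illustrative stand-ins, not real survey data.

    # Approximate rest frequencies in GHz for three commonly observed lines.
    KNOWN_LINES = {
        115.271: "CO (J=1-0)",
        88.632: "HCN (J=1-0)",
        89.189: "HCO+ (J=1-0)",
    }

    def find_peaks(freqs, intensities, noise_floor):
        """Return (frequency, intensity) pairs at local maxima above the noise floor."""
        peaks = []
        for i in range(1, len(intensities) - 1):
            local_max = intensities[i - 1] < intensities[i] > intensities[i + 1]
            if local_max and intensities[i] > noise_floor:
                peaks.append((freqs[i], intensities[i]))
        return peaks

    def identify(peaks, tolerance_ghz=0.05):
        """Match each peak to the nearest catalogued line within a tolerance.
        Peak height is taken as proportional to relative abundance."""
        matches = []
        for freq, strength in peaks:
            nearest = min(KNOWN_LINES, key=lambda rest: abs(rest - freq))
            if abs(nearest - freq) <= tolerance_ghz:
                matches.append((KNOWN_LINES[nearest], strength))
        return matches

    def is_high_velocity(v_lsr_km_s):
        """High-velocity clouds are defined by a v_LSR above 90 km/s."""
        return abs(v_lsr_km_s) > 90.0

    # Toy spectrum: a strong CO peak and a weaker HCN peak.
    freqs = [88.5, 88.632, 88.8, 115.1, 115.271, 115.4]   # GHz
    intens = [0.1, 2.0, 0.1, 0.2, 5.0, 0.2]               # arbitrary units
    print(identify(find_peaks(freqs, intens, noise_floor=0.5)))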
https://en.wikipedia.org/wiki?curid=14979
Imhotep Imhotep (Egyptian: "ỉỉ-m-ḥtp" "*jā-im-ḥātap", in Unicode hieroglyphs: 𓇍𓅓𓊵:𓏏*𓊪, "the one who comes in peace"; fl. late 27th century BC) was an Egyptian chancellor to the pharaoh Djoser, probable architect of Djoser's step pyramid, and high priest of the sun god Ra at Heliopolis. Very little is known of Imhotep as a historical figure, but in the 3000 years following his death, he was gradually glorified and deified. Traditions from long after Imhotep's death treated him as a great author of wisdom texts and especially as a physician. No text from his lifetime mentions these capacities and no text mentions his name in the first 1200 years following his death. Apart from the three short contemporary inscriptions that establish him as chancellor to the pharaoh, the first text to reference Imhotep dates to the time of Amenhotep III (c. 1391–1353 BC). It is addressed to the owner of a tomb and mentions a libation to Imhotep. It appears that this libation was made regularly, as such libations are attested on papyri associated with statues of Imhotep until the Late Period (c. 664–332 BC). To Wildung, this cult has its origin in the slow evolution of the memory of Imhotep among intellectuals from his death onwards. To Alan Gardiner, this cult is so distinct from the offerings usually made to commoners that the epithet of "demi-god" is likely justified to describe the way Imhotep was venerated in the New Kingdom (c. 1550–1077 BC). The first references to the healing abilities of Imhotep occur from the Thirtieth Dynasty (c. 380–343 BC) onwards, some 2200 years after his death. Imhotep is among the few non-royal Egyptians who were deified after their death, and until the 21st century, he was thought to be one of only two commoners, along with Amenhotep, son of Hapu, to achieve this status. The center of his cult was in Memphis. The location of his tomb remains unknown, despite efforts to find it. The consensus is that it is hidden somewhere at Saqqara. Imhotep's historicity is confirmed by two contemporary inscriptions made during his lifetime on the base or pedestal of one of Djoser's statues (Cairo JE 49889) and also by a graffito on the enclosure wall surrounding Sekhemkhet's unfinished step pyramid. The latter inscription suggests that Imhotep outlived Djoser by a few years and went on to serve in the construction of King Sekhemkhet's pyramid, which was abandoned due to this ruler's brief reign. Imhotep was one of the chief officials of the Pharaoh Djoser. Egyptologists ascribe to him the design of the Pyramid of Djoser, a step pyramid at Saqqara in Egypt built in 2630–2611 BC. He may also have been responsible for the first known use of stone columns to support a building. Despite these later attestations, the pharaonic Egyptians themselves never credited Imhotep as the designer of the stepped pyramid, nor with the invention of stone architecture. Two thousand years after his death, Imhotep's status had risen to that of a god of medicine and healing. He was eventually equated with Thoth, the god of architecture, mathematics, and medicine, and patron of scribes: Imhotep's cult had merged with that of his former tutelary god. He was revered in the region of Thebes as the "brother" of Amenhotep, son of Hapu, another deified architect, in the temples dedicated to Thoth. Imhotep was also linked to Asklepios by the Greeks. According to myth, Imhotep's mother was a mortal named Kheredu-ankh, who was herself eventually revered as a demi-goddess, as the daughter of Banebdjedet. 
Alternatively, since Imhotep was known as the "Son of Ptah", his mother was sometimes claimed to be Sekhmet, the patron of Upper Egypt whose consort was Ptah. The Upper Egyptian Famine Stela, which dates from the Ptolemaic period (305–30 BC), bears an inscription containing a legend about a famine lasting seven years during the reign of Djoser. Imhotep is credited with having been instrumental in ending it. One of his priests explained the connection between the god Khnum and the rise of the Nile to the king, who then had a dream in which the Nile god spoke to him, promising to end the drought. A demotic papyrus from the temple of Tebtunis, dating to the 2nd century AD, preserves a long story about Imhotep. King Djoser plays a prominent role in the story, which also mentions Imhotep's family: his father the god Ptah, his mother Khereduankh, and his younger sister Renpetneferet. At one point Djoser desires Renpetneferet, and Imhotep disguises himself and tries to rescue her. The text also refers to the royal tomb of Djoser. Part of the legend includes an anachronistic battle between the Old Kingdom and the Assyrian armies, in which Imhotep fights an Assyrian sorceress in a duel of magic. As an instigator of Egyptian culture, Imhotep's idealized image lasted well into the Roman period. In the Ptolemaic period, the Egyptian priest and historian Manetho credited him with inventing the method of stone-dressed building during Djoser's reign, though he was not the first to actually build with stone. Stone walling, flooring, lintels, and jambs had appeared sporadically during the Archaic Period, though it is true that a building of the size of the step pyramid made entirely out of stone had never before been constructed. Before Djoser, pharaohs were buried in mastaba tombs. The Egyptologist James Peter Allen states that "The Greeks equated him with their own god of medicine, Asklepios, although ironically there is no evidence that Imhotep himself was a physician."
https://en.wikipedia.org/wiki?curid=14980
Ictinus Ictinus ("Iktinos") was a Greek architect active in the mid-5th century BC. Ancient sources identify Ictinus and Callicrates as co-architects of the Parthenon. He co-wrote a book on the project – which is now lost – in collaboration with Carpion. Pausanias identifies Ictinus as architect of the Temple of Apollo at Bassae. That temple was Doric on the exterior, Ionic on the interior, and incorporated a Corinthian column, the earliest known, at the center rear of the cella. Sources also identify Ictinus as architect of the Telesterion at Eleusis, a gigantic hall of mysteries used in the Eleusinian Mysteries. Pericles commissioned Ictinus to design it, but Ictinus's involvement was terminated when Pericles fell from power, and three other architects took over instead. It seems likely that Ictinus's reputation was harmed by his links with the fallen ruler, as he is singled out for condemnation by Aristophanes in his play "The Birds", dated to around 414 BC. It depicts the royal kite or "ictinus" – a play on the architect's name – not as a noble bird of prey but as a scavenger stealing sacrifices from the gods and money from men. As no other classical author describes the bird in this fashion, Aristophanes likely intended it as a dig at the architect. The artist Jean Auguste Dominique Ingres painted a scene showing Ictinus together with the lyric poet Pindar. The painting is known as "Pindar and Ictinus" and is exhibited at the National Gallery, London.
https://en.wikipedia.org/wiki?curid=14981
Isidore of Miletus Isidore of Miletus was one of the two main Byzantine Greek architects (Anthemius of Tralles was the other) whom Emperor Justinian I commissioned to design the cathedral Hagia Sophia in Constantinople, built from 532 to 537. The creation of an important compilation of Archimedes' works has been attributed to him, and the spurious Book XV of Euclid's Elements has been partly attributed to him as well. Isidore of Miletus was a renowned scientist and mathematician before Emperor Justinian I hired him. He taught stereometry and physics at the universities, first of Alexandria and then of Constantinople, and wrote a commentary on an older treatise on vaulting. Eutocius studied Archimedes' work together with Isidore. Isidore is also renowned for producing the first comprehensive compilation of Archimedes' works; the Archimedes Palimpsest, which derives from that compilation, survives to the present. Emperor Justinian I appointed his architects to rebuild the Hagia Sophia following his victory over protesters within the capital city of his Roman Empire, Constantinople. The first basilica was completed in 360 and remodelled from 404 to 415, but had been damaged in 532 in the course of the Nika Riot: “The temple of Sophia, the baths of Zeuxippus, and the imperial courtyard from the Propylaia all the way to the so-called House of Ares were burned up and destroyed, as were both of the great porticoes that lead to the forum that is named after Constantine, houses of prosperous people, and a great deal of other properties.” The rival factions of Byzantine society, the Blues and the Greens, opposed each other in the chariot races at the Hippodrome and often resorted to violence; during the Nika Riot, more than thirty thousand people died. Emperor Justinian I ensured that his new structure would not be burned down, like its predecessors, by commissioning architects who would build the church mainly out of stone rather than wood: “He compacted it of baked brick and mortar, and in many places bound it together with iron, but made no use of wood, so that the church should no longer prove combustible.” Isidore of Miletus and Anthemius of Tralles originally planned a main hall of the Hagia Sophia measuring 70 by 75 metres (230 by 250 ft), making it the largest church in Constantinople, though the original dome was nearly 6 metres (20 ft) lower than the one that later replaced it: “Justinian suppressed these riots and took the opportunity of marking his victory by erecting in 532-7 the new Hagia Sophia, one of the largest, most lavish, and most expensive buildings of all time.” Although Isidore of Miletus and Anthemius of Tralles were not formally educated in architecture, they were scientists who could organize the logistics of drawing thousands of labourers and unprecedented loads of rare raw materials from around the Roman Empire to construct the Hagia Sophia for Emperor Justinian I. 
The finished product was built in admirable form for the Roman Emperor: “All of these elements marvellously fitted together in mid-air, suspended from one another and reposing only on the parts adjacent to them, produce a unified and most remarkable harmony in the work, and yet do not allow the spectators to rest their gaze upon any one of them for a length of time.” The Hagia Sophia architects innovatively combined the longitudinal structure of a Roman basilica and the central plan of a drum-supported dome, in order to withstand the high-magnitude earthquakes of the Marmara Region. “However, in May 558, little more than 20 years after the Church’s dedication, following the earthquakes of August 553 and December 557, parts of the central dome and its supporting structure system collapsed.” The Hagia Sophia was repeatedly cracked by earthquakes and was quickly repaired. Isidore of Miletus’ nephew, Isidore the Younger, introduced the new dome design that can be viewed in the Hagia Sophia in present-day Istanbul, Turkey. After a great earthquake in 989 ruined the dome of Hagia Sophia, the Byzantine officials summoned Trdat the Architect to Byzantium to organize repairs. The restored dome was completed by 994.
https://en.wikipedia.org/wiki?curid=14982
International Atomic Energy Agency The International Atomic Energy Agency (IAEA) is an international organization that seeks to promote the peaceful use of nuclear energy and to inhibit its use for any military purpose, including nuclear weapons. The IAEA was established as an autonomous organisation on 29 July 1957. Though established independently of the United Nations through its own international treaty, the IAEA Statute, the IAEA reports to both the United Nations General Assembly and Security Council. The IAEA has its headquarters in Vienna, Austria. The IAEA has two "Regional Safeguards Offices", located in Toronto, Canada, and in Tokyo, Japan. The IAEA also has two liaison offices, located in New York City, United States, and in Geneva, Switzerland. In addition, the IAEA has laboratories and research centers located in Seibersdorf, Austria, in Monaco and in Trieste, Italy. The IAEA serves as an intergovernmental forum for scientific and technical co-operation in the peaceful use of nuclear technology and nuclear power worldwide. The programs of the IAEA encourage the development of the peaceful applications of nuclear energy, science and technology, provide international safeguards against misuse of nuclear technology and nuclear materials, and promote nuclear safety (including radiation protection) and nuclear security standards and their implementation. The IAEA and its former Director General, Mohamed ElBaradei, were jointly awarded the Nobel Peace Prize on 7 October 2005. The current Director General is Rafael Grossi, an Argentine diplomat who previously served as the IAEA's chief of cabinet; his appointment as successor to Yukiya Amano, who died in July 2019, was approved at a special session of the IAEA General Conference on 2 December 2019. In 1953, the President of the United States, Dwight D. Eisenhower, proposed the creation of an international body to both regulate and promote the peaceful use of atomic power (nuclear power), in his Atoms for Peace address to the UN General Assembly. In September 1954, the United States proposed to the General Assembly the creation of an international agency to take control of fissile material, which could be used either for nuclear power or for nuclear weapons. This agency would establish a kind of "nuclear bank". The United States also called for an international scientific conference on all of the peaceful aspects of nuclear power. By November 1954, it had become clear that the Soviet Union would reject any international custody of fissile material if the United States did not agree to disarmament first, but that a "clearing house" for nuclear transactions might be possible. From 8 to 20 August 1955, the United Nations held the International Conference on the Peaceful Uses of Atomic Energy in Geneva, Switzerland. In October 1956, a Conference on the IAEA Statute was held at the Headquarters of the United Nations to approve the founding document for the IAEA, which had been negotiated in 1955–1956 by a group of twelve countries. The Statute of the IAEA was approved on 23 October 1956 and came into force on 29 July 1957. Former US Congressman W. Sterling Cole served as the IAEA's first Director General from 1957 to 1961. Cole served only one term, after which the IAEA was headed by two Swedes for nearly four decades: the scientist Sigvard Eklund held the job from 1961 to 1981, followed by former Swedish Foreign Minister Hans Blix, who served from 1981 to 1997. 
Blix was succeeded as Director General by Mohamed ElBaradei of Egypt, who served until November 2009. Beginning in 1986, in response to the nuclear reactor explosion and disaster near Chernobyl, Ukraine, the IAEA increased its efforts in the field of nuclear safety. The same happened after the 2011 Fukushima disaster in Fukushima, Japan. Both the IAEA and its then Director General, ElBaradei, were awarded the Nobel Peace Prize in 2005. In his acceptance speech in Oslo, ElBaradei stated that only one percent of the money spent on developing new weapons would be enough to feed the entire world, and that, if we hope to escape self-destruction, then nuclear weapons should have no place in our collective conscience, and no role in our security. On 2 July 2009, Yukiya Amano of Japan was elected as the Director General for the IAEA, defeating Abdul Samad Minty of South Africa and Luis E. Echávarri of Spain. On 3 July 2009, the Board of Governors voted to appoint Yukiya Amano "by acclamation", and the IAEA General Conference approved the appointment in September 2009. He took office on 1 December 2009. After Amano's death, his Chief of Coordination, Cornel Feruta of Romania, was named Acting Director General. On 2 August 2019, Rafael Grossi was presented as the Argentine candidate to become the Director General of the IAEA. On 28 October 2019, the IAEA Board of Governors held its first vote to elect the new Director General, but none of the candidates secured the two-thirds majority of the 35-member Board needed to be elected. The next day, 29 October, a second voting round was held, and Grossi won 24 votes, one more than the 23 required for appointment as Director General. He assumed office on 3 December 2019, when, following a special meeting of the IAEA General Conference to approve his appointment, he became the first Latin American to head the Agency. The IAEA's mission is guided by the interests and needs of Member States, strategic plans and the vision embodied in the IAEA Statute (see below). Three main pillars – or areas of work – underpin the IAEA's mission: Safety and Security; Science and Technology; and Safeguards and Verification. The IAEA as an autonomous organisation is not under direct control of the UN, but the IAEA does report to both the UN General Assembly and Security Council. Unlike most other specialised international agencies, the IAEA does much of its work with the Security Council, and not with the United Nations Economic and Social Council. The structure and functions of the IAEA are defined by its founding document, the IAEA Statute (see below). The IAEA has three main bodies: the Board of Governors, the General Conference, and the Secretariat. The IAEA exists to pursue the "safe, secure and peaceful uses of nuclear sciences and technology" (Pillars 2005). The IAEA executes this mission with three main functions: the inspection of existing nuclear facilities to ensure their peaceful use, providing information and developing standards to ensure the safety and security of nuclear facilities, and acting as a hub for the various fields of science involved in the peaceful applications of nuclear technology. The IAEA recognises knowledge as the nuclear energy industry's most valuable asset and resource, without which the industry cannot operate safely and economically. Following IAEA General Conference resolutions since 2002, a formal Nuclear Knowledge Management programme was established to address Member States' priorities in the 21st century. 
In 2004, the IAEA developed a Programme of Action for Cancer Therapy (PACT). PACT responds to the needs of developing countries to establish, improve, or expand radiotherapy treatment programs. The IAEA raises money to help efforts by its Member States to save lives and to reduce the suffering of cancer victims. The IAEA has established programs to help developing countries plan systematically to build the capability to manage a nuclear power program, including the Integrated Nuclear Infrastructure Group, which has carried out Integrated Nuclear Infrastructure Review missions in Indonesia, Jordan, Thailand and Vietnam. The IAEA reports that roughly 60 countries are considering how to include nuclear power in their energy plans. To enhance the sharing of information and experience among IAEA Member States concerning the seismic safety of nuclear facilities, in 2008 the IAEA established the International Seismic Safety Centre. This centre is establishing safety standards and providing for their application in relation to site selection, site evaluation and seismic design. The Board of Governors is one of the two policy-making bodies of the IAEA. The Board consists of 22 member states elected by the General Conference, and at least 10 member states nominated by the outgoing Board. The outgoing Board designates the ten members who are the most advanced in atomic energy technology, plus the most advanced members from any of the following areas that are not represented by the first ten: North America, Latin America, Western Europe, Eastern Europe, Africa, Middle East and South Asia, South East Asia, the Pacific, and the Far East (a sketch of this designation procedure appears at the end of this article). These members are designated for one-year terms. The General Conference elects 22 members from the remaining nations to two-year terms; eleven are elected each year. The 22 elected members must also represent a stipulated geographic diversity. The 35 Board members for the 2018–2019 period are: Argentina, Armenia, Australia, Azerbaijan, Belgium, Brazil, Canada, Chile, China, Ecuador, Egypt, France, Germany, India, Indonesia, Italy, Japan, Jordan, Kenya, the Republic of Korea, Morocco, the Netherlands, Niger, Pakistan, Portugal, the Russian Federation, Serbia, South Africa, the Sudan, Sweden, Thailand, the United Kingdom of Great Britain and Northern Ireland, the United States of America, Uruguay and the Bolivarian Republic of Venezuela. The Board, at its five meetings each year, is responsible for making most of the policy of the IAEA. The Board makes recommendations to the General Conference on IAEA activities and budget, is responsible for publishing IAEA standards, and appoints the Director General subject to General Conference approval. Board members each receive one vote. Budget matters require a two-thirds majority; all other matters require only a simple majority. The simple majority also has the power to stipulate issues that will thereafter require a two-thirds majority. Two-thirds of all Board members must be present to call a vote. The Board elects its own chairman. The General Conference is made up of all 171 member states. It meets once a year, typically in September, to approve the actions and budgets passed on from the Board of Governors. The General Conference also approves the nominee for Director General and requests reports from the Board on issues in question (Statute). Each member receives one vote. 
Issues of budget, Statute amendment and suspension of a member's privileges require a two-thirds majority; all other issues require a simple majority. Similar to the Board, the General Conference can, by simple majority, designate issues to require a two-thirds majority. The General Conference elects a President at each annual meeting to facilitate an effective meeting. The President serves only for the duration of the session (Statute). The main function of the General Conference is to serve as a forum for debate on current issues and policies. Any of the other IAEA organs, the Director General, the Board and member states can table issues to be discussed by the General Conference (IAEA Primer). This function of the General Conference is almost identical to that of the General Assembly of the United Nations. The Secretariat is the professional and general service staff of the IAEA. The Secretariat is headed by the Director General. The Director General is responsible for enforcement of the actions passed by the Board of Governors and the General Conference. The Director General is selected by the Board and approved by the General Conference for renewable four-year terms. The Director General oversees six departments that do the actual work in carrying out the policies of the IAEA: Nuclear Energy, Nuclear Safety and Security, Nuclear Sciences and Applications, Safeguards, Technical Cooperation, and Management. The IAEA budget is in two parts. The regular budget funds most activities of the IAEA and is assessed to each member nation (€344 million in 2014). The Technical Cooperation Fund is funded by voluntary contributions, with a general target in the US$90 million range. The IAEA is generally described as having three main missions. According to Article II of the IAEA Statute, the objective of the IAEA is "to accelerate and enlarge the contribution of atomic energy to peace, health and prosperity throughout the world." Its primary functions in this area, according to Article III, are to encourage research and development, to secure or provide materials, services, equipment and facilities for Member States, and to foster the exchange of scientific and technical information and training. Three of the IAEA's six Departments are principally charged with promoting the peaceful uses of nuclear energy. The Department of Nuclear Energy focuses on providing advice and services to Member States on nuclear power and the nuclear fuel cycle. The Department of Nuclear Sciences and Applications focuses on the use of non-power nuclear and isotope techniques to help IAEA Member States in the areas of water, energy, health, biodiversity, and agriculture. The Department of Technical Cooperation provides direct assistance to IAEA Member States through national, regional, and inter-regional projects, training, expert missions, scientific exchanges, and the provision of equipment. Article II of the IAEA Statute defines the Agency's twin objectives as promoting peaceful uses of atomic energy and "ensur[ing], so far as it is able, that assistance provided by it or at its request or under its supervision or control is not used in such a way as to further any military purpose." 
To do this, the IAEA is authorised in Article III.A.5 of the Statute "to establish and administer safeguards designed to ensure that special fissionable and other materials, services, equipment, facilities, and information made available by the Agency or at its request or under its supervision or control are not used in such a way as to further any military purpose; and to apply safeguards, at the request of the parties, to any bilateral or multilateral arrangement, or at the request of a State, to any of that State's activities in the field of atomic energy." The Department of Safeguards is responsible for carrying out this mission, through technical measures designed to verify the correctness and completeness of states' nuclear declarations. The IAEA classifies safety as one of its top three priorities. In 2011 it spent 8.9 percent of its €352 million ($469 million) regular budget on making plants secure from accidents; the rest of its resources went to the other two priorities: technical co-operation and preventing nuclear weapons proliferation. The IAEA itself says that, beginning in 1986, in response to the nuclear reactor explosion and disaster near Chernobyl, Ukraine, it redoubled its efforts in the field of nuclear safety, and that the same happened after the Fukushima disaster in Fukushima, Japan. In June 2011, the IAEA chief said he had "broad support for his plan to strengthen international safety checks on nuclear power plants to help avoid any repeat of Japan's Fukushima crisis". Peer-reviewed safety checks on reactors worldwide, organised by the IAEA, have been proposed. In 2011, the Russian nuclear accident specialist Iouli Andreev was critical of the response to Fukushima, saying that the IAEA did not learn from the 1986 Chernobyl disaster. He has accused the IAEA and corporations of "wilfully ignoring lessons from the world's worst nuclear accident 25 years ago to protect the industry's expansion". The IAEA's role "as an advocate for nuclear power has made it a target for protests". The journal "Nature" has reported that the IAEA response to the 2011 Fukushima Daiichi nuclear disaster in Japan was "sluggish and sometimes confusing", drawing calls for the agency to "take a more proactive role in nuclear safety". But nuclear experts say that the agency's complicated mandate and the constraints imposed by its member states mean that reforms will not happen quickly or easily, although its INES "emergency scale is very likely to be revisited" given the confusing way in which it was used in Japan. Some scientists say that the Fukushima nuclear accidents have revealed that the nuclear industry lacks sufficient oversight, leading to renewed calls to redefine the mandate of the IAEA so that it can better police nuclear power plants worldwide. There are several problems with the IAEA, says Najmedin Meshkati of the University of Southern California: it recommends safety standards, but member states are not required to comply; it promotes nuclear energy, but it also monitors nuclear use; and it is the sole global organisation overseeing the nuclear energy industry, yet it is also weighed down by checking compliance with the Nuclear Non-Proliferation Treaty (NPT). In 2011, the journal "Nature" reported that the International Atomic Energy Agency should be strengthened to make independent assessments of nuclear safety and that "the public would be better served by an IAEA more able to deliver frank and independent assessments of nuclear crises as they unfold". 
The process of joining the IAEA is fairly simple. Normally, a State notifies the Director General of its desire to join, and the Director General submits the application to the Board for consideration. If the Board recommends approval, and the General Conference approves the application for membership, the State must then submit its instrument of acceptance of the IAEA Statute to the United States, which functions as the depositary Government for the IAEA Statute. The State is considered a member when its acceptance letter is deposited. The United States then informs the IAEA, which notifies other IAEA Member States. Signature and ratification of the Nuclear Non-Proliferation Treaty (NPT) are not preconditions for membership in the IAEA. The IAEA has 171 member states. Most UN members and the Holy See are Member States of the IAEA. Non-member states Cape Verde (2007), Tonga (2011), Comoros (2014) and Gambia (2016) have been approved for membership and will each become a Member State once it deposits the necessary legal instruments. Four states have withdrawn from the IAEA. North Korea was a Member State from 1974 to 1994, but withdrew after the Board of Governors found it in non-compliance with its safeguards agreement and suspended most technical co-operation. Nicaragua became a member in 1957, withdrew its membership in 1970, and rejoined in 1977; Honduras joined in 1957, withdrew in 1967, and rejoined in 2003; while Cambodia joined in 1958, withdrew in 2003, and rejoined in 2009. There are four regional cooperative areas within the IAEA that share information and organize conferences within their regions: the African Regional Cooperative Agreement for Research, Development and Training Related to Nuclear Science and Technology (AFRA); the Cooperative Agreement for Arab States in Asia for Research, Development and Training related to Nuclear Science and Technology (ARASIA); the Regional Cooperative Agreement for Research, Development and Training Related to Nuclear Science and Technology for Asia and the Pacific (RCA); and the Cooperation Agreement for the Promotion of Nuclear Science and Technology in Latin America and the Caribbean (ARCAL). Typically issued in July each year, the IAEA Annual Report summarizes and highlights developments over the past year in major areas of the Agency's work. It includes a summary of major issues, activities, and achievements, and status tables and graphs related to safeguards, safety, and science and technology.
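The Board designation step described above is in effect a small selection algorithm, and a minimal sketch may make it concrete. The Python below assumes a hypothetical ranking of member states by advancement in atomic energy technology; the member tuples and rankings are illustrative inputs, not an official IAEA procedure.

    # The listed areas from which unrepresented "most advanced" members are drawn.
    REGIONS = ["North America", "Latin America", "Western Europe", "Eastern Europe",
               "Africa", "Middle East and South Asia", "South East Asia",
               "the Pacific", "the Far East"]

    def designate(members):
        """members: (state, region, rank) tuples, lower rank = more advanced.
        Returns the members designated by the outgoing Board for one-year terms."""
        ranked = sorted(members, key=lambda m: m[2])
        designated = ranked[:10]              # the ten most advanced overall
        covered = {region for _, region, _ in designated}
        for region in REGIONS:                # add the most advanced member from
            if region not in covered:         # each area the first ten leave out
                candidates = [m for m in ranked[10:] if m[1] == region]
                if candidates:
                    designated.append(candidates[0])
        return designated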
https://en.wikipedia.org/wiki?curid=14984
International Civil Aviation Organization The International Civil Aviation Organization (ICAO) is a specialized agency of the United Nations. It codifies the principles and techniques of international air navigation and fosters the planning and development of international air transport to ensure safe and orderly growth. Its headquarters is located in the "Quartier International" of Montreal, Quebec, Canada. The ICAO Council adopts standards and recommended practices concerning air navigation, its infrastructure, flight inspection, prevention of unlawful interference, and facilitation of border-crossing procedures for international civil aviation. ICAO defines the protocols for air accident investigation that are followed in countries signatory to the Chicago Convention on International Civil Aviation. The Air Navigation Commission (ANC) is the technical body within ICAO. The Commission is composed of 19 Commissioners, nominated by ICAO's contracting states and appointed by the ICAO Council. Commissioners serve as independent experts who, although nominated by their states, do not serve as state or political representatives. International Standards and Recommended Practices are developed under the direction of the ANC through the formal process of ICAO Panels. Once approved by the Commission, standards are sent to the Council, the political body of ICAO, for consultation and coordination with the Member States before final adoption. ICAO is distinct from other international air transport organizations, particularly because it alone is vested with international authority (among signatory states): other organizations include the International Air Transport Association (IATA), a trade association representing airlines; the Civil Air Navigation Services Organization (CANSO), an organization for air navigation service providers (ANSPs); and the Airports Council International, a trade association of airport authorities. The forerunner to ICAO was the International Commission for Air Navigation (ICAN). It held its first convention in 1903 in Berlin, Germany, but no agreements were reached among the eight countries that attended. At the second convention in 1906, also held in Berlin, twenty-seven countries attended. The third convention, held in London in 1912, allocated the first radio callsigns for use by aircraft. ICAN continued to operate until 1945. Fifty-two countries signed the Convention on International Civil Aviation, also known as the Chicago Convention, in Chicago, Illinois, on 7 December 1944. Under its terms, a Provisional International Civil Aviation Organization (PICAO) was to be established, to be replaced in turn by a permanent organization when twenty-six countries ratified the convention. Accordingly, PICAO began operating on 6 June 1945, replacing ICAN. The twenty-sixth country ratified the Convention on 5 March 1947 and, consequently, PICAO was disestablished on 4 April 1947 and replaced by ICAO, which began operations the same day. In October 1947, ICAO became an agency of the United Nations under its Economic and Social Council (ECOSOC). In April 2013, Qatar offered to serve as the new permanent seat of the Organization. Qatar promised to construct a massive new headquarters for ICAO and to cover all moving expenses, stating that Montreal "was too far from Europe and Asia", "had cold winters", was hard to attend due to the Canadian government's slow issuance of visas, and that the taxes imposed on ICAO by Canada were too high. 
According to "The Globe and Mail", Qatar's invitation was at least partly motivated by the pro-Israel foreign policy of Canadian Prime Minister Stephen Harper. Approximately one month later, Qatar withdrew its bid after a separate proposal to the ICAO's governing council to move the ICAO triennial conference to Doha was defeated by a vote of 22–14. In January 2020, ICAO blocked a number of Twitter users—among them think-tank analysts, employees of the United States Congress, and journalists—who mentioned Taiwan in tweets related to ICAO. Many of the tweets concerned the COVID-19 pandemic and Taiwan's exclusion from ICAO safety and health bulletins due to Chinese pressure. In response to questions from reporters, ICAO issued a tweet stating that publishers of "irrelevant, compromising and offensive material" would be "precluded". Since that action the organization has followed a policy of blocking anyone asking about it. The United States House Committee on Foreign Affairs harshly criticized ICAO's perceived failure to uphold principles of fairness, inclusion, and transparency by silencing non-disruptive opposing voices. Senator Marco Rubio also criticized the move, as did the Ministry of Foreign Affairs (Taiwan) and Taiwanese legislators, with MOFA head Jaushieh Joseph Wu tweeting in support of those blocked. Anthony Philbin, chief of communications for the ICAO Secretary General, rejected criticism of ICAO's handling of the situation: "We felt we were completely warranted in taking the steps we did to defend the integrity of the information and discussions our followers should reasonably expect from our feeds." In exchanges with International Flight Network, Philbin refused to acknowledge the existence of Taiwan. On 1 February 2020, the US State Department issued a press release which heavily criticized ICAO's actions, characterizing them as "outrageous, unacceptable, and not befitting of a UN organization." The 9th edition of the Convention on International Civil Aviation includes modifications from the years 1948 up to 2006. ICAO refers to the current edition of the Convention as the "Statute" and designates it as ICAO Document 7300/9. The Convention has 19 Annexes, which are listed by title in the article Convention on International Civil Aviation. There are currently 193 ICAO members, consisting of 192 of the 193 UN members (all but Liechtenstein, which lacks an international airport), plus the Cook Islands. Despite Liechtenstein not being a direct party to ICAO, its government has delegated Switzerland to enter into the treaty on its behalf, and the treaty applies in the territory of Liechtenstein. The Republic of China (Taiwan) was a founding member of ICAO, but was replaced by the People's Republic of China as the legal representative of China in 1971 and, as such, did not take part in the organization. In 2013, Taiwan was for the first time invited to attend the ICAO Assembly, at its 38th session, as a guest under the name of Chinese Taipei. Since then, it has not been invited to participate again, due to renewed PRC pressure. The host government, Canada, supports Taiwan's inclusion in ICAO. Support also comes from Canada's commercial sector, with the president of the Air Transport Association of Canada saying in 2019 that "It's about safety in aviation so from a strictly operational and non-political point of view, I believe Taiwan should be there." The Council of ICAO is elected by the Assembly every 3 years and consists of 36 members elected in 3 groups. The present Council was elected in October 2019. 
The structure of the present Council is as follows: ICAO also standardizes certain functions for use in the airline industry, such as the Aeronautical Message Handling System (AMHS); this makes it a standards organization. Each country should have an accessible Aeronautical Information Publication (AIP), based on standards defined by ICAO, containing information essential to air navigation. Countries are required to update their AIP manuals every 28 days, and so provide definitive regulations, procedures and information for each country about airspace and airports. ICAO's standards also dictate that temporary hazards to aircraft must be regularly published using NOTAMs. ICAO defines an International Standard Atmosphere (also known as the ICAO Standard Atmosphere), a model of the standard variation of pressure, temperature, density, and viscosity with altitude in the Earth's atmosphere (a worked sketch of this model appears at the end of this article). This is useful in calibrating instruments and designing aircraft. The standardized pressure is also used in calibrating instruments in flight, particularly above the transition altitude. ICAO is active in infrastructure management, including communication, navigation and surveillance / air traffic management (CNS/ATM) systems, which employ digital technologies (like satellite systems with various levels of automation) in order to maintain a seamless global air traffic management system. ICAO has published standards for machine-readable passports. Machine-readable passports have an area where some of the information otherwise written in textual form is also written as strings of alphanumeric characters, printed in a manner suitable for optical character recognition. This enables border controllers and other law enforcement agents to process such passports more quickly, without having to enter the information manually into a computer. ICAO's technical standard for machine-readable passports is contained in Document 9303 "Machine Readable Travel Documents" (its check-digit scheme is sketched at the end of this article). A more recent standard covers biometric passports. These contain biometrics to authenticate the identity of travellers. The passport's critical information is stored on a tiny RFID computer chip, much like the information stored on smart cards. Like some smart cards, the passport book design calls for an embedded contactless chip that is able to hold digital signature data to ensure the integrity of the passport and the biometric data. Both ICAO and IATA have their own airport and airline code systems. ICAO uses 4-letter airport codes (vs. IATA's 3-letter codes). The ICAO code is based on the region and country of the airport—for example, Charles de Gaulle Airport has an ICAO code of LFPG, where "L" indicates Southern Europe, "F", France, and "PG", Paris de Gaulle, while Orly Airport has the code LFPO (the 3rd letter sometimes refers to the particular flight information region (FIR), or the last two may be arbitrary). In most parts of the world, ICAO and IATA codes are unrelated; for example, Charles de Gaulle Airport has an IATA code of CDG. However, the location prefix for the continental United States is "K", and ICAO codes there are usually the IATA code with this prefix. For example, the ICAO code for Los Angeles International Airport is KLAX. Canada follows a similar pattern, where a prefix of "C" is usually added to an IATA code to create the ICAO code. For example, Calgary International Airport is YYC or CYYC. (In contrast, airports in Hawaii are in the Pacific region and so have ICAO codes that start with "PH"; Kona International Airport's code is PHKO. 
Similarly, airports in Alaska have ICAO codes that start with "PA". Merrill Field, for instance, is PAMR.) Note that not all airports are assigned codes in both systems; for example, airports that do not have airline service do not need an IATA code. ICAO also assigns 3-letter airline codes (versus the more familiar 2-letter IATA codes—for example, "UAL" vs. "UA" for United Airlines). ICAO also provides telephony designators to aircraft operators worldwide, a one- or two-word designator used on the radio, usually, but not always, similar to the aircraft operator name. For example, the identifier for Japan Airlines International is "JAL" and the designator is "Japan Air", but Aer Lingus is "EIN" and "Shamrock". Thus, a Japan Airlines flight numbered 111 would be written as "JAL111" and pronounced "Japan Air One One One" on the radio, while a similarly numbered Aer Lingus flight would be written as "EIN111" and pronounced "Shamrock One One One" (a sketch of this convention appears at the end of this article). In the US, FAA practices require the digits of the flight number to be spoken in group format ("Japan Air One Eleven" in the above example), while individual digits are used for the aircraft tail number used for unscheduled civil flights. ICAO maintains the standards for aircraft registration ("tail numbers"), including the alphanumeric codes that identify the country of registration. For example, airplanes registered in the United States have tail numbers starting with "N". ICAO is also responsible for issuing 2-4 character alphanumeric "aircraft type designators" for those aircraft types which are most commonly provided with air traffic service. These codes provide an abbreviated aircraft type identification, typically used in flight plans. For example, the Boeing 747-100, -200 and -300 are given the type designators "B741", "B742" and "B743" respectively. ICAO recommends a unification of units of measurement within aviation based on the International System of Units (SI). Technically this makes SI units preferred, but in practice several non-SI units are still in widespread use within commercial aviation: knots, nautical miles and feet have been permitted for temporary use since 1979, but a termination date has not yet been established, which would complete the metrication of worldwide aviation. Since 2010, ICAO has recommended the use of SI units. Notably, aviation in Russia, Sweden and China currently uses km/h for reporting airspeed, and many present-day European glider planes also indicate airspeed in kilometres per hour. Sweden, China and North Korea use metres for reporting altitude when communicating with pilots. Russia also formerly used metres exclusively for reporting altitude, but in 2011 changed to feet for high-altitude flight. From February 2017, Russian airspace started transitioning to reporting altitude in feet only. Runway lengths are now commonly given in metres worldwide, except in North America, where feet are commonly used. A full list of recommended units, covering the quantities commonly used in flight and ground operations (such as altitude, elevation and height) together with their recommended replacements, can be found in Annex 5 to the Convention on International Civil Aviation. ICAO has a headquarters, seven regional offices, and one regional sub-office. Emissions from international aviation are specifically excluded from the targets agreed under the Kyoto Protocol. Instead, the Protocol invites developed countries to pursue the limitation or reduction of emissions through the International Civil Aviation Organization. 
ICAO's environmental committee continues to consider the potential for using market-based measures such as trading and charging, but this work is unlikely to lead to global action. It is currently developing guidance for states that wish to include aviation in an emissions trading scheme (ETS) to meet their Kyoto commitments, and for airlines that wish to participate voluntarily in a trading scheme. Emissions from domestic aviation are included within the Kyoto targets agreed by countries. This has led to some national policies such as fuel and emission taxes for domestic air travel in the Netherlands and Norway, respectively. Although some countries tax the fuel used by domestic aviation, there is no duty on kerosene used on international flights. ICAO is currently opposed to the inclusion of aviation in the European Union Emission Trading Scheme (EU ETS). The EU, however, is pressing ahead with its plans to include aviation. On 6 October 2016, ICAO finalized an agreement among its 191 member nations to address the carbon dioxide emitted annually by international passenger and cargo flights. The agreement will use an offsetting scheme called CORSIA (the Carbon Offsetting and Reduction Scheme for International Aviation), under which forestry and other carbon-reducing activities are directly funded, amounting to about 2% of annual revenues for the sector. Rules against 'double counting' should ensure that existing forest protection efforts are not recycled. The scheme does not take effect until 2021 and will be voluntary until 2027, but many countries, including the US and China, have promised to begin at its 2020 inception date. Under the agreement, the global aviation emissions target is a 50% reduction by 2050 relative to 2005. NGO reaction to the deal was mixed, and the agreement has critics. It is not aligned with the 2015 Paris climate agreement, which set the objective of restricting global warming to 1.5 to 2 °C. A late draft of the agreement would have required the air transport industry to assess its share of global carbon budgeting to meet that objective, but the text was removed in the agreed version. CORSIA will regulate only about 25 percent of aviation's international emissions, since it grandfathers all emissions below the 2020 level, allowing unregulated growth until then. Only 65 nations will participate in the initial voluntary period, not including the significant emitters Russia, India and perhaps Brazil. The agreement does not cover domestic emissions, which are 40% of the global industry's overall emissions. One critic called it "a timid step in the right direction." Most air accident investigations are carried out by an agency of a country that is associated in some way with the accident. For example, the Air Accidents Investigation Branch conducts accident investigations on behalf of the British Government. ICAO has conducted four investigations involving air disasters, of which two concerned passenger airliners shot down while in international flight over hostile territory. ICAO is looking at having a single ledger for drone registration to help law enforcement globally. Currently, ICAO is responsible for creating drone regulations across the globe, and it is expected that it will only maintain the registry. This activity is seen as a forerunner to global regulations on flying drones under the auspices of ICAO. 
ICAO currently maintains the 'UAS Regulation Portal', where countries list their UAS regulations and review best practices from across the globe.
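The International Standard Atmosphere mentioned above can be made concrete with a short worked example. The sketch below implements the standard constant-lapse-rate relations for the troposphere (sea-level temperature 288.15 K, pressure 101,325 Pa, lapse rate 6.5 K/km); it illustrates the published model rather than offering a certified implementation.

    T0 = 288.15     # sea-level standard temperature, K (15 degrees C)
    P0 = 101325.0   # sea-level standard pressure, Pa
    LAPSE = 0.0065  # temperature lapse rate, K/m
    G = 9.80665     # standard gravity, m/s^2
    R = 287.053     # specific gas constant of dry air, J/(kg*K)

    def isa(altitude_m):
        """Return (temperature K, pressure Pa, density kg/m^3) for the ISA troposphere."""
        if not 0 <= altitude_m <= 11000:
            raise ValueError("this sketch covers only the troposphere (0-11 km)")
        t = T0 - LAPSE * altitude_m
        p = P0 * (t / T0) ** (G / (R * LAPSE))  # barometric formula, constant lapse rate
        rho = p / (R * t)                       # ideal gas law
        return t, p, rho

    # At 3,048 m (10,000 ft): roughly 268.3 K (-4.8 C) and about 69,700 Pa.
    print(isa(3048))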
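Document 9303 also specifies how the check digits in the machine-readable zone are computed, and the algorithm is simple enough to sketch: each character maps to a numeric value (digits as themselves, A-Z as 10-35, the filler "<" as 0), a repeating 7-3-1 weighting is applied, and the check digit is the weighted sum modulo 10. The sample field below is made up for illustration.

    def mrz_value(ch):
        """Map an MRZ character (digit, upper-case letter, or '<') to its value."""
        if ch.isdigit():
            return int(ch)
        if ch == "<":
            return 0
        return ord(ch) - ord("A") + 10    # 'A' -> 10 ... 'Z' -> 35

    def check_digit(field):
        """Doc 9303 check digit: weighted sum with weights 7, 3, 1, modulo 10."""
        weights = (7, 3, 1)
        total = sum(mrz_value(ch) * weights[i % 3] for i, ch in enumerate(field))
        return str(total % 10)

    # Example: a date field "520727" gives 5*7 + 2*3 + 0*1 + 7*7 + 2*3 + 7*1 = 103,
    # and 103 mod 10 = 3, so the check digit is "3".
    assert check_digit("520727") == "3"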
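The callsign convention described above (a three-letter designator plus a flight number, spoken as the telephony designator followed by the digits) can likewise be sketched in a few lines. The designator table below is a tiny sample assembled from the article's own examples; the full assignments are published by ICAO in Doc 8585.

    TELEPHONY = {"JAL": "Japan Air", "EIN": "Shamrock", "UAL": "United"}  # sample only
    DIGITS = {"0": "Zero", "1": "One", "2": "Two", "3": "Three", "4": "Four",
              "5": "Five", "6": "Six", "7": "Seven", "8": "Eight", "9": "Nine"}

    def spoken(callsign):
        """Render a callsign such as 'JAL111' the way it is said on the radio,
        digit by digit, as in the examples in the article."""
        designator, number = callsign[:3], callsign[3:]
        words = [TELEPHONY.get(designator, designator)]
        words += [DIGITS[d] for d in number]
        return " ".join(words)

    print(spoken("JAL111"))  # Japan Air One One One
    print(spoken("EIN111"))  # Shamrock One One One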
https://en.wikipedia.org/wiki?curid=14985
International Maritime Organization The International Maritime Organization (IMO) (French: "Organisation Maritime Internationale" (OMI)), known as the Inter-Governmental Maritime Consultative Organization (IMCO) until 1982, is a specialised agency of the United Nations responsible for regulating shipping. The IMO was established following agreement at a UN conference held in Geneva in 1948, and it came into existence ten years later, meeting for the first time in 1959. Headquartered in London, United Kingdom, the IMO currently has 174 member states and three associate members. The IMO's primary purpose is to develop and maintain a comprehensive regulatory framework for shipping, and its remit today includes safety, environmental concerns, legal matters, technical co-operation, maritime security and the efficiency of shipping. The IMO is governed by an assembly of members and is financially administered by a council of members elected from the assembly. The work of the IMO is conducted through five committees, and these are supported by technical subcommittees. Other UN organisations may observe the proceedings of the IMO. Observer status is granted to qualified non-governmental organisations. The IMO is supported by a permanent secretariat of employees who are representative of the organisation's members. The secretariat is composed of a Secretary-General, who is periodically elected by the assembly, and various divisions, such as those for marine safety, environmental protection and a conference section. The Inter-Governmental Maritime Consultative Organization (IMCO) was formed in order to bring the regulation of the safety of shipping into an international framework, for which the creation of the United Nations provided an opportunity. Hitherto such international conventions had been initiated piecemeal, notably the Safety of Life at Sea Convention (SOLAS), first adopted in 1914 following the "Titanic" disaster. IMCO's first task was to update that convention; the resulting 1960 convention was subsequently recast and updated in 1974, and it is that convention that has been subsequently modified and updated to adapt to changes in safety requirements and technology. When IMCO began its operations in 1959, certain other pre-existing conventions were brought under its aegis, most notably the International Convention for the Prevention of Pollution of the Sea by Oil (OILPOL) of 1954. The first meetings of the newly formed IMCO were held in London in 1959. Throughout its existence IMCO, renamed the IMO in 1982, has continued to produce new and updated conventions across a wide range of maritime issues covering not only safety of life and marine pollution but also encompassing safe navigation, search and rescue, wreck removal, tonnage measurement, liability and compensation, ship recycling, the training and certification of seafarers, and piracy. More recently SOLAS has been amended to bring an increased focus on maritime security through the International Ship and Port Facility Security (ISPS) Code. The IMO has also increased its focus on smoke emissions from ships. In January 1959, IMO began to maintain and promote the 1954 OILPOL Convention. Under the guidance of IMO, the convention was amended in 1962, 1969, and 1971. As the oil trade and industry developed, many people in the industry began to recognise a need for further improvements in regards to oil pollution prevention at sea. 
This became increasingly apparent in 1967, when the tanker "Torrey Canyon" spilled 120,000 tons of crude oil after running aground entering the English Channel. The "Torrey Canyon" grounding was the largest oil pollution incident recorded up to that time. This incident prompted a series of new conventions. IMO held an emergency session of its Council to deal with the need to readdress regulations pertaining to maritime pollution. In 1969, the IMO Assembly decided to host an international gathering in 1973 dedicated to this issue. The goal was to develop an international agreement for controlling general environmental contamination by ships when out at sea.
During the next few years IMO brought to the forefront a series of measures designed to prevent large ship accidents and to minimise their effects. It also detailed how to deal with the environmental threat caused by routine ship duties such as the cleaning of oil cargo tanks or the disposal of engine room wastes; by tonnage, such operational pollution was a bigger problem than accidental pollution. The most significant result of this conference was the International Convention for the Prevention of Pollution from Ships, 1973 (MARPOL). It covers not only accidental and operational oil pollution but also different types of pollution by chemicals, goods in packaged form, sewage, garbage and air pollution. The original MARPOL was signed on 17 February 1973, but did not come into force due to a lack of ratifications. The current convention is a combination of the 1973 Convention and the 1978 Protocol; it entered into force on 2 October 1983. As of May 2013, 152 states, representing 99.2 per cent of the world's shipping tonnage, are party to the convention.
In 1983 the IMO established the World Maritime University in Malmö, Sweden.
The IMO headquarters are located in a large purpose-built building facing the River Thames on the Albert Embankment, in Lambeth, London. The organisation moved into its new headquarters in late 1982, and the building was officially opened by Queen Elizabeth II on 17 May 1983. The architects of the building were Douglass Marriott, Worby & Robinson. The front of the building is dominated by a seven-metre-high, ten-tonne bronze sculpture of the bow of a ship, with a lone seafarer maintaining a look-out. The previous headquarters of the IMO were at 101 Piccadilly (now the home of the Embassy of Japan), prior to that at 22 Berners Street in Fitzrovia, and originally in Chancery Lane.
To become a member of the IMO, a state ratifies a multilateral treaty known as the Convention on the International Maritime Organization. As of 2020, there are 174 member states of the IMO, comprising 173 UN member states plus the Cook Islands. The first state to ratify the convention was the United Kingdom, in 1949. The most recent members to join were Armenia and Nauru, which became IMO members in January and May 2018, respectively.
These are the current members with the year they joined: Albania (1993) Algeria (1963) Angola (1977) Antigua and Barbuda (1986) Argentina (1953) Armenia (2018) Australia (1952) Austria (1975) Azerbaijan (1995) Bahamas (1976) Bahrain (1976) Bangladesh (1976) Barbados (1970) Belarus (2016) Belgium (1951) Belize (1990) Benin (1980) Bolivia (1987) Bosnia and Herzegovina (1993) Brazil (1963) Brunei Darussalam (1984) Bulgaria (1960) Cabo Verde (1976) Cambodia (1961) Cameroon (1961) Canada (1948) Chile (1972) China (1973) Colombia (1974) Comoros (2001) Congo (1975) Cook Islands (2008) Costa Rica (1981) Côte d'Ivoire (1960) Croatia (1992) Cuba (1966) Cyprus (1973) Czechia (1993) Democratic People's Republic of Korea (1986) Democratic Republic of the Congo (1973) Denmark (1959) Djibouti (1979) Dominica (1979) Dominican Republic (1953) Ecuador (1956) Egypt (1958) El Salvador (1981) Equatorial Guinea (1972) Eritrea (1993) Estonia (1992) Ethiopia (1975) Fiji (1983) Finland (1959) France (1952) Gabon (1976) Gambia (1979) Georgia (1993) Germany (1959) Ghana (1959) Greece (1958) Grenada (1998) Guatemala (1983) Guinea (1975) Guinea-Bissau (1977) Guyana (1980) Haiti (1953) Honduras (1954) Hungary (1970) Iceland (1960) India (1959) Indonesia (1961) Iran (1958) Iraq (1973) Ireland (1951) Israel (1952) Italy (1957) Jamaica (1976) Japan (1958) Jordan (1973) Kazakhstan (1994) Kenya (1973) Kiribati (2003) Kuwait (1960) Latvia (1993) Lebanon (1966) Liberia (1959) Libya (1970) Lithuania (1995) Luxembourg (1991) Madagascar (1961) Malawi (1989) Malaysia (1971) Maldives (1967) Malta (1966) Marshall Islands (1998) Mauritania (1961) Mauritius (1978) Mexico (1954) Monaco (1989) Mongolia (1996) Montenegro (2006) Morocco (1962) Mozambique (1979) Myanmar (1951) Namibia (1994) Nauru (2018) Nepal (1979) Netherlands (1949) New Zealand (1960) Nicaragua (1982) Nigeria (1962) North Macedonia (1993) Norway (1958) Oman (1974) Pakistan (1958) Palau (2011) Panama (1958) Papua New Guinea (1976) Paraguay (1993) Peru (1968) Philippines (1964) Poland (1960) Portugal (1976) Qatar (1977) Republic of Korea (1962) Republic of Moldova (2001) Romania (1965) Russian Federation (1958) Saint Kitts and Nevis (2001) Saint Lucia (1980) Saint Vincent and the Grenadines (1981) Samoa (1996) San Marino (2002) São Tomé and Príncipe (1990) Saudi Arabia (1969) Senegal (1960) Serbia (2000) Seychelles (1978) Sierra Leone (1973) Singapore (1966) Slovakia (1993) Slovenia (1993) Solomon Islands (1988) Somalia (1978) South Africa (1995) Spain (1962) Sri Lanka (1972) Sudan (1974) Suriname (1976) Sweden (1959) Switzerland (1955) Syria (1963) Tanzania (1974) Thailand (1973) Timor-Leste (2005) Togo (1983) Tonga (2000) Trinidad and Tobago (1965) Tunisia (1963) Turkey (1958) Turkmenistan (1993) Tuvalu (2004) Uganda (2009) Ukraine (1994) United Arab Emirates (1980) United Kingdom (1949) United States of America (1950) Uruguay (1968) Vanuatu (1986) Venezuela (1975) Viet Nam (1984) Yemen (1979) Zambia (2014) Zimbabwe (2005)
The three associate members of the IMO are the Faroe Islands, Hong Kong and Macao. In 1961, the territories of Sabah and Sarawak, which had been included through the participation of the United Kingdom, became joint associate members; in 1963 they became part of Malaysia.
Most UN member states that are not members of the IMO are landlocked countries.
These include Afghanistan, Andorra, Bhutan, Botswana, Burkina Faso, Burundi, Central African Republic, Chad, Kyrgyzstan, Laos, Lesotho, Liechtenstein, Mali, Niger, Rwanda, South Sudan, Swaziland, Tajikistan and Uzbekistan. However, the Federated States of Micronesia, an island nation in the Pacific Ocean, is also a non-member, as is Taiwan, itself a non-member of the UN.
The IMO consists of an Assembly, a Council and five main Committees: the Maritime Safety Committee; the Marine Environment Protection Committee; the Legal Committee; the Technical Co-operation Committee; and the Facilitation Committee. A number of Sub-Committees support the work of the main technical committees.
The IMO is the source of approximately 60 legal instruments that guide the regulatory development of its member states to improve safety at sea, facilitate trade among seafaring states and protect the maritime environment. The best known is the International Convention for the Safety of Life at Sea (SOLAS), as well as the International Convention on Oil Pollution Preparedness, Response and Co-operation (OPRC); others include the International Oil Pollution Compensation Funds (IOPC). The IMO also functions as a depositary for treaties that have yet to be ratified, such as the International Convention on Liability and Compensation for Damage in Connection with the Carriage of Hazardous and Noxious Substances by Sea, 1996 (HNS Convention) and the Nairobi International Convention on the Removal of Wrecks (2007).
The IMO regularly enacts regulations, which are broadly enforced by national and local maritime authorities in member countries, such as the International Regulations for Preventing Collisions at Sea (COLREG). The IMO has also established a Port State Control (PSC) authority, allowing domestic maritime authorities such as coast guards to inspect foreign-flag ships calling at their ports. Memoranda of Understanding (protocols) have been signed by some countries to unify Port State Control procedures among the signatories.
Recent initiatives at the IMO have included amendments to SOLAS, which upgraded fire protection standards on passenger ships; to the International Convention on Standards of Training, Certification and Watchkeeping for Seafarers (STCW), which establishes basic requirements on training, certification and watchkeeping for seafarers; and to the Convention on the Prevention of Maritime Pollution (MARPOL 73/78), which required double hulls on all tankers.
In December 2002, new amendments to the 1974 SOLAS Convention were enacted. These amendments gave rise to the International Ship and Port Facility Security (ISPS) Code, which went into effect on 1 July 2004. The concept of the code is to provide layered and redundant defences against smuggling, terrorism, piracy, stowaways, etc. The ISPS Code required most ships and port facilities engaged in international trade to establish and maintain strict security procedures as specified in ship- and port-specific Ship Security Plans and Port Facility Security Plans.
The IMO has a role in tackling international climate change.
The First Intersessional Meeting of IMO's Working Group on Greenhouse Gas Emissions from Ships took place in Oslo, Norway (23–27 June 2008). It was tasked with developing the technical basis for the reduction mechanisms that may form part of a future IMO regime to control greenhouse gas emissions from international shipping, and a draft of the actual reduction mechanisms themselves, for further consideration by IMO's Marine Environment Protection Committee (MEPC). The IMO participated in the 2015 United Nations Climate Change Conference in Paris, seeking to establish itself as the "appropriate international body to address greenhouse gas emissions from ships engaged in international trade". Nonetheless, there has been widespread criticism of the IMO's relative inaction since the conclusion of the Paris conference, with the initial data-gathering step of a three-stage process to reduce maritime greenhouse emissions expected to last until 2020. The IMO has also taken action to mitigate the global effects of ballast water and sediment discharge through the 2004 Ballast Water Management Convention, which entered into force in September 2017.
The IMO is also responsible for publishing the International Code of Signals for use between merchant and naval vessels. Under the name e-Navigation, the IMO has harmonised the information available to seafarers and shore-side traffic services. An e-Navigation strategy was ratified in 2005, and an implementation plan was developed through three IMO sub-committees. The plan was completed by 2014 and implemented in November of that year. The IMO has also served as a key partner and enabler of US international and interagency efforts to establish Maritime Domain Awareness.
The governing body of the International Maritime Organization is the Assembly, which meets every two years. In between Assembly sessions a Council, consisting of 40 Member States elected by the Assembly, acts as the governing body. The Secretariat consists of some 300 international civil servants headed by a Secretary-General. The current Secretary-General is Kitack Lim (South Korea), elected for a four-year term at the 106th session of the IMO Council in June 2015 and at the 27th session of the IMO's Assembly in November 2015. His mandate started on 1 January 2016.
The technical work of the International Maritime Organization is carried out by a series of Committees. The Maritime Safety Committee, regulated by Article 28(a) of the Convention on the IMO, is the most senior of these and is the main Technical Committee; it oversees the work of its nine sub-committees and initiates new topics. One broad topic it deals with is the effect of the human element on casualties; this work has been put to all of the sub-committees, but meanwhile the Maritime Safety Committee has developed a code for the management of ships which will ensure that agreed operational procedures are in place and followed by the ship and shore-side staff.
The MSC and MEPC are assisted in their work by a number of sub-committees, which are open to all Member States. The names of the IMO sub-committees were changed in 2013; prior to 2013 there were nine sub-committees.
Resolution MSC.255(84), of 16 May 2008, adopted the "Code of the International Standards and Recommended Practices for a Safety Investigation into a Marine Casualty or Marine Incident".
It is also known as the Casualty Investigation Code.
Sea transportation is one of the few industrial sectors that still commonly uses non-metric units such as the nautical mile (nmi) for distance and the knot (kn) for speed. One nautical mile is approximately one minute of arc of latitude along any meridian arc, and is today precisely defined as 1852 metres (about 1.151 statute miles); one knot is one nautical mile per hour. In 1975, the Assembly of the IMO decided that future conventions of the International Convention for the Safety of Life at Sea (SOLAS) and other IMO instruments should use SI units only.
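A minimal Python sketch of the unit relationships described above (the helper names are illustrative and not drawn from any IMO instrument; the statute-mile figure of 1609.344 metres is an assumption based on the definition of the international mile):

NAUTICAL_MILE_M = 1852.0      # metres per nautical mile, exact by definition
STATUTE_MILE_M = 1609.344     # metres per statute mile (assumed international mile)

def nmi_to_metres(nmi):
    # Nautical miles to metres.
    return nmi * NAUTICAL_MILE_M

def nmi_to_statute_miles(nmi):
    # Nautical miles to statute miles (about 1.151 per nautical mile).
    return nmi * NAUTICAL_MILE_M / STATUTE_MILE_M

def knots_to_mps(kn):
    # Knots (nautical miles per hour) to metres per second.
    return kn * NAUTICAL_MILE_M / 3600.0

print(round(nmi_to_statute_miles(1.0), 4))  # 1.1508, i.e. "about 1.151"
print(round(knots_to_mps(10.0), 2))         # 5.14 m/s

Rounding aside, the "about 1.151" figure quoted above falls out directly from the two exact metre values.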
https://en.wikipedia.org/wiki?curid=14986
International Labour Organization The International Labour Organization (ILO) is a United Nations agency whose mandate is to advance social and economic justice through setting international labour standards. Founded in 1919 under the League of Nations, it is the first and oldest specialised agency of the UN. The ILO has 187 member states: 186 of the 193 UN member states plus the Cook Islands. It is headquartered in Geneva, Switzerland, with around 40 field offices around the world, and employs some 2,700 staff from over 150 nations, of whom 900 work in technical cooperation programmes and projects.
The ILO's international labour standards are broadly aimed at ensuring accessible, productive, and sustainable work worldwide in conditions of freedom, equity, security and dignity. They are set forth in 189 conventions and treaties, of which eight are classified as fundamental according to the 1998 Declaration on Fundamental Principles and Rights at Work; together they protect freedom of association and the effective recognition of the right to collective bargaining, the elimination of forced or compulsory labour, the abolition of child labour, and the elimination of discrimination in respect of employment and occupation. The ILO is consequently a major contributor to international labour law.
Within the UN system the organization has a unique tripartite structure: all standards, policies, and programmes require discussion and approval from the representatives of governments, employers, and workers. This framework is maintained in the ILO's three main bodies: the International Labour Conference, which meets annually to formulate international labour standards; the Governing Body, which serves as the executive council and decides the agency's policy and budget; and the International Labour Office, the permanent secretariat that administers the organization and implements activities. The secretariat is led by the Director-General, currently Guy Ryder of the United Kingdom, who was elected by the Governing Body in 2012.
In 1969, the ILO received the Nobel Peace Prize for improving fraternity and peace among nations, pursuing decent work and justice for workers, and providing technical assistance to developing nations. In 2019, the organization convened the Global Commission on the Future of Work, whose report made ten recommendations for governments to meet the challenges of the 21st-century labour environment; these include a universal labour guarantee, social protection from birth to old age and an entitlement to lifelong learning. With its focus on international development, the ILO is a member of the United Nations Development Group, a coalition of UN organizations aimed at helping meet the Sustainable Development Goals.
Unlike other United Nations specialized agencies, the International Labour Organization has a tripartite governing structure that brings together governments, employers, and workers of its 187 member States to set labour standards, develop policies and devise programmes promoting decent work for all women and men. The structure is intended to ensure that the views of all three groups are reflected in ILO labour standards, policies, and programmes, though governments have twice as many representatives as the other two groups.
The Governing Body is the executive body of the International Labour Organization. It meets three times a year, in March, June and November.
It takes decisions on ILO policy, decides the agenda of the International Labour Conference, adopts the draft Programme and Budget of the Organization for submission to the Conference, elects the Director-General, requests information from the member states concerning labour matters, appoints commissions of inquiry and supervises the work of the International Labour Office. Juan Somavía was the ILO's Director-General from 1999 until October 2012, when Guy Ryder was elected. The Governing Body re-elected Ryder as Director-General for a second five-year term in November 2016.
The Governing Body is composed of 56 titular members (28 governments, 14 employers and 14 workers) and 66 deputy members (28 governments, 19 employers and 19 workers). Ten of the titular government seats are permanently held by States of chief industrial importance: Brazil, China, France, Germany, India, Italy, Japan, the Russian Federation, the United Kingdom and the United States. The other government members are elected by the Conference every three years (the last elections were held in June 2017). The employer and worker members are elected in their individual capacity.
The ILO organises the International Labour Conference in Geneva once a year to set the broad policies of the ILO, including conventions and recommendations. Also known as the "international parliament of labour", the Conference makes decisions about the ILO's general policy, work programme and budget, and also elects the Governing Body. Each member state is represented by a delegation: two government delegates, an employer delegate, a worker delegate and their respective advisers. All of them have individual voting rights, and all votes are equal, regardless of the population of the delegate's member State. The employer and worker delegates are normally chosen in agreement with the most representative national organizations of employers and workers. Usually the workers' and employers' delegates coordinate their voting, but they are not required to vote in blocs; all delegates have the same rights, can express themselves freely and vote as they wish. This diversity of viewpoints does not prevent decisions being adopted by very large majorities or unanimously. Heads of State and prime ministers also participate in the Conference, and international organizations, both governmental and others, attend as observers.
The ILO has 187 state members: 186 of the 193 member states of the United Nations, plus the Cook Islands. The UN member states which are not members of the ILO are Andorra, Bhutan, Liechtenstein, Micronesia, Monaco, Nauru, and North Korea. The ILO constitution permits any member of the UN to become a member of the ILO; to gain membership, a nation must inform the Director-General that it accepts all the obligations of the ILO constitution. Other states can be admitted by a two-thirds vote of all delegates, including a two-thirds vote of government delegates, at any ILO General Conference. The Cook Islands, a non-UN state, joined in June 2015. Members of the ILO under the League of Nations automatically became members when the organization's new constitution came into effect after World War II.
The ILO is a specialized agency of the United Nations (UN). As with other UN specialized agencies (or programmes) working on international development, the ILO is also a member of the United Nations Development Group. Through July 2018, the ILO had adopted 189 conventions.
If these conventions are ratified by enough governments, they come into force; however, ILO conventions are considered international labour standards regardless of ratification. When a convention comes into force, it creates a legal obligation for ratifying nations to apply its provisions. Every year the International Labour Conference's Committee on the Application of Standards examines a number of alleged breaches of international labour standards. Governments are required to submit reports detailing their compliance with the obligations of the conventions they have ratified. Conventions that have not been ratified by member states have the same legal force as recommendations.
In 1998, the 86th International Labour Conference adopted the "Declaration on Fundamental Principles and Rights at Work". This declaration contains four fundamental policies: freedom of association and the effective recognition of the right to collective bargaining; the elimination of all forms of forced or compulsory labour; the effective abolition of child labour; and the elimination of discrimination in respect of employment and occupation. The ILO asserts that its members have an obligation to work towards fully respecting these principles, which are embodied in the relevant ILO conventions; the conventions which embody the fundamental principles have now been ratified by most member states.
Protocols are employed to make conventions more flexible or to amplify obligations by amending or adding provisions on different points. Protocols are always linked to a convention; even though they are international treaties, they do not exist on their own. As with conventions, protocols can be ratified.
Recommendations do not have the binding force of conventions and are not subject to ratification. Recommendations may be adopted at the same time as conventions to supplement the latter with additional or more detailed provisions. In other cases recommendations may be adopted separately and may address issues separate from particular conventions.
While the ILO was established as an agency of the League of Nations following World War I, its founders had made great strides in social thought and action before 1919. The core members all knew one another from earlier private professional and ideological networks, in which they exchanged knowledge, experiences, and ideas on social policy. Prewar "epistemic communities", such as the International Association for Labour Legislation (IALL), founded in 1900, and political networks, such as the socialist Second International, were a decisive factor in the institutionalization of international labour politics. In the post–World War I euphoria, the idea of a "makeable society" was an important catalyst behind the social engineering of the ILO architects. As a new discipline, international labour law became a useful instrument for putting social reforms into practice. The utopian ideals of the founding members—social justice and the right to decent work—were changed by diplomatic and political compromises made at the Paris Peace Conference of 1919, showing the ILO's balance between idealism and pragmatism.
Over the course of the First World War, the international labour movement proposed a comprehensive programme of protection for the working classes, conceived as compensation for labour's support during the war. Post-war reconstruction and the protection of labour unions occupied the attention of many nations during and immediately after World War I. In Great Britain, the Whitley Commission, a subcommittee of the Reconstruction Commission, recommended in its July 1918 Final Report that "industrial councils" be established throughout the world.
The British Labour Party had issued its own reconstruction programme in the document titled "Labour and the New Social Order". In February 1918, the third Inter-Allied Labour and Socialist Conference (representing delegates from Great Britain, France, Belgium and Italy) issued its report, advocating an international labour rights body, an end to secret diplomacy, and other goals. And in December 1918, the American Federation of Labor (AFL) issued its own distinctively apolitical report, which called for the achievement of numerous incremental improvements via the collective bargaining process.
As the war drew to a close, two competing visions for the post-war world emerged. The first was offered by the International Federation of Trade Unions (IFTU), which called for a meeting in Bern, Switzerland, in July 1919. The Bern meeting would consider both the future of the IFTU and the various proposals which had been made in the previous few years. The IFTU also proposed including delegates from the Central Powers as equals. Samuel Gompers, president of the AFL, boycotted the meeting, wanting the Central Powers delegates in a subservient role as an admission of guilt for their countries' role in bringing about the war. Instead, Gompers favoured a meeting in Paris which would only consider President Woodrow Wilson's Fourteen Points as a platform. Despite the American boycott, the Bern meeting went ahead as scheduled. In its final report, the Bern Conference demanded an end to wage labour and the establishment of socialism. If these ends could not be immediately achieved, then an international body attached to the League of Nations should enact and enforce legislation to protect workers and trade unions.
Meanwhile, the Paris Peace Conference sought to dampen public support for communism. Subsequently, the Allied Powers agreed that clauses should be inserted into the emerging peace treaty protecting labour unions and workers' rights, and that an international labour body be established to help guide international labour relations in the future. The advisory Commission on International Labour Legislation was established by the Peace Conference to draft these proposals. The Commission met for the first time on 1 February 1919, and Gompers was elected as its chairman.
Two competing proposals for an international body emerged during the Commission's meetings. The British proposed establishing an international parliament to enact labour laws which each member of the League would be required to implement. Each nation would have two delegates to the parliament, one each from labour and management. An international labour office would collect statistics on labour issues and enforce the new international laws. Philosophically opposed to the concept of an international parliament and convinced that international standards would lower the few protections achieved in the United States, Gompers proposed that the international labour body be authorized only to make recommendations, and that enforcement be left up to the League of Nations. Despite vigorous opposition from the British, the American proposal was adopted.
Gompers also set the agenda for the draft charter protecting workers' rights. The Americans made 10 proposals. Three were adopted without change: that labour should not be treated as a commodity; that all workers had the right to a wage sufficient to live on; and that women should receive equal pay for equal work.
A proposal protecting the freedom of speech, press, assembly, and association was amended to include only freedom of association. A proposed ban on the international shipment of goods made by children under the age of 16 was amended to ban goods made by children under the age of 14. A proposal to require an eight-hour work day was amended to require the eight-hour work day "or" the 40-hour work week (an exception was made for countries where productivity was low). Four other American proposals were rejected. Meanwhile, international delegates proposed three additional clauses, which were adopted: one or more days for weekly rest; equality of laws for foreign workers; and regular and frequent inspection of factory conditions.
The Commission issued its final report on 4 March 1919, and the Peace Conference adopted it without amendment on 11 April. The report became Part XIII of the Treaty of Versailles.
The first annual conference, referred to as the International Labour Conference (ILC), began on 29 October 1919 at the Pan American Union Building in Washington, D.C. and adopted the first six International Labour Conventions, which dealt with hours of work in industry, unemployment, maternity protection, night work for women, minimum age, and night work for young persons in industry. The prominent French socialist Albert Thomas became its first director-general. Despite open disappointment and sharp critique, the revived International Federation of Trade Unions (IFTU) quickly adapted itself to this mechanism and increasingly oriented its international activities around the lobby work of the ILO.
At the time of its establishment, the U.S. government was not a member of the ILO, as the US Senate had rejected the covenant of the League of Nations, and the United States could not join any of its agencies. Following the election of Franklin Delano Roosevelt to the U.S. presidency, the new administration made renewed efforts to join the ILO without League membership. On 19 June 1934, the U.S. Congress passed a joint resolution authorizing the president to join the ILO without joining the League of Nations as a whole. On 22 June 1934, the ILO adopted a resolution inviting the U.S. government to join the organization, and on 20 August 1934 the U.S. government responded positively and took its seat at the ILO.
During the Second World War, when Switzerland was surrounded by German troops, ILO director John G. Winant made the decision to leave Geneva. In August 1940, the government of Canada officially invited the ILO to be housed at McGill University in Montreal. Forty staff members were transferred to the temporary offices and continued to work from McGill until 1948.
The ILO became the first specialized agency of the United Nations system after the demise of the League in 1946. Its constitution, as amended, includes the Declaration of Philadelphia (1944) on the aims and purposes of the organization.
Beginning in the late 1950s the organization was under pressure to make provisions for the potential membership of ex-colonies which had become independent; in the Director-General's report of 1963, the needs of the potential new members were first recognized.
The tensions produced by these changes in the world environment negatively affected the established politics within the organization, and they were the precursor to the organization's eventual problems with the USA. In July 1970, the United States withdrew 50% of its financial support to the ILO following the appointment of an assistant director-general from the Soviet Union. This appointment (by the ILO's British director-general, C. Wilfred Jenks) drew particular criticism from AFL–CIO president George Meany and from Congressman John E. Rooney. However, the funds were eventually paid.
On 12 June 1975, the ILO voted to grant the Palestine Liberation Organization observer status at its meetings. Representatives of the United States and Israel walked out of the meeting, and the U.S. House of Representatives subsequently decided to withhold funds. The United States gave notice of full withdrawal on 6 November 1975, stating that the organization had become politicized. The United States also suggested that representation from communist countries was not truly "tripartite"—including government, workers, and employers—because of the structure of these economies. The withdrawal became effective on 1 November 1977. The United States returned to the organization in 1980 after extracting some concessions from the organization. The episode was partly responsible for the ILO's shift away from a human rights approach and towards support for the Washington Consensus. Economist Guy Standing wrote that "the ILO quietly ceased to be an international body attempting to redress structural inequality and became one promoting employment equity".
In 1981, the government of Poland declared martial law, interrupting the activities of Solidarnosc and detaining many of its leaders and members. The ILO Committee on Freedom of Association filed a complaint against Poland at the 1982 International Labour Conference. A Commission of Inquiry established to investigate found that Poland had violated ILO Conventions No. 87 on freedom of association and No. 98 on trade union rights, which the country had ratified in 1957. The ILO and many other countries and organizations put pressure on the Polish government, which finally gave legal status to Solidarnosc in 1989. During that same year, a roundtable discussion between the government and Solidarnosc agreed on terms for the relegalization of the organization under ILO principles. The government also agreed to hold the first free elections in Poland since the Second World War.
The ILO is a major provider of labour statistics. Labour statistics are an important tool for its member states to monitor their progress toward improving labour standards. As part of its statistical work, the ILO maintains several databases; its main database covers 11 major data series for over 200 countries. In addition, the ILO publishes a number of compilations of labour statistics, such as the Key Indicators of Labour Markets (KILM). KILM covers 20 main indicators on labour participation rates, employment, unemployment, educational attainment, labour cost, and economic performance. Many of these indicators have been prepared by other organizations. For example, the Division of International Labour Comparisons of the U.S. Bureau of Labor Statistics prepares the hourly compensation in manufacturing indicator. The U.S. Department of Labor also publishes a yearly report containing a "List of Goods Produced by Child Labor or Forced Labor", issued by the Bureau of International Labor Affairs.
The December 2014 updated edition of the report listed a total of 74 countries and 136 goods.
The International Training Centre of the International Labour Organization (ITCILO) is based in Turin, Italy. Together with the University of Turin Department of Law, the ITC offers training for ILO officers and secretariat members, as well as offering educational programmes. The ITC offers more than 450 training and educational programmes and projects every year for some 11,000 people around the world. For instance, the ITCILO offers a Master of Laws programme in management of development, which aims to specialize professionals in the field of cooperation and development.
The term "child labour" is often defined as work that deprives children of their childhood, potential and dignity, and that is harmful to their physical and mental development. "Child labour" refers to work that is mentally, physically, socially or morally dangerous and harmful to children. Further, it can involve interfering with their schooling by depriving them of the opportunity to attend school, obliging them to leave school prematurely, or requiring them to attempt to combine school attendance with excessively long and heavy work. In its most extreme forms, child labour involves children being enslaved, separated from their families, exposed to serious hazards and illnesses and left to fend for themselves on the streets of large cities – often at a very early age. Whether or not particular forms of "work" can be called "child labour" depends on the child's age, the type and hours of work performed, the conditions under which it is performed and the objectives pursued by individual countries. The answer varies from country to country, as well as among sectors within countries.
The ILO's International Programme on the Elimination of Child Labour (IPEC) was created in 1992 with the overall goal of the progressive elimination of child labour, which was to be achieved through strengthening the capacity of countries to deal with the problem and promoting a worldwide movement to combat child labour. The IPEC currently has operations in 88 countries, with an annual expenditure on technical cooperation projects that reached over US$61 million in 2008. It is the largest programme of its kind globally and the biggest single operational programme of the ILO. The number and range of the IPEC's partners have expanded over the years and now include employers' and workers' organizations, other international and government agencies, private businesses, community-based organizations, NGOs, the media, parliamentarians, the judiciary, universities, religious groups and children and their families. The IPEC's work to eliminate child labour is an important facet of the ILO's Decent Work Agenda, as child labour prevents children from acquiring the skills and education they need for a better future.
Because of different cultural views involving labour, the ILO developed a series of culturally sensitive mandates, including Convention Nos. 169, 107, 138, and 182, to protect indigenous culture, traditions, and identities. Convention Nos. 138 and 182 lead in the fight against child labour, while Nos. 107 and 169 promote the rights of indigenous and tribal peoples and protect their right to define their own developmental priorities. In many indigenous communities, parents believe children learn important life lessons through the act of work and through participation in daily life.
Working is seen as a learning process preparing children for the future tasks they will eventually have to do as adults; in this belief, the family's and the child's well-being and survival are a shared responsibility among members of the whole family, and work is an intrinsic part of a child's developmental process. While these attitudes toward child work remain, many children and parents from indigenous communities still highly value education.
The ILO has considered the fight against forced labour to be one of its main priorities. During the interwar years, the issue was mainly considered a colonial phenomenon, and the ILO's concern was to establish minimum standards protecting the inhabitants of colonies from the worst abuses committed by economic interests. After 1945, the goal became to set a uniform and universal standard, determined by the higher awareness gained during World War II of politically and economically motivated systems of forced labour, but debates were hampered by the Cold War and by exemptions claimed by colonial powers. Since the 1960s, declarations of labour standards as a component of human rights have been weakened by governments of postcolonial countries claiming a need to exercise extraordinary powers over labour in their role as emergency regimes promoting rapid economic development.
In June 1998 the International Labour Conference adopted the Declaration on Fundamental Principles and Rights at Work and its follow-up, which obligates member states to respect, promote and realize freedom of association and the right to collective bargaining, the elimination of all forms of forced or compulsory labour, the effective abolition of child labour, and the elimination of discrimination in respect of employment and occupation. With the adoption of the declaration, the ILO created the InFocus Programme on Promoting the Declaration, which is responsible for the reporting processes and technical cooperation activities associated with the declaration and carries out awareness-raising, advocacy and knowledge functions.
In November 2001, following the publication of the InFocus Programme's first global report on forced labour, the ILO's Governing Body created a Special Action Programme to Combat Forced Labour (SAP-FL), as part of broader efforts to promote the 1998 Declaration on Fundamental Principles and Rights at Work and its follow-up. Since its inception, the SAP-FL has focused on raising global awareness of forced labour in its different forms and mobilizing action against its manifestations. Several thematic and country-specific studies and surveys have since been undertaken, on such diverse aspects of forced labour as bonded labour, human trafficking, forced domestic work, rural servitude, and forced prisoner labour. In 2013, the SAP-FL was integrated into the ILO's Fundamental Principles and Rights at Work Branch (FUNDAMENTALS), bringing together the fight against forced and child labour and working in the context of Alliance 8.7.
One major tool in the fight against forced labour was the adoption of the ILO Forced Labour Protocol by the International Labour Conference in 2014. It received its second ratification in 2015 and entered into force on 9 November 2016. The new protocol brings the existing ILO Convention 29 on Forced Labour, adopted in 1930, into the modern era to address practices such as human trafficking. The accompanying Recommendation 203 provides technical guidance on its implementation.
In 2015, the ILO launched a global campaign to end modern slavery, in partnership with the International Organization of Employers (IOE) and the International Trade Union Confederation (ITUC). The 50 for Freedom campaign aims to mobilize public support and encourage countries to ratify the ILO's Forced Labour Protocol.
To protect workers' right to a minimum wage, the ILO has adopted the Minimum Wage-Fixing Machinery Convention, 1928, the Minimum Wage Fixing Machinery (Agriculture) Convention, 1951, and the Minimum Wage Fixing Convention, 1970, which together serve as minimum wage law.
The International Labour Organization is the lead UN agency on HIV workplace policies and programmes and private-sector mobilization. ILOAIDS is the branch of the ILO dedicated to this issue. The ILO has been involved with the HIV response since 1998, attempting to prevent what it says can be a potentially devastating impact on labour and productivity and an enormous burden for working people, their families and communities. In June 2001, the ILO's Governing Body adopted a pioneering code of practice on HIV/AIDS and the world of work, which was launched during a special session of the UN General Assembly. The same year, the ILO became a cosponsor of the Joint United Nations Programme on HIV/AIDS (UNAIDS).
In 2010, the 99th International Labour Conference adopted the ILO's Recommendation concerning HIV and AIDS and the world of work, 2010 (No. 200), the first international labour standard on HIV and AIDS. The recommendation lays out a comprehensive set of principles to protect the rights of HIV-positive workers and their families, while scaling up prevention in the workplace. Working under the theme of "Preventing HIV, Protecting Human Rights at Work", ILOAIDS undertakes a range of policy advisory, research and technical support functions in the area of HIV and AIDS and the world of work. The ILO also works on promoting social protection as a means of reducing vulnerability to HIV and mitigating its impact on those living with or affected by HIV. ILOAIDS ran a "Getting to Zero" campaign to arrive at zero new infections, zero AIDS-related deaths and zero discrimination by 2015. Building on this campaign, ILOAIDS is executing a programme of voluntary and confidential counselling and testing at work, known as VCT@WORK.
As the word "migrant" suggests, migrant workers are those who move from one country to another for work. For the rights of migrant workers, the ILO has adopted conventions, including the Migrant Workers (Supplementary Provisions) Convention, 1975; the United Nations adopted the Convention on the Protection of the Rights of All Migrant Workers and Members of Their Families in 1990.
Domestic workers are those who perform a variety of tasks for and in other people's homes. For example, they may cook, clean the house, and look after children. Yet they are often the workers given the least consideration, excluded from labour and social protection. This is mainly because women have traditionally carried out such tasks without pay. For the rights and decent work of domestic workers, including migrant domestic workers, the ILO adopted the Convention on Domestic Workers on 16 June 2011.
The ILO seeks a process of globalization that is inclusive, democratically governed and provides opportunities and tangible benefits for all countries and people.
The World Commission on the Social Dimension of Globalization was established by the ILO's Governing Body in February 2002 at the initiative of the Director-General, in response to the fact that there did not appear to be a space within the multilateral system that would cover adequately and comprehensively the social dimension of the various aspects of globalization. The World Commission's report, "A Fair Globalization: Creating Opportunities for All", is the first attempt at structured dialogue among representatives of constituencies with different interests and opinions on the social dimension of globalization.
The ILO launched the Future of Work Initiative in order to gain an understanding of the transformations occurring in the world of work and thus be able to develop ways of responding to these challenges. The initiative began in 2016 by gathering the views of government representatives, workers, employers, academics and other relevant figures around the world. About 110 countries participated in dialogues at the regional and national level. These dialogues were structured around "four centenary conversations: work and society, decent jobs for all, the organization of work and production, and the governance of work." The second step took place in 2017 with the establishment of the Global Commission on the Future of Work dealing with the same "four centenary conversations". A report is expected to be published prior to the 2019 Centenary International Labour Conference.
The ILO is also assessing the impact of technological disruption on employment worldwide. The agency is concerned about the global economic and social impact of technologies such as industrial and process automation, artificial intelligence (AI), robots and robotic process automation on human labour, a question increasingly considered by commentators, but in widely divergent ways. One salient view holds that technology will bring less work, make workers redundant or end work altogether by replacing human labour; the opposing view is that technology will unleash creativity and abundant opportunities to boost the economy. In the modern era, technology has changed the way we think, design and deploy system solutions, but there is no doubt that it poses threats to human jobs. Paul Schulte (Director of the Education and Information Division, and Co-Manager of the Nanotechnology Research Center, National Institute for Occupational Safety and Health, Centers for Disease Control) and D. P. Sharma (international consultant in information technology and scientist) have articulated such disruptions, warning that the consequences will be worse than ever before if appropriate action is not taken in time. Sharma has argued that the present generation needs to reinvent itself in terms of competitive accuracy, speed, capacity and honesty; that machines, being more consistent than human workers, pose a clear threat to this generation; and that, because science and technology have no reverse gear, accepting the "Human vs. Machine" challenge is the only remedy for survival.
The ILO has also looked at the transition to a green economy and its impact on employment. It concluded that a shift to a greener economy could create 24 million new jobs globally by 2030, if the right policies are put in place. It also found that, without a transition to a green economy, 72 million full-time jobs may be lost by 2030 due to heat stress, as temperature increases lead to shorter available working hours, particularly in agriculture.
https://en.wikipedia.org/wiki?curid=14987
International English International English is the concept of the English language as a global means of communication in numerous dialects, and the movement towards an international standard for the language. It is also referred to as Global English, World English, Common English, Continental English, General English, Engas (English as associate language), or Globish. Sometimes, these terms refer simply to the array of varieties of English spoken throughout the world. Sometimes, "international English" and the related terms above refer to a desired standardisation, i.e., Standard English; however, there is no consensus on the path to this goal. There have been many proposals for making International English more accessible to people from different nationalities. Basic English is an example, but it failed to make progress. More recently, there have been proposals for English as a lingua franca (ELF), in which non-native speakers take a highly active role in the development of the language. It has also been argued that International English is held back by its traditional spelling. There has been slow progress in adopting alternate spellings.
The modern concept of International English does not exist in isolation, but is the product of centuries of development of the English language. The English language evolved in England, from a set of West Germanic dialects spoken by the Angles and Saxons, who arrived from continental Europe in the 5th century. Those dialects became known as "Englisc" (literally "Anglish"), the language today referred to as Anglo-Saxon or Old English (the language of the poem "Beowulf"). However, less than a quarter of the vocabulary of Modern English is derived from the shared ancestry with other West Germanic languages, because of extensive borrowings from Norse, Norman, Latin, and other languages. It was during the Viking invasions of the Anglo-Saxon period that Old English was influenced by contact with Norse, a group of North Germanic dialects spoken by the Vikings, who came to control a large region in the North of England known as the Danelaw. Vocabulary items entering English from Norse (including the pronouns "they" and "them") are thus attributable to the on-again-off-again Viking occupation of Northern England during the centuries prior to the Norman Conquest (see, e.g., Canute the Great).
Soon after the Norman Conquest of 1066, the "Englisc" language ceased being a literary language (see, e.g., Ormulum) and was replaced by Anglo-Norman as the written language of England. During the Norman Period, English absorbed a significant component of French vocabulary (approximately one-third of the vocabulary of Modern English). With this new vocabulary, additional vocabulary borrowed from Latin (with Greek, another approximately one-third of Modern English vocabulary, though some borrowings from Latin and Greek date from later periods), a simplified grammar, and use of the orthographic conventions of French instead of Old English orthography, the language became Middle English (the language of Chaucer). The "difficulty" of English as a written language thus began in the High Middle Ages, when French orthographic conventions were used to spell a language whose original, more suitable orthography had been forgotten after centuries of nonuse.
During the late medieval period, King Henry V of England (lived 1387–1422) ordered the use of the English of his day in proceedings before him and before the government bureaucracies.
That led to the development of Chancery English, a standardised form used in the government bureaucracy. (The use of so-called Law French in English courts continued through the Renaissance, however.) The emergence of English as a language of Wales results from the incorporation of Wales into England and also dates from approximately this time period. Soon afterward, the development of printing by Caxton and others accelerated the development of a standardised form of English. Following a change in vowel pronunciation that marks the transition of English from the medieval to the Renaissance period, the language of the Chancery and Caxton became Early Modern English (the language of Shakespeare's day) and, with relatively moderate changes, eventually developed into the English language of today.
Scots, as spoken in the lowlands and along the east coast of Scotland, developed largely independently of Modern English, and is based on the Northern dialects of Anglo-Saxon, particularly Northumbrian, which also serve as the basis of Northern English dialects such as those of Yorkshire and Newcastle upon Tyne. Northumbria was within the Danelaw and therefore experienced greater influence from Norse than did the Southern dialects. As the political influence of London grew, the Chancery version of the language developed into a written standard across Great Britain, further progressing in the modern period as Scotland became united with England as a result of the Acts of Union of 1707.
English was introduced to Ireland twice—a medieval introduction that led to the development of the now-extinct Yola dialect, and a modern introduction in which Hibernian English largely replaced Irish as the most widely spoken language during the 19th century, following the Act of Union of 1800. Received Pronunciation (RP) is generally viewed as a 19th-century development and is not reflected in North American English dialects (except the affected Transatlantic accent), which are based on 18th-century English.
The establishment of the first permanent English-speaking colony in North America in 1607 was a major step towards the globalisation of the language. British English was only partially standardised when the American colonies were established. Isolated from each other by the Atlantic Ocean, the dialects in England and the colonies began evolving independently. The British colonisation of Australia starting in 1788 brought the English language to Oceania. By the 19th century, the standardisation of British English was more settled than it had been in the previous century, and this relatively well-established English was brought to Africa, Asia and New Zealand. It developed both as the language of English-speaking settlers from Britain and Ireland, and as the administrative language imposed on speakers of other languages in the various parts of the British Empire. The first form can be seen in New Zealand English, and the latter in Indian English.
In Europe, English received a more central role particularly since 1919, when the Treaty of Versailles was composed not only in French, the common language of diplomacy at the time, but, at the special request of American president Woodrow Wilson, also in English – a major milestone in the globalisation of English. The English-speaking regions of Canada and the Caribbean are caught between historical connections with the UK and the Commonwealth and geographical and economic connections with the U.S.
In some respects they tend to follow British standards, whereas in others, especially commercial ones, they follow the U.S. standard.
Braj Kachru divides the use of English into three concentric circles. The "inner circle" is the traditional base of English and includes countries such as the United Kingdom and Ireland and the anglophone populations of former British colonies such as the United States, Australia, New Zealand, South Africa, Canada, and various islands of the Caribbean, Indian Ocean, and Pacific Ocean. In the "outer circle" are those countries where English has official or historical importance ("special significance"). This includes most of the countries of the Commonwealth of Nations (the former British Empire), including populous countries such as India, Pakistan, and Nigeria, and others, such as the Philippines, under the sphere of influence of English-speaking countries. Here English may serve as a useful lingua franca between ethnic and language groups; higher education, the legislature and judiciary, national commerce, and so on, may all be carried out predominantly in English. The "expanding circle" refers to those countries where English has no official role but is nonetheless important for certain functions, e.g., international business and tourism. By the twenty-first century, non-native English speakers have come to outnumber native speakers by a factor of three, according to the British Council. Darius Degher, a professor at Malmö University in Sweden, uses the term "decentered English" to describe this shift, along with attendant changes in what is considered important to English users and learners. Research on English as a lingua franca in the sense of "English in the Expanding Circle" is comparatively recent. Linguists who have been active in this field include Jennifer Jenkins, Barbara Seidlhofer, Christiane Meierkord and Joachim Grzega.
English as an additional language (EAL) is usually based on the standards of either American English or British English, as well as incorporating foreign terms. English as an international language (EIL) is EAL with an emphasis on learning different major dialect forms; in particular, it aims to equip students with the linguistic tools to communicate internationally. Roger Nunn considers different types of competence in relation to the teaching of English as an International Language, arguing that linguistic competence has yet to be adequately addressed in recent considerations of EIL.
Several models of "simplified English" have been suggested for teaching English as a foreign language. Randolph Quirk and Gabriele Stein, for example, proposed a Nuclear English, which, however, has never been fully developed. Robert McCrum has used the term "Globish" to mean "English as global language", while Jean-Paul Nerriere uses it for a constructed language.
Basic Global English, or BGE, is a concept of global English initiated by the German linguist Joachim Grzega. It evolved from the idea of creating a type of English that can be learned more easily than regular British or American English and that serves as a tool for successful global communication. BGE is guided by creating "empathy and tolerance" between speakers in a global context. This applies to the context of global communication, where different speakers with different mother tongues come together; BGE aims to develop this competence as quickly as possible. English language teaching is almost always related to a corresponding culture:
learners either deal with American English and therefore with American culture, or British English and therefore with British culture. Basic Global English seeks to solve this problem by creating one collective version of English. Additionally, its advocates promote it as a system suited for self-teaching as well as classroom teaching. BGE is based on 20 elementary grammar rules that provide a certain degree of variation. For example, regularly as well as irregularly formed verbs are accepted. Pronunciation rules are not as strict as in British or American English, so there is a certain degree of variation for the learners. Exceptions that cannot be used are pronunciations that would be harmful to mutual understanding and therefore minimize the success of communication. Basic Global English is based on a 750-word vocabulary. Additionally, every learner has to acquire the knowledge of 250 additional words. These words can be chosen freely, according to the specific needs and interests of the learner. BGE provides not only basic language skills, but also so-called "Basic Politeness Strategies". These include creating a positive atmosphere, accepting an offer with "Yes, please" or refusing with "No, thank you", and small talk topics to choose and to avoid. Basic Global English has been tested in two elementary schools in Germany. For the practical test of BGE, 12 lessons covered half of a school year. After the BGE teaching, students could answer questions about themselves, their family, their hobbies, etc. Additionally, they could form questions themselves about the same topics. Besides that, they also learned the numbers from 1 to 31 and vocabulary including things in their school bag and in their classroom. The students as well as the parents had a positive impression of the project. International English sometimes refers to English as it is actually being used and developed in the world; as a language owned not just by native speakers, but by all those who come to use it. Basically, it covers the English language at large, often (but not always or necessarily) implicitly seen as standard. It is certainly also commonly used in connection with the acquisition, use, and study of English as the world's lingua franca ('TEIL: Teaching English as an International Language'), and especially when the language is considered as a whole in contrast with "British English", "American English", "South African English", and the like. — McArthur (2002, pp. 444–445) It especially means English words and phrases generally understood throughout the English-speaking world as opposed to localisms. The importance of non-native English language skills can be recognised behind the long-standing joke that the international language of science and technology is broken English. International English reaches toward cultural neutrality. This has a practical use: What could be better than a type of English that saves you from having to re-edit publications for individual regional markets? Teachers and learners of English as a second language also find it an attractive idea — both are often concerned that their English should be neutral, without American or British or Canadian or Australian coloring. Any regional variety of English has a set of political, social and cultural connotations attached to it, even the so-called 'standard' forms.
According to this viewpoint, International English is a concept of English that minimises the aspects defined by either the colonial imperialism of Victorian Britain or the cultural imperialism of the 20th-century United States. While British colonialism laid the foundation for English over much of the world, International English is a product of an emerging world culture, very much attributable to the influence of the United States as well, but conceptually based on a far greater degree of cross-talk and linguistic transculturation, which tends to mitigate both U.S. influence and British colonial influence. The development of International English often centres on academic and scientific communities, where formal English usage is prevalent and creative use of the language is at a minimum. This formal International English allows entry into Western culture as a whole and Western cultural values in general. The continued growth of the English language itself is seen by authors such as Alistair Pennycook as a kind of cultural imperialism, whether it is English in one form or English in two slightly different forms. Robert Phillipson argues against the possibility of such neutrality in his "Linguistic Imperialism" (1992). Learners who wish to use purportedly correct English are in fact faced with the dual standard of American English and British English, and other, less-known standard Englishes (including Australian, Scottish and Canadian). Edward Trimnell, author of "Why You Need a Foreign Language & How to Learn One" (2005), argues that the international version of English is adequate only for communicating basic ideas; for complex discussions and business/technical situations, it is not an adequate communication tool for non-native speakers of the language. Trimnell also asserts that native English-speakers have become "dependent on the language skills of others" by placing their faith in international English. Some reject both what they call "linguistic imperialism" and David Crystal's theory of the neutrality of English. They argue that the phenomenon of the global spread of English is better understood in the framework of appropriation (e.g., Spichtinger 2000), that is, English used for local purposes around the world. Demonstrators in non-English-speaking countries often use signs in English to convey their demands to TV audiences around the globe, for example. In English-language teaching, Bobda shows how Cameroon has moved away from a mono-cultural, Anglo-centered way of teaching English and has gradually appropriated teaching material to a Cameroonian context. This includes non-Western topics, such as the rule of Emirs, traditional medicine, and polygamy (1997:225). Kramsch and Sullivan (1996) describe how Western methodology and textbooks have been appropriated to suit local Vietnamese culture. The Pakistani textbook "Primary Stage English" includes lessons such as "Pakistan My Country", "Our Flag", and "Our Great Leader" (Malik 1993: 5,6,7), which might sound jingoistic to Western ears. Within the native culture, however, establishing a connection between English Language Teaching (ELT), patriotism, and Muslim faith is seen as one of the aims of ELT. The Punjab Textbook Board openly states: "The board ... takes care, through these books to inoculate in the students a love of the Islamic values and awareness to guard the ideological frontiers of your [the students] home lands." (Punjab Text Book Board 1997).
Many difficult choices must be made if further standardisation of English is pursued. These include whether to adopt a current standard or to move towards a more neutral, but artificial, one. A true International English might supplant both current American and British English as a variety of English for international communication, leaving these as local dialects, or it might arise from a merger of General American and standard British English, with an admixture of other varieties of English, and generally replace all these varieties of English. We may, in due course, all need to be in control of two standard Englishes—the one which gives us our national and local identity, and the other which puts us in touch with the rest of the human race. In effect, we may all need to become bilingual in our own language. — David Crystal (1988, p. 265) This is the situation long faced by many users of English who possess a "non-standard" dialect of English as their birth tongue but have also learned to write (and perhaps also speak) a more standard dialect. (This phenomenon is known in linguistics as "diglossia".) Many academics publish material in journals requiring different varieties of English and change style and spellings as necessary without great difficulty. As far as spelling is concerned, the differences between American and British usage became noticeable due to the first influential lexicographers (dictionary writers) on each side of the Atlantic. Samuel Johnson's dictionary of 1755 greatly favoured Norman-influenced spellings such as "centre" and "colour"; on the other hand, Noah Webster's first guide to American spelling, published in 1783, preferred spellings like "center" and the Latinate "color". The differences in strategy and philosophy between Johnson and Webster are largely responsible for the main division in English spelling that exists today. However, these differences are extremely minor. Spelling is but a small part of the differences between dialects of English, and may not even reflect dialect differences at all (except in phonetically spelled dialogue). International English refers to much more than an agreed spelling pattern. Two approaches to International English are the individualistic and inclusive approach and the new dialect approach. The individualistic approach gives control to individual authors to write and spell as they wish (within purported standard conventions) and to accept the validity of differences. The "Longman Grammar of Spoken and Written English", published in 1999, is a descriptive study of both American and British English in which each chapter follows individual spelling conventions according to the preference of its main editor. The new dialect approach appears in "The Cambridge Guide to English Usage" (Peters, 2004), which attempts to avoid any language bias and accordingly uses an idiosyncratic international spelling system of mixed American and British forms (but tending to prefer the American English spellings).
https://en.wikipedia.org/wiki?curid=14996
International African Institute The International African Institute (IAI) was founded (as the International Institute of African Languages and Cultures – IIALC) in 1926 in London for the study of African languages. Frederick Lugard was the first chairman (1926 until his death in 1945); Diedrich Hermann Westermann (1926 to 1939) and Maurice Delafosse (1926) were the initial co-directors. Since 1928, the IAI has published a quarterly journal, "Africa". For some years during the 1950s and 1960s, the assistant editor was the novelist Barbara Pym. The IAI's mission is "to promote the education of the public in the study of Africa and its languages and cultures". Its operations include seminars, journals, monographs, edited volumes and the stimulation of scholarship within Africa. The IAI has been involved in scholarly publishing since 1927. Scholars whose work has been published by the institute include Emmanuel Akyeampong, Samir Amin, Karin Barber, Alex de Waal, Patrick Chabal, Mary Douglas, E.E. Evans-Pritchard, Jack Goody, Jane Guyer, Monica Hunter, Bronislaw Malinowski, Z.K. Matthews, D.A. Masolo, Achille Mbembe, Thomas Mofolo, John Middleton, Simon Ottenberg, J.D.Y. Peel, Mamphela Ramphele, Isaac Schapera, Monica Wilson and V.Y. Mudimbe. IAI publications fall into a number of series, notably the International African Library and the International African Seminars. The International African Library is published from volume 41 (2011) by Cambridge University Press; volumes 7–40 are available from Edinburgh University Press. To date there are 49 volumes. The archives of the International African Institute are held at the Archives Division of the Library of the London School of Economics. An online catalogue of these papers is available. In 1928, the IAI (then the IIALC) published an "Africa Alphabet" to facilitate the standardization of Latin-based writing systems for African languages. From April 1929 to 1950, the IAI offered prizes for works of literature in African languages.
https://en.wikipedia.org/wiki?curid=14997
Insulin-like growth factor The insulin-like growth factors (IGFs) are proteins with high sequence similarity to insulin. IGFs are part of a complex system that cells use to communicate with their physiologic environment. This complex system (often referred to as the IGF "axis") consists of two cell-surface receptors (IGF1R and IGF2R), two ligands (insulin-like growth factor 1 (IGF-1) and insulin-like growth factor 2 (IGF-2)), a family of seven high-affinity IGF-binding proteins (IGFBP1 to IGFBP7), and associated IGFBP-degrading enzymes, referred to collectively as proteases. The IGF "axis" is also commonly referred to as the growth hormone/IGF-1 axis. Insulin-like growth factor 1 (IGF-1, sometimes written with a Roman numeral as IGF-I) is mainly secreted by the liver as a result of stimulation by growth hormone (GH). IGF-1 is important both for the regulation of normal physiology and for a number of pathological states, including cancer. The IGF axis has been shown to play roles in the promotion of cell proliferation and the inhibition of cell death (apoptosis). Insulin-like growth factor 2 (IGF-2, sometimes written as IGF-II) is thought to be a primary growth factor required for early development, while IGF-1 expression is required for achieving maximal growth. Gene knockout studies in mice have confirmed this, though other animals are likely to regulate the expression of these genes in distinct ways. While IGF-2 may be primarily fetal in action, it is also essential for the development and function of organs such as the brain, liver, and kidney. Factors that are thought to cause variation in the levels of GH and IGF-1 in the circulation include an individual's genetic make-up, the time of day, age, sex, exercise status, stress levels, nutrition level, body mass index (BMI), disease state, race, estrogen status, and xenobiotic intake. IGF-1 is involved in regulating neural development, including neurogenesis, myelination, synaptogenesis, and dendritic branching, as well as neuroprotection after neuronal damage. Increased serum levels of IGF-1 in children have been associated with higher IQ. IGF-1 shapes the development of the cochlea by controlling apoptosis; a deficit can cause hearing loss. Serum IGF-1 levels also underlie a correlation between short stature and reduced hearing ability, particularly around 3–5 years of age and at age 18 (late puberty). The IGFs are known to bind the IGF-1 receptor, the insulin receptor, the IGF-2 receptor, the insulin-related receptor and possibly other receptors. The IGF-1 receptor is the "physiological" receptor—IGF-1 binds to it at significantly higher affinity than it binds the insulin receptor. Like the insulin receptor, the IGF-1 receptor is a receptor tyrosine kinase—meaning it signals by causing the addition of a phosphate group to particular tyrosines. The IGF-2 receptor only binds IGF-2 and acts as a "clearance receptor"—it activates no intracellular signaling pathways, functioning only as an IGF-2 sequestering agent and preventing IGF-2 signaling. Since many distinct tissue types express the IGF-1 receptor, IGF-1's effects are diverse. It acts as a neurotrophic factor, inducing the survival of neurons. It may induce skeletal muscle hypertrophy, by stimulating protein synthesis and by blocking muscle atrophy. It is protective for cartilage cells and is associated with the activation of osteocytes, and thus may be an anabolic factor for bone.
Since at high concentrations it is capable of activating the insulin receptor, IGF-1 can also complement the effects of insulin. Receptors for IGF-1 are found in vascular smooth muscle, while typical receptors for insulin are not. IGF-1 and IGF-2 are regulated by a family of proteins known as the IGF-binding proteins. These proteins help to modulate IGF action in complex ways that involve both inhibiting IGF action by preventing binding to the IGF-1 receptor and promoting IGF action, possibly by aiding delivery to the receptor and increasing IGF half-life. Currently, there are seven characterized IGF-binding proteins (IGFBP1 to IGFBP7). There is significant evidence that IGFBPs play important roles in addition to their ability to regulate IGFs. IGF-1 and IGFBP-3 are GH-dependent, whereas IGFBP-1 is insulin-regulated. IGFBP-1 production by the liver is significantly elevated during insulinopenia, while serum levels of bioactive IGF-1 are increased by insulin. Recent studies show that the insulin/IGF axis plays an important role in aging. Nematodes, fruit flies, and other organisms have an increased life span when the gene equivalent to mammalian insulin is knocked out. It is somewhat difficult to relate this finding to mammals, however, because in these smaller organisms there are many genes (at least 37 in the nematode "Caenorhabditis elegans") that are "insulin-like" or "IGF-1-like", whereas in mammals the insulin-like proteins comprise only seven members (insulin, the IGFs, the relaxins, EPIL, and relaxin-like factor). The human insulin-like genes have apparently distinct roles, with some, but limited, crosstalk, presumably because there are multiple insulin-receptor-like proteins in humans. Simpler organisms typically have fewer receptors; for example, only one insulin-like receptor exists in the nematode "C. elegans". Additionally, "C. elegans" does not have specialized organs such as the islets of Langerhans, which secrete insulin in response to glucose levels. Moreover, IGF-1 affects lifespan in nematodes by causing dauer formation, a developmental stage of the "C. elegans" larva, for which there is no mammalian correlate. Therefore, it is an open question whether either IGF-1 or insulin in mammals may perturb aging, although there is the suggestion that dietary restriction phenomena may be related. Other studies are beginning to uncover the important role the IGFs play in diseases such as cancer and diabetes, showing for instance that IGF-1 stimulates the growth of both prostate and breast cancer cells. Researchers are not in complete agreement about the degree of cancer risk that IGF-1 poses.
https://en.wikipedia.org/wiki?curid=15000
Islamism Islamism is a concept whose meaning has been debated in both public and academic contexts. The term can refer to diverse forms of social and political activism advocating that public and political life should be guided by Islamic principles, or more specifically to movements which call for the full implementation of "sharia" (Islamic order or law). It is commonly used interchangeably with the terms political Islam or Islamic fundamentalism. In academic usage, the term "Islamism" does not specify what vision of "Islamic order" or sharia is being advocated, or how its advocates intend to bring it about. In Western mass media it tends to refer to groups whose aim is to establish a sharia-based Islamic state, often with the implication of violent tactics and human rights violations, and it has acquired connotations of political extremism. In the Muslim world, the term has positive connotations among its proponents. Different currents of Islamist thought include advocating a "revolutionary" strategy of Islamizing society through the exercise of state power, and alternately a "reformist" strategy of re-Islamizing society through grass-roots social and political activism. Islamists may emphasize the implementation of sharia; pan-Islamic political unity, including an Islamic state; or the selective removal of non-Muslim, particularly Western military, economic, political, social, or cultural influences in the Muslim world that they believe to be incompatible with Islam. Graham Fuller has argued for a broader notion of Islamism as a form of identity politics, involving "support for [Muslim] identity, authenticity, broader regionalism, revivalism, [and] revitalization of the community." Some authors hold the term "Islamic activism" to be synonymous with and preferable to "Islamism", and Rached Ghannouchi writes that Islamists themselves prefer the term "Islamic movement". Central and prominent figures in twentieth-century Islamism include Hasan al-Banna, Sayyid Qutb, Abul Ala Maududi, and Ruhollah Khomeini. Most Islamist thinkers emphasize peaceful political processes, which are supported by the majority of contemporary Islamists. Others, Sayyid Qutb in particular, called for violence, and his followers are generally considered Islamic extremists, although Qutb denounced the killing of innocents. According to Robin Wright, Islamist movements have "arguably altered the Middle East more than any trend since the modern states gained independence", redefining "politics and even borders". Following the Arab Spring, some Islamist currents became heavily involved in democratic politics, while others spawned "the most aggressive and ambitious Islamist militia" to date, ISIS. The term "Islamism", which originally denoted the religion of Islam, first appeared in the English language as "Islamismus" in 1696, and as "Islamism" in 1712. The term appears in the U.S. Supreme Court decision in "In Re Ross" (1891). By the turn of the twentieth century the shorter and purely Arabic term "Islam" had begun to displace it, and by 1938, when Orientalist scholars completed "The Encyclopaedia of Islam", "Islamism" seems to have virtually disappeared from English usage. The term acquired its contemporary connotations in French academia in the late 1970s and early 1980s. From French, it began to migrate to English in the mid-1980s, and in recent years it has largely displaced the term Islamic fundamentalism in academic circles.
The new use of the term "Islamism" at first functioned as "a marker for scholars more likely to sympathize" with new Islamic movements; however, as the term gained popularity, it became more specifically associated with political groups such as the Taliban or the Algerian Armed Islamic Group, as well as with highly publicized acts of violence. "Islamists" who have spoken out against the use of the term, insisting they are merely "Muslims", include Ayatollah Mohammad Hussein Fadlallah (1935–2010), the spiritual mentor of Hezbollah, and Abbassi Madani (1931–), leader of the Algerian Islamic Salvation Front. A 2003 article in the "Middle East Quarterly" states: In summation, the term Islamism enjoyed its first run, lasting from Voltaire to the First World War, as a synonym for Islam. Enlightened scholars and writers generally preferred it to Mohammedanism. Eventually both terms yielded to Islam, the Arabic name of the faith, and a word free of either pejorative or comparative associations. There was no need for any other term, until the rise of an ideological and political interpretation of Islam challenged scholars and commentators to come up with an alternative, to distinguish Islam as modern ideology from Islam as a faith... To all intents and purposes, Islamic fundamentalism and Islamism have become synonyms in contemporary American usage. The Council on American–Islamic Relations complained in 2013 that the Associated Press's definition of "Islamist"—a "supporter of government in accord with the laws of Islam [and] who view the Quran as a political model"—had become a pejorative shorthand for "Muslims we don't like". Mansoor Moaddel, a sociologist at Eastern Michigan University, criticized it as "not a good term" because "the use of the term Islamist does not capture the phenomena that is quite heterogeneous." The AP Stylebook entry for "Islamist" reads as follows: "An advocate or supporter of a political movement that favors reordering government and society in accordance with laws prescribed by Islam. Do not use as a synonym for Islamic fighters, militants, extremists or radicals, who may or may not be Islamists. Where possible, be specific and use the name of militant affiliations: al-Qaida-linked, Hezbollah, Taliban, etc. Those who view the Quran as a political model encompass a wide range of Muslims, from mainstream politicians to militants known as jihadi." Islamism has been defined in various ways. It takes different forms and spans a wide range of strategies and tactics towards the powers in place—"destruction, opposition, collaboration, indifference"—that have varied as "circumstances have changed", and thus it is not a united movement. Moderate and reformist Islamists who accept and work within the democratic process include parties like the Tunisian Ennahda Movement. Jamaat-e-Islami of Pakistan is basically a socio-political and democratic vanguard party but has also gained political influence through military coups d'état in the past. Other Islamist groups like Hezbollah in Lebanon and Hamas in Palestine participate in the democratic and political process as well as in armed attacks. Jihadist organizations like al-Qaeda and the Egyptian Islamic Jihad, and groups such as the Taliban, entirely reject democracy, often declaring as "kuffar" those Muslims who support it (see "takfirism"), as well as calling for violent/offensive jihad or urging and conducting attacks on a religious basis. Another major division within Islamism is between what Graham E.
Fuller has described as the fundamentalist "guardians of the tradition" (Salafis, such as those in the Wahhabi movement) and the "vanguard of change and Islamic reform" centered around the Muslim Brotherhood. Olivier Roy argues that "Sunni pan-Islamism underwent a remarkable shift in the second half of the 20th century" when the Muslim Brotherhood movement and its focus on the Islamisation of pan-Arabism was eclipsed by the Salafi movement, with its emphasis on "sharia rather than the building of Islamic institutions" and its rejection of Shia Islam. Following the Arab Spring, Roy has described Islamism as "increasingly interdependent" with democracy in much of the Arab Muslim world, such that "neither can now survive without the other." While Islamist political culture itself may not be democratic, Islamists need democratic elections to maintain their legitimacy. At the same time, their popularity is such that no government can call itself democratic if it excludes mainstream Islamist groups. The relationship between the notions of Islam and Islamism has been subject to disagreement. Hayri Abaza argues that the failure to distinguish between Islam and Islamism leads many in the West to support illiberal Islamic regimes, to the detriment of progressive moderates who seek to separate religion from politics. In contrast, Abid Ullah Jan writes, "If Islam is a way of life, how can we say that those who want to live by its principles in legal, social, political, economic, and political spheres of life are not Muslims, but Islamists and believe in Islamism, not [just] Islam." A writer for the International Crisis Group maintains that "the conception of 'political Islam'" is a creation of Americans to explain the Iranian Islamic Revolution, that apolitical Islam was a historical fluke of the "short-lived era of the heyday of secular Arab nationalism between 1945 and 1970", and that it is quietist/non-political Islam, not Islamism, that requires explanation. Another source distinguishes Islamist from Islamic "by the fact that the latter refers to a religion and culture in existence over a millennium, whereas the first is a political/religious phenomenon linked to the great events of the 20th century". Islamists have, at least at times, defined themselves as "Islamiyyoun/Islamists" to differentiate themselves from "Muslimun/Muslims". Daniel Pipes describes Islamism as a modern ideology that owes more to European utopian political ideologies and "isms" than to the traditional Islamic religion. Few observers contest the influence of Islamism within the Muslim world. Following the collapse of the Soviet Union, political movements based on the liberal ideology of free expression and democratic rule have led the opposition in other parts of the world, such as Latin America, Eastern Europe and many parts of Asia; however, "the simple fact is that political Islam currently reigns as the most powerful ideological force across the Muslim world today". The unchanging socioeconomic conditions of the Muslim world are seen as a major factor. Olivier Roy believes "the socioeconomic realities that sustained the Islamist wave are still here and are not going to change: poverty, uprootedness, crises in values and identities, the decay of the educational systems, the North-South opposition, and the problem of immigrant integration into the host societies". The strength of Islamism also draws from the strength of religiosity in general in the Muslim world.
Compared to Western societies, "[w]hat is striking about the Islamic world is that ... it seems to have been the least penetrated by irreligion". Where other peoples may look to the physical or social sciences for answers in areas which their ancestors regarded as best left to scripture, in the Muslim world religion has become more encompassing, not less, as "in the last few decades, it has been the fundamentalists who have increasingly represented the cutting edge" of Muslim culture. Even before the Arab Spring, Islamists in Egypt and other Muslim countries had been described as "extremely influential. ... They determine how one dresses, what one eats. In these areas, they are incredibly successful. ... Even if the Islamists never come to power, they have transformed their countries." Democratic, peaceful and political Islamists now dominate the spectrum of Islamist ideology as well as the political systems of the Muslim world. Moderate strains of Islamism have been described as "competing in the democratic public square in places like Turkey, Tunisia, Malaysia and Indonesia". Moderate Islamism refers to the emerging Islamist discourses and movements that are considered to deviate from the traditional Islamist discourses of the mid-20th century. Moderate Islamism is characterized by pragmatic participation within the existing constitutional and political framework, in most cases within democratic institutions. Moderate Islamists make up the majority of the contemporary Islamist movements. From a philosophical perspective, their discourses are represented by the reformation or reinterpretation of modern socio-political institutions and values imported from the West, including democracy. This has led to Islamic conceptions of such institutions, and Islamic interpretations are often attempted within these conceptions. In the example of democracy, Islamic democracy has been intellectually developed as an Islamized form of the system. In Islamic democracy, the concept of "shura", the tradition of consultation considered a Sunnah of the Prophet Muhammad, is invoked to reinterpret and legitimize the institution of democracy in Islamic terms. The performance, goals, strategies, and outcomes of moderate Islamist movements vary considerably depending on the country and its socio-political and historical context. In terms of performance, most Islamist political parties are in opposition. However, there are a few examples where they govern or obtain a substantial share of the popular vote. These include the National Congress of Sudan, the National Iraqi Alliance of Iraq and the Justice and Development Party (PJD) of Morocco. Their goals also range widely. The Ennahda Movement of Tunisia and the Prosperous Justice Party (PKS) of Indonesia have formally renounced their vision of implementing sharia. In Morocco, the PJD supported King Muhammad VI's "Mudawana", a "startlingly progressive family law" which grants women the right to a divorce, raises the minimum age for marriage to 18, and, in the event of separation, stipulates the equal distribution of property. By contrast, the National Congress of Sudan has implemented a strict interpretation of sharia with foreign support from conservative states. Movements of the former category are also termed Post-Islamist (see below). Their political outcomes are interdependent with their goals and strategies, in which what analysts call the "inclusion-moderation theory" is in effect.
Inclusion-moderation theory assumes that the more lenient Islamists become, the less likely it is that their survival will be threatened. Similarly, the more accommodating the government is, the less extreme Islamists become. Moderate Islamism within democratic institutions is a relatively recent phenomenon. Throughout the 80s and 90s, major moderate Islamist movements such as the Muslim Brotherhood and the Ennahda were excluded from democratic political participation. Islamist movements operating within the state framework came under marked scrutiny during the Algerian Civil War (1991–2002) and after the increase of terrorism in Egypt in the 90s. Reflecting on these failures, Islamists became increasingly revisionist and receptive to democratic procedures in the 21st century. The possibility of accommodating this new wave of modernist Islamism has been explored among Western intellectuals, with concepts such as the Turkish model being proposed. The concept was inspired by the perceived success of the Turkish Justice and Development Party (AKP), led by Recep Tayyip Erdoğan, in harmonizing Islamist principles within a secular state framework. The Turkish model, however, has been considered to have come "unstuck" after recent purges and violations of democratic principles by the Erdoğan regime. Critics of the concept hold that Islamist aspirations are fundamentally incompatible with democratic principles, and thus that even moderate Islamists are totalitarian in nature; accommodating them would therefore require strong constitutional checks, as well as efforts by mainstream Islam to detach political Islam from public discourse. Post-Islamism is a term proposed by Iranian political sociologist Asef Bayat, referring to Islamist movements marked by a critical departure from the traditional Islamist discourses of the mid-20th century. Bayat explained it as "a condition where, following a phase of experimentation, the appeal, energy, symbols and sources of legitimacy of Islamism get exhausted, even among its once-ardent supporters. As such, post-Islamism is not anti-Islamic, but rather reflects a tendency to resecularize religion." It originally pertained only to Iran, where "post-Islamism is expressed in the idea of fusion between Islam (as a personalized faith) and individual freedom and choice; and post-Islamism is associated with the values of democracy and aspects of modernity". A 2008 Lowy Institute for International Policy paper suggests that the PKS of Indonesia and the AKP of Turkey are post-Islamist. The characterization can be applied to the Malaysian Islamic Party (PAS), and it has been used to describe the "ideological evolution" within the Ennahda of Tunisia. The contemporary Salafi movement encompasses a broad range of ultraconservative Islamist doctrines which share the reformist mission of Ibn Taymiyyah. From the perspective of political Islam, the Salafi movement can be broadly categorized into three groups: the quietist (or purist), the activist (or "haraki") and the jihadist (Salafi jihadism, see below). The quietist school advocates societal reform through religious education and proselytizing rather than political activism. The activist school, by contrast, encourages political participation within the constitutional and political framework. The jihadist school is inspired by the ideology of Sayyid Qutb (Qutbism, see below), rejects the legitimacy of secular institutions, and promotes revolution in order to pave the way for the establishment of a new Caliphate.
The quietist Salafi movement stems from the teaching of Nasiruddin Albani, who challenged the notion of "taqlid" (imitation, conformity to legal precedent) as blind adherence. As such, quietists warn against political participation as potentially leading to division within the Muslim community. This school is exemplified by Madkhalism, which is based on the writings of Rabee al-Madkhali. Madkhalism originated in 1990s Saudi Arabia as a reaction against the rise of Salafi activism and the threat of Salafi jihadism. It rejects any kind of opposition to secular governance, and was thus endorsed by the authoritarian governments of Egypt and Saudi Arabia during the 90s. The influence of the quietist school has waned significantly in the Middle East in recent years, as governments have begun incorporating Islamist factions in response to popular demand. The politically active Salafi movement, Salafi activism or the "harakis", is based on a religious belief that endorses non-violent political activism in order to protect God's divine governance. This means that politics is a field to which Salafi principles must be applied as well, in the same manner as other aspects of society and life. Salafi activism originated in Saudi Arabia in the 1950s and 60s, where many Muslim Brothers had taken refuge from persecution by the Nasser regime. There, the Muslim Brothers' Islamism synthesized with Salafism, leading to the creation of the Salafi activist trend exemplified by the Sahwa movement of the 80s, promulgated by Safar al-Hawali and Salman al-Ouda. Today, this school makes up the majority of Salafism. There are many active Salafist political parties throughout the Muslim world, including the Al Nour Party of Egypt, Al Islah of Yemen and Al Asalah of Bahrain. The antecedent of the contemporary Salafi movement is Wahhabism, which traces back to the 18th-century reform movement in Najd led by Muhammad ibn Abd al-Wahhab. Although they have different roots, Wahhabism and Salafism are considered to have more or less merged in 1960s Saudi Arabia. In the process, Salafism was greatly influenced by Wahhabism, and today they share a similar religious outlook. Wahhabism is also described as a Saudi brand of Salafism. From a political perspective, Wahhabism is marked by its teaching of "bay'ah" (the oath of allegiance), which requires Muslims to present an allegiance to the ruler of the society. Wahhabis have traditionally given their allegiance to the House of Saud, and this has made them apolitical in Saudi Arabia. However, there are small numbers of other strains, including a Salafi jihadist offshoot, which decline to present an allegiance to the House of Saud. Wahhabism is also characterized by its disinterest in the social justice, anticolonialism, and economic equality expounded upon by mainstream Islamists. Historically, Wahhabism was state-sponsored and internationally propagated by Saudi Arabia with the help of funding mainly from Saudi petroleum exports, leading to the "explosive growth" of its influence (and, subsequently, of the influence of Salafism) from the 70s onward, a phenomenon often dubbed Petro-Islam. Today, both Wahhabism and Salafism exert their influence worldwide, and they have indirectly contributed to the upsurge of Salafi jihadism as well. Qutbism is an ideology formulated by Sayyid Qutb, an influential figure of the Muslim Brotherhood during the 50s and 60s, which justifies the use of violence in order to advance Islamist goals.
Qutbism is marked by two distinct methodological concepts: one is "takfirism", which, in the context of Qutbism, indicates the excommunication of fellow Muslims who are deemed apostates; the other is "offensive jihad", a concept which promotes violence in the name of Islam against perceived "kuffar" (infidels). Based on these two concepts, Qutbism promotes engagement against the state apparatus in order to topple its regime. The fusion of Qutbism and the Salafi movement resulted in the development of Salafi jihadism (see below). Qutbism is considered a product of the extreme repression experienced by Qutb and his fellow Muslim Brothers under the Nasser regime, which followed the 1954 Muslim Brotherhood plot to assassinate Nasser. During the repression, thousands of Muslim Brothers were imprisoned, many of them, including Qutb, tortured and held in concentration camps. Under these conditions, Qutb cultivated his Islamist ideology in his seminal work "Ma'alim fi-l-Tariq" ("Milestones"), in which he equated the Muslims within the Nasser regime with secularism and the West, and described them as having regressed to "jahiliyyah" (the period before the advent of Islam). In this context, he allowed the "takfir" (an unusual practice before Qutb revived it) of said Muslims. Although Qutb was executed before the completion of his ideology, his ideas were disseminated and continuously expanded by later generations, among them Abdullah Yusuf Azzam and Ayman al-Zawahiri, who was a student of Qutb's brother Muhammad Qutb and later became a mentor of Osama bin Laden. Al-Zawahiri was influenced by accounts of "the purity of Qutb's character and the torment he had endured in prison," and played an extensive role in the normalization of offensive jihad within Qutbist discourse. Both al-Zawahiri and bin Laden became the core of the jihadist movements that developed exponentially against the backdrop of the late 20th-century geopolitical crises throughout the Muslim world. Salafi jihadism is a term coined by Gilles Kepel in 2002, referring to the ideology that actively promotes and conducts violence and terrorism in pursuit of the establishment of an Islamic state or a new Caliphate. Today, according to Martin Kramer, the term is often simplified to "jihadism" or the "jihadist movement" in popular usage. It is a hybrid ideology combining Qutbism, Salafism, Wahhabism and other minor Islamist strains. Qutbism, as taught by scholars like Abdullah Azzam, provided the political-intellectual underpinnings with concepts like takfirism, while Salafism and Wahhabism provided the religious-intellectual input. Salafi jihadism makes up a tiny minority of the contemporary Islamist movements. Distinct characteristics of Salafi jihadism noted by Robin Wright include the formal process of taking "bay'ah" (an oath of allegiance) to the leader, which is inspired by Wahhabi teaching. Another characteristic is its flexibility in cutting ties with less-popular movements when it is strategically or financially convenient, exemplified by the relations between al-Qaeda and the al-Nusra Front. Other marked developments of Salafi jihadism include the concepts of the "near enemy" and the "far enemy". "Near enemy" connotes a despotic regime ruling a Muslim society, and the term was coined by Mohammed Abdul-Salam Farag in order to justify the assassination of Anwar al-Sadat by the Salafi jihadist organization Egyptian Islamic Jihad (EIJ) in 1981.
Later, the concept of the "far enemy", which connotes the West, was introduced and formally declared by al-Qaeda in 1996. Salafi jihadism emerged during the 80s, when the Soviet Union invaded Afghanistan. Local mujahideen extracted financial, logistical and military support from Saudi Arabia, Pakistan and the United States. Later, Osama bin Laden established al-Qaeda as a transnational Salafi jihadist organization in 1988 to capitalize on this financial, logistical and military network and to expand the movement's operations. The ideology saw its rise during the 90s, when the Muslim world experienced numerous geopolitical crises, notably the Algerian Civil War (1991–2002), the Bosnian War (1992–1995), and the First Chechen War (1994–1996). Within these conflicts, political Islam often acted as a mobilizing factor for the local belligerents, who demanded financial, logistical and military support from al-Qaeda in exchange for the active proliferation of the ideology. After the 1998 bombings of US embassies, the September 11 attacks (2001), and the US-led invasions of Afghanistan (2001) and Iraq (2003), Salafi jihadism gained momentum. However, it was devastated by US counterterrorism operations, culminating in bin Laden's death in 2011. After the Arab Spring (2011) and the subsequent Syrian Civil War (2011–present), the remnants of the al-Qaeda franchise in Iraq restored their capacity and rapidly developed into the Islamic State of Iraq and the Levant, spreading its influence throughout the conflict zones of the MENA region and across the globe. Several Islamic revivalist movements and leaders pre-dated Islamism. The end of the 19th century saw the dismemberment of most of the Muslim Ottoman Empire by non-Muslim European colonial powers. The empire spent massive sums on Western civilian and military technology to try to modernize and compete with the encroaching European powers, and in the process went deep into debt to these powers. In this context, the publications of Jamal ad-Din al-Afghani (1837–97), Muhammad Abduh (1849–1905) and Rashid Rida (1865–1935) preached Islamic alternatives to the political, economic, and cultural decline of the empire. Muhammad Abduh and Rashid Rida formed the beginning of the Islamist movement, as well as of the reformist Islamist movement. Their ideas included the creation of a truly Islamic society under sharia law, and the rejection of taqlid, the blind imitation of earlier authorities, which they believed deviated from the true messages of Islam. Unlike some later Islamists, the early Salafiyya strongly emphasized the restoration of the Caliphate. Muhammad Iqbal was a philosopher, poet and politician in British India who is widely regarded as having inspired Islamic nationalism and the Pakistan Movement in British India. Iqbal is admired as a prominent classical poet by Pakistani, Iranian, Indian and other international scholars of literature. Though Iqbal is best known as an eminent poet, he is also a highly acclaimed "Islamic philosophical thinker of modern times". While studying law and philosophy in England and Germany, Iqbal became a member of the London branch of the All India Muslim League. He returned to Lahore in 1908. While dividing his time between law practice and philosophical poetry, Iqbal remained active in the Muslim League. He did not support Indian involvement in World War I and remained in close touch with Muslim political leaders such as Muhammad Ali Johar and Muhammad Ali Jinnah.
He was a critic of the mainstream Indian nationalist and secularist Indian National Congress. Iqbal's seven English lectures were published by Oxford University Press in 1934 in a book titled "The Reconstruction of Religious Thought in Islam". These lectures dwell on the role of Islam as a religion as well as a political and legal philosophy in the modern age. Iqbal expressed fears that not only would secularism and secular nationalism weaken the spiritual foundations of Islam and Muslim society, but that India's Hindu-majority population would crowd out Muslim heritage, culture and political influence. In his travels to Egypt, Afghanistan, Palestine and Syria, he promoted ideas of greater Islamic political co-operation and unity, calling for the shedding of nationalist differences. Sir Muhammad Iqbal was elected president of the Muslim League in 1930 at its session in Allahabad, as well as for the session in Lahore in 1932. In his Allahabad Address on 29 December 1930, Iqbal outlined a vision of an independent state for the Muslim-majority provinces in northwestern India. This address later inspired the Pakistan movement. The thoughts and vision of Iqbal later influenced many reformist Islamists, e.g., Muhammad Asad, Sayyid Abul Ala Maududi and Ali Shariati. Sayyid Abul Ala Maududi was an important early twentieth-century figure in the Islamic revival in India, and then, after independence from Britain, in Pakistan. Trained as a lawyer, he chose the profession of journalism and wrote about contemporary issues and, most importantly, about Islam and Islamic law. Maududi founded the Jamaat-e-Islami party in 1941 and remained its leader until 1972. However, Maududi had much more impact through his writing than through his political organising. His extremely influential books (translated into many languages) placed Islam in a modern context and influenced not only conservative ulema but liberal modernizer Islamists such as al-Faruqi, whose "Islamization of Knowledge" carried forward some of Maududi's key principles. Maududi believed that Islam was all-encompassing: "Everything in the universe is 'Muslim' for it obeys God by submission to His laws... The man who denies God is called Kafir (concealer) because he conceals by his disbelief what is inherent in his nature and embalmed in his own soul." Maududi also believed that Muslim society could not be Islamic without Sharia, and that Islam required the establishment of an Islamic state. This state should be a "theo-democracy" based on the principles of "tawhid" (unity of God), "risala" (prophethood) and "khilafa" (caliphate). Although Maududi talked about Islamic revolution, by "revolution" he meant not the violence or populist policies of the Iranian Revolution, but the gradual changing of the hearts and minds of individuals from the top of society downward through an educational process, or "da'wah". Roughly contemporaneous with Maududi was the founding of the Muslim Brotherhood in Ismailiyah, Egypt, in 1928 by Hassan al-Banna. His was arguably the first, largest and most influential modern Islamic political/religious organization. Under the motto "the Qur'an is our constitution," it sought Islamic revival through preaching and also by providing basic community services including schools, mosques, and workshops. Like Maududi, al-Banna believed in the necessity of government rule based on Shariah law, implemented gradually and by persuasion, and of eliminating all imperialist influence in the Muslim world.
Some elements of the Brotherhood, though perhaps against orders, did engage in violence against the government, and its founder al-Banna was assassinated in 1949 in retaliation for the assassination of Egypt's premier Mahmud Fahmi al-Naqrashi three months earlier. The Brotherhood has suffered periodic repression in Egypt and has been banned several times, in 1948 and again several years later following confrontations with Egyptian president Gamal Abdul Nasser, who jailed thousands of members for several years. Despite periodic repression, the Brotherhood has become one of the most influential movements in the Islamic world, particularly in the Arab world. For many years it was described as "semi-legal" and was the only opposition group in Egypt able to field candidates during elections. In the 2011–12 Egyptian parliamentary election, the political parties identified as "Islamist" (the Brotherhood's Freedom and Justice Party, the Salafi Al-Nour Party and the liberal Islamist Al-Wasat Party) won 75% of the total seats. Mohamed Morsi, an Islamist of the Muslim Brotherhood, was the first democratically elected president of Egypt. He was deposed during the 2013 Egyptian coup d'état. Maududi's political ideas influenced Sayyid Qutb, a leading member of the Muslim Brotherhood movement, one of the key philosophers of Islamism, and one of the most influential thinkers of Islamic universalism. Qutb believed things had reached such a state that the Muslim community had literally ceased to exist. It "has been extinct for a few centuries," having reverted to Godless ignorance (jahiliyya). To eliminate jahiliyya, Qutb argued, Sharia, or Islamic law, must be established. Sharia law was not only accessible to humans and essential to the existence of Islam, but also all-encompassing, precluding "evil and corrupt" non-Islamic ideologies like communism, nationalism, or secular democracy. Qutb preached that Muslims must engage in a two-pronged attack: converting individuals through the peaceful preaching of Islam, and waging what he called militant jihad so as to forcibly eliminate the "power structures" of jahiliyya—not only from the Islamic homeland but from the face of the earth. Qutb was both a member of the Brotherhood and enormously influential in the Muslim world at large. Qutb is considered by some (Fawaz A. Gerges) to be "the founding father and leading theoretician" of modern jihadists, such as Osama bin Laden. However, the Muslim Brotherhood in Egypt and in Europe has not embraced his vision of an undemocratic Islamic state and armed jihad, something for which it has been denounced by radical Islamists. Islamic fervor was understood by the United States as a weapon it could use in its Cold War against the Soviet Union and its communist allies, because communism professes atheism. In a September 1957 White House meeting between U.S. President Eisenhower and senior U.S. foreign policy officials, it was agreed to use the communists' lack of religion against them by setting up a secret task force to deliver weapons to Middle East despots, including the Saudi Arabian rulers. "We should do everything possible to stress the 'holy war' aspect" that had currency in the Middle East, President Eisenhower stated in agreement. The quick and decisive defeat of the Arab troops during the Six-Day War by Israeli troops constituted a pivotal event in the Arab Muslim world. The defeat, along with the economic stagnation of the defeated countries, was blamed on the secular Arab nationalism of the ruling regimes.
A steep and steady decline in the popularity and credibility of secular, socialist and nationalist politics ensued. Ba'athism, Arab socialism, and Arab nationalism suffered, and various democratic and anti-democratic Islamist movements inspired by Maududi and Sayyid Qutb gained ground. The first modern "Islamist state" (with the possible exception of Zia's Pakistan) was established among the Shia of Iran. In a major shock to the rest of the world, Ayatollah Ruhollah Khomeini led the Iranian Revolution of 1979, which overthrew the oil-rich, well-armed, Westernized and pro-American secular monarchy of Shah Muhammad Reza Pahlavi. The views of Ali Shariati, the ideologue of the Iranian Revolution, resembled those of Mohammad Iqbal, the ideological father of the State of Pakistan, but Khomeini's beliefs are perceived to lie somewhere between the beliefs of Shia Islam and those of Sunni Islamic thinkers like Mawdudi and Qutb. He believed that complete imitation of the Prophet Mohammad and his successors such as Ali for the restoration of Sharia law was essential to Islam, that many secular, Westernizing Muslims were actually agents of the West serving Western interests, and that acts such as the "plundering" of Muslim lands were part of a long-term conspiracy against Islam by Western governments. His views differed from those of Sunni scholars in several respects. The revolution was influenced by Marxism through Islamist thought and also by writings that sought either to counter Marxism (Muhammad Baqir al-Sadr's work) or to integrate socialism and Islamism (Ali Shariati's work). A strong wing of the revolutionary leadership was made up of leftists or "radical populists", such as Ali Akbar Mohtashami-Pur. While initial enthusiasm for the Iranian revolution in the Muslim world was intense, it has waned as critics charge that "purges, executions, and atrocities tarnished its image". The Islamic Republic has also maintained its hold on power in Iran in spite of US economic sanctions, and has created or assisted like-minded Shia terrorist groups in Egypt, Syria and Jordan, as well as in Iraq (SCIRI) and Lebanon (Hezbollah), two Muslim countries that also have large Shiite populations. During the 2006 Israel-Lebanon conflict, the Iranian government enjoyed something of a resurgence in popularity amongst the predominantly Sunni "Arab street", due to its support for Hezbollah and to President Mahmoud Ahmadinejad's vehement opposition to the United States and his call for Israel to vanish. The strength of the Islamist movement was manifest in an event which might have seemed sure to turn Muslim public opinion against fundamentalism, but did just the opposite. In 1979 the Grand Mosque in Mecca, Saudi Arabia, was seized by an armed fundamentalist group and held for over a week. Scores were killed, including many pilgrim bystanders, in a gross violation of one of the most holy sites in Islam (and one where arms and violence are strictly forbidden). Instead of prompting a backlash against the movement from which the attackers originated, however, Saudi Arabia, already very conservative, responded by shoring up its fundamentalist credentials with even more Islamic restrictions. Crackdowns followed on everything from shopkeepers who did not close for prayer and newspapers that published pictures of women, to the selling of dolls, teddy bears (images of animate objects are considered haraam), and dog food (dogs are considered unclean).
In other Muslim countries, blame for and wrath against the seizure was directed not at fundamentalists, but at Islamic fundamentalism's foremost geopolitical enemy—the United States. Ayatollah Khomeini sparked attacks on American embassies when he announced: "It is not beyond guessing that this is the work of criminal American imperialism and international Zionism", despite the fact that the object of the fundamentalists' revolt was the Kingdom of Saudi Arabia, America's major ally in the region. Anti-American demonstrations followed in the Philippines, Turkey, Bangladesh, India, the UAE, Pakistan, and Kuwait. The US Embassy in Libya was burned by protesters chanting pro-Khomeini slogans, and the embassy in Islamabad, Pakistan, was burned to the ground. In 1979, the Soviet Union deployed its 40th Army into Afghanistan, attempting to suppress an Islamic rebellion against an allied Marxist regime in the Afghan Civil War. The conflict, pitting indigenous impoverished Muslims (mujahideen) against an anti-religious superpower, galvanized thousands of Muslims around the world to send aid and sometimes to go themselves to fight for their faith. Leading this pan-Islamic effort was the Palestinian sheikh Abdullah Yusuf Azzam. While the military effectiveness of these "Afghan Arabs" was marginal, an estimated 16,000 to 35,000 Muslim volunteers came from around the world to fight in Afghanistan. When the Soviet Union abandoned the Marxist Najibullah regime and withdrew from Afghanistan in 1989 (the regime finally fell in 1992), the victory was seen by many Muslims as a triumph of Islamic faith over superior military power and technology that could be duplicated elsewhere. The jihadists gained legitimacy and prestige from their triumph both within the militant community and among ordinary Muslims, as well as the confidence to carry their jihad to other countries where they believed Muslims required assistance. The "veterans of the guerrilla campaign" returning home to Algeria, Egypt, and other countries "with their experience, ideology, and weapons" were often eager to continue armed jihad. The collapse of the Soviet Union itself, in 1991, was seen by many Islamists, including bin Laden, as the defeat of a superpower at the hands of Islam. Concerning the $6 billion in aid given by the US and Pakistan's military training and intelligence support to the mujahideen, bin Laden wrote: "[T]he US has no mentionable role" in "the collapse of the Soviet Union ... rather the credit goes to God and the mujahidin" of Afghanistan. Another factor in the early 1990s that worked to radicalize the Islamist movement was the Gulf War, which brought several hundred thousand US and allied non-Muslim military personnel to Saudi Arabian soil to put an end to Saddam Hussein's occupation of Kuwait. Prior to 1990, Saudi Arabia had played an important role in restraining the many Islamist groups that received its aid. But when Saddam, the secularist and Ba'athist dictator of neighboring Iraq, attacked Kuwait (his enemy in the war), western troops came to protect the Saudi monarchy. Islamists accused the Saudi regime of being a puppet of the West. These accusations resonated with conservative Muslims, and the problem did not go away with Saddam's defeat either, since American troops remained stationed in the kingdom, and a de facto cooperation with the Palestinian-Israeli peace process developed.
Saudi Arabia attempted to compensate for its loss of prestige among these groups by repressing those domestic Islamists who attacked it (bin Laden being a prime example), and by increasing aid to Islamic groups that did not (Islamist madrassas around the world and even some violent Islamist groups), but its pre-war influence on behalf of moderation was greatly reduced. One result of this was a campaign of attacks on government officials and tourists in Egypt, a bloody civil war in Algeria, and Osama bin Laden's terror attacks climaxing in the 9/11 attack. In Afghanistan, the mujahideen's victory against the Soviet Union in the 1980s did not lead to justice and prosperity, due to a vicious and destructive civil war between political and tribal warlords, making Afghanistan one of the poorest countries on earth. In 1992, the Democratic Republic of Afghanistan ruled by communist forces collapsed, and democratic Islamist elements of the mujahideen founded the Islamic State of Afghanistan. In 1996, a more conservative and anti-democratic Islamist movement known as the Taliban rose to power, defeated most of the warlords, and took over roughly 80% of Afghanistan. The Taliban were spawned by the thousands of madrasahs the Deobandi movement established for impoverished Afghan refugees and supported by governmental and religious groups in neighboring Pakistan. The Taliban differed from other Islamist movements to the point where they might be more properly described as Islamic fundamentalist or neofundamentalist, interested in spreading "an idealized and systematized version of conservative tribal village customs" under the label of Sharia to an entire country. Their ideology was also described as being influenced by Wahhabism and the extremist jihadism of their guest Osama bin Laden. The Taliban considered "politics" to be against Sharia and thus did not hold elections. They were led by Mullah Mohammed Omar, who was given the title "Amir al-Mu'minin", or Commander of the Faithful, and a pledge of loyalty by several hundred Taliban-selected Pashtun clergy in April 1996. The Taliban were overwhelmingly Pashtun and were accused of not sharing power with the approximately 60% of Afghans who belonged to other ethnic groups. The Taliban's hosting of Osama bin Laden led to an American-organized attack which drove them from power following the 9/11 attacks. The Taliban remain very much alive, fighting a vigorous insurgency with suicide bombings and armed attacks launched against NATO and Afghan government targets. An Islamist movement influenced by Salafism and the jihad in Afghanistan, as well as the Muslim Brotherhood, was the FIS, or Front Islamique de Salut (the Islamic Salvation Front), in Algeria. Founded as a broad Islamist coalition in 1989, it was led by Abbassi Madani and Ali Belhadj, a charismatic young Islamist preacher. Taking advantage of economic failure and unpopular social liberalization and secularization by the ruling leftist-nationalist FLN government, it used its preaching to advocate the establishment of a legal system following Sharia law, an economic liberalization and development program, education in Arabic rather than French, and gender segregation, with women staying home to alleviate the high rate of unemployment among young Algerian men. The FIS won sweeping victories in local elections and was poised to win the 1991 national elections when voting was canceled by a military coup d'état.
As Islamists took up arms to overthrow the government, the FIS's leaders were arrested and it became overshadowed by Islamist guerrilla groups, particularly the Islamic Salvation Army (AIS), the MIA, and the Armed Islamic Group (GIA). A bloody and devastating civil war ensued in which between 150,000 and 200,000 people were killed over the next decade. The civil war was not a victory for Islamists. By 2002 the main guerrilla groups had either been destroyed or had surrendered. The popularity of Islamist parties has declined to the point that "the Islamist candidate, Abdallah Jaballah, came a distant third with 5% of the vote" in the 2004 presidential election. Jamaat-e-Islami Bangladesh is the largest Islamist party in the country; it supports the implementation of Sharia law and promotes the country's main right-wing politics. Since 2000, the main political opposition Bangladesh Nationalist Party (BNP) has been allied with it and another Islamic party, Islami Oikya Jote. Some of their leaders and supporters, including former ministers and MPs, have been hanged for alleged war crimes committed during Bangladesh's struggle for independence and for speaking against the ruling Bangladesh Awami League. In Belgium, the party named "Islam" fielded four candidates in 2012, and candidates were elected in Molenbeek and Anderlecht. In 2018, it ran candidates in 28 municipalities. Its policies include requiring schools to offer halal food and allowing women to wear a headscarf anywhere. Another of the Islam Party's goals is to separate men and women on public transportation. The party's president argues this policy will help protect women from sexual harassment. While Qutb's ideas became increasingly radical during his imprisonment prior to his execution in 1966, the leadership of the Brotherhood, led by Hasan al-Hudaybi, remained moderate and interested in political negotiation and activism. Fringe or splinter movements inspired by the final writings of Qutb in the mid-1960s (particularly the manifesto "Milestones", a.k.a. "Ma'alim fi-l-Tariq") did, however, develop and pursue a more radical direction. By the 1970s, the Brotherhood had renounced violence as a means of achieving its goals. The path of violence and military struggle was then taken up by the Egyptian Islamic Jihad organization responsible for the assassination of Anwar Sadat in 1981. Unlike earlier anti-colonial movements, the extremist group directed its attacks against what it believed were "apostate" leaders of Muslim states, leaders who held secular leanings or who had introduced or promoted Western/foreign ideas and practices into Islamic societies. Its views were outlined in a pamphlet written by Muhammad Abd al-Salaam Farag, in which he states: "...there is no doubt that the first battlefield for jihad is the extermination of these infidel leaders and to replace them by a complete Islamic Order..." Another of the Egyptian groups which employed violence in their struggle for Islamic order was al-Gama'a al-Islamiyya (Islamic Group). Victims of their campaign against the Egyptian state in the 1990s included the head of the counter-terrorism police (Major General Raouf Khayrat), a parliamentary speaker (Rifaat al-Mahgoub), dozens of European tourists and Egyptian bystanders, and over 100 Egyptian police. Ultimately the campaign to overthrow the government was unsuccessful, and the group renounced violence in 2003.
Other lesser-known groups include the Islamic Liberation Party, Salvation from Hell, and Takfir wal-Hijra; these groups have variously been involved in activities such as attempted assassinations of political figures, arson of video shops, and attempted takeovers of government buildings. In France, the Democratic Union of Muslims, a party founded in 2012, planned to take part in the 2019 municipal elections, presenting candidate lists in 50 different cities. It also fielded candidates for European Parliament elections. The rise of the party can be attributed to French Muslim dissatisfaction with mainstream political parties; ultimately, it presents itself as an alternative for Muslims confronting Islamophobia in France. Hamas is a Palestinian Sunni Islamist organization that governs the Gaza Strip, where it has moved to establish sharia law in matters such as separation of the genders, using the lash for punishment, and Islamic dress code.
https://en.wikipedia.org/wiki?curid=15012
Instructional theory An instructional theory is "a theory that offers explicit guidance on how to better help people learn and develop." It provides insights about what is likely to happen and why with respect to different kinds of teaching and learning activities, while helping indicate approaches for their evaluation. Instructional designers focus on how to best structure material and instructional behavior to facilitate learning. Originating in the United States in the late 1970s, "instructional theory" is influenced by three basic theories in educational thought: behaviorism, the theory that helps us understand how people conform to predetermined standards; cognitivism, the theory that learning occurs through mental associations; and constructivism, the theory that explores the value of human activity as a critical function of gaining knowledge. Instructional theory is heavily influenced by the 1956 work of Benjamin Bloom, a University of Chicago professor, and the results of his Taxonomy of Educational Objectives—one of the first modern codifications of the learning process. One of the first instructional theorists was Robert M. Gagne, who in 1965 published "Conditions of Learning" for the Florida State University's Department of Educational Research. Instructional theory is different from learning theory. A learning theory "describes" how learning takes place, and an instructional theory "prescribes" how to better help people learn. Learning theories often inform instructional theory, and three general theoretical stances take part in this influence: behaviorism (learning as response acquisition), cognitivism (learning as knowledge acquisition), and constructivism (learning as knowledge construction). Instructional theory helps us create conditions that increase the probability of learning. Its goal is to understand the instructional system and to improve the process of instruction. Instructional theories identify what instruction or teaching should be like and outline strategies that an educator may adopt to achieve the learning objectives. Instructional theories are adapted based on the educational content and, more importantly, the learning style of the students. They are used as teaching guidelines/tools by teachers/trainers to facilitate learning. Instructional theories encompass different instructional methods, models, and strategies. David Merrill's First Principles of Instruction discusses universal methods of instruction, situational methods (approaches that vary with the desired learning outcomes), and core ideas of the post-industrial paradigm of instruction. Paulo Freire's work appears to critique instructional approaches that adhere to the knowledge acquisition stance, and his work "Pedagogy of the Oppressed" has had a broad influence over a generation of American educators with his critique of various "banking" models of education and analysis of the teacher-student relationship. Freire explains, "Narration (with the teacher as narrator) leads the students to memorize mechanically the narrated content. Worse yet, it turns them into "containers", into "receptacles" to be "filled" by the teacher. The more completely she fills the receptacles, the better a teacher she is. The more meekly the receptacles permit themselves to be filled, the better students they are."
In this way, he explains, the educator performs an act of depositing knowledge in the student, and the student becomes a mere repository of knowledge. Freire argues that this system diminishes creativity and that knowledge suffers as a result. Knowledge, according to Freire, comes about only through the learner's own inquiry, through pursuing subjects in the world, and through interpersonal interaction. Freire further states, "In the banking concept of education, knowledge is a gift bestowed by those who consider themselves knowledgeable upon those whom they consider to know nothing. Projecting an absolute ignorance onto others, a characteristic of the ideology of oppression, negates education and knowledge as processes of inquiry. The teacher presents himself to his students as their necessary opposite; by considering their ignorance absolute, he justifies his own existence. The students, alienated like the slave in the Hegelian dialectic, accept their ignorance as justifying the teacher's existence—but, unlike the slave, they never discover that they educate the teacher." Freire then offered an alternative stance and wrote, "The raison d'etre of libertarian education, on the other hand, lies in its drive towards reconciliation. Education must begin with the solution of the teacher-student contradiction, by reconciling the poles of the contradiction so that both are simultaneously teachers and students." In the article "A process for the critical analysis of instructional theory", the authors use an ontology-building process to review and analyze concepts across different instructional theories.
https://en.wikipedia.org/wiki?curid=15014
ISO/IEC 8859-1 ISO/IEC 8859-1:1998, "Information technology — 8-bit single-byte coded graphic character sets — Part 1: Latin alphabet No. 1", is part of the ISO/IEC 8859 series of ASCII-based standard character encodings, first edition published in 1987. ISO 8859-1 encodes what it refers to as "Latin alphabet no. 1", consisting of 191 characters from the Latin script. This character-encoding scheme is used throughout the Americas, Western Europe, Oceania, and much of Africa. It is also commonly used in most standard romanizations of East-Asian languages. It is the basis for most popular 8-bit character sets and the first block of characters in Unicode. ISO-8859-1 was (according to the standards at least) the default encoding of documents delivered via HTTP with a MIME type beginning with "text/" (HTML5 changed this to Windows-1252). 2.1% of all (and 1.1% of the top-1000) Web sites claim to use ISO-8859-1. However, this includes an unknown number of pages actually using Windows-1252 and/or UTF-8, both of which are commonly recognized by browsers despite the character set tag. It is the default encoding of the values of certain descriptive HTTP headers, and defines the repertoire of characters allowed in HTML 3.2 documents (HTML 4.0 uses Unicode, "i.e.", UTF-8), and is specified by many other standards. This and similar sets are often assumed to be the encoding of 8-bit text on Unix and Microsoft Windows if there is no byte order mark (BOM); this is only gradually being changed to UTF-8. ISO-8859-1 is the IANA preferred name for this standard when supplemented with the C0 and C1 control codes from ISO/IEC 6429. The following other aliases are registered: iso-ir-100, csISOLatin1, latin1, l1, IBM819. Code page 28591 a.k.a. Windows-28591 is used for it in Windows. IBM calls it code page 819 or CP819 (CCSID 819). Oracle calls it WE8ISO8859P1. Each character is encoded as a single eight-bit code value. These code values can be used in almost any data interchange system to communicate in most Western European languages, among others. ISO-8859-1 was also commonly used for certain languages even though it lacks characters used by those languages. In most cases, only a few letters are missing or they are rarely used, and they can be replaced with characters that are in ISO-8859-1 using some form of typographic approximation. The letter "ÿ", which appears in French only very rarely, mainly in city names such as L'Haÿ-les-Roses and never at the beginning of words, is included only in lowercase form. The slot corresponding to its uppercase form is occupied by the lowercase letter "ß" from the German language, which did not have an uppercase form at the time when the standard was created. For some of these languages, the correct typographical quotation marks are missing, as only the guillemets («, ») and the neutral ASCII quotation mark and apostrophe are included. Also, this scheme does not provide for oriented (6- or 9-shaped) single or double quotation marks. Some fonts will display the spacing grave accent (0x60) and the apostrophe (0x27) as a matching pair of oriented single quotation marks, but this is not considered part of the modern standard. ISO 8859-1 was based on the Multinational Character Set used by Digital Equipment Corporation (DEC) in the popular VT220 terminal in 1983. It was developed within the European Computer Manufacturers Association (ECMA), and published in March 1985 as ECMA-94, by which name it is still sometimes known.
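As an illustration of the single-byte property described above, here is a minimal Python sketch using the standard library's "latin-1" codec, Python's name for the IANA ISO-8859-1 mapping; the sample words are arbitrary.

```python
# A minimal sketch using Python's built-in "latin-1" codec (the IANA
# ISO-8859-1 mapping); the sample words are arbitrary.
data = "café".encode("latin-1")
print(data, len(data))            # b'caf\xe9' 4 -- one byte per character

# Characters outside the 191-character repertoire cannot be encoded:
try:
    "Dvořák".encode("latin-1")    # 'ř' (U+0159) is not in Latin-1
except UnicodeEncodeError as err:
    print(err)

# Typographic approximation, as described above:
print("Dvořák".replace("ř", "r").encode("latin-1"))   # b'Dvor\xe1k'
```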
The second edition of ECMA-94 (June 1986) also included ISO 8859-2, ISO 8859-3, and ISO 8859-4 as part of the specification. The original draft of ISO 8859-1 placed French "Œ" and "œ" at code points 215 (0xD7) and 247 (0xF7), as in ECMA-94. However, the delegate from France, being neither a linguist nor a typographer, falsely stated that these are not independent French letters on their own, but mere ligatures (like "fi" or "fl"), supported by the delegate team from the company Bull, which at the time regularly did not print French with "Œ/œ" in its house style. An anglophone delegate from Canada insisted on retaining "Œ/œ" but was rebuffed by the French delegate and the team from Bull. These code points were soon filled with × and ÷ at the suggestion of the German delegation. Matters grew even worse for the French language when it was again falsely stated that the letter "ÿ" is "not French", resulting in the absence of the capital "Ÿ". In fact, the letter "ÿ" is found in a number of French proper names, and the capital letter has been used in dictionaries and encyclopedias. These characters were added to ISO/IEC 8859-15:1999. BraSCII matches the original draft. In 1985, Commodore adopted ECMA-94 for its new AmigaOS operating system. The Seikosha MP-1300AI impact dot-matrix printer, used with the Amiga 1000, included this encoding. In 1990, the very first version of Unicode used the code points of ISO-8859-1 as the first 256 Unicode code points. In 1992, the IANA registered the character map ISO_8859-1:1987, more commonly known by its preferred MIME name of ISO-8859-1 (note the extra hyphen over ISO 8859-1), a superset of ISO 8859-1, for use on the Internet. This map assigns the C0 and C1 control codes to the unassigned code values, thus providing for 256 characters via every possible 8-bit value. ISO/IEC 8859-15 was developed in 1999 as an update of ISO/IEC 8859-1. It provides some characters for French and Finnish text and the euro sign, which are missing from ISO/IEC 8859-1. This required the removal of some infrequently used characters from ISO/IEC 8859-1, including fraction symbols and letter-free diacritics: ¤, ¦, ¨, ´, ¸, ¼, ½, and ¾. Ironically, three of the newly added characters (Œ, œ, and Ÿ) had already been present in DEC's 1983 Multinational Character Set (MCS), the predecessor to ISO/IEC 8859-1 (1987). Since their original code points were now reused for other purposes, the characters had to be reintroduced under different, less logical code points. ISO-IR-204, a more minor modification, had been registered in 1998, altering ISO-8859-1 by replacing the universal currency sign (¤) with the euro sign (the same substitution made by ISO-8859-15). The popular Windows-1252 character set adds all the missing characters provided by ISO/IEC 8859-15, plus a number of typographic symbols, by replacing the rarely used C1 controls in the range 128 to 159 (hex 80 to 9F). It is very common to mislabel Windows-1252 text as being in ISO-8859-1. A common result was that all the quotes and apostrophes (produced by "smart quotes" in word-processing software) were replaced with question marks or boxes on non-Windows operating systems, making text difficult to read. Many web browsers and e-mail clients will interpret ISO-8859-1 control codes as Windows-1252 characters, and that behavior was later standardized in HTML5.
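The mislabelling problem can be seen directly. The following illustrative Python sketch decodes the same byte string under both mappings, using the standard library's "cp1252" and "iso8859-1" codecs.

```python
# Decoding the same bytes as Windows-1252 and as IANA ISO-8859-1
# (illustrative; 0x93/0x94 are "smart quotes" in Windows-1252 but C1
# control codes in ISO-8859-1).
raw = b"\x93smart quotes\x94"

print(raw.decode("cp1252"))            # -> “smart quotes”
print(repr(raw.decode("iso8859-1")))   # -> '\x93smart quotes\x94' (invisible C1 controls)

# Python's latin-1 codec maps every byte to the Unicode code point of the
# same value -- which is also why Unicode's first 256 code points match:
assert bytes(range(256)).decode("latin-1") == "".join(chr(i) for i in range(256))
```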
The Apple Macintosh computer introduced a character encoding called Mac Roman in 1984. It was meant to be suitable for Western European desktop publishing. It is a superset of ASCII, and has most of the characters that are in ISO-8859-1 and all the extra characters from Windows-1252 but in a totally different arrangement. The few printable characters that are in ISO 8859-1, but not in this set, are often a source of trouble when editing text on Web sites using older Macintosh browsers, including the last version of Internet Explorer for Mac. DOS had code page 850, which had all printable characters that ISO-8859-1 had (albeit in a totally different arrangement) plus the most widely used graphic characters from code page 437. Between 1989 and 2015, Hewlett-Packard used another superset of ISO-8859-1 on many of their calculators. This proprietary character set was sometimes referred to simply as "ECMA-94" as well.
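For a concrete view of these differing arrangements, a small Python sketch (the choice of the letter "é" is arbitrary) shows the same character at three different byte values.

```python
# The same letter sits at different byte values in each family of
# encodings -- a sketch of the "totally different arrangement" noted above.
for codec in ("latin-1", "mac-roman", "cp850"):
    print(f"{codec:10} é -> 0x{'é'.encode(codec)[0]:02x}")
# latin-1    é -> 0xe9
# mac-roman  é -> 0x8e
# cp850      é -> 0x82
```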
https://en.wikipedia.org/wiki?curid=15019
ISO/IEC 8859 ISO/IEC 8859 is a joint ISO and IEC series of standards for 8-bit character encodings. The series of standards consists of numbered parts, such as ISO/IEC 8859-1, ISO/IEC 8859-2, etc. There are 15 parts, excluding the abandoned ISO/IEC 8859-12. The ISO working group maintaining this series of standards has been disbanded. ISO/IEC 8859 parts 1, 2, 3, and 4 were originally Ecma International standard ECMA-94. While the bit patterns of the 95 printable ASCII characters are sufficient to exchange information in modern English, most other languages that use Latin alphabets need additional symbols not covered by ASCII. ISO/IEC 8859 sought to remedy this problem by utilizing the eighth bit in an 8-bit byte to allow positions for another 96 printable characters. Early encodings were limited to 7 bits because of restrictions of some data transmission protocols, and partially for historical reasons. However, more characters were needed than could fit in a single 8-bit character encoding, so several mappings were developed, including at least ten suitable for various Latin alphabets. The ISO/IEC 8859-"n" encodings only contain printable characters, and were designed to be used in conjunction with control characters mapped to the unassigned bytes. To this end a series of encodings registered with the IANA add the C0 control set (control characters mapped to bytes 0 to 31) from ISO 646 and the C1 control set (control characters mapped to bytes 128 to 159) from ISO 6429, resulting in full 8-bit character maps with most, if not all, bytes assigned. These sets have ISO-8859-"n" as their preferred MIME name or, in cases where a preferred MIME name is not specified, their canonical name. Many people use the terms ISO/IEC 8859-"n" and ISO-8859-"n" interchangeably. ISO/IEC 8859-11 did not get such a charset assigned, presumably because it was almost identical to TIS 620. The ISO/IEC 8859 standard is designed for reliable information exchange, not typography; the standard omits symbols needed for high-quality typography, such as optional ligatures, curly quotation marks, dashes, etc. As a result, high-quality typesetting systems often use proprietary or idiosyncratic extensions on top of the ASCII and ISO/IEC 8859 standards, or use Unicode instead. As a rule of thumb, if a character or symbol was not already part of a widely used data-processing character set and was also not usually provided on typewriter keyboards for a national language, it did not get in. Hence the directional double quotation marks "«" and "»" used for some European languages were included, but not the directional double quotation marks "“" and "”" used for English and some other languages. French did not get its "œ" and "Œ" ligatures because they could be typed as 'oe'. Likewise, "Ÿ", needed for all-caps text, was dropped as well. Albeit under different codepoints, these three characters were later reintroduced with ISO/IEC 8859-15 in 1999, which also introduced the new euro sign character €. Likewise Dutch did not get the "ij" and "IJ" letters, because Dutch speakers had become used to typing these as two letters instead. Romanian did not initially get its "Ș"/"ș" and "Ț"/"ț" (with comma) letters, because these letters were initially unified with "Ş"/"ş" and "Ţ"/"ţ" (with cedilla) by the Unicode Consortium, considering the shapes with comma beneath to be glyph variants of the shapes with cedilla. However, the letters with explicit comma below were later added to the Unicode standard and are also in ISO/IEC 8859-16. 
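A short, illustrative Python check of the repertoire differences discussed above, using the standard library's ISO 8859 codecs:

```python
# Repertoire check (illustrative): "Œ", "œ", "Ÿ" and the euro sign are in
# ISO/IEC 8859-15 but absent from part 1.
for ch in "ŒœŸ€":
    byte_15 = "0x" + ch.encode("iso8859-15").hex()
    try:
        ch.encode("iso8859-1")
        in_part1 = "yes"
    except UnicodeEncodeError:
        in_part1 = "no"
    print(f"{ch}: 8859-15 byte {byte_15}, in 8859-1? {in_part1}")
```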
Most of the ISO/IEC 8859 encodings provide diacritic marks required for various European languages using the Latin script. Others provide non-Latin alphabets: Greek, Cyrillic, Hebrew, Arabic and Thai. Most of the encodings contain only spacing characters, although the Thai, Hebrew, and Arabic ones do also contain combining characters. The standard makes no provision for the scripts of East Asian languages ("CJK"), as their ideographic writing systems require many thousands of code points. Although it uses Latin-based characters, Vietnamese does not fit into 96 positions (without using combining diacritics such as in Windows-1258) either. Each Japanese syllabic alphabet (hiragana or katakana, see Kana) would fit, as in JIS X 0201, but like several other alphabets of the world they are not encoded in the ISO/IEC 8859 system. ISO/IEC 8859 is divided into numbered parts. Each part of ISO/IEC 8859 is designed to support languages that often borrow from each other, so the characters needed by each language are usually accommodated by a single part. However, there are some characters and language combinations that are not accommodated without transcriptions. Efforts were made to make conversions as smooth as possible. For example, German has all of its seven special characters at the same positions in all Latin variants (1–4, 9, 10, 13–16), and in many positions the characters only differ in the diacritics between the sets. In particular, variants 1–4 were designed jointly, and have the property that every encoded character appears either at a given position or not at all. Position 0xA0 always holds the non-breaking space, and 0xAD is usually the soft hyphen, which only shows at line breaks. Other empty positions are either unassigned, or the system in use is not able to display them. Some parts gained additional characters in later revisions; for example, the 1999 revision of the Hebrew part added LRM, the left-to-right mark (U+200E), and RLM, the right-to-left mark (U+200F). Since 1991, the Unicode Consortium has been working with ISO and IEC to develop the Unicode Standard and ISO/IEC 10646, the Universal Character Set (UCS), in tandem. Newer editions of ISO/IEC 8859 express characters in terms of their Unicode/UCS names and the "U+nnnn" notation, effectively causing each part of ISO/IEC 8859 to be a Unicode/UCS character encoding scheme that maps a very small subset of the UCS to single 8-bit bytes. The first 256 characters in Unicode and the UCS are identical to those in ISO/IEC-8859-1 (Latin-1). Single-byte character sets including the parts of ISO/IEC 8859 and derivatives of them were favoured throughout the 1990s, having the advantages of being well-established and more easily implemented in software: the equation of one byte to one character is simple and adequate for most single-language applications, and there are no combining characters or variant forms. As Unicode-enabled operating systems became more widespread, ISO/IEC 8859 and other legacy encodings became less popular. While remnants of ISO 8859 and single-byte character models remain entrenched in many operating systems, programming languages, data storage systems, networking applications, display hardware, and end-user application software, most modern computing applications use Unicode internally, and rely on conversion tables to map to and from other encodings, when necessary. The ISO/IEC 8859 standard was maintained by ISO/IEC Joint Technical Committee 1, Subcommittee 2, Working Group 3 (ISO/IEC JTC 1/SC 2/WG 3).
In June 2004, WG 3 disbanded, and maintenance duties were transferred to SC 2. The standard is not currently being updated, as the Subcommittee's only remaining working group, WG 2, is concentrating on development of Unicode's Universal Coded Character Set.
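As a closing illustration of the cross-part consistency noted above, a small Python sketch shows the seven German special characters occupying identical byte positions in several Latin parts.

```python
# A quick cross-part consistency check (see the German example above):
# the seven German special characters occupy identical byte positions in
# several Latin parts of ISO/IEC 8859.
for codec in ("iso8859-1", "iso8859-2", "iso8859-9", "iso8859-15"):
    print(codec, "ÄÖÜäöüß".encode(codec).hex(" "))
# every line prints: c4 d6 dc e4 f6 fc df
```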
https://en.wikipedia.org/wiki?curid=15020
Infrared Infrared (IR), sometimes called infrared light, is electromagnetic radiation (EMR) with wavelengths longer than those of visible light. It is therefore generally invisible to the human eye, although IR at wavelengths up to 1050 nanometers (nm) from specially pulsed lasers can be seen by humans under certain conditions. IR wavelengths extend from the nominal red edge of the visible spectrum at 700 nanometers (frequency 430 THz) to 1 millimeter (300 GHz). Most of the thermal radiation emitted by objects near room temperature is infrared. As with all EMR, IR carries radiant energy and behaves both like a wave and like its quantum particle, the photon. Infrared radiation was discovered in 1800 by astronomer Sir William Herschel, who discovered a type of invisible radiation in the spectrum lower in energy than red light, by means of its effect on a thermometer. Slightly more than half of the total energy from the Sun was eventually found to arrive on Earth in the form of infrared. The balance between absorbed and emitted infrared radiation has a critical effect on Earth's climate. Infrared radiation is emitted or absorbed by molecules when they change their rotational-vibrational movements. It excites vibrational modes in a molecule through a change in the dipole moment, making it a useful frequency range for study of these energy states for molecules of the proper symmetry. Infrared spectroscopy examines absorption and transmission of photons in the infrared range. Infrared radiation is used in industrial, scientific, military, law enforcement, and medical applications. Night-vision devices using active near-infrared illumination allow people or animals to be observed without the observer being detected. Infrared astronomy uses sensor-equipped telescopes to penetrate dusty regions of space such as molecular clouds, detect objects such as planets, and to view highly red-shifted objects from the early days of the universe. Infrared thermal-imaging cameras are used to detect heat loss in insulated systems, to observe changing blood flow in the skin, and to detect overheating of electrical apparatus. Extensive uses for military and civilian applications include target acquisition, surveillance, night vision, homing, and tracking. Humans at normal body temperature radiate chiefly at wavelengths around 10 μm (micrometers). Non-military uses include thermal efficiency analysis, environmental monitoring, industrial facility inspections, detection of grow-ops, remote temperature sensing, short-range wireless communication, spectroscopy, and weather forecasting. Infrared radiation extends from the nominal red edge of the visible spectrum at 700 nanometers (nm) to 1 millimeter (mm). This range of wavelengths corresponds to a frequency range of approximately 430 THz down to 300 GHz. Below infrared is the microwave portion of the electromagnetic spectrum. Sunlight, at an effective temperature of 5780 kelvins (5510 °C, 9940 °F), is composed of near-thermal-spectrum radiation that is slightly more than half infrared. At zenith, sunlight provides an irradiance of just over 1 kilowatt per square meter at sea level. Of this energy, 527 watts is infrared radiation, 445 watts is visible light, and 32 watts is ultraviolet radiation. Nearly all the infrared radiation in sunlight is near infrared, shorter than 4 micrometers. On the surface of Earth, at far lower temperatures than the surface of the Sun, some thermal radiation consists of infrared in the mid-infrared region, much longer than in sunlight.
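The band-edge figures quoted above follow from the relation between frequency and wavelength, f = c/λ. A minimal Python sketch:

```python
# Frequency corresponding to a given wavelength: f = c / wavelength.
c = 299_792_458                      # speed of light in vacuum, m/s

def frequency_hz(wavelength_m):
    return c / wavelength_m

print(f"{frequency_hz(700e-9) / 1e12:.0f} THz")  # 700 nm -> ~428 THz (red edge)
print(f"{frequency_hz(1e-3) / 1e9:.0f} GHz")     # 1 mm   -> ~300 GHz (far edge)
print(f"{frequency_hz(10e-6) / 1e12:.0f} THz")   # 10 um  -> ~30 THz (human thermal radiation)
```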
Black-body, or thermal, radiation is continuous, however: it gives off radiation at all wavelengths. Of these natural thermal radiation processes, only lightning and natural fires are hot enough to produce much visible energy, and fires produce far more infrared than visible-light energy. In general, objects emit infrared radiation across a spectrum of wavelengths, but sometimes only a limited region of the spectrum is of interest because sensors usually collect radiation only within a specific bandwidth. Thermal infrared radiation also has a maximum emission wavelength, which is inversely proportional to the absolute temperature of the object, in accordance with Wien's displacement law. Therefore, the infrared band is often subdivided into smaller sections. A commonly used sub-division scheme distinguishes near-infrared (NIR), short-wavelength infrared (SWIR), mid-wavelength infrared (MWIR), long-wavelength infrared (LWIR), and far infrared (FIR). NIR and SWIR are sometimes called "reflected infrared", whereas MWIR and LWIR are sometimes referred to as "thermal infrared". Due to the nature of the blackbody radiation curves, typical "hot" objects, such as exhaust pipes, often appear brighter in the MW compared to the same object viewed in the LW. The International Commission on Illumination (CIE) recommends its own division of infrared radiation into three bands (IR-A, IR-B, and IR-C), and ISO 20473 specifies a near/mid/far scheme. Astronomers typically divide the infrared spectrum into near-, mid-, and far-infrared regions; these divisions are not precise and can vary depending on the publication. The three regions are used for observation of different temperature ranges, and hence different environments in space. The most common photometric system used in astronomy allocates capital letters to different spectral regions according to filters used; I, J, H, and K cover the near-infrared wavelengths; L, M, N, and Q refer to the mid-infrared region. These letters are commonly understood in reference to atmospheric windows and appear, for instance, in the titles of many papers. A third scheme divides up the band based on the response of various detectors. In this scheme, near-infrared is the region closest in wavelength to the radiation detectable by the human eye, and mid- and far-infrared are progressively further from the visible spectrum. Other definitions follow different physical mechanisms (emission peaks, vs. bands, water absorption) and the newest follow technical reasons (the common silicon detectors are sensitive to about 1,050 nm, while InGaAs's sensitivity starts around 950 nm and ends between 1,700 and 2,600 nm, depending on the specific configuration). No international standards for these specifications are currently available. The onset of infrared is defined (according to different standards) at various values typically between 700 nm and 800 nm, but the boundary between visible and infrared light is not precisely defined. The human eye is markedly less sensitive to light above 700 nm wavelength, so longer wavelengths make insignificant contributions to scenes illuminated by common light sources. However, particularly intense near-IR light (e.g., from IR lasers, IR LED sources, or from bright daylight with the visible light removed by colored gels) can be detected up to approximately 780 nm, and will be perceived as red light. Intense light sources providing wavelengths as long as 1050 nm can be seen as a dull red glow, causing some difficulty in near-IR illumination of scenes in the dark (usually this practical problem is solved by indirect illumination).
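Wien's displacement law, mentioned above, locates the emission peak at peak wavelength = b/T. The following short Python sketch (temperatures chosen to match the examples in this article) shows why skin radiates chiefly in the LWIR band while hot exhaust is brightest in the MWIR band.

```python
# Peak emission wavelength from Wien's displacement law: lambda_max = b / T.
WIEN_B = 2.897771955e-3              # Wien's displacement constant, m*K

def peak_um(temperature_k):
    return WIEN_B / temperature_k * 1e6   # result in micrometres

for label, T in (("human skin", 310), ("exhaust pipe", 600), ("the Sun", 5780)):
    print(f"{label}: {peak_um(T):.2f} um")
# human skin: 9.35 um    (LWIR)
# exhaust pipe: 4.83 um  (MWIR)
# the Sun: 0.50 um       (visible)
```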
Leaves are particularly bright in the near IR, and if all visible light leaks from around an IR-filter are blocked, and the eye is given a moment to adjust to the extremely dim image coming through a visually opaque IR-passing photographic filter, it is possible to see the Wood effect that consists of IR-glowing foliage. In optical communications, the part of the infrared spectrum that is used is divided into seven bands based on the availability of light sources, transmitting/absorbing materials (fibers), and detectors. The C-band is the dominant band for long-distance telecommunication networks. The S and L bands are based on less well-established technology, and are not as widely deployed. Infrared radiation is popularly known as "heat radiation", but light and electromagnetic waves of any frequency will heat surfaces that absorb them. Infrared light from the Sun accounts for 49% of the heating of Earth, with the rest being caused by visible light that is absorbed then re-radiated at longer wavelengths. Visible light or ultraviolet-emitting lasers can char paper, and incandescently hot objects emit visible radiation. Objects at room temperature will emit radiation concentrated mostly in the 8 to 25 μm band, but this is not distinct from the emission of visible light by incandescent objects and ultraviolet by even hotter objects (see black body and Wien's displacement law). Heat is energy in transit that flows due to a temperature difference. Unlike heat transmitted by thermal conduction or thermal convection, thermal radiation can propagate through a vacuum. Thermal radiation is characterized by a particular spectrum of many wavelengths that are associated with emission from an object, due to the vibration of its molecules at a given temperature. Thermal radiation can be emitted from objects at any wavelength, and at very high temperatures such radiation is associated with spectra far above the infrared, extending into visible, ultraviolet, and even X-ray regions (e.g. the solar corona). Thus, the popular association of infrared radiation with thermal radiation is only a coincidence based on typical (comparatively low) temperatures often found near the surface of planet Earth. The concept of emissivity is important in understanding the infrared emissions of objects. This is a property of a surface that describes how its thermal emissions deviate from the ideal of a black body. To further explain, two objects at the same physical temperature may not show the same infrared image if they have differing emissivity. For example, for any pre-set emissivity value, objects with higher emissivity will appear hotter, and those with a lower emissivity will appear cooler (assuming, as is often the case, that the surrounding environment is cooler than the objects being viewed). When an object has less than perfect emissivity, it obtains properties of reflectivity and/or transparency, and so the temperature of the surrounding environment is partially reflected by and/or transmitted through the object. If the object were in a hotter environment, then a lower-emissivity object at the same temperature would likely appear to be hotter than a more emissive one. For that reason, incorrect selection of emissivity and not accounting for environmental temperatures will give inaccurate results when using infrared cameras and pyrometers. Infrared is used in night vision equipment when there is insufficient visible light to see.
Night vision devices operate through a process involving the conversion of ambient light photons into electrons that are then amplified by a chemical and electrical process and then converted back into visible light. Infrared light sources can be used to augment the available ambient light for conversion by night vision devices, increasing in-the-dark visibility without actually using a visible light source. The use of infrared light and night vision devices should not be confused with thermal imaging, which creates images based on differences in surface temperature by detecting infrared radiation (heat) that emanates from objects and their surrounding environment. Infrared radiation can be used to remotely determine the temperature of objects (if the emissivity is known). This is termed thermography, or in the case of very hot objects in the NIR or visible it is termed pyrometry. Thermography (thermal imaging) is mainly used in military and industrial applications but the technology is reaching the public market in the form of infrared cameras on cars due to greatly reduced production costs. Thermographic cameras detect radiation in the infrared range of the electromagnetic spectrum (roughly 900–14,000 nanometers or 0.9–14 μm) and produce images of that radiation. Since infrared radiation is emitted by all objects based on their temperatures, according to the black-body radiation law, thermography makes it possible to "see" one's environment with or without visible illumination. The amount of radiation emitted by an object increases with temperature, therefore thermography allows one to see variations in temperature (hence the name). A hyperspectral image is a "picture" containing continuous spectrum through a wide spectral range at each pixel. Hyperspectral imaging is gaining importance in the field of applied spectroscopy particularly with NIR, SWIR, MWIR, and LWIR spectral regions. Typical applications include biological, mineralogical, defence, and industrial measurements. Thermal infrared hyperspectral imaging can be similarly performed using a thermographic camera, with the fundamental difference that each pixel contains a full LWIR spectrum. Consequently, chemical identification of the object can be performed without a need for an external light source such as the Sun or the Moon. Such cameras are typically applied for geological measurements, outdoor surveillance and UAV applications. In infrared photography, infrared filters are used to capture the near-infrared spectrum. Digital cameras often use infrared blockers. Cheaper digital cameras and camera phones have less effective filters and can "see" intense near-infrared, appearing as a bright purple-white color. This is especially pronounced when taking pictures of subjects near IR-bright areas (such as near a lamp), where the resulting infrared interference can wash out the image. There is also a technique called 'T-ray' imaging, which is imaging using far-infrared or terahertz radiation. Lack of bright sources can make terahertz photography more challenging than most other infrared imaging techniques. Recently T-ray imaging has been of considerable interest due to a number of new developments such as terahertz time-domain spectroscopy. Infrared tracking, also known as infrared homing, refers to a passive missile guidance system, which uses the emission from a target of electromagnetic radiation in the infrared part of the spectrum to track it. 
Missiles that use infrared seeking are often referred to as "heat-seekers" since infrared (IR) is just below the visible spectrum of light in frequency and is radiated strongly by hot bodies. Many objects such as people, vehicle engines, and aircraft generate and retain heat, and as such are especially visible in the infrared wavelengths of light compared to objects in the background. Infrared radiation can be used as a deliberate heating source. For example, it is used in infrared saunas to heat the occupants. It may also be used in other heating applications, such as to remove ice from the wings of aircraft (de-icing). Infrared can be used in cooking and heating food as it predominantly heats the opaque, absorbent objects, rather than the air around them. Infrared heating is also becoming more popular in industrial manufacturing processes, e.g. curing of coatings, forming of plastics, annealing, plastic welding, and print drying. In these applications, infrared heaters replace convection ovens and contact heating. Efficiency is achieved by matching the wavelength of the infrared heater to the absorption characteristics of the material. A variety of technologies or proposed technologies take advantage of infrared emissions to cool buildings or other systems. The LWIR (8–15 μm) region is especially useful since some radiation at these wavelengths can escape into space through the atmosphere. IR data transmission is also employed in short-range communication among computer peripherals and personal digital assistants. These devices usually conform to standards published by IrDA, the Infrared Data Association. Remote controls and IrDA devices use infrared light-emitting diodes (LEDs) to emit infrared radiation that is focused by a plastic lens into a narrow beam. The beam is modulated, i.e. switched on and off, to prevent interference from other sources of infrared (like sunlight or artificial lighting). The receiver uses a silicon photodiode to convert the infrared radiation to an electric current. It responds only to the rapidly pulsing signal created by the transmitter, and filters out slowly changing infrared radiation from ambient light. Infrared communications are useful for indoor use in areas of high population density. IR does not penetrate walls and so does not interfere with other devices in adjoining rooms. Infrared is the most common way for remote controls to command appliances, and infrared remote control protocols like RC-5 and SIRC standardize the encoding of these commands. Free space optical communication using infrared lasers can be a relatively inexpensive way to install a communications link in an urban area operating at up to 4 gigabit/s, compared to the cost of burying fiber optic cable; a drawback is the potential for eye damage: "Since the eye cannot detect IR, blinking or closing the eyes to help prevent or reduce damage may not happen." Infrared lasers are used to provide the light for optical fiber communications systems. Infrared light with a wavelength around 1,330 nm (least dispersion) or 1,550 nm (best transmission) is the best choice for standard silica fibers. IR data transmission of encoded audio versions of printed signs is being researched as an aid for visually impaired people through the RIAS (Remote Infrared Audible Signage) project. Transmitting IR data from one device to another is sometimes referred to as beaming.
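The modulation scheme described above can be sketched in a few lines of Python. This is purely illustrative: the 36 kHz carrier is an assumed, typical value, and the receiver is reduced to a first-difference high-pass filter.

```python
import math

RATE = 1_000_000                         # samples per second (simulation only)
CARRIER_HZ = 36_000                      # assumed remote-control carrier frequency

def received(t):
    # slowly varying ambient infrared (sunlight, lamps) plus the pulsed beam
    ambient = 0.5 + 0.5 * math.sin(2 * math.pi * 1.0 * t)
    beam = 1.0 if math.sin(2 * math.pi * CARRIER_HZ * t) > 0 else 0.0
    return ambient + beam

samples = [received(i / RATE) for i in range(2000)]

# First-difference high-pass filter: slow ambient changes nearly cancel,
# while the fast carrier edges survive.
diffs = [b - a for a, b in zip(samples, samples[1:])]
print(f"largest ambient-only change per sample: {math.pi / RATE:.2e}")
print(f"largest change seen by the filter:      {max(abs(d) for d in diffs):.2f}")  # ~1.00
```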
Infrared vibrational spectroscopy (see also near-infrared spectroscopy) is a technique that can be used to identify molecules by analysis of their constituent bonds. Each chemical bond in a molecule vibrates at a frequency characteristic of that bond. A group of atoms in a molecule (e.g., CH2) may have multiple modes of oscillation caused by the stretching and bending motions of the group as a whole. If an oscillation leads to a change in dipole in the molecule then it will absorb a photon that has the same frequency. The vibrational frequencies of most molecules correspond to the frequencies of infrared light. Typically, the technique is used to study organic compounds using infrared radiation from 4000 to 400 cm−1, the mid-infrared. A spectrum of all the frequencies of absorption in a sample is recorded. This can be used to gain information about the sample composition in terms of chemical groups present and also its purity (for example, a wet sample will show a broad O-H absorption around 3200 cm−1). The unit for expressing radiation in this application, cm−1, is the spectroscopic wavenumber. It is the frequency divided by the speed of light in vacuum. In the semiconductor industry, infrared light can be used to characterize materials such as thin films and periodic trench structures. By measuring the reflectance of light from the surface of a semiconductor wafer, the index of refraction (n) and the extinction coefficient (k) can be determined via the Forouhi-Bloomer dispersion equations. The reflectance from the infrared light can also be used to determine the critical dimension, depth, and sidewall angle of high aspect ratio trench structures. Weather satellites equipped with scanning radiometers produce thermal or infrared images, which can then enable a trained analyst to determine cloud heights and types, to calculate land and surface water temperatures, and to locate ocean surface features. The scanning is typically in the range 10.3–12.5 μm (IR4 and IR5 channels). High, cold ice clouds such as cirrus or cumulonimbus show up bright white, lower warmer clouds such as stratus or stratocumulus show up as grey, with intermediate clouds shaded accordingly. Hot land surfaces will show up as dark-grey or black. One disadvantage of infrared imagery is that low cloud such as stratus or fog can have a temperature similar to the surrounding land or sea surface and does not show up. However, using the difference in brightness of the IR4 channel (10.3–11.5 μm) and the near-infrared channel (1.58–1.64 μm), low cloud can be distinguished, producing a "fog" satellite picture. The main advantage of infrared is that images can be produced at night, allowing a continuous sequence of weather to be studied. These infrared pictures can depict ocean eddies or vortices and map currents such as the Gulf Stream, which are valuable to the shipping industry. Fishermen and farmers are interested in knowing land and water temperatures to protect their crops against frost or increase their catch from the sea. Even El Niño phenomena can be spotted. Using color-digitized techniques, the gray-shaded thermal images can be converted to color for easier identification of desired information. The main water vapour channel at 6.40 to 7.08 μm can be imaged by some weather satellites and shows the amount of moisture in the atmosphere. In the field of climatology, atmospheric infrared radiation is monitored to detect trends in the energy exchange between the earth and the atmosphere.
These trends provide information on long-term changes in Earth's climate. It is one of the primary parameters studied in research into global warming, together with solar radiation. A pyrgeometer is utilized in this field of research to perform continuous outdoor measurements. This is a broadband infrared radiometer with sensitivity for infrared radiation between approximately 4.5 μm and 50 μm. Astronomers observe objects in the infrared portion of the electromagnetic spectrum using optical components, including mirrors, lenses and solid state digital detectors. For this reason it is classified as part of optical astronomy. To form an image, the components of an infrared telescope need to be carefully shielded from heat sources, and the detectors are chilled using liquid helium. The sensitivity of Earth-based infrared telescopes is significantly limited by water vapor in the atmosphere, which absorbs a portion of the infrared radiation arriving from space outside of selected atmospheric windows. This limitation can be partially alleviated by placing the telescope observatory at a high altitude, or by carrying the telescope aloft with a balloon or an aircraft. Space telescopes do not suffer from this handicap, and so outer space is considered the ideal location for infrared astronomy. The infrared portion of the spectrum has several useful benefits for astronomers. Cold, dark molecular clouds of gas and dust in our galaxy will glow with radiated heat as they are irradiated by embedded stars. Infrared can also be used to detect protostars before they begin to emit visible light. Stars emit a smaller portion of their energy in the infrared spectrum, so nearby cool objects such as planets can be more readily detected. (In the visible light spectrum, the glare from the star will drown out the reflected light from a planet.) Infrared light is also useful for observing the cores of active galaxies, which are often cloaked in gas and dust. Distant galaxies with a high redshift will have the peak portion of their spectrum shifted toward longer wavelengths, so they are more readily observed in the infrared. Infrared cleaning is a technique used by some motion picture film scanners, film scanners and flatbed scanners to reduce or remove the effect of dust and scratches upon the finished scan. It works by collecting an additional infrared channel from the scan at the same position and resolution as the three visible color channels (red, green, and blue). The infrared channel, in combination with the other channels, is used to detect the location of scratches and dust. Once located, those defects can be corrected by scaling or replaced by inpainting. Infrared reflectography can be applied to paintings to reveal underlying layers in a non-destructive manner, in particular the artist's underdrawing or outline drawn as a guide. Art conservators use the technique to examine how the visible layers of paint differ from the underdrawing or layers in between (such alterations are called pentimenti when made by the original artist). This is very useful information in deciding whether a painting is the prime version by the original artist or a copy, and whether it has been altered by over-enthusiastic restoration work. In general, the more pentimenti, the more likely a painting is to be the prime version. It also gives useful insights into working practices.
Reflectography often reveals the artist's use of carbon black, which shows up well in reflectograms, as long as it has not also been used in the ground underlying the whole painting. Recent progress in the design of infrared-sensitive cameras makes it possible to discover and depict not only underpaintings and pentimenti, but entire paintings that were later overpainted by the artist. Notable examples are Picasso's "Woman Ironing" and "Blue Room", where in both cases a portrait of a man has been made visible under the painting as it is known today. Similar uses of infrared are made by conservators and scientists on various types of objects, especially very old written documents such as the Dead Sea Scrolls, the Roman works in the Villa of the Papyri, and the Silk Road texts found in the Dunhuang Caves. Carbon black used in ink can show up extremely well. The pit viper has a pair of infrared sensory pits on its head. There is uncertainty regarding the exact thermal sensitivity of this biological infrared detection system. Other organisms that have thermoreceptive organs are pythons (family Pythonidae), some boas (family Boidae), the Common Vampire Bat ("Desmodus rotundus"), a variety of jewel beetles ("Melanophila acuminata"), darkly pigmented butterflies ("Pachliopta aristolochiae" and "Troides rhadamantus plateni"), and possibly blood-sucking bugs ("Triatoma infestans"). Some fungi, like "Venturia inaequalis", require near-infrared light for spore ejection. Although near-infrared vision (780–1000 nm) has long been deemed impossible due to noise in visual pigments, sensation of near-infrared light was reported in the common carp and in three cichlid species. Fish use NIR to capture prey and for phototactic swimming orientation. NIR sensation in fish may be relevant under poor lighting conditions during twilight and in turbid surface waters. Near-infrared light, or photobiomodulation, is used for treatment of chemotherapy-induced oral ulceration as well as wound healing. There is some work relating to anti-herpes-virus treatment. Research projects include work on central nervous system healing effects via cytochrome c oxidase upregulation and other possible mechanisms. Strong infrared radiation in certain industrial high-heat settings may be hazardous to the eyes, resulting in damage or blindness to the user. Since the radiation is invisible, special IR-proof goggles must be worn in such places. The discovery of infrared radiation is ascribed to William Herschel, the astronomer, in the early 19th century. Herschel published his results in 1800 before the Royal Society of London. Herschel used a prism to refract light from the sun and detected the infrared, beyond the red part of the spectrum, through an increase in the temperature recorded on a thermometer. He was surprised at the result and called the new radiation "calorific rays". The term "infrared" did not appear until the late 19th century.
https://en.wikipedia.org/wiki?curid=15022
Icosidodecahedron In geometry, an icosidodecahedron is a polyhedron with twenty (icosi) triangular faces and twelve (dodeca) pentagonal faces. An icosidodecahedron has 30 identical vertices, with two triangles and two pentagons meeting at each, and 60 identical edges, each separating a triangle from a pentagon. As such it is one of the Archimedean solids and more particularly, a quasiregular polyhedron. An icosidodecahedron has icosahedral symmetry, and its first stellation is the compound of a dodecahedron and its dual icosahedron, with the vertices of the icosidodecahedron located at the midpoints of the edges of either. Its dual polyhedron is the rhombic triacontahedron. An icosidodecahedron can be split along any of six planes to form a pair of pentagonal rotundae, which belong among the Johnson solids. The icosidodecahedron can be considered a "pentagonal gyrobirotunda", as a combination of two rotundae (compare pentagonal orthobirotunda, one of the Johnson solids). In this form its symmetry is D5d, [10,2+], (2*5), order 20. The wire-frame figure of the icosidodecahedron consists of six flat regular decagons, meeting in pairs at each of the 30 vertices. The icosidodecahedron has 6 central decagons. Projected into a sphere, they define 6 great circles. Buckminster Fuller used these 6 great circles, along with 15 and 10 others in two other polyhedra, to define his 31 great circles of the spherical icosahedron. Convenient Cartesian coordinates for the vertices of an icosidodecahedron with unit edges are given by the even permutations of (0, 0, ±φ) and (±1/2, ±φ/2, ±φ²/2), where "φ" is the golden ratio, (1 + √5)/2. The icosidodecahedron has four special orthogonal projections, centered on a vertex, an edge, a triangular face, and a pentagonal face. The last two correspond to the A2 and H2 Coxeter planes. The surface area "A" and the volume "V" of the icosidodecahedron of edge length "a" are A = (5√3 + 3√(25 + 10√5)) a² ≈ 29.306 a² and V = ((45 + 17√5)/6) a³ ≈ 13.836 a³. The icosidodecahedron can also be represented as a spherical tiling, and projected onto the plane via a stereographic projection. This projection is conformal, preserving angles but not areas or lengths. Straight lines on the sphere are projected as circular arcs on the plane. The icosidodecahedron is a rectified dodecahedron and also a rectified icosahedron, existing as the full-edge truncation between these regular solids. The icosidodecahedron contains the 12 pentagons of the dodecahedron and the 20 triangles of the icosahedron. The icosidodecahedron exists in a sequence of symmetries of quasiregular polyhedra and tilings with vertex configurations (3."n")2, progressing from tilings of the sphere to the Euclidean plane and into the hyperbolic plane. With orbifold notation symmetry of *"n"32, all of these tilings are Wythoff constructions within a fundamental domain of symmetry, with generator points at the right-angle corner of the domain. The icosidodecahedron is related to the Johnson solid called a pentagonal orthobirotunda, created by two pentagonal rotundae connected as mirror images. The "icosidodecahedron" can therefore be called a "pentagonal gyrobirotunda", with a gyration between top and bottom halves. The truncated cube can be turned into an icosidodecahedron by dividing the octagons into two pentagons and two triangles. It has pyritohedral symmetry. Eight uniform star polyhedra share the same vertex arrangement. Of these, two also share the same edge arrangement: the small icosihemidodecahedron (having the triangular faces in common), and the small dodecahemidodecahedron (having the pentagonal faces in common).
The vertex arrangement is also shared with the compounds of five octahedra and of five tetrahemihexahedra. In four-dimensional geometry the icosidodecahedron appears in the regular 600-cell as the equatorial slice that belongs to the vertex-first passage of the 600-cell through 3D space. In other words: the 30 vertices of the 600-cell which lie at arc distances of 90 degrees on its circumscribed hypersphere from a pair of opposite vertices are the vertices of an icosidodecahedron. The wire-frame figure of the 600-cell consists of 72 flat regular decagons. Six of these are the equatorial decagons to a pair of opposite vertices. They are precisely the six decagons which form the wire-frame figure of the icosidodecahedron. In the mathematical field of graph theory, the icosidodecahedral graph is the graph of the vertices and edges of the icosidodecahedron, one of the Archimedean solids. It has 30 vertices and 60 edges, and is a quartic Archimedean graph. In the "Star Trek" universe, the Vulcan logic game Kal-Toh has the goal of creating a holographic icosidodecahedron. In "The Wrong Stars", book one of the Axiom series by Tim Pratt, Elena has an icosidodecahedron machine on either side of her. [Paperback p 336] The Hoberman sphere is an icosidodecahedron.
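The figures quoted above are easy to verify numerically. The following sketch (Python, standard library only; the helper name "cyclic" is illustrative) generates the 30 vertices as the even permutations of the coordinates given above, confirms that the shortest inter-vertex distance is the unit edge, and evaluates the closed forms for the surface area and volume:

    # Numerical check of the icosidodecahedron data quoted above.
    import itertools, math

    phi = (1 + math.sqrt(5)) / 2  # the golden ratio

    def cyclic(p):
        # The even permutations of a 3-tuple are exactly its cyclic rotations.
        return [(p[0], p[1], p[2]), (p[2], p[0], p[1]), (p[1], p[2], p[0])]

    verts = set()
    for z in (phi, -phi):                     # even permutations of (0, 0, +/-phi)
        verts.update(cyclic((0.0, 0.0, z)))
    for sx, sy, sz in itertools.product((1, -1), repeat=3):
        verts.update(cyclic((sx * 0.5, sy * phi / 2, sz * phi ** 2 / 2)))

    assert len(verts) == 30                   # 30 identical vertices, as stated
    dists = sorted(math.dist(u, v) for u, v in itertools.combinations(verts, 2))
    print(round(dists[0], 12))                # 1.0 -- the unit edge length

    a = 1.0
    A = (5 * math.sqrt(3) + 3 * math.sqrt(25 + 10 * math.sqrt(5))) * a ** 2
    V = (45 + 17 * math.sqrt(5)) / 6 * a ** 3
    print(round(A, 3), round(V, 3))           # approx. 29.306 and 13.836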
https://en.wikipedia.org/wiki?curid=15023
ISO 8601 ISO 8601 "Data elements and interchange formats – Information interchange – Representation of dates and times" is an international standard covering the exchange of date- and time-related data. It was issued by the International Organization for Standardization (ISO) and was first published in 1988. The purpose of this standard is to provide an unambiguous and well-defined method of representing dates and times, so as to avoid misinterpretation of numeric representations of dates and times, particularly when data is transferred between countries with different conventions for writing numeric dates and times. In general, ISO 8601 applies to representations and formats of dates in the Gregorian (and potentially proleptic Gregorian) calendar, of times based on the 24-hour timekeeping system (with optional UTC offset), of time intervals, and combinations thereof. The standard does not assign any specific meaning to elements of the date/time to be represented; the meaning will depend on the context of its use. In addition, dates and times to be represented cannot include words with no specified numerical meaning in the standard (e.g., names of years in the Chinese calendar) or that do not use characters (e.g., images, sounds). In representations for interchange, dates and times are arranged so the largest temporal term (the year) is placed to the left and each successively smaller term is placed to the right of the previous term. Representations must be written in a combination of Arabic numerals and certain characters (such as "-", ":", "T", "W", and "Z") that are given specific meanings within the standard; the implication is that some commonplace ways of writing parts of dates, such as "January" or "Thursday", are not allowed in interchange representations. The first edition of the ISO 8601 standard was published as "ISO 8601:1988" in 1988. It unified and replaced a number of older ISO standards on various aspects of date and time notation: ISO 2014, ISO 2015, ISO 2711, ISO 3307, and ISO 4031. It was superseded by a second edition "ISO 8601:2000" in 2000 and by a third edition "ISO 8601:2004" published on 1 December 2004, and was withdrawn and revised by "ISO 8601-1:2019" and "ISO 8601-2:2019" on 25 February 2019. ISO 8601 was prepared by, and is under the direct responsibility of, ISO Technical Committee TC 154. ISO 2014, though superseded, is the standard that originally introduced the all-numeric date notation in most-to-least-significant order (YYYY-MM-DD). The ISO week numbering system was introduced in ISO 2015, and the identification of days by ordinal dates was originally defined in ISO 2711. Issued in February 2019, the fourth revision of the standard, ISO 8601-1:2019, represents slightly updated contents of the previous ISO 8601:2004 standard, whereas the new ISO 8601-2:2019 defines various extensions such as uncertainties or parts of the Extended Date/Time Format (EDTF). The standard uses the Gregorian calendar, which "serves as an international standard for civil use." ISO 8601 fixes a reference calendar date to the Gregorian calendar of 20 May 1875 as the date the Convention du Mètre (Metre Convention) was signed in Paris. However, ISO calendar dates before the convention are still compatible with the Gregorian calendar all the way back to the official introduction of the Gregorian calendar on 15 October 1582. Earlier dates, in the proleptic Gregorian calendar, may be used by mutual agreement of the partners exchanging information. 
The standard states that every date must be consecutive, so usage of the Julian calendar would be contrary to the standard (because at the switchover date, the dates would not be consecutive). ISO 8601 prescribes, as a minimum, a four-digit year [YYYY] to avoid the year 2000 problem. It therefore represents years from 0000 to 9999, year 0000 being equal to 1 BC and all others AD. However, years prior to 1583 are not automatically allowed by the standard. Instead "values in the range [0000] through [1582] shall only be used by mutual agreement of the partners in information interchange." To represent years before 0000 or after 9999, the standard also permits the expansion of the year representation but only by prior agreement between the sender and the receiver. An expanded year representation [±YYYYY] must have an agreed-upon number of extra year digits beyond the four-digit minimum, and it must be prefixed with a + or − sign instead of the more common AD/BC (or CE/BCE) notation; by convention 1 BC is labelled +0000, 2 BC is labelled −0001, and so on. Calendar date representations are of the form [YYYY]-[MM]-[DD]. [YYYY] indicates a four-digit year, 0000 through 9999. [MM] indicates a two-digit month of the year, 01 through 12. [DD] indicates a two-digit day of that month, 01 through 31. For example, "5 April 1981" may be represented as either "1981-04-05" in the "extended format" or "19810405" in the "basic format". The standard also allows for calendar dates to be written with reduced accuracy. For example, one may write "1981-04" to mean "1981 April". The 2000 version allowed writing "--04-05" to mean "April 5" but the 2004 version does not allow omitting the year when a month is present. One may simply write "1981" to refer to that year or "19" to refer to the century from 1900 to 1999 inclusive. Although the standard allows both the YYYY-MM-DD and YYYYMMDD formats for complete calendar date representations, if the day [DD] is omitted then only the YYYY-MM format is allowed. By disallowing dates of the form YYYYMM, the standard avoids confusion with the truncated representation YYMMDD (still often used). Week date representations are of the form [YYYY]-[Www]-[D] in the extended format or [YYYY][Www][D] in the basic format. [YYYY] indicates the "ISO week-numbering year" which is slightly different from the traditional Gregorian calendar year (see below). [Www] is the "week number" prefixed by the letter "W", from W01 through W53. [D] is the "weekday number", from 1 through 7, beginning with Monday and ending with Sunday. There are several mutually equivalent and compatible descriptions of week 01: it is the week with the year's first Thursday in it, the week containing 4 January, the first week with a majority (four or more) of its days in January, and the week starting with the Monday in the period 29 December to 4 January. As a consequence, if 1 January is on a Monday, Tuesday, Wednesday or Thursday, it is in week 01. If 1 January is on a Friday, Saturday or Sunday, it is in week 52 or 53 of the previous year (there is no week 00). 28 December is always in the last week of its year. The week number can be described by counting the Thursdays: week 12 contains the 12th Thursday of the year. The "ISO week-numbering year" starts at the first day (Monday) of week 01 and ends at the Sunday before the new ISO year (hence without overlap or gap). It consists of 52 or 53 full weeks. The first ISO week of a year may have up to three days that are actually in the Gregorian calendar year that is ending; if three, they are Monday, Tuesday and Wednesday. Similarly, the last ISO week of a year may have up to three days that are actually in the Gregorian calendar year that is starting; if three, they are Friday, Saturday, and Sunday. 
The Thursday of each ISO week is always in the Gregorian calendar year denoted by the ISO week-numbering year. Examples: Monday 29 December 2008 is written "2009-W01-1", and Sunday 3 January 2010 is written "2009-W53-7". An ordinal date is a simple form for occasions when the arbitrary nature of week and month definitions is more of an impediment than an aid, for instance, when comparing dates from different calendars. Ordinal date representations are of the form [YYYY]-[DDD], where [YYYY] indicates a year and [DDD] is the day of that year, from 001 through 365 (366 in leap years). For example, 1981-04-05 is also 1981-095. This format is used with simple hardware systems that have a need for a date system, but where including full calendar calculation software may be a significant nuisance. This system is sometimes referred to as "Julian Date", but this can cause confusion with the astronomical Julian day, a sequential count of the number of days since day 0, beginning 1 January 4713 BC Greenwich noon, Julian proleptic calendar (or noon on ISO date −4713-11-24, which uses the Gregorian proleptic calendar with a year 0000). ISO 8601 uses the 24-hour clock system. The "basic format" is [hh][mm][ss] and the "extended format" is [hh]:[mm]:[ss]. So a time might appear as either "134730" in the "basic format" or "13:47:30" in the "extended format". Either the seconds, or the minutes and seconds, may be omitted from the basic or extended time formats for greater brevity but decreased accuracy; the resulting reduced accuracy time formats are [hh]:[mm] in the extended format or [hhmm] in the basic format, and [hh]. "Midnight" is a special case and may be referred to as either "00:00" or "24:00", except in ISO 8601-1:2019, where "24:00" is no longer permitted. The notation "00:00" is used at the beginning of a calendar day and is the more frequently used form; "24:00" denotes the end of a day. "2007-04-05T24:00" is the same instant as "2007-04-06T00:00" (see "Combined date and time representations" below). Decimal fractions may be added to any of the three time elements. However, a fraction may only be added to the lowest order time element in the representation. A decimal mark, either a comma or a dot (without any preference as stated in resolution 10 of the 22nd General Conference of the CGPM in 2003, but with a preference for a comma according to ISO 8601:2004), is used as a separator between the time element and its fraction. To denote "14 hours, 30 and one half minutes", a seconds figure is not included; the time is represented as "14:30,5", "1430,5", "14:30.5", or "1430.5". There is no limit on the number of decimal places for the decimal fraction. However, the number of decimal places needs to be agreed to by the communicating parties. For example, in Microsoft SQL Server, the precision of a decimal fraction is 3, i.e., "yyyy-mm-ddThh:mm:ss[.mmm]". Time zones in ISO 8601 are represented as local time (with the location unspecified), as UTC, or as an offset from UTC. If no UTC relation information is given with a time representation, the time is assumed to be in local time. While it "may" be safe to assume local time when communicating in the same time zone, it is ambiguous when used in communicating across different time zones. Even within a single geographic time zone, some local times will be ambiguous if the region observes daylight saving time. It is usually preferable to indicate a time zone (zone designator) using the standard's notation. If the time is in UTC, add a "Z" directly after the time without a space. "Z" is the zone designator for the zero UTC offset. "09:30 UTC" is therefore represented as "09:30Z" or "0930Z". "14:45:15 UTC" would be "14:45:15Z" or "144515Z". 
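Most standard libraries expose the calendar-date, ordinal-date and week-date forms described above directly. As a minimal illustration (Python's standard "datetime" module only; the dates are arbitrary examples), all three representations of the same day can be produced as follows:

    # Calendar, ordinal and week dates for one and the same day.
    from datetime import date

    d = date(1981, 4, 5)
    print(d.isoformat())                    # calendar date: 1981-04-05
    print(d.strftime("%Y-%j"))              # ordinal date:  1981-095
    y, w, wd = d.isocalendar()              # ISO week-numbering year, week, weekday
    print(f"{y}-W{w:02d}-{wd}")             # week date:     1981-W14-7

    # 28 December is always in the last ISO week of its year:
    print(date(2020, 12, 28).isocalendar()[:2])   # (2020, 53)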
The "Z" suffix in the ISO 8601 time representation is sometimes referred to as "Zulu time" because the same letter is used to designate the Zulu time zone. However the ACP 121 standard that defines the list of military time zones makes no mention of UTC and derives the "Zulu time" from the Greenwich Mean Time which was formerly used as the international civil time standard. GMT is no longer precisely defined by the scientific community and can refer to either UTC or UT1 depending on context. The UTC offset is appended to the time in the same way that 'Z' was above, in the form ±[hh]:[mm], ±[hh][mm], or ±[hh]. Negative UTC offsets describe a time zone west of , where the civil time is behind (or earlier) than UTC so the zone designator will look like "−03:00","−0300", or "−03". Positive UTC offsets describe a time zone east of , where the civil time is ahead (or later) than UTC so the zone designator will look like "+02:00","+0200", or "+02". Examples See List of UTC time offsets for other UTC offsets. To represent a negative offset, ISO 8601 specifies using either a hyphen–minus or a minus sign character. If the interchange character set is limited and does not have a minus sign character, then the hyphen–minus should be used. ASCII does not have a minus sign, so its hyphen–minus character (code is 45 decimal or 2D hexadecimal) would be used. If the character set has a minus sign, then that character should be used. Unicode has a minus sign, and its character code is U+2212 (2212 hexadecimal); the HTML character entity invocation is codice_1. The following times all refer to the same moment: "18:30Z", "22:30+04", "1130−0700", and "15:00−03:30". Nautical time zone letters are not used with the exception of Z. To calculate UTC time one has to subtract the offset from the local time, e.g. for "15:00−03:30" do 15:00 − (−03:30) to get 18:30 UTC. An offset of zero, in addition to having the special representation "Z", can also be stated numerically as "+00:00", "+0000", or "+00". However, it is not permitted to state it numerically with a negative sign, as "−00:00", "−0000", or "−00". The section dictating sign usage (section 3.4.2 in the 2004 edition of the standard) states that a plus sign must be used for a positive or zero value, and a minus sign for a negative value. Contrary to this rule, RFC 3339, which is otherwise a profile of ISO 8601, permits the use of "-00", with the same denotation as "+00" but a differing connotation. A single point in time can be represented by concatenating a complete date expression, the letter ""T"" as a delimiter, and a valid time expression. For example, . It is permitted to omit the ""T"" character by mutual agreement as in . Separating date and time parts with other characters such as space is not allowed in ISO 8601, but allowed in its profile RFC 3339. If a time zone designator is required, it follows the combined date and time. For example, or . Either basic or extended formats may be used, but both date and time must use the same format. The date expression may be calendar, week, or ordinal, and must use a complete representation. The time may be represented using a specified reduced accuracy format. Durations define the amount of intervening time in a time interval and are represented by the format P[n]Y[n]M[n]DT[n]H[n]M[n]S or P[n]W as shown to the right. In these representations, the [n] is replaced by the value for each of the date and time elements that follow the [n]. 
Leading zeros are not required, but the maximum number of digits for each element should be agreed to by the communicating parties. The capital letters "P", "Y", "M", "W", "D", "T", "H", "M", and "S" are designators for each of the date and time elements and are not replaced. For example, "P3Y6M4DT12H30M5S" represents a duration of "three years, six months, four days, twelve hours, thirty minutes, and five seconds". Date and time elements including their designator may be omitted if their value is zero, and lower-order elements may also be omitted for reduced precision. For example, "P23DT23H" and "P4Y" are both acceptable duration representations. However, at least one element must be present, thus "P" is not a valid representation for a duration of 0 seconds. "PT0S" or "P0D", however, are both valid and represent the same duration. To resolve ambiguity, "P1M" is a one-month duration and "PT1M" is a one-minute duration (note the time designator, T, that precedes the time value). The smallest value used may also have a decimal fraction, as in "P0.5Y" to indicate half a year. This decimal fraction may be specified with either a comma or a full stop, as in "P0,5Y" or "P0.5Y". The standard does not prohibit date and time values in a duration representation from exceeding their "carry over points" except as noted below. Thus, "PT36H" could be used as well as "P1DT12H" for representing the same duration. Note, however, that "PT36H" is not the same as "P1DT12H" when the interval spans a transition to or from daylight saving time. Alternatively, a format for duration based on combined date and time representations may be used by agreement between the communicating parties, either in the basic format PYYYYMMDDThhmmss or in the extended format P[YYYY]-[MM]-[DD]T[hh]:[mm]:[ss]. For example, the first duration shown above would be "P0003-06-04T12:30:05". However, individual date and time values cannot exceed their moduli (e.g. a value of 13 for the month or 25 for the hour would not be permissible). Although the standard describes a duration as part of time intervals, which are discussed in the next section, the duration format (or a subset thereof) is widely used independent of time intervals, as with the Java 8 Duration class. A time interval is the intervening time between two time points. The amount of intervening time is expressed by a duration (as described in the previous section). The two time points (start and end) are expressed by either a combined date and time representation or just a date representation. There are four ways to express a time interval: a start and an end, such as "2007-03-01T13:00:00Z/2008-05-11T15:30:00Z"; a start and a duration, such as "2007-03-01T13:00:00Z/P1Y2M10DT2H30M"; a duration and an end, such as "P1Y2M10DT2H30M/2008-05-11T15:30:00Z"; and a duration only, such as "P1Y2M10DT2H30M", with additional context information. Of these, the first three require two values separated by an "interval designator", which is usually a solidus (more commonly referred to as a forward slash "/"). Section 4.4.2 of the standard notes that: "In certain application areas a double hyphen is used as a separator instead of a solidus." The standard does not define the term "double hyphen", but previous versions used notations like "2000--2002". Use of a double hyphen instead of a solidus allows inclusion in computer filenames. A solidus is a reserved character and not allowed in a filename in common operating systems. For start/end expressions, if any elements are missing from the end value, they are assumed to be the same as for the start value, including the time zone. This feature of the standard allows for concise representations of time intervals. 
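Python's standard library has no parser for the duration format, so durations of the P[n]Y[n]M[n]DT[n]H[n]M[n]S form are commonly handled with a regular expression. The sketch below ("parse_duration" is an illustrative helper, not a library function) deliberately rejects the year and month designators since, as noted above, their length in seconds depends on the calendar context; the fixed-length designators map directly onto a "timedelta":

    # Parse the fixed-length subset of ISO 8601 durations into a timedelta.
    import re
    from datetime import timedelta

    _DURATION = re.compile(
        r"^P(?:(?P<weeks>\d+(?:[.,]\d+)?)W)?"
        r"(?:(?P<days>\d+(?:[.,]\d+)?)D)?"
        r"(?:T(?:(?P<hours>\d+(?:[.,]\d+)?)H)?"
        r"(?:(?P<minutes>\d+(?:[.,]\d+)?)M)?"
        r"(?:(?P<seconds>\d+(?:[.,]\d+)?)S)?)?$"
    )

    def parse_duration(text: str) -> timedelta:
        m = _DURATION.match(text)
        if not m or not any(m.groupdict().values()):
            raise ValueError(f"unsupported or invalid duration: {text!r}")
        # Accept both decimal marks, per the standard ("P0,5D" or "P0.5D").
        parts = {k: float(v.replace(",", ".")) for k, v in m.groupdict().items() if v}
        return timedelta(**parts)

    print(parse_duration("P23DT23H"))                            # 23 days, 23:00:00
    print(parse_duration("PT36H") == parse_duration("P1DT12H"))  # True (fixed-length time)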
For example, the date of a two-hour meeting including the start and finish times could be simply shown as "2007-12-14T13:30/15:30", where "/15:30" implies "/2007-12-14T15:30" (the same date as the start), or the beginning and end dates of a monthly billing period as "2008-02-15/03-14", where "/03-14" implies "/2008-03-14" (the same year as the start). If greater precision is desirable to represent the time interval, then more time elements can be added to the representation. An interval denoted by calendar dates alone can start at any time on the start date and end at any time on the end date, whereas an interval whose endpoints include time elements fixes the start and end instants exactly. To explicitly include the whole of the start and end dates, the interval can be given a start time of 00:00 on the first day and an end time of 24:00 (or 00:00 of the following day) on the last. Repeating intervals are specified in clause "4.5 Recurring time interval". They are formed by adding "R[n]/" to the beginning of an interval expression, where "R" is used as the letter itself and [n] is replaced by the number of repetitions. Leaving out the value for [n] means an unbounded number of repetitions. If the interval specifies the start (forms 1 and 2 above), then this is the start of the repeating interval. If the interval specifies the end but not the start (form 3 above), then this is the end of the repeating interval. For example, to repeat the interval of "P1Y2M10DT2H30M" five times starting at "2008-03-01T13:00:00Z", use "R5/2008-03-01T13:00:00Z/P1Y2M10DT2H30M". ISO 8601:2000 allowed truncation (by agreement), where leading components of a date or time are omitted. Notably, this allowed two-digit years to be used and the ambiguous formats YY-MM-DD and YYMMDD. This provision was removed in ISO 8601:2004. On the Internet, the World Wide Web Consortium (W3C) uses ISO 8601 in defining a profile of the standard that restricts the supported date and time formats to reduce the chance of error and the complexity of software. ISO 8601 is referenced by several specifications, but the full range of options of ISO 8601 is not always used. For example, the various electronic program guide standards for TV, digital radio, etc. use several forms to describe points in time and durations. The ID3 audio meta-data specification also makes use of a subset of ISO 8601. The X.690 encoding standard's GeneralizedTime makes use of another subset of ISO 8601. The ISO 8601 week date, as of 2006, appeared in its basic form on major brand commercial packaging in the United States. Its appearance depended on the particular packaging, canning, or bottling plant more than any particular brand. The format is particularly useful for quality assurance, so that production errors can be readily traced to work weeks, and products can be correctly targeted for recall. RFC 3339 defines a profile of ISO 8601 for use in Internet protocols and standards. It explicitly excludes durations and dates before the common era. The more complex formats such as week numbers and ordinal days are not permitted. RFC 3339 deviates from ISO 8601 in allowing a zero time zone offset to be specified as "-00:00", which ISO 8601 forbids. RFC 3339 intends "-00:00" to carry the connotation that it is not stating a preferred time zone, whereas the conforming "+00:00" or any non-zero offset connotes that the offset being used is preferred. This convention regarding "-00:00" is derived from earlier RFCs, such as RFC 2822, which uses it for timestamps in email headers. RFC 2822 made no claim that any part of its timestamp format conforms to ISO 8601, and so was free to use this convention without conflict.
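Putting the pieces together, a repeating interval of the form R[n]/start/duration (forms 1 and 2 above) can be expanded by repeatedly adding the duration to the start point. A short sketch reusing the illustrative "to_utc" and "parse_duration" helpers defined earlier (not standard-library functions), and restricted to the fixed-length durations they support:

    # Expand R[n]/<start>/<duration> into its individual occurrences.
    def occurrences(expr):
        r, start, dur = expr.split("/")
        n = int(r[1:]) if len(r) > 1 else None   # a bare "R" means unbounded
        begin, step = to_utc(start), parse_duration(dur)
        i = 0
        while n is None or i < n:
            yield begin + i * step
            i += 1

    for t in occurrences("R5/2008-03-01T13:00:00Z/PT2H30M"):
        print(t.isoformat())    # 2008-03-01T13:00:00+00:00, then every 2h30m, 5 in all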
https://en.wikipedia.org/wiki?curid=15024
International Seabed Authority The International Seabed Authority (ISA) is an intergovernmental body based in Kingston, Jamaica, that was established to organize, regulate and control all mineral-related activities in the international seabed area beyond the limits of national jurisdiction, an area underlying most of the world's oceans. It is an organization established by the United Nations Convention on the Law of the Sea. Following at least ten preparatory meetings, the Authority held its inaugural meeting in its host country, Jamaica, on 16 November 1994, the day the Convention came into force. The articles governing the Authority have been made "noting the political and economic changes, including market-oriented approaches, affecting the implementation" of the Convention. The Authority obtained observer status at the United Nations in October 1996. The Authority currently has 167 members plus the European Union, comprising all parties to the United Nations Convention on the Law of the Sea. Two principal organs establish the policies and govern the work of the Authority: the Assembly, in which all members are represented, and a 36-member Council elected by the Assembly. Council members are chosen according to a formula designed to ensure equitable representation of countries from various groups, including those engaged in seabed mineral exploration and the land-based producers of minerals found on the seabed. The Authority holds one annual session, usually of two weeks' duration. Also established are a 30-member Legal and Technical Commission, which advises the Council, and a 15-member Finance Committee that deals with budgetary and related matters. All members are experts nominated by governments and elected to serve in their individual capacity. The Authority operates by contracting with private and public corporations and other entities, authorizing them to explore, and eventually exploit, specified areas on the deep seabed for mineral resources essential for building most technological products. The Convention also established a body called the Enterprise, which is to serve as the Authority's own mining operator, but no concrete steps have been taken to bring this into being. The Authority currently has a Secretariat of 37 authorized posts and a biennial budget of $9.1 million for 2017 and $8.9 million for 2018. In July 2016, the Assembly of the Authority elected Michael Lodge of the United Kingdom for a four-year term as Secretary-General beginning 1 January 2017. He succeeded Nii Allotey Odunton of Ghana, who had served two consecutive four-year terms since 2008. The exploitation system envisaged in the UN Convention on the Law of the Sea, overseen by the Authority, came to life with the signing in 2001/02 of 15-year contracts with seven organizations that had applied for specific seabed areas in which they were authorized to explore for polymetallic nodules. In 2006, a German entity was added to the list. These contractors are: Yuzhmorgeologya (Russian Federation); Interoceanmetal Joint Organization (IOM) (Bulgaria, Cuba, Slovakia, Czech Republic, Poland and Russian Federation); the Government of the Republic of Korea; China Ocean Minerals Research and Development Association (COMRA) (China); Deep Ocean Resources Development Company (DORD) (Japan); Institut français de recherche pour l'exploitation de la mer (IFREMER) (France); the Government of India; and the Federal Institute for Geosciences and Natural Resources of Germany. 
All but one of the current areas of exploration are in the Clarion-Clipperton Zone, in the equatorial North Pacific Ocean south and southeast of Hawaii. The remaining area, being explored by India, is in the Central Indian Basin of the Indian Ocean. Each area is limited to 150,000 square kilometres, of which half is to be relinquished to the Authority after eight years. Each contractor is required to report once a year on its activities in its assigned area. So far, none of them has indicated any serious move to begin commercial exploitation. In 2008, the Authority received two new applications for authorization to explore for polymetallic nodules, coming for the first time from private firms in developing island nations of the Pacific. Sponsored by their respective governments, they were submitted by Nauru Ocean Resources Inc. and Tonga Offshore Mining Limited. A 15-year exploration contract was granted by the Authority to Nauru Ocean Resources Inc. on 22 July 2011 and to Tonga Offshore Mining Limited on 12 January 2012. Fifteen-year exploration contracts for polymetallic nodules were also granted to G-TECH Sea Mineral Resources NV (Belgium) on 14 January 2013; Marawa Research and Exploration Ltd (Kiribati) on 19 January 2015; Ocean Mineral Singapore Pte Ltd on 22 January 2015; UK Seabed Resources Ltd (two contracts, on 8 February 2013 and 29 March 2016 respectively); Cook Islands Investment Corporation on 15 July 2016; and, more recently, China Minmetals Corporation on 12 May 2017. The Authority has signed seven contracts for the exploration for polymetallic sulphides in the South West Indian Ridge, Central Indian Ridge and Mid-Atlantic Ridge with China Ocean Mineral Resources Research and Development Association (18 November 2011); the Government of Russia (29 October 2012); the Government of the Republic of Korea (24 June 2014); Institut français de recherche pour l'exploitation de la mer (Ifremer, France, 18 November 2014); the Federal Institute for Geosciences and Natural Resources of Germany (6 May 2015); the Government of India (26 September 2016); and the Government of the Republic of Poland (12 February 2018). The Authority also holds five contracts for the exploration of cobalt-rich ferromanganese crusts in the Western Pacific Ocean with China Ocean Mineral Resources Research and Development Association (29 April 2014); Japan Oil, Gas and Metals National Corporation (JOGMEC, 27 January 2014); the Ministry of Natural Resources and Environment of the Russian Federation (10 March 2015); Companhia de Pesquisa de Recursos Minerais (9 November 2015); and the Government of the Republic of Korea (27 March 2018). The Authority's main legislative accomplishment to date has been the adoption, in the year 2000, of regulations governing exploration for polymetallic nodules. These resources, also called manganese nodules, contain varying amounts of manganese, cobalt, copper and nickel. They occur as potato-sized lumps scattered about on the surface of the ocean floor, mainly in the central Pacific Ocean but with some deposits in the Indian Ocean. The Council of the Authority began work, in August 2002, on another set of regulations, covering polymetallic sulphides and cobalt-rich ferromanganese crusts, which are rich sources of such minerals as copper, iron, zinc, silver and gold, as well as cobalt. The sulphides are found around volcanic hot springs, especially in the western Pacific Ocean, while the crusts occur on oceanic ridges and elsewhere at several locations around the world. 
The Council decided in 2006 to prepare separate sets of regulations for sulphides and for crusts, with priority given to sulphides. It devoted most of its sessions in 2007 and 2008 to this task, but several issues remained unresolved. Chief among these were the definition and configuration of the area to be allocated to contractors for exploration, the fees to be paid to the Authority, and the question of how to deal with any overlapping claims that might arise. Meanwhile, the Legal and Technical Commission reported progress on ferromanganese crusts. In addition to its legislative work, the Authority organizes annual workshops on various aspects of seabed exploration, with emphasis on measures to protect the marine environment from any harmful consequences. It disseminates the results of these meetings through publications. Studies over several years covering the key mineral area of the central Pacific resulted in a technical study on biodiversity, species ranges and gene flow in the abyssal Pacific nodule province, with emphasis on predicting and managing the impacts of deep seabed mining. A workshop at Manoa, Hawaii, in October 2007 produced a rationale and recommendations for the establishment of "preservation reference areas" in the Clarion-Clipperton Zone, where nodule mining would be prohibited in order to leave the natural environment intact. The most recent workshop, held at Chennai, India, in February 2008, concerned polymetallic nodule mining technology, with special reference to its current status and the challenges ahead. Contrary to early hopes that seabed mining would generate extensive revenues for both the exploiting countries and the Authority, no technology has yet been developed for gathering deep-sea minerals at costs that can compete with land-based mines. Until recently, the consensus has been that economic mining of the ocean depths might be decades away. Moreover, the United States, with some of the most advanced ocean technology in the world, has not yet ratified the Law of the Sea Convention and is thus not a member of the Authority. In recent years, however, interest in deep-sea mining, especially with regard to ferromanganese crusts and polymetallic sulphides, has picked up among several firms now operating in waters within the national zones of Papua New Guinea, Fiji and Tonga. Papua New Guinea was the first country in the world to grant commercial exploration licenses for seafloor massive sulphide deposits, granting the initial license to Nautilus Minerals in 1997. Japan's new ocean policy emphasizes the need to develop methane hydrate and hydrothermal deposits within Japan's exclusive economic zone and calls for the commercialization of these resources within the next 10 years. Reporting on these developments in his annual report to the Authority in April 2008, Secretary-General Nandan referred also to the upward trend in demand and prices for cobalt, copper, nickel and manganese, the main metals that would be derived from seabed mining, and he noted that technologies being developed for offshore extraction could be adapted for deep-sea mining. In its preamble, UNCLOS defines the international seabed area—the part under ISA jurisdiction—as "the seabed and ocean floor and the subsoil thereof, beyond the limits of national jurisdiction". There are no maps annexed to the Convention to delineate this area. Rather, UNCLOS outlines the areas of national jurisdiction, leaving the rest for the international portion. 
National jurisdiction over the seabed normally extends to 200 nautical miles seaward from the baselines running along the shore, unless a nation can demonstrate that its continental shelf is naturally prolonged beyond that limit, in which case it may claim up to 350 nautical miles. ISA has no role in determining this boundary. Rather, this task is left to another body established by UNCLOS, the Commission on the Limits of the Continental Shelf, which examines scientific data submitted by coastal states that claim a broader reach. Maritime boundaries between states are generally decided by bilateral negotiation (sometimes with the aid of judicial bodies), not by ISA. Recently, there has been much interest in the possibility of exploiting seabed resources in the Arctic Ocean, bordered by Canada, Denmark, Iceland, Norway, Russia and the United States (see Territorial claims in the Arctic). Mineral exploration and exploitation activities in any seabed area not belonging to these states would fall under ISA jurisdiction. In 2006 the Authority established an Endowment Fund to Support Collaborative Marine Scientific Research on the International Seabed Area. The Fund aids experienced scientists and technicians from developing countries in participating in deep-sea research organized by international and national institutions. A campaign was launched in February 2008 to identify participants, establish a network of cooperating bodies and seek outside funds to augment the initial $3 million endowment from the Authority. The International Seabed Authority Endowment Fund promotes and encourages the conduct of collaborative marine scientific research in the international seabed area through two main activities. The Secretariat of the International Seabed Authority facilitates these activities by creating and maintaining an ongoing list of opportunities for scientific collaboration, including research cruises, deep-sea sample analysis, and training and internship programmes. This entails building a network of co-operating groups interested in (or presently undertaking) these types of activities and programmes, such as universities, institutions, contractors with the Authority and other entities. The Secretariat is also actively seeking applications from scientists and other technical personnel from developing nations to be considered for assistance under the Fund. Application guidelines have been prepared for potential recipients to participate in marine scientific research programmes or other scientific co-operation activities, to enroll in training programmes, and to qualify for technical assistance. An advisory panel evaluates all incoming applications and makes recommendations to the Secretary-General of the International Seabed Authority so that successful applicants may be awarded Fund assistance. To maximize opportunities for and participation in the Fund, the Secretariat is also seeking donations and in-kind contributions to build on the initial investment of US$3 million. This entails raising awareness of the Fund, reporting on its successes and encouraging new activities and participants. In 2017, the Authority registered seven voluntary commitments with the UN Oceans Conference for Sustainable Development Goal 14. The exact nature of the ISA's mission and authority has been questioned by opponents of the Law of the Sea Treaty, who are generally skeptical of multilateral engagement by the United States. 
The United States is the only major maritime power that has not ratified the Convention (see United States non-ratification of the UNCLOS), one of the main anti-ratification arguments being the charge that the ISA is flawed or unnecessary. In its original form, the Convention included certain provisions that some found objectionable. Because of these concerns, the United States pushed for modification of the Convention, obtaining a 1994 Agreement on Implementation that somewhat mitigates them and thus modifies the ISA's authority. Despite this change, the United States has not ratified the Convention and so is not a member of ISA, although it sends sizable delegations to participate in meetings as an observer.
https://en.wikipedia.org/wiki?curid=15028
Industry Standard Architecture Industry Standard Architecture (ISA) is the 16-bit internal bus of the IBM PC/AT and similar computers based on the Intel 80286 and its immediate successors during the 1980s. The bus was (largely) backward compatible with the 8-bit bus of the 8088-based IBM PC, including the IBM PC/XT as well as IBM PC compatibles. Originally referred to as the PC/AT-bus, it was also termed "I/O Channel" by IBM. The ISA term was coined as a retronym by competing PC-clone manufacturers in the late 1980s or early 1990s as a reaction to IBM's attempts to replace the AT-bus with its new and incompatible Micro Channel architecture. The 16-bit ISA bus was also used with 32-bit processors for several years. An attempt to extend it to 32 bits, called Extended Industry Standard Architecture (EISA), was not very successful, however. Later buses such as VESA Local Bus and PCI were used instead, often along with ISA slots on the same mainboard. Derivatives of the AT bus structure were and still are used in ATA/IDE, the PCMCIA standard, CompactFlash, the PC/104 bus, and internally within Super I/O chips. The ISA bus was developed by a team led by Mark Dean at IBM as part of the IBM PC project in 1981. Compaq created the term "Industry Standard Architecture" (ISA) to replace "PC compatible". ISA originated as an 8-bit system. A 16-bit version, the IBM AT bus, was introduced with the release of the IBM PC/AT in 1984. In 1988, the 32-bit Extended Industry Standard Architecture (EISA) standard was proposed by the "Gang of Nine" group of PC-compatible manufacturers that included Compaq. In the process, they retroactively renamed the AT bus to "ISA" to avoid infringing IBM's trademark on its PC/AT computer. IBM designed the 8-bit version as a buffered interface to the motherboard buses of the Intel 8088 (16/8-bit) CPU in the IBM PC and PC/XT. The 16-bit version was an upgrade for the motherboard buses of the Intel 80286 CPU used in the IBM AT. The ISA bus was therefore synchronous with the CPU clock, until sophisticated buffering methods were implemented by chipsets to interface ISA to much faster CPUs. ISA was designed to connect peripheral cards to the motherboard and allows for bus mastering, although only the first 16 MB of main memory are addressable. The original 8-bit bus ran from the 4.77 MHz clock of the 8088 CPU in the IBM PC and PC/XT. The original 16-bit bus ran from the CPU clock of the 80286 in IBM PC/AT computers, which was 6 MHz in the first models and 8 MHz in later models. The IBM RT PC also used the 16-bit bus. ISA was also used in some non-IBM-compatible machines, such as Motorola 68k-based Apollo (68020) and Amiga 3000 (68030) workstations, the short-lived AT&T Hobbit, and the later PowerPC-based BeBox. Companies like Dell improved the AT bus's performance, but in 1987 IBM replaced the AT bus with its proprietary Micro Channel Architecture (MCA). MCA overcame many of the limitations then apparent in ISA but was also an effort by IBM to regain control of the PC architecture and the PC market. MCA was far more advanced than ISA and had many features that would later appear in PCI. However, MCA was also a closed standard, whereas IBM had released full specifications and circuit schematics for ISA. Computer manufacturers responded to MCA by developing the Extended Industry Standard Architecture (EISA) and the later VESA Local Bus (VLB). VLB used some electronic parts originally intended for MCA because component manufacturers already were equipped to manufacture them. 
Both EISA and VLB were backwards-compatible expansions of the AT (ISA) bus. Users of ISA-based machines had to know special information about the hardware they were adding to the system. While a handful of devices were essentially "plug-n-play", this was rare. Users frequently had to configure parameters when adding a new device, such as the IRQ line, I/O address, or DMA channel. MCA had done away with this complication, and PCI actually incorporated many of the ideas first explored with MCA, though it was more directly descended from EISA. This trouble with configuration eventually led to the creation of ISA PnP, a plug-and-play system that used a combination of modifications to hardware, the system BIOS, and operating system software to automatically manage resource allocations. In reality, ISA PnP could be troublesome and did not become well-supported until the architecture was in its final days. PCI slots were the first physically incompatible expansion ports to directly squeeze ISA off the motherboard. At first, motherboards carried mostly ISA slots, with a few PCI slots. By the mid-1990s, the two slot types were roughly balanced, and ISA slots soon were in the minority in consumer systems. Microsoft's PC 99 specification recommended that ISA slots be removed entirely, though the system architecture still required ISA to be present in some vestigial way internally to handle the floppy drive, serial ports, etc., which was why the software-compatible LPC bus was created. ISA slots remained for a few more years, and towards the turn of the century it was common to see systems with an Accelerated Graphics Port (AGP) sitting near the central processing unit, an array of PCI slots, and one or two ISA slots near the end. By late 2008, even floppy disk drives and serial ports were disappearing, and the extinction of vestigial ISA (by then the LPC bus) from chipsets was on the horizon. PCI slots are "rotated" compared to their ISA counterparts; PCI cards were essentially inserted "upside-down", allowing ISA and PCI connectors to squeeze together on the motherboard. Only one of the two connectors can be used in each slot at a time, but this allowed for greater flexibility. The AT Attachment (ATA) hard disk interface is directly descended from the 16-bit ISA of the PC/AT. ATA has its origins in hardcards that integrated a hard disk drive (HDD) and a hard disk controller (HDC) onto one card. This was at best awkward and at worst damaging to the motherboard, as ISA slots were not designed to support such heavy devices as HDDs. The next generation of Integrated Drive Electronics drives moved both the drive and controller to a drive bay and used a ribbon cable and a very simple interface board to connect it to an ISA slot. ATA is basically a standardization of this arrangement plus a uniform command structure for software to interface with the HDC within the drive. ATA has since been separated from the ISA bus and connected directly to the local bus, usually by integration into the chipset, for much higher clock rates and data throughput than ISA could support. ATA has clear characteristics of 16-bit ISA, such as a 16-bit transfer size, signal timing in the PIO modes, and the interrupt and DMA mechanisms. The PC/XT-bus is an eight-bit ISA bus used by Intel 8086 and Intel 8088 systems in the IBM PC and IBM PC XT in the 1980s. 
Among its 62 pins were demultiplexed and electrically buffered versions of the 8 data and 20 address lines of the 8088 processor, along with power lines, clocks, read/write strobes, interrupt lines, etc. Power lines included −5 V and ±12 V in order to directly support pMOS and enhancement-mode nMOS circuits such as dynamic RAMs, among other things. The XT bus architecture uses a single Intel 8259 PIC, giving eight vectorized and prioritized interrupt lines. It has four DMA channels originally provided by the Intel 8237; three of these are brought out to the XT bus expansion slots, and two of those are normally already allocated to machine functions (the diskette drive and the hard disk controller). The PC/AT-bus, a 16-bit (or 80286-) version of the PC/XT bus, was introduced with the IBM PC/AT. This bus was officially termed "I/O Channel" by IBM. It extends the XT-bus by adding a second, shorter edge connector in line with the eight-bit XT-bus connector, which is unchanged, retaining compatibility with most 8-bit cards. The second connector adds four additional address lines for a total of 24, and 8 additional data lines for a total of 16. It also adds new interrupt lines connected to a second 8259 PIC (connected to one of the lines of the first) and four 16-bit DMA channels, as well as control lines to select 8- or 16-bit transfers. The 16-bit AT bus slot originally used two standard edge-connector sockets in early IBM PC/AT machines. However, with the popularity of the AT architecture and the 16-bit ISA bus, manufacturers introduced specialized 98-pin connectors that integrated the two sockets into one unit. These can be found in almost every AT-class PC manufactured after the mid-1980s. The ISA slot connector is typically black (distinguishing it from the brown EISA connectors and white PCI connectors). Motherboard devices have dedicated IRQs (not present in the slots). 16-bit devices can use either PC-bus or PC/AT-bus IRQs. It is therefore possible to connect up to 6 devices that use one 8-bit IRQ each, or up to 5 devices that use one 16-bit IRQ each. At the same time, up to 4 devices may use one 8-bit DMA channel each, while up to 3 devices can use one 16-bit DMA channel each. Originally, the bus clock was synchronous with the CPU clock, resulting in varying bus clock frequencies among the many different IBM "clones" on the market (sometimes as high as 16 or 20 MHz), leading to software or electrical timing problems for certain ISA cards at bus speeds they were not designed for. Later motherboards or integrated chipsets used a separate clock generator, or a clock divider, which either fixed the ISA bus frequency at 4, 6, or 8 MHz or allowed the user to adjust the frequency via the BIOS setup. When used at a higher bus frequency, some ISA cards (certain Hercules-compatible video cards, for instance) could show significant performance improvements. Memory address decoding for the selection of 8- or 16-bit transfer mode was limited to 128 KiB sections, leading to problems when mixing 8- and 16-bit cards, as they could not co-exist in the same 128 KiB area. This is because the MEMCS16 line is required to be set based on the value of LA17-23 only. ISA is still used today for specialized industrial purposes. In 2008, IEI Technologies released a modern motherboard for Intel Core 2 Duo processors which, in addition to other special I/O features, is equipped with two ISA slots. 
It is marketed to industrial and military users who have invested in expensive specialized ISA bus adaptors, which are not available in PCI bus versions. Similarly, ADEK Industrial Computers is releasing a motherboard in early 2013 for Intel Core i3/i5/i7 processors, which contains one (non-DMA) ISA slot. The PC/104 bus, used in industrial and embedded applications, is a derivative of the ISA bus, utilizing the same signal lines with different connectors. The LPC bus has replaced the ISA bus as the connection to the legacy I/O devices on recent motherboards; while physically quite different, LPC looks just like ISA to software, so that the peculiarities of ISA such as the 16 MiB DMA limit (which corresponds to the full address space of the Intel 80286 CPU used in the original IBM AT) are likely to stick around for a while. As explained in the "History" section, ISA was the basis for development of the ATA interface, used for ATA (a.k.a. IDE) and more recently Serial ATA (SATA) hard disks. Physically, ATA is essentially a simple subset of ISA, with 16 data bits, support for exactly one IRQ and one DMA channel, and 3 address bits. To this ISA subset, ATA adds two IDE address select ("chip select") lines and a few unique signal lines specific to ATA/IDE hard disks (such as the Cable Select/Spindle Sync. line.) In addition to the physical interface channel, ATA goes beyond and far outside the scope of ISA by also specifying a set of physical device registers to be implemented on every ATA (IDE) drive and a full set of protocols and device commands for controlling fixed disk drives using these registers. The ATA device registers are accessed using the address bits and address select signals in the ATA physical interface channel, and all operations of ATA hard disks are performed using the ATA-specified protocols through the ATA command set. The earliest versions of the ATA standard featured a few simple protocols and a basic command set comparable to the command sets of MFM and RLL controllers (which preceded ATA controllers), but the latest ATA standards have much more complex protocols and instruction sets that include optional commands and protocols providing such advanced optional-use features as sizable hidden system storage areas, password security locking, and programmable geometry translation. A further deviation between ISA and ATA is that while the ISA bus remained locked into a single standard clock rate (for backward hardware compatibility), the ATA interface offered many different speed modes, could select among them to match the maximum speed supported by the attached drives, and kept adding faster speeds with later versions of the ATA standard (up to 133 MB/s for ATA-6, the latest.) In most forms, ATA ran much faster than ISA, provided it was connected directly to a local bus faster than the ISA bus. Before the 16-bit ATA/IDE interface, there was an 8-bit XT-IDE (also known as XTA) interface for hard disks. It was not nearly as popular as ATA has become, and XT-IDE hardware is now fairly hard to find. Some XT-IDE adapters were available as 8-bit ISA cards, and XTA sockets were also present on the motherboards of Amstrad's later XT clones as well as a short-lived line of Philips units. The XTA pinout was very similar to ATA, but only eight data lines and two address lines were used, and the physical device registers had completely different meanings. A few hard drives (such as the Seagate ST351A/X) could support either type of interface, selected with a jumper. 
Many later AT (and AT-successor) motherboards had no integrated hard drive interface but relied on a separate hard drive interface plugged into an ISA/EISA/VLB slot. There were even a few 80486-based units shipped with MFM/RLL interfaces and drives instead of the increasingly common AT-IDE. Commodore built the XT-IDE-based peripheral hard drive / memory expansion unit A590 for their Amiga 500 and 500+ computers, which also supported a SCSI drive. Later models – the A600, A1200, and the Amiga 4000 series – use AT-IDE drives. The PCMCIA specification can be seen as a superset of ATA. The standard for PCMCIA hard disk interfaces, which included PCMCIA flash drives, allows for the mutual configuration of the port and the drive in an ATA mode. As a de facto extension, most PCMCIA flash drives additionally allow for a simple ATA mode that is enabled by pulling a single pin low, so that PCMCIA hardware and firmware are unnecessary to use them as an ATA drive connected to an ATA port. PCMCIA flash drive to ATA adapters are thus simple and inexpensive, but are not guaranteed to work with any and every standard PCMCIA flash drive. Further, such adapters cannot be used as generic PCMCIA ports, as the PCMCIA interface is much more complex than ATA. Although most modern computers do not have physical ISA buses, all IBM-compatible computers (x86 and x86-64; most non-mainframe, non-embedded systems) have ISA buses allocated in physical address space. Embedded controller chips (southbridge) and CPUs themselves provide services such as temperature monitoring and voltage readings through these buses as ISA devices. IEEE started a standardization of the ISA bus in 1985, called the P996 specification. However, despite there even having been books published on the P996 specification, it never officially progressed past draft status. There is still an existing user base with old computers, so some ISA cards are still manufactured, for example cards with USB ports, and even complete single-board computers based on modern processors with USB 3.0 and SATA.
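The addressing limits discussed above had a practical consequence for driver writers: a buffer used for ISA DMA must lie entirely within the 16 MiB reachable by the bus's 24 address lines, and, because the 8237 controller's 16-bit address counter is extended only by a static page register, a transfer cannot cross a 64 KiB page (128 KiB for the word-wide 16-bit channels). A minimal sketch of that classic check (Python for illustration only; the function and constant names are hypothetical):

    # Check the two classic constraints on an ISA DMA buffer.
    ISA_LIMIT = 1 << 24          # 16 MiB: top of the 24-bit ISA address space

    def dma_buffer_ok(phys_addr: int, length: int, sixteen_bit: bool) -> bool:
        page = 1 << (17 if sixteen_bit else 16)    # 128 KiB or 64 KiB DMA page
        if phys_addr + length > ISA_LIMIT:
            return False                           # beyond the 24 address lines
        # The 8237 cannot carry an increment across a page boundary:
        return (phys_addr // page) == ((phys_addr + length - 1) // page)

    print(dma_buffer_ok(0x0F0000, 0x2000, False))  # True: stays inside one 64 KiB page
    print(dma_buffer_ok(0x00FFF0, 0x0020, False))  # False: crosses a 64 KiB boundary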
https://en.wikipedia.org/wiki?curid=15029
Intergovernmental Panel on Climate Change The Intergovernmental Panel on Climate Change (IPCC) is an intergovernmental body of the United Nations that is dedicated to providing the world with objective, scientific information relevant to understanding the scientific basis of the risk of human-induced climate change, its natural, political, and economic impacts and risks, and possible response options. The IPCC was established in 1988 by the World Meteorological Organization (WMO) and the United Nations Environment Programme (UNEP) and was later endorsed by the United Nations General Assembly. Membership is open to all members of the WMO and UN. The IPCC produces reports that contribute to the work of the United Nations Framework Convention on Climate Change (UNFCCC), the main international treaty on climate change. The objective of the UNFCCC is to "stabilize greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic (human-induced) interference with the climate system". The IPCC's Fifth Assessment Report was a critical scientific input into the UNFCCC's Paris Agreement in 2015. IPCC reports cover the "scientific, technical and socio-economic information relevant to understanding the scientific basis of risk of human-induced climate change, its potential impacts and options for adaptation and mitigation." The IPCC does not carry out original research, nor does it monitor climate or related phenomena itself. Rather, it assesses published literature, including peer-reviewed and non-peer-reviewed sources. However, the IPCC can be said to stimulate research in climate science. Chapters of IPCC reports often close with sections on limitations and knowledge or research gaps, and the announcement of an IPCC special report can catalyse research activity in that area. Thousands of scientists and other experts contribute on a voluntary basis to writing and reviewing reports, which are then reviewed by governments. IPCC reports contain a "Summary for Policymakers", which is subject to line-by-line approval by delegates from all participating governments. Typically, this involves the governments of more than 120 countries. The IPCC provides an internationally accepted authority on climate change, producing reports that have the agreement of leading climate scientists and consensus from participating governments. The 2007 Nobel Peace Prize was shared between the IPCC and Al Gore. Following the election of a new Bureau in 2015, the IPCC embarked on its sixth assessment cycle. Besides the Sixth Assessment Report, to be completed in 2022, the IPCC released the Special Report on Global Warming of 1.5 °C in October 2018, released an update to its 2006 Guidelines for National Greenhouse Gas Inventories—the 2019 Refinement—in May 2019, and delivered two further special reports in 2019: the Special Report on Climate Change and Land (SRCCL), published online on 7 August, and the Special Report on the Ocean and Cryosphere in a Changing Climate (SROCC), released on 25 September 2019. This makes the sixth assessment cycle the most ambitious in the IPCC's 30-year history. The IPCC also decided to prepare a special report on cities and climate change in the seventh assessment cycle and held a conference in March 2018 to stimulate research in this area. 
The IPCC developed from an international scientific body, the Advisory Group on Greenhouse Gases, set up in 1985 by the International Council of Scientific Unions, the United Nations Environment Programme (UNEP), and the World Meteorological Organization (WMO) to provide recommendations based on current research. This small group of scientists lacked the resources to cover the increasingly complex interdisciplinary nature of climate science. The United States Environmental Protection Agency and State Department wanted an international convention to agree on restrictions on greenhouse gases, and the conservative Reagan Administration was concerned about unrestrained influence from independent scientists or from United Nations bodies including UNEP and the WMO. The U.S. government was the main force in forming the IPCC as an autonomous intergovernmental body in which scientists took part both as experts on the science and as official representatives of their governments, to produce reports which had the firm backing of all the leading scientists worldwide researching the topic, and which then had to gain consensus agreement from every one of the participating governments. In this way, it was formed as a hybrid between a scientific body and an intergovernmental political organisation. The United Nations formally endorsed the creation of the IPCC in 1988. The IPCC was tasked with reviewing peer-reviewed scientific literature and other relevant publications to provide information on the state of knowledge about climate change. The IPCC does not conduct its own original research. It produces comprehensive assessments, reports on special topics, and methodologies. The assessments build on previous reports, highlighting the latest knowledge. For example, the wording of the reports from the first to the fifth assessment reflects the growing evidence for a changing climate caused by human activity. The IPCC has adopted and published "Principles Governing IPCC Work", which states that the IPCC will assess "on a comprehensive, objective, open and transparent basis the scientific, technical and socio-economic information relevant to understanding the scientific basis" of the risk of human-induced climate change, its potential impacts, and options for adaptation and mitigation. The Principles also state that "IPCC reports should be neutral with respect to policy, although they may need to deal objectively with scientific, technical and socio-economic factors relevant to the application of particular policies." Korean economist Hoesung Lee has been the chair of the IPCC since 8 October 2015, following the election of the new IPCC Bureau. Before this election, the IPCC was led by Vice-Chair Ismail El Gizouli, who was designated acting Chair after the resignation of Rajendra K. Pachauri in February 2015. The previous chairs were Rajendra K. Pachauri, elected in May 2002; Robert Watson in 1997; and Bert Bolin in 1988. The chair is assisted by an elected bureau including vice-chairs and working group co-chairs, and by a secretariat. The Panel itself is composed of representatives appointed by governments. Participation of delegates with appropriate expertise is encouraged. Plenary sessions of the IPCC and IPCC Working Groups are held at the level of government representatives. Non-governmental and intergovernmental organizations admitted as observer organizations may also attend. Sessions of the Panel, IPCC Bureau, workshops, expert and lead-author meetings are by invitation only. 
About 500 people from 130 countries attended the 48th Session of the Panel in Incheon, Republic of Korea, in October 2018, including 290 government officials and 60 representatives of observer organizations. The opening ceremonies of sessions of the Panel and of Lead Author Meetings are open to media, but otherwise IPCC meetings are closed. The IPCC receives funding through the IPCC Trust Fund, established in 1989 by the United Nations Environment Programme (UNEP) and the World Meteorological Organization (WMO). Costs of the Secretary and of housing the secretariat are met by the WMO, while UNEP meets the cost of the Deputy Secretary. Annual cash contributions to the Trust Fund are made by the WMO, by UNEP, and by IPCC members; payments and their size are voluntary. The Panel is responsible for considering and adopting the annual budget by consensus, and the organization is required to comply with the Financial Regulations and Rules of the WMO. The IPCC has published five comprehensive assessment reports reviewing the latest climate science, as well as a number of special reports on particular topics. These reports are prepared by teams of relevant researchers selected by the Bureau from government nominations. Expert reviewers from a wide range of governments, IPCC observer organizations, and other organizations are invited at different stages to comment on various aspects of the drafts. The IPCC published its First Assessment Report (FAR) in 1990, a supplementary report in 1992, a Second Assessment Report (SAR) in 1995, a Third Assessment Report (TAR) in 2001, a Fourth Assessment Report (AR4) in 2007, and a Fifth Assessment Report (AR5) in 2014. The IPCC is currently preparing the Sixth Assessment Report (AR6), which will be completed in 2022. Each assessment report is in three volumes, corresponding to Working Groups I, II, and III, and is completed by a synthesis report that integrates the working group contributions and any special reports produced in that assessment cycle. The IPCC does not carry out research, nor does it monitor climate-related data. Lead authors of IPCC reports assess the available information about climate change based on published sources. According to IPCC guidelines, authors should give priority to peer-reviewed sources. Authors may refer to non-peer-reviewed sources (the "grey literature"), provided that they are of sufficient quality. Examples of non-peer-reviewed sources include model results, reports from government agencies and non-governmental organizations, and industry journals. Each subsequent IPCC report notes areas where the science has improved since the previous report and also notes areas where further research is required. The review process generally proceeds in three stages, and review comments are kept in an open archive for at least five years. Documents can receive several types of endorsement; the Panel is responsible for the IPCC, and its endorsement of reports allows it to ensure they meet IPCC standards. There have been a range of commentaries on the IPCC's procedures, examples of which are discussed later in the article (see also IPCC Summary for Policymakers). Some of these comments have been supportive, while others have been critical, and some commentators have suggested changes to the IPCC's procedures. Each chapter has a number of authors who are responsible for writing and editing the material.
A chapter typically has two "coordinating lead authors", ten to fifteen "lead authors", and a somewhat larger number of "contributing authors". The coordinating lead authors are responsible for assembling the contributions of the other authors, ensuring that they meet stylistic and formatting requirements, and reporting to the Working Group chairs. Lead authors are responsible for writing sections of chapters. Contributing authors prepare text, graphs, or data for inclusion by the lead authors. Authors for the IPCC reports are chosen from a list of researchers prepared by governments and participating organisations, and by the Working Group/Task Force Bureaux, as well as other experts known through their published work. The choice of authors aims for a range of views, expertise, and geographical representation, ensuring representation of experts from developing and developed countries and countries with economies in transition. The IPCC First Assessment Report (FAR) was completed in 1990 and served as the basis of the UNFCCC. The executive summary of the WG I Summary for Policymakers says the authors are certain that emissions resulting from human activities are substantially increasing the atmospheric concentrations of the greenhouse gases, resulting on average in an additional warming of the Earth's surface. They calculate with confidence that CO2 has been responsible for over half the enhanced greenhouse effect. They predict that under a "business as usual" (BAU) scenario, global mean temperature will increase by about 0.3 °C per decade during the [21st] century. They judge that global mean surface air temperature has increased by 0.3 to 0.6 °C over the last 100 years, broadly consistent with predictions of climate models, but also of the same magnitude as natural climate variability, and that the unequivocal detection of the enhanced greenhouse effect is not likely for a decade or more. The 1992 supplementary report was an update, requested in the context of the negotiations on the UNFCCC at the Earth Summit (United Nations Conference on Environment and Development) in Rio de Janeiro in 1992. The major conclusion was that research since 1990 did "not affect our fundamental understanding of the science of the greenhouse effect and either confirm or do not justify alteration of the major conclusions of the first IPCC scientific assessment". It noted that transient (time-dependent) simulations, which had been very preliminary in the FAR, were now improved, but did not include aerosol or ozone changes. "Climate Change 1995", the IPCC Second Assessment Report (SAR), was finished in 1996. It is split into four parts: a synthesis and one report from each of the three Working Groups. Each of the last three parts was completed by a separate Working Group (WG), and each has a Summary for Policymakers (SPM) that represents a consensus of national representatives. The Third Assessment Report (TAR) was completed in 2001 and consists of four reports, three of them from its Working Groups. A number of the TAR's conclusions are given quantitative estimates of how probable it is that they are correct, e.g., greater than 66% probability of being correct. These are "Bayesian" probabilities, which are based on an expert assessment of all the available evidence. "Robust findings" of the TAR Synthesis Report include the finding that atmospheric concentrations of anthropogenic (i.e., human-emitted) greenhouse gases have increased substantially. In 2001, 16 national science academies issued a joint statement on climate change.
The joint statement was made by the Australian Academy of Science, the Royal Flemish Academy of Belgium for Science and the Arts, the Brazilian Academy of Sciences, the Royal Society of Canada, the Caribbean Academy of Sciences, the Chinese Academy of Sciences, the French Academy of Sciences, the German Academy of Natural Scientists Leopoldina, the Indian National Science Academy, the Indonesian Academy of Sciences, the Royal Irish Academy, the Accademia Nazionale dei Lincei (Italy), the Academy of Sciences Malaysia, the Academy Council of the Royal Society of New Zealand, the Royal Swedish Academy of Sciences, and the Royal Society (UK). The statement, also published as an editorial in the journal "Science", read: "we support the [TAR's] conclusion that it is at least 90% certain that temperatures will continue to rise, with average global surface temperature projected to increase by between 1.4 and 5.8 °C above 1990 levels by 2100". The TAR has also been endorsed by the Canadian Foundation for Climate and Atmospheric Sciences, the Canadian Meteorological and Oceanographic Society, and the European Geosciences Union (refer to "Endorsements of the IPCC"). In 2001, the US National Research Council (US NRC) produced a report that assessed Working Group I's (WGI) contribution to the TAR. US NRC (2001) "generally agrees" with the WGI assessment and describes the full WGI report as an "admirable summary of research activities in climate science". IPCC author Richard Lindzen has made a number of criticisms of the TAR. Among them, Lindzen has stated that the WGI Summary for Policymakers (SPM) does not faithfully summarize the full WGI report; for example, he states that the SPM understates the uncertainty associated with climate models. John Houghton, who was a co-chair of TAR WGI, has responded to Lindzen's criticisms of the SPM, stressing that the SPM is agreed upon by delegates from many of the world's governments, and that any changes to the SPM must be supported by scientific evidence. IPCC author Kevin Trenberth has also commented on the WGI SPM, stating that during its drafting some government delegations attempted to "blunt, and perhaps obfuscate, the messages in the report", though he concludes that the SPM is a "reasonably balanced summary". US NRC (2001) concluded that the WGI SPM and Technical Summary are "consistent" with the full WGI report, stating: "[...] the full [WGI] report is adequately summarized in the Technical Summary. The full WGI report and its Technical Summary are not specifically directed at policy. The Summary for Policymakers reflects less emphasis on communicating the basis for uncertainty and a stronger emphasis on areas of major concern associated with human-induced climate change. This change in emphasis appears to be the result of a summary process in which scientists work with policy makers on the document. Written responses from U.S. coordinating and lead scientific authors to the committee indicate, however, that (a) no changes were made without the consent of the convening lead authors (this group represents a fraction of the lead and contributing authors) and (b) most changes that did occur lacked significant impact." The Fourth Assessment Report (AR4) was published in 2007. Like previous assessment reports, it consists of four volumes: one from each of the three Working Groups, plus a Synthesis Report. People from over 130 countries contributed to AR4, which took six years to produce.
Contributors to AR4 included more than 2,500 scientific expert reviewers, more than 800 contributing authors, and more than 450 lead authors. "Robust findings" of the Synthesis Report include increases in atmospheric greenhouse gas concentrations due to human activities. Global warming projections from AR4 apply to the end of the 21st century (2090–99), relative to temperatures at the end of the 20th century (1980–99); add 0.7 °C to the projections to make them relative to pre-industrial levels instead of 1980–99 (UK Royal Society, 2010, p. 10). Descriptions of the greenhouse gas emissions scenarios can be found in the Special Report on Emissions Scenarios. "Likely" means greater than 66% probability of being correct, based on expert judgement. Several science academies have referred to and/or reiterated some of the conclusions of AR4, including joint statements issued in 2008 and 2009 by the science academies of Brazil, China, India, Mexico, South Africa, and the G8 nations (the "G8+5"); one such statement has been signed by 43 scientific academies. The Netherlands Environmental Assessment Agency (PBL, "et al.", 2009; 2010) has carried out two reviews of AR4. These reviews are generally supportive of AR4's conclusions, and PBL (2010) makes some recommendations to improve the IPCC process. A literature assessment by the US National Research Council (US NRC, 2010) concludes: "Climate change is occurring, is caused largely by human activities, and poses significant risks for—and in many cases is already affecting—a broad range of human and natural systems ["emphasis in original text"]. [...] This conclusion is based on a substantial array of scientific evidence, including recent work, and is consistent with the conclusions of recent assessments by the U.S. Global Change Research Program [...], the Intergovernmental Panel on Climate Change's Fourth Assessment Report [...], and other assessments of the state of scientific knowledge on climate change." Some errors have been found in the IPCC AR4 Working Group II report. Two examples are the projected date for the melting of Himalayan glaciers (see later section) and the proportion of Dutch land area that is below sea level. The IPCC's Fifth Assessment Report (AR5) was completed in 2014. AR5 followed the same general format as AR4, with three Working Group reports and a Synthesis Report; the Working Group I report (WG1) was published in September 2013. Conclusions of AR5 include the following: it is extremely likely (95–100% probability) that human influence was the dominant cause of global warming between 1951 and 2010, and pledges made as part of the Cancún Agreements are broadly consistent with cost-effective scenarios that give a "likely" chance (66–100% probability) of limiting global warming (in 2100) to below 3 °C, relative to pre-industrial levels.
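To make the two reporting conventions above concrete (the 0.7 °C baseline shift and the calibrated likelihood language), here is a minimal Python sketch. The probability bands reproduce the IPCC's published likelihood scale as quoted in this article (e.g., "likely" meaning 66–100%, "extremely likely" meaning 95–100%); the function names and the example values are illustrative assumptions, not IPCC code or data.

    # Illustrative sketch of two conventions used in AR4/AR5 reporting.
    # Bands follow the IPCC's published calibrated likelihood language.
    LIKELIHOOD_SCALE = {
        "virtually certain":    (99, 100),
        "extremely likely":     (95, 100),
        "very likely":          (90, 100),
        "likely":               (66, 100),
        "more likely than not": (50, 100),
        "unlikely":             (0, 33),
        "very unlikely":        (0, 10),
        "extremely unlikely":   (0, 5),
    }

    def qualifies(term: str, probability_pct: float) -> bool:
        """Check whether an assessed probability falls within a term's band."""
        lo, hi = LIKELIHOOD_SCALE[term]
        return lo <= probability_pct <= hi

    # AR4 projections are reported relative to the 1980-99 mean; the UK Royal
    # Society's 0.7 degC offset converts them to a pre-industrial baseline.
    def to_preindustrial(warming_vs_1980_99: float, offset: float = 0.7) -> float:
        return warming_vs_1980_99 + offset

    print(qualifies("likely", 70))                 # True: 70% lies within 66-100%
    print(round(to_preindustrial(1.8), 2))         # 2.5 degC above pre-industrial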
Projections in AR5 are based on "Representative Concentration Pathways" (RCPs), which are consistent with a wide range of possible changes in future anthropogenic greenhouse gas emissions. Projected changes in global mean surface temperature and sea level are given in the main RCP article. In addition to climate assessment reports, the IPCC publishes Special Reports on specific topics. The preparation and approval process for all IPCC Special Reports follows the same procedures as for IPCC Assessment Reports. In 2011, two IPCC Special Reports were finalized: the Special Report on Renewable Energy Sources and Climate Change Mitigation (SRREN) and the Special Report on Managing Risks of Extreme Events and Disasters to Advance Climate Change Adaptation (SREX). Both Special Reports were requested by governments. The Special Report on Emissions Scenarios (SRES) is a report by the IPCC which was published in 2000. The SRES contains "scenarios" of future changes in emissions of greenhouse gases and sulfur dioxide. One use of the SRES scenarios is to project future changes in climate, e.g., changes in global mean temperature. The SRES scenarios were used in the IPCC's Third and Fourth Assessment Reports. The SRES scenarios are "baseline" (or "reference") scenarios, which means that they do not take into account any current or future measures to limit greenhouse gas (GHG) emissions (e.g., the Kyoto Protocol to the United Nations Framework Convention on Climate Change). SRES emissions projections are broadly comparable in range to the baseline projections that have been developed by the scientific community. There have been a number of comments on the SRES. Parson "et al." (2007) stated that the SRES represented "a substantial advance from prior scenarios". At the same time, there have been criticisms of the SRES. The most prominently publicized criticism focused on the fact that all but one of the participating models compared gross domestic product (GDP) across regions using market exchange rates (MER) instead of the more correct purchasing-power parity (PPP) approach. This criticism is discussed in the main SRES article. The SRREN assesses existing literature on renewable energy commercialisation for the mitigation of climate change. It was published in 2012 and covers the six most important renewable energy technologies in a transition, as well as their integration into present and future energy systems. It also considers the environmental and social consequences associated with these technologies, their costs, and strategies to overcome technical as well as non-technical obstacles to their application and diffusion. More than 130 authors from all over the world contributed to the preparation of the SRREN on a voluntary basis, along with more than 100 scientists who served as contributing authors. The SREX was published in 2012. It assesses the effect that climate change has on the threat of natural disasters and how nations can better manage an expected change in the frequency and intensity of severe weather patterns. It aims to serve as a resource for decision-makers preparing to manage the risks of these events. A potentially important area for consideration is also the detection of trends in extreme events and the attribution of these trends to human influence. The full report is 594 pages long.
More than 80 authors, 19 review editors, and more than 100 contributing authors from all over the world contributed to the preparation of SREX. When the Paris Agreement was adopted, the UNFCCC invited the IPCC to write a special report on how humanity could prevent global temperatures from rising more than 1.5 °C above pre-industrial levels. The completed report, the Special Report on Global Warming of 1.5 °C (SR15), was released on 8 October 2018. Its full title is "Global Warming of 1.5 °C, an IPCC special report on the impacts of global warming of 1.5 °C above pre-industrial levels and related global greenhouse gas emission pathways, in the context of strengthening the global response to the threat of climate change, sustainable development, and efforts to eradicate poverty". The report summarizes the findings of scientists, showing that keeping the temperature rise below 1.5 °C remains possible, but only through "rapid and far-reaching transitions in energy, land, urban and infrastructure..., and industrial systems". Meeting the Paris target of 1.5 °C is possible but would require "deep emissions reductions" and "rapid", "far-reaching and unprecedented changes in all aspects of society". To achieve the 1.5 °C target, CO2 emissions must decline by 45% (relative to 2010 levels) by 2030, reaching net zero by around 2050; deep reductions in non-CO2 emissions (such as nitrous oxide and methane) will also be required (an illustrative emissions-pathway sketch appears below). Under the pledges made by countries entering the Paris Agreement, a sharp rise of 3.1 to 3.7 °C is still expected to occur by 2100, and holding the rise to 1.5 °C would avoid the worst effects of even a 2 °C rise. However, warming of even 1.5 °C will still result in large-scale drought, famine, heat stress, species die-off, loss of entire ecosystems, and loss of habitable land, throwing more than 100 million people into poverty. Effects will be most drastic in arid regions, including the Middle East and the Sahel in Africa, where some areas are expected to retain fresh water after a 1.5 °C rise but to dry up completely if the rise reaches 2 °C. The final draft of the Special Report on Climate Change and Land (SRCCL), with the full title "Special Report on climate change, desertification, land degradation, sustainable land management, food security, and greenhouse gas fluxes in terrestrial ecosystems", was published online on 7 August 2019. The SRCCL consists of seven chapters: Chapter 1: Framing and Context; Chapter 2: Land-Climate Interactions; Chapter 3: Desertification; Chapter 4: Land Degradation; Chapter 5: Food Security (with supplementary material); Chapter 6: Interlinkages between desertification, land degradation, food security and GHG fluxes: synergies, trade-offs and integrated response options; and Chapter 7: Risk management and decision making in relation to sustainable development. The "Special Report on the Ocean and Cryosphere in a Changing Climate" (SROCC) was approved on 25 September 2019 in Monaco. Among other findings, the report concluded that sea levels could be up to two feet higher by the year 2100, even if efforts to reduce greenhouse gas emissions and to limit global warming are successful, and that coastal cities across the world could see so-called "storms of the century" at least once a year.
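As a rough illustration of the SR15 arithmetic referenced above (a 45% cut from 2010 levels by 2030 and net zero around 2050), the following toy sketch linearly interpolates an emissions index between those milestones. The linear shape, the normalization of 2010 emissions to 100, and all names are assumptions made for illustration; SR15's actual pathways are model-derived, not straight lines.

    # Toy linear pathway through the SR15 milestones: 100% of 2010-level CO2
    # emissions in 2010, 55% in 2030 (a 45% cut), and 0% (net zero) in 2050.
    # The straight-line interpolation is an illustrative assumption only.
    MILESTONES = [(2010, 100.0), (2030, 55.0), (2050, 0.0)]

    def emissions_index(year: int) -> float:
        """Linearly interpolate the emissions index for a given year."""
        if year <= MILESTONES[0][0]:
            return MILESTONES[0][1]
        if year >= MILESTONES[-1][0]:
            return MILESTONES[-1][1]
        for (y0, e0), (y1, e1) in zip(MILESTONES, MILESTONES[1:]):
            if y0 <= year <= y1:
                return e0 + (e1 - e0) * (year - y0) / (y1 - y0)

    for year in (2020, 2030, 2040, 2050):
        print(year, round(emissions_index(year), 1))
    # Prints: 2020 77.5, 2030 55.0, 2040 27.5, 2050 0.0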
Within the IPCC, the National Greenhouse Gas Inventories Programme develops methodologies to estimate emissions of greenhouse gases. This work has been undertaken since 1991 by the IPCC WGI in close collaboration with the Organisation for Economic Co-operation and Development and the International Energy Agency. The 1996 Guidelines for National Greenhouse Gas Inventories provide the methodological basis for the estimation of national greenhouse gas emissions inventories. Over time these guidelines have been complemented by good practice reports: "Good Practice Guidance and Uncertainty Management in National Greenhouse Gas Inventories" and "Good Practice Guidance for Land Use, Land-Use Change and Forestry". The 1996 guidelines and the two good practice reports are to be used by parties to the UNFCCC and to the Kyoto Protocol in their annual submissions of national greenhouse gas inventories. The 2006 "IPCC Guidelines for National Greenhouse Gas Inventories" is the latest version of these emission estimation methodologies, including a large number of default emission factors. Although the IPCC prepared this new version of the guidelines at the request of the parties to the UNFCCC, the methods have not yet been officially accepted for use in national greenhouse gas emissions reporting under the UNFCCC and the Kyoto Protocol. The IPCC concentrates its activities on the tasks allotted to it by the relevant WMO Executive Council and UNEP Governing Council resolutions and decisions, as well as on actions in support of the UNFCCC process. While the preparation of the assessment reports is a major IPCC function, it also supports other activities, such as the Data Distribution Centre and the National Greenhouse Gas Inventories Programme, required under the UNFCCC. This involves publishing default emission factors, which are used to derive emissions estimates from levels of fuel consumption, industrial production, and so on. The IPCC also often answers inquiries from the UNFCCC Subsidiary Body for Scientific and Technological Advice (SBSTA).
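The default-emission-factor method described above reduces, at its simplest tier, to multiplying activity data (fuel burned, goods produced, and so on) by a per-unit emission factor and summing the results. The sketch below shows that multiply-and-sum structure; the factor values are placeholders chosen for illustration, not actual IPCC default factors.

    # Illustrative emission estimate: emissions = activity data x emission factor.
    # Factor values here are placeholders, NOT actual IPCC default factors.
    ASSUMED_FACTORS_KG_CO2_PER_UNIT = {
        "diesel_litre": 2.7,     # placeholder kg CO2 per litre of diesel
        "coal_tonne": 2400.0,    # placeholder kg CO2 per tonne of coal
    }

    def estimate_emissions_kg(activity: dict) -> float:
        """Sum each activity level multiplied by its per-unit emission factor."""
        return sum(
            amount * ASSUMED_FACTORS_KG_CO2_PER_UNIT[fuel]
            for fuel, amount in activity.items()
        )

    # Example: 1,000 litres of diesel and 2 tonnes of coal.
    print(estimate_emissions_kg({"diesel_litre": 1000, "coal_tonne": 2}))
    # 2700 + 4800 = 7500.0 kg CO2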
In December 2007, the IPCC was awarded the Nobel Peace Prize "for their efforts to build up and disseminate greater knowledge about man-made climate change, and to lay the foundations for the measures that are needed to counteract such change". The award was shared with former U.S. Vice President Al Gore for his work on climate change and the documentary "An Inconvenient Truth". There is widespread support for the IPCC in the scientific community, which is reflected in publications by other scientific bodies and experts; however, criticisms of the IPCC have also been made. Since 2010 the IPCC has come under previously unparalleled public and political scrutiny. The global IPCC consensus approach has been challenged internally and externally, for example during the 2009 Climatic Research Unit email controversy ("Climategate"), and some contest the IPCC's position as an information monopoly, with consequences for both the quality and the impact of its work. A paragraph in chapter 10 of the 2007 Working Group II report ("Impacts, Adaptation and Vulnerability") included a projection that Himalayan glaciers could disappear by 2035. This projection was not included in the final summary for policymakers. The IPCC has since acknowledged that the date is incorrect, while reaffirming that the conclusion in the final summary was robust, and expressed regret for "the poor application of well-established IPCC procedures in this instance". The 2035 date had been quoted correctly by the IPCC from a WWF report, which had in turn misquoted its own source, an ICSI report, "Variations of Snow and Ice in the past and at present on a Global and Regional Scale". Rajendra K. Pachauri responded in an interview with "Science". Former IPCC chairman Robert Watson said of the Himalayan glaciers estimate, "The mistakes all appear to have gone in the direction of making it seem like climate change is more serious by overstating the impact. That is worrying. The IPCC needs to look at this trend in the errors and ask why it happened". Martin Parry, a climate expert who had been co-chair of IPCC Working Group II, said that "what began with a single unfortunate error over Himalayan glaciers has become a clamour without substance", and that the IPCC had investigated the other alleged mistakes, which were "generally unfounded and also marginal to the assessment". The Third Assessment Report (TAR) prominently featured a graph labeled "Millennial Northern Hemisphere temperature reconstruction", based on a 1999 paper by Michael E. Mann, Raymond S. Bradley, and Malcolm K. Hughes (MBH99), which has been referred to as the "hockey stick graph". This graph extended a similar graph in the IPCC Second Assessment Report of 1995 and differed from a schematic in the First Assessment Report that lacked temperature units but appeared to depict larger global temperature variations over the past 1,000 years, with higher temperatures during the Medieval Warm Period than in the mid-20th century. The schematic was not an actual plot of data; it was based on a diagram of temperatures in central England, with temperatures increased on the basis of documentary evidence of medieval vineyards in England. Even with this increase, the maximum it showed for the Medieval Warm Period did not reach temperatures recorded in central England in 2007. The MBH99 finding was supported by other cited reconstructions using differing data and methods; the Jones et al. and Briffa reconstructions were overlaid with the MBH99 reconstruction in Figure 2.21 of the IPCC report. These studies were widely presented as demonstrating that the current warming period is exceptional in comparison to temperatures between 1000 and 1900, and the MBH99-based graph featured prominently in publicity. Even at the draft stage, this finding was disputed by contrarians: in May 2000, Fred Singer's Science and Environmental Policy Project held a press event on Capitol Hill in Washington, D.C., featuring comments on the graph, and Wibjörn Karlén and Singer argued against the graph at a United States Senate Committee on Commerce, Science and Transportation hearing on 18 July 2000. Contrarian John Lawrence Daly featured a modified version of the IPCC 1990 schematic, which he misidentified as appearing in the IPCC 1995 report, and argued that "Overturning its own previous view in the 1995 report, the IPCC presented the 'Hockey Stick' as the new orthodoxy with hardly an apology or explanation for the abrupt U-turn since its 1995 report". Criticism of the MBH99 reconstruction in a review paper, which was quickly discredited in the Soon and Baliunas controversy, was picked up by the Bush administration, and in a Senate speech US Republican senator James Inhofe alleged that "manmade global warming is the greatest hoax ever perpetrated on the American people".
The data and methodology used to produce the "hockey stick graph" were criticized in papers by Stephen McIntyre and Ross McKitrick, and the criticisms in those papers were in turn examined by other studies and comprehensively refuted by later work, which showed errors in the methods used by McIntyre and McKitrick. On 23 June 2005, Rep. Joe Barton, chairman of the House Committee on Energy and Commerce, wrote joint letters with Ed Whitfield, chairman of the Subcommittee on Oversight and Investigations, demanding full records on climate research, as well as personal information about their finances and careers, from Mann, Bradley, and Hughes. Sherwood Boehlert, chairman of the House Science Committee, said this was a "misguided and illegitimate investigation", apparently aimed at intimidating scientists, and at his request the U.S. National Academy of Sciences arranged for its National Research Council to set up a special investigation. The National Research Council's report agreed that there were some statistical failings, but these had little effect on the graph, which was generally correct. In a 2006 letter to "Nature", Mann, Bradley, and Hughes pointed out that their original article had said that "more widespread high-resolution data are needed before more confident conclusions can be reached" and that the uncertainties were "the point of the article". The IPCC Fourth Assessment Report (AR4), published in 2007, featured a graph showing 12 proxy-based temperature reconstructions, including the three highlighted in the 2001 Third Assessment Report (TAR); as before, some had been recalibrated by newer studies. In addition, the analysis of the Medieval Warm Period cited further reconstructions, including work cited in the TAR. Ten of these 14 reconstructions covered 1,000 years or longer. Most reconstructions shared some data series, particularly tree ring data, but newer reconstructions used additional data and covered a wider area, using a variety of statistical methods. The section also discussed the divergence problem affecting certain tree ring data. Some critics have contended that the IPCC reports tend to be conservative, consistently underestimating the pace and impacts of global warming and reporting only the "lowest common denominator" findings. On the eve of the publication of the Fourth Assessment Report in 2007, another study was published suggesting that temperatures and sea levels had been rising at or above the maximum rates proposed in the 2001 Third Assessment Report. The study compared IPCC 2001 projections of temperature and sea level change with observations: over the six years studied, the actual temperature rise was near the top end of the range given by the 2001 projection, and the actual sea level rise was above the top of the projected range. Another example of scientific research suggesting that previous IPCC estimates, far from overstating dangers and risks, have actually understated them is a study on projected rises in sea levels. When the researchers' analysis was "applied to the possible scenarios outlined by the Intergovernmental Panel on Climate Change (IPCC), the researchers found that in 2100 sea levels would be 0.5–1.4 m [50–140 cm] above 1990 levels. These values are much greater than the 9–88 cm as projected by the IPCC itself in its Third Assessment Report, published in 2001". This may have been due, in part, to the expanding human understanding of climate.
Greg Holland of the National Center for Atmospheric Research, who reviewed a multi-meter sea level rise study by Jim Hansen, noted: "There is no doubt that the sea level rise, within the IPCC, is a very conservative number, so the truth lies somewhere between IPCC and Jim." In reporting criticism by some scientists that the IPCC's then-impending January 2007 report understated certain risks, particularly sea level rises, an AP story quoted Stefan Rahmstorf, professor of physics and oceanography at Potsdam University, as saying, "In a way, it is one of the strengths of the IPCC to be very conservative and cautious and not overstate any climate change risk". In his December 2006 book and in an interview on Fox News on 31 January 2007, energy expert Joseph Romm noted that the IPCC Fourth Assessment Report was already out of date, omitting recent observations and factors contributing to global warming, such as the release of greenhouse gases from thawing tundra. Political influence on the IPCC has been documented by the release of a memo from ExxonMobil to the Bush administration, and by its effects on the IPCC's leadership. The memo led to strong Bush administration lobbying, evidently at the behest of ExxonMobil, to oust Robert Watson, a climate scientist, from the IPCC chairmanship, and to have him replaced by Pachauri, who was seen at the time as more mild-mannered and industry-friendly. Michael Oppenheimer, a long-time participant in the IPCC and coordinating lead author of the Fifth Assessment Report, conceded in "Science Magazine"'s "State of the Planet 2008–2009" some limitations of the IPCC consensus approach and called for concurrent, smaller assessments of special problems instead of the large-scale approach of the previous IPCC assessment reports, arguing that it has become more important to provide a broader exploration of uncertainties. Others likewise see mixed blessings in the drive for consensus within the IPCC process and have asked for dissenting or minority positions to be included, or for statements about uncertainties to be improved. The IPCC process on climate change, and its efficiency and success, has been compared with dealings with other environmental challenges (compare ozone depletion and global warming). In the case of ozone depletion, global regulation based on the Montreal Protocol has been successful; in the case of climate change, the Kyoto Protocol failed. The ozone case has been used to assess the efficiency of the IPCC process. The IPCC is in a lockstep situation, having built a broad scientific consensus while states and governments still pursue different, if not opposing, goals. The underlying linear model of policy-making, according to which the more knowledge we have, the better the political response will be, has been questioned. According to Sheldon Ungar's comparison with global warming, the actors in the ozone depletion case had a better understanding of scientific ignorance and uncertainties. The ozone case was communicated to lay persons "with easy-to-understand bridging metaphors derived from the popular culture" and related to "immediate risks with everyday relevance", while public opinion on climate change sees no imminent danger. The stepwise mitigation of the ozone layer challenge was also based on successfully reducing regional burden-sharing conflicts. In the case of the IPCC conclusions and the failure of the Kyoto Protocol, varying regional cost-benefit analyses and burden-sharing conflicts over the distribution of emission reductions remain an unsolved problem.
In the UK, a report for a House of Lords committee urged the IPCC to include better assessments of the costs and benefits of climate change, but the Stern Review, ordered by the UK government, made a stronger argument in favor of combating human-made climate change. Since the IPCC does not carry out its own research, it operates on the basis of scientific papers and independently documented results from other scientific bodies, and its schedule for producing reports requires a deadline for submissions prior to a report's final release. In principle, this means that any significant new evidence or events that change our understanding of climate science between this deadline and publication of an IPCC report cannot be included. In an area of science where scientific understanding is rapidly changing, this has been raised as a serious shortcoming in a body widely regarded as the ultimate authority on the science. However, there has generally been a steady evolution of key findings and levels of scientific confidence from one assessment report to the next. The submission deadlines for the Fourth Assessment Report (AR4) differed for the reports of each Working Group. Deadlines for the Working Group I report were adjusted during the drafting and review process to ensure that reviewers had access to unpublished material being cited by the authors; the final deadline for cited publications was 24 July 2006. The final WG I report was released on 30 April 2007, and the final AR4 Synthesis Report was released on 17 November 2007. Rajendra Pachauri, the IPCC chair, admitted at the launch of this report that since the IPCC began work on it, scientists had recorded "much stronger trends in climate change", like the unforeseen dramatic melting of polar ice in the summer of 2007, and added, "that means you better start with intervention much earlier". Scientists who participate in the IPCC assessment process do so without any compensation other than the normal salaries they receive from their home institutions. The process is labor-intensive, diverting time and resources from participating scientists' research programs, and concerns have been raised that the large uncompensated time commitment and disruption to their own research may discourage qualified scientists from participating. In May 2010, Pachauri noted that the IPCC had no process for responding to errors or flaws once it issued a report: the panels of scientists producing the reports were disbanded after publication. In February 2010, in response to controversies regarding claims in the Fourth Assessment Report, five climate scientists, all contributing or lead IPCC report authors, wrote in the journal "Nature" calling for changes to the IPCC. They suggested a range of new organizational options, from tightening the selection of lead authors and contributors, to replacing the panel with a small permanent body, or even turning the whole climate science assessment process into a moderated "living" Wikipedia-IPCC. Other recommendations included that the panel employ full-time staff and remove government oversight from its processes to avoid political interference. The 2018 report "What Lies Beneath" by Breakthrough (the National Centre for Climate Restoration), with contributions from Kevin Anderson, James Hansen, Michael E. Mann, Michael Oppenheimer, Naomi Oreskes, Stefan Rahmstorf, Eric Rignot, Hans Joachim Schellnhuber, Kevin Trenberth, and others,
urges the IPCC, the wider UNFCCC negotiations, and national policy makers to change their approach. The authors write: "We urgently require a reframing of scientific research within an existential risk-management framework." In March 2010, at the invitation of the United Nations secretary-general and the chair of the IPCC, the InterAcademy Council (IAC) was asked to review the IPCC's processes for developing its reports. The IAC panel, chaired by Harold Tafler Shapiro, convened on 14 May 2010 and released its report on 1 September 2010. The IAC found that "the IPCC assessment process has been successful overall", but made seven formal recommendations for improving it, and also advised that the IPCC avoid appearing to advocate specific policies in response to its scientific conclusions. Commenting on the IAC report, "Nature News" noted that "the proposals were met with a largely favourable response from climate researchers who are eager to move on after the media scandals and credibility challenges that have rocked the United Nations body during the past nine months". Papers and electronic files of certain working groups of the IPCC, including reviews and comments on drafts of their Assessment Reports, are archived at the Environmental Science and Public Policy Archives in the Harvard Library. Various scientific bodies have issued official statements endorsing and concurring with the findings of the IPCC.
IBM Personal Computer The IBM Personal Computer, commonly known as the IBM PC, is the original version of the IBM PC compatible hardware platform. It is IBM model number 5150 and was introduced on August 12, 1981. It was created by a team of engineers and designers under the direction of Don Estridge in Boca Raton, Florida. The generic term "personal computer" ("PC") was in use years before 1981, applied as early as 1972 to the Xerox PARC Alto, but the term "PC" came to mean more specifically a desktop microcomputer compatible with IBM's "Personal Computer" branded products. The machine was based on an open architecture, and third-party suppliers sprang up to provide peripheral devices, expansion cards, and software. IBM had a substantial influence on the personal computer market in standardizing a platform for personal computers, and "IBM compatible" became an important criterion for sales growth. Only the Apple Macintosh family kept a significant share of the microcomputer market after the 1980s without compatibility with the IBM Personal Computer. International Business Machines (IBM) had a 62% share of the mainframe computer market in the early 1980s. Its slow entry into the minicomputer market in the 1960s, however, let new rivals such as Digital Equipment Corporation (DEC) earn billions in revenue, and IBM did not want to repeat the mistake with personal computers. In the late 1970s, the new industry was dominated by the Commodore PET, the Atari 8-bit family, the Apple II series, Tandy Corporation's TRS-80, and various CP/M machines. The microcomputer market was large enough for IBM's attention, with $150 million in sales by 1979 and projected annual growth of more than 40% in the early 1980s. Other large technology companies had entered it, such as Hewlett-Packard (HP), Texas Instruments (TI), and Data General, and some large IBM customers were buying Apples. IBM did not want a personal computer with another company's logo on mainframe customers' desks, so introducing its own was both an experiment in a new market and a defense against rivals. In 1980 and 1981, rumors spread of an IBM personal computer, perhaps a miniaturized version of the IBM System/370, while Matsushita acknowledged that it had discussed with IBM the possibility of manufacturing a personal computer for the American company. The Japanese project, a Zilog Z80-based computer codenamed "Go", ended before the 1981 release of the American-designed IBM PC, codenamed "Chess", and the two simultaneous projects confused rumors about the forthcoming product. Whether IBM had waited too long to enter an industry in which Tandy, Atari, and others were already successful was unclear. Data General's and TI's small computers were not very successful, but observers expected AT&T to soon enter the computer industry, and other large companies such as Exxon, Montgomery Ward, Pentel, and Sony were designing their own microcomputers. Xerox quickly produced the 820 to introduce a personal computer before IBM, becoming the second Fortune 500 company after Tandy to do so, and had its Xerox PARC laboratory's sophisticated technology. One observer stated that "IBM bringing out a personal computer would be like teaching an elephant to tap dance". Successful microcomputer company Vector Graphic's fiscal 1980 revenue was $12 million. A single IBM computer in the early 1960s cost as much as $9 million, occupied a quarter acre of air-conditioned space, and had a staff of 60 people; in 1980 IBM's least expensive computer, the 5120, still cost about $13,500.
The "Colossus of Armonk" only sold through its own sales force and had no experience with resellers or retail stores. Another observer claimed that IBM made decisions so slowly that, when tested, "what they found is that it would take at least nine months to ship an empty box", and an employee complained that "IBM has more committees than the U.S. Government". As with other large computer companies, its new products typically required about four to five years for development. While IBM traditionally let others pioneer a new market—it released a commercial computer a year after Remington Rand's UNIVAC in 1951, but within five years had 85% of the market—the personal-computer development and pricing cycles were much faster than for mainframes, with products designed in a few months and obsolete quickly. Many in the microcomputer industry expected that the personal computer would be, Bill Gates of Microsoft recalled, "the overthrow of IBM". They resented the company's power and wealth, and disliked the perception that an industry founded by startups needed a latecomer so staid that it had a strict dress code and employee songbook, and prohibited salesmen with client visits in the afternoon from drinking alcohol at lunch. The potential importance to microcomputers of a company so prestigious, that a popular saying in American companies stated "No one ever got fired for buying IBM", was nonetheless clear. "InfoWorld", which described itself as "The Newsweekly for Microcomputer Users", stated that "for my grandmother, and for millions of people like her, "IBM " and "computer" are synonymous". "Byte" ("The Small Systems Journal") stated in an editorial just before the announcement of the IBM PC: The editorial acknowledged that "some factions in our industry have looked upon IBM as the 'enemy, but concluded with optimism: "I want to see personal computing take a giant step." Desktop sized programmable calculators by HP had evolved into the HP 9830 BASIC language computer by 1972. In 1972–1973 a team led by Dr. Paul Friedl at the IBM Los Gatos Scientific Center developed a portable computer prototype called SCAMP (Special Computer APL Machine Portable) based on the IBM PALM processor with a Philips compact cassette drive, small CRT, and full-function keyboard. SCAMP emulates the IBM 1130r to run APL\1130. In 1973 APL was generally available only on mainframe computers, and most desktop sized microcomputers such as the Wang 2200 or HP 9800 offered only BASIC. Because it was the first to emulate APL\1130 performance on a portable, single-user computer, "PC Magazine" in 1983 designated SCAMP a "revolutionary concept" and "the world's first personal computer". The prototype is in the Smithsonian Institution. A non-working industrial design model was also created in 1973 by industrial designer Tom Hardy illustrating how the SCAMP engineering prototype could be transformed into a usable product design for the marketplace. This design model was requested by IBM executive Bill Lowe to complement the engineering prototype in his early efforts to demonstrate the viability of creating a single-user computer. Successful demonstrations of the 1973 SCAMP prototype led to the IBM 5100 portable microcomputer in 1975. In the late 1960s such a machine would have been nearly as large as two desks and would have weighed about half a ton. The 5100 is a complete computer system programmable in BASIC or APL, with a small built-in CRT monitor, keyboard, and tape drive for data storage. 
It was also very expensive, costing up to US$20,000; the computer was designed for professional and scientific customers, not business users or hobbyists. "BYTE" in 1975 announced the 5100 with the headline "Welcome, IBM, to personal computing", but "PC Magazine" in 1984 described 5100s as "little mainframes" and stated that "as personal computers, these machines were dismal failures ... the antithesis of user-friendly", with no IBM support for third-party software. Despite news reports that the PC was the first IBM product without a model number, it was designated the IBM 5150, putting it in the "5100" series, though its architecture is not descended from the IBM 5100. Later models followed the trend: for example, the IBM Portable Personal Computer, PC/XT, and PC AT are IBM machine types 5155, 5160, and 5170, respectively. Following SCAMP, the IBM Boca Raton, Florida, laboratory created several single-user computer design concepts to support Lowe's ongoing effort to convince IBM there was a strategic opportunity in the personal computer business. A selection of these early IBM design concepts created by Hardy is highlighted in the book "DELETE: A Design History of Computer Vapourware". One such concept in 1977, code-named Aquarius, was a working prototype using advanced bubble memory cartridges. While this design was more powerful and smaller than the Apple II, launched the same year, the advanced bubble technology was deemed unstable and not ready for mass production. Some employees opposed IBM entering the market. One said, "Why on earth would you care about the personal computer? It has nothing at all to do with office automation". The company had considered personal computer designs but determined that it could not build a personal computer profitably. Walden C. Rhines of TI met with a Boca Raton group in the late 1970s that was considering the TMS9900 16-bit microprocessor for a secret project. He wrote, "We wouldn't know until 1981 just what we had lost" by not being chosen. IBM President John Opel was not among those skeptical of personal computers. He and CEO Frank Cary had created more than a dozen semi-autonomous "Independent Business Units" (IBUs) to encourage innovation; "Fortune" called them "how to start your own company without leaving IBM". Lowe became the first head of the Entry Level Systems IBU in Boca Raton, and his team researched the market. Computer dealers were very interested in selling an IBM product, but they told Lowe that the company could not design, sell, or service it as IBM had previously done. An IBM microcomputer, they said, must be composed of standard parts that store employees could repair. Dealers disliked Apple's autocratic business practices, including a shortage of the Apple II while the company focused on the more sophisticated Apple III. They saw no alternative, however; IBM only sold directly, so any IBM personal computer would only hurt them, and they doubted that IBM's traditional sales methods and bureaucracy would change. Schools in Broward County—near Boca Raton—purchased Apples, a consequence of IBM lacking a personal computer. Atari proposed in 1980 that it act as original equipment manufacturer for an IBM microcomputer. Lowe knew that the company needed to enter the market quickly, so he met with Opel, Cary, and others on the Corporate Management Committee in 1980.
He demonstrated the proposal with an industrial design model by Hardy based on the Atari 800 platform and suggested acquiring Atari "because we can't do this within the culture of IBM". Cary agreed about the culture, observing that IBM would need "four years and three hundred people" to develop its own personal computer; Lowe promised one in a year if it were done without traditional IBM methods. Instead of acquiring Atari, the committee allowed him to form an independent group of employees called "the Dirty Dozen", led by engineer Bill Sydnes, and Lowe promised that they could design a prototype in 30 days. The crude prototype barely worked when Lowe demonstrated it in August, but he presented a detailed business plan proposing that the new computer have an open architecture, use non-proprietary components and software, and be sold through retail stores, all contrary to IBM practice. Lowe estimated sales of 220,000 computers over three years, more than IBM's entire installed base. The committee agreed that Lowe's approach was the most likely to succeed, and it approved turning the group into another IBU, code-named "Project Chess", to develop "Acorn", with unusually large funding to help achieve the goal of introducing the product within one year of the August demonstration. Don Estridge became the head of Chess. Cary told the team to do whatever was necessary to develop an IBM personal computer quickly. Key members included Sydnes, Lewis Eggebrecht, David Bradley, Mark Dean, and David O'Connor. Many were already hobbyists who owned their own computers, including Estridge, who had an Apple II. Industrial designer Hardy was also assigned to the project. The team received permission to expand to 150 people by the end of 1980, and at one point more than 500 IBM employees called in a single day asking to join. IBM was normally vertically integrated, internally developing all important hardware and software using only what was available in internal catalogs of IBM components, and purchasing only parts like transformers and semiconductors. The company's purchase of Rolm in 1984 was its first acquisition in 18 years. IBM discouraged customers from purchasing compatible third-party products. IBM had begun using some outside components before the PC to save money and time, but when designing the PC the company avoided vertical integration as much as possible, choosing, for example, to license Microsoft BASIC despite having a BASIC of its own for mainframes. (Estridge said that, unlike IBM's own version, "Microsoft BASIC had hundreds of thousands of users around the world. How are you going to argue with that?") Although the company denied doing so, many observers concluded that IBM intentionally emulated Apple when designing the PC. The many Apple II owners on the team influenced its decision to design the computer with an open architecture and to publish technical information so others could create software and expansion-slot peripherals. Eggebrecht wanted to use the Motorola 68000, Gates recalled. Rhines later said that the 68000 "was undoubtedly the hands-on winner" among 16-bit CPUs for an IBM microcomputer: big-endian like other IBM computers, and more powerful than the TMS9900 or the Intel 8088. The 68000 was not production-ready like the others, however; thus "Motorola, with its superior technology, lost the single most important design contest of the last 50 years", he said. Project Go, from a rival division, planned to use an 8-bit CPU, and Gates said that Project Chess also planned to do so until he convinced IBM to choose the 8088.
Although the company knew that it could not avoid competition from third-party software on proprietary hardware—Digital Research released CP/M-86 for the IBM Displaywriter, for example—it considered using the IBM 801 RISC processor and its operating system, developed at the Thomas J. Watson Research Center in Yorktown Heights, New York. The 801 processor was more than an order of magnitude more powerful than the Intel 8088, and its operating system more advanced than the PC DOS 1.0 operating system from Microsoft. Ruling out an in-house solution made the team's job much easier and may have avoided a delay in the schedule, but the ultimate consequences of this decision for IBM were far-reaching. IBM had recently developed the IBM System/23 Datamaster business microcomputer, which used a processor and other chips from Intel; familiarity with them, and the immediate availability of the 8088, was a reason for choosing it for the PC. The 62-pin expansion bus slots were designed to be similar to the Datamaster slots. Differences from the Datamaster included avoiding an all-in-one design while limiting the computer's size so that it fit on a standard desktop with the keyboard (also similar to the Datamaster's), and using 5¼-inch disk drives instead of 8-inch ones. Delays due to the in-house development of the Datamaster software were one reason why IBM chose Microsoft BASIC—already available for the 8088—and published available technical information to encourage third-party developers. IBM chose the 8088 over the similar but superior 8086 because Intel offered a better price for the former and could provide more units, and the 8088's 8-bit bus reduced the cost of the rest of the computer. Gates praised Eggebrecht for designing the 8088-based motherboard in 40 days, describing it as "one of the most phenomenal projects". The IBU built a working prototype in four months and made the first internal demonstration by January 1981. The design of the computer was essentially complete by April 1981, when the manufacturing team took over the project. IBM would not be able to make a profit if it used only its own hardware for Acorn; to save time and money, the IBU built the machine with commercial off-the-shelf parts from original equipment manufacturers whenever possible, with only final assembly occurring in Boca Raton at a plant Estridge designed. The IBU would decide whether it would be more economical to "make or buy" each manufacturing step. Various IBM divisions competed with outsiders for the first time to build parts of the new computer: a North Carolina IBM factory built the keyboard, the Endicott, New York, factory had to lower its bid for printed circuit boards, and a Taiwanese company built the monitor. The IBU chose an existing monitor from IBM Japan and an Epson printer. Because of the off-the-shelf parts, only the system unit and keyboard have unique IBM industrial design elements. The IBM copyright appears only in the ROM BIOS and on the company logo, and the company reportedly received no patents on the PC, with outsiders manufacturing 90% of it. Because the product would carry the IBM logo, the only corporate division the IBU could not bypass was the Quality Assurance Unit, part of why IBM did not use the 68000. A component manufacturer described the process of being selected as a supplier as rigorous and "absolutely amazing", with IBM inspectors even testing solder flux; the inspectors stayed after selection, monitoring and helping to improve the manufacturing process.
IBM's size overwhelmed other companies; "a hundred IBM engineers" reportedly visited Mitel to meet with two of the latter's employees about a problem, according to "The New York Times". Another aspect of IBM that did not change was secrecy; employees at Yorktown knew nothing of Boca Raton's activities. Those working on the project, within and outside of IBM, were under strict confidentiality agreements. When an individual mentioned in public on a Saturday that his company was working on software for a new IBM computer, IBM security appeared at the company on Monday to investigate the leak. After an IBM official discovered printouts in a supplier's garbage, IBM persuaded the supplier to purchase a paper shredder. Management Science America did not know until after agreeing to buy Peachtree Software in 1981 that the latter was working on software for the PC. Developers such as Software Arts received breadboard prototype computers in soldered boxes lined with lead to block X-rays, and had to keep them in locked, windowless rooms; to develop software, Microsoft emulated the PC on a DEC minicomputer and used the prototype for debugging. After the PC's debut, IBM Boca Raton employees continued to decline to discuss their jobs in public. One writer compared the "silence" after asking an employee about his role at the company to "hit[ting] the wall at the Boston Marathon: the conversation is over". After developing it in 12 months—faster than any other hardware product in company history—IBM announced the Personal Computer on August 12, 1981. Pricing started at $1,565 for a configuration with 16K RAM, Color Graphics Adapter, and no disk drives. The company intentionally set prices for it and other configurations that were comparable to those of Apple and other rivals; the Datamaster, announced two weeks earlier as the previous least-expensive IBM computer, cost $10,000. What Dan Bricklin described as "pretty competitive" pricing surprised him and other Software Arts employees. One analyst stated that IBM "has taken the gloves off", while the company said "we suggest [the PC's price] invites comparison". "BYTE" described IBM as having "the strongest marketing organization in the world", but the PC's marketing also differed from that of previous products. The company was aware of its strong corporate reputation among potential customers; an early advertisement began "Presenting the IBM of Personal Computers". Estridge recalled that "The most important thing we learned was that how people reacted to a personal computer emotionally was almost more important than what they did with it". Advertisements emphasized the novelty of an individual owning an IBM computer, describing "a product "you" may have a personal interest in" and asking readers to think "'My own IBM computer. Imagine that' ... it's yours. For your business, your project, your department, your class, your family and, indeed, for yourself". In addition to the existing corporate sales force, IBM opened its own Product Center retail stores. After studying Apple's successful distribution network, the company surprised the industry by selling through others for the first time, including ComputerLand and Sears Roebuck. Because retail stores receive revenue from repairing computers and providing warranty service, IBM broke a 70-year tradition by permitting and training non-IBM service personnel to fix the PC.
IBM considered Alan Alda, Beverly Sills, Kermit the Frog, and Billy Martin as possible celebrity endorsers for the PC, but chose Charlie Chaplin's Little Tramp character, played by Billy Scudder, for a series of advertisements based on Chaplin's films. Chaplin's film "Modern Times" expressed his opposition to big business, mechanization, and technological efficiency, but the $36-million marketing campaign made Chaplin the (according to "Creative Computing") "warm cuddly" mascot of one of the world's largest companies. Chaplin and his character became so widely associated with IBM that others used his bowler hat and cane to represent or satirize the company. Chaplin's estate sued those who used the trademark without permission, yet "PC Magazine"'s April 1983 issue had 12 advertisements which referred to the Little Tramp. Perhaps Chess's most unusual decision for IBM was to publish the PC's technical specifications, allowing outsiders to create products for it. "We encourage third-party suppliers ... we are delighted to have them", the company stated. Although the team began dogfooding before the PC's debut by managing business operations on prototypes, and despite IBM's $5.3 billion R&D budget in 1982—larger than the total revenue of many competitors—the company did not sell internally developed PC software until April 1984, instead relying on already established software companies. Microsoft, Personal Software, and Peachtree Software were among the developers of nine launch titles, including EasyWriter and VisiCalc, all already available for other computers. The company contacted Microsoft even before the official approval of Chess, and it and others received cooperation that was, "PC Magazine" said, "unheard of" for IBM. Such openness surprised observers; "BYTE" called it "striking" and "startling", and one developer reported that "it's a very different IBM". Another said "They were very open and helpful about giving us all the technical information we needed. The feeling was so radically different—it's like stepping out into a warm breeze." He concluded, "After years of hassling—fighting the Not-Invented-Here attitude—we're the gods." Most other personal-computer companies did not disclose technical details. Tandy hoped to monopolize sales of TRS-80 software and peripherals; its Radio Shack stores sold only Tandy products, and third-party developers found selling their offerings difficult. TI intentionally made developing third-party TI-99/4A software difficult, even requiring a lockout chip in cartridges. IBM itself kept its mainframe technology so secret that rivals were indicted for industrial espionage. For the PC, however, IBM immediately released detailed information. The US$36 "IBM PC Technical Reference Manual" included complete circuit schematics, commented ROM BIOS source code, and other engineering and programming information for all of IBM's PC-related hardware, plus instructions on designing third-party peripherals. It was so comprehensive that one reviewer suggested that the manual could serve as a university textbook, and so clear that a developer claimed that he could design an expansion card without seeing the physical computer. IBM marketed the technical manual in full-page color print advertisements, stating that "our software story is still being written. Maybe by you". Sydnes stated that "The definition of a personal computer "is" third-party hardware and software".
Estridge said that IBM did not keep software development proprietary because it would have to "out-VisiCalc VisiCorp and out-Peachtree Peachtree—and you just can't do that". Another advertisement told developers that the company would consider publishing software for "Education. Entertainment. Personal finance. Data management. Self-improvement. Games. Communications. And yes, business." Estridge explicitly invited small, "cottage" amateur and professional developers to create products "with", he said, "our logo and our support". IBM sold the PC at a large discount to employees, encouraged them to write software, and distributed a catalog of inexpensive software written by individuals that might not otherwise appear in public. The announcement by "a company whose name is synonymous with computers", the "Times" said, gave credibility to the new industry. The press reported on most details of the PC before the official announcement; the only surprise was that IBM provided no internally developed software, offering DOS (based on 86-DOS) as the operating system. "BYTE" was correct in predicting that an IBM personal computer would nonetheless receive much public attention. Its rapid development amazed observers, as did the Colossus of Armonk offering Microsoft "Adventure" as a launch title (a video game that, its press release stated, brought "players into a fantasy world of caves and treasures"); the company even offered an optional joystick port. Future Computing estimated that "IBM's Billion Dollar Baby" would have $2.3 billion in hardware sales by 1986. Within seven weeks of the announcement, David Bunnell, an editor at Osborne/McGraw-Hill, helped found "PC Magazine", the first periodical for the new computer. The industry awaited and feared IBM's announcement for months. "InfoWorld" reported that "On the morning of the announcement, phone calls to IBM's competitors revealed that almost everyone was having an 'executive meeting' involving the high-level officials who might be in a position to publicly react to the IBM announcement". Although the new IBM computer competed against their products, rivals were publicly skeptical about the PC. Adam Osborne said that unlike his Osborne 1, "when you buy a computer from IBM, you buy a la carte. By the time you have a computer that does anything, it will cost more than an Apple. I don't think Apple has anything to worry about". Apple's Mike Markkula agreed that IBM's product was more expensive than the Apple II, and claimed that the Apple III "offers better performance". He denied that the IBM PC offered more memory, stating that his company could offer more than 128K "but frankly we don't know what anyone would do with that memory". At Tandy, John Roach said "I don't think it's that significant"; Jon Shirley admitted that IBM had a "legendary service reputation" but claimed that the thousands of Radio Shack stores "can provide better service", while predicting that the IBM PC's "major market will be IBM addicts"; and Garland P. Asher said that he was "relieved that whatever they were going to do, they finally did it. I'm certainly relieved at the pricing". Tandy could undersell a $3,000 IBM computer by $1,000, he stated.
Many criticized the PC's design as outdated and not innovative, and believed that alleged weaknesses, such as the use of single-sided, single-density disks with less storage than the computer's RAM, and limited graphics capability (customers who wanted both color and high-quality text needed two graphics cards and two monitors), existed because the company was uncertain about the market and was experimenting before releasing a better computer. (Estridge boasted, "Many ... said that there was nothing technologically new in this machine. That was the best news we could have had; we actually had done what we had set out to do".) Although the "Times" said that IBM would "pose the stiffest challenge yet to Apple and to Tandy", they and Commodore—which together held more than 50% of the personal-computer market—had many advantages. While IBM began with one microcomputer, little available hardware or software, and a couple of hundred dealers, Radio Shack had sold more than 350,000 computers. It had 14 million customers and 8,000 stores—more than McDonald's—that sold only its own broad range of computers and accessories. Apple had sold more than 250,000 computers, had five times as many dealers in the US as IBM, and had an established international distribution network. Hundreds of independent developers produced software and peripherals for Tandy and Apple computers; at least ten Apple databases and ten word processors were available, while the PC had no databases and one word processor. Altos, Vector Graphic, Cromemco, and Zenith were among those making computers that ran CP/M, "InfoWorld" said, ""the" small-computer operating system". Radio Shack and Apple hoped that an IBM personal computer would help grow the market. Steve Jobs at Apple ordered a team to examine an IBM PC. After finding it unimpressive—Chris Espinosa called the computer "a half-assed, hackneyed attempt"—the company confidently purchased a full-page advertisement in "The Wall Street Journal" with the headline "Welcome, IBM. Seriously". Gates was at Apple headquarters the day of IBM's announcement and later said "They didn't seem to care. It took them a full year to realize what had happened". The PC was immediately successful. "PC Magazine" later wrote that "IBM's biggest error was in underestimating the demand for the PC". "BYTE" reported a rumor that more than 40,000 were ordered on the day of the announcement; one dealer received 22 $1,000 deposits from customers although he could not promise a delivery date. John Dvorak recalled that a dealer that day praised the computer as an "incredible winner, and IBM knows how to treat us — none of the Apple arrogance". The company could have sold its entire projected first-year production to employees, and IBM customers that were reluctant to purchase Apples were glad to buy microcomputers from their traditional supplier. The computer began shipping in October, ahead of schedule; by then some referred to it simply as "the PC". "BYTE" estimated that 90% of the 40,000 first-day orders were from software developers. By COMDEX in November, Tecmar had developed 20 products, including memory expansions and expansion chassis, surprising even IBM. Jerry Pournelle reported after attending the West Coast Computer Faire in early 1982 that because IBM "encourages amateurs" with "documents that tell all", "an explosion of [third-party] hardware and software" was visible at the convention.
Many manufacturers of professional business application software, who had been planning or developing versions for the Apple II, promptly switched their efforts to the IBM PC when it was announced. Often, these products needed the capacity and speed of a hard disk. Although IBM did not offer a hard-disk option for almost two years after the PC's introduction, business sales were nonetheless catalyzed by the simultaneous availability of hard-disk subsystems, like those of Tallgrass Technologies, which were sold in ComputerLand stores alongside the IBM 5150 at its introduction in 1981. One year after the PC's release, although IBM had sold fewer than 100,000 computers, "PC World" counted 753 software packages for the PC—more than four times the number available for the Apple Macintosh one year after its 1984 release—including 422 applications and almost 200 utilities and languages. "InfoWorld" reported that "most of the major software houses have been frantically adapting their programs to run on the PC", with new PC-specific developers composing "an entire subindustry that has formed around the PC's open system", which Dvorak described as a "de facto standard microcomputer". The magazine estimated that "hundreds of tiny garage-shop operations" were in "bloodthirsty" competition to sell peripherals, with 30 to 40 companies in a price war for memory-expansion cards, for example. "PC Magazine" renamed its planned "1001 Products to Use with Your IBM PC" special issue after the number of product listings it received exceeded the figure. Tecmar and other companies that benefited from IBM's openness rapidly grew in size and importance, as did "PC Magazine"; within two years it expanded from 96 bimonthly to 800 monthly pages, including almost 500 pages of advertisements. Gates correctly predicted that IBM would sell "not far from 200,000" PCs in 1982; by the end of the year it was selling one every minute of the business day. The company estimated that 50 to 70% of PCs sold in retail stores went to the home, and the publicity from selling a popular product to consumers caused IBM to, a spokesman said, "enter the world" by familiarizing consumers with the Colossus of Armonk. Although the PC provided only two to three percent of sales, the company found that demand exceeded its estimate by as much as 800%. Because its prices were based on forecasts of much lower volume—250,000 over five years, which would have made the PC a very successful IBM product—the PC became very profitable; at times the company sold almost that many computers per month. Estridge said in 1983 that from October 1982 to March 1983 customer demand quadrupled, with production increasing three times in one year, and he warned of a component shortage if demand continued to increase. Many small suppliers' sales to IBM grew rapidly, both pleasing their executives and causing them to worry about being overdependent on it. Miniscribe, for example, in 1983 received 61% of its hard drive orders from IBM; the company's stock price fell by more than one third in one day after IBM reduced orders in January 1984. Suppliers often found, however, that the prestige of having IBM as a customer led to additional sales elsewhere. By early 1983 the "Times" said "I.B.M.'s role in the personal computer world is beginning to resemble its central role in the mainframe computer business, in which I.B.M. is the sun around which everything else revolves".
As "a de facto standard", the newspaper wrote, "Virtually every software company is giving first priority to writing programs for the I.B.M. machine". Yankee Group estimated that year that ten new IBM PC-related products appeared every day. In August the Chess IBU, with 4,000 employees, became the Entry Systems Division, which observers believed indicated that the PC was significantly important to IBM overall, and no longer an experiment. In November the Associated Press stated that the PC "in two years [had] effectively set a new standard in desktop computers", and "The Economist" said that IBM "set a standard that those who hope to compete will usually have to follow". The PC surpassed the Apple II in late 1983 as the best-selling personal computer with more than 750,000 sold by the end of the year, while DEC only sold 69,000 microcomputers in the first nine months despite offering three models for different markets. IBM recruited the best Apple dealers while avoiding the grey market; by March 1983 770 separate resellers sold the PC in the US and Canada. It was 65% of BusinessLand's revenue. Demand still so exceeded supply two years after its debut that, despite IBM shipping 40,000 PCs a month, dealers reportedly received 60% or less of their desired quantity; some promoted Apples to reduce dependence. Pournelle received the PC he paid for in early July 1983 on 1 November, and IBM Boca Raton employees and neighbors had to wait five weeks to buy the computers assembled there. Yankee Group also stated that the PC had by 1983 "destroyed the market for some older machines" from companies like Vector Graphic, North Star, and Cromemco. "inCider" wrote "This may be an Apple magazine, but let's not kid ourselves, IBM has devoured competitors like a cloud of locusts". By February 1984 "BYTE" reported on "the phenomenal market acceptance of the IBM PC", and by fall concluded that the company "has given the field its third major standard, after the Apple II and CP/M". Rivals speculated that the government might again prosecute IBM for antitrust, and Ben Rosen claimed that the company's dominance "is having a chilling effect on new ventures, a fear factor". By that time, Apple was less welcoming of the rival that "inCider" stated had a "godlike" reputation. The PC almost completely ended sales of the Apple III, the company's most comparable product, but its focus on the III had delayed improvements to the II, and the sophisticated Lisa was unsuccessful in part because, unlike the II and the PC, Apple discouraged third-party developers. The head of a retail chain said "It appears that IBM had a better understanding of why the Apple II was successful than had Apple". Jobs, after trying to recruit Estridge to become Apple's president, admitted that in two years IBM had joined Apple as "the industry's two strongest competitors". He warned in a speech before previewing the forthcoming "1984" Super Bowl commercial: "It appears IBM wants it "all" ... Will Big Blue dominate the entire computer industry? The entire information age? Was George Orwell right about 1984?" IBM had $4 billion in annual PC revenue by 1984, more than twice that of Apple and as much as the sales of Apple, Commodore, HP, and Sperry combined, and 6% of total revenue. Most companies with mainframes used PCs with the larger computers. 
Customers with what some IBM executives called the "logo machine" on their desks likely benefited mainframe sales and were discouraged from purchasing non-IBM hardware; customers "prefer a single standard", "The Economist" said. A 1983 study of corporate customers found that two thirds of large customers standardizing on one computer chose the PC, compared to 9% for Apple. A 1985 "Fortune" survey found that 56% of American companies with personal computers used IBM PCs, compared to Apple's 16%. Yankee Group wrote that the PC's success showed that "technological elegance and a leading price/performance position is almost irrelevant to market success". IBM had defeated UNIVAC with an inferior mainframe computer, and IBM's own documentation agreed with observers who described the PC as inferior to competitors' less-expensive products. The company generally did not compete on price; rather, the 1983 study found that customers preferred "IBM's hegemony" because of its support. They wanted "vendor recognition, applications software availability (vendor and third-party), a reputation for product reliability and support, moderately competitive pricing, and an assurance that the vendor won't disappear in the impending personal computer market shakeout", Yankee Group wrote. In 1984, IBM introduced the PC/AT, which, unlike its predecessor, was the most sophisticated personal computer from any major company. By 1985, the PC family had more than doubled Future Computing's 1986 revenue estimate, with more than 12,000 applications and 4,500 dealers and distributors worldwide. The PC was similarly dominant in Europe two years after its release there. In its 1985 obituary of Estridge, "The New York Times" wrote that he had led the "extraordinarily successful entry of the International Business Machines Corporation into the personal computer field". The Entry Systems Division had 10,000 employees and by itself would have been the world's third-largest computer company, behind IBM and DEC, with more revenue than IBM's minicomputer business despite its much later start. IBM was the only major company with significant minicomputer and microcomputer businesses; rivals like DEC and Wang also released personal computers, but did not adjust to retail sales. Rumors of "lookalike", compatible computers, created without IBM's approval, began almost immediately after the IBM PC's release. Other manufacturers soon reverse-engineered the BIOS to produce their own non-infringing functional copies. Columbia Data Products introduced the first IBM PC-compatible computer in June 1982. In November 1982, Compaq Computer Corporation announced the "Compaq Portable", the first portable IBM PC compatible; the first models shipped in January 1983. The success of the IBM computer led other companies to develop "IBM compatibles", which in turn led to branding such as diskettes being advertised as "IBM format". An IBM PC clone could be built with off-the-shelf parts, but the BIOS required some reverse engineering. Companies like Compaq, Phoenix Software Associates, American Megatrends, and Award created fully functional versions of the BIOS, allowing companies like Dell, Gateway, and HP to manufacture PCs that worked like IBM's product. The IBM PC became the industry standard. Because IBM had no retail experience, the retail chains ComputerLand and Sears Roebuck provided important knowledge of the marketplace, and they became the main outlets for the new product.
More than 190 ComputerLand stores already existed, while Sears was in the process of creating a handful of in-store computer centers for sale of the new product. This guaranteed IBM widespread distribution across the U.S. Sears Roebuck, which targeted the new PC at the home market, saw sales fail to live up to expectations; this outcome revealed that the strategy of targeting the office market was the key to higher sales. All IBM personal computers are software backwards-compatible with each other in general, but not every program will work in every machine. Some programs are timing-sensitive and depend on a particular speed class. Older programs will not take advantage of newer higher-resolution and higher-color display standards, while some newer programs require newer display adapters. (As the display adapter was an adapter card in all of these IBM models, newer display hardware could easily be, and often was, retrofitted to older models.) A few programs, typically very early ones, are written for and require a specific version of the IBM PC BIOS ROM. Most notably, BASICA, which depended on the BIOS ROM, had a sister program called GW-BASIC, which supported more functions, was 100% backwards compatible, and could run independently of the BIOS ROM. The CGA video card, with a suitable modulator, could use an NTSC television set or an RGBI monitor for display; IBM's RGBI monitor was its display model 5153. The other option offered by IBM was the MDA and its monochrome display, model 5151. It was possible to install both an MDA and a CGA card and use both monitors concurrently if supported by the application program. For example, AutoCAD, Lotus 1-2-3, and others allowed use of a CGA monitor for graphics and a separate monochrome monitor for text menus. Some model 5150 PCs with CGA monitors and a printer port also included the MDA adapter by default, because IBM provided the MDA port and printer port on the same adapter card; it was in fact an MDA/printer-port combo card. Although cassette tape was originally envisioned by IBM as a low-budget storage alternative, the most commonly used medium was the floppy disk. The 5150 was available with one or two 5.25-inch floppy drives. With two drives, the program disks would be in drive A, while drive B would hold the disks for working files; with one drive, the user had to swap program and file disks into the single drive. For models without any drives or storage medium, IBM intended users to connect their own cassette recorder via the 5150's cassette socket. The cassette socket was physically the same type of DIN plug as the keyboard socket, and located next to it, but the two were electrically completely different. A hard disk drive could not be installed in the 5150's system unit without changing to a higher-rated power supply (although later drives with lower power consumption have been known to work with the standard 63.5-watt unit). The "IBM 5161 Expansion Chassis" came with its own power supply and one 10 MB hard disk, and allowed the installation of a second hard disk.
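The dual-monitor setup described above works because the two adapters claim different, non-overlapping video-memory regions, so neither conflicts with the other. A minimal Python sketch, using the standard documented buffer addresses and sizes (assumptions here; the text above does not give them):

```python
# Why an MDA and a CGA card can be installed together: each decodes its
# own fixed video buffer. Addresses/sizes are the commonly documented
# values, assumed for this sketch.
buffers = {
    "MDA": (0xB0000, 4 * 1024),    # monochrome text buffer
    "CGA": (0xB8000, 16 * 1024),   # color text/graphics buffer
}
(mda_start, mda_len) = buffers["MDA"]
(cga_start, cga_len) = buffers["CGA"]
assert mda_start + mda_len <= cga_start  # no overlap -> no conflict

for name, (start, length) in buffers.items():
    print(f"{name}: 0x{start:05X}-0x{start + length - 1:05X}")
```

A program such as Lotus 1-2-3 could then write graphics to one buffer and text menus to the other, each appearing on its own monitor.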
The system unit has five expansion slots, and the expansion unit has eight; however, one of the system unit's slots and one of the expansion unit's slots must be occupied by the Extender Card and Receiver Card, respectively, which are needed to connect the expansion unit to the system unit and make the expansion unit's other slots available, for a total of 11 usable slots. A working configuration requires that some of the slots be occupied by display, disk, and I/O adapters, as none of these are built into the 5150's motherboard; the only external connectors on the motherboard are the keyboard and cassette ports. Without the expansion chassis, therefore, perhaps one free slot remains after the necessary cards are installed. The simple PC speaker sound hardware is also on board. The original PC's maximum memory using IBM parts was 256 kB, achievable through the installation of 64 kB on the motherboard and three 64 kB expansion cards. The processor was an Intel 8088 running at 4.77 MHz, 4/3 times the standard NTSC color burst frequency of 315/88 MHz ≈ 3.579 MHz. (Early units used a 1978 version of the Intel 8088; later units used 1978, 1981, and 1982 versions of the Intel chip; second-sourced AMD chips were used after 1983.) Some owners replaced the 8088 with an NEC V20 for a slight increase in processing speed and support for real-mode 80186 instructions. The V20 gained its speed increase through the use of a hardware multiplier, which the 8088 lacked. An Intel 8087 coprocessor could also be added for hardware floating-point arithmetic. IBM sold the first IBM PCs in configurations with 16 or 64 kB of RAM preinstalled, using either nine or thirty-six 16-kilobit DRAM chips. (The ninth bit was used for parity checking of memory.) In November 1982, the hardware was changed to allow the use of 64-kbit chips (as opposed to the original 16-kbit chips), the same RAM configuration as the soon-to-be-released IBM XT: 64 kB in one bank, expandable to 256 kB by populating the other three banks. Although the TV-compatible video board, cassette port, and Federal Communications Commission Class B certification were all aimed at making it a home computer, the original PC proved too expensive for the home market. At introduction, a PC with 64 kB of RAM, a single 5.25-inch floppy drive, and a monitor sold for about $3,000, while the cheapest configuration, at $1,565, which had no floppy drives, only 16 kB RAM, and no monitor (again, under the expectation that users would connect their existing TV sets and cassette recorders), proved too unattractive and low-spec even for its time (cf. footnotes to the above IBM PC range table). While the 5150 did not become a top-selling home computer, its floppy-based configuration became an unexpectedly large success with businesses. The "IBM Personal Computer XT", IBM model 5160, was introduced two years after the PC and featured a 10-megabyte hard drive. It had eight expansion slots but the same processor and clock speed as the PC. The XT had no cassette jack, but still had the Cassette BASIC interpreter in ROM. The XT could take 256 kB of memory on the main board (using 64-kbit DRAM); later models were expandable to 640 kB. The remaining 384 kilobytes of the 8088 address space (between 640 kB and 1 MB) were used for the BIOS ROM, adapter ROM, and RAM space, including video RAM. It was usually sold with a Monochrome Display Adapter (MDA) video card or a CGA video card. The eight expansion slots were the same as in the model 5150, but were spaced closer together.
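The clock figure above is easy to verify, and the memory ceiling is simple addition. A quick Python check (the divide-by-three derivation from a 14.31818 MHz master crystal is the commonly described implementation, assumed here rather than stated in the text):

```python
# Sanity-check of the 5150 figures quoted above.
from fractions import Fraction

color_burst = Fraction(315, 88)            # NTSC color burst, ~3.579545 MHz
cpu_clock = color_burst * Fraction(4, 3)   # 4/3 x color burst
print(float(cpu_clock))                    # 4.7727... -> the familiar "4.77 MHz"

# Commonly described implementation (an assumption of this sketch):
# a 14.31818 MHz crystal, four times color burst, divided by three.
crystal = 4 * color_burst
assert crystal / 3 == cpu_clock

# Maximum RAM with IBM parts: 64 kB on the motherboard
# plus three 64 kB expansion cards.
print(64 + 3 * 64, "kB")                   # 256 kB
```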
Although such cards were rare, a card designed for the 5150 could be wide enough to obstruct the adjacent slot in an XT. Because of the spacing, an XT motherboard would not fit into a case designed for the PC motherboard, but the slots and peripheral cards were compatible. The XT expansion bus (later called "8-bit Industry Standard Architecture" (ISA) by competitors) was retained in the IBM AT, which added connectors to some slots to allow 16-bit transfers; 8-bit cards could still be used in an AT. The "IBM Personal Computer XT/370" was an XT with three custom 8-bit cards: the processor card (370PC-P) contained a modified Motorola 68000 chip, microcoded to execute System/370 instructions, a second 68000 to handle bus arbitration and memory transfers, and a modified 8087 to emulate the S/370 floating-point instructions. The second card (370PC-M) connected to the first and contained 512 kB of memory. The third card (PC3277-EM) was a 3270 terminal emulator, necessary for installing the system software that the VM/PC software needed to run the processors. The computer booted into DOS, then ran the VM/PC Control Program. The "IBM PCjr" was IBM's first attempt to enter the market for relatively inexpensive educational and home-use personal computers. The PCjr, IBM model number 4860, retained the IBM PC's 8088 CPU and BIOS interface for compatibility, but its cost and differences in architecture, as well as other design and implementation decisions (chief among them a "chiclet" keyboard that was difficult to type on), eventually led to the PCjr, and the related IBM JX, being commercial failures. The "IBM Portable Personal Computer" 5155 model 68 was an early portable computer developed by IBM after the success of Compaq's suitcase-size portable machine (the Compaq Portable). It was released in February 1984, and was eventually replaced by the IBM Convertible. The Portable was an XT motherboard transplanted into a Compaq-style luggable case. The system featured 256 kilobytes of memory (expandable to 512 kB), an added CGA card connected to an internal monochrome (amber) composite monitor, and one or two half-height 5.25-inch 360 kB floppy disk drives. Unlike the Compaq Portable, which used a dual-mode monitor and special display card, IBM used a stock CGA board and a composite monitor, which had lower resolution. It could, however, display color if connected to an external monitor or television. The "IBM Personal Computer/AT" (model 5170), announced August 15, 1984, used an Intel 80286 processor, originally running at 6 MHz. It had a 16-bit ISA bus and a 20 MB hard drive. A faster model, running at 8 MHz and sporting a 30-megabyte hard disk, was introduced in 1986. The AT was designed to support multitasking; the new SysRq (system request) key, little noted and often overlooked, is part of this design, as is the 80286 itself, the first Intel 16-bit processor with multitasking features (i.e. the 80286 protected mode). IBM made some attempt at marketing the AT as a multi-user machine, but it sold mainly as a faster PC for power users. For the most part, IBM PC/ATs were used as more powerful DOS (single-tasking) personal computers, in the literal sense of the PC name. Early PC/ATs were plagued with reliability problems, in part because of some software and hardware incompatibilities, but mostly related to the internal 20 MB hard disk and the high-density floppy disk drive. While some people blamed IBM's hard disk controller card and others blamed the hard disk manufacturer Computer Memories Inc.
(CMI), the IBM controller card worked fine with other drives, including CMI's 33 MB model. The problems introduced doubt about the computer and, for a while, even about the 286 architecture in general, but after IBM replaced the 20 MB CMI drives, the PC/AT proved reliable and became a lasting industry standard. The IBM AT's drive parameter table listed the CMI-33 as having 615 cylinders instead of the 640 the drive was designed with, so as to make the size an even 30 MB. Those who re-used the drives mostly found that the 616th cylinder was bad, because it had been used as a landing area for the heads. The "IBM Personal Computer AT/370" was an AT with two custom 16-bit cards, running almost exactly the same setup as the XT/370. The IBM PC Convertible, released April 3, 1986, was IBM's first laptop computer and was also the first IBM computer to use the 3.5-inch floppy disk, which went on to become the standard. Like modern laptops, it featured power management and the ability to run from batteries. It was the follow-up to the IBM Portable and was model number 5140. The concept and the body design were by the German industrial designer Richard Sapper. It used an Intel 80C88 CPU (a CMOS version of the Intel 8088) running at 4.77 MHz, 256 kB of RAM (expandable to 640 kB), dual 720 kB 3.5-inch floppy drives, and a monochrome CGA-compatible LCD screen, at a price of $2,000. It featured a built-in carrying handle. The PC Convertible had expansion capabilities through a proprietary ISA-bus-based port on the rear of the machine. Extension modules, including a small printer and a video output module, could be snapped into place. The machine could also take an internal modem, but there was no room for an internal hard disk. Discontinued in August 1989, the IBM PC Convertible was the only IBM PC model to last beyond the April 2, 1987, discontinuation of all other models. The IBM PS/2 line was introduced in 1987. The Model 30 at the bottom end of the lineup was very similar to earlier models; it used an 8086 processor and an ISA bus. The Model 30 was not "IBM compatible" in that it did not have standard 5.25-inch drive bays; it came with a 3.5-inch floppy drive and optionally a 3.5-inch-sized hard disk. Most models in the PS/2 line departed further from "IBM compatible" by replacing the ISA bus completely with Micro Channel Architecture. The MCA bus was not well received by PC customers, since it was proprietary to IBM, and it was rarely implemented by other PC-compatible makers. IBM eventually abandoned the architecture entirely and returned to the standard ISA bus. The main circuit board in a PC is called the motherboard (IBM terminology calls it a "planar"). This mainly carries the CPU and RAM, and has a bus with slots for expansion cards. Also on the motherboard are the ROM subsystem, DMA and IRQ controllers, the coprocessor socket, sound (PC speaker, tone generation) circuitry, and the keyboard interface. The original PC also has a cassette interface. The bus used in the original PC became very popular, and it was later named ISA. It was originally known as the PC-bus or XT-bus; the term "ISA" arose later, when industry leaders chose to continue manufacturing machines based on the IBM PC AT architecture rather than license the PS/2 architecture and its Micro Channel bus from IBM. The XT-bus was then retroactively named "8-bit ISA" or "XT ISA", while the unqualified term "ISA" usually refers to the 16-bit AT-bus (as better defined in the ISA specifications).
The AT-bus is an extension of the PC-/XT-bus and is in use to this day in computers for industrial use, where its relatively low speed, 5-volt signals, and relatively simple, straightforward design (all by 2011 standards) give it technical advantages (e.g. noise immunity for reliability). A monitor and any floppy or hard disk drives are connected to the motherboard through cables connected to graphics adapter and disk controller cards, respectively, installed in expansion slots. Each expansion slot on the motherboard has a corresponding opening in the back of the computer case through which the card can expose connectors; a blank metal cover plate covers this case opening (to prevent dust and debris intrusion and control airflow) when no expansion card is installed. Memory expansion beyond the amount installable on the motherboard was also done with boards installed in expansion slots, and I/O devices such as parallel, serial, or network ports were likewise installed as individual expansion boards. For this reason, it was easy to fill the five expansion slots of the PC, or even the eight slots of the XT, without installing any special hardware. Companies like Quadram and AST addressed this with their popular multi-I/O cards, which combine several peripherals on one adapter card that uses only one slot; Quadram offered the QuadBoard and AST the SixPak. Intel 8086- and 8088-based PCs require expanded memory (EMS) boards to work with more than 640 kB of memory. (Though the 8088 can address one megabyte of memory, the last 384 kB of that is used or reserved for the BIOS ROM, BASIC ROM, extension ROMs installed on adapter cards, and memory address space used by devices, including display adapter RAM and even the 64 kB EMS page frame itself.) The original IBM PC AT used an Intel 80286 processor, which can access up to 16 MB of memory (though standard DOS applications cannot use more than one megabyte without using additional APIs). Intel 80286-based computers running under OS/2 can work with the maximum memory. The set of peripheral chips selected for the original IBM PC defined the functionality of an IBM compatible. These became the de facto base for the later application-specific integrated circuits (ASICs) used in compatible products. The original system chips were one Intel 8259 programmable interrupt controller (PIC) (at I/O address 0x20), one Intel 8237 direct memory access (DMA) controller (at I/O address 0x00), and an Intel 8253 programmable interval timer (PIT) (at I/O address 0x40). The PIT provides the clock ticks and dynamic memory refresh timing, and can be used for speaker output; one DMA channel is used to perform the memory refresh. The mathematics coprocessor was the Intel 8087, using I/O address 0xF0. This was an option for users who needed extensive floating-point arithmetic, such as users of computer-aided drafting. The IBM PC AT added a second, slave 8259 PIC (at I/O address 0xA0), a second 8237 DMA controller for 16-bit DMA (at I/O address 0xC0), a DMA address register (implemented with a 74LS612 IC) (at I/O address 0x80), and a Motorola MC146818 real-time clock (RTC) with nonvolatile memory (NVRAM) used for system configuration (at I/O address 0x70), replacing the DIP switches and jumpers used for this purpose in PC and PC/XT models.
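As a worked example of the figures above, the following Python sketch tallies the 8088's one-megabyte address split and collects the fixed I/O bases just listed in one table; the 16 kB EMS page granularity is a detail of the LIM EMS scheme assumed here for illustration:

```python
# The 8088's 20-bit address space, split as described above.
KB = 1024
total = 2**20                        # 20 address lines -> 1 MiB
conventional = 640 * KB              # RAM usable by ordinary DOS programs
print((total - conventional) // KB)  # 384 kB reserved for ROMs, video, ...

# EMS: a 64 kB page frame in the reserved area acts as a movable window
# into much larger board memory (banked in 16 kB pages under LIM EMS,
# an assumption of this sketch).
print((64 * KB) // (16 * KB))        # 4 pages mapped at once

# Fixed I/O bases named above (PC chips, then the AT's additions).
IO_MAP = {
    0x00: "8237 DMA controller",
    0x20: "8259 interrupt controller",
    0x40: "8253 interval timer",
    0xF0: "8087 math coprocessor",
    0xA0: "slave 8259 PIC (AT)",
    0xC0: "second 8237 DMA controller, 16-bit (AT)",
    0x80: "74LS612 DMA address register (AT)",
    0x70: "MC146818 RTC / NVRAM (AT)",
}
for base, chip in sorted(IO_MAP.items()):
    print(f"0x{base:02X}: {chip}")
```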
On expansion cards, an Intel 8255 programmable peripheral interface (PPI) used for parallel I/O controls the printer, and the 8250 universal asynchronous receiver/transmitter (UART) (at I/O address 0x3F8 or 0x2F8) controls the serial communication at the (pseudo-) RS-232 port. IBM offered a Game Control Adapter for the PC, which supported analog joysticks similar to those on the Apple II. Although analog controls proved inferior for arcade-style games, they were an asset in certain other genres such as flight simulators. The joystick port on the IBM PC supported two controllers, but required a Y-splitter cable to connect both at once. It remained the standard joystick interface on IBM compatibles until being replaced by USB during the 2000s. The keyboard that came with the IBM 5150 was an extremely reliable and high-quality electronic keyboard originally developed in North Carolina for the Datamaster. Each key was rated to be reliable to over 100 million keystrokes. For the IBM PC, a separate keyboard housing was designed with a novel usability feature that allowed users to adjust the keyboard angle for personal comfort. Compared with the keyboards of other small computers at the time, the IBM PC keyboard was far superior and played a significant role in establishing a high-quality impression. For example, the industrial design of the adjustable keyboard, together with the system unit, was recognized with a major design award. "BYTE" magazine in the fall of 1981 went so far as to state that the keyboard was 50% of the reason to buy an IBM PC. The importance of the keyboard was definitively established when the 1983 IBM PCjr flopped, in very large part because its much different and mediocre Chiclet keyboard made a poor impression on customers. Oddly enough, the same thing almost happened to the original IBM PC when, in early 1981, management seriously considered substituting a cheaper and lower-quality keyboard. This mistake was narrowly avoided on the advice of one of the original development engineers. However, the original 1981 IBM PC 83-key keyboard was criticized by typists for its non-standard placement of the Return and left Shift keys, and because it did not have the separate cursor and numeric pads that were popular on the pre-PC DEC VT100-series video terminals. In 1982, Key Tronic introduced a 101-key PC keyboard, albeit not with the now-familiar layout. In 1984, IBM corrected the Return and left Shift keys on its AT keyboard, but shortened the Backspace key, making it harder to reach. In 1986, IBM introduced the 101-key Enhanced Keyboard, which added the separate cursor and numeric key pads, relocated all the function keys and the Ctrl keys, and moved the Esc key to the opposite side of the keyboard. The Enhanced Keyboard was an option for the PC XT and AT in 1986, both of which were also available with their original keyboards, and it introduced the key layout that is still the industry standard. Another feature of the original keyboard is the relatively loud "click" sound each key made when pressed. Since typewriter users were accustomed to keeping their eyes on the hardcopy they were typing from and had come to rely on the mechanical sound that was made as each character was typed onto the paper to ensure that they had pressed the key hard enough (and only once), the PC keyboard used a keyswitch that produced a click and tactile bump intended to provide that same reassurance. The IBM PC keyboard is very robust and flexible.
The low-level interface for each key is the same: each key sends a signal when it is pressed and another signal when it is released. An integrated microcontroller in the keyboard scans the keyboard and encodes a "scan code" and "release code" for each key as it is pressed and released separately. Any key can be used as a shift key, and a large number of keys can be held down simultaneously and separately sensed. The controller in the keyboard handles typematic operation, issuing periodic repeat scan codes for a depressed key and then a single release code when the key is finally released. An "IBM PC compatible" may have a keyboard that does not recognize every key combination a true IBM PC does, such as shifted cursor keys. In addition, the "compatible" vendors sometimes used proprietary keyboard interfaces, preventing the keyboard from being replaced. Although the PC/XT and AT used the same style of keyboard connector, the low-level protocol for reading the keyboard was different between these two series. The AT keyboard uses a bidirectional interface which allows the computer to send commands to the keyboard. An AT keyboard could not be used in an XT, nor the reverse. Third-party keyboard manufacturers provided a switch on some of their keyboards to select either the AT-style or XT-style protocol for the keyboard. The original IBM PC used the 7-bit ASCII alphabet as its basis, but extended it to 8 bits with nonstandard character codes. This character set was not suitable for some international applications, and soon a veritable cottage industry emerged providing variants of the original character set in various national variants. In IBM tradition, these variants were called code pages. These codings are now largely obsolete, having been replaced by more systematic and standardized forms of character coding, such as ISO 8859-1, Windows-1251 and Unicode. The original character set is known as code page 437. IBM equipped the model 5150 with a cassette port for connecting a cassette drive and assumed that home users would purchase the low-end model and save files to cassette tapes as was typical of home computers of the time. However, adoption of the floppy- and monitor-less configuration was low; few (if any) IBM PCs left the factory without a floppy disk drive installed. Also, DOS was not available on cassette tape, only on floppy disks (hence "Disk Operating System"). 5150s with just external cassette recorders for storage could only use the built-in ROM BASIC as their operating system. As DOS saw increasing adoption, the incompatibility of DOS programs with PCs that used only cassettes for storage made this configuration even less attractive. The ROM BIOS supported cassette operations. The IBM PC cassette interface encodes data using frequency modulation with a variable data rate. Either a one or a zero is represented by a single cycle of a square wave, but the square wave frequencies differ by a factor of two, with ones having the lower frequency. Therefore, the bit periods for zeros and ones also differ by a factor of two, with the unusual effect that a data stream with more zeros than ones will use less tape (and time) than an equal-length (in bits) data stream containing more ones than zeros, or equal numbers of each. 
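The tape-length effect just described follows directly from the one-cycle-per-bit rule. A minimal Python sketch, assuming illustrative 1 kHz and 2 kHz square-wave frequencies chosen only to respect the stated factor-of-two relationship (they are not figures from the text):

```python
# Illustrative encoder for the cassette scheme described above: each bit
# is one full square-wave cycle, with zeros at twice the frequency of
# ones. Frequencies are assumptions for this sketch.

def cassette_cycles(bits, one_hz=1000, zero_hz=2000):
    """Return (frequency, duration) pairs, one cycle per bit."""
    return [((one_hz if bit else zero_hz),
             1.0 / (one_hz if bit else zero_hz)) for bit in bits]

def tape_seconds(bits):
    return sum(duration for _, duration in cassette_cycles(bits))

# More zeros -> less tape: eight zeros take half the time of eight ones.
print(tape_seconds([0] * 8), tape_seconds([1] * 8))   # 0.004 vs 0.008
```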
IBM also had an exclusive license agreement with Microsoft to include BASIC in the ROM of the PC; clone manufacturers could not have ROM BASIC on their machines. But the agreement also became a problem for IBM, as the XT, AT, and PS/2 eliminated the cassette port while IBM was still required to install the (now useless) BASIC in them. The agreement finally expired in 1991, when Microsoft replaced BASICA/GW-BASIC with QBASIC. The main core of BASIC resided in ROM and "linked" up with the RAM-resident BASIC.COM/BASICA.COM included with PC DOS (these provided disk support and other extended features not present in ROM BASIC). Because BASIC was over 50 kB in size, this arrangement served a useful function during the first three years of the PC, when machines only had 64–128 kB of memory, but it became less important by 1985. For comparison, clone makers such as Compaq were forced to include a version of BASIC that resided entirely in RAM. The first IBM 5150 PCs had two 5.25-inch 160 KiB single-sided double-density (SSDD) floppy disk drives. As double-sided drives became available in the spring of 1982, later IBM PC and compatible computers could read 320 KiB double-sided double-density (DSDD) disks with the software support of MS-DOS 1.25 and higher. The same type of physical diskette media could be used for both drives, but a disk formatted for double-sided use could not be read on a single-sided drive. PC DOS 2.0 added support for 180 KiB and 360 KiB SSDD and DSDD floppy disks, again using the same physical media. The disks were Modified Frequency Modulation (MFM) coded in 512-byte sectors and were soft-sectored. They contained 40 tracks per side at the 48 track-per-inch (TPI) density, and initially were formatted to contain eight sectors per track. This meant that SSDD disks initially had a formatted capacity of 160 kB, while DSDD disks had a capacity of 320 kB. However, PC DOS 2.0 and later operating systems allowed formatting the disks with nine sectors per track, yielding a formatted capacity of 180 kB with SSDD disks/drives and 360 kB with DSDD disks/drives. The "unformatted" capacity of the floppy disks was advertised as "250KB" for SSDD and "500KB" for DSDD ("KB" ambiguously referring to either 1000 or 1024 bytes, essentially the same for rounded-off values); however, these "raw" 250/500 kB were not the same thing as the usable formatted capacity: under DOS, the maximum capacity for SSDD and DSDD disks was 180 kB and 360 kB, respectively. Regardless of type, the file system of all floppy disks under DOS was FAT12. After the upgraded 64k–256k motherboard PCs arrived in early 1983, single-sided drives and the cassette model were discontinued. IBM's original floppy disk controller card also included an external 37-pin D-shell connector. This allowed users to connect additional external floppy drives from third-party vendors, but IBM did not offer its own external floppy drives until 1986. The industry-standard way of setting floppy drive numbers was by setting jumpers on the drive unit; IBM chose instead to use a method known as the "cable twist", a twist in the middle of the floppy data cable that exchanged the drive-select and motor-control lines. This eliminated the need for users to adjust jumpers while installing a floppy drive.
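The formatted capacities quoted above follow directly from the stated geometry (40 tracks per side, 512-byte sectors, eight or nine sectors per track). A quick Python check (the helper function is illustrative):

```python
# Formatted-capacity arithmetic for the 5150's 5.25-inch formats,
# from the geometry given above.
def floppy_kb(sides, sectors_per_track, tracks=40, sector_bytes=512):
    return sides * tracks * sectors_per_track * sector_bytes // 1024

print(floppy_kb(1, 8))   # 160 kB  (SSDD, PC DOS 1.00)
print(floppy_kb(2, 8))   # 320 kB  (DSDD, PC DOS 1.1)
print(floppy_kb(1, 9))   # 180 kB  (SSDD, PC DOS 2.0)
print(floppy_kb(2, 9))   # 360 kB  (DSDD, PC DOS 2.0)
```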
The 5150 could not itself power hard drives without retrofitting a stronger power supply, but IBM later offered the 5161 Expansion Unit, which not only provided more expansion slots, but also included a 10 MB (later 20 MB) hard drive powered by the 5161's own separate 130-watt power supply. The IBM 5161 Expansion Unit was released in early 1983. During the first year of the IBM PC, it was commonplace for users to install third-party Winchester hard disks, which generally connected to the floppy controller and required a patched version of PC DOS that treated them as one giant floppy disk (there was no subdirectory support). IBM began offering hard disks with the XT; the original PC was never sold with one. Nonetheless, many users installed hard disks and upgraded power supplies in their PCs. After floppy disks became obsolete in the early 2000s, the letters A and B became unused. But for 25 years, virtually all DOS-based PC software assumed the program installation drive was C, so the primary HDD continues to be "the C drive" even today. Other operating system families (e.g. Unix) are not bound to these designations. Which operating system IBM customers would choose was at first unclear. Although the company expected that most would use PC DOS, IBM also supported CP/M-86—which became available six months after DOS—and the UCSD p-System. IBM promised that it would not favor one operating system over the others; the CP/M-86 support surprised Gates, who claimed that IBM was "blackmailed into it". IBM was correct, nonetheless, in its expectation; one survey found that 96.3% of PCs were ordered with the $40 DOS, compared to 3.4% for the $240 CP/M-86. The IBM PC's ROM BASIC and BIOS supported cassette tape storage; PC DOS itself did not. PC DOS version 1.00 supported only 160 kB SSDD floppies, but version 1.1, which was released nine months after the PC's introduction, supported 160 kB SSDD and 320 kB DSDD floppies. Support for the slightly larger nine-sector-per-track 180 kB and 360 kB formats arrived 10 months later, in March 1983. The BIOS (Basic Input/Output System) provided the core ROM code for the PC. It contained a library of functions that software could call for basic tasks such as video output, keyboard input, and disk access, in addition to interrupt handling, loading the operating system on boot-up, and testing memory and other system components. The original IBM PC BIOS was 8 kB in size and occupied four 2 kB ROM chips on the motherboard, with a fifth and sixth empty socket left for any extra ROMs the user wished to install. IBM offered three different BIOS revisions during the PC's lifespan. The initial BIOS was dated April 1981 and came on the earliest models, with single-sided floppy drives and PC DOS 1.00. The second version was dated October 1981 and arrived on the "Revision B" models sold with double-sided drives and PC DOS 1.10; it corrected some bugs but was otherwise unchanged. Finally, the third BIOS version was dated October 1982 and was found on all IBM PCs with the newer 64k–256k motherboard. This revision was more or less identical to the XT's BIOS. It added support for detecting ROMs on expansion cards as well as the ability to use 640k of memory (the earlier BIOS revisions had a limit of 544k). Unlike the XT, the original PC remained functionally unchanged from 1983 until its discontinuation in early 1987: it did not get support for 101-key keyboards or 3.5-inch floppy drives, nor was it ever offered with half-height floppies.
IBM initially offered two video adapters for the PC, the Color/Graphics Adapter and the Monochrome Display and Printer Adapter. CGA was intended to be a typical home-computer display; it had NTSC output and could be connected to a composite monitor or a TV set with an RF modulator, in addition to an RGB output for digital RGBI-type monitors, although IBM did not offer its own RGB monitor until 1983. Supported modes were 40- or 80-column by 25-line color text with an 8×8 character cell, 320×200 bitmap graphics with two fixed 4-color palettes, or 640×200 monochrome graphics. The MDA card and its companion 5151 monitor supported only 80×25 text with a 9×14 character cell (total pixel resolution was 720×350). It was mainly intended for the business market and so also included a printer port. During 1982, the first third-party video card for the PC appeared when Hercules Computer Technology released a clone of the MDA that could use bitmap graphics. Although not supported by the BIOS, the Hercules Graphics Adapter became extremely popular for business use, because it allowed sharp, high-resolution graphics as well as text, and it was itself widely cloned by other manufacturers. In 1985, after the launch of the IBM AT, the new Enhanced Graphics Adapter became available, which could support 320×200 or 640×200 graphics in 16 colors in addition to high-resolution 640×350 16-color graphics. IBM also offered a video board for the PC, XT, and AT known as the Professional Graphics Adapter during 1984–86, mainly intended for CAD applications. It was extremely expensive, required a special monitor, and was rarely ordered by customers. VGA graphics cards could also be installed in IBM PCs and XTs, although they were introduced after the computers' discontinuation. The serial port is an 8250 or a derivative (such as the 16450 or 16550), mapped to eight consecutive I/O addresses and one interrupt request line. Only the COM1: and COM2: addresses were defined by the original PC. Attempts to share IRQ3 and IRQ4 to use additional ports require special measures in hardware and software, since shared IRQs were not defined in the original PC design. The most typical devices plugged into the serial port were modems and mice. Plotters and serial printers were also among the more commonly used serial peripherals, and there were numerous more unusual uses, such as operating cash registers and factory equipment, and connecting terminals. IBM made a deal with Japan-based Epson to produce printers for the PC, and all IBM-branded printers were manufactured by that company (Epson, of course, also sold printers under its own name). There was a considerable amount of controversy when IBM included a printer port on the PC that did not follow the industry-standard Centronics design, and it was rumored that this had been done to prevent customers from using non-Epson/IBM printers with their machines (plugging a Centronics printer into an IBM PC could damage the printer, the parallel port, or both). Although third-party cards were available with Centronics ports on them, PC clones quickly copied the IBM printer port, and by the late 1980s it had largely displaced the Centronics standard. "BYTE" wrote in October 1981 that the IBM PC's "hardware is impressive, but even more striking are two decisions made by IBM: to use outside suppliers already established in the microcomputer industry, and to provide information and assistance to independent, small-scale software writers and manufacturers of peripheral devices".
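The pixel totals quoted for the two adapters are character-grid arithmetic, as a short Python check shows (the helper function is illustrative):

```python
# Text-mode pixel resolution = character grid x character cell.
def text_pixels(cols, rows, cell_w, cell_h):
    return cols * cell_w, rows * cell_h

print(text_pixels(80, 25, 9, 14))   # (720, 350) -> MDA on the 5151 monitor
print(text_pixels(80, 25, 8, 8))    # (640, 200) -> CGA 80-column text
```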
It praised the "smart" hardware design and stated that its price was not much higher than the 8-bit machines from Apple and others. The reviewer admitted that the computer "came as a shock. I expected that the giant would stumble by overestimating or underestimating the capabilities the public wants and stubbornly insisting on incompatibility with the rest of the microcomputer world. But IBM didn't stumble at all; instead, the giant jumped leagues in front of the competition". He concluded that "the only disappointment about the IBM Personal Computer is its dull name".

In a more detailed review in January 1982, "BYTE" called the IBM PC "a synthesis of the best the microcomputer industry has offered to date ... as well designed on the inside as it is on the outside". The magazine praised the keyboard as "bar none, the best ... on any microcomputer", describing the unusual Shift key locations as "minor [problems] compared to some of the gigantic mistakes made on almost every other microcomputer keyboard". The review also complimented IBM's manuals, which it predicted "will set the standard for all microcomputer documentation in the future. Not only are they well packaged, well organized, and easy to understand, but they are also complete". Observing that detailed technical information was available "much earlier ... than it has been for other machines", the magazine predicted that "given a reasonable period of time, plenty of hardware and software will probably be developed for" the computer. The review stated that although the IBM PC cost more than comparably configured Apple II and TRS-80 computers, and the insufficient number of slots for all desirable expansion cards was its most serious weakness, "you get a lot more for your money". He concluded, "In two years or so, I think [it] will be one of the most popular and best-supported ... IBM should be proud of the people who designed it".

In a special 1984 issue dedicated to the IBM PC, "BYTE" concluded that the PC had succeeded both because of its features, such as an 80-column screen, open architecture, and high-quality keyboard, and because of "the failure of other major companies to provide these same fundamental features earlier. In retrospect, it seems IBM stepped into a void that remained, paradoxically, at the center of a crowded market". "Creative Computing" that year named the PC the best desktop computer between $2000 and $4000, praising its vast hardware and software selection, manufacturer support, and resale value.

Many IBM PCs have remained in service long after their technology became largely obsolete. In June 2006, IBM PC and XT models were still in use at the majority of U.S. National Weather Service upper-air observing sites, used to process data returned from the ascending radiosonde attached to a weather balloon, although they have been slowly phased out. Factors that have contributed to the 5150 PC's longevity are its flexible modular design, its open technical standard (making information needed to adapt, modify, and repair it readily available), its use of few special nonstandard parts, and rugged, high-standard IBM manufacturing, which provided exceptional long-term reliability and durability. Some of the mechanical aspects of the slot specifications are still used in current PCs. A few systems still come with PS/2-style keyboard and mouse connectors. The IBM model 5150 Personal Computer has become a collectable among vintage computer collectors, as the system was the first true "PC" as we know them today.
On the collector market, such systems have been valued at anywhere from $50 to $500. The IBM model 5150 has proven reliable; despite an age of 30 years or more, some examples still function as they did when new.
https://en.wikipedia.org/wiki?curid=15032
Counties of Ireland The counties of Ireland (Ulster-Scots: "coonties o Airlann") are sub-national divisions that have been, and in some cases continue to be, used to geographically demarcate areas of local government. These land divisions were formed following the Norman invasion of Ireland, in imitation of the counties then in use as units of local government in the Kingdom of England. The older term 'shire' was historically equivalent to 'county'. The principal function of the county was to impose royal control in the areas of taxation, security and the administration of justice at the local level. Cambro-Norman control was initially limited to the southeastern parts of Ireland; a further four centuries elapsed before the entire island was shired. At the same time, the now obsolete concept of the county corporate elevated a small number of towns and cities to a status deemed no less important than that of the existing counties in which they lay. This double control mechanism of 32 counties plus 10 counties corporate remained unchanged for a little over two centuries, until the early 19th century. Since then, counties have been adapted and in some cases divided by legislation to meet new administrative and political requirements.

The powers exercised by the Cambro-Norman barons and the Old English nobility waned over time. New offices of political control came to be established at a county level. In the Republic of Ireland, some counties have been split, resulting in the creation of new counties. Along with certain defined cities, counties still form the basis for the demarcation of areas of local government in the Republic of Ireland. Currently, there are 26 county-level, 3 city-level and 2 city-and-county entities (the modern equivalent of counties corporate) that are used to demarcate areas of local government in the Republic. In Northern Ireland, counties are no longer used for local government; districts are used instead. Upon the partition of Ireland in 1921, the county became one of the basic land divisions employed, along with county boroughs.

The word "county" has come to be used in different senses for different purposes. In common usage, many people have in mind the 32 counties that existed prior to 1838, the so-called traditional counties. However, in official usage in the Republic of Ireland, the term often refers to the 28 modern counties. The term is also conflated with the 31 areas currently used to demarcate areas of local government in the Republic of Ireland at the level of LAU 1. In Ireland, usage of the word "county" nearly always comes before rather than after the county name; thus "County Roscommon" in Ireland as opposed to "Roscommon County" in Michigan, United States. The former "King's County" and "Queen's County" were exceptions; these are now County Offaly and County Laois, respectively. The abbreviation Co. is used, as in "Co. Roscommon". A further exception occurs in the case of those counties created after 1994, which often drop the word "county" entirely, or use it after the name; thus, for example, internet search engines show many more uses (on Irish sites) of "Fingal" than of either "County Fingal" or "Fingal County". There appears to be no official guidance in the matter, as even the local council uses all three forms. In informal use, the word "county" is often dropped except where necessary to distinguish between county and town or city; thus "Offaly" rather than "County Offaly", but "County Antrim" to distinguish it from Antrim town.
The synonym "shire" is not used for Irish counties, although the Marquessate of Downshire was named in 1789 after County Down. Parts of some towns and cities were exempt from the jurisdiction of the counties that surrounded them. These towns and cities had the status of a county corporate, many granted by royal charter, which had all the judicial, administrative and revenue-raising powers of the regular counties.

The political geography of Ireland can be traced with some accuracy from the 6th century. At that time Ireland was divided into a patchwork of petty kingdoms with a fluid political hierarchy which, in general, had three traditional grades of king. The lowest level of political control existed at the level of the "túath" (pl. "túatha"). A "túath" was an autonomous group of people of independent political jurisdiction under a rí túaithe, that is, a local petty king. About 150 such units of government existed. Each "rí túaithe" was in turn subject to a regional or "over-king" (ruiri). There may have been as many as 20 genuine ruiri in Ireland at any time. A "king of over-kings" (rí ruirech) was often a provincial or semi-provincial king to whom several ruiri were subordinate. No more than six genuine rí ruirech were ever contemporary. Usually, only five such "kings of over-kings" existed at the same time, and their realms are accordingly described in the Irish annals as "fifths". The areas under the control of these kings were Ulster, Leinster, Connacht, Munster and Mide. Later record-makers dubbed them "provinces", in imitation of Roman provinces. In the Norman period, the historic fifths of Leinster and Meath gradually merged, mainly due to the impact of the Pale, which straddled both, thereby forming the present-day province of Leinster.

The use of provinces as divisions of political power was supplanted by the system of counties after the Norman invasion. In modern times clusters of counties have been attributed to certain provinces, but these clusters have no legal status. They are today seen mainly in a sporting context, as Ireland's four professional rugby teams play under the names of the provinces, and the Gaelic Athletic Association has separate provincial councils and provincial championships.

With the arrival of Cambro-Norman knights in 1169, the Norman invasion of Ireland commenced. This was followed in 1172 by the invasion of King Henry II of England, commencing English royal involvement. After his intervention in Ireland, Henry II effectively divided the English colony into liberties, also known as lordships. These were effectively palatine counties and differed from ordinary counties in that they were disjoined from the crown: whoever they were granted to essentially had the same authority as the king, and the king's writ had no effect except a writ of error. This covered all land within the county that was not church land. The reason for creating such powerful entities in Ireland was the lack of authority the English crown had there. The same process had occurred after the Norman conquest of England, where, despite a strong central government, counties palatine were needed in the border areas with Wales and Scotland. In Ireland this meant that the land was divided and granted to Richard de Clare and his followers, who became lords (and were sometimes called earls), with the only land over which the English crown had any direct control being the sea-coast towns and the territories immediately adjacent.
Of Henry II's grants, at least three (Leinster to Richard de Clare, Meath to Walter de Lacy, and Ulster to John de Courcy) were equivalent to palatine counties in their bestowing of royal jurisdiction on the grantees. Other grants include the liberties of Connaught and Tipperary. These initial lordships were later subdivided into smaller "liberties", which appear to have enjoyed the same privileges as their predecessors. The division of Leinster and Munster into smaller counties is commonly attributed to King John, mostly because little earlier documentary evidence survives, much having been destroyed; however, the divisions may have had an earlier origin. These counties were, in Leinster: Carlow (also known as Catherlogh), Dublin, Kildare, Kilkenny, Louth (also known as Uriel), Meath and Wexford; and in Munster: Cork, Kerry, Limerick, Tipperary and Waterford. It is thought that these counties did not have the administrative purpose later attached to them until late in the reign of King John, and that no new counties were created until the Tudor dynasty.

The most important office in the counties that were palatine was that of seneschal. In the liberties that came under Crown control, this office was held by a sheriff. The sovereign could and did appoint sheriffs in palatines; however, their power was confined to the church lands, and they became known as sheriffs of a County of the Cross, of which there seem to have been as many in Ireland as there were counties palatine. The exact boundaries of the liberties and shrievalties appear to have been in constant flux throughout the Plantagenet period, seemingly in line with the extent of English control. For example, in 1297 it is recorded that Kildare had extended to include the lands that now comprise the modern-day counties of Offaly, Laois (Leix) and Wicklow (Arklow). Some attempts had also been made to extend the county system to Ulster. However, the Bruce invasion of Ireland in 1315 resulted in the collapse of effective English rule in Ireland, with the land controlled by the crown continually shrinking to encompass Dublin and parts of Meath, Louth and Kildare. Throughout the rest of Ireland, English rule was upheld by the earls of Desmond, Ormond and Kildare (all created in the 14th century), making the extension of the county system all but impossible. During the reign of Edward III (1327-77) all franchises, grants and liberties were temporarily revoked, with power passing from the seneschals to the king's sheriffs. This may have been due to the disorganisation caused by the Bruce invasion, as well as the Connaught Burkes' renunciation of their allegiance to the crown.

The Earls of Ulster divided their territory into counties; however, these are not considered part of the Crown's shiring of Ireland. In 1333, the Earldom of Ulster is recorded as consisting of seven counties: Antrim, Blathewyc, Cragferus, Coulrath, del Art, Dun (also known as Ladcathel), and Twescard. As the original lordships and palatine counties passed to the Crown, the number of Counties of the Cross declined, and only one, Tipperary, survived into the Stuart era; the others had ceased to exist by the reign of Henry VIII. It was not until the Tudors, specifically the reign of Henry VIII (1509-47), that crown control started to once again extend throughout Ireland.
Having declared himself King of Ireland in 1541, Henry VIII set about converting Irish chiefs into feudal subjects of the crown, with land divided into districts that were eventually amalgamated into the modern counties. County boundaries were still ill-defined, but in 1543 Meath was split into Meath and Westmeath. Around 1545, the Byrnes and O'Tooles, two native septs who had been a constant source of trouble for the English administration of the Pale, petitioned the Lord Deputy of Ireland to turn their district into a county of its own, Wicklow; the petition, however, was ignored. During the reigns of the last two Tudor monarchs, Mary I (1553-58) and Elizabeth I (1558-1603), the majority of the work of founding the modern counties was carried out under the auspices of three Lord Deputies: Thomas Radclyffe, 3rd Earl of Sussex; Sir Henry Sydney; and Sir John Perrot.

Mary's reign saw the first addition of genuinely new counties since the reign of King John. Radclyffe had conquered the districts of Glenmaliry, Irry, Leix, Offaly and Slewmargy from the O'Moores and O'Connors, and in 1556 a statute decreed that Offaly and part of Glenmaliry would be made into King's County, while the rest of Glenmaliry, along with Irry, Leix and Slewmargy, was formed into Queen's County. Radclyffe brought forth legislation to shire all land as yet unshired throughout Ireland and sought to divide the island into six parts: Connaught, Leinster, Meath, Nether Munster, Ulster and Upper Munster. However, his administration in Ireland was cut short, and it was not until the reign of Mary's successor, Elizabeth, that this legislation was re-adopted, with Radclyffe brought back to implement it.

During his three tenures as Lord Deputy, Sydney created two presidencies to administer Connaught and Munster. He shired Connaught into the counties of Galway, Mayo, Roscommon and Sligo. In 1565 the territory of the O'Rourkes within Roscommon was made into the county of Leitrim. In an attempt to reduce the importance of the province of Munster, Sydney, using the River Shannon as a natural boundary, took the former kingdom of Thomond (North Munster) and made it into the county of Clare, as part of the presidency of Connaught, in 1569. A commission headed by Perrot and others declared in 1571 that the territory of Desmond in Munster was to be made a county of itself, and it had its own sheriff appointed; in 1606, however, it was merged into the county of Kerry. In 1575 Sydney made an expedition to Ulster to plan its shiring, but nothing came of it. In 1578 the go-ahead was given for turning the districts of the Byrnes and O'Tooles into the county of Wicklow; however, with the outbreak of war in Munster and then Ulster, those districts resumed their independence. Sydney also sought to split Wexford into two smaller counties, the northern half of which was to be called Ferns, but the matter was dropped as it was considered impossible to administer properly. In 1583, however, the territory of the O'Farrells of Annaly, which lay within Westmeath, was formed into the county of Longford and transferred to Connaught. The Desmond rebellion (1579-83) in Munster halted Sydney's work, and by the time it had been put down, Sir John Perrot had become Lord Deputy, having been appointed in 1584. Perrot would be best remembered for shiring the only province of Ireland that remained effectively outside English control: Ulster.
Prior to his tenure, the only proper county in Ulster was Louth, which had been part of the Pale. There were two other long-recognised entities north of Louth, Antrim and Down, which had at one time been "counties" of the Earldom of Ulster and were regarded as distinct from the unreformed parts of the province. The date at which Antrim and Down were constituted as counties is unknown. Perrot was recalled in 1588, and for two decades the shiring of Ulster existed largely on paper, as the territory affected remained firmly outside English control until the defeat of Hugh O'Neill, Earl of Tyrone, in the Nine Years' War. The counties so designed were Armagh, Cavan, Coleraine, Donegal, Fermanagh, Monaghan and Tyrone. Cavan, formed from the territory of the O'Reillys of East Breifne in 1584, had been transferred from Connaught to Ulster. After O'Neill and his allies fled Ireland in 1607 in the Flight of the Earls, their lands were escheated to the Crown, and the county divisions designed by Perrot were used as the basis for the grants of the subsequent Plantation of Ulster effected by King James I, which officially started in 1609.

Around 1600, near the end of Elizabeth's reign, Clare was made an entirely distinct presidency of its own under the Earls of Thomond and would not return to being part of Munster until after the Restoration in 1660. It was not until the subjugation of the Byrnes and O'Tooles by Lord Deputy Sir Arthur Chichester that Wicklow was finally shired, in 1606. This county was one of the last to be created, yet lay closest to the centre of English power in Ireland. County Londonderry was incorporated in 1613 by the merger of County Coleraine with the barony of Loughinsholin (in County Tyrone), the North West Liberties of Londonderry (in County Donegal), and the North East Liberties of Coleraine (in County Antrim).

Throughout the Elizabethan era and the reign of her successor James I, the exact boundaries of the provinces and the counties they comprised remained uncertain. In 1598, Meath was still considered a province in Hayne's "Description of Ireland", which included within it the counties of Cavan, East Meath, Longford and Westmeath. This contrasts with George Carew's 1602 survey, which recognised only four provinces, placed Longford in Connaught, did not mention Cavan at all, and mentioned only three counties for Ulster. During Perrot's tenure as Lord President of Munster, before he became Lord Deputy, Munster contained as many as eight counties rather than the six it later consisted of: the five English counties of Cork, Limerick, Kerry, Tipperary and Waterford, and the three Irish counties of Desmond, Ormond and Thomond. Perrot's divisions in Ulster were in the main confirmed by a series of inquisitions between 1606 and 1610 that settled the demarcation of the counties of Connaught and Ulster. John Speed's "Description of the Kingdom of Ireland" of 1610 showed that there was still vagueness over which counties constituted the provinces; however, Meath was no longer reckoned a province. By 1616, when the Attorney General for Ireland Sir John Davies departed Ireland, almost all counties had been delimited. The only exception was the county of Tipperary, which still belonged to the palatinate of Ormond.
Tipperary would remain an anomaly, being in effect two counties, one palatine and the other of the Cross, until 1715, during the reign of King George I, when an act abolished the "royalties and liberties of the County of Tipperary" and decreed "that whatsoever hath been denominated or called Tipperary or Cross Tipperary, shall henceforth be and remain one county for ever, under the name of the County of Tipperary."

To correspond with the subdivision of the English shires into honours or baronies, Irish counties were granted out to the Anglo-Norman noblemen in cantreds, later known as baronies, which in turn were subdivided, as in England, into parishes. Parishes were composed of townlands. In many cases, however, these divisions correspond to earlier, pre-Norman, divisions. While there are 331 baronies in Ireland, and more than a thousand civil parishes, there are around sixty thousand townlands, ranging in size from one to several thousand hectares. Townlands were often traditionally divided into smaller units called "quarters", but these subdivisions are not legally defined.

A number of towns and cities had charters specifically granting them the status of a county corporate. The only entirely new counties created in 1898 were the county boroughs of Londonderry and Belfast. Carrickfergus, Drogheda and Kilkenny were abolished; Galway was also abolished, but recreated in 1986. The regional presidencies of Connacht and Munster remained in existence until 1672, with special powers over their subsidiary counties. Tipperary remained a county palatine until the passing of the County Palatine of Tipperary Act 1715, with different officials and procedures from other counties. Dublin, meanwhile, had until the 19th century ecclesiastical liberties with rules outside those applying to the rest of Dublin city and county. Exclaves of the county of Dublin existed in counties Kildare and Wicklow, and at least eight other enclaves of one county inside another, or between two others, existed. The various enclaves and exclaves were merged into neighbouring and surrounding counties, primarily in the mid-19th century, under a series of Orders in Council.

The Church of Ireland exercised functions at the level of the civil parish that would later be exercised by county authorities. Vestigial feudal power structures of the major old estates remained well into the 18th century, and urban corporations operated under individual royal charters. Management of counties came to be exercised by grand juries. Members of grand juries were the local payers of rates, who historically held judicial functions and took on maintenance roles in regard to roads and bridges and the collection of "county cess" taxes. They were usually composed of wealthy "country gentlemen" (i.e. landowners, farmers and merchants): "A country gentleman as a member of a Grand Jury...levied the local taxes, appointed the nephews of his old friends to collect them, and spent them when they were gathered in. He controlled the boards of guardians and appointed the dispensary doctors, regulated the diet of paupers, inflicted fines and administered the law at petty sessions."

The counties were initially used for judicial purposes, but began to take on some governmental functions in the 17th century, notably with grand juries. In 1836, the use of counties as local government units was further developed, with grand-jury powers extended under the Grand Jury (Ireland) Act 1836.
The traditional county of Tipperary was split into two judicial counties (or ridings) following the establishment of assize courts in 1838. Also in that year, local poor law boards, with a mix of magistrates and elected "guardians", took over the health and social welfare functions of the grand juries. Sixty years later, a more radical reorganisation of local government took place with the passage of the Local Government (Ireland) Act 1898, which established a county council for each of the thirty-three Irish administrative counties. Elected county councils took over the powers of the grand juries. The boundaries of the traditional counties changed on a number of occasions. The 1898 Act changed the boundaries of Counties Galway, Clare, Mayo, Roscommon, Sligo, Waterford, Kilkenny, Meath and Louth, among others. County Tipperary was divided into two regions, the North Riding and the South Riding. Areas of the cities of Belfast, Cork, Dublin, Limerick, Derry and Waterford were carved from their surrounding counties to become county boroughs in their own right, with powers equivalent to those of administrative counties.

Under the Government of Ireland Act 1920, the island was partitioned between Southern Ireland and Northern Ireland. For the purposes of the Act, "... Northern Ireland shall consist of the parliamentary counties of Antrim, Armagh, Down, Fermanagh, Londonderry and Tyrone, and the parliamentary boroughs of Belfast and Londonderry, and Southern Ireland shall consist of so much of Ireland as is not comprised within the said parliamentary counties and boroughs." The county and county borough borders were thus used to determine the line of partition. Southern Ireland shortly afterwards became the Irish Free State. The partition was entrenched in the Anglo-Irish Treaty, ratified in 1922, by which the Irish Free State left the United Kingdom; Northern Ireland opted out of the new state two days later.

Under the Local Government Provisional Order Confirmation Act 1976, part of the urban area of Drogheda which lay in County Meath was transferred to County Louth on 1 January 1977, slightly increasing the land area of County Louth at the expense of County Meath. The possibility of a similar transfer with regard to Waterford city has been raised in recent years, though opposition from Kilkenny has been strong.

Areas that were shired by 1607 and continued as counties until the local government reforms of 1836, 1898 and 2001 are sometimes referred to as "traditional" or "historic" counties. These were distinct from the counties corporate that existed in some of the larger towns and cities, although linked to the county at large for other purposes. From 1898 to 2001, areas with county councils were known as administrative counties, while the counties corporate were designated as county boroughs. In some cases, the "traditional" county was divided to form two administrative counties, and from 2001 certain administrative counties, originally "traditional" counties, underwent further splitting. In the Republic of Ireland the traditional counties are, in general, the basis for local government, planning and community development purposes, are governed by county councils, and are still generally respected for other purposes. Administrative borders have been altered to allocate various towns, originally split between two counties, exclusively into one county.
There are now 26 county councils, three city councils and two city and county councils, a total of 31 local government areas. County Tipperary was split into North and South Ridings in 1838; these ridings were established as separate administrative counties under the Local Government (Ireland) Act 1898. The Local Government Reform Act 2014 abolished North Tipperary and South Tipperary and re-established a single County Tipperary. County Dublin was abolished as an administrative county in 1994, while remaining a point of reference for purposes other than local government; its territory was divided into three administrative counties: Dún Laoghaire-Rathdown, Fingal and South Dublin. The county borough of Dublin, together with the county boroughs of Cork, Galway, Limerick and Waterford, was re-styled as a city council under the Local Government Act 2001, with the same status in law as a county council. The city councils of Limerick and Waterford were merged with their respective county councils by the Local Government Reform Act 2014 to form new city and county councils. The city of Kilkenny does not have a "city council", as it was a borough but not a county borough; it is now administered by its eponymous county council but is, exceptionally, permitted to retain the style of "city" for ornament only. Of the administrative structures established under the 1898 Act, the only type to have been completely abolished is the rural district, which was rendered void in the early years of the Irish Free State amidst widespread allegations of corruption.

At a level above that of the LAU is the region, which clusters counties together for NUTS purposes. The regions are administered by regional authorities, which were established by the Local Government Act 1991 and came into existence in 1994. In 2013, Education and Training Boards (ETBs) were formed throughout the Republic of Ireland, replacing the system of Vocational Education Committees (VECs) created in 1930. Originally, VECs were formed for each administrative county and county borough, and also in a number of larger towns; in 1997 the majority of town VECs were absorbed by the surrounding county. The 33 VEC areas were reduced to 16 ETB areas, each consisting of one or more local government counties or cities. The institute of technology system was organised around the committee areas, or "functional areas". These still have legal standing but are less important than originally envisioned, as the institutes are now more national in character; today the functional areas are really applied only when selecting governing councils. Similarly, Dublin Institute of Technology was originally a group of several colleges under the City of Dublin committee.

Where possible, Dáil constituencies follow county boundaries. Under the Electoral Act 1997, a Constituency Commission is established following the publication of census figures every five years. The Commission is charged with defining constituency boundaries, and the 1997 Act provides that "the breaching of county boundaries shall be avoided as far as practicable". This provision does not apply to the boundaries between cities and counties, or between the three counties in the Dublin area. This system usually results in more populated counties having several constituencies: Dublin, including Dublin city, is subdivided into twelve constituencies, and Cork into five. On the other hand, smaller counties such as Carlow and Kilkenny, or Laois and Offaly, may be paired to form constituencies.
An extreme case is the splitting of Ireland's least populated county, Leitrim, between the constituencies of Sligo-North Leitrim and Roscommon-South Leitrim. Each county or city is divided into local electoral areas for the election of councillors. The boundaries of the areas, and the number of councillors assigned, are fixed from time to time by order of the Minister for Housing, Planning and Local Government, following a report by the Local Government Commission and based on population changes recorded in the census.

In Northern Ireland, a major reorganisation of local government in 1973 replaced the six traditional counties and two county boroughs (Belfast and Derry) with 26 single-tier districts for local government purposes. In 2015, as a result of a reform process that started in 2005, these districts were merged to form 11 new single-tier "super districts". The six traditional counties remain in use for some purposes, including the three-letter coding of vehicle number plates, the Royal Mail Postcode Address File (which records counties in all addresses, although they are no longer required for postcoded mail) and Lord Lieutenancies (for which the former county boroughs are also used). There are no longer official "county towns". However, the counties are still very widely acknowledged, for example as administrative divisions for sporting and cultural organisations.

The administrative division of the island along the lines of the traditional 32 counties was also adopted by non-governmental and cultural organisations. In particular, the Gaelic Athletic Association continues to organise its activities on the basis of GAA counties that, throughout the island, correspond almost exactly to the 32 traditional counties in use at the time of the organisation's foundation in 1884. The GAA also uses the term "county" for some of its organisational units in Britain and further afield.

The 35 divisions listed below include the "traditional" counties of Ireland as well as those created or re-created after the 19th century. Twenty-four counties still delimit the remit of local government divisions in the Republic of Ireland (in some cases with slightly redrawn boundaries); in Northern Ireland, the counties listed no longer serve this purpose. The Irish-language names of counties in the Republic of Ireland are prescribed by ministerial order, which, in the case of three newer counties, omits the word "contae" (county). Irish names form the basis for all English-language county names except Waterford, Wexford and Wicklow, which are of Norse origin. In the "Region" column of the table below, except for the six Northern Ireland counties, the reference is to the NUTS 3 statistical regions of the Republic of Ireland. "County town" is the current or former administrative capital of the county. Cities which, in the Republic, are currently administered outside the county system, but with the same legal status as administrative counties, are not shown separately: these are Cork, Dublin and Galway. Also omitted are the former county boroughs of Londonderry and Belfast, which in Northern Ireland had the same legal status as the six counties until the reorganisation of local government in 1973. County Dublin, which was officially abolished in 1994, is included, as are the three new administrative counties which took over the functions of the defunct Dublin County Council.
https://en.wikipedia.org/wiki?curid=15033
Information Sciences Institute The USC Information Sciences Institute (ISI) is a component of the University of Southern California (USC) Viterbi School of Engineering, and specializes in research and development in information processing, computing, and communications technologies. It is located in Marina del Rey, California. ISI actively participated in the information revolution, and it played a leading role in developing and managing the early Internet and its predecessor, ARPANET. The Institute conducts basic and applied research supported by more than 20 U.S. government agencies involved in defense, science, health, homeland security, energy and other areas. Annual funding is about $100 million. ISI employs about 350 research scientists, research programmers, graduate students and administrative staff at its Marina del Rey, California headquarters and in Arlington, Virginia. About half of the research staff hold PhD degrees, and about 40 are research faculty who teach at USC and advise graduate students. Several senior researchers are tenured USC faculty in the Viterbi School.

ISI research spans artificial intelligence (AI), cybersecurity, grid computing, cloud computing, quantum computing, microelectronics, supercomputing, nano-satellites and many other areas. AI expertise includes natural language processing, in which ISI has an international reputation, reconfigurable robotics, information integration, motion analysis and social media analysis. Hardware/software expertise includes cyber-physical system security, data mining, reconfigurable computing and cloud computing. In networking, ISI explores Internet resilience, Internet traffic analysis and photonics, among other areas. Researchers also work in scientific data management, wireless technologies, biomimetics and the electrical smart grid, in which ISI is advising the Los Angeles Department of Water and Power on a major demonstration project. Another current initiative involves big-data brain imaging, jointly with the Keck School of Medicine of USC.

Federal agency sponsors include the Air Force Office of Scientific Research, Department of Defense Advanced Research Projects Agency, Department of Education, Department of Energy, Department of Homeland Security, National Institutes of Health, National Science Foundation, and other scientific, technical, and defense-related agencies. Corporate partners include Chevron Corp. in the Center for Interactive Smart Oilfield Technologies (CiSoft), Lockheed Martin Corporation in the USC-Lockheed Martin Quantum Computing Center, and Parsons Corp. subsidiary Sparta Inc. in the DETER Project, a cybersecurity research initiative and international testbed. ISI also has partnered with businesses including IBM Corporation, Samsung Electronics Company, the Raytheon Company, GlobalFoundries Inc., Northrop Grumman Corporation and Carl Zeiss AG, and currently is working with Micron Technology, Inc., Altera Corporation and Fujitsu Ltd.

ISI also operates the Metal Oxide Semiconductor Implementation Service (MOSIS), a multi-project electronic circuit wafer service that has prototyped more than 60,000 chips since 1981. MOSIS provides design tools and pools circuit designs to produce specialty and low-volume chips for corporations, universities and other research entities worldwide. The Institute also has given rise to several startup and spinoff companies in grid software, geospatial information fusion, machine translation, data integration and other technologies.
ISI was founded by Keith Uncapher, who headed the computer research group at the RAND Corporation in the 1960s and early 1970s. Uncapher decided to leave RAND after his group's funding was cut in 1971. He approached the University of California at Los Angeles about creating an off-campus technology institute, but was told that a decision would take 15 months. He then presented the concept to USC, which approved the proposal in five days. ISI was launched with three employees in 1972. Its first proposal was funded by the Defense Advanced Research Projects Agency (DARPA) in 30 days for $6 million.

ISI became one of the earliest nodes on ARPANET, the predecessor to the Internet, and in 1977 figured prominently in a demonstration of its international viability. ISI also helped refine the TCP/IP communications protocols fundamental to Net operations, and researcher Paul Mockapetris developed the now-familiar Domain Name System characterized by .com, .org, .net, .gov and .edu, on which the Net still operates. (The names .com, .org et al. were invented at SRI International, an ongoing collaborator.) Steve Crocker originated the Request for Comments (RFC) series, the written record of the network's technical structure and operation that both documented and shaped the emerging Internet. Another ISI researcher, Danny Cohen, became the first to implement packet voice and packet video over ARPANET, demonstrating the viability of packet switching for real-time applications. Jonathan Postel collaborated in the development of TCP/IP, DNS and the SMTP protocol that supports email. He also edited the RFC series for nearly three decades until his sudden death in 1998, when ISI colleagues assumed responsibility; the Institute retained that role until 2009. Postel simultaneously directed the Internet Assigned Numbers Authority (IANA) and its predecessor, which assign Internet addresses. IANA was administered from ISI until a nonprofit organization, ICANN, was created for that purpose in 1998. Some of the first Net security applications, and one of the world's first portable computers, also originated at ISI. ISI researchers also created or co-created numerous other widely used technologies.

In 2011, several ISI natural language experts advised the IBM team that created Watson, the computer that became the first machine to win against human competitors on the "Jeopardy!" TV show. In 2012, ISI's Kevin Knight spearheaded a successful drive to crack the Copiale cipher, a lengthy encrypted manuscript that had remained unreadable for 250 years. Also in 2012, the USC-Lockheed Martin Quantum Computing Center (QCC) became the first organization to operate a quantum annealing system outside of its manufacturer, D-Wave Systems, Inc. USC, ISI and Lockheed Martin are now performing basic and applied research into quantum computing. A second quantum annealing system is located at NASA Ames Research Center and is operated jointly by NASA and Google.

The USC Andrew and Erna Viterbi School of Engineering was ranked among the nation's top 10 engineering graduate schools by "US News & World Report" in 2015. Including ISI, USC is ranked first nationally in federal computer science research and development expenditures. ISI is organized into six divisions focused on differing areas of research expertise, with smaller, specialized research groups operating within almost all divisions. ISI is led by Executive Director Prem Natarajan, previously an executive vice president and principal scientist at Raytheon BBN Technologies.
He is a natural language specialist with research interests that focus on optical character recognition, speech processing, and multimedia analysis. Natarajan joined ISI in 2013, succeeding USC Viterbi School vice dean John O'Brien, who served as interim executive director in 2012 and 2013. From 1988 to 2012, ISI was led by former IBM executive Herbert Schorr.
https://en.wikipedia.org/wiki?curid=15034
Information security Information security, sometimes shortened to infosec, is the practice of protecting information by mitigating information risks. It is part of information risk management. It typically involves preventing, or at least reducing the probability of, unauthorized or inappropriate access to data, or the unlawful use, disclosure, disruption, deletion, corruption, modification, inspection, recording or devaluation of information. It also involves actions intended to reduce the adverse impacts of such incidents. Protected information may take any form, e.g. electronic or physical, tangible (e.g. paperwork) or intangible (e.g. knowledge). Information security's primary focus is the balanced protection of the confidentiality, integrity and availability of data (also known as the CIA triad) while maintaining a focus on efficient policy implementation, all without hampering organization productivity. This is largely achieved through a structured risk management process that involves identifying information and related assets, evaluating the associated threats, vulnerabilities and impacts, and deciding how to treat the resulting risks.

To standardize this discipline, academics and professionals collaborate to offer guidance, policies, and industry standards on passwords, antivirus software, firewalls, encryption software, legal liability, security awareness and training, and so forth. This standardization may be further driven by a wide variety of laws and regulations that affect how data is accessed, processed, stored, transferred and destroyed. However, the implementation of any standards and guidance within an entity may have limited effect if a culture of continual improvement is not adopted. Various definitions of information security have been proposed, summarized from different sources.

At the core of information security is information assurance, the act of maintaining the confidentiality, integrity and availability (CIA) of information, ensuring that information is not compromised in any way when critical issues arise. These issues include, but are not limited to, natural disasters, computer or server malfunction, and physical theft. While paper-based business operations are still prevalent, requiring their own set of information security practices, enterprise digital initiatives are increasingly being emphasized, with information assurance now typically handled by information technology (IT) security specialists. These specialists apply information security to technology (most often some form of computer system). It is worth noting that a computer does not necessarily mean a home desktop: a computer is any device with a processor and some memory. Such devices can range from non-networked standalone devices as simple as calculators to networked mobile computing devices such as smartphones and tablet computers. IT security specialists are almost always found in any major enterprise or establishment due to the nature and value of the data within larger businesses. They are responsible for keeping all of the technology within the company secure from malicious cyber attacks that often attempt to acquire critical private information or gain control of internal systems.

The field of information security has grown and evolved significantly in recent years. It offers many areas for specialization, including securing networks and allied infrastructure, securing applications and databases, security testing, information systems auditing, business continuity planning, electronic record discovery, and digital forensics. Information security professionals are very stable in their employment:
more than 80 percent of professionals had no change in employer or employment over a period of a year, and the number of professionals is projected to grow by more than 11 percent annually from 2014 to 2019.

Information security threats come in many different forms. Some of the most common threats today are software attacks, theft of intellectual property, identity theft, theft of equipment or information, sabotage, and information extortion. Most people have experienced software attacks of some sort; viruses, worms, phishing attacks and Trojan horses are a few common examples. The theft of intellectual property has also been an extensive issue for many businesses in the information technology (IT) field. Identity theft is the attempt to act as someone else, usually to obtain that person's personal information or to take advantage of their access to vital information through social engineering. Theft of equipment or information is becoming more prevalent because most devices today are mobile, prone to theft, and far more desirable targets as their data capacity increases. Sabotage usually consists of the destruction of an organization's website in an attempt to cause loss of confidence on the part of its customers. Information extortion consists of theft of a company's property or information as an attempt to receive a payment in exchange for returning the information or property to its owner, as with ransomware. There are many ways to help protect against these attacks, but one of the most effective precautions is periodic user awareness training: the number one threat to any organisation is its own users or internal employees, also called insider threats.

Governments, military, corporations, financial institutions, hospitals, non-profit organisations and private businesses amass a great deal of confidential information about their employees, customers, products, research and financial status. Should confidential information about a business's customers, finances or new product line fall into the hands of a competitor or a black hat hacker, the business and its customers could suffer widespread, irreparable financial loss, as well as damage to the company's reputation. From a business perspective, information security must be balanced against cost; the Gordon-Loeb Model provides a mathematical economic approach for addressing this concern. For the individual, information security has a significant effect on privacy, which is viewed very differently in various cultures. Possible responses to a security threat or risk include reducing or mitigating it, transferring it (for example by purchasing insurance), or accepting it, as discussed further below.

Since the early days of communication, diplomats and military commanders understood that it was necessary to provide some mechanism to protect the confidentiality of correspondence and to have some means of detecting tampering. Julius Caesar is credited with the invention of the Caesar cipher c. 50 B.C., created to prevent his secret messages from being read should a message fall into the wrong hands; for the most part, however, protection was achieved through the application of procedural handling controls. Sensitive information was marked up to indicate that it should be protected and transported by trusted persons, guarded, and stored in a secure environment or strong box. As postal services expanded, governments created official organizations to intercept, decipher, read and reseal letters (e.g., the U.K.'s Secret Office, founded in 1653).
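The Caesar cipher mentioned above simply shifts each letter a fixed number of places through the alphabet. A minimal sketch in Python, assuming the traditional shift of three (decryption is just the inverse shift):

# Caesar cipher: shift each letter a fixed number of places; leave other
# characters (spaces, punctuation) unchanged.
def caesar(text: str, shift: int) -> str:
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)
    return "".join(result)

print(caesar("ATTACK AT DAWN", 3))   # DWWDFN DW GDZQ
print(caesar("DWWDFN DW GDZQ", -3))  # ATTACK AT DAWN

The weakness is obvious: with only 25 possible shifts, the cipher falls to trial and error, which is why, as the text notes, most protection in that era rested on procedural handling controls rather than on the cipher itself.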
In the mid-nineteenth century more complex classification systems were developed to allow governments to manage their information according to its degree of sensitivity. For example, the British Government codified this, to some extent, with the publication of the Official Secrets Act in 1889. By the time of the First World War, multi-tier classification systems were used to communicate information to and from various fronts, which encouraged greater use of code-making and code-breaking sections in diplomatic and military headquarters. Encoding became more sophisticated between the wars as machines were employed to scramble and unscramble information. The volume of information shared by the Allied countries during the Second World War necessitated formal alignment of classification systems and procedural controls. An arcane range of markings evolved to indicate who could handle documents (usually officers rather than enlisted troops) and where they should be stored, as increasingly complex safes and storage facilities were developed. The Enigma machine, which the Germans employed to encrypt wartime communications and which was famously broken by Allied cryptanalysts, including Alan Turing, can be regarded as a striking example of creating and using secured information. Procedures evolved to ensure documents were destroyed properly, and it was the failure to follow these procedures which led to some of the greatest intelligence coups of the war (e.g., the capture of U-570).

The end of the twentieth century and the early years of the twenty-first century saw rapid advancements in telecommunications, computing hardware and software, and data encryption. The availability of smaller, more powerful and less expensive computing equipment brought electronic data processing within the reach of small businesses and the home user. These computers quickly became interconnected through the internet. The rapid growth and widespread use of electronic data processing and electronic business conducted through the internet, along with numerous occurrences of international terrorism, fueled the need for better methods of protecting the computers and the information they store, process and transmit. The academic disciplines of computer security and information assurance emerged, along with numerous professional organizations, all sharing the common goals of ensuring the security and reliability of information systems.

The CIA triad of confidentiality, integrity, and availability is at the heart of information security. (The members of the classic InfoSec triad are interchangeably referred to in the literature as security attributes, properties, security goals, fundamental aspects, information criteria, critical information characteristics and basic building blocks.) However, debate continues about whether this triad is sufficient to address rapidly changing technology and business requirements, with recommendations to consider expanding on the intersections between availability and confidentiality, as well as the relationship between security and privacy. Other principles such as "accountability" have sometimes been proposed; it has been pointed out that issues such as non-repudiation do not fit well within the three core concepts. The triad seems to have first been mentioned in a NIST publication in 1977.
First issued in 1992 and revised in 2002, the OECD's "Guidelines for the Security of Information Systems and Networks" proposed nine generally accepted principles: awareness, responsibility, response, ethics, democracy, risk assessment, security design and implementation, security management, and reassessment. Building upon those, in 2004 the NIST's "Engineering Principles for Information Technology Security" proposed 33 principles, from each of which guidelines and practices have been derived. In 1998, Donn Parker proposed an alternative model for the classic CIA triad that he called the six atomic elements of information: confidentiality, possession, integrity, authenticity, availability, and utility. The merits of the Parkerian Hexad are a subject of debate amongst security professionals. In 2011, The Open Group published the information security management standard O-ISM3. This standard proposed an operational definition of the key concepts of security, with elements called "security objectives" related to access control (9), availability (3), data quality (1), and compliance and technical matters (4). In 2009, the DoD Software Protection Initiative released the Three Tenets of Cybersecurity: System Susceptibility, Access to the Flaw, and Capability to Exploit the Flaw. None of these alternative models has been widely adopted.

In information security, confidentiality "is the property, that information is not made available or disclosed to unauthorized individuals, entities, or processes." While similar to "privacy", the two words are not interchangeable: confidentiality is a component of privacy that works to protect data from unauthorized viewers. Examples of confidentiality of electronic data being compromised include laptop theft, password theft, and sensitive emails being sent to the wrong individuals.

In information security, data integrity means maintaining and assuring the accuracy and completeness of data over its entire lifecycle. This means that data cannot be modified in an unauthorized or undetected manner. This is not the same thing as referential integrity in databases, although it can be viewed as a special case of consistency as understood in the classic ACID model of transaction processing. Information security systems typically provide message integrity alongside confidentiality.

For any information system to serve its purpose, the information must be available when it is needed. This means the computing systems used to store and process the information, the security controls used to protect it, and the communication channels used to access it must all function correctly. High-availability systems aim to remain available at all times, preventing service disruptions due to power outages, hardware failures, and system upgrades. Ensuring availability also involves preventing denial-of-service attacks, such as a flood of incoming messages to the target system, essentially forcing it to shut down. In the realm of information security, availability can often be viewed as one of the most important parts of a successful information security program. Ultimately, end-users need to be able to perform their job functions; by ensuring availability, an organization is able to perform to the standards its stakeholders expect. This can involve topics such as proxy configurations, outside web access, the ability to access shared drives, and the ability to send emails.
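In practice, the "undetected modification" aspect of integrity described above is commonly checked with a cryptographic hash: a digest recorded when data is stored will no longer match if the data is later altered. A minimal sketch using Python's standard hashlib module (the data values are invented for illustration):

import hashlib

def sha256_of(data: bytes) -> str:
    """Return the SHA-256 digest of data as a hex string."""
    return hashlib.sha256(data).hexdigest()

original = b"quarterly-report-v1"
expected = sha256_of(original)       # recorded when the data was stored

received = b"quarterly-report-v1"    # the data as read back later
if sha256_of(received) == expected:
    print("integrity check passed")
else:
    print("data was modified or corrupted")

Note that a bare hash detects accidental corruption but not deliberate tampering by someone who can also replace the stored digest; for that, the digest itself must be protected, for example by a keyed MAC or a digital signature.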
On the organizational side of availability, executives often do not understand the technical side of information security and look at availability as an easy fix, but it frequently requires collaboration from many different organizational teams, such as network operations, development operations, incident response, and policy/change management. A successful information security team involves many different key roles meshing and aligning for the CIA triad to be provided effectively.

In law, non-repudiation implies one's intention to fulfill one's obligations under a contract. It also implies that one party to a transaction cannot deny having received it, nor can the other party deny having sent it. It is important to note that while technology such as cryptographic systems can assist in non-repudiation efforts, the concept is at its core a legal concept transcending the realm of technology. It is not, for instance, sufficient to show that the message matches a digital signature signed with the sender's private key, and that therefore only the sender could have sent the message and nobody else could have altered it in transit (data integrity). The alleged sender could in return demonstrate that the digital signature algorithm is vulnerable or flawed, or allege or prove that his signing key has been compromised. The fault for these violations may or may not lie with the sender, and such assertions may or may not relieve the sender of liability, but the assertion would invalidate the claim that the signature necessarily proves authenticity and integrity. As such, the sender may repudiate the message (because authenticity and integrity are prerequisites for non-repudiation). The mechanics of signing and verification are illustrated in the sketch below.

The "Certified Information Systems Auditor (CISA) Review Manual 2006" provides the following definition of risk management: "Risk management is the process of identifying vulnerabilities and threats to the information resources used by an organization in achieving business objectives, and deciding what countermeasures, if any, to take in reducing risk to an acceptable level, based on the value of the information resource to the organization." There are two things in this definition that may need some clarification. First, the "process" of risk management is an ongoing, iterative process: it must be repeated indefinitely, because the business environment is constantly changing and new threats and vulnerabilities emerge every day. Second, the choice of countermeasures (controls) used to manage risks must strike a balance between productivity, cost, effectiveness of the countermeasure, and the value of the informational asset being protected. Risk analysis and risk evaluation processes have their limitations since, when security incidents occur, they emerge in a context, and their rarity and uniqueness give rise to unpredictable threats. The analysis of these phenomena, which are characterized by breakdowns, surprises and side-effects, requires a theoretical approach that is able to examine and interpret subjectively the detail of each incident.

Risk is the likelihood that something bad will happen that causes harm to an informational asset (or the loss of the asset). A vulnerability is a weakness that could be used to endanger or cause harm to an informational asset. A threat is anything (man-made or act of nature) that has the potential to cause harm. The likelihood that a threat will use a vulnerability to cause harm creates a risk. When a threat does use a vulnerability to inflict harm, it has an impact.
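The signing and verification mechanics referred to above can be made concrete. A minimal sketch using the third-party Python "cryptography" package (an assumption; any comparable signature library would do), with an invented message:

# Digital signature sketch with Ed25519, via the "cryptography" package
# (pip install cryptography). Names and message are illustrative only; a
# real deployment also needs key management, timestamps, and audit trails.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"I agree to pay 100 euro"
signature = private_key.sign(message)

# Verification shows the message matches a signature made with the private
# key -- but, as the text notes, not that the key was never compromised, so
# the signature alone does not settle the legal question of non-repudiation.
try:
    public_key.verify(signature, message)
    print("signature valid")
except InvalidSignature:
    print("signature invalid")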
In the context of information security, the impact is a loss of availability, integrity, and confidentiality, and possibly other losses (lost income, loss of life, loss of real property). It is not possible to identify all risks, nor is it possible to eliminate all risk. The remaining risk is called "residual risk." A risk assessment is carried out by a team of people who have knowledge of specific areas of the business. Membership of the team may vary over time as different parts of the business are assessed. The assessment may use a subjective qualitative analysis based on informed opinion, or, where reliable dollar figures and historical information are available, the analysis may use quantitative analysis. Research has shown that the most vulnerable point in most information systems is the human user, operator, designer, or other human. The ISO/IEC 27002:2005 Code of practice for information security management recommends that a broad range of areas, from security policy and asset management through access control to compliance, be examined during a risk assessment. In broad terms, the risk management process consists of identifying assets and the threats to them, assessing the resulting risks, and deciding how each risk is to be treated. For any given risk, management can choose to accept the risk based upon the relatively low value of the asset, the relatively low frequency of occurrence, and the relatively low impact on the business. Or, leadership may choose to mitigate the risk by selecting and implementing appropriate control measures to reduce the risk. In some cases, the risk can be transferred to another business by buying insurance or outsourcing to another business. The reality of some risks may be disputed. In such cases leadership may choose to deny the risk. Selecting and implementing proper security controls will initially help an organization bring down risk to acceptable levels. Control selection should follow and should be based on the risk assessment. Controls can vary in nature, but fundamentally they are ways of protecting the confidentiality, integrity or availability of information. ISO/IEC 27001 has defined controls in different areas. Organizations can implement additional controls according to the requirements of the organization. ISO/IEC 27002 offers a guideline for organizational information security standards. Administrative controls consist of approved written policies, procedures, standards and guidelines. Administrative controls form the framework for running the business and managing people. They inform people on how the business is to be run and how day-to-day operations are to be conducted. Laws and regulations created by government bodies are also a type of administrative control because they inform the business. Some industry sectors have policies, procedures, standards and guidelines that must be followed – the Payment Card Industry Data Security Standard (PCI DSS) required by Visa and MasterCard is such an example. Other examples of administrative controls include the corporate security policy, password policy, hiring policies, and disciplinary policies. Administrative controls form the basis for the selection and implementation of logical and physical controls. Logical and physical controls are manifestations of administrative controls, which are of paramount importance. Logical controls (also called technical controls) use software and data to monitor and control access to information and computing systems. Passwords, network and host-based firewalls, network intrusion detection systems, access control lists, and data encryption are examples of logical controls.
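As a minimal sketch of one such logical control, the following Python fragment implements a deny-by-default access control list; the users, file name, and rights are all hypothetical.

    # Hypothetical access control list: anything not explicitly granted is refused.
    ACL = {
        "payroll.xlsx": {"alice": {"read", "write"}, "bob": {"read"}},
    }

    def is_allowed(user: str, resource: str, action: str) -> bool:
        # Missing resources or users yield an empty rights set, hence denial.
        return action in ACL.get(resource, {}).get(user, set())

    assert is_allowed("bob", "payroll.xlsx", "read")
    assert not is_allowed("bob", "payroll.xlsx", "write")
    assert not is_allowed("mallory", "payroll.xlsx", "read")

Denying by default matters: the control fails closed, so an unlisted user or action is refused rather than silently permitted.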
An important logical control that is frequently overlooked is the principle of least privilege, which requires that an individual, program or system process not be granted any more access privileges than are necessary to perform the task. A blatant example of the failure to adhere to the principle of least privilege is logging into Windows as user Administrator to read email and surf the web. Violations of this principle can also occur when an individual collects additional access privileges over time. This happens when employees' job duties change, employees are promoted to a new position, or employees are transferred to another department. The access privileges required by their new duties are frequently added onto their already existing access privileges, which may no longer be necessary or appropriate. Physical controls monitor and control the environment of the work place and computing facilities. They also monitor and control access to and from such facilities and include doors, locks, heating and air conditioning, smoke and fire alarms, fire suppression systems, cameras, barricades, fencing, security guards, cable locks, etc. Separating the network and workplace into functional areas is also a physical control. An important physical control that is frequently overlooked is separation of duties, which ensures that an individual cannot complete a critical task alone. For example, an employee who submits a request for reimbursement should not also be able to authorize payment or print the check. An applications programmer should not also be the server administrator or the database administrator; these roles and responsibilities must be separated from one another. Information security must protect information throughout its lifespan, from the initial creation of the information on through to the final disposal of the information. The information must be protected while in motion and while at rest. During its lifetime, information may pass through many different information processing systems and through many different parts of information processing systems. There are many different ways the information and information systems can be threatened. To fully protect the information during its lifetime, each component of the information processing system must have its own protection mechanisms. The building up, layering on and overlapping of security measures is called "defense in depth." In contrast to a metal chain, which is famously only as strong as its weakest link, the defense in depth strategy aims at a structure where, should one defensive measure fail, other measures will continue to provide protection. Recall the earlier discussion about administrative controls, logical controls, and physical controls. The three types of controls can be used to form the basis upon which to build a defense in depth strategy. With this approach, defense in depth can be conceptualized as three distinct layers or planes laid one on top of the other. Additional insight into defense in depth can be gained by thinking of it as forming the layers of an onion, with data at the core of the onion, people the next outer layer of the onion, and network security, host-based security and application security forming the outermost layers of the onion. Both perspectives are equally valid, and each provides valuable insight into the implementation of a good defense in depth strategy.
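The layered idea can be sketched in code: access is granted only if every independent layer agrees, so the failure of a single control does not expose the data. The checks and request fields below are hypothetical, a minimal Python illustration rather than a real control stack.

    # Each function stands in for an independent defensive layer.
    def network_layer_ok(req):  return req.get("source_ip", "").startswith("10.")
    def host_layer_ok(req):     return req.get("authenticated", False)
    def app_layer_ok(req):      return req.get("role") in {"hr", "admin"}

    LAYERS = [network_layer_ok, host_layer_ok, app_layer_ok]

    def grant_access(request: dict) -> bool:
        # Defense in depth: every layer must pass; any one failure denies access.
        return all(layer(request) for layer in LAYERS)

    print(grant_access({"source_ip": "10.0.0.7", "authenticated": True, "role": "hr"}))    # True
    print(grant_access({"source_ip": "203.0.113.9", "authenticated": True, "role": "hr"})) # False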
An important aspect of information security and risk management is recognizing the value of information and defining appropriate procedures and protection requirements for the information. Not all information is equal and so not all information requires the same degree of protection. This requires information to be assigned a security classification. The first step in information classification is to identify a member of senior management as the owner of the particular information to be classified. Next, develop a classification policy. The policy should describe the different classification labels, define the criteria for information to be assigned a particular label, and list the required security controls for each classification. Some factors that influence which classification should be assigned include how much value the information has to the organization, how old the information is and whether or not the information has become obsolete. Laws and other regulatory requirements are also important considerations when classifying information. The Information Systems Audit and Control Association (ISACA) and its "Business Model for Information Security" also serve as a tool for security professionals to examine security from a systems perspective, creating an environment where security can be managed holistically, allowing actual risks to be addressed. The type of information security classification labels selected and used will depend on the nature of the organization, with common examples being labels such as "public", "internal", "confidential" and "top secret". All employees in the organization, as well as business partners, must be trained on the classification schema and understand the required security controls and handling procedures for each classification. The classification assigned to a particular information asset should be reviewed periodically to ensure the classification is still appropriate for the information and to ensure the security controls required by the classification are in place and are being followed correctly. Access to protected information must be restricted to people who are authorized to access the information. The computer programs, and in many cases the computers that process the information, must also be authorized. This requires that mechanisms be in place to control the access to protected information. The sophistication of the access control mechanisms should be in parity with the value of the information being protected; the more sensitive or valuable the information, the stronger the control mechanisms need to be. The foundation on which access control mechanisms are built starts with identification and authentication. Access control is generally considered in three steps: identification, authentication, and authorization. Identification is an assertion of who someone is or what something is. If a person makes the statement "Hello, my name is John Doe" they are making a claim of who they are. However, their claim may or may not be true. Before John Doe can be granted access to protected information it will be necessary to verify that the person claiming to be John Doe really is John Doe. Typically the claim is in the form of a username. By entering that username you are claiming "I am the person the username belongs to". Authentication is the act of verifying a claim of identity. When John Doe goes into a bank to make a withdrawal, he tells the bank teller he is John Doe, a claim of identity. The bank teller asks to see a photo ID, so he hands the teller his driver's license.
The bank teller checks the license to make sure it has John Doe printed on it and compares the photograph on the license against the person claiming to be John Doe. If the photo and name match the person, then the teller has authenticated that John Doe is who he claimed to be. Similarly, by entering the correct password, the user is providing evidence that he/she is the person the username belongs to. There are three different types of information that can be used for authentication: something you know (such as a password or PIN), something you have (such as a smart card or security token), and something you are (such as a fingerprint or other biometric). Strong authentication requires providing more than one type of authentication information (two-factor authentication). The username is the most common form of identification on computer systems today and the password is the most common form of authentication. Usernames and passwords have served their purpose, but they are increasingly inadequate. Usernames and passwords are slowly being replaced or supplemented with more sophisticated authentication mechanisms such as Time-based One-time Password algorithms. After a person, program or computer has successfully been identified and authenticated, it must be determined what informational resources they are permitted to access and what actions they will be allowed to perform (run, view, create, delete, or change). This is called authorization. Authorization to access information and other computing services begins with administrative policies and procedures. The policies prescribe what information and computing services can be accessed, by whom, and under what conditions. The access control mechanisms are then configured to enforce these policies. Different computing systems are equipped with different kinds of access control mechanisms. Some may even offer a choice of different access control mechanisms. The access control mechanism a system offers will be based upon one of three approaches to access control, or it may be derived from a combination of the three approaches. The non-discretionary approach consolidates all access control under a centralized administration. The access to information and other resources is usually based on the individual's function (role) in the organization or the tasks the individual must perform. The discretionary approach gives the creator or owner of the information resource the ability to control access to those resources. In the mandatory access control approach, access is granted or denied based upon the security classification assigned to the information resource. Examples of common access control mechanisms in use today include role-based access control, available in many advanced database management systems; simple file permissions provided in the UNIX and Windows operating systems; Group Policy Objects provided in Windows network systems; and Kerberos, RADIUS, TACACS, and the simple access lists used in many firewalls and routers. To be effective, policies and other security controls must be enforceable and upheld. Effective policies ensure that people are held accountable for their actions. The U.S. Treasury's guidelines for systems processing sensitive or proprietary information, for example, state that all failed and successful authentication and access attempts must be logged, and all access to information must leave some type of audit trail. Also, the need-to-know principle needs to be in effect when talking about access control. This principle gives access rights to a person to perform their job functions. This principle is used in the government when dealing with different clearances.
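The Time-based One-time Password algorithms mentioned above combine a shared secret (carried in a token or phone, i.e. "something you have") with the current time. The sketch below is a minimal RFC 6238-style implementation in Python using only the standard library; the Base32 secret shown is purely illustrative.

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
        """Compute a time-based one-time password (HMAC-SHA1, per RFC 6238)."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // interval           # 30-second time steps
        msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))   # illustrative secret; prints a 6-digit code

Because the code changes every 30 seconds, a stolen password alone no longer suffices, which is exactly the inadequacy of username/password authentication described above.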
Even though two employees in different departments have a top-secret clearance, they must have a need-to-know in order for information to be exchanged. Within the need-to-know principle, network administrators grant the employee the least amount of privilege to prevent employees from accessing more than they are supposed to. Need-to-know helps to enforce the confidentiality–integrity–availability triad; it directly impacts the confidentiality area of the triad. Information security uses cryptography to transform usable information into a form that renders it unusable by anyone other than an authorized user; this process is called encryption. Information that has been encrypted (rendered unusable) can be transformed back into its original usable form by an authorized user who possesses the cryptographic key, through the process of decryption. Cryptography is used in information security to protect information from unauthorized or accidental disclosure while the information is in transit (either electronically or physically) and while information is in storage. Cryptography provides information security with other useful applications as well, including improved authentication methods, message digests, digital signatures, non-repudiation, and encrypted network communications. Older, less secure applications such as Telnet and File Transfer Protocol (FTP) are slowly being replaced with more secure applications such as Secure Shell (SSH) that use encrypted network communications. Wireless communications can be encrypted using protocols such as WPA/WPA2 or the older (and less secure) WEP. Wired communications (such as ITU‑T G.hn) are secured using AES for encryption and X.1035 for authentication and key exchange. Software applications such as GnuPG or PGP can be used to encrypt data files and email. Cryptography can introduce security problems when it is not implemented correctly. Cryptographic solutions need to be implemented using industry-accepted solutions that have undergone rigorous peer review by independent experts in cryptography. The length and strength of the encryption key are also important considerations. A key that is weak or too short will produce weak encryption. The keys used for encryption and decryption must be protected with the same degree of rigor as any other confidential information. They must be protected from unauthorized disclosure and destruction, and they must be available when needed. Public key infrastructure (PKI) solutions address many of the problems that surround key management. The terms "reasonable and prudent person," "due care" and "due diligence" have been used in the fields of finance, securities, and law for many years. In recent years these terms have found their way into the fields of computing and information security. U.S. Federal Sentencing Guidelines now make it possible to hold corporate officers liable for failing to exercise due care and due diligence in the management of their information systems. In the business world, stockholders, customers, business partners and governments have the expectation that corporate officers will run the business in accordance with accepted business practices and in compliance with laws and other regulatory requirements. This is often described as the "reasonable and prudent person" rule. A prudent person takes due care to ensure that everything necessary is done to operate the business by sound business principles and in a legal, ethical manner.
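As a minimal sketch of encryption at rest using an industry-reviewed library (here the Python "cryptography" package's Fernet construction, which pairs AES encryption with an integrity check), note that the key generated below would need the same protection as the data itself, as the passage stresses.

    # Requires the third-party package: pip install cryptography
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # guard this key as rigorously as the data
    f = Fernet(key)

    token = f.encrypt(b"quarterly payroll figures")   # ciphertext, unusable without the key
    print(f.decrypt(token))                           # b'quarterly payroll figures'

Using a vetted construction like this, rather than assembling one's own cipher modes, is one practical application of the peer-review advice above.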
A prudent person is also diligent (mindful, attentive, ongoing) in their due care of the business. In the field of information security, Harris offers the following definitions of due care and due diligence: "Due care are steps that are taken to show that a company has taken responsibility for the activities that take place within the corporation and has taken the necessary steps to help protect the company, its resources, and employees," and, "[Due diligence are the] continual activities that make sure the protection mechanisms are continually maintained and operational." Attention should be paid to two important points in these definitions. First, in due care, steps are taken to show; this means that the steps can be verified, measured, or even produce tangible artifacts. Second, in due diligence, there are continual activities; this means that people are actually doing things to monitor and maintain the protection mechanisms, and these activities are ongoing. Organizations have a responsibility to practice duty of care when applying information security. The Duty of Care Risk Analysis Standard (DoCRA) provides principles and practices for evaluating risk. It considers all parties that could be affected by those risks. DoCRA helps evaluate whether safeguards are appropriate in protecting others from harm while presenting a reasonable burden. With increased data breach litigation, companies must balance security controls, compliance, and their mission. The Software Engineering Institute at Carnegie Mellon University, in a publication titled "Governing for Enterprise Security (GES) Implementation Guide", defines several characteristics of effective security governance. An incident response plan is a group of policies that dictate an organization's reaction to a cyber attack. Once a security breach has been identified, the plan is initiated. It is important to note that there can be legal implications to a data breach. Knowing local and federal laws is critical. Every plan is unique to the needs of the organization, and it can involve skill sets that are not part of an IT team. For example, a lawyer may be included in the response plan to help navigate the legal implications of a data breach. As mentioned above, every plan is unique, but most plans will include the following phases, each described in turn below: preparation, identification, containment, eradication, recovery, and lessons learned. Good preparation includes the development of an Incident Response Team (IRT). Skills needed by this team would be penetration testing, computer forensics, network security, etc. This team should also keep track of trends in cybersecurity and modern attack strategies. A training program for end users is important as well, since most modern attack strategies target users on the network. The identification part of the incident response plan determines whether there was a security event. When an end user reports information or an admin notices irregularities, an investigation is launched. An incident log is a crucial part of this step. All of the members of the team should be updating this log to ensure that information flows as fast as possible. If it has been identified that a security breach has occurred, the next step should be activated. In the containment phase, the IRT works to isolate the areas in which the breach took place in order to limit the scope of the security event. During this phase it is important to preserve information forensically so it can be analyzed later in the process. Containment could be as simple as physically containing a server room or as complex as segmenting a network so as not to allow the spread of a virus.
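The shared incident log described above can be kept in structured form so that entries are timestamped against the phases of the plan. The sketch below is a minimal Python illustration; the phases mirror the plan just outlined and the entries are hypothetical.

    from datetime import datetime, timezone
    from enum import Enum, auto

    class Phase(Enum):
        PREPARATION = auto()
        IDENTIFICATION = auto()
        CONTAINMENT = auto()
        ERADICATION = auto()
        RECOVERY = auto()
        LESSONS_LEARNED = auto()

    incident_log = []

    def log_event(phase: Phase, note: str) -> None:
        # Every team member appends here, so information flows as fast as possible.
        incident_log.append({"time": datetime.now(timezone.utc).isoformat(),
                             "phase": phase.name, "note": note})

    log_event(Phase.IDENTIFICATION, "Admin noticed irregular outbound traffic")
    log_event(Phase.CONTAINMENT, "Affected VLAN segmented from the rest of the network")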
The eradication phase is where the threat that was identified is removed from the affected systems. This could include deleting malicious files, terminating compromised accounts, or deleting other components. Some events do not require this step; however, it is important to fully understand the event before moving on. This will help to ensure that the threat is completely removed. The recovery stage is where the systems are restored to original operation. This stage could include the recovery of data, changing user access information, or updating firewall rules or policies to prevent a breach in the future. Without executing this step, the system could still be vulnerable to future security threats. In the lessons-learned step, information that has been gathered during this process is used to make future decisions on security. This step is crucial to ensure that future events are prevented. Using this information to further train admins is critical to the process. This step can also be used to process information that is distributed from other entities who have experienced a security event. Change management is a formal process for directing and controlling alterations to the information processing environment. This includes alterations to desktop computers, the network, servers and software. The objectives of change management are to reduce the risks posed by changes to the information processing environment and improve the stability and reliability of the processing environment as changes are made. It is not the objective of change management to prevent or hinder necessary changes from being implemented. Any change to the information processing environment introduces an element of risk. Even apparently simple changes can have unexpected effects. One of management's many responsibilities is the management of risk. Change management is a tool for managing the risks introduced by changes to the information processing environment. Part of the change management process ensures that changes are not implemented at inopportune times when they may disrupt critical business processes or interfere with other changes being implemented. Not every change needs to be managed. Some kinds of changes are a part of the everyday routine of information processing and adhere to a predefined procedure, which reduces the overall level of risk to the processing environment. Creating a new user account or deploying a new desktop computer are examples of changes that do not generally require change management. However, relocating user file shares or upgrading the email server poses a much higher level of risk to the processing environment and is not a normal everyday activity. The critical first steps in change management are (a) defining change (and communicating that definition) and (b) defining the scope of the change system. Change management is usually overseen by a change review board composed of representatives from key business areas, security, networking, systems administrators, database administration, application developers, desktop support and the help desk. The tasks of the change review board can be facilitated with the use of an automated workflow application. The responsibility of the change review board is to ensure the organization's documented change management procedures are followed. The change management process typically runs from the request and approval of a change, through planning, testing, scheduling and communication, to implementation, documentation and post-change review. Change management procedures that are simple to follow and easy to use can greatly reduce the overall risks created when changes are made to the information processing environment.
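A change review board's paperwork can be modeled as a structured request that records the approval and backout details the process calls for. The sketch below is a hypothetical Python illustration, not a real board's schema.

    from dataclasses import dataclass, field

    @dataclass
    class ChangeRequest:
        summary: str
        requester: str
        risk_level: str                  # e.g. "low", "medium", "high"
        backout_plan: str = ""           # how to undo the change if it goes wrong
        approved: bool = False
        log: list = field(default_factory=list)

        def approve(self, reviewer: str) -> None:
            self.approved = True
            self.log.append(f"approved by {reviewer}")

    cr = ChangeRequest("Upgrade email server", "j.doe", "high",
                       backout_plan="Restore previous VM snapshot")
    cr.approve("change review board")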
Good change management procedures improve the overall quality and success of changes as they are implemented. This is accomplished through planning, peer review, documentation and communication. ISO/IEC 20000, "The Visible OPS Handbook: Implementing ITIL in 4 Practical and Auditable Steps", and the Information Technology Infrastructure Library all provide valuable guidance on implementing an efficient and effective change management program for information security. Business continuity management (BCM) concerns arrangements aiming to protect an organization's critical business functions from interruption due to incidents, or at least minimize the effects. BCM is essential to any organization in keeping technology and business in line with current threats to the continuation of business as usual. BCM should be included in an organization's risk analysis plan to ensure that all of the necessary business functions have what they need to keep going in the event of any type of threat to any business function. It encompasses the analysis of requirements, the design and implementation of continuity arrangements, and their ongoing testing and maintenance. Whereas BCM takes a broad approach to minimizing disaster-related risks by reducing both the probability and the severity of incidents, a disaster recovery plan (DRP) focuses specifically on resuming business operations as quickly as possible after a disaster. A disaster recovery plan, invoked soon after a disaster occurs, lays out the steps necessary to recover critical information and communications technology (ICT) infrastructure. Disaster recovery planning includes establishing a planning group, performing risk assessment, establishing priorities, developing recovery strategies, preparing inventories and documentation of the plan, developing verification criteria and procedures, and lastly implementing the plan. Many governmental laws and regulations in various parts of the world have, have had, or will have, a significant effect on data processing and information security, as do important industry sector regulations. Describing more than simply how security-aware employees are, information security culture is the ideas, customs, and social behaviors of an organization that impact information security in both positive and negative ways. Cultural concepts can help different segments of the organization work effectively, or can work against effectiveness, with respect to information security within an organization. The way employees think and feel about security and the actions they take can have a big impact on information security in organizations. Roer & Petric (2017) identify seven core dimensions of information security culture in organizations, such as attitudes, behaviors and communication. Andersson and Reimers (2014) found that employees often do not see themselves as part of the organization's information security "effort" and often take actions that ignore organizational information security best interests. Research shows information security culture needs to be improved continuously. In "Information Security Culture from Analysis to Change", the authors commented, "It's a never ending process, a cycle of evaluation and change or maintenance." To manage the information security culture, five steps should be taken: pre-evaluation, strategic planning, operative planning, implementation, and post-evaluation. The International Organization for Standardization (ISO) is a consortium of national standards institutes from 157 countries, coordinated through a secretariat in Geneva, Switzerland.
ISO is the world's largest developer of standards. ISO 15443: "Information technology – Security techniques – A framework for IT security assurance", ISO/IEC 27002: "Information technology – Security techniques – Code of practice for information security management", ISO-20000: "Information technology – Service management", and ISO/IEC 27001: "Information technology – Security techniques – Information security management systems – Requirements" are of particular interest to information security professionals. The US National Institute of Standards and Technology (NIST) is a non-regulatory federal agency within the U.S. Department of Commerce. The NIST Computer Security Division develops standards, metrics, tests and validation programs, as well as publishing standards and guidelines to increase secure IT planning, implementation, management and operation. NIST is also the custodian of the U.S. Federal Information Processing Standard publications (FIPS). The Internet Society is a professional membership society with more than 100 organizations and over 20,000 individual members in over 180 countries. It provides leadership in addressing issues that confront the future of the internet, and it is the organizational home for the groups responsible for internet infrastructure standards, including the Internet Engineering Task Force (IETF) and the Internet Architecture Board (IAB). The ISOC hosts the Requests for Comments (RFCs), which include the Official Internet Protocol Standards and the RFC-2196 Site Security Handbook. The Information Security Forum (ISF) is a global nonprofit organization of several hundred leading organizations in financial services, manufacturing, telecommunications, consumer goods, government, and other areas. It undertakes research into information security practices and offers advice in its biannual Standard of Good Practice and more detailed advisories for members. The Institute of Information Security Professionals (IISP) is an independent, non-profit body governed by its members, with the principal objective of advancing the professionalism of information security practitioners and thereby the professionalism of the industry as a whole. The institute developed the IISP Skills Framework. This framework describes the range of competencies expected of information security and information assurance professionals in the effective performance of their roles. It was developed through collaboration between both private and public sector organizations and world-renowned academics and security leaders. The BSI-Standards 100-1 to 100-4 of the German Federal Office for Information Security (in German, "Bundesamt für Sicherheit in der Informationstechnik (BSI)") are a set of recommendations including "methods, processes, procedures, approaches and measures relating to information security". The BSI-Standard 100-2 "IT-Grundschutz Methodology" describes how information security management can be implemented and operated. The standard includes a very specific guide, the IT Baseline Protection Catalogs (also known as IT-Grundschutz Catalogs). Before 2005, the catalogs were known as the "IT Baseline Protection Manual". The Catalogs are a collection of documents useful for detecting and combating security-relevant weak points in the IT environment (IT cluster). As of September 2013, the collection encompasses over 4,400 pages, including the introduction and catalogs. The IT-Grundschutz approach is aligned with the ISO/IEC 2700x family.
The European Telecommunications Standards Institute standardized a catalog of information security indicators, headed by the Industrial Specification Group (ISG) ISI.
https://en.wikipedia.org/wiki?curid=15036
Income Income is the consumption and saving opportunity gained by an entity within a specified timeframe, which is generally expressed in monetary terms. For households and individuals, "income is the sum of all the wages, salaries, profits, interest payments, rents, and other forms of earnings received in a given period of time" (also known as gross income). Net income is defined as the gross income minus taxes and other deductions (e.g., mandatory pension contributions), and is usually the basis to calculate how much income tax is owed. In the field of public economics, the concept may comprise the accumulation of both monetary and non-monetary consumption ability, with the former (monetary) being used as a proxy for total income. For a firm, gross income can be defined as the sum of all revenue minus the cost of goods sold. Net income nets out expenses: net income equals revenue minus cost of goods sold, expenses, depreciation, interest, and taxes. In economics, "factor income" is the return accruing for a person, or a nation, derived from the "factors of production": rental income, wages generated by labor, the interest created by capital, and profits from entrepreneurial ventures. In consumer theory 'income' is another name for the "budget constraint," an amount M to be spent on different goods x and y in quantities x and y at prices P_x and P_y. The basic equation for this is M = P_x·x + P_y·y. This equation implies two things. First, buying one more unit of good x implies buying P_x/P_y fewer units of good y. So, P_x/P_y is the "relative" price of a unit of x in terms of the number of units of y given up. Second, if the price of x falls for a fixed M and a fixed P_y, then its relative price falls. The usual hypothesis, the law of demand, is that the quantity demanded of x would increase at the lower price. The analysis can be generalized to more than two goods. The theoretical generalization to more than one period is a multi-period wealth and income constraint. For example, the same person can gain more productive skills or acquire more productive income-earning assets to earn a higher income. In the multi-period case, something might also happen to the economy beyond the control of the individual to reduce (or increase) the flow of income. Changing measured income and its relation to consumption over time might be modeled accordingly, such as in the permanent income hypothesis. "Full income" refers to the accumulation of both the monetary and the non-monetary consumption-ability of any given entity, such as a person or a household. According to what the economist Nicholas Barr describes as the "classical definition of income" (the 1938 Haig–Simons definition): "income may be defined as the... sum of (1) the market value of rights exercised in consumption and (2) the change in the value of the store of property rights..." Since the consumption potential of non-monetary goods, such as leisure, cannot be measured, monetary income may be thought of as a proxy for full income. As such, however, it is criticized for being unreliable, "i.e." failing to accurately reflect affluence (and thus the consumption opportunities) of any given agent. It omits the utility a person may derive from non-monetary income and, on a macroeconomic level, fails to accurately chart social welfare. According to Barr, "in practice money income as a proportion of total income varies widely and unsystematically.
Non-observability of full income prevents a complete characterization of the individual opportunity set, forcing us to use the unreliable yardstick of money income." Income per capita has been increasing steadily in most countries. Many factors contribute to people having a higher income, including education, globalisation and favorable political circumstances such as economic freedom and peace. Increases in income also tend to lead to people choosing to work fewer hours. Developed countries (defined as countries with a "developed economy") have higher incomes, whereas developing countries tend to have lower incomes. Income inequality is the extent to which income is distributed in an uneven manner. It can be measured by various methods, including the Lorenz curve and the Gini coefficient. Many economists argue that certain amounts of inequality are necessary and desirable but that excessive inequality leads to efficiency problems and social injustice. National income, measured by statistics such as net national income (NNI), is the total income of individuals, corporations, and government in the economy. For more information see Measures of national income and output. Throughout history, many have written about the impact of income on morality and society. Saint Paul wrote 'For the love of money is a root of all kinds of evil' (1 Timothy 6:10, ASV). Some scholars have come to the conclusion that material progress and prosperity, as manifested in continuous income growth at both the individual and the national level, provide the indispensable foundation for sustaining any kind of morality. This argument was explicitly given by Adam Smith in his "Theory of Moral Sentiments", and has more recently been developed by Harvard economist Benjamin Friedman in his book "The Moral Consequences of Economic Growth". The International Accounting Standards Board (IASB) uses the following definition: "Income is increases in economic benefits during the accounting period in the form of inflows or enhancements of assets or decreases of liabilities that result in increases in equity, other than those relating to contributions from equity participants." [F.70] (IFRS Framework). According to John Hicks' definition, income "is the maximum amount which can be spent during a period if there is to be an expectation of maintaining intact the capital value of prospective receipts (in money terms)". John Hicks used "I" for income, but Keynes wrote to him in 1937, "after trying both, I believe it is easier to use Y for income and I for investment." Some consider Y as an alternative letter for the phoneme I in languages like Spanish, although Y as the "Greek I" was actually pronounced like the modern German ü or the phonetic /y/.
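As a short worked example of the budget constraint introduced above, the sketch below (Python, with hypothetical prices and income) makes the relative price P_x/P_y concrete.

    # Budget constraint M = Px*x + Py*y, with hypothetical values.
    M, Px, Py = 100.0, 4.0, 2.0

    def y_affordable(x: float) -> float:
        """Largest quantity of y affordable after buying x units of good x."""
        return (M - Px * x) / Py

    print(y_affordable(0))   # 50.0 units of y when no x is bought
    print(y_affordable(1))   # 48.0 -> each unit of x costs Px/Py = 2 units of y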
https://en.wikipedia.org/wiki?curid=15037
Iona Iona (sometimes simply "Ì") is a small island in the Inner Hebrides off the Ross of Mull on the western coast of Scotland. It is mainly known for Iona Abbey, though there are other buildings on the island. Iona Abbey was a centre of Gaelic monasticism for three centuries and is today known for its relative tranquility and natural environment. It is a tourist destination and a place for spiritual retreats. Its modern Scottish Gaelic name means "Iona of (Saint) Columba" (formerly anglicised "Icolmkill"). The Hebrides have been occupied by the speakers of several languages since the Iron Age, and as a result many of the names of these islands have more than one possible meaning. Nonetheless few, if any, can have accumulated as many different names over the centuries as the island now known in English as "Iona". The earliest forms of the name enabled place-name scholar William J. Watson to show that the name originally meant something like "yew-place". The element "Ivo-", denoting "yew", occurs in Ogham inscriptions ("Iva-cattos" [genitive], "Iva-geni" [genitive]) and in Gaulish names ("Ivo-rix", "Ivo-magus") and may form the basis of early Gaelic names like "Eógan" (ogham: "Ivo-genos"). It is possible that the name is related to the mythological figure, "Fer hÍ mac Eogabail", foster-son of Manannan, the forename meaning "man of the yew". Mac an Tàilleir (2003) lists the more recent Gaelic names of "Ì", "Ì Chaluim Chille" and "Eilean Idhe", noting that the first named is "generally lengthened to avoid confusion" to the second, which means "Calum's (i.e. in latinised form "Columba's") Iona" or "island of Calum's monastery". The confusion results from "ì", despite its original etymology as the name of the island, being confused with the now-obsolete Gaelic noun "ì" meaning "island", itself of Old Norse origin (from "ey", "island"). "Eilean Idhe" means "the isle of Iona", also known as "Ì nam ban bòidheach" ("the isle of beautiful women"). The modern English name comes from yet another variant, "Ioua", which was either just Adomnán's attempt to make the Gaelic name fit Latin grammar or else a genuine derivative from "Ivova" ("yew place"). "Ioua"'s change to "Iona", attested from c.1274, results from a transcription mistake resulting from the similarity of "n" and "u" in Insular Minuscule. Despite the continuity of forms in Gaelic between the pre-Norse and post-Norse eras, Haswell-Smith (2004) speculates that the name may have a Norse connection, "Hiōe" meaning "island of the den of the brown bear". The medieval English language version was "Icolmkill" (and variants thereof). Murray (1966) claims that the "ancient" Gaelic name was "Innis nan Druinich" ("the isle of Druidic hermits") and repeats a Gaelic story (which he admits is apocryphal) that as Columba's coracle first drew close to the island one of his companions cried out "Chì mi i", meaning "I see her", and that Columba's response was "Henceforth we shall call her Ì". The geology of Iona is quite complex given the island's size, and is distinct from that of nearby Mull. About half of the island's bedrock is Scourian gneiss assigned to the Lewisian complex and dating from the Archaean eon, making it some of the oldest rock in Britain and indeed Europe. Closely associated with these gneisses are mylonite, meta-anorthosite and melagabbro. Along the eastern coast facing Mull are steeply dipping Neoproterozoic-age metaconglomerates, metasandstones, metamudstones and hornfelsed metasiltstones ascribed to the Iona Group, described traditionally as Torridonian.
In the southwest and on parts of the west coast are pelites and semipelites of Archaean to Proterozoic age. There are small outcrops of Silurian-age pink granite on southeastern beaches, similar to those of the Ross of Mull pluton across the sound to the east. Numerous geological faults cross the island, many in an E-W or NW-SE alignment. Devonian-age microdiorite dykes are found in places, and some of these are themselves cut by Palaeocene-age camptonite and monchiquite dykes ascribed to the 'Iona-Ross of Mull dyke swarm'. More recent sedimentary deposits of Quaternary age include both present-day beach deposits and raised marine deposits around Iona, as well as some restricted areas of blown sand. Iona lies close to the coast of Mull and has a resident population of 125. Like other places swept by ocean breezes, there are few trees; most of them are near the parish church. Iona's highest point is Dùn Ì, an Iron Age hill fort dating from 100 BC – AD 200. Iona's geographical features include the Bay at the Back of the Ocean and "Càrn Cùl ri Éirinn" (the Hill/Cairn of [turning the] Back to Ireland), said to be adjacent to the beach where St. Columba first landed. The main settlement, located at St. Ronan's Bay on the eastern side of the island, is called "Baile Mòr" and is also known locally as "The Village". The primary school, post office, the island's two hotels, the Bishop's House and the ruins of the Nunnery are here. The Abbey and MacLeod Centre are a short walk to the north. Port Bàn (white port) beach on the west side of the island is home to the Iona Beach Party. There are numerous offshore islets and skerries: Eilean Annraidh (island of storm) and Eilean Chalbha (calf island) to the north, Rèidh Eilean and Stac MhicMhurchaidh to the west and Eilean Mùsimul (mouse holm island) and Soa Island to the south are amongst the largest. The steamer "Cathcart Park", carrying a cargo of salt from Runcorn to Wick, ran aground on Soa on 15 April 1912, the crew of 11 escaping in two boats. A map of 1874 indicates a territorial subdivision of the island running from north to south. In the early Historic Period Iona lay within the Gaelic kingdom of Dál Riata, in the region controlled by the Cenél Loairn (i.e. Lorn, as it was then). The island was the site of a highly important monastery (see Iona Abbey) during the Early Middle Ages. According to tradition the monastery was founded in 563 by the monk Columba, also known as Colm Cille, who had been exiled from his native Ireland as a result of his involvement in the Battle of Cul Dreimhne. Columba and twelve companions went into exile on Iona and founded a monastery there. The monastery was hugely successful, and played a crucial role in the conversion to Christianity of the Picts of present-day Scotland in the late 6th century and of the Anglo-Saxon kingdom of Northumbria in 635. Many satellite institutions were founded, and Iona became the centre of one of the most important monastic systems in Great Britain and Ireland. Iona became a renowned centre of learning, and its scriptorium produced highly important documents, probably including the original texts of the Iona Chronicle, thought to be the source for the early Irish annals. The monastery is often associated with the distinctive practices and traditions known as Celtic Christianity.
In particular, Iona was a major supporter of the "Celtic" system for calculating the date of Easter at the time of the Easter controversy, which pitted supporters of the Celtic system against those favoring the "Roman" system used elsewhere in Western Christianity. The controversy weakened Iona's ties to Northumbria, which adopted the Roman system at the Synod of Whitby in 664, and to Pictland, which followed suit in the early 8th century. Iona itself did not adopt the Roman system until 715, according to the Anglo-Saxon historian Bede. Iona's prominence was further diminished over the next centuries as a result of Viking raids and the rise of other powerful monasteries in the system, such as the Abbey of Kells. The Book of Kells may have been produced or begun on Iona towards the end of the 8th century. Around this time the island's exemplary high crosses were sculpted; these may be the first such crosses to contain the ring around the intersection that became characteristic of the "Celtic cross". The series of Viking raids on Iona began in 794 and, after its treasures had been plundered many times, Columba's relics were removed and divided two ways between Scotland and Ireland in 849 as the monastery was abandoned. As the Norse domination of the west coast of Scotland advanced, Iona became part of the Kingdom of the Isles. The Norse "Rex plurimarum insularum" Amlaíb Cuarán died in 980 or 981 whilst in "religious retirement" on Iona. Nonetheless the island was sacked twice by his successors, on Christmas night 986 and again in 987. Although Iona was never again important to Ireland, it rose to prominence once more in Scotland following the establishment of the Kingdom of Scotland in the later 9th century; the ruling dynasty of Scotland traced its origin to Iona, and the island thus became an important spiritual centre for the new kingdom, with many of its early kings buried there. However, a campaign by Magnus Barelegs led to the formal acknowledgement of Norwegian control of Argyll in 1098. Somerled, the brother-in-law of Norway's governor of the region (the "King of the Isles"), launched a revolt and made the kingdom independent. A convent for Augustinian nuns was established in about 1208, with Bethóc, Somerled's daughter, as first prioress. The present Benedictine abbey, Iona Abbey, was built in about 1203. On Somerled's death, nominal Norwegian overlordship of the Kingdom was re-established, but de facto control was split between Somerled's sons and his brother-in-law. Following the 1266 Treaty of Perth the Hebrides were transferred from Norwegian to Scottish overlordship. At the end of the century, King John Balliol was challenged for the throne by Robert the Bruce. By this point, Somerled's descendants had split into three groups, the MacRorys, MacDougalls, and MacDonalds. The MacDougalls backed Balliol, so when he was defeated by de Bruys, the latter exiled the MacDougalls and transferred their island territories to the MacDonalds; by marrying the heir of the MacRorys, the heir of the MacDonalds re-unified most of Somerled's realm, creating the Lordship of the Isles, under nominal Scottish authority. Iona, which had been a MacDougall territory (together with the rest of Lorn), was given to the Campbells, where it remained for half a century. In 1354, though in exile and without control of his ancestral lands, John, the MacDougall heir, quitclaimed any rights he had over Mull and Iona to the Lord of the Isles (though this had no meaningful effect at the time).
When Robert's son, David II, became king, he spent some time in English captivity; following his release in 1357, he restored MacDougall authority over Lorn. The 1354 quitclaim, which seems to have been an attempt to ensure peace in just such an eventuality, took automatic effect, splitting Mull and Iona from Lorn and making them subject to the Lordship of the Isles. Iona remained part of the Lordship of the Isles for the next century and a half. Following the 1491 Raid on Ross, the Lordship of the Isles was dismantled, and Scotland gained full control of Iona for the second time. The monastery and nunnery continued to be active until the Reformation, when buildings were demolished and all but three of the 360 carved crosses destroyed. The Augustine nunnery now only survives as a number of 13th century ruins, including a church and cloister. By the 1760s little more of the nunnery remained standing than at present, though it is the most complete remnant of a medieval nunnery in Scotland. After a visit in 1773, the English writer Samuel Johnson remarked on the island; he estimated the population of the village at 70 families, or perhaps 350 inhabitants. In the 19th century green-streaked marble was commercially mined in the south-east of Iona; the quarry and machinery survive (see 'Marble Quarry remains' below). Iona Abbey, now an ecumenical church, is of particular historical and religious interest to pilgrims and visitors alike. It is the most elaborate and best-preserved ecclesiastical building surviving from the Middle Ages in the Western Isles of Scotland. Though modest in scale in comparison to medieval abbeys elsewhere in Western Europe, it has a wealth of fine architectural detail, and monuments of many periods. The 8th Duke of Argyll presented the sacred buildings and sites of the island to the Iona Cathedral Trust in 1899. In front of the Abbey stands the 9th century St Martin's Cross, one of the best-preserved Celtic crosses in the British Isles, and a replica of the 8th century St John's Cross (original fragments in the Abbey museum). The ancient burial ground, called the Rèilig Odhrain (Eng: Oran's "burial place" or "cemetery"), contains the 12th century chapel of St Odhrán (said to be Columba's uncle), restored at the same time as the Abbey itself. It contains a number of medieval grave monuments. The abbey graveyard is said to contain the graves of many early Scottish kings, as well as Norse kings from Ireland and Norway. Iona became the burial site for the kings of Dál Riata and their successors. In 1549 an inventory of 48 Scottish, 8 Norwegian and 4 Irish kings buried there was recorded. None of these graves are now identifiable (their inscriptions were reported to have worn away at the end of the 17th century). Saint Baithin and Saint Failbhe may also be buried on the island. The Abbey graveyard is also the final resting place of John Smith, the former Labour Party leader, who loved Iona. His grave is marked with an epitaph quoting Alexander Pope: "An honest man's the noblest work of God". Limited archaeological investigations commissioned by the National Trust for Scotland found some evidence for ancient burials in 2013. The excavations, conducted in the area of Martyrs Bay, revealed burials from the 6th-8th centuries, probably jumbled up and reburied in the 13th-15th centuries. Other early Christian and medieval monuments have been removed for preservation to the cloister arcade of the Abbey, and the Abbey museum (in the medieval infirmary).
The ancient buildings of Iona Abbey are now cared for by Historic Environment Scotland (entrance charge). The remains of a marble quarrying enterprise can be seen in a small bay on the south-east shore of Iona. The quarry is the source of 'Iona Marble', a beautiful translucent green and white stone, much used in brooches and other jewellery. The stone has been known of for centuries and was credited with healing and other powers. While the quarry had been used in a small way, it was not until around the end of the 18th century that it was opened up on a more industrial scale by the Duke of Argyll. The difficulty, at that time, of extracting the hard stone and transporting it meant that the scheme was short-lived. Another attempt was started in 1907, this time more successfully, with considerable quantities of stone extracted and indeed exported, but the First World War put paid to this as well, with little quarrying after 1914 and the operation finally closing in 1919. A painting showing the quarry in operation, "The Marble Quarry, Iona" (1909) by David Young Cameron, is in the collection of Cartwright Hall art gallery in Bradford. Such is the site's rarity that it has been designated as a Scheduled Ancient Monument. The island, other than the land owned by the Iona Cathedral Trust, was purchased from the Duke of Argyll by Hugh Fraser in 1979 and donated to the National Trust for Scotland. In 2001 Iona's population was 125, and by the time of the 2011 census this had grown to 177 usual residents. During the same period Scottish island populations as a whole grew by 4% to 103,702. Not to be confused with the local island community, the Iona Community is based within Iona Abbey. In 1938 George MacLeod founded the Iona Community, an ecumenical Christian community of men and women from different walks of life and different traditions in the Christian church, committed to seeking new ways of living the Gospel of Jesus in today's world. This community is a leading force in the present Celtic Christian revival. The Iona Community runs three residential centres on the Isle of Iona and on Mull, where one can live together in community with people of every background from all over the world. Weeks at the centres often follow a programme related to the concerns of the Iona Community. The 8-tonne "Fallen Christ" sculpture by Ronald Rae was permanently situated outside the MacLeod Centre in February 2008. Visitors can reach Iona by the 10-minute ferry trip across the Sound of Iona from Fionnphort on Mull. The most common route from the mainland is via Oban in Argyll and Bute, where regular ferries connect to Craignure on Mull, from where the scenic road runs to Fionnphort. Tourist coaches and local bus services meet the ferries. Car ownership is lightly regulated, with no requirement for an MOT Certificate or payment of Road Tax for cars kept permanently on the island, but vehicular access is restricted to permanent residents and there are few cars. Visitors must leave their car in Fionnphort, but upon landing on Iona they will find the village, the shops, the post office, the cafe, the hotels and the abbey all within walking distance. Bike hire is available at the pier, and on Mull. In addition to the hotels, there are several bed and breakfasts on Iona and various self-catering properties. The Iona Hostel at Lagandorain and the Iona campsite at Cnoc Oran also offer accommodation. The island of Iona has played an important role in Scottish landscape painting, especially during the twentieth century.
As travel to north and west Scotland became easier from the mid-18th century onwards, artists' visits to the island steadily increased. The Abbey remains in particular were frequently recorded during this early period. Many of the artists are listed and illustrated in the book "Iona Portrayed – The Island through Artists' Eyes 1760-1960", which lists over 170 artists known to have painted on the island. The 20th century, however, saw the greatest period of influence on landscape painting, in particular through the many paintings of the island produced by F C B Cadell and S J Peploe, two of the 'Scottish Colourists'. As with many artists, both professional and amateur, they were attracted by the unique quality of light, the white sandy beaches, the aquamarine colours of the sea and the landscape of rich greens and rocky outcrops. While Cadell and Peploe are perhaps best known, many major Scottish painters of the 20th century worked on Iona and visited many times – for example George Houston, D Y Cameron, James Shearer, John Duncan and John Maclauchlan Milne, among many others. Samuel Johnson wrote "That man is little to be envied whose patriotism would not gain force upon the plains of Marathon, or whose piety would not grow warmer amid the ruins of Iona." In Jules Verne's novel "The Green Ray", the heroes visit Iona in chapters 13 to 16. The inspiration is romantic: the ruins of the island are conducive to daydreaming. The young heroine, Helena Campbell, argues that Scotland in general and Iona in particular are the scene of the appearance of goblins and other familiar demons. In Jean Raspail's novel "The Fisherman's Ring" (1995), his cardinal is one of the last to support the antipope Benedict XIII and his successors. In the novel "The Carved Stone" (by Guillaume Prévost), the young Samuel Faulkner is projected back in time as he searches for his father, and lands on Iona in the year 800, then threatened by the Vikings. "Peace of Iona" is a song written by Mike Scott that appears on the studio album "Universal Hall" and on the live recording "Karma to Burn" by The Waterboys. Iona is the setting for the song "Oran" on the 1997 Steve McDonald album "Stone of Destiny". Kenneth C. Steven published an anthology of poetry entitled "Iona: Poems" in 2000, inspired by his association with the island and the surrounding area. Iona is featured prominently in the first episode ("By the Skin of Our Teeth") of the celebrated arts series "Civilisation" (1969). Iona is the setting of Jeanne M. Dams' Dorothy Martin mystery "Holy Terror of the Hebrides" (1998). The Academy Award–nominated Irish animated film "The Secret of Kells" is about the creation of the Book of Kells. One of the characters, Brother Aiden, is a master illuminator from Iona Abbey who had helped to illustrate the Book, but had to escape the island with it during a Viking invasion. After his death in 2011, the cremated remains of songwriter/recording artist Gerry Rafferty were scattered on Iona. Frances Macdonald, the contemporary Scottish artist based in Crinan, Argyll, regularly paints landscapes on Iona. Iona Abbey is mentioned in Tori Amos's "Twinkle" from her 1996 album "Boys for Pele": "And last time I knew, she worked at an abbey in Iona. She said 'I killed a man, T, I've gotta stay hidden in this abbey' ". Iona is the name of a progressive Celtic rock band (first album released in 1990; not active at present), many of whose songs are inspired by the island of Iona and Columba's life.
Neil Gaiman's poem "In Relig Odhrain", published in "Trigger Warning: Short Fictions and Disturbances" (2015), retells the story of Oran's death and the creation of the chapel on Iona. The poem was made into a short stop-motion animated film, released in 2019.
https://en.wikipedia.org/wiki?curid=15039
Ido Ido is a constructed language, derived from Reformed Esperanto, created to be a universal second language for speakers of diverse backgrounds. Ido was specifically designed to be grammatically, orthographically, and lexicographically regular, and above all easy to learn and use. In this sense, Ido is classified as a constructed international auxiliary language. It is the most successful of the many Esperanto derivatives, called Esperantidos. Ido was created in 1907 out of a desire to reform perceived flaws in Esperanto, a language that had been created 20 years earlier to facilitate international communication. The name of the language traces its origin to the Esperanto word "ido", meaning "offspring", since the language is a "descendant" of Esperanto. After its inception, Ido gained support from some in the Esperanto community, but following the sudden death in 1914 of one of its most influential proponents, Louis Couturat, it declined in popularity. There were two reasons for this: first, the emergence of further schisms arising from competing reform projects; and second, a general lack of awareness of Ido as a candidate for an international language. These obstacles weakened the movement, and it was not until the rise of the Internet that it began to regain momentum. Ido uses the same 26 letters as the English (Latin) alphabet, with no diacritics. It draws its vocabulary from English, French, German, Italian, Latin, Russian, Spanish and Portuguese, and is largely intelligible to those who have studied Esperanto. Several works of literature have been translated into Ido, including "The Little Prince", the Book of Psalms, and the Gospel of Luke. As of the year 2000, there were approximately 100–200 Ido speakers in the world. The idea of a universal second language is not new, and constructed languages are not a recent phenomenon. The first known constructed language was Lingua Ignota, created in the 12th century. But the idea did not catch on in large numbers until the language Volapük was created in 1879. Volapük was popular for some time and apparently had a few thousand users, but was later eclipsed by the popularity of Esperanto, which arose in 1887. Several other languages, such as Latino sine Flexione and Idiom Neutral, had also been put forward. It was during this time that the French mathematician Louis Couturat formed the "Delegation for the Adoption of an International Auxiliary Language". This delegation made a formal request to the International Association of Academies in Vienna to select and endorse an international language; the request was rejected in May 1907. The Delegation then met as a Committee in Paris in October 1907 to discuss the adoption of a standard international language. Among the languages considered was a new language anonymously submitted at the last moment (and therefore against the Committee rules) under the pen name "Ido". In the end the Committee, which never met in plenary session and consisted of only 12 members, concluded on its last day with 4 votes in favour and 1 abstention. It concluded that no language was completely acceptable, but that Esperanto could be accepted "on condition of several modifications to be realized by the permanent Commission in the direction defined by the conclusions of the Report of the Secretaries [Louis Couturat and Léopold Leau] and by the Ido project". Esperanto's inventor, L. L. 
Zamenhof, having heard a number of complaints, had suggested in 1894 a proposal for a Reformed Esperanto with several changes that Ido adopted and that made it closer to French: eliminating the accented letters and the accusative case, changing the plural to an Italianesque "-i", and replacing the table of correlatives with more Latinate words. However, the Esperanto community voted on and rejected Reformed Esperanto, and likewise most rejected the recommendations of the 1907 Committee of 12 members. Zamenhof deferred to their judgment, although he had his doubts. Furthermore, controversy ensued when the "Ido project" was found to have been primarily devised by Louis de Beaufront, whom Zamenhof had chosen to represent Esperanto before the Committee, as the Committee's rules dictated that the creator of a submitted language could not defend it. The Committee's working language was French, and not everyone could speak French. When the president of the Committee asked who the author of Ido's project was, Couturat, Beaufront and Leau answered that they were not. Beaufront was the person who presented Ido's project, describing it as a better, richer version of Esperanto. Couturat, Leau, Beaufront and Jespersen were ultimately the only members who voted, all of them for Ido's project. A month later, Couturat accidentally sent Jespersen a copy of a letter in which he acknowledged that Beaufront was the author of the Ido project. Jespersen was angered by this and asked for a public confession, which was never forthcoming. It is estimated that some 20% of Esperanto leaders and 3–4% of ordinary Esperantists defected to Ido, which from then on underwent constant modifications seeking to perfect it, changes that ultimately caused many Ido speakers to give up on trying to learn it. Although it fractured the Esperanto movement, the schism gave the remaining Esperantists the freedom to concentrate on using and promoting their language as it stood. At the same time, it gave the Idists freedom to continue working on their own language for several more years before actively promoting it. The "Uniono di la Amiki di la Linguo Internaciona" ("Union of Friends of the International Language") was established along with an Ido Academy to work out the details of the new language. Couturat, who was the leading proponent of Ido, was killed in an automobile accident in 1914. This, along with World War I, practically suspended the activities of the Ido Academy from 1914 to 1920. In 1928 Ido's major intellectual supporter, the Danish linguist Otto Jespersen, published his own planned language, Novial. His defection from the Ido movement set it back even further. The language still has active speakers, however, estimated at around 500 in total. The Internet has sparked a renewal of interest in the language in recent years. A sample of 24 Idists on the Yahoo! group "Idolisto" during November 2005 showed that 57% had begun their studies of the language during the preceding three years, 32% from the mid-1990s to 2002, and 8% had known the language from before. Few changes have been made to Ido since 1922. Camiel de Cock was named secretary of linguistic issues in 1990, succeeding Roger Moureaux. He resigned after the creation of a linguistic committee in 1991. De Cock was succeeded by Robert C. Carnaghan, who held the position from 1992 to 2008. No new words were adopted between 2001 and 2006. 
Following the 2008–2011 elections of ULI's direction committee, Gonçalo Neves replaced Carnaghan as secretary of linguistic issues in February 2008. Neves resigned in August 2008. A new linguistic committee was formed in 2010. In April 2010, Tiberio Madonna was appointed as secretary of linguistic issues, succeeding Neves. In January 2011, ULI approved eight new words, the first addition of words in many years. As of April 2012, the secretary of linguistic issues remained Tiberio Madonna. Ido has five vowel phonemes. The vowels /e/ and /ɛ/ are interchangeable depending on speaker preference, as are /o/ and /ɔ/. The combinations /au/ and /eu/ become diphthongs in word roots but not when adding affixes. All polysyllabic words are stressed on the second-to-last syllable, except for verb infinitives, which are stressed on the last syllable: skolo, kafeo and lernas for "school", "coffee" and the present tense of "to learn", but irar, savar and drinkar for "to go", "to know" and "to drink". If an i or u precedes another vowel, the pair is considered part of the same syllable when applying the accent rule: thus radio, familio and manuo for "radio", "family" and "hand", unless the two vowels are the only ones in the word, in which case the "i" or "u" is stressed: dio, frua for "day" and "early". Ido uses the same 26 letters as the English alphabet and the ISO basic Latin alphabet, with three digraphs ("ch", "sh" and "qu") and no ligatures or diacritics; for some letters, either of two pronunciations is perfectly acceptable. The definite article is "la" and is invariable. The indefinite article (a/an) does not exist in Ido. Each word in the Ido vocabulary is built from a root word. A word consists of a root and a grammatical ending. Other words can be formed from that word by removing the grammatical ending and adding a new one, or by inserting certain affixes between the root and the grammatical ending. The grammatical endings are the same as in Esperanto except for "-i", "-ir", "-ar", "-or" and "-ez". Esperanto marks noun plurals with an agglutinative ending "-j" (so plural nouns end in "-oj"), uses "-i" for verb infinitives (Esperanto infinitives are tenseless), and uses "-u" for the imperative. Verbs in Ido, as in Esperanto, do not conjugate depending on person, number or gender; the -as, -is, and -os endings suffice whether the subject is I, you, he, she, they, or anything else. For the word "to be", Ido allows either "esas" or "es" in the present tense; however, the full forms must be used for the past tense "esis" and the future tense "esos". Adjectives and adverbs are compared in Ido by means of the words "plu" = more, "maxim" = most, "min" = less, "minim" = least, and "kam" = than/as. There are three categories of adverbs in Ido: the simple, the derived, and the composed. The simple adverbs need no special endings, for example: "tre" = very, "tro" = too, "olim" = formerly, "nun" = now, "nur" = only. The derived and composed adverbs, not being originally adverbs but derived from nouns, adjectives and verbs, take the ending -e. Ido word order is generally the same as English (subject–verb–object), so the sentence "Me havas la blua libro" is the same as the English "I have the blue book", both in meaning and word order. There are a few differences, however: Ido generally does not impose rules of grammatical agreement between grammatical categories within a sentence. 
For example, the verb in a sentence is invariable regardless of the number and person of the subject. Nor are adjectives pluralised to agree with nouns: in Ido "the large books" would be "la granda libri", as opposed to the French "les grands livres" or the Esperanto "la grandaj libroj". Negation occurs in Ido by simply adding "ne" before a verb: "Me ne havas libro" means "I do not have a book". This too does not vary, and thus "I do not", "He does not" and "They do not" before a verb are simply "Me ne", "Il ne" and "Li ne". In the same way, past-tense and future-tense negatives are formed by placing "ne" before the conjugated verb: "I will not go" and "I did not go" become "Me ne iros" and "Me ne iris" respectively. Yes/no questions are formed by the particle "ka" in front of the question: "I have a book" ("me havas libro") becomes "Ka me havas libro?" ("do I have a book?"). "Ka" can also be placed in front of a noun without a verb to make a simple question, corresponding to the English "is it?": "Ka Mark?" can mean "Are you Mark?", "Is it Mark?" or "Do you mean Mark?", depending on the context. The pronouns of Ido were revised to make them more acoustically distinct than those of Esperanto, which all end in "i". Especially the singular and plural first-person pronouns "mi" and "ni" may be difficult to distinguish in a noisy environment, so Ido has "me" and "ni" instead. Ido also distinguishes between intimate ("tu") and formal ("vu") second-person singular pronouns, as well as a plural second-person pronoun ("vi") not marked for intimacy. Furthermore, Ido has a pan-gender third-person pronoun "lu" (it can mean "he", "she", or "it", depending on the context) in addition to its masculine ("il"), feminine ("el"), and neuter ("ol") third-person pronouns. "Ol", like English "it" and Esperanto "ĝi", is not limited to inanimate objects, but can be used, as Beaufront's "Kompleta Gramatiko Detaloza di la Linguo Internaciona Ido" puts it, "for entities whose sex is indeterminate: babies, children, humans, youths, elders, people, individuals, horses, [cattle], cats," etc. "Lu" is often mistakenly labeled an epicene pronoun, that is, one that refers to both masculine and feminine beings, but in fact "lu" is more properly a "pan-gender" pronoun, as it is also used for referring to inanimate objects. Ido makes correlatives by combining entire words together and changing the word ending, with some irregularities to show distinction. Composition in Ido obeys stricter rules than in Esperanto, especially the formation of nouns, adjectives and verbs from a radical of a different class. The reversibility principle assumes that for each composition rule (affix addition), the corresponding decomposition rule (affix removal) is valid. Hence, while in Esperanto an adjective (for instance "papera", formed on a noun radical) can mean both an attribute ("papera enciklopedio", "paper-made encyclopedia") and a relation ("papera fabriko", "paper-making factory"), Ido distinguishes the attribute ("paper" or "of paper", not exactly "paper-made") from the relation ("paper-making") with distinct forms. Similarly, "krono" means the noun "crown" in both Esperanto and Ido; where Esperanto allows the formation of "to crown" by simply changing the ending from noun to verb ("kroni"; "crowning" is "kronado"), Ido requires an affix so that the composition is reversible ("kronizar"; "the act of crowning" is "kronizo"). 
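To make these regularities concrete, the following minimal Python sketch (an editorial illustration, not part of the original article; all function names are invented, and the rules are simplified to exactly the cases described above) applies the penultimate-stress rule with the infinitive exception, forms plurals in -i without adjective agreement, negates with "ne", and builds yes/no questions with "ka":

# Toy model of the Ido rules described above; not a full morphology engine.
VOWELS = "aeiou"

def syllable_nuclei(word):
    """Return indices of syllable-bearing vowels, merging i/u into a
    following vowel (per the rule that 'radio' keeps io in one syllable)."""
    nuclei, i = [], 0
    while i < len(word):
        if word[i] in VOWELS:
            if word[i] in "iu" and i + 1 < len(word) and word[i + 1] in VOWELS:
                nuclei.append(i + 1)  # i/u joins the next vowel's syllable
                i += 2
                continue
            nuclei.append(i)
        i += 1
    return nuclei

def stressed_vowel_index(word):
    """Stress rule: final syllable for -ar/-ir/-or infinitives,
    otherwise the second-to-last syllable."""
    nuclei = syllable_nuclei(word)
    if len(nuclei) == 1:
        return nuclei[0]
    if word.endswith(("ar", "ir", "or")):  # verb infinitives
        return nuclei[-1]
    return nuclei[-2]

def pluralize(noun_phrase):
    """Nouns take -i in the plural; adjectives (and 'la') never agree."""
    words = noun_phrase.split()
    words[-1] = words[-1][:-1] + "i"  # assumes the final word is a noun in -o
    return " ".join(words)

def negate(sentence):
    """Insert 'ne' before the verb (assumed here to follow a one-word subject)."""
    words = sentence.split()
    return " ".join(words[:1] + ["ne"] + words[1:])

def yes_no_question(sentence):
    """Prefix the particle 'ka' to form a yes/no question."""
    return "Ka " + sentence[0].lower() + sentence[1:] + "?"

assert pluralize("la granda libro") == "la granda libri"
assert negate("Me havas libro") == "Me ne havas libro"
assert yes_no_question("Me havas libro") == "Ka me havas libro?"
assert stressed_vowel_index("skolo") == 2  # sk-O-lo: penultimate syllable
assert stressed_vowel_index("irar") == 2   # ir-A-r: infinitive, final syllable

This is only a sketch under the simplifying assumptions noted in the comments: a complete implementation would need a dictionary of roots and the full set of affixes and endings discussed above, rather than these string heuristics.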
According to Claude Piron, some of the modifications brought by Ido are in practice impossible to use and ruin spontaneous expression: "Ido displays, on the linguistic level, other drawbacks Esperanto succeeded in avoiding, but I don't have at hand documents which would allow me to go into further detail. For instance, if I remember correctly, where Esperanto has only one suffix, Ido has several, which match subtleties that were meant to make the language clearer, but that, in practice, inhibit natural expression." Vocabulary in Ido is derived from French, Italian, Spanish, English, German, and Russian. Basing the vocabulary on various widespread languages was intended to make Ido as easy as possible for the greatest number of people. Early on, the first 5,371 Ido word roots were analysed against the vocabulary of the six source languages to measure this overlap. Vocabulary in Ido is often created through a number of official prefixes and suffixes that alter the meaning of the word. This allows a user to take existing words and modify them to create neologisms when necessary, and allows for a wide range of expression without the need to learn new vocabulary each time. New vocabulary is generally created through an analysis of the word, its etymology, and reference to the six source languages. If a word can be created from vocabulary already existing in the language, it will usually be adopted without the need for a new radical (such as "wikipedio" for "Wikipedia", which consists of "wiki" + "enciklopedio" for "encyclopedia"); if not, an entirely new word will be created. The word "alternatoro", for example, was adopted in 1926, likely because five of the six source languages used largely the same orthography for the word, and because it was long enough to avoid being mistaken for other words in the existing vocabulary. Adoption of a word is done through consensus, after which the word is made official by the union. Care must also be taken to avoid homonyms if possible, and usually a new word undergoes some discussion before being adopted. Foreign words that have a restricted sense and are not likely to be used in everyday life (such as the word "intifada", referring to the conflict between Israel and Palestine) are left untouched, and often written in italics. Ido, unlike Esperanto, does not assume the male sex by default. For example, Ido does not derive the word for "waitress" by adding a feminine suffix to "waiter", as Esperanto does. Instead, Ido words are defined as sex-neutral, and two different suffixes derive masculine and feminine words from the root: "servisto" for a waiter of either sex, "servistulo" for a male waiter, and "servistino" for a waitress. There are only two exceptions to this rule: first, "patro" for "father", "matro" for "mother", and "genitoro" for "parent"; and second, "viro" for "man", "muliero" for "woman", and "adulto" for "adult". Ido has a number of publications that can be subscribed to or, in most cases, downloaded for free. "Kuriero Internaciona" is a magazine produced in France every few months with a range of topics. "Adavane!" is a magazine produced by the Spanish Ido Society every two months that covers a range of topics, as well as a few dozen pages of work translated from other languages. "Progreso" is the official organ of the Ido movement and has been around since the inception of the movement in 1908. 
Other sites offer various stories, fables or proverbs, along with a few books of the Bible, translated into Ido on a smaller scale. The site "publikaji" has a few podcasts in Ido along with various songs and other recorded material. Wikipedia includes an Ido-language edition (known in Ido as "Wikipedio"); in January 2012 it was the 81st most visited Wikipedia. ULI organises Ido conventions yearly, and the conventions include a mix of tourism and work.
https://en.wikipedia.org/wiki?curid=15040
Improvisational theatre Improvisational theatre, often called improvisation or improv, is a form of theatre, often comedy, in which most or all of what is performed is unplanned or unscripted: created spontaneously by the performers. In its purest form, the dialogue, action, story, and characters are created collaboratively by the players as the improvisation unfolds in present time, without use of an already prepared, written script. Improvisational theatre exists in performance as a range of styles of improvisational comedy as well as some non-comedic theatrical performances. It is sometimes used in film and television, both to develop characters and scripts and occasionally as part of the final product. Improvisational techniques are often used extensively in drama programs to train actors for stage, film, and television, and can be an important part of the rehearsal process. However, the skills and processes of improvisation are also used outside the context of performing arts. This practice, known as applied improvisation, is used in classrooms as an educational tool and in businesses as a way to develop communication skills, creative problem solving, and the supportive teamwork abilities that are used by improvisational ensemble players. It is sometimes used in psychotherapy as a tool to gain insight into a person's thoughts, feelings, and relationships. The earliest well-documented use of improvisational theatre in Western history is found in the Atellan Farce of 391 BC. From the 16th to the 18th centuries, "commedia dell'arte" performers improvised based on a broad outline in the streets of Italy. In the 1890s, theatrical theorists and directors such as the Russian Konstantin Stanislavski and the French Jacques Copeau, founders of two major streams of acting theory, both made heavy use of improvisation in acting training and rehearsal. Modern theatrical improvisation games began as drama exercises for children, which were a staple of drama education in the early 20th century thanks in part to the progressive education movement initiated by John Dewey in 1916. Some people credit the American Dudley Riggs as the first vaudevillian to use audience suggestions to create improvised sketches on stage. Improvisation exercises were developed further by Viola Spolin in the 1940s, 50s, and 60s, and codified in her book "Improvisation for the Theater", the first book to give specific techniques for learning to do and teach improvisational theatre. In the 1970s in Canada, the British playwright and director Keith Johnstone wrote "Impro", a book outlining his ideas on improvisation, and invented Theatresports, which has become a staple of modern improvisational comedy and is the inspiration for the popular television show "Whose Line Is It Anyway?" Spolin influenced the first generation of modern American improvisers at The Compass Players in Chicago, which led to The Second City. Her son, Paul Sills, along with David Shepherd, started The Compass Players. Following the demise of the Compass Players, Paul Sills began The Second City. They were the first organized troupes in Chicago, and the modern Chicago improvisational comedy movement grew from their success. Many of the current "rules" of comedic improv were first formalized in Chicago in the late 1950s and early 1960s, initially among The Compass Players troupe, which was directed by Paul Sills. 
By most accounts, David Shepherd provided the philosophical vision of the Compass Players, while Elaine May was central to the development of the premises for its improvisations. Mike Nichols, Ted Flicker, and Del Close were her most frequent collaborators in this regard. When The Second City opened its doors on December 16, 1959, directed by Paul Sills, his mother, Viola Spolin, began training new improvisers through a series of classes and exercises which became the cornerstone of modern improv training. By the mid-1960s, Viola Spolin's classes were handed over to her protégé, Jo Forsberg, who further developed Spolin's methods into a one-year course, which eventually became The Players Workshop, the first official school of improvisation in the USA. During this time, Forsberg trained many of the performers who went on to star on The Second City stage. Many of the original cast of "Saturday Night Live" came from The Second City, and the franchise has produced such comedy stars as Mike Myers, Tina Fey, Bob Odenkirk, Amy Sedaris, Stephen Colbert, Eugene Levy, Jack McBrayer, Steve Carell, Chris Farley, Dan Aykroyd, and John Belushi. Simultaneously, Keith Johnstone's group The Theatre Machine, which originated in London, was touring Europe. This work gave birth to Theatresports, at first secretly in Johnstone's workshops, and eventually in public when he moved to Canada; Toronto has since been home to a rich improv tradition. In 1984, Dick Chudnow (Kentucky Fried Theater) founded ComedySportz in Milwaukee, WI. Expansion began with the addition of ComedySportz-Madison (WI) in 1985. The first Comedy League of America National Tournament was held in 1988, with 10 teams participating. The league is now known as CSz Worldwide and boasts a roster of 29 international cities. In San Francisco, The Committee theater was active in North Beach during the 1960s. It was founded by alumni of Chicago's Second City, Alan Myerson and his wife Jessica. When The Committee disbanded in 1972, three major companies were formed: The Pitchell Players, The Wing, and Improvisation Inc. Only the last of these continued to perform Close's Harold. Two of its former members, Michael Bossier and John Elk, formed Spaghetti Jam in San Francisco's Old Spaghetti Factory in 1976, where shortform improv and Harolds were performed through 1983. Stand-up comedians performing down the street at the Intersection for the Arts would drop by and sit in. In 1979, Elk brought shortform to England, teaching workshops at Jacksons Lane Theatre, and he was the first American to perform at The Comedy Store, London, above a Soho strip club. Modern political improvisation's roots include Jerzy Grotowski's work in Poland during the late 1950s and early 1960s, Peter Brook's "happenings" in England during the late 1960s, Augusto Boal's "Forum Theatre" in South America in the early 1970s, and the work of San Francisco's Diggers in the 1960s. Some of this work led to pure improvisational performance styles, while others simply added to the theatrical vocabulary and were, on the whole, avant-garde experiments. Joan Littlewood, an English actress and director who was active from the 1950s to the 1960s, made extensive use of improv in developing plays for performance. However, she was successfully prosecuted twice for allowing her actors to improvise in performance. Until 1968, British law required scripts to be approved by the Lord Chamberlain's Office. 
The department also sent inspectors to some performances to check that the approved script was performed exactly as approved. In 1987, the Annoyance Theatre began as a club in Chicago that emphasizes longform improvisation. The Annoyance Theatre has grown into multiple locations in Chicago and New York City, and is the home of the longest-running musical improv show in history, at 11 years. In 2012, Lebanese writer and director Lucien Bourjeily used improvisational theatre techniques to create a multi-sensory play entitled "66 Minutes in Damascus". This play premiered at the London International Festival of Theatre, and is considered one of the most extreme kinds of interactive improvised theatre put on stage: the audience play the part of kidnapped tourists in today's Syria in a hyperreal sensory environment. Rob Wittig and Mark C. Marino have developed a form of online theatrical improvisation called netprov. The form relies on social media to engage audiences in the creation of dynamic fictional scenarios that evolve in real time. Modern improvisational comedy, as it is practiced in the West, falls generally into two categories: shortform and longform. Shortform improv consists of short scenes, usually constructed from a predetermined game, structure, or idea and driven by an audience suggestion. Many shortform exercises were first created by Viola Spolin, who called them theatre games, influenced by her training from recreational games expert Neva Boyd. The shortform improv comedy television series "Whose Line Is It Anyway?" has familiarized American and British viewers with the format. Longform improv performers create shows in which short scenes are often interrelated by story, characters, or themes. Longform shows may take the form of an existing type of theatre, for example a full-length play or a Broadway-style musical such as "Spontaneous Broadway". One of the better-known longform structures is the Harold, developed by ImprovOlympic co-founder Del Close. Many such longform structures now exist. Longform improvisation is especially performed in Chicago, New York City, and Los Angeles; it has a strong presence in Austin, Boston, Minneapolis, Phoenix, Philadelphia, San Francisco, Seattle, Detroit, Toronto, Vancouver, and Washington, D.C.; and it is building a growing following in Baltimore, Denver, Kansas City, Montreal, Columbus, New Orleans, Omaha, Rochester, and Hawaii. Outside the United States, longform improv has a growing presence in the United Kingdom, especially in cities such as London and Bristol, and at the Edinburgh Festival Fringe. Other forms of improvisational theatre training and performance techniques are experimental and avant-garde in nature and not necessarily intended to be comedic. These include Playback Theatre, Theatre of the Oppressed, the Poor Theatre, and the Open Theatre, to name only a few. The Open Theatre was founded in New York City by a group of former students of acting teacher Nola Chilton, who were joined shortly thereafter by director Joseph Chaikin, formerly of The Living Theatre, and Peter Feldman. This avant-garde theatre group explored political, artistic, and social issues. 
The company, developing work through an improvisational process drawn from Chilton and Viola Spolin, created well-known exercises such as "sound and movement" and "transformations", and originated radical forms and techniques that anticipated or were contemporaneous with Jerzy Grotowski's "poor theater" in Poland. During the sixties, Chaikin and the Open Theatre developed full theatrical productions with nothing but the actors, a few chairs and a bare stage, creating character, time and place through a series of transformations that the actors physicalized and discovered through improvisations. Longform, dramatic, and narrative-based improvisation is well established on the west coast, with companies such as San Francisco's BATS Improv. This format allows for full-length plays and musicals to be created improvisationally. Many people who have studied improv have noted that its guiding principles are useful not just on stage but in everyday life: Stephen Colbert, for example, invoked them in a commencement address, and Tina Fey, in her book "Bossypants", lists several rules of improv that apply in the workplace. There has been much interest in bringing lessons from improv into the corporate world. In a New York Times article titled "Can Executives Learn to Ignore the Script?", Stanford professor and author Patricia Ryan Madson notes, "executives and engineers and people in transition are looking for support in saying yes to their own voice. Often, the systems we put in place to keep us secure are keeping us from our more creative selves." Many directors have made use of improvisation in the creation of both mainstream and experimental films. Silent filmmakers such as Charlie Chaplin and Buster Keaton used improvisation in the making of their films, developing their gags while filming and altering the plot to fit. The Marx Brothers were notorious for deviating from the script they were given, their ad libs often becoming part of the standard routine and making their way into their films. Many people, however, make a distinction between ad-libbing and improvising. The British director Mike Leigh makes extensive use of improvisation in the creation of his films, including improvising important moments in the characters' lives that will not even appear in the film. "This Is Spinal Tap" and other mockumentary films of director Christopher Guest were created with a mix of scripted and unscripted material. "Blue in the Face" is a 1995 comedy directed by Wayne Wang and Paul Auster, created in part from the improvisations during the filming of "Smoke". Some of the best-known American film directors who used improvisation in their work with actors are John Cassavetes, Robert Altman, Christopher Guest, and Rob Reiner. Improv comedy techniques have also been used in hit television shows such as HBO's "Curb Your Enthusiasm", created by Larry David, the UK Channel 4 and ABC television series "Whose Line Is It Anyway" (and its spinoffs "Drew Carey's Green Screen Show" and "Drew Carey's Improv-A-Ganza"), Nick Cannon's improv comedy show "Wild 'N Out", and "Thank God You're Here". A very early American improv television program was the weekly half-hour "What Happens Now?", which premiered on New York's WOR-TV on October 15, 1949, and ran for 22 episodes. "The Improvisers" were six actors (including Larry Blyden, Ross Martin, and Jean Alexander, then known as Jean Pugsley) who improvised skits based on situations suggested by viewers. 
In Canada, the series "Train 48" was improvised from scripts which contained a minimal outline of each scene, and the comedy series "This Sitcom Is...Not to Be Repeated" incorporated dialogue drawn from a hat during the course of an episode. The American show "Reno 911!" also contained improvised dialogue based on a plot outline. "Fast and Loose" is an improvisational game show, much like "Whose Line Is It Anyway?". The BBC sitcoms "Outnumbered" and "The Thick of It" also had some improvised elements in them. In the field of the psychology of consciousness, Eberhard Scheiffele explored the altered state of consciousness experienced by actors and improvisers in his scholarly paper "Acting: an altered state of consciousness". According to G. William Farthing's comparative study "The Psychology of Consciousness", actors routinely enter into an altered state of consciousness (ASC). Acting is seen as altering most of the 14 dimensions of changed subjective experience which, according to Farthing, characterize ASCs, namely: attention, perception, imagery and fantasy, inner speech, memory, higher-level thought processes, meaning or significance of experiences, time experience, emotional feeling and expression, level of arousal, self-control, suggestibility, body image, and sense of personal identity. In the growing field of drama therapy, psychodramatic improvisation, along with other techniques developed for the field, is used extensively. The "Yes, and" rule has been compared to Milton Erickson's "utilization" process and to a variety of acceptance-based psychotherapies. Improv training has been recommended for couples therapy and therapist training, and it has been speculated that improv training may be helpful in some cases of social anxiety disorder. Improvisational theatre often allows an interactive relationship with the audience. Improv groups frequently solicit suggestions from the audience as a source of inspiration, a way of getting the audience involved, and a means of proving that the performance is not scripted. That charge is sometimes aimed at the masters of the art, whose performances can seem so detailed that viewers may suspect the scenes are planned. In order for an improvised scene to be successful, the improvisers involved must work together responsively to define the parameters and action of the scene, in a process of co-creation. With each spoken word or action in the scene, an improviser makes an "offer", meaning that he or she defines some element of the reality of the scene. This might include giving another character a name, identifying a relationship or location, or using mime to define the physical environment. These activities are also known as "endowment". It is the responsibility of the other improvisers to accept the offers that their fellow performers make; to not do so is known as blocking, negation, or denial, which usually prevents the scene from developing. Some performers may deliberately block (or otherwise break out of character) for comedic effect, known as "gagging", but this generally prevents the scene from advancing and is frowned upon by many improvisers. Accepting an offer is usually accompanied by adding a new offer, often building on the earlier one; this is a process improvisers refer to as "Yes, And..." and is considered the cornerstone of improvisational technique. Every new piece of information added helps the improvisers to refine their characters and progress the action of the scene. 
The ""Yes, And..."" rule, however, applies to a scene's early stage since it is in this stage that a "base (or shared) reality" is established in order to be later redefined by applying the ""if (this is true), then (what else can also be true)"" practice progressing the scene into comedy, as explained in the 2013 manual by the "Upright Citizens Brigade" members. The unscripted nature of improv also implies no predetermined knowledge about the props that might be useful in a scene. Improv companies may have at their disposal some number of readily accessible props that can be called upon at a moment's notice, but many improvisers eschew props in favor of the infinite possibilities available through mime. In improv, this is more commonly known as 'space object work' or 'space work', not 'mime', and the props and locations created by this technique, as 'space objects' created out of 'space substance,' developed as a technique by Viola Spolin. As with all improv "offers", improvisers are encouraged to respect the validity and continuity of the imaginary environment defined by themselves and their fellow performers; this means, for example, taking care not to walk through the table or "miraculously" survive multiple bullet wounds from another improviser's gun. Because improvisers may be required to play a variety of roles without preparation, they need to be able to construct characters quickly with physicality, gestures, accents, voice changes, or other techniques as demanded by the situation. The improviser may be called upon to play a character of a different age or sex. Character motivations are an important part of successful improv scenes, and improvisers must therefore attempt to act according to the objectives that they believe their character seeks. In improv formats with multiple scenes, an agreed-upon signal is used to denote scene changes. Most often, this takes the form of a performer running in front of the scene, known as a "wipe." Tapping a character in or out can also be employed. The performers not currently part of the scene often stand at the side or back of the stage, and can enter or exit the scene by stepping into or out of the stage center. Many theatre troupes are devoted to staging improvisational performances and growing the improv community through their training centers. In addition to for-profit theatre troupes, there are many college-based improv groups in the United States and around the world. In Europe the special contribution to the theatre of the abstract, the surreal, the irrational and the subconscious have been part of the stage tradition for centuries. From the 1990s onwards a growing number of European Improv groups have been set up specifically to explore the possibilities offered by the use of the abstract in improvised performance, including dance, movement, sound, music, mask work, lighting, and so on. These groups are not especially interested in comedy, either as a technique or as an effect, but rather in expanding the improv genre so as to incorporate techniques and approaches that have long been a legitimate part of European theatre. Some key figures in the development of improvisational theatre are Viola Spolin and her son Paul Sills, founder of Chicago's famed Second City troupe and originator of Theater Games, and Del Close, founder of ImprovOlympic (along with Charna Halpern) and creator of a popular longform improv format known as The Harold. 
Other luminaries include Keith Johnstone, the British teacher and writer, author of "Impro", who founded the Theatre Machine and whose teachings form the foundation of the popular shortform Theatresports format; Dick Chudnow, founder of ComedySportz, which evolved its family-friendly show format from Johnstone's Theatresports; and Bill Johnson, creator/director of The Magic Meathands, who pioneered the concept of "Commun-edy Outreach" by tailoring performances to non-traditional audiences, such as the homeless and foster children. David Shepherd, with Paul Sills, founded The Compass Players in Chicago. Shepherd was intent on developing a true "people's theatre", and hoped to bring political drama to the stockyards. The Compass went on to play in numerous forms and companies, in a number of cities including New York and Hyannis, after the founding of The Second City. A number of Compass members were also founding members of The Second City. In the 1970s, Shepherd began experimenting with group-created videos. He is the author of "That Movie In Your Head", about these efforts. In the 1970s, David Shepherd and Howard Jerome created the Improvisational Olympics, a format for competition-based improv. The Improv Olympics were first demonstrated at Toronto's Homemade Theatre in 1976 and have continued as the Canadian Improv Games. In the United States, the Improv Olympics were later produced by Charna Halpern under the name "ImprovOlympic", now "IO"; IO operates training centers and theaters in Chicago and Los Angeles. At IO, Halpern combined Shepherd's "Time Dash" game with Del Close's "Harold" game; the revised format for the Harold became the fundamental structure for the development of modern longform improvisation. In 1975 Jonathan Fox founded Playback Theatre, a form of improvised community theatre which is often not comedic and replays stories shared by members of the audience. The Groundlings is a popular and influential improv theatre and training center in Los Angeles, California. The late Gary Austin, founder of The Groundlings, taught improvisation around the country, focusing especially on Los Angeles. He was widely acclaimed as one of the greatest acting teachers in America. His work was grounded in the lessons he learned as an improviser at The Committee with Del Close, as well as in his experiences as founding director of The Groundlings. The Groundlings is often seen as the Los Angeles training ground for the "second generation" of improv luminaries and troupes. Stan Wells developed the "Clap-In" style of longform improvisation there, later using this as the basis for his own theatre, The Empty Stage, which in turn bred multiple troupes utilizing this style. In the late 1990s, Matt Besser, Amy Poehler, Ian Roberts, and Matt Walsh founded the Upright Citizens Brigade Theatre in New York, and later one in Los Angeles, each with an accompanying improv/sketch comedy school. In September 2011 the UCB opened a third theatre in New York City's East Village, known as UCBeast. Hoopla Impro founded the UK's and London's first improv theatre; it also runs an annual UK improv festival and an improv marathon. In 2015, The Free Association opened in London as a counterpart to American improv schools. Gunter Lösel compared the existing theories of improvisational theatre (from Moreno, Spolin, Johnstone, Close and others), structured them, and wrote a general theory of improvisational theatre. Alan Alda's book "If I Understood You, Would I Have This Look on My Face?" 
investigates the way in which improvisation improves communication in the sciences. The book is based on his work at the Alan Alda Center for Communicating Science at Stony Brook University, and it has many examples of how improvisational theatre games can increase communication skills and develop empathy.
https://en.wikipedia.org/wiki?curid=15041
International Space Station The International Space Station (ISS) is a modular space station (habitable artificial satellite) in low Earth orbit. The ISS program is a multi-national collaborative project between five participating space agencies: NASA (United States), Roscosmos (Russia), JAXA (Japan), ESA (Europe), and CSA (Canada). The ownership and use of the space station are established by intergovernmental treaties and agreements. It evolved from the Space Station Freedom proposal. The ISS serves as a microgravity and space environment research laboratory in which scientific experiments are conducted in astrobiology, astronomy, meteorology, physics, and other fields. The station is suited for testing the spacecraft systems and equipment required for possible future long-duration missions to the Moon and Mars. It is the largest artificial object in space and the largest satellite in low Earth orbit, regularly visible to the naked eye from Earth's surface. It maintains an orbit with an average altitude of about 400 km (250 mi) by means of reboost manoeuvres using the engines of the "Zvezda" Service Module or visiting spacecraft. The ISS circles the Earth in roughly 93 minutes, completing about 15.5 orbits per day. The station is divided into two sections: the Russian Orbital Segment (ROS), operated by Russia, and the United States Orbital Segment (USOS), which is shared by many nations. Roscosmos has endorsed the continued operation of the ISS through 2024, but had previously proposed using elements of the Russian segment to construct a new Russian space station called OPSEK. The station is currently expected to operate until 2030. The first ISS component was launched in 1998, with the first long-term residents arriving on 2 November 2000. Since then, the station has been continuously occupied, the longest continuous human presence in low Earth orbit, having surpassed the previous record held by the "Mir" space station. The latest major pressurised module was fitted in 2011, with an experimental inflatable space habitat added in 2016. Development and assembly of the station continue, with several major new Russian elements scheduled for launch starting in 2020. The ISS consists of pressurised habitation modules, structural trusses, photovoltaic solar arrays, thermal radiators, docking ports, experiment bays and robotic arms. Major ISS modules have been launched by Russian Proton and Soyuz rockets and US Space Shuttles. The ISS is the ninth space station to be inhabited by crews, following the Soviet and later Russian "Salyut", "Almaz", and "Mir" stations as well as "Skylab" from the US. The station is serviced by a variety of visiting spacecraft: the Russian Soyuz and Progress, the US Dragon and Cygnus, the Japanese H-II Transfer Vehicle, and formerly the European Automated Transfer Vehicle. The Dragon spacecraft allows the return of pressurised cargo to Earth (downmass), which is used, for example, to repatriate scientific experiments for further analysis. The Soyuz return capsule has minimal downmass capability alongside the astronauts. In total, 239 astronauts, cosmonauts, and space tourists from 20 different nations have visited the space station, many of them multiple times. The United States sent 151 people, Russia sent 47, nine were Japanese, eight Canadian, five Italian, four French, three German, and one each from Belgium, Brazil, Denmark, Kazakhstan, Malaysia, the Netherlands, South Africa, South Korea, Spain, Sweden, the United Arab Emirates, and the United Kingdom. 
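As a rough consistency check on the orbital figures just quoted (an editorial addition, not part of the source article), the period of a circular orbit follows from Kepler's third law. Assuming an average altitude h of about 400 km, Earth's mean radius R of about 6371 km, and Earth's standard gravitational parameter mu of about 3.986 x 10^5 km^3/s^2:

\[ T \;=\; 2\pi\sqrt{\frac{(R+h)^3}{\mu}} \;=\; 2\pi\sqrt{\frac{(6371+400)^3\,\mathrm{km^3}}{3.986\times 10^{5}\,\mathrm{km^3/s^2}}} \;\approx\; 5545\,\mathrm{s} \;\approx\; 92.4\ \text{minutes}, \]

which gives roughly 1440/92.4, or about 15.6, orbits per day, consistent with the roughly 93-minute period and 15.5 daily orbits stated above.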
The ISS was originally intended to be a laboratory, observatory, and factory while providing transportation, maintenance, and a low Earth orbit staging base for possible future missions to the Moon, Mars, and asteroids. However, not all of the uses envisioned in the initial memorandum of understanding between NASA and Roscosmos have come to fruition. In the 2010 United States National Space Policy, the ISS was given the additional roles of serving commercial, diplomatic, and educational purposes. The ISS provides a platform to conduct scientific research, with power, data, cooling, and crew available to support experiments. Small uncrewed spacecraft can also provide platforms for experiments, especially those involving zero gravity and exposure to space, but space stations offer a long-term environment where studies can be performed potentially for decades, combined with ready access by human researchers. The ISS simplifies individual experiments by allowing groups of experiments to share the same launches and crew time. Research is conducted in a wide variety of fields, including astrobiology, astronomy, physical sciences, materials science, space weather, meteorology, and human research including space medicine and the life sciences. Scientists on Earth have timely access to the data and can suggest experimental modifications to the crew. If follow-on experiments are necessary, the routinely scheduled launches of resupply craft allow new hardware to be launched with relative ease. Crews fly expeditions of several months' duration, providing approximately 160 person-hours per week of labour with a crew of six. However, a considerable amount of crew time is taken up by station maintenance. Perhaps the most notable ISS experiment is the Alpha Magnetic Spectrometer (AMS), which is intended to detect dark matter and answer other fundamental questions about our universe, and which, according to NASA, is as important as the Hubble Space Telescope. Currently docked on station, it could not have been easily accommodated on a free-flying satellite platform because of its power and bandwidth needs. On 3 April 2013, scientists reported that hints of dark matter may have been detected by the AMS. According to the scientists, "The first results from the space-borne Alpha Magnetic Spectrometer confirm an unexplained excess of high-energy positrons in Earth-bound cosmic rays". The space environment is hostile to life. Unprotected presence in space is characterised by an intense radiation field (consisting primarily of protons and other subatomic charged particles from the solar wind, in addition to cosmic rays), high vacuum, extreme temperatures, and microgravity. Some simple forms of life called extremophiles, as well as small invertebrates called tardigrades, can survive in this environment in an extremely dry state through desiccation. Medical research improves knowledge about the effects of long-term space exposure on the human body, including muscle atrophy, bone loss, and fluid shift. This data will be used to determine whether long-duration human spaceflight and space colonisation are feasible. Current data on bone loss and muscular atrophy suggest that there would be a significant risk of fractures and movement problems if astronauts landed on a planet after a lengthy interplanetary cruise, such as the six-month interval required to travel to Mars. Medical studies are conducted aboard the ISS on behalf of the National Space Biomedical Research Institute (NSBRI). 
Prominent among these is the Advanced Diagnostic Ultrasound in Microgravity study, in which astronauts perform ultrasound scans under the guidance of remote experts. The study considers the diagnosis and treatment of medical conditions in space. Usually there is no physician on board the ISS, and diagnosis of medical conditions is a challenge. It is anticipated that remotely guided ultrasound scans will have application on Earth in emergency and rural care situations where access to a trained physician is difficult. Gravity at the altitude of the ISS is approximately 90% as strong as at Earth's surface (by the inverse-square law, with the station only about 400 km above Earth's mean radius of 6371 km, g is roughly 9.81 m/s² × (6371/6771)² ≈ 8.7 m/s²), but objects in orbit are in a continuous state of freefall, resulting in an apparent state of weightlessness. This perceived weightlessness is disturbed by five separate effects. Researchers are investigating the effect of the station's near-weightless environment on the evolution, development, growth and internal processes of plants and animals. In response to some of this data, NASA wants to investigate microgravity's effects on the growth of three-dimensional, human-like tissues and on the unusual protein crystals that can be formed in space. Investigating the physics of fluids in microgravity will provide better models of the behaviour of fluids. Because fluids can be almost completely combined in microgravity, physicists investigate fluids that do not mix well on Earth. In addition, examining reactions that are slowed by low gravity and low temperatures will improve our understanding of superconductivity. The study of materials science is an important ISS research activity, with the objective of reaping economic benefits through the improvement of techniques used on the ground. Other areas of interest include the effect of the low-gravity environment on combustion, through the study of the efficiency of burning and the control of emissions and pollutants. These findings may improve current knowledge about energy production and lead to economic and environmental benefits. Future plans are for the researchers aboard the ISS to examine aerosols, ozone, water vapour, and oxides in Earth's atmosphere, as well as cosmic rays, cosmic dust, antimatter, and dark matter in the Universe. The ISS provides a location in the relative safety of low Earth orbit to test spacecraft systems that will be required for long-duration missions to the Moon and Mars. This provides experience in operations, maintenance, and repair and replacement activities on-orbit; these are essential skills for operating spacecraft farther from Earth, and developing them reduces mission risks and advances the capabilities of interplanetary spacecraft. Referring to the MARS-500 experiment, ESA states that "Whereas the ISS is essential for answering questions concerning the possible impact of weightlessness, radiation and other space-specific factors, aspects such as the effect of long-term isolation and confinement can be more appropriately addressed via ground-based simulations". Sergey Krasnov, the head of human space flight programmes for Russia's space agency, Roscosmos, suggested in 2011 that a "shorter version" of MARS-500 might be carried out on the ISS. In 2009, noting the value of the partnership framework itself, Sergey Krasnov wrote, "When compared with partners acting separately, partners developing complementary abilities and resources could give us much more assurance of the success and safety of space exploration. 
The ISS is helping further advance near-Earth space exploration and realisation of prospective programmes of research and exploration of the Solar system, including the Moon and Mars." A crewed mission to Mars may be a multinational effort involving space agencies and countries outside the current ISS partnership. In 2010, ESA Director-General Jean-Jacques Dordain stated his agency was ready to propose to the other four partners that China, India and South Korea be invited to join the ISS partnership. NASA chief Charlie Bolden stated in February 2011, "Any mission to Mars is likely to be a global effort". Currently, US federal legislation prevents NASA co-operation with China on space projects. The ISS crew provides opportunities for students on Earth by running student-developed experiments, making educational demonstrations, allowing for student participation in classroom versions of ISS experiments, and directly engaging students using radio, videolink and email. ESA offers a wide range of free teaching materials that can be downloaded for use in classrooms. In one lesson, students can navigate a 3-D model of the interior and exterior of the ISS, and face spontaneous challenges to solve in real time. JAXA aims to inspire children to "pursue craftsmanship" and to heighten their "awareness of the importance of life and their responsibilities in society". Through a series of education guides, students develop a deeper understanding of the past and near-term future of crewed space flight, as well as of Earth and life. In the JAXA "Seeds in Space" experiments, the mutation effects of spaceflight on plant seeds aboard the ISS are explored. Students grow sunflower seeds that have flown on the ISS for about nine months. In the first phase of "Kibō" utilisation, from 2008 to mid-2010, researchers from more than a dozen Japanese universities conducted experiments in diverse fields. Cultural activities are another major objective. Tetsuo Tanaka, director of JAXA's Space Environment and Utilization Center, says: "There is something about space that touches even people who are not interested in science." Amateur Radio on the ISS (ARISS) is a volunteer programme which encourages students worldwide to pursue careers in science, technology, engineering and mathematics through amateur radio communications opportunities with the ISS crew. ARISS is an international working group consisting of delegations from nine countries, including several in Europe as well as Japan, Russia, Canada, and the United States. In areas where radio equipment cannot be used, speakerphones connect students to ground stations, which then connect the calls to the station. "First Orbit" is a feature-length documentary film about Vostok 1, the first crewed space flight around the Earth. By matching the orbit of the International Space Station to that of Vostok 1 as closely as possible, in terms of ground path and time of day, documentary filmmaker Christopher Riley and ESA astronaut Paolo Nespoli were able to film the view that Yuri Gagarin saw on his pioneering orbital space flight. This new footage was cut together with the original Vostok 1 mission audio recordings sourced from the Russian State Archive. Nespoli, during Expedition 26/27, filmed the majority of the footage for this documentary film, and as a result is credited as its director of photography. The film was streamed through the website firstorbit.org in a global YouTube premiere in 2011, under a free licence. 
In May 2013, commander Chris Hadfield shot a music video of David Bowie's "Space Oddity" on board the station; the film was released on YouTube. It was the first music video ever to be filmed in space. In November 2017, while participating in Expedition 52/53 on the ISS, Paolo Nespoli made two recordings of his spoken voice (one in English, the other in his native Italian) for use on Wikipedia articles. These were the first recordings made specifically for Wikipedia in space. Since the International Space Station is a multi-national collaborative project, the components for in-orbit assembly were manufactured in various countries around the world. Beginning in the mid-1990s, the US components "Destiny", "Unity", the Integrated Truss Structure, and the solar arrays were fabricated at the Marshall Space Flight Center and the Michoud Assembly Facility. These modules were delivered to the Operations and Checkout Building and the Space Station Processing Facility for final assembly and processing for launch. The Russian modules, including "Zarya" and "Zvezda", were manufactured at the Khrunichev State Research and Production Space Center in Moscow. "Zvezda" was initially manufactured in 1985 as a component for "Mir-2", but was never launched and instead became the ISS Service Module. The European Space Agency's "Columbus" module was manufactured at the EADS Astrium Space Transportation facilities in Bremen, Germany, along with many other contractors throughout Europe. The other ESA-built modules ("Harmony", "Tranquility", the "Leonardo" MPLM, and the "Cupola") were initially manufactured at the Thales Alenia Space factory in Turin, Italy. The structural steel hulls of the modules were transported by aircraft to the Kennedy Space Center SSPF for launch processing. The Japanese Experiment Module "Kibō" was fabricated in various technology manufacturing facilities in Japan, at the NASDA (now JAXA) Tsukuba Space Center, and at the Institute of Space and Astronautical Science. The "Kibō" module was transported by ship and flown by aircraft to the KSC Space Station Processing Facility. The Mobile Servicing System, consisting of the Canadarm2 and the "Dextre" grapple fixture, was manufactured at various factories in Canada (such as the David Florida Laboratory) and the United States, under contract by the Canadian Space Agency. The mobile base system, a connecting framework for Canadarm2 mounted on rails, was built by Northrop Grumman. The assembly of the International Space Station, a major endeavour in space architecture, began in November 1998. Russian modules launched and docked robotically, with the exception of "Rassvet". All other modules were delivered by the Space Shuttle, which required installation by ISS and Shuttle crew members using the Canadarm2 (SSRMS) and extra-vehicular activities (EVAs); in total, they added 159 components during more than 1,000 hours of EVA (see List of ISS spacewalks). 127 of these spacewalks originated from the station, and the remaining 32 were launched from the airlocks of docked Space Shuttles. The beta angle of the station had to be considered at all times during construction. The first module of the ISS, "Zarya", was launched on 20 November 1998 on an autonomous Russian Proton rocket. It provided propulsion, attitude control, communications, and electrical power, but lacked long-term life support functions. Two weeks later, a passive NASA module, "Unity", was launched aboard Space Shuttle flight STS-88 and attached to "Zarya" by astronauts during EVAs. 
This module has two Pressurised Mating Adapters (PMAs): one connects permanently to "Zarya", while the other allowed the Space Shuttle to dock to the space station. At that time, the Russian station "Mir" was still inhabited, and the ISS remained uncrewed for two years. On 12 July 2000, "Zvezda" was launched into orbit. Preprogrammed commands on board deployed its solar arrays and communications antenna. It then became the passive target for a rendezvous with "Zarya" and "Unity": it maintained a station-keeping orbit while the "Zarya"-"Unity" vehicle performed the rendezvous and docking via ground control and the Russian automated rendezvous and docking system. The "Zarya" computer transferred control of the station to the "Zvezda" computer soon after docking. "Zvezda" added sleeping quarters, a toilet, kitchen, CO2 scrubbers, dehumidifier, oxygen generators, exercise equipment, plus data, voice and television communications with mission control. This enabled permanent habitation of the station. The first resident crew, Expedition 1, arrived in November 2000 on Soyuz TM-31. At the end of the first day on the station, astronaut Bill Shepherd requested the use of the radio call sign "Alpha", which he and cosmonaut Krikalev preferred to the more cumbersome "International Space Station". The name "Alpha" had previously been used for the station in the early 1990s, and its use was authorised for the whole of Expedition 1. Shepherd had been advocating the use of a new name to project managers for some time. Referencing a naval tradition in a pre-launch news conference, he had said: "For thousands of years, humans have been going to sea in ships. People have designed and built these vessels, launched them with a good feeling that a name will bring good fortune to the crew and success to their voyage." Yuri Semenov, the President of the Russian Space Corporation Energia at the time, disapproved of the name "Alpha", as he felt that "Mir" was the first modular space station, so the names "Beta" or "Mir 2" for the ISS would have been more fitting. Expedition 1 arrived midway between the flights of STS-92 and STS-97. These two Space Shuttle flights each added segments of the station's Integrated Truss Structure, which provided the station with Ku-band communication for US television, additional attitude control needed for the increased mass of the USOS, and substantial solar arrays supplementing the station's four existing solar arrays. Over the next two years, the station continued to expand. A Soyuz-U rocket delivered the "Pirs" docking compartment. The Space Shuttles "Discovery", "Atlantis", and "Endeavour" delivered the "Destiny" laboratory and "Quest" airlock, in addition to the station's main robot arm, the Canadarm2, and several more segments of the Integrated Truss Structure. The expansion schedule was interrupted by the "Columbia" disaster in 2003 and a resulting hiatus in flights. The Space Shuttle was grounded until 2005, when STS-114 was flown by "Discovery". Assembly resumed in 2006 with the arrival of STS-115 aboard "Atlantis", which delivered the station's second set of solar arrays. Several more truss segments and a third set of arrays were delivered on STS-116, STS-117, and STS-118. As a result of the major expansion of the station's power-generating capabilities, more pressurised modules could be accommodated, and the "Harmony" node and "Columbus" European laboratory were added. These were soon followed by the first two components of "Kibō".
In March 2009, STS-119 completed the Integrated Truss Structure with the installation of the fourth and final set of solar arrays. The final section of "Kibō" was delivered in July 2009 on STS-127, followed by the Russian "Poisk" module. The third node, "Tranquility", was delivered in February 2010 during STS-130 by the Space Shuttle "Endeavour", alongside the "Cupola", followed in May 2010 by the penultimate Russian module, "Rassvet". "Rassvet" was delivered by Space Shuttle "Atlantis" on STS-132 in exchange for the Russian Proton delivery of the US-funded "Zarya" module in 1998. The last pressurised module of the USOS, "Leonardo", was brought to the station in February 2011 on the final flight of "Discovery", STS-133. The Alpha Magnetic Spectrometer was delivered by "Endeavour" on STS-134 the same year. At that point, the station consisted of 15 pressurised modules and the Integrated Truss Structure. Five modules are still to be launched, including the "Nauka" with the European Robotic Arm, the "Prichal" module, and two power modules called NEM-1 and NEM-2. Russia's future primary research module "Nauka" is set to launch in the spring of 2021, along with the European Robotic Arm, which will be able to relocate itself to different parts of the Russian modules of the station. The gross mass of the station changes over time. The total launch mass of the modules on orbit is about 420,000 kg (925,000 lb). The mass of experiments, spare parts, personal effects, crew, foodstuff, clothing, propellants, water supplies, gas supplies, docked spacecraft, and other items adds to the total mass of the station. Hydrogen gas is constantly vented overboard by the oxygen generators. The ISS is a third-generation modular space station. Modular stations can allow modules to be added to or removed from the existing structure, allowing greater flexibility. The "Unity" node joins directly to the "Destiny" laboratory. "Zarya" (Заря́, meaning sunrise), also known as the Functional Cargo Block or FGB (from the Russian "Функционально-грузовой блок", or "ФГБ"), is the first module of the ISS to have been launched. The FGB provided electrical power, storage, propulsion, and guidance to the ISS during the initial stage of assembly. With the launch and assembly in orbit of other modules with more specialised functionality, "Zarya" is currently used primarily for storage, both inside the pressurised section and in the externally mounted fuel tanks. "Zarya" is a descendant of the TKS spacecraft designed for the Russian "Salyut" programme. The name "Zarya" was given to the FGB because it signified the dawn of a new era of international cooperation in space. Although it was built by a Russian company, it is owned by the United States. "Zarya" was built from December 1994 to January 1998 at the Khrunichev State Research and Production Space Center (KhSC) in Moscow. "Zarya" was launched on 20 November 1998 on a Russian Proton rocket from Baikonur Cosmodrome Site 81 in Kazakhstan to a high orbit, with a designed lifetime of at least 15 years. After "Zarya" reached orbit, STS-88 launched on 4 December 1998 to attach the "Unity" module. The "Unity" connecting module, also known as Node 1, is the first US-built component of the ISS. It connects the Russian and US segments of the station, and is where crew eat meals together.
The module is cylindrical in shape, with six berthing locations (forward, aft, port, starboard, zenith, and nadir) facilitating connections to other modules. "Unity" measures 4.57 metres (15 ft) in diameter, is 5.47 metres (18 ft) long, is made of steel, and was built for NASA by Boeing in a manufacturing facility at the Marshall Space Flight Center in Huntsville, Alabama. "Unity" is the first of the three connecting modules; the other two are "Harmony" and "Tranquility". "Unity" was carried into orbit as the primary cargo of Space Shuttle "Endeavour" on STS-88, the first Space Shuttle mission dedicated to assembly of the station. On 6 December 1998, the STS-88 crew mated the aft berthing port of "Unity" with the forward hatch of the already orbiting "Zarya" module. This was the first connection made between two station modules. "Zvezda" (Звезда, meaning "star"), "Salyut" DOS-8, also known as the "Zvezda" Service Module, is a module of the ISS. It was the third module launched to the station, and provides all of the station's life support systems, some of which are supplemented in the USOS, as well as living quarters for two crew members. It is the structural and functional centre of the Russian Orbital Segment, which is the Russian part of the ISS. Crew assemble here to deal with emergencies on the station. The basic structural frame of "Zvezda", known as "DOS-8", was initially built in the mid-1980s to be the core of the "Mir-2" space station. This means that "Zvezda" is similar in layout to the core module (DOS-7) of the "Mir" space station. It was in fact labelled as "Mir-2" for quite some time in the factory. Its design lineage thus extends back to the original "Salyut" stations. The space frame was completed in February 1985, and major internal equipment was installed by October 1986. The rocket used for launch to the ISS carried advertising; it was emblazoned with the logo of Pizza Hut restaurants, for which Pizza Hut is reported to have paid more than US$1 million. The money helped support the Khrunichev State Research and Production Space Center and the Russian advertising agencies that orchestrated the event. On 26 July 2000, "Zvezda" became the third component of the ISS when it docked at the aft port of "Zarya". (The U.S. "Unity" module had already been attached to "Zarya".) Later in July, the computers aboard "Zarya" handed over ISS commanding functions to computers on "Zvezda". The "Destiny" module, also known as the U.S. Lab, is the primary operating facility for U.S. research payloads aboard the International Space Station (ISS). It was berthed to the "Unity" module and activated over a period of five days in February 2001. "Destiny" is NASA's first permanent operating orbital research station since Skylab was vacated in February 1974. The Boeing Company began construction of the research laboratory in 1995 at the Michoud Assembly Facility and then the Marshall Space Flight Center in Huntsville, Alabama. "Destiny" was shipped to the Kennedy Space Center in Florida in 1998, and was turned over to NASA for pre-launch preparations in August 2000. It launched on 7 February 2001 aboard Space Shuttle "Atlantis" on STS-98. The "Quest" Joint Airlock, previously known as the Joint Airlock Module, is the primary airlock for the ISS. "Quest" was designed to host spacewalks with both Extravehicular Mobility Unit (EMU) spacesuits and Orlan space suits. The airlock was launched on STS-104 on 14 July 2001.
Before "Quest" was attached, Russian spacewalks using Orlan suits could only be done from the "Zvezda" service module, and American spacewalks using EMUs were only possible when a Space Shuttle was docked. The arrival of "Pirs" docking compartment on 16 September 2001 provided another airlock from which Orlan spacewalks can be conducted. "Pirs" () and "Poisk" () are Russian airlock modules, each having two identical hatches. An outward-opening hatch on the "Mir" space station failed after it swung open too fast after unlatching, because of a small amount of air pressure remaining in the airlock. All EVA hatches on the ISS open inwards and are pressure-sealing. "Pirs" was used to store, service, and refurbish Russian Orlan suits and provided contingency entry for crew using the slightly bulkier American suits. The outermost docking ports on both airlocks allow docking of Soyuz and Progress spacecraft, and the automatic transfer of propellants to and from storage on the ROS. "Pirs" was launched on 14 September 2001, as ISS Assembly Mission 4R, on a Russian Soyuz-U rocket, using a modified Progress spacecraft, Progress M-SO1, as an upper stage. "Poisk" was launched on 10 November 2009 attached to a modified Progress spacecraft, called Progress M-MIM2, on a Soyuz-U rocket from Launch Pad 1 at the Baikonur Cosmodrome in Kazakhstan. "Harmony", also known as "Node 2", is the "utility hub" of the ISS. It connects the laboratory modules of the United States, Europe and Japan, as well as providing electrical power and electronic data. Sleeping cabins for four of the six crew are housed here. "Harmony" was successfully launched into space aboard Space Shuttle flight STS-120 on 23 October 2007. After temporarily being attached to the port side of the "Unity", it was moved to its permanent location on the forward end of the Destiny laboratory on 14 November 2007. "Harmony" added to the station's living volume, an increase of almost 20 percent, from to . Its successful installation meant that from NASA's perspective, the station was "U.S. Core Complete". "Tranquility", also known as Node 3, is a module of the ISS. It contains environmental control systems, life support systems, a toilet, exercise equipment, and an observation cupola. ESA and the Italian Space Agency had "Tranquility" built by Thales Alenia Space. A ceremony on 20 November 2009 transferred ownership of the module to NASA. On 8 February 2010, NASA launched the module on the Space Shuttle's STS-130 mission. "Columbus" is a science laboratory that is part of the ISS and is the largest single contribution to the ISS made by the European Space Agency (ESA). The "Columbus" laboratory was flown to the Kennedy Space Center (KSC) in Florida in an Airbus Beluga. It was launched aboard on 7 February 2008 on flight STS-122. It is designed for ten years of operation. The module is controlled by the Columbus Control Centre, located at the German Space Operations Centre, part of the German Aerospace Center in Oberpfaffenhofen near Munich, Germany. The European Space Agency has spent €1.4 billion (about US$2 billion) on building "Columbus", including the experiments that will orbit in "Columbus" and the ground control infrastructure necessary to operate the experiments. The Japanese Experiment Module (JEM), nicknamed , is a Japanese science module for the ISS developed by JAXA. It is the largest single ISS module, and is attached to the "Harmony" module. The first two pieces of the module were launched on Space Shuttle missions STS-123 and STS-124. 
The third and final component was launched on STS-127. The "Cupola" is an ESA-built observatory module of the ISS. Its name derives from the Italian word "cupola", which means "dome". Its seven windows are used to conduct experiments, dockings and observations of Earth. It was launched aboard Space Shuttle mission STS-130 on 8 February 2010 and attached to the "Tranquility" (Node 3) module. With the "Cupola" attached, ISS assembly reached 85 percent completion. The "Cupola"'s central window has a diameter of 80 cm (31 in). "Rassvet" (Рассвет; lit. "dawn"), also known as the Mini-Research Module 1 (MRM-1) and formerly known as the Docking Cargo Module (DCM), is a component of the ISS. The module's design is similar to the Mir Docking Module launched on STS-74 in 1995. "Rassvet" is primarily used for cargo storage and as a docking port for visiting spacecraft. It was flown to the ISS aboard Space Shuttle "Atlantis" on the STS-132 mission on 14 May 2010, and was connected to the ISS on 18 May. The hatch connecting "Rassvet" with the ISS was first opened on 20 May. On 28 June 2010, the Soyuz TMA-19 spacecraft performed the first docking with the module. The "Leonardo" Permanent Multipurpose Module (PMM) is a module of the ISS. It was flown into space aboard Space Shuttle "Discovery" on STS-133 on 24 February 2011 and installed on 1 March. "Leonardo" is primarily used for storage of spares, supplies and waste on the ISS, which until then had been stored in many different places within the space station. The "Leonardo" PMM was a Multi-Purpose Logistics Module (MPLM) before 2011, but was modified into its current configuration. It was formerly one of three MPLMs used for bringing cargo to and from the ISS with the Space Shuttle. The module was named for the Italian polymath Leonardo da Vinci. The Bigelow Expandable Activity Module (BEAM) is an experimental expandable space station module developed by Bigelow Aerospace, under contract to NASA, for testing as a temporary module on the ISS from 2016 to at least 2020. It arrived at the ISS on 10 April 2016, was berthed to the station on 16 April, and was expanded and pressurised on 28 May 2016. The International Docking Adapter (IDA) is a spacecraft docking system adapter developed to convert APAS-95 to the NASA Docking System (NDS)/International Docking System Standard (IDSS). An IDA is placed on each of the ISS's two open Pressurised Mating Adapters (PMAs), both of which are connected to the "Harmony" module. IDA-1 was lost during the launch failure of SpaceX CRS-7 on 28 June 2015. IDA-2 was launched on SpaceX CRS-9 on 18 July 2016. It was attached and connected to PMA-2 during a spacewalk on 19 August 2016. First docking was achieved with the arrival of Crew Dragon Demo-1 on 3 March 2019. IDA-3 was launched on the SpaceX CRS-18 mission in July 2019. IDA-3 was constructed mostly from spare parts to speed its construction. It was attached and connected to PMA-3 during a spacewalk on 21 August 2019. The ISS has a large number of external components that do not require pressurisation. The largest of these is the Integrated Truss Structure (ITS), to which the station's main solar arrays and thermal radiators are mounted. The ITS consists of ten separate segments forming a structure 108.5 metres (356 ft) long. The station was also designed with several smaller external components, such as six robotic arms, three External Stowage Platforms (ESPs) and four ExPRESS Logistics Carriers (ELCs).
While these platforms allow experiments (including MISSE, the STP-H3 and the Robotic Refueling Mission) to be deployed and conducted in the vacuum of space by providing electricity and processing experimental data locally, their primary function is to store spare Orbital Replacement Units (ORUs). ORUs are parts that can be replaced when they fail or pass their design life, including pumps, storage tanks, antennas, and battery units. Such units are replaced either by astronauts during EVA or by robotic arms. Several shuttle missions were dedicated to the delivery of ORUs, including STS-129, STS-133 and STS-134. To date, only one other mode of transportation of ORUs has been utilised: the Japanese cargo vessel HTV-2, which delivered an FHRC and CTC-2 via its Exposed Pallet (EP). There are also smaller exposure facilities mounted directly to laboratory modules; the "Kibō" Exposed Facility serves as an external "porch" for the "Kibō" complex, and a facility on the European "Columbus" laboratory provides power and data connections for experiments such as the European Technology Exposure Facility and the Atomic Clock Ensemble in Space. A remote sensing instrument, SAGE III-ISS, was delivered to the station in February 2017 aboard CRS-10, and the NICER experiment was delivered aboard CRS-11 in June 2017. The largest scientific payload externally mounted to the ISS is the Alpha Magnetic Spectrometer (AMS), a particle physics experiment launched on STS-134 in May 2011 and mounted externally on the ITS. The AMS measures cosmic rays to look for evidence of dark matter and antimatter. The commercial "Bartolomeo" External Payload Hosting Platform, manufactured by Airbus, was launched on 6 March 2020 aboard CRS-20 and attached to the European "Columbus" module. It will provide an additional 12 external payload slots, supplementing the eight on the ExPRESS Logistics Carriers, ten on "Kibō", and four on "Columbus". The system is designed to be robotically serviced and will require no astronaut intervention. It is named after Christopher Columbus's younger brother. The Integrated Truss Structure serves as a base for the station's primary remote manipulator system, called the Mobile Servicing System (MSS), which is composed of three main components. Canadarm2, the largest robotic arm on the ISS, has a mass of 1,800 kilograms (4,000 lb) and is used to dock and manipulate spacecraft and modules on the USOS, hold crew members and equipment in place during EVAs, and move Dextre around to perform tasks. Dextre is a robotic manipulator with two arms and a rotating torso, equipped with power tools, lights and video equipment for replacing orbital replacement units (ORUs) and performing other tasks requiring fine control. The Mobile Base System (MBS) is a platform which rides on rails along the length of the station's main truss. It serves as a mobile base for Canadarm2 and Dextre, allowing the robotic arms to reach all parts of the USOS. To gain access to the Russian Segment, a grapple fixture was added to "Zarya" on STS-134, so that Canadarm2 can inchworm itself onto the ROS. Also installed during STS-134 was the Orbiter Boom Sensor System (OBSS), which had been used to inspect heat shield tiles on Space Shuttle missions and can be used on the station to increase the reach of the MSS. Staff on Earth or aboard the station can operate the MSS components via remote control, performing work outside the station without spacewalks. Japan's Remote Manipulator System, which services the "Kibō" Exposed Facility, was launched on STS-124 and is attached to the "Kibō" Pressurised Module.
The arm is similar to the Space Shuttle arm in that it is permanently attached at one end and has a latching end effector for standard grapple fixtures at the other. The European Robotic Arm, which will service the Russian Orbital Segment, will be launched alongside the Multipurpose Laboratory Module "Nauka". The ROS does not require spacecraft or modules to be manipulated, as all spacecraft and modules dock automatically and may be discarded the same way. Crew use the two "Strela" (Russian: Стрела́; lit. "arrow") cargo cranes during EVAs for moving crew and equipment around the ROS. Each Strela crane has a mass of about 45 kg. "Nauka" (Наука; lit. "science"), also known as the Multipurpose Laboratory Module (MLM) (Russian: "Многофункциональный лабораторный модуль", or "МЛМ"), is a component of the ISS which has not yet been launched into space. The MLM is funded by the Roscosmos State Corporation. In the original ISS plans, "Nauka" was to use the location of the Docking and Stowage Module (DSM). Later, the DSM was replaced by the "Rassvet" module, which was moved to "Zarya"'s nadir port. Planners anticipate "Nauka" will dock at "Zvezda"'s nadir port, replacing "Pirs". The launch of "Nauka", initially planned for 2007, has been repeatedly delayed for various reasons; the launch to the ISS is currently assigned to no earlier than spring 2021. After this date, the warranties of some of "Nauka"'s systems will expire. "Prichal", also known as "Uzlovoy" Module or UM ("Nodal Module Berth"), is a ball-shaped module that will allow docking of two scientific and power modules during the final stage of the station assembly, and provide the Russian segment with additional docking ports to receive Soyuz MS and Progress MS spacecraft. UM is due to be launched in the third quarter of 2021. It will be integrated with a special version of the Progress cargo ship and launched by a standard Soyuz rocket, docking to the nadir port of the "Nauka" module. One port is equipped with an active hybrid docking system, which enables docking with the MLM module. The remaining five ports are passive hybrids, enabling docking of Soyuz and Progress vehicles, as well as heavier modules and future spacecraft with modified docking systems. The node module was intended to serve as the only permanent element of the cancelled OPSEK. Science Power Module 1 (SPM-1, also known as NEM-1) and Science Power Module 2 (SPM-2, also known as NEM-2) are modules planned to arrive at the ISS not earlier than 2024. They are to dock to the "Prichal" module, which is planned to be attached to the "Nauka" module. If "Nauka" is cancelled, then "Prichal", SPM-1, and SPM-2 would dock at the zenith port of "Zvezda". SPM-1 and SPM-2 would also be required components for the OPSEK space station. The NanoRacks Bishop Airlock Module is a commercially funded airlock module intended to be launched to the ISS on SpaceX CRS-21 in August 2020. The module is being built by NanoRacks, Thales Alenia Space, and Boeing. It will be used to deploy CubeSats, small satellites, and other external payloads for NASA, CASIS, and other commercial and governmental customers. In January 2020, NASA awarded Axiom Space a contract to build a commercial module for the space station, to launch in 2024. The contract is under the NextSTEP2 programme. NASA said it will begin negotiations with Axiom on a firm-fixed-price contract to build and deliver the module, which will attach to the forward port of the space station's "Harmony" module (Node 2).
Although NASA has commissioned only one module, Axiom plans to build an entire segment consisting of five modules. These would include a node module, an orbital research and manufacturing facility, a crew habitat, and a "large-windowed Earth observatory". The Axiom segment would greatly increase the capabilities and value of the station, allowing for larger crews and private spaceflight by other organisations. Axiom plans to turn its segment into its own space station once the ISS is decommissioned, letting it act as a successor to the station. Several modules planned for the station were cancelled over the course of the ISS programme. Reasons include budgetary constraints, the modules becoming unnecessary, and station redesigns after the 2003 "Columbia" disaster. The US Centrifuge Accommodations Module would have hosted science experiments in varying levels of artificial gravity. The US Habitation Module would have served as the station's living quarters; instead, the living quarters are now spread throughout the station. The US Interim Control Module and ISS Propulsion Module would have replaced the functions of "Zvezda" in case of a launch failure. Two Russian Research Modules were planned for scientific research. They would have docked to a Russian Universal Docking Module. The Russian Science Power Platform would have supplied power to the Russian Orbital Segment independently of the ITS solar arrays. The critical systems are the atmosphere control system, the water supply system, the food supply facilities, the sanitation and hygiene equipment, and the fire detection and suppression equipment. The Russian Orbital Segment's life support systems are contained in the "Zvezda" service module. Some of these systems are supplemented by equipment in the USOS. The "Nauka" laboratory has a complete set of life support systems. The atmosphere on board the ISS is similar to the Earth's. Normal air pressure on the ISS is 101.3 kPa (14.7 psi), the same as at sea level on Earth. An Earth-like atmosphere offers benefits for crew comfort, and is much safer than a pure oxygen atmosphere because of the increased risk of a fire, such as the one responsible for the deaths of the Apollo 1 crew. Earth-like atmospheric conditions have been maintained on all Russian and Soviet spacecraft. The "Elektron" system aboard "Zvezda" and a similar system in "Destiny" generate oxygen aboard the station. The crew has a backup option in the form of bottled oxygen and Solid Fuel Oxygen Generation (SFOG) canisters, a chemical oxygen generator system. Carbon dioxide is removed from the air by the Vozdukh system in "Zvezda". Other by-products of human metabolism, such as methane from the intestines and ammonia from sweat, are removed by activated charcoal filters. Part of the ROS atmosphere control system is the oxygen supply. Triple redundancy is provided by the Elektron unit, solid fuel generators, and stored oxygen. The primary supply of oxygen is the Elektron unit, which produces oxygen and hydrogen by electrolysis of water; the oxygen is released into the cabin, while the hydrogen is vented overboard. The system uses approximately one litre of water per crew member per day. This water is either brought from Earth or recycled from other systems. "Mir" was the first spacecraft to use recycled water for oxygen production. The secondary oxygen supply is provided by burning oxygen-producing Vika cartridges (see also ISS ECLSS). Each 'candle' takes 5–20 minutes to decompose, producing oxygen. This unit is manually operated.
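As a rough plausibility check, not taken from the source, the one-litre-per-person water figure can be compared with typical human oxygen needs using simple electrolysis stoichiometry; the ~0.84 kg/day oxygen consumption rate used below is an assumed, commonly cited approximation rather than a figure from this article:

    # Sketch: oxygen yield from electrolysing one litre of water per day.
    # Reaction: 2 H2O -> 2 H2 + O2
    water_mass_g = 1000.0        # ~1 litre of water per crew member per day
    molar_mass_h2o = 18.02       # g/mol
    molar_mass_o2 = 32.00        # g/mol

    mol_h2o = water_mass_g / molar_mass_h2o      # ~55.5 mol of water
    mol_o2 = mol_h2o / 2                         # 1 mol O2 per 2 mol H2O
    o2_mass_g = mol_o2 * molar_mass_o2           # ~888 g of O2

    print(f"O2 produced: {o2_mass_g:.0f} g/day")
    # An adult consumes roughly 0.84 kg of O2 per day (assumed figure),
    # so one litre of water per person per day is about the right order.

The result, roughly 0.89 kg of oxygen per litre of water, lines up well with the quoted consumption rate.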
The US Orbital Segment has redundant supplies of oxygen, from a pressurised storage tank on the "Quest" airlock module delivered in 2001, supplemented ten years later by the ESA-built Advanced Closed-Loop System (ACLS) in the "Tranquility" module (Node 3), which produces oxygen by electrolysis. The hydrogen produced is combined with carbon dioxide from the cabin atmosphere and converted to water and methane. Double-sided solar arrays provide electrical power to the ISS. These bifacial cells collect direct sunlight on one side and light reflected off the Earth on the other, and are more efficient and operate at a lower temperature than the single-sided cells commonly used on Earth. The Russian segment of the station, like most spacecraft, uses 28 V low-voltage DC from four rotating solar arrays mounted on "Zarya" and "Zvezda". The USOS uses 130–180 V DC from the USOS PV arrays; power is stabilised and distributed at 160 V DC and converted to the user-required 124 V DC. The higher distribution voltage allows smaller, lighter conductors, at the expense of crew safety. The two station segments share power with converters. The USOS solar arrays are arranged as four wing pairs, for a total production of 75 to 90 kilowatts. These arrays normally track the sun to maximise power generation. Each array is about 375 m² in area and 58 metres long. In the complete configuration, the solar arrays track the sun by rotating the "alpha gimbal" once per orbit, while the "beta gimbal" follows slower changes in the angle of the sun to the orbital plane. The Night Glider mode aligns the solar arrays parallel to the ground at night to reduce the significant aerodynamic drag at the station's relatively low orbital altitude. The station originally used rechargeable nickel–hydrogen (NiH2) batteries for continuous power during the 35 minutes of every 90-minute orbit in which it is eclipsed by the Earth. The batteries are recharged on the day side of the orbit. They had a 6.5-year lifetime (over 37,000 charge/discharge cycles) and were regularly replaced over the anticipated 20-year life of the station. Starting in 2016, the nickel–hydrogen batteries were replaced by lithium-ion batteries, which are expected to last until the end of the ISS programme. The station's large solar panels generate a high potential voltage difference between the station and the ionosphere. This could cause arcing through insulating surfaces and sputtering of conductive surfaces as ions are accelerated by the spacecraft plasma sheath. To mitigate this, plasma contactor units (PCUs) create current paths between the station and the ambient plasma field. The station's systems and experiments consume a large amount of electrical power, almost all of which is converted to heat. To keep the internal temperature within workable limits, a passive thermal control system (PTCS) is made of external surface materials, insulation such as multi-layer insulation (MLI), and heat pipes. If the PTCS cannot keep up with the heat load, an External Active Thermal Control System (EATCS) maintains the temperature. The EATCS consists of an internal, non-toxic, water coolant loop used to cool and dehumidify the atmosphere, which transfers collected heat into an external liquid ammonia loop. From the heat exchangers, ammonia is pumped into external radiators that emit heat as infrared radiation, and then returns to the station. The EATCS provides cooling for all the US pressurised modules, including "Kibō" and "Columbus", as well as the main power distribution electronics of the S0, S1 and P1 trusses. It can reject up to 70 kW, much more than the 14 kW of the Early External Active Thermal Control System (EEATCS), provided via the Early Ammonia Servicer (EAS), which was launched on STS-105 and installed onto the P6 truss.
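A back-of-the-envelope sketch ties these power figures together; the 80 kW average load below is an assumed value chosen to fall within the quoted 75–90 kW array output, not a figure from this article:

    # Sketch: energy the batteries must supply during one orbital eclipse.
    orbit_min = 90.0
    eclipse_min = 35.0
    avg_load_kw = 80.0           # assumed average station load (within the
                                 # 75-90 kW array production quoted above)

    eclipse_energy_kwh = avg_load_kw * eclipse_min / 60.0   # ~47 kWh per orbit
    print(f"Energy from batteries per eclipse: {eclipse_energy_kwh:.0f} kWh")

    # During the ~55 sunlit minutes the arrays must both run the station and
    # recharge the batteries, which is why peak array output exceeds the load.
    cycles_per_day = 24 * 60 / orbit_min                    # ~16 cycles
    print(f"Charge/discharge cycles per day: {cycles_per_day:.0f}")
    # ~16 cycles/day over 6.5 years gives ~38,000 cycles, consistent with the
    # "over 37,000 charge/discharge cycles" battery lifetime quoted above.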
Radio communications provide telemetry and scientific data links between the station and mission control centres. Radio links are also used during rendezvous and docking procedures, and for audio and video communication between crew members, flight controllers and family members. As a result, the ISS is equipped with internal and external communication systems used for different purposes. The Russian Orbital Segment communicates directly with the ground via the "Lira" antenna mounted to "Zvezda". The "Lira" antenna also has the capability to use the "Luch" data relay satellite system. This system fell into disrepair during the 1990s, and so was not used during the early years of the ISS, although two new "Luch" satellites, "Luch"-5A and "Luch"-5B, were launched in 2011 and 2012 respectively to restore the operational capability of the system. Another Russian communications system is the Voskhod-M, which enables internal telephone communications between "Zvezda", "Zarya", "Pirs", "Poisk", and the USOS, and provides a VHF radio link to ground control centres via antennas on "Zvezda"'s exterior. The US Orbital Segment (USOS) makes use of two separate radio links mounted in the Z1 truss structure: the S band (audio) and Ku band (audio, video and data) systems. These transmissions are routed via the United States Tracking and Data Relay Satellite System (TDRSS) in geostationary orbit, allowing for almost continuous real-time communications with the Christopher C. Kraft Jr. Mission Control Center (MCC-H) in Houston. Data channels for the Canadarm2, the European "Columbus" laboratory and the Japanese "Kibō" modules were originally also routed via the S band and Ku band systems, with the European Data Relay System and a similar Japanese system intended to eventually complement the TDRSS in this role. Communications between modules are carried on an internal wireless network. UHF radio is used by astronauts and cosmonauts conducting EVAs, and by other spacecraft that dock to or undock from the station. Automated spacecraft are fitted with their own communications equipment; the ATV uses a laser attached to the spacecraft and the Proximity Communications Equipment attached to "Zvezda" to accurately dock with the station. The ISS is equipped with about 100 IBM/Lenovo ThinkPad and HP ZBook 15 laptop computers. The laptops have run Windows 95, Windows 2000, Windows XP, Windows 7, Windows 10 and Linux operating systems. Each computer is a commercial off-the-shelf purchase which is then modified for safety and operation, including updates to connectors, cooling and power to accommodate the station's 28 V DC power system and weightless environment. Heat generated by the laptops does not rise, but stagnates around the laptop, so additional forced ventilation is required. Laptops aboard the ISS are connected to the station's wireless LAN via Wi-Fi and Ethernet, which connects to the ground via Ku band. While this originally provided speeds of 10 Mbit/s download and 3 Mbit/s upload from the station, NASA upgraded the system in late August 2019, increasing the speeds to 600 Mbit/s. Laptop hard drives occasionally fail and must be replaced. Other computer hardware failures include instances in 2001, 2007 and 2017; some of these failures have required EVAs to replace computer modules in externally mounted devices.
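To put the 2019 network upgrade mentioned above in perspective, the following sketch compares transfer times at the old and new downlink rates for a hypothetical 1 GB data file; the file size is an invented example, not a figure from this article:

    # Sketch: time to downlink a hypothetical 1 GB file at each rate.
    file_gb = 1.0                      # invented example payload size
    file_bits = file_gb * 8e9          # decimal gigabyte -> bits

    for label, rate_mbps in [("pre-2019 (10 Mbit/s)", 10),
                             ("post-upgrade (600 Mbit/s)", 600)]:
        seconds = file_bits / (rate_mbps * 1e6)
        print(f"{label}: {seconds/60:.1f} minutes")
    # pre-2019: ~13.3 minutes; post-upgrade: ~0.2 minutes (about 13 seconds)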
The operating system used for key station functions is the Debian Linux distribution. The migration from Microsoft Windows was made in May 2013 for reasons of reliability, stability and flexibility. In 2017, an SG100 Cloud Computer was launched to the ISS as part of the OA-7 mission. It was manufactured by NCSIST of Taiwan and designed in collaboration with Academia Sinica and National Central University, under contract for NASA. Each permanent crew is given an expedition number. Expeditions run up to six months, from launch until undocking; an 'increment' covers the same time period, but also includes cargo ships and all activities. Expeditions 1 to 6 consisted of three-person crews. Expeditions 7 to 12 were reduced to the safe minimum of two following the destruction of the Space Shuttle "Columbia". From Expedition 13, the crew gradually increased to six by around 2010. With the planned arrival of crew on US commercial vehicles in the early 2020s, expedition size may be increased to seven crew members, the number the ISS is designed for. Gennady Padalka, member of Expeditions 9, 19/20, 31/32, and 43/44, and Commander of Expedition 11, has spent more time in space than anyone else, a total of 878 days, 11 hours, and 29 minutes. Peggy Whitson has spent the most time in space of any American, totalling 665 days, 22 hours, and 22 minutes during her time on Expeditions 5, 16, and 50/51/52. Travellers who pay for their own passage into space are termed spaceflight participants by Roscosmos and NASA, and are sometimes referred to as "space tourists", a term they generally dislike. All seven spaceflight participants to date have been transported to the ISS on Russian Soyuz spacecraft. When professional crews change over in numbers not divisible by the three seats in a Soyuz, and a short-stay crewmember is not sent, the spare seat is sold by MirCorp through Space Adventures. When the Space Shuttle retired in 2011 and the station's crew size was reduced to six, space tourism was halted, as the partners relied on Russian transport seats for access to the station. Soyuz flight schedules increased after 2013, allowing five Soyuz flights (15 seats) with only two expeditions (12 seats) required. The remaining seats are sold for around US$40 million to members of the public who can pass a medical exam. ESA and NASA criticised private spaceflight at the beginning of the ISS programme, and NASA initially resisted training Dennis Tito, the first person to pay for his own passage to the ISS. Anousheh Ansari became the first Iranian in space and the first self-funded woman to fly to the station. Officials reported that her education and experience made her much more than a tourist, and that her performance in training had been "excellent". Ansari herself dismissed the idea that she was a tourist. She carried out Russian and European studies involving medicine and microbiology during her 10-day stay. The documentary "Space Tourists" follows her journey to the station, where she fulfilled "an age-old dream of man: to leave our planet as a 'normal person' and travel into outer space." In 2008, spaceflight participant Richard Garriott placed a geocache aboard the ISS during his flight. This is currently the only non-terrestrial geocache in existence. At the same time, the Immortality Drive, an electronic record of eight digitised human DNA sequences, was placed aboard the ISS. A wide variety of crewed and uncrewed spacecraft have supported the station's activities. Thirty-seven Space Shuttle ISS flights were conducted before the Shuttle's retirement.
75 Progress resupply spacecraft (including the modified M-MIM2 and M-SO1 module transports), 59 crewed Soyuz spacecraft, five European ATVs, nine Japanese HTV 'Kounotori', 20 SpaceX Dragon, and 12 Orbital ATK Cygnus spacecraft have flown to the ISS. All Russian spacecraft and self-propelled modules are able to rendezvous and dock to the space station without human intervention, using the Kurs radar docking system from over 200 kilometres away. The European ATV uses star sensors and GPS to determine its intercept course. When it catches up, it uses laser equipment to optically recognise "Zvezda", along with the Kurs system for redundancy. Crew supervise these craft, but do not intervene except to send abort commands in emergencies. Progress and ATV supply craft can remain at the ISS for six months, allowing great flexibility in crew time for loading and unloading of supplies and trash. From the initial station programmes, the Russians pursued an automated docking methodology that used the crew in override or monitoring roles. Although the initial development costs were high, the system has become very reliable, with standardisations that provide significant cost benefits in repetitive operations. Soyuz spacecraft used for crew rotation also serve as lifeboats for emergency evacuation; they are replaced every six months, and were used after the "Columbia" disaster to return stranded crew from the ISS. Expeditions require, on average, 2,722 kg of supplies, and by early 2011 crews had consumed a total of around 22,000 meals. Soyuz crew rotation flights and Progress resupply flights visit the station on average two and three times a year, respectively. Other vehicles berth instead of docking. The Japanese H-II Transfer Vehicle parks itself in progressively closer orbits to the station, then awaits 'approach' commands from the crew, until it is close enough for a robotic arm to grapple and berth the vehicle to the USOS. Berthed craft can transfer International Standard Payload Racks. Japanese spacecraft berth for one to two months. The berthing Cygnus and SpaceX Dragon were contracted to fly cargo to the station under phase 1 of the Commercial Resupply Services programme. From 26 February to 7 March 2011, four of the governmental partners (United States, ESA, Japan and Russia) had their spacecraft (NASA Shuttle, ATV, HTV, Progress and Soyuz) docked at the ISS, the only time this has happened to date. On 25 May 2012, SpaceX delivered the first commercial cargo with a Dragon spacecraft. Prior to a ship's docking with the ISS, guidance, navigation and control (GNC) is handed over to the ground control of the ship's country of origin. GNC is set to allow the station to drift in space, rather than fire its thrusters or turn using gyroscopes. The solar panels of the station are turned edge-on to incoming ships, so that residue from their thrusters does not damage the cells. Before its retirement, Shuttle launches were often given priority over Soyuz, with occasional priority given to Soyuz arrivals carrying crew and time-critical cargoes, such as biological experiment materials. The components of the ISS are operated and monitored by their respective space agencies at mission control centres across the globe. Orbital Replacement Units (ORUs) are spare parts that can be readily replaced when a unit either passes its design life or fails. Examples of ORUs are pumps, storage tanks, controller boxes, antennas, and battery units. Some units can be replaced using robotic arms.
Most are stored outside the station, either on small pallets called ExPRESS Logistics Carriers (ELCs) or on larger platforms called External Stowage Platforms, which also hold science experiments. Both kinds of pallets provide electricity for many parts that could be damaged by the cold of space and require heating. The larger logistics carriers also have local area network (LAN) connections for telemetry to connect experiments. A heavy emphasis on stocking the USOS with ORUs occurred around 2011, before the end of the NASA Shuttle programme, as its commercial replacements, Cygnus and Dragon, carry one tenth to one quarter the payload. Unexpected problems and failures have impacted the station's assembly timeline and work schedules, leading to periods of reduced capabilities and, in some cases, could have forced abandonment of the station for safety reasons. Serious problems include an air leak from the USOS in 2004, the venting of fumes from an "Elektron" oxygen generator in 2006, and the failure of the computers in the ROS in 2007 during STS-117, which left the station without thruster, "Elektron", "Vozdukh" and other environmental control system operations. In the latter case, the root cause was found to be condensation inside electrical connectors leading to a short circuit. During STS-120 in 2007, following the relocation of the P6 truss and solar arrays, it was noted during redeployment that the solar array had torn and was not deploying properly. An EVA was carried out by Scott Parazynski, assisted by Douglas Wheelock. Extra precautions were taken to reduce the risk of electric shock, as the repairs were carried out with the solar array exposed to sunlight. The issues with the array were followed in the same year by problems with the starboard Solar Alpha Rotary Joint (SARJ), which rotates the arrays on the starboard side of the station. Excessive vibration and high-current spikes in the array drive motor were noted, resulting in a decision to substantially curtail motion of the starboard SARJ until the cause was understood. Inspections during EVAs on STS-120 and STS-123 showed extensive contamination from metallic shavings and debris in the large drive gear, and confirmed damage to the large metallic bearing surfaces, so the joint was locked to prevent further damage. Repairs to the joints were carried out during STS-126, with lubrication and the replacement of 11 of the 12 trundle bearings on the joint. In September 2008, damage to the S1 radiator was first noticed in Soyuz imagery. The problem was initially not thought to be serious. The imagery showed that the surface of one sub-panel had peeled back from the underlying central structure, possibly because of a micro-meteoroid or debris impact. On 15 May 2009, the damaged radiator panel's ammonia tubing was mechanically shut off from the rest of the cooling system by the computer-controlled closure of a valve. The same valve was then used to vent the ammonia from the damaged panel, eliminating the possibility of an ammonia leak. It is also known that a Service Module thruster cover struck the S1 radiator after being jettisoned during an EVA in 2008, but its effect, if any, has not been determined. Early on 1 August 2010, a failure in cooling Loop A (starboard side), one of two external cooling loops, left the station with only half of its normal cooling capacity and zero redundancy in some systems. The problem appeared to be in the ammonia pump module that circulates the ammonia cooling fluid.
Several subsystems, including two of the four control moment gyroscopes (CMGs), were shut down. Planned operations on the ISS were interrupted while a series of EVAs addressed the cooling system issue. A first EVA on 7 August 2010, to replace the failed pump module, was not fully completed because of an ammonia leak in one of four quick-disconnects. A second EVA on 11 August successfully removed the failed pump module. A third EVA was required to restore Loop A to normal functionality. The USOS's cooling system is largely built by the US company Boeing, which is also the manufacturer of the failed pump. The four Main Bus Switching Units (MBSUs, located in the S0 truss) control the routing of power from the four solar array wings to the rest of the ISS. Each MBSU has two power channels that feed 160 V DC from the arrays to two DC-to-DC power converters (DDCUs) that supply the 124 V power used in the station. In late 2011, MBSU-1 ceased responding to commands or sending data confirming its health. While still routing power correctly, it was scheduled to be swapped out at the next available EVA. A spare MBSU was already on board, but a 30 August 2012 EVA was not completed when a bolt being tightened to finish installation of the spare unit jammed before the electrical connection was secured. The loss of MBSU-1 limited the station to 75% of its normal power capacity (six of the eight power channels), requiring minor limitations in normal operations until the problem could be addressed. On 5 September 2012, in a second six-hour EVA, astronauts Sunita Williams and Akihiko Hoshide successfully replaced MBSU-1 and restored the ISS to 100% power. On 24 December 2013, astronauts installed a new ammonia pump for the station's cooling system. The faulty cooling system had failed earlier in the month, halting many of the station's science experiments. Astronauts had to brave a "mini blizzard" of ammonia while installing the new pump. It was only the second Christmas Eve spacewalk in NASA history. A typical day for the crew begins with a wake-up at 06:00, followed by post-sleep activities and a morning inspection of the station. The crew then eats breakfast and takes part in a daily planning conference with Mission Control before starting work at around 08:10. The first scheduled exercise of the day follows, after which the crew continues work until 13:05. Following a one-hour lunch break, the afternoon consists of more exercise and work before the crew carries out its pre-sleep activities beginning at 19:30, including dinner and a crew conference. The scheduled sleep period begins at 21:30. In general, the crew works ten hours per day on a weekday, and five hours on Saturdays, with the rest of the time their own for relaxation or work catch-up. The time zone used aboard the ISS is Coordinated Universal Time (UTC). The windows are covered during night hours to give the impression of darkness, because the station experiences 16 sunrises and sunsets per day. During visiting Space Shuttle missions, the ISS crew mostly followed the Shuttle's Mission Elapsed Time (MET), a flexible time zone based on the launch time of the Space Shuttle mission. The station provides crew quarters for each member of the expedition's crew, with two 'sleep stations' in "Zvezda" and four more installed in "Harmony". The USOS quarters are private, approximately person-sized soundproof booths. The ROS crew quarters include a small window, but provide less ventilation and soundproofing.
A crew member can sleep in a crew quarter in a tethered sleeping bag, listen to music, use a laptop, and store personal items in a large drawer or in nets attached to the module's walls. The module also provides a reading lamp, a shelf and a desktop. Visiting crews have no allocated sleep module, and attach a sleeping bag to an available space on a wall. It is possible to sleep floating freely through the station, but this is generally avoided because of the possibility of bumping into sensitive equipment. It is important that crew accommodations be well ventilated; otherwise, astronauts can wake up oxygen-deprived and gasping for air, because a bubble of their own exhaled carbon dioxide has formed around their heads. During various station activities and crew rest times, the lights in the ISS can be dimmed or switched off, and their colour temperature adjusted. On the USOS, most of the food aboard is vacuum-sealed in plastic bags; cans are rare because they are heavy and expensive to transport. Preserved food is not highly regarded by the crew, and taste is reduced in microgravity, so efforts are made to make the food more palatable, including using more spices than in regular cooking. The crew looks forward to the arrival of any ships from Earth, as they bring fresh fruit and vegetables. Care is taken that foods do not create crumbs, and liquid condiments are preferred over solid ones to avoid contaminating station equipment. Each crew member has individual food packages and cooks them using the on-board galley. The galley features two food warmers, a refrigerator (added in November 2008), and a water dispenser that provides both heated and unheated water. Drinks are provided as dehydrated powder that is mixed with water before consumption. Drinks and soups are sipped from plastic bags with straws, while solid food is eaten with a knife and fork attached to a tray with magnets to prevent them from floating away. Any food that floats away, including crumbs, must be collected to prevent it from clogging the station's air filters and other equipment. Showers on space stations were introduced in the early 1970s on "Skylab" and "Salyut" 3. By "Salyut" 6, in the early 1980s, the crew complained of the complexity of showering in space, which was a monthly activity. The ISS does not feature a shower; instead, crew members wash using a water jet and wet wipes, with soap dispensed from a toothpaste-tube-like container. Crews are also provided with rinseless shampoo and edible toothpaste to save water. There are two space toilets on the ISS, both of Russian design, located in "Zvezda" and "Tranquility". These Waste and Hygiene Compartments use a fan-driven suction system similar to the Space Shuttle Waste Collection System. Astronauts first fasten themselves to the toilet seat, which is equipped with spring-loaded restraining bars to ensure a good seal. A lever operates a powerful fan and a suction hole slides open: the air stream carries the waste away. Solid waste is collected in individual bags which are stored in an aluminium container. Full containers are transferred to Progress spacecraft for disposal. Liquid waste is evacuated by a hose connected to the front of the toilet, with anatomically correct "urine funnel adapters" attached to the tube so that men and women can use the same toilet. The diverted urine is collected and transferred to the Water Recovery System, where it is recycled into drinking water. On 12 April 2019, NASA reported medical results from the Astronaut Twin Study.
One astronaut twin spent a year in space on the ISS, while the other spent the year on Earth. Several long-lasting changes were observed when the two were compared, including alterations in DNA and in cognition. In November 2019, researchers reported that astronauts had experienced serious blood flow and clot problems while on board the International Space Station, based on a six-month study of 11 healthy astronauts. The results may influence long-term spaceflight, including a mission to the planet Mars, according to the researchers. The ISS is partially protected from the space environment by Earth's magnetic field. From an average distance of about 70,000 km, depending on solar activity, the magnetosphere begins to deflect the solar wind around Earth and the ISS. Solar flares are still a hazard to the crew, who may receive only a few minutes' warning. In 2005, during the initial 'proton storm' of an X-3 class solar flare, the crew of Expedition 10 took shelter in a more heavily shielded part of the ROS designed for this purpose. Subatomic charged particles, primarily protons from cosmic rays and the solar wind, are normally absorbed by Earth's atmosphere. When they interact in sufficient quantity, their effect is visible to the naked eye in a phenomenon called an aurora. Outside Earth's atmosphere, crews are exposed to about 1 millisievert each day, roughly a year's worth of natural exposure on Earth, resulting in a higher risk of cancer. Radiation can penetrate living tissue and damage the DNA and chromosomes of lymphocytes. These cells are central to the immune system, and so any damage to them could contribute to the lowered immunity experienced by astronauts. Radiation has also been linked to a higher incidence of cataracts in astronauts. Protective shielding and drugs may lower risks to an acceptable level. Radiation levels on the ISS are about five times greater than those experienced by airline passengers and crew, as Earth's electromagnetic field provides almost the same level of protection against solar and other radiation in low Earth orbit as in the stratosphere. For example, on a 12-hour flight, an airline passenger would experience 0.1 millisieverts of radiation, a rate of 0.2 millisieverts per day, only one fifth the rate experienced by an astronaut in LEO. Additionally, airline passengers experience this level of radiation for a few hours of flight, while ISS crew are exposed for their whole stay.
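The airline comparison reduces to simple rate arithmetic; the sketch below reproduces it using only the figures quoted in this section, plus an assumed six-month (182-day) expedition length for the final line:

    # Sketch: comparing the daily radiation dose rates quoted above.
    iss_rate_msv_per_day = 1.0

    airline_dose_msv = 0.1                       # per 12-hour flight
    airline_rate_msv_per_day = airline_dose_msv / 0.5   # 12 h = half a day

    ratio = iss_rate_msv_per_day / airline_rate_msv_per_day
    print(f"ISS rate is {ratio:.0f}x the in-flight airline rate")   # 5x

    # Unlike a passenger's few hours aloft, a crew member accumulates dose
    # continuously; over an assumed 182-day expedition:
    print(f"~{iss_rate_msv_per_day * 182:.0f} mSv per six-month stay")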
There is considerable evidence that psychosocial stressors are among the most important impediments to optimal crew morale and performance. Cosmonaut Valery Ryumin wrote in his journal during a particularly difficult period on board the "Salyut" 6 space station: "All the conditions necessary for murder are met if you shut two men in a cabin measuring 18 feet by 20 and leave them together for two months." NASA's interest in psychological stress caused by space travel, initially studied when their crewed missions began, was rekindled when astronauts joined cosmonauts on the Russian space station "Mir". Common sources of stress in early US missions included maintaining high performance under public scrutiny and isolation from peers and family. The latter is still often a cause of stress on the ISS, such as when the mother of NASA astronaut Daniel Tani died in a car accident, and when Michael Fincke was forced to miss the birth of his second child. A study of the longest spaceflight concluded that the first three weeks are a critical period where attention is adversely affected, because of the demand to adjust to the extreme change of environment. ISS crew flights typically last about five to six months. The ISS working environment includes further stress caused by living and working in cramped conditions with people from very different cultures who speak different languages. First-generation space stations had crews who spoke a single language; second- and third-generation stations have crews from many cultures who speak many languages. Astronauts must speak English and Russian, and knowing additional languages is even better. In the absence of gravity, confusion often occurs. Even though there is no up and down in space, some crew members feel as though they are oriented upside down. They may also have difficulty measuring distances. This can cause problems like getting lost inside the space station, pulling switches in the wrong direction, or misjudging the speed of an approaching vehicle during docking. Medical effects of long-term weightlessness include muscle atrophy, deterioration of the skeleton (osteopenia), fluid redistribution, a slowing of the cardiovascular system, decreased production of red blood cells, balance disorders, and a weakening of the immune system. Lesser symptoms include loss of body mass and puffiness of the face. Sleep is regularly disturbed on the ISS because of mission demands, such as incoming or departing ships. Sound levels in the station are unavoidably high. Because the atmosphere is unable to thermosiphon in freefall, fans are required at all times to process the atmosphere, which would otherwise stagnate in the zero-g environment. To prevent some of these adverse physiological effects, the station is equipped with two treadmills (including the COLBERT); the aRED (advanced Resistive Exercise Device), which enables various weightlifting exercises that add muscle but do not compensate for, or raise, astronauts' reduced bone density; and a stationary bicycle. Each astronaut spends at least two hours per day exercising on the equipment. Astronauts use bungee cords to strap themselves to the treadmill. Hazardous moulds which can foul air and water filters may develop aboard space stations. They can produce acids which degrade metal, glass, and rubber. They can also be harmful to the crew's health. Microbiological hazards have led to the development of the LOCAD-PTS, which can identify common bacteria and moulds faster than standard methods of culturing, which may require a sample to be sent back to Earth. To date, 76 types of unregulated micro-organisms have been detected on the ISS. In 2018, after detecting the presence of five "Enterobacter bugandensis" bacterial strains on the ISS, none of them pathogenic to humans, researchers reported that microorganisms on the ISS should be carefully monitored to continue assuring a medically healthy environment for astronauts. Reduced humidity, paint with mould-killing chemicals, and antiseptic solutions can be used to prevent contamination in space stations. All materials used in the ISS are tested for resistance against fungi. In April 2019, NASA reported that a comprehensive study of microorganisms and fungi present on the International Space Station had been conducted. The results can be useful in improving health and safety conditions for astronauts. Space flight is not inherently quiet, with noise levels exceeding acoustic standards as far back as the Apollo missions.
For this reason, NASA and the International Space Station international partners have developed noise control and hearing loss prevention goals as part of the health program for crew members. Specifically, these goals have been the primary focus of the ISS Multilateral Medical Operations Panel (MMOP) Acoustics Subgroup since the first days of ISS assembly and operations. The effort includes contributions from acoustical engineers, audiologists, industrial hygienists, and physicians who comprise the subgroup's membership from NASA, the Russian Space Agency (RSA), the European Space Agency (ESA), the Japanese Aerospace Exploration Agency (JAXA), and the Canadian Space Agency (CSA).

When compared to terrestrial environments, the noise levels incurred by astronauts and cosmonauts on the ISS may seem insignificant, typically occurring at levels that would not be of major concern to the Occupational Safety and Health Administration, rarely reaching 85 dBA. But crew members are exposed to these levels 24 hours a day, seven days a week, with current missions averaging six months in duration. These levels of noise also impose risks to crew health and performance in the form of interference with sleep and communication, as well as reduced alarm audibility.

Over the ISS's more than 19-year history, significant efforts have been made to limit and reduce noise levels on the station. During design and pre-flight activities, members of the Acoustics Subgroup have written acoustic limits and verification requirements, consulted on the design and selection of the quietest available payloads, and then conducted acoustic verification tests prior to launch. During spaceflights, the Acoustics Subgroup has assessed each ISS module's in-flight sound levels, produced by a large number of vehicle and science-experiment noise sources, to assure compliance with strict acoustic standards. The acoustic environment on the ISS changed as additional modules were added during its construction, and as additional spacecraft arrive at the ISS. The Acoustics Subgroup has responded to this dynamic operations schedule by designing and employing acoustic covers, absorptive materials, noise barriers, and vibration isolators to reduce noise levels. Moreover, when pumps, fans, and ventilation systems age and show increased noise levels, the Acoustics Subgroup has guided ISS managers to replace the older, noisier instruments with quiet fan and pump technologies, significantly reducing ambient noise levels.

NASA has adopted the most conservative damage-risk criteria (based on recommendations from the National Institute for Occupational Safety and Health and the World Health Organization) in order to protect all crew members. The MMOP Acoustics Subgroup has adjusted its approach to managing noise risks in this unique environment by applying, or modifying, terrestrial approaches to hearing loss prevention to set these conservative limits. One innovative approach has been NASA's Noise Exposure Estimation Tool (NEET), in which noise exposures are calculated in a task-based approach to determine the need for hearing protection devices (HPDs). Guidance for the use of HPDs, either mandatory or recommended, is then documented in the Noise Hazard Inventory and posted for crew reference during their missions. The Acoustics Subgroup also tracks spacecraft noise exceedances, applies engineering controls, and recommends hearing protective devices to reduce crew noise exposures. Finally, hearing thresholds are monitored on-orbit during missions. 
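The task-based exposure assessment that NEET is described as performing can be illustrated with the standard equivalent-continuous-level (Leq) formula used in terrestrial hearing-conservation work: each task's sound level is converted to a linear energy term, weighted by its duration, summed, and converted back to decibels. The C sketch below is not NASA's tool; the task names, sound levels, and the action level mentioned in the comments are hypothetical values chosen purely for illustration.

```c
#include <math.h>
#include <stdio.h>

/* One crew task: A-weighted sound level (dBA) and duration (hours). */
struct task {
    const char *name;
    double level_dba;
    double hours;
};

/* 24-hour equivalent continuous level:
   Leq = 10 * log10( (1/T) * sum_i( t_i * 10^(L_i/10) ) ), with T = 24 h. */
static double leq_24h(const struct task *tasks, int n)
{
    double energy = 0.0;
    for (int i = 0; i < n; i++)
        energy += tasks[i].hours * pow(10.0, tasks[i].level_dba / 10.0);
    return 10.0 * log10(energy / 24.0);
}

int main(void)
{
    /* Hypothetical task list for one crew day (illustrative values only;
       the durations must sum to 24 hours). */
    struct task day[] = {
        { "exercise near treadmill", 72.0, 2.0 },
        { "laboratory work",         60.0, 8.0 },
        { "sleep in crew quarters",  50.0, 8.0 },
        { "other activities",        62.0, 6.0 },
    };
    double leq = leq_24h(day, (int)(sizeof day / sizeof day[0]));
    printf("24-h Leq = %.1f dBA\n", leq);
    /* A programme might compare leq against a conservative action level
       (say, 65-70 dBA) to decide whether HPD use is recommended. */
    return 0;
}
```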
There have been no persistent mission-related hearing threshold shifts among US Orbital Segment crewmembers (JAXA, CSA, ESA, NASA) during what is approaching 20 years of ISS mission operations, or nearly 175,000 work hours. In 2020, the MMOP Acoustics Subgroup received the Safe-In-Sound Award for Innovation for its combined efforts to mitigate the health effects of noise.

An onboard fire or a toxic gas leak are other potential hazards. Ammonia is used in the external radiators of the station and could potentially leak into the pressurised modules.

The ISS is maintained in a nearly circular orbit at an average altitude of about 400 km, in the centre of the thermosphere, at an inclination of 51.6 degrees to Earth's equator. This orbit was selected because it is the lowest inclination that can be directly reached by Russian Soyuz and Progress spacecraft launched from Baikonur Cosmodrome at 46° N latitude without overflying China or dropping spent rocket stages in inhabited areas. The station travels at an average speed of about 7.66 km/s and completes about 15.5 orbits per day (93 minutes per orbit). The station's altitude was allowed to fall around the time of each NASA shuttle flight to permit heavier loads to be transferred to the station. After the retirement of the shuttle, the nominal orbit of the space station was raised in altitude. Other, more frequent supply ships do not require this adjustment, as they are substantially higher-performance vehicles. Orbital boosting can be performed by the station's two main engines on the "Zvezda" service module, or by Russian or European spacecraft docked to "Zvezda"'s aft port. The Automated Transfer Vehicle was constructed with the possibility of adding a second docking port to its aft end, allowing other craft to dock with and boost the station. It takes approximately two orbits (three hours) for a boost to a higher altitude to be completed. Maintaining ISS altitude uses about 7.5 tonnes of chemical fuel per annum, at an annual cost of about $210 million.

The Russian Orbital Segment contains the Data Management System, which handles Guidance, Navigation and Control (ROS GNC) for the entire station. Initially, "Zarya", the first module of the station, controlled the station until a short time after the Russian service module "Zvezda" docked, at which point control was transferred. "Zvezda" contains the ESA-built DMS-R Data Management System. Using two fault-tolerant computers (FTC), "Zvezda" computes the station's position and orbital trajectory using redundant Earth horizon sensors, solar horizon sensors, and Sun and star trackers. The FTCs each contain three identical processing units working in parallel and provide advanced fault-masking by majority voting. "Zvezda" uses gyroscopes (reaction wheels) and thrusters to turn itself around. Gyroscopes do not require propellant; rather, they use electricity to 'store' momentum in flywheels by turning in the opposite direction to the station's movement. The USOS has its own computer-controlled gyroscopes to handle the extra mass of that section. When the gyroscopes 'saturate', thrusters are used to cancel out the stored momentum. During Expedition 10, an incorrect command was sent to the station's computer, using about 14 kilograms of propellant before the fault was noticed and fixed. When attitude control computers in the ROS and USOS fail to communicate properly, it can result in a rare 'force fight' in which the ROS GNC computer must ignore its USOS counterpart, which itself has no thrusters. 
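A short worked example, assuming nothing beyond standard two-body orbital mechanics, shows why the reboosts described above (and the roughly 1 m/s avoidance burns mentioned below) need only small velocity changes. For a near-circular orbit of semi-major axis $a$ and circular speed $v=\sqrt{\mu/a}$, the specific orbital energy is $E=-\mu/(2a)$, and an impulsive prograde burn adds $\Delta E = v\,\Delta v$, so

$$
\Delta a \;\approx\; \frac{2a^{2}v}{\mu}\,\Delta v \;=\; \frac{2a}{v}\,\Delta v
\;\approx\; \frac{2 \times 6.77\times10^{6}\ \mathrm{m}}{7.66\times10^{3}\ \mathrm{m/s}} \times 1\ \mathrm{m/s}
\;\approx\; 1.8\ \mathrm{km},
$$

consistent with the one-to-two-kilometre rise per burn of the order of 1 m/s quoted below.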
Docked spacecraft can also be used to maintain station attitude, such as for troubleshooting or during the installation of the S3/S4 truss, which provides electrical power and data interfaces for the station's electronics.

The low altitudes at which the ISS orbits are also home to a variety of space debris, including spent rocket stages, defunct satellites, explosion fragments (including materials from anti-satellite weapon tests), paint flakes, slag from solid rocket motors, and coolant released by US-A nuclear-powered satellites. These objects, in addition to natural micrometeoroids, are a significant threat. Objects large enough to destroy the station can be tracked, and so are not as dangerous as smaller debris, which cannot be. Objects too small to be detected by optical and radar instruments, from approximately 1 cm down to microscopic size, number in the trillions. Despite their small size, some of these objects are a threat because of their kinetic energy and direction in relation to the station. Spacewalking crew in spacesuits are also at risk of suit damage and consequent exposure to vacuum.

Ballistic panels, also called micrometeorite shielding, are incorporated into the station to protect pressurised sections and critical systems. The type and thickness of these panels depend on their predicted exposure to damage. The station's shields and structure have different designs on the ROS and the USOS. On the USOS, Whipple shields are used: the US segment modules consist of an inner layer made from 1.5 cm thick aluminium, a 10 cm thick intermediate layer of Kevlar and Nextel, and an outer layer of stainless steel, which causes objects to shatter into a cloud before hitting the hull, thereby spreading the energy of impact. On the ROS, a carbon fibre reinforced polymer honeycomb screen is spaced from the hull, an aluminium honeycomb screen is spaced from that, with a screen-vacuum thermal insulation covering, and glass cloth over the top.

Space debris is tracked remotely from the ground, and the station crew can be notified. If necessary, thrusters on the Russian Orbital Segment can alter the station's orbital altitude, avoiding the debris. These Debris Avoidance Manoeuvres (DAMs) are not uncommon, taking place if computational models show the debris will approach within a certain threat distance. Ten DAMs had been performed by the end of 2009. Usually, an increase in orbital velocity of the order of 1 m/s is used to raise the orbit by one or two kilometres. If necessary, the altitude can also be lowered, although such a manoeuvre wastes propellant. If a threat from orbital debris is identified too late for a DAM to be safely conducted, the station crew close all the hatches aboard the station and retreat into their Soyuz spacecraft, in order to be able to evacuate in the event that the station is seriously damaged by the debris. Such a partial station evacuation occurred on 13 March 2009, 28 June 2011, 24 March 2012 and 16 June 2015.

The ISS is visible to the naked eye as a slow-moving, bright white dot because of reflected sunlight, and can be seen in the hours after sunset and before sunrise, when the station remains sunlit but the ground and sky are dark. The ISS takes about 10 minutes to pass from one horizon to another, and will only be visible for part of that time because it moves into or out of the Earth's shadow. 
Because of the size of its reflective surface area, the ISS is the brightest artificial object in the sky (excluding other satellite flares), with an approximate maximum magnitude of −4 when overhead (similar to Venus). The ISS, like many satellites including the Iridium constellation, can also produce flares of up to 16 times the brightness of Venus as sunlight glints off reflective surfaces. The ISS is also visible in broad daylight, albeit with a great deal more difficulty.

Tools are provided by a number of websites such as Heavens-Above (see "Live viewing" below), as well as smartphone applications, that use orbital data and the observer's longitude and latitude to indicate when the ISS will be visible (weather permitting), where the station will appear to rise, the altitude above the horizon it will reach, and the duration of the pass before the station disappears either by setting below the horizon or entering into Earth's shadow. In November 2012, NASA launched its "Spot the Station" service, which sends people text and email alerts when the station is due to fly above their town. The station is visible from 95% of the inhabited land on Earth, but is not visible from extreme northern or southern latitudes.

Using a telescope-mounted camera to photograph the station is a popular hobby for astronomers, while using a mounted camera to photograph the Earth and stars is a popular hobby for crew. The use of a telescope or binoculars allows viewing of the ISS during daylight hours. Some amateur astronomers also use telescopic lenses to photograph the ISS while it transits the Sun, sometimes doing so during an eclipse (so that the Sun, Moon, and ISS are all positioned approximately in a single line). One example is the 21 August 2017 solar eclipse, during which images of the ISS were captured from one location in Wyoming. Similar images were captured by NASA from a location in Washington. Parisian engineer and astrophotographer Thierry Legault, known for his photos of spaceships transiting the Sun, travelled to Oman in 2011 to photograph the Sun, Moon and space station all lined up. Legault, who received the Marius Jacquemetton award from the Société astronomique de France in 1999, and other hobbyists use websites that predict when the ISS will transit the Sun or Moon and from what location those passes will be visible.

Involving five space programs and fifteen countries, the International Space Station is the most politically and legally complex space exploration programme in history. The 1998 Space Station Intergovernmental Agreement sets forth the primary framework for international cooperation among the parties. A series of subsequent agreements govern other aspects of the station, ranging from jurisdictional issues to a code of conduct among visiting astronauts. According to the Outer Space Treaty, the United States and Russia are legally responsible for all modules they have launched.

Natural orbital decay with random reentry (as with "Skylab"), boosting the station to a higher altitude (which would delay reentry), and a controlled targeted de-orbit to a remote ocean area were considered as ISS disposal options. As of late 2010, the preferred plan was to use a slightly modified Progress spacecraft to de-orbit the ISS. This plan was seen as the simplest and cheapest option, and the one with the highest margin of safety. The Orbital Piloted Assembly and Experiment Complex (OPSEK) was previously intended to be constructed of modules from the Russian Orbital Segment after the ISS is decommissioned. 
The modules under consideration for removal from the current ISS included the Multipurpose Laboratory Module ("Nauka"), at the time planned to be launched in spring 2021, and the other new Russian modules proposed to be attached to "Nauka". These newly launched modules would still be well within their useful lives in 2024. At the end of 2011, the Exploration Gateway Platform concept also proposed using leftover USOS hardware and "Zvezda 2" as a refuelling depot and service station located at one of the Earth-Moon Lagrange points. However, the entire USOS was not designed for disassembly and will be discarded.

In February 2015, Roscosmos announced that it would remain a part of the ISS programme until 2024. Nine months earlier, in response to US sanctions against Russia over the annexation of Crimea, Russian Deputy Prime Minister Dmitry Rogozin had stated that Russia would reject a US request to prolong the orbiting station's use beyond 2020, and would only supply rocket engines to the US for non-military satellite launches. On 28 March 2015, Russian sources announced that Roscosmos and NASA had agreed to collaborate on the development of a replacement for the current ISS. Igor Komarov, the head of Russia's Roscosmos, made the announcement with NASA administrator Charles Bolden at his side. In a statement provided to SpaceNews on 28 March, NASA spokesman David Weaver said the agency appreciated the Russian commitment to extending the ISS, but did not confirm any plans for a future space station. On 30 September 2015, Boeing's contract with NASA as prime contractor for the ISS was extended to 30 September 2020. Part of Boeing's services under the contract relate to extending the station's primary structural hardware past 2020 to the end of 2028. Regarding extending the ISS, on 15 November 2016 General Director Vladimir Solntsev of RSC Energia stated, "Maybe the ISS will receive continued resources. Today we discussed the possibility of using the station until 2028", with discussion to continue under the new presidential administration. There have also been suggestions that the station could be converted to commercial operations after it is retired by government entities. In July 2018, the Space Frontier Act of 2018 was intended to extend operations of the ISS to 2030. This bill was unanimously approved in the Senate, but failed to pass in the U.S. House. In September 2018, the Leading Human Spaceflight Act was introduced with the intent to extend operations of the ISS to 2030, and was confirmed in December 2018.

The ISS has been described as the most expensive single item ever constructed. As of 2010, the total cost was US$150 billion. This includes NASA's budget of $58.7 billion (inflation-unadjusted) for the station from 1985 to 2015 ($72.4 billion in 2010 dollars), Russia's $12 billion, Europe's $5 billion, Japan's $5 billion, Canada's $2 billion, and the cost of 36 shuttle flights to build the station, estimated at $1.4 billion each, or $50.4 billion in total. Assuming 20,000 person-days of use from 2000 to 2015 by two- to six-person crews, each person-day would cost $7.5 million, less than half the inflation-adjusted $19.6 million ($5.5 million before inflation) per person-day of "Skylab".
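The per-person-day figure quoted above is a direct division of the two totals:

$$
\frac{\$150\times10^{9}}{20{,}000\ \text{person-days}} \;=\; \$7.5\ \text{million per person-day}.
$$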
https://en.wikipedia.org/wiki?curid=15043
IA-32 IA-32 (short for "Intel Architecture, 32-bit", sometimes also called i386) is the 32-bit version of the x86 instruction set architecture, designed by Intel and first implemented in the 80386 microprocessor in 1985. IA-32 is the first incarnation of x86 that supports 32-bit computing; as a result, the "IA-32" term may be used as a metonym for all x86 versions that support 32-bit computing. Within various programming language directives, IA-32 is still sometimes referred to as the "i386" architecture. In some other contexts, certain iterations of the IA-32 ISA are labelled i486, i586 and i686, referring to the instruction supersets offered by the 80486, the P5 and the P6 microarchitectures respectively. These updates offered numerous additions alongside the base IA-32 set, such as floating-point capabilities and the MMX extensions.

Intel was historically the largest manufacturer of IA-32 processors, with the second-biggest supplier having been AMD. During the 1990s, VIA, Transmeta and other chip manufacturers also produced IA-32 compatible processors (e.g. the WinChip). In the modern era, Intel still produces IA-32 processors under the Intel Quark microcontroller platform; however, since the 2000s the majority of manufacturers (Intel included) have moved almost exclusively to implementing CPUs based on the 64-bit variant of x86, x86-64. By specification, x86-64 offers legacy operating modes that run the IA-32 ISA for backwards compatibility. Even given the contemporary prevalence of x86-64, as of 2018, IA-32 protected-mode versions of many modern operating systems are still maintained, e.g. Microsoft Windows and the Ubuntu Linux distribution. In spite of IA-32's name (and a potential source of confusion), the 64-bit evolution of x86 that originated out of AMD is not known as "IA-64"; that name instead belongs to Intel's Itanium architecture.

The primary defining characteristic of IA-32 is the availability of 32-bit general-purpose processor registers (for example, EAX and EBX), 32-bit integer arithmetic and logical operations, 32-bit offsets within a segment in protected mode, and the translation of segmented addresses to 32-bit linear addresses. The designers took the opportunity to make other improvements as well. Some of the most significant changes are described below.
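As an illustration of the 32-bit general-purpose registers and integer operations just described, the following minimal C sketch performs a 32-bit ADD in a register via extended inline assembly. It assumes GCC or Clang on an x86 target (the inline-assembly syntax is compiler-specific); the variable names and values are arbitrary.

```c
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint32_t a = 40, b = 2, sum;

    /* 32-bit ADD on a general-purpose register (the compiler may pick
       EAX, EBX, ...).  AT&T syntax: "addl %src, %dst".  The "0"
       constraint places input 'a' in the same register chosen for the
       output 'sum', so the instruction computes sum = a + b. */
    __asm__("addl %2, %0"
            : "=r"(sum)
            : "0"(a), "r"(b));

    printf("sum = %" PRIu32 "\n", sum); /* prints: sum = 42 */
    return 0;
}
```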
https://en.wikipedia.org/wiki?curid=15046
Internalism and externalism Internalism and externalism are two opposing ways of explaining various subjects in several areas of philosophy. These include human motivation, knowledge, justification, meaning, and truth. The distinction arises in many areas of debate with similar but distinct meanings. Internalism is the thesis that no fact about the world can provide reasons for action independently of desires and beliefs. Externalism is the thesis that reasons are to be identified with objective features of the world.

In contemporary moral philosophy, motivational internalism (or moral internalism) is the view that moral convictions (which are not necessarily beliefs, e.g. feelings of moral approval or disapproval) are intrinsically motivating. That is, the motivational internalist believes that there is an internal, necessary connection between one's conviction that X ought to be done and one's motivation to do X. Conversely, the motivational externalist (or moral externalist) claims that there is no necessary internal connection between moral convictions and moral motives. That is, there is no necessary connection between the conviction that X is wrong and the motivational drive not to do X. (The use of these terms has roots in W.D. Falk's (1947) paper "'Ought' and Motivation".)

These views in moral psychology have various implications. In particular, if motivational internalism is true, then an amoralist is unintelligible (and metaphysically impossible). An amoralist is not simply someone who is immoral; rather, it is someone who knows what the moral things to do are, yet is not motivated to do them. Such an agent is unintelligible to the motivational internalist, because moral judgments about the right thing to do have built into them corresponding motivations to do those things that are judged by the agent to be the moral things to do. On the other hand, an amoralist is entirely intelligible to the motivational "externalist", because the motivational externalist thinks that moral judgments about the right thing to do do not by themselves necessitate motivation to do those things that are judged to be the right thing to do; rather, an independent desire, such as the desire to do the right thing, is required (Brink, 2003; Rosati, 2006).

There is also a distinction in ethics and action theory, largely made popular by Bernard Williams (1979, reprinted in 1981), concerning internal and external reasons for action. An "internal reason" is, roughly, something that one has in light of one's own "subjective motivational set": one's own commitments, desires (or wants), goals, etc. On the other hand, an "external reason" is something that one has independently of one's subjective motivational set. For example, suppose that Sally is going to drink a glass of poison, because she wants to commit suicide and believes that she can do so by drinking the poison. Sally has an internal reason to drink the poison, because she wants to commit suicide. However, one might say that she has an external reason not to drink the poison because, even though she wants to die, one ought not to kill oneself no matter what, regardless of whether one wants to die. Some philosophers embrace the existence of both kinds of reason, while others deny the existence of one or the other. For example, Bernard Williams (1981) argues that there are really only internal reasons for action. Such a view is called "internalism about reasons" (or "reasons internalism"). 
"Externalism about reasons" (or "reasons externalism") is the denial of reasons internalism. It is the view that there are external reasons for action; that is, there are reasons for action that one can have even if the action is not part of one's subjective motivational set. Consider the following situation. Suppose that it's against the moral law to steal from the poor, and Sasha knows this. However, Sasha doesn't desire to follow the moral law, and there is currently a poor person next to him. Is it intelligible to say that Sasha has a reason to follow the moral law right now (to not steal from the poor person next to him), even though he doesn't care to do so? The reasons externalist answers in the affirmative ("Yes, Sasha has a reason not to steal from that poor person."), since he believes that one can have reasons for action even if one does not have the relevant desire. Conversely, the reasons internalist answers the question in the negative ("No, Sasha does not have a reason not to steal from that poor person, though others might."). The reasons internalist claims that external reasons are unintelligible; one has a reason for action only if one has the relevant desire (that is, only internal reasons can be reasons for action). The reasons internalist claims the following: the moral facts are a reason "for Sasha's action" not to steal from the poor person next to him only if he currently "wants" to follow the moral law (or if not stealing from the poor person is a way to satisfy his other current goals—that is, part of what Williams calls his "subjective motivational set"). In short, the reasoning behind reasons internalism, according to Williams, is that reasons for action must be able to explain one's action; and only internal reasons can do this. Generally speaking, internalist conceptions of epistemic justification require that one's justification for a belief be internal to the believer in some way. Two main varieties of epistemic internalism about justification are access internalism and ontological internalism. Access internalists require that a believer must have internal access to the justifier(s) of her belief "p" in order to be justified in believing "p". For the access internalist, justification amounts to something like the believer being aware (or capable of being aware) of certain facts that make her belief in "p" rational, or her being able to give reasons for her belief in "p". At minimum, access internalism requires that the believer have some kind of reflective access or awareness to whatever justifies her belief. Ontological internalism is the view that justification for a belief is established by one's mental states. Ontological internalism can be distinct from access internalism, but the two are often thought to go together since we are generally considered to be capable of having reflective access to mental states. One popular argument for internalism is known as the 'new evil demon problem'. The new evil demon problem indirectly supports internalism by challenging externalist views of justification, particularly reliabilism. The argument asks us to imagine a subject with beliefs and experiences identical to ours, but the subject is being systematically deceived by a malicious Cartesian demon so that all their beliefs turn out false. In spite of the subject's unfortunate deception, the argument goes, we do not think this subject ceases to be rational in taking things to be as they appear as we do. 
After all, it is possible that we could be radically deceived in the same way, yet we are still justified in holding most of our beliefs in spite of this possibility. Since reliabilism maintains that one's beliefs are justified via reliable belief-forming processes (where reliable means yielding true beliefs), the subject in the evil demon scenario would not likely have any justified beliefs according to reliabilism, because all of their beliefs would be false. Since this result is supposed to clash with our intuition that the subject is justified in their beliefs in spite of being systematically deceived, some take the new evil demon problem as a reason for rejecting externalist views of justification.

Externalist views of justification emerged in epistemology during the late 20th century. Externalist conceptions of justification assert that facts external to the believer can serve as the justification for a belief. According to the externalist, a believer need not have any internal access or cognitive grasp of any reasons or facts which make their belief justified. The externalist's assessment of justification can be contrasted with access internalism, which demands that the believer have internal reflective access to reasons or facts which corroborate their belief in order to be justified in holding it. Externalism, on the other hand, maintains that the justification for someone's belief can come from facts that are entirely external to the agent's subjective awareness.

Alvin Goldman, one of the most well-known proponents of externalism in epistemology, is known for developing a popular form of externalism called reliabilism. In his paper "What is Justified Belief?", Goldman characterizes the reliabilist conception of justification as follows: "If S's believing "p" at "t" results from a reliable cognitive belief-forming process (or set of processes), then S's belief in "p" at "t" is justified." Goldman notes that a reliable belief-forming process is one which generally produces true beliefs. A unique consequence of reliabilism (and other forms of externalism) is that one can have a justified belief without knowing one is justified (this is not possible under most forms of epistemic internalism). In addition, we do not yet know which cognitive processes are in fact reliable, so anyone who embraces reliabilism must concede that we do not always know whether some of our beliefs are justified (even though there is a fact of the matter).

In responding to skepticism, Hilary Putnam (1982) claims that semantic externalism yields "an argument we can give that shows we are not brains in a vat" (BIV; see also DeRose, 1999). If semantic externalism is true, then the meaning of a word or sentence is not wholly determined by what individuals think those words mean. For example, semantic externalists maintain that the word "water" referred to the substance whose chemical composition is H2O even before scientists had discovered that chemical composition. The fact that the substance out in the world we were calling "water" actually had that composition at least partially determined the meaning of the word. One way to use this in a response to skepticism is to apply the same strategy to the terms used in a skeptical argument (DeRose, 1999). To clarify how this argument is supposed to work, imagine that there is a brain in a vat, and that a whole world is being simulated for it. Call the individual who is being deceived "Steve." 
When Steve is given an experience of walking through a park, semantic externalism allows for his thought, "I am walking through a park", to be true so long as the simulated reality is one in which he is walking through a park. Similarly, what it takes for his thought, "I am a brain in a vat", to be true is for the simulated reality to be one where he is a brain in a vat. But in the simulated reality, he is not a brain in a vat. Apart from disputes over the success of the argument or the plausibility of the specific type of semantic externalism required for it to work, there is a question as to what is gained by defeating the skeptical worry with this strategy. Skeptics can give new skeptical cases that wouldn't be subject to the same response (e.g., one where the person was very recently turned into a brain in a vat, so that their words "brain" and "vat" still pick out real brains and vats, rather than simulated ones). Further, if even brains in vats can correctly believe "I am not a brain in a vat," then the skeptic can still press us on how we know we are not in that situation (though the externalist will point out that it may be difficult for the skeptic to describe that situation). Another attempt to use externalism to refute skepticism is made by Brueckner and Warfield. It involves the claim that our thoughts are "about" things, unlike a BIV's thoughts, which cannot be "about" things (DeRose, 1999).

Semantic externalism comes in two varieties, depending on whether meaning is construed cognitively or linguistically. On a cognitive construal, externalism is the thesis that what concepts (or contents) are available to a thinker is determined by their environment, or their relation to their environment. On a linguistic construal, externalism is the thesis that the meaning of a word is environmentally determined. Likewise, one can construe semantic internalism in two ways, as a denial of either of these two theses. Externalism and internalism in semantics are closely tied to the distinction in philosophy of mind concerning mental content, since the contents of one's thoughts (specifically, intentional mental states) are usually taken to be semantic objects that are truth-evaluable.

Within the context of the philosophy of mind, externalism is the theory that the contents of at least some of one's mental states are dependent in part on their relationship to the external world or one's environment. The traditional discussion of externalism was centered on the semantic aspect of mental content, but this is by no means the only meaning of externalism now. Externalism is now a broad collection of philosophical views considering all aspects of mental content and activity. There are various forms of externalism that consider either the content or the vehicles of the mind, or both. Furthermore, externalism could be limited to cognition, or it could address broader issues of consciousness.

As to the traditional discussion of semantic externalism (often dubbed "content externalism"), some mental states, such as believing that water is wet, and fearing that the Queen has been insulted, have contents we can capture using 'that' clauses. Content externalists often appeal to observations found as early as Hilary Putnam's seminal essay, "The Meaning of 'Meaning'" (1975). Putnam stated that we can easily imagine pairs of individuals that are microphysical duplicates embedded in different surroundings who use the same words but mean different things when using them. 
For example, suppose that Ike and Tina's mothers are identical twins and that Ike and Tina are raised in isolation from one another in indistinguishable environments. When Ike says, "I want my mommy," he expresses a want satisfied only if he is brought to his mommy. If we brought Tina's mommy, Ike might not notice the difference, but he doesn't get what he wants. It seems that what he wants and what he says when he says, "I want my mommy," will be different from what Tina wants and what she says she wants when she says, "I want my mommy." Externalists say that if we assume competent speakers know what they think, and say what they think, the difference in what these two speakers mean corresponds to a difference in the thoughts of the two speakers that is not (necessarily) reflected by a difference in the internal make-up of the speakers or thinkers. They urge us to move from externalism about meaning of the sort Putnam defended to externalism about contentful states of mind. The example pertains to singular terms, but has been extended to cover kind terms as well, such as natural kinds (e.g., 'water') and kinds of artifacts (e.g., 'espresso maker'). There is no general agreement amongst content externalists as to the scope of the thesis.

Philosophers now tend to distinguish between "wide content" (externalist mental content) and "narrow content" (anti-externalist mental content). Some, then, align themselves as endorsing one view of content exclusively, or both. For example, Jerry Fodor (1980) argues for narrow content (although he came to reject that view in 1995), while David Chalmers (2002) argues for a two-dimensional semantics according to which the contents of mental states can have both wide and narrow content.

Critics of the view have questioned the original thought experiments, saying that the lessons that Putnam and later writers such as Tyler Burge (1979, 1982) have urged us to draw can be resisted. Frank Jackson and John Searle, for example, have defended internalist accounts of thought content according to which the contents of our thoughts are fixed by descriptions that pick out the individuals and kinds that our thoughts intuitively pertain to: the sorts of things that we take them to be about. In the Ike/Tina example, one might agree that Ike's thoughts pertain to Ike's mother and that Tina's thoughts pertain to Tina's, but insist that this is because Ike thinks of that woman as his mother, and we can capture this by saying that he thinks of her as 'the mother of the speaker'. This descriptive phrase will pick out one unique woman. Externalists claim this is implausible, as we would have to ascribe to Ike knowledge he wouldn't need to successfully think about or refer to his mother.

Critics have also claimed that content externalists are committed to epistemological absurdities. Suppose that a speaker can have the concept of water we do only if the speaker lives in a world that contains H2O. It seems this speaker could know a priori that they think that water is wet. This is the thesis of privileged access. It also seems that they could know on the basis of simple thought experiments that they can only think that water is wet if they live in a world that contains water. What would prevent the speaker from putting these together and coming to know a priori that the world contains water? 
If we should say that no one could possibly know whether water exists a priori, it seems either we cannot know content externalism to be true on the basis of thought experiments, or we cannot know what we are thinking without first looking into the world to see what it is like. As mentioned, content externalism (limited to the semantic aspects) is only one among many options offered by externalism at large.

Internalism in the historiography of science claims that science is completely distinct from social influences, and that pure natural science can exist in any society and at any time, given the intellectual capacity. Imre Lakatos is a notable proponent of historiographical internalism. Externalism in the historiography of science is the view that the history of science is due to its social context; the socio-political climate and the surrounding economy determine scientific progress. Thomas Kuhn is a notable proponent of historiographical externalism.
https://en.wikipedia.org/wiki?curid=15047
Isolationism Isolationism is a category of foreign policies institutionalized by leaders who assert that their nations' best interests are best served by keeping the affairs of other countries at a distance. One possible motivation for limiting international involvement is to avoid being drawn into dangerous and otherwise undesirable conflicts. There may also be a perceived benefit from avoiding international trade agreements or other mutual assistance pacts.

Before 1999, Bhutan had banned television and the Internet in order to preserve its culture, environment, and identity. Eventually, Jigme Singye Wangchuck lifted the ban on television and the Internet. His son, Jigme Khesar Namgyel Wangchuck, became Druk Gyalpo of Bhutan, which helped forge the Bhutanese democracy. Bhutan has subsequently undergone a transition from an absolute monarchy to a constitutional monarchy with a multi-party democracy. The development of Bhutanese democracy has been marked by the active encouragement and participation of reigning Bhutanese monarchs since the 1950s, beginning with legal reforms such as the abolition of slavery and culminating in the enactment of Bhutan's Constitution.

After Zheng He's voyages in the 15th century, the foreign policy of the Ming dynasty in China became increasingly isolationist. The Hongwu Emperor was the first to propose the policy of banning all maritime shipping, in 1390. The Qing dynasty that came after the Ming dynasty often continued the Ming dynasty's isolationist policies. Wokou, which literally translates to "Japanese pirates" or "dwarf pirates", were pirates who raided the coastlines of China, Japan, and Korea, and were one of the key concerns behind the ban, although the maritime ban was not absolute.

From 1641 to 1853, the Tokugawa shogunate of Japan enforced a policy which it called "kaikin". The policy prohibited foreign contact with most outside countries. The commonly held idea that Japan was entirely closed, however, is misleading. In fact, Japan maintained limited-scale trade and diplomatic relations with China, Korea and the Ryukyu Islands, as well as with the Dutch Republic, the only Western trading partner of Japan for much of the period. The culture of Japan developed with limited influence from the outside world, and Japan had one of the longest stretches of peace in history. During this period, Japan developed thriving cities and castle towns, increasing commodification of agriculture and domestic trade, wage labor, and increasing literacy with a concomitant print culture, laying the groundwork for modernization even as the shogunate itself grew weak.

In 1863, Gojong took the throne of the Joseon Dynasty when he was a child. His father, the regent Heungseon Daewongun, ruled for him until Gojong reached adulthood. During the mid-1860s the Daewongun was the main proponent of isolationism and the principal instrument of the persecution of both native and foreign Catholics. Following the division of the peninsula after independence from Japan in 1945–48, Kim Il-sung inaugurated an isolationist totalitarian regime in the North, which has been continued by his son and grandson to the present day. North Korea is often referred to as "The Hermit Kingdom".

Just after independence was achieved, Paraguay was governed from 1814 by the dictator José Gaspar Rodríguez de Francia, who closed the country's borders and prohibited trade or any relations with the outside world until his death in 1840. 
The Spanish settlers who had arrived just before independence had to intermarry with either the old colonists or with the native Guarani, in order to create a single Paraguayan people. Francia had a particular dislike of foreigners, and any who came to Paraguay during his rule (which would have been very difficult) were not allowed to leave for the rest of their lives. An independent character, he hated European influences and the Catholic Church, turning church courtyards into artillery parks and confession boxes into border sentry posts, in an attempt to keep foreigners at bay.

While some scholars, such as Robert J. Art, believe that the United States has an isolationist history, other scholars dispute this by describing the United States as following a strategy of unilateralism or non-interventionism instead. Robert Art makes his argument in "A Grand Strategy for America" (2003). Books that have made the argument that the United States followed unilateralism instead of isolationism include Walter A. McDougall's "Promised Land, Crusader State" (1997), John Lewis Gaddis's "Surprise, Security, and the American Experience" (2004), and Bradley F. Podliska's "Acting Alone" (2010). Both sides claim policy prescriptions from George Washington's Farewell Address as evidence for their argument. Bear F. Braumoeller argues that even the best case for isolationism, the United States in the interwar period, has been widely misunderstood, and that Americans proved willing to fight as soon as they believed a genuine threat existed.

Events during and after the Revolution related to the treaty of alliance with France, as well as difficulties arising over the neutrality policy pursued during the French revolutionary wars and the Napoleonic wars, encouraged another perspective. A desire for separateness and unilateral freedom of action merged with national pride and a sense of continental safety to foster the policy of isolation. Although the United States maintained diplomatic relations and economic contacts abroad, it sought to restrict these as narrowly as possible in order to retain its independence. The Department of State continually rejected proposals for joint cooperation, a policy made explicit in the Monroe Doctrine's emphasis on unilateral action. Not until 1863 did an American delegate attend an international conference.
https://en.wikipedia.org/wiki?curid=15048
Indianapolis Colts The Indianapolis Colts are an American football team based in Indianapolis. The Colts compete in the National Football League (NFL) as a member club of the league's American Football Conference (AFC) South division. Since the 2008 season, the Colts have played their games in Lucas Oil Stadium. Previously, the team had played for over two decades (1984–2007) at the RCA Dome. Since 1987, the Colts have served as the host team for the NFL Scouting Combine.

The Colts have competed as a member club of the NFL since their founding in Baltimore in 1953. They were one of three NFL teams to join those of the American Football League (AFL) to form the AFC following the 1970 merger. While in Baltimore, the team advanced to the playoffs 10 times and won three NFL Championship games, in 1958, 1959, and 1968. The Colts played in two Super Bowl games while they were based in Baltimore, losing to the New York Jets in Super Bowl III and defeating the Dallas Cowboys in Super Bowl V. The Colts relocated to Indianapolis in 1984 and have since appeared in the playoffs 16 times, won two conference championships, and won one Super Bowl, in which they defeated the Chicago Bears in Super Bowl XLI.

Following World War II, a competing professional football league known as the All-America Football Conference (AAFC) was organized and began play in the 1946 season. In its second year, the franchise assigned to the Miami Seahawks was relocated to Maryland's major commercial and manufacturing city of Baltimore. After a fan contest the team was renamed the Baltimore Colts, using the team colors of silver and green. The Colts played the next three seasons in the AAFC, until the AAFC agreed to merge with the older National Football League and the NFL was reorganized. The Baltimore Colts were one of the three former AAFC powerhouse teams to join the NFL at that time, the others being the San Francisco 49ers and the Cleveland Browns. This Colts team, now in the "big league" of professional American football for the first time, although with shaky financing and ownership, played only the 1950 season of the NFL and was then disbanded.

In 1953, a new Baltimore-based group, heavily supported by the city's municipal government, backed by a large subscription base of fan-purchased season tickets, and led by local owner Carroll Rosenbloom, won the rights to a new Baltimore NFL franchise. Rosenbloom was awarded the remains of the former Dallas Texans, a team with a long and winding history: it had started as the Boston Yanks in 1944, later merging with the Brooklyn Tigers, a franchise previously known as the Dayton Triangles, one of the original NFL teams, established in 1913 even before the league itself. (The league began with the organization in 1920 of the original American Professional Football Conference (APFC), soon renamed the American Professional Football Association (APFA); two years later, in 1922, it was renamed a second time, permanently, as the National Football League.) That team later became the New York Yanks in 1950, and many of the players from the New York Yankees of the former competing All-America Football Conference (1946–49) were added to the roster for the newly merged league's 1950 season. The Yanks, having competed in New York for two seasons, then moved to Dallas, Texas, after the 1951 season, but the Texans played their final two "home" games of the 1952 season as a so-called "road team" at the Rubber Bowl stadium in Akron, Ohio. 
The NFL considers the Texans and the Colts to be separate teams, although many of the earlier teams shared the same colors of blue and white. Thus, the Indianapolis Colts are legally considered to be a 1953 expansion team. The current version of the Colts played their first season in Baltimore in 1953, compiling a 3–9 record under first-year head coach Keith Molesworth. The franchise struggled during its first few years in Baltimore, not achieving its first winning record until the 1957 season. However, under head coach Weeb Ewbank and the leadership of quarterback Johnny Unitas, the Colts went on to a 9–3 record during the 1958 season and reached the NFL Championship Game for the first time in their history by winning the NFL Western Conference.

The Colts faced the New York Giants in the 1958 NFL Championship Game, which is considered to be among the greatest contests in professional football history. The Colts defeated the Giants 23–17 in the first game ever to utilize the overtime rule, a game seen by 45 million people. Following the Colts' first NFL championship, the team posted a 9–3 record during the 1959 season and once again defeated the Giants in the NFL Championship Game to claim their second title in back-to-back fashion.

Following the two championships in 1958 and 1959, the Colts did not return to the NFL Championship Game for four seasons and replaced head coach Ewbank with the young Don Shula in 1963. In Shula's second season the Colts compiled a 12–2 record, but lost to the Cleveland Browns in the NFL Championship Game. However, in 1968, with the continued leadership of Unitas and Shula, the Colts went on to win their third NFL Championship and made an appearance in Super Bowl III. Leading up to the Super Bowl, and following the 34–0 trouncing of the Cleveland Browns in the NFL Championship Game, many were calling the 1968 Colts one of the greatest pro football teams of all time, and they were favored by 18 points against their counterparts from the American Football League, the New York Jets. The Colts, however, were stunned by the Jets, who won the game 16–7 in the first Super Bowl victory for the young AFL. The result of the game surprised many in the sports media, as Joe Namath and Matt Snell led the Jets to the Super Bowl victory under head coach Weeb Ewbank, who had previously won two NFL Championships with the Colts.

Rosenbloom of the Colts, Art Modell of the Browns, and Art Rooney of the Pittsburgh Steelers agreed to have their teams join the ten AFL teams in the American Football Conference as part of the AFL–NFL merger in 1970. The Colts found immediate success in the new league, as new head coach Don McCafferty led the 1970 team to an 11–2–1 regular season record, winning the AFC East title. In the first round of the NFL playoffs, the Colts beat the Cincinnati Bengals 17–0; one week later, in the first-ever AFC Championship Game, they beat the Oakland Raiders 27–17. Baltimore went on to win the first post-merger Super Bowl (Super Bowl V), defeating the National Football Conference's Dallas Cowboys 16–13 on a Jim O'Brien field goal with five seconds left to play. The victory gave the Colts their fourth NFL championship and first Super Bowl victory. Following the championship, the Colts returned to the playoffs in 1971 and defeated the Cleveland Browns in the first round, but lost to the Miami Dolphins in the AFC Championship Game. 
Citing friction with the City of Baltimore and the local press, Rosenbloom traded the Colts franchise to Robert Irsay on July 13, 1972, receiving the Los Angeles Rams in return. Under the new ownership, the Colts did not reach the postseason for three consecutive seasons after 1971, and after the 1972 season, starting quarterback and franchise legend Johnny Unitas was traded to the San Diego Chargers. Following Unitas' departure, the Colts made the playoffs three consecutive seasons from 1975 to 1977, losing in the divisional round each time. The Colts' 1977 playoff loss in double overtime against the Oakland Raiders was notable as the last playoff game for the Colts in Baltimore and is also known for the Ghost to the Post play. These consecutive division-winning teams featured 1976 NFL Most Valuable Player Bert Jones at quarterback and an outstanding defensive line, nicknamed the "Sack Pack".

Following the 1970s success, the team endured nine consecutive losing seasons beginning in 1978. In 1981, the Colts defense allowed an NFL-record 533 points, set an all-time record for fewest sacks (13), and also set a modern record for fewest punt returns (12). The following year the offense collapsed, including a game against the Buffalo Bills in which the Colts' offense did not cross midfield the entire game. The Colts finished 0–8–1 in the strike-shortened 1982 season, thereby earning the right to select Stanford quarterback John Elway with the first overall pick. Elway, however, refused to play for Baltimore and, using his leverage as a draftee of the New York Yankees baseball club, forced a trade to Denver. Behind an improved defense the team finished 7–9 in 1983, but that would be their last season in Baltimore.

The Baltimore Colts played their final home game in Baltimore on December 18, 1983, against the then-Houston Oilers. Irsay continued to request upgrades to Memorial Stadium or the construction of a new stadium. As a result of the poor performance on the field and the stadium issues, fan attendance and team revenue continued to dwindle. City officials were precluded from using taxpayer funds for the building of a new stadium, and the modest proposals that were offered by the city were not acceptable to either the Colts or the city's MLB franchise, the Orioles. However, all sides continued to negotiate. Relations between Irsay and the city of Baltimore deteriorated. Although Irsay assured fans that his ultimate desire was to stay in Baltimore, he nevertheless began discussions with several other cities willing to build new football stadiums, eventually narrowing the list to two: Phoenix and Indianapolis.

Under the administration of mayors Richard Lugar and then William Hudnut, Indianapolis had undertaken an ambitious effort to reinvent itself as a 'Great American City'. The Hoosier Dome, which was later renamed the RCA Dome, had been built specifically for, and was ready to host, an NFL expansion team. Meanwhile, in Baltimore, the situation worsened. The Maryland General Assembly intervened when a bill was introduced to give the city of Baltimore the right to seize ownership of the team by eminent domain. As a result, Irsay began serious negotiations with Indianapolis Mayor William Hudnut in order to move the team before the Maryland legislature could pass the law. Indianapolis offered loans as well as the Hoosier Dome and a training complex. 
After the deal was reached, moving vans from Indianapolis-based Mayflower Transit were dispatched overnight to the team's Maryland training complex, arriving on the morning of March 29, 1984. Once in Maryland, workers loaded all of the team's belongings, and by midday the trucks departed for Indianapolis, leaving nothing of the Colts organization that could be seized by Baltimore. The Baltimore Colts' Marching Band had to scramble to retrieve its equipment and uniforms before they were shipped to Indianapolis as well. The move triggered a flurry of legal activity that ended when representatives of the city of Baltimore and the Colts organization reached a settlement in March 1986. Under the agreement, all lawsuits regarding the relocation were dismissed, and the Colts agreed to endorse a new NFL team for Baltimore.

Upon the Colts' arrival in Indianapolis, over 143,000 requests for season tickets were received in just two weeks. The move to Indianapolis, however, did not change the recent fortunes of the Colts, with the team appearing in the postseason only once in its first 11 seasons in Indianapolis. During the 1984 season, the first in Indianapolis, the team went 4–12 and recorded the lowest offensive yardage in the league that season. The 1985 and 1986 teams combined for only eight wins, including an 0–13 start in 1986 which prompted the firing of head coach Rod Dowhower, who was replaced by Ron Meyer. The Colts did, however, acquire eventual Hall of Fame running back Eric Dickerson in a trade during the 1987 season, and went on to compile a 9–6 record, thereby winning the AFC East and advancing to the postseason for the first time in Indianapolis; they lost that playoff game to the Cleveland Browns.

After 1987, the Colts did not see any real success for quite some time, missing the postseason for seven consecutive seasons. The struggles came to a climax in 1991, when the team went 1–15 and was just one point away from the first "imperfect" season in the history of a 16-game schedule. The season resulted in the firing of head coach Ron Meyer and the return of former head coach Ted Marchibroda to the organization in 1992; he had coached the team from 1975 to 1979. The team continued to struggle under Marchibroda and Jim Irsay, son of Robert Irsay and general manager at the time. In 1994, Robert Irsay brought in Bill Tobin as general manager of the Indianapolis Colts. Under Tobin, the Colts drafted running back Marshall Faulk with the second overall pick in the 1994 draft and acquired quarterback Jim Harbaugh as well. These moves, along with others, saw the Colts begin to turn their fortunes around, with playoff appearances in 1995 and 1996. The Colts won their first postseason game as the Indianapolis Colts in 1995 and advanced to the AFC Championship Game against the Pittsburgh Steelers, coming just a Hail Mary pass reception away from a trip to Super Bowl XXX. Marchibroda retired following the 1995 season and was replaced by Lindy Infante in 1996.

After two consecutive playoff appearances, the Colts regressed and went 3–13 during the 1997 season. Compounding the disappointing season, the principal owner and the man who had moved the team to Indianapolis, Robert Irsay, died in January 1997 after years of declining health. Jim Irsay, Robert Irsay's son, assumed the role of principal owner following his father's death and quickly began to change the organization. 
One year after assuming control from his father, Jim Irsay began to reshape the Colts, firing head coach Lindy Infante and replacing general manager Bill Tobin with Bill Polian in 1997 as the team prepared to build through its number one overall pick in the 1998 NFL Draft. Polian in turn hired Jim Mora as the next head coach and drafted Tennessee Volunteers quarterback Peyton Manning, the son of New Orleans Saints legend Archie Manning, with the first overall pick. The team and Manning struggled during the 1998 season, winning only three games; Manning threw a league-high 28 interceptions. However, Manning also passed for 3,739 yards, threw 26 touchdown passes, and was named to the NFL All-Rookie First Team. The Colts began to improve towards the end of the 1998 season and showed continued growth in 1999. Indianapolis drafted Edgerrin James in 1999 and continued to improve its roster heading into the upcoming season. The Colts went 13–3 in 1999 and finished first in the AFC East, their first division title since 1987. Indianapolis lost to the eventual AFC champion Tennessee Titans in the divisional playoffs. The 2000 and 2001 Colts teams were considerably less successful than the 1999 team, and pressure began to mount on team administration and the coaching staff following a 6–10 season in 2001. Head coach Jim Mora was fired at the end of the season and was replaced by former Tampa Bay Buccaneers head coach Tony Dungy. Dungy and the team quickly changed the atmosphere of the organization and returned to the playoffs in 2002 with a 10–6 record. The Colts also returned to the playoffs in 2003 and 2004 with 12–4 records and AFC South championships. The Colts lost to the New England Patriots and Tom Brady in the 2003 AFC Championship Game and in the 2004 divisional playoffs, beginning a rivalry between the two teams, and between Manning and Brady. Following two consecutive playoff losses to the Patriots, the Colts began the 2005 season with a 13–0 record, including a regular season victory over the Patriots, the first of the Manning era. During the season, Manning and Marvin Harrison broke the NFL record for touchdowns by a quarterback-and-receiver tandem. Indianapolis finished the 2005 season at 14–2, the best record in the league that year and the franchise's best in a 16-game season, but lost to the Pittsburgh Steelers in the divisional round, a disappointing end to the season. Indianapolis entered the 2006 season with a veteran quarterback, receivers, and defenders, and chose running back Joseph Addai in the 2006 draft. As in the previous season, the Colts began the season undefeated, going 9–0 before losing their first game, to the Dallas Cowboys. Indianapolis finished the season with a 12–4 record and entered the playoffs for the fifth consecutive year, this time as the number three seed in the AFC. The Colts won their first two playoff games, against the Kansas City Chiefs and the Baltimore Ravens, to return to the AFC Championship Game for the first time since the 2003 playoffs, where they faced their rivals, the New England Patriots. In a classic game, the Colts overcame a 21–3 first-half deficit to win 38–34 and earn a trip to Super Bowl XLI, the franchise's first Super Bowl appearance since 1970 and its first as Indianapolis. 
The Colts faced the Chicago Bears in the Super Bowl, winning 29–17 and giving Manning, Polian, Irsay, and Dungy, as well as the city of Indianapolis, their first Super Bowl title. Following their Super Bowl championship, the Colts compiled a 13–3 record during the 2007 season; they lost to the San Diego Chargers in the divisional playoffs, in what was the final game the Colts played at the RCA Dome before moving into Lucas Oil Stadium in 2008. The 2008 season began with Manning sidelined for most of the preseason due to surgery. Indianapolis began the season with a 3–4 record, but then won nine consecutive games to finish 12–4 and make it into the playoffs as a wild card team, eventually losing to the Chargers in the wild card round. Following the season, Tony Dungy announced his retirement after seven seasons as head coach, having compiled an overall record of 92–33 with the team. Jim Caldwell was hired as Dungy's successor and led the team during the 2009 season. The Colts started the season 14–0 before controversially benching their starters during the last two games, finishing with an overall record of 14–2. For the second time in the Manning era, the Colts entered the playoffs with the best record in the AFC. They managed victories over the Baltimore Ravens and New York Jets to advance to Super Bowl XLIV against the New Orleans Saints, but lost to the Saints 31–17 to end the season in disappointment. At the completion of the 2009 season, the Colts had finished the first decade of the 2000s (2000–2009) with the most regular season wins (115) and the highest winning percentage (.719) of any team in the NFL during that span. The 2010 team compiled a 10–6 record, the first time the Colts had not won 12 games since 2002, and lost to the New York Jets in the wild card round of the playoffs. The loss to the Jets was the last game for Peyton Manning as a Colt. After missing the preseason, Manning was ruled out for the Colts' opening game in Houston and eventually the entire 2011 season. Taking over as starter was veteran quarterback Kerry Collins, who had been signed after dissatisfaction with backup quarterbacks Curtis Painter and Dan Orlovsky. However, even with a veteran quarterback, the Colts lost their first 13 games and finished the season with a 2–14 record, earning the first overall pick in the 2012 draft. Immediately following the season, team president Bill Polian was fired, ending his 14-year tenure with the team. The change heightened anticipation over the organization's decision regarding Manning's future with the team. The Peyton Manning era came to an end on March 8, 2012, when Jim Irsay announced that Manning was being released from the roster after 13 seasons. During the 2012 offseason, owner Jim Irsay hired Ryan Grigson as general manager. Grigson let head coach Jim Caldwell go, and Chuck Pagano was hired as the new head coach shortly thereafter. The Colts also began to release some higher-paid and oft-injured veteran players, including Joseph Addai, Dallas Clark, and Gary Brackett. The Colts used their number one overall draft pick in 2012 to select Stanford Cardinal quarterback Andrew Luck and also drafted his teammate Coby Fleener in the second round. The team also switched to a 3–4 defensive scheme. With productive seasons from both Luck and veteran receiver Reggie Wayne, the Colts rebounded from the 2–14 season of 2011 with an 11–5 record in 2012. 
The franchise, team, and fan base rallied behind head coach Chuck Pagano during his fight with leukemia. The Colts clinched an unexpected spot in the 2012–13 NFL playoffs, the 14th playoff berth for the club since 1995. The season ended in a 24–9 playoff loss to the eventual Super Bowl champion Baltimore Ravens. Two weeks into the 2013 season, the Colts traded their first-round selection in the 2014 NFL Draft to the Cleveland Browns for running back Trent Richardson. In Week 7, Luck led the Colts to a 39–33 win over his predecessor, Peyton Manning, and the undefeated Broncos. Luck went on to lead the Colts to a 15th division championship later that season. In the first round of the 2013 NFL playoffs, Luck led the Colts to a 45–44 victory over Kansas City, outscoring the Chiefs 35–13 in the second half in the second-biggest comeback in NFL playoff history. During the 2014 season, Luck led the Colts to the AFC Championship Game for the first time in his career after breaking the Colts' single-season passing yardage record previously held by Manning. After the Colts finished 8–8 in both the 2015 and 2016 seasons and missed the playoffs in back-to-back seasons for the first time since 1997–98, Grigson was fired as general manager. Just three of his previous 18 draft picks remained on the team at the time of his firing. On January 30, 2017, the team hired Chris Ballard, who had served as the Kansas City Chiefs' director of football operations, to replace Grigson. On December 31, 2017, after winning the final game of the season to finish 4–12, the Colts parted ways with Pagano. Luck, who had suffered multiple injuries and missed nine games during the 2015 season, sat out the entire 2017 season recovering from shoulder surgery. In the weeks following the end of the 2017 season, after two interviews, it was widely reported that the Colts would hire Josh McDaniels, offensive coordinator of the New England Patriots, to replace Pagano once McDaniels had fulfilled his obligations to the Patriots in Super Bowl LII. On February 8, 2018, the Colts announced McDaniels as their new head coach. Hours later, however, McDaniels rescinded his decision and returned to the Patriots. On February 11, 2018, the Colts announced Frank Reich, then offensive coordinator of the Philadelphia Eagles, as their new head coach. In Reich's first season as head coach, Andrew Luck's return to the field got off to a shaky start, as the Colts began the 2018 season 1–5. However, they surged back to win nine of their last ten games, securing a 10–6 record and a playoff berth. They won a wild card game against their division rival Houston Texans before falling to the Kansas City Chiefs in the divisional round. Luck, benefiting from the best offensive line of his career, was named the 2018 Comeback Player of the Year. Colts general manager Chris Ballard achieved a historic feat in 2018 when two players he had drafted that year, guard Quenton Nelson and linebacker Darius Leonard, were both named First-Team All-Pro. This was the first time two rookies from the same team had received that honor since Hall of Famers Dick Butkus and Gale Sayers achieved the feat in 1965. On August 24, 2019, Luck informed the Colts that he would be retiring from the NFL after not attending training camp. He cited an unfulfilling cycle of injury and rehab as his primary reason for leaving football. 
On November 17, 2019, the Colts defeated the Jacksonville Jaguars for the team's 300th win in the Indianapolis era, bringing its record in that span to 300–267. Despite a promising 5–2 start and strong seasons from Leonard, Nelson, and newly acquired defensive end Justin Houston, the Colts struggled in the second half of the 2019 season with new starting quarterback Jacoby Brissett at the helm and finished the year with a 7–9 record. On March 17, 2020, the Colts signed longtime Los Angeles Chargers quarterback and eight-time Pro Bowler Philip Rivers to a one-year deal worth $25 million. The Colts' helmets in 1953 were white with a blue stripe. In 1954–55 they were blue with a white stripe and a pair of horseshoes at the rear of the helmet. For 1956, the colors were reversed: white helmet, blue stripe, and horseshoes at the rear. In 1957, the horseshoes moved to their current location, one on each side of the helmet. The blue jerseys have white shoulder stripes and the white jerseys have blue stripes. The team also wears white pants with blue stripes down the sides. For most of the team's history, the Colts wore blue socks, accented with two or three white stripes for most of their years in Baltimore and again during the 2004 and 2005 seasons. From 1982 to 1987, the blue socks featured gray stripes. From 1955 to 1958, and again from 1988 to 1992, the Colts wore white socks with either two or three blue stripes. From 1982 through 1986, the Colts wore gray pants with their blue jerseys. The gray pants featured a horseshoe at the top of each side with the player's number inside the horseshoe. The Colts continued to wear white pants with their white jerseys throughout this period, and in 1987 the gray pants were retired. The Colts wore blue pants with their white jerseys for the first three games of the 1995 season (pairing them with white socks), but then returned to white pants with both the blue and white jerseys. The team made some minor uniform adjustments before the start of the 2004 season, including reverting from blue to the traditional gray face masks, darkening their blue from a royal blue to speed blue, and adding two white stripes to the socks. In 2006, the stripes were removed from the socks. In 2002, the Colts made a minor change to the striping pattern on their jerseys, with the stripes running only over the top of the shoulders before stopping completely; previously, the stripes wrapped around to underneath the jersey sleeves. This was done because the Colts, like many other football teams, had begun manufacturing their jerseys to be tighter, with smaller sleeves, to reduce holding calls. Although the white jerseys of the Minnesota Vikings at the time had a similar striping pattern and continued as such (as did the throwbacks the New England Patriots wore in the 2002 Thanksgiving game against the Detroit Lions, though the Patriots later wore the same throwbacks in 2009 with truncated stripes, and in 2010 they became their official alternate uniform), the Colts and most college teams with this striping pattern did not make this adjustment. In 2017, the Colts brought back the blue pants but paired them with the blue jerseys as part of the NFL Color Rush program. The club officially revealed an updated wordmark logo, as well as updated numeral fonts, on April 13, 2020. After 24 years of playing at the RCA Dome, the Colts moved to their new home, Lucas Oil Stadium, in the fall of 2008. 
In December 2004, the City of Indianapolis and Jim Irsay agreed to a new stadium deal at an estimated cost of $1 billion (including the Indiana Convention Center upgrades). In a deal estimated at $122 million, Lucas Oil Products won the naming rights to the stadium for 20 years. Lucas Oil Stadium is a seven-level stadium which seats 63,000 for football and can be reconfigured to seat 70,000 or more for NCAA basketball and football, and for concerts. The stadium features a retractable roof, with electrification technology developed by VAHLE, Inc., allowing the Colts to play home games outdoors for the first time since arriving in Indianapolis. The FieldTurf playing surface sits below ground level. In addition to being larger than the RCA Dome, the stadium features 58 permanent concession stands, 90 portable concession stands, 13 escalators, 11 passenger elevators, 800 restrooms, HD video displays and replay monitors from Daktronics, and 142 luxury suites. Besides serving as the home of the Colts, the stadium hosts games in both the men's and women's NCAA basketball tournaments and serves as the backup host for all NCAA Final Four tournaments. The stadium hosted the Super Bowl for the 2011 season (Super Bowl XLVI), with a potential economic impact estimated at $286 million, and has hosted the Drum Corps International World Championships since 2009. As a transplant from the AFC East into the AFC South upon the realignment of the NFL's divisions in 2002, the Colts share only loose rivalries with the other three teams in the division, namely the Houston Texans, Jacksonville Jaguars, and Tennessee Titans (formerly the Houston Oilers). They have dominated the AFC South for much of the division's history under quarterbacks Peyton Manning and Andrew Luck, but have faced competition for divisional supremacy in recent years from the Texans. The rivalry between the Indianapolis Colts and New England Patriots is one of the NFL's newest rivalries, fueled by the quarterback comparison between Peyton Manning and Tom Brady. The Patriots owned the beginning of the series, defeating the Colts in six consecutive contests, including the 2003 AFC Championship Game and a 2004 AFC divisional game. The Colts won the next three meetings, notching two regular season victories and a win in the 2006 AFC Championship Game on the way to their victory in Super Bowl XLI. On November 4, 2007, the Patriots defeated the Colts 24–20; in the next matchup, on November 2, 2008, the Colts won 18–15 in a game that was one of the reasons the Patriots failed to make the playoffs; in the 2009 meeting, the Colts staged a spirited comeback to beat the Patriots 35–34; in 2010, the Colts nearly staged another comeback, pulling within 31–28 after trailing 31–14 in the fourth quarter, but fell short when the Patriots intercepted a Manning pass late in the game; it turned out to be Manning's final meeting with the Patriots as a member of the Colts. After a dismal 2011 season that included a 31–24 loss to the Patriots, the Colts drafted Andrew Luck, and in November 2012 the two teams met with identical 6–3 records; the Patriots erased a 14–7 deficit to win 59–24. The nature of this rivalry is ironic because the Colts and Patriots were division rivals from 1970 to 2001, yet it did not become prominent in league circles until after Indianapolis relocated to the AFC South. 
On November 16, 2014, the 7–2 New England Patriots traveled to play the 6–3 Colts at Lucas Oil Stadium. After a stellar four-touchdown performance by New England running back Jonas Gray, the Patriots defeated the Colts 42–20. The Patriots followed up with a 45–7 defeat of the Colts in the 2014 AFC Championship Game. In the years 1953–66, the Colts played in the NFL Western Conference (also known as a division), but did not have significant rivalries with other franchises in that alignment, as they were the easternmost team and the rest of the division comprised the Great Lakes franchises (the Green Bay Packers, Detroit Lions, Chicago Bears, and, after 1961, the Minnesota Vikings) along with the league's two West Coast teams in San Francisco and Los Angeles. The closest team to Baltimore was the Washington Redskins, but they were not in the same division and were not very competitive during most years at that time. In 1958, Baltimore played its first NFL Championship Game against the 10–3 New York Giants. The Giants had qualified for the championship after a tie-breaking playoff against the Cleveland Browns. Having already been defeated by the Giants in the regular season, Baltimore was not favored to win, yet proceeded to take the title in sudden-death overtime. The Colts then repeated the feat by posting an identical record and routing the Giants in the 1959 final. Up until the Colts' back-to-back titles, the Giants had been the premier club in the NFL, and they continued to be postseason stalwarts over the next decade, losing three straight finals. The situation was reversed by the end of the decade, with Baltimore winning the 1968 NFL title and New York compiling less impressive results. In recent years, the Colts and Giants featured brothers as their starting quarterbacks (Peyton and Eli Manning, respectively), leading to their occasional matchups being referred to as the "Manning Bowl". Super Bowl III became the most famous upset in professional sports history when the American Football League's New York Jets won 16–7 over the overwhelmingly favored Colts. With the merger of the AFL and NFL, the Colts and Jets were placed in the new AFC East. The two teams met twice a year from 1970 to 2001 (interrupted in 1982 by a players' strike); with the move of the Colts to the AFC South, the rivalry actually escalated, as the teams met three times in the playoffs in the South's first nine seasons of existence: the Jets crushed the Colts 41–0 in the 2002 wild card playoff round; the Colts then defeated the Jets 30–17 in the 2009 AFC Championship Game; but the next year, in the wild card round, the Jets pulled off another playoff upset of the Colts, winning 17–16 in what was Peyton Manning's final game with the Colts. The Jets defeated the Colts 35–9 in 2012, Andrew Luck's debut season; after two straight losses to the Jets, Luck led a 41–10 rout of them in 2016. Joe Namath and Johnny Unitas were the focal points of the rivalry at its beginning, but they did not meet for a full game until September 24, 1972. Namath erupted for six touchdowns and 496 passing yards on only 28 throws and 15 completions. Unitas threw for 376 yards and two scores but was sacked six times as the Jets won 44–34; the game is considered one of the top ten passing duels in NFL history. Baltimore's post-merger passage to the AFC saw the team thrust into a new environment with little in common with its fellow divisional teams: the Jets, Miami Dolphins, Buffalo Bills, and Boston Patriots. 
One thing the two clubs did have in common, however, was new Miami coach Don Shula. Shula had coached the Colts for the previous seven pre-merger seasons (1963–69) and was signed by Joe Robbie after the merger was consummated; because the signing came after the merger, the NFL's rules on tampering came into play, and the Dolphins had to give up their first-round pick to the Colts. Powered by quarterback Earl Morrall, Baltimore was the first non-AFL franchise to win a division title in the conference, outlasting the Miami Dolphins by one game after leading the division from Week 3 of the 1970 season. The two franchises were denied a playoff confrontation by Miami's first-round defeat to the Oakland Raiders, while Baltimore went on to win its first Super Bowl title that year. In 1971, the teams were engaged in a heated race that went down to the final week of the season: Miami won its first division title with a 10–3–1 record compared to Baltimore's 10–4, after the Colts won the Week 13 matchup between them at home but proceeded to lose the last game of the season to New England. In the playoffs, Baltimore advanced to the AFC title game after a 20–3 rout of the Cleveland Browns, whereas Miami survived a double-overtime nailbiter against the Kansas City Chiefs. This set up a title game in which the defending league champion Colts were favored. Yet Miami won the AFC championship with a 21–0 shutout and went on to lose Super Bowl VI to Dallas. In 1975, Baltimore and Miami tied with 10–4 records, but the Colts advanced to the playoffs based on a head-to-head sweep of their series. In 1977, Baltimore tied for first with Miami for the third straight year (in 1976 they had tied with the now-renamed New England Patriots), and this time advanced to the playoffs on even slimmer grounds: having split the season series with Miami, the Colts qualified on a 9–3 conference record to Miami's 8–4. The rivalry in the following years was virtually negated by the very poor play of the Colts, who won just 117 games in the twenty-one seasons (1978–98) bracketed by their 1977 playoff loss to the Oakland Raiders and the 1999 trade of star running back Marshall Faulk; this stretch included a 0–8–1 record during the NFL's strike-shortened 1982 season. In 1995, now as Indianapolis, both clubs posted 9–7 records to tie for second behind Buffalo, yet the Colts once again reached the postseason, having swept the season series. The following season they edged out Miami by posting a 9–7 record and claiming the ordinarily meaningless third-place position, which this time qualified them for the wild card. The two clubs' 1999 meetings were dramatic affairs between the Hall of Fame-bound Dan Marino and the up-and-coming star Peyton Manning. Marino led a 25-point fourth-quarter comeback for a 34–31 Dolphins win at the RCA Dome, and then in Miami Marino led another comeback to tie the game 34–34 with 36 seconds remaining; Manning, however, drove the Colts into range for a 53-yard field goal as time expired, a 37–34 Colts win. The last truly meaningful matchup between the two franchises came in the 2000 season, when Miami edged out Indianapolis for the division championship with an 11–5 record. The two then met in the wild card round, where the Dolphins won 23–17 before being blown out 27–0 by Oakland (the Colts themselves had suffered a bitter loss to the Raiders in Week 2 of the season, when the Raiders erased a 24–7 deficit to win 38–31). 
In 2002, the Colts moved to the newly created AFC South division; the two clubs met at the RCA Dome on September 15, where the Dolphins edged the Colts 21–13 after stopping a late Colts drive. The rivalry was effectively retired after this, although the two clubs did meet in a memorable "Monday Night Football" matchup in 2009 in which the Colts, despite having the ball for only 15 minutes, defeated the Dolphins 27–23. The rivalry saw a rekindling after the 2012 NFL Draft brought new quarterbacks to both teams in Ryan Tannehill and Luck. The two met during the 2012 season, with Luck breaking the rookie record for passing yards in a game in a 23–20 win over the Dolphins, but Tannehill and the Dolphins beat the Colts 24–20 the next season. The Dolphins' win began a slump for Luck and the Colts against AFC East teams (eight straight losses by the Colts) that ended in December 2016 against the Jets, whom they defeated by a score of 41–10. The Ring of Honor was established on September 23, 1996. There have been 15 inductees. For full season-by-season franchise results, see List of Indianapolis Colts seasons. "see also: List of Indianapolis Colts broadcasters" The Colts' flagship radio stations since 2007 have been WFNI (1070 AM, later adding repeater signals at 93.5 FM and 107.5 FM) and WLHK (97.1 FM). The 1070 AM frequency, then known as WIBC, had also been the flagship from 1984 to 1992 and from 1995 to 1997. Matt Taylor is the team's play-by-play announcer, having succeeded Bob Lamey in 2018. Lamey held the job from 1984 to 1991 and again from 1995 to 2018. Former Colts backup quarterback Jim Sorgi serves as the color commentator. Mike Jansen has served as the public address announcer at all Colts home games since the 1998 season. The team's local TV carriage rights were shaken up in mid-2014 when WTTV's owner, Tribune Media, came to terms with CBS to become the network's Indianapolis affiliate as of January 1, 2015, replacing WISH-TV. With the deal, both Tribune Media stations, including WXIN (channel 59), carry the bulk of the team's regular season games, starting with the 2015 NFL season. Also as of the 2015 season, WTTV and WXIN became the official Colts stations and air the team's preseason games, along with official team programming and coaches' shows, and have a signage presence along the fascia of Lucas Oil Stadium. WISH's sister station WNDY-TV aired preseason games from 2011 to 2014, having replaced WTTV at that time. Before the third regular season game of 2017, against the Cleveland Browns, more than ten Indianapolis Colts players knelt on one knee, as opposed to the tradition of standing, during the playing of "The Star-Spangled Banner", while thousands of fans booed and others posted responses on social media. The following day, then Colts head coach Chuck Pagano commented, "I'm proud of our players and their commitment and their compassion toward the game and the [horse] shoe and each community. We are a unified group," and former head coach Tony Dungy was quoted as saying, "A group of our family got attacked, and called names ... and said they should be fired for what we feel is demonstrating our first amendment right". 
Before the fourth regular season game of 2017, against the Seattle Seahawks, the Colts stood during "The Star-Spangled Banner"; however, the entire team, including quarterback Andrew Luck, locked arms in protest instead of the customary holding of the right hand over the heart. Ratings for this "NBC Sunday Night Football" game were down five percent from the prior week's game in the same time slot. Before the fifth regular season game of 2017, against the San Francisco 49ers, the entire Colts team, as in the Week 4 game, stood during "The Star-Spangled Banner" with arms locked, instead of the customary holding of the right hand over the heart. In addition to the Colts' response, more than 20 members of the opposing San Francisco 49ers knelt for "The Star-Spangled Banner". In attendance within the stadium was then Vice President of the United States and former Governor of Indiana Mike Pence, who responded to the protests by leaving the stadium. This was a heavily attended home game featuring the halftime retirement of the #18 jersey of former quarterback and two-time Super Bowl winner Peyton Manning. During warmups prior to the sixth regular season game of 2017, a "Monday Night Football" game between the Colts and the Tennessee Titans, the Colts wore black T-shirts, for the third straight week, with the words "We will" on the front and "Stand for equality, justice, unity, respect, dialogue, opportunity" on the back. The Colts players stood with their arms locked during the playing of "The Star-Spangled Banner" instead of the customary holding of the right hand over the heart.
https://en.wikipedia.org/wiki?curid=15049
Immigration to the United States Immigration to the United States is the international movement of non-U.S. nationals in order to reside permanently in the country. Immigration has been a major source of population growth and cultural change throughout much of U.S. history. Because the United States is a settler colonial society, all Americans, with the exception of the small percentage of Native Americans, can trace their ancestry to immigrants from other nations around the world. In absolute numbers, the United States has a larger immigrant population than any other country, with 47 million immigrants as of 2015. This represents 19.1% of the 244 million international migrants worldwide, and 14.4% of the U.S. population. Some other countries have larger proportions of immigrants, such as Switzerland with 24.9% and Canada with 21.9%. According to the 2016 Yearbook of Immigration Statistics, the United States admitted a total of 1.18 million legal immigrants (618,000 new arrivals and 565,000 status adjustments) in 2016. Of these, 48% were the immediate relatives of U.S. citizens, 20% were family-sponsored, 13% were refugees and/or asylum seekers, 12% were employment-based preferences, 4.2% were part of the Diversity Immigrant Visa program, 1.4% were victims of a crime (U1) or their family members (U2 to U5), and 1.0% were granted the Special Immigrant Visa (SIV) for Iraqis and Afghans employed by the U.S. government. The remaining 0.4% included small numbers from several other categories, including 0.2% who were granted suspension of deportation as an immediate relative of a citizen (Z13); persons admitted under the Nicaraguan and Central American Relief Act; children born subsequent to the issuance of a parent's visa; and certain parolees from the former Soviet Union, Cambodia, Laos, and Vietnam who were denied refugee status. The economic, social, and political aspects of immigration have caused controversy regarding such issues as maintaining ethnic homogeneity, workers for employers versus jobs for non-immigrants, settlement patterns, impact on upward social mobility, crime, and voting behavior. Between 1921 and 1965, policies such as the national origins formula limited immigration and naturalization opportunities for people from areas outside Western Europe. Exclusion laws enacted as early as the 1880s generally prohibited or severely restricted immigration from Asia, and quota laws enacted in the 1920s curtailed Eastern European immigration. The civil rights movement led to the replacement of these ethnic quotas with per-country limits for family-sponsored and employment-based preference visas. Since then, the number of first-generation immigrants living in the United States has quadrupled. Research suggests that immigration to the United States is beneficial to the U.S. economy. With few exceptions, the evidence suggests that on average, immigration has positive economic effects on the native population, but the evidence is mixed as to whether low-skilled immigration adversely affects low-skilled natives. Studies also show that immigrants have lower crime rates than natives in the United States. American immigration history can be viewed in four epochs: the colonial period, the mid-19th century, the start of the 20th century, and post-1965. Each period brought distinct national groups, races, and ethnicities to the United States. During the 17th century, approximately 400,000 English people migrated to Colonial America; however, only half stayed permanently. 
They comprised 85–90% of white immigrants. From 1700 to 1775, between 350,000 and 500,000 Europeans immigrated; the estimates vary in the sources. Only 52,000 English supposedly immigrated in the period 1701 to 1775, a figure that has been questioned as too low. The rest, 400,000–450,000, were Scots, Scots-Irish from Ulster, Germans and Swiss, and French Huguenots, along with 300,000 involuntarily transported Africans. Over half of all European immigrants to Colonial America during the 17th and 18th centuries arrived as indentured servants, numbering 350,000. On the eve of the War for Independence, from 1770 to 1775, 7,000 English, 15,000 Scots, 13,200 Scots-Irish, 5,200 Germans, and 3,900 Irish Catholics arrived. Fully half of the English immigrants were young single men who were well-skilled, trained artisans, like the Huguenots. The European populations of the Middle Colonies of New York, New Jersey, Pennsylvania, and Delaware were ethnically very mixed, with the English constituting only 30% in Pennsylvania, 40–45% in New Jersey, and 18% in New York, where they numbered 22,000. The mid-19th century saw an influx mainly from northern Europe from the same major ethnic groups as in the colonial period, but with large numbers of Catholic Irish and Scandinavians added to the mix; the late 19th- and early 20th-century immigrants were mainly from Southern and Eastern Europe, but there were also several million immigrants from Canada; post-1965, most came from Latin America and Asia. Historians estimate that fewer than 1 million immigrants moved to the United States from Europe between 1600 and 1799. By comparison, in the first federal census, in 1790, the population of the United States was enumerated to be 3,929,214. The Naturalization Act of 1790 limited naturalization to "free white persons"; it was expanded to include blacks in the 1860s and Asians only in the 1950s. This made the United States an outlier, since laws that made racial distinctions were uncommon in the world in the 18th century. In the early years of the United States, immigration was fewer than 8,000 people a year, including French refugees from the slave revolt in Haiti. After 1820, immigration gradually increased. From 1836 to 1914, over 30 million Europeans migrated to the United States. The death rate on these transatlantic voyages was high: one in seven travelers died. In 1875, the nation passed its first immigration law, the Page Act. After an initial wave of immigration from China following the California Gold Rush, Congress passed a series of laws culminating in the Chinese Exclusion Act of 1882, banning virtually all immigration from China until the law's repeal in 1943. In the late 1800s, immigration from other Asian countries, especially to the West Coast, became more common. The peak year of European immigration was 1907, when 1,285,349 persons entered the country. By 1910, 13.5 million immigrants were living in the United States. While the Chinese Exclusion Act of 1882 had already excluded immigrants from China, the immigration of people from Asian countries other than China was banned by the sweeping Immigration Act of 1917, also known as the Asiatic Barred Zone Act, which also banned homosexuals, people with intellectual disabilities, and people with an anarchist worldview. The Emergency Quota Act was enacted in 1921, followed by the Immigration Act of 1924. 
The 1924 Act was aimed at further restricting immigrants from Southern and Eastern Europe, particularly Jews, Italians, and Slavs, who had begun to enter the country in large numbers beginning in the 1890s, and it consolidated the prohibition of Asian immigration. Immigration patterns of the 1930s were affected by the Great Depression. In the final prosperous year, 1929, there were 279,678 immigrants recorded, but in 1933, only 23,068 moved to the U.S. In the early 1930s, more people emigrated from the United States than immigrated to it. The U.S. government sponsored a Mexican Repatriation program which was intended to encourage people to voluntarily move to Mexico, but thousands were deported against their will. Altogether, approximately 400,000 Mexicans were repatriated; half of them were U.S. citizens. Most of the Jewish refugees fleeing the Nazis and World War II were barred from coming to the United States. In the post-war era, the Justice Department launched Operation Wetback, under which 1,075,168 Mexicans were deported in 1954. The Immigration and Nationality Act of 1965, also known as the Hart–Celler Act, abolished the system of national-origin quotas. By equalizing immigration policies, the act resulted in new immigration from non-European nations, which changed the ethnic make-up of the United States. In 1970, 60% of immigrants were from Europe; this decreased to 15% by 2000. In 1990, George H. W. Bush signed the Immigration Act of 1990, which increased legal immigration to the United States by 40%. In 1991, Bush signed the Armed Forces Immigration Adjustment Act of 1991, allowing foreign service members who had served 12 or more years in the U.S. Armed Forces to qualify for permanent residency and, in some cases, citizenship. In November 1994, California voters passed Proposition 187, amending the state constitution to deny state financial aid to illegal immigrants. The federal courts voided this change, ruling that it violated the federal constitution. The U.S. Commission on Immigration Reform, appointed by Bill Clinton, recommended reducing legal immigration from about 800,000 people per year to approximately 550,000. While an influx of new residents from different cultures presents some challenges, "the United States has always been energized by its immigrant populations," said President Bill Clinton in 1998. "America has constantly drawn strength and spirit from wave after wave of immigrants ... They have proved to be the most restless, the most adventurous, the most innovative, the most industrious of people." In 2001, President George W. Bush discussed an accord with Mexican President Vicente Fox, but the possible accord was derailed by the September 11 attacks. From 2005 to 2013, the U.S. Congress discussed various ways of controlling immigration, but the Senate and House were unable to reach an agreement. Nearly 14 million immigrants entered the United States from 2000 to 2010, and over one million persons were naturalized as U.S. citizens in 2008. The per-country limit applies the same maximum on the number of visas to all countries regardless of their population and has therefore had the effect of significantly restricting immigration of persons born in populous nations such as Mexico, China, India, and the Philippines, the leading countries of origin for legally admitted immigrants to the United States in 2013; nevertheless, China, India, and Mexico were the leading countries of origin for immigrants overall to the United States in 2013, regardless of legal status, according to a U.S. 
Census Bureau study. Nearly 8 million people immigrated to the United States from 2000 to 2005; 3.7 million of them entered without papers. In 1986, President Ronald Reagan signed immigration reform that gave amnesty to 3 million undocumented immigrants in the country. Hispanic immigrants suffered job losses during the late-2000s recession, but since the recession's end in June 2009, immigrants have posted a net gain of 656,000 jobs. Over 1 million immigrants were granted legal residence in 2011. For those who enter the U.S. illegally across the Mexico–United States border and elsewhere, migration is difficult, expensive, and dangerous. Virtually all undocumented immigrants have no avenues for legal entry to the United States due to the restrictive legal limits on green cards and the lack of immigrant visas for low-skilled workers. Participants in debates on immigration in the early twenty-first century called for increasing enforcement of existing laws governing illegal immigration to the United States, building a barrier along some or all of the Mexico–U.S. border, or creating a new guest worker program. Through much of 2006, the country and Congress were immersed in a debate about these proposals; ultimately, few of them became law, though a partial border fence was approved and subsequently canceled. According to reports released by ICE, the agency removed 240,255 immigrants during fiscal year 2016 and 256,085 during fiscal year 2018. There has been a significant increase in removals since President Trump took office, reflecting the policies his administration has put in place. In January 2017, U.S. President Donald Trump signed an executive order temporarily suspending entry to the United States by nationals of seven Muslim-majority countries. It was replaced by another executive order in March 2017 and by a presidential proclamation in September 2017, with various changes to the list of countries and exemptions. The orders were temporarily suspended by federal courts but later allowed to proceed by the Supreme Court, pending a definite ruling on their legality. Another executive order called for the immediate construction of a wall across the U.S.–Mexico border, the hiring of 5,000 new border patrol agents and 10,000 new immigration officers, and federal funding penalties for sanctuary cities. The most recent Trump policy to affect immigration to the United States was the "zero tolerance" policy, put in place in 2018 when Attorney General Jeff Sessions made a formal statement announcing it; the policy allowed children to be separated from adults unlawfully entering the United States. This was justified by labeling all adults who enter unlawfully as criminals, thus subjecting them to criminal prosecution. The policy faced widespread criticism and backlash and was reportedly stopped in June 2018. The United Nations condemned the policy, stating that "the Trump administration's practice of separating children from migrant families entering the United States violates their rights and international law". Only after stopping the zero tolerance policy did the Trump administration reveal that there were no official plans in place to reunite families, resulting in further separation. "see also: Trump administration family separation policy" 
The Trump administration has continued its promise of a heavy hand on immigration and has made it harder for asylum seekers. The most recent policies target what it means for an asylum seeker to claim credible fear, changing the ways in which asylum officers assess an applicant's circumstances: "A passage has been altered on individuals' 'demeanor, candor, and responsiveness' as a factor in their credibility. Both the 2017 and 2014 versions note that migrants' demeanor is often affected by cultural factors, including being detained in a foreign land and perhaps not speaking the language, as well as by trauma sustained at home or on the journey to the US. But the new version removes guidance that said these factors shouldn't be 'significant factors' in determining someone's credibility — essentially allowing asylum officers to consider signs of stress as a reason to doubt someone's credibility". To further decrease the number of asylum seekers admitted into the United States, Attorney General Jeff Sessions released a decision classifying gang violence and domestic abuse as "private crime", making the claims of those fleeing such violence ineligible for asylum: "The 31-page decision narrows the ground for asylum for victims of 'private crime' and will cut off an avenue to refuge for women fleeing to the United States from Central America. 'Generally, claims by aliens pertaining to domestic violence or gang violence perpetrated by non-governmental actors will not qualify for asylum,' Sessions said in the opinion". These new policies put many lives at risk, to the point that the ACLU has officially sued Jeff Sessions along with other members of the Trump administration. The ACLU claims that the policies being put in place by the administration are undermining the fundamental human rights of those immigrating into the United States, specifically women, and that they violate decades of settled asylum law. Since taking office, the Trump administration has remained true to its hard stance on immigration. The administration almost immediately looked to remove the DACA program that was put in place by the Obama administration, and moved to stop granting new requests for deferred action. The DACA page of the United States Citizenship and Immigration Services website displays a warning that states: "Important information about DACA requests: Due to federal court orders, USCIS has resumed accepting requests to renew a grant of deferred action under DACA. USCIS is not accepting requests from individuals who have never before been granted deferred action under DACA. Until further notice, and unless otherwise provided in this guidance, the DACA policy will be operated on the terms in place before it was rescinded on Sept. 5, 2017". The administration's refusal to accept new DACA requests has made the path to legal status for young people brought to the country illegally by their parents almost non-existent. In April 2020, President Trump said he would sign an executive order to temporarily suspend immigration to the United States because of the COVID-19 pandemic in the United States. According to the Department of State, in the 2016 fiscal year, 84,988 refugees were accepted into the U.S. from around the world. 
In fiscal year 2017, 53,691 refugees were accepted into the U.S. Admissions decreased significantly after Trump took office, and the decline continued in fiscal year 2018, when only 22,405 refugees were accepted, a massive drop in refugee acceptance under the Trump administration. Approximately half of the immigrants living in the United States are from Mexico and other Latin American countries. Many Central Americans are fleeing desperate social and economic circumstances created in part by U.S. foreign policy in Central America over many decades. The large number of Central American refugees arriving in the U.S. has been explained as "blowback" to policies such as U.S. military interventions and covert operations that installed or maintained in power authoritarian leaders allied with wealthy landowners and multinational corporations who crushed family farming and democratic efforts, causing drastically sharp social inequality, wide-scale poverty, and rampant crime. Economic austerity dictated by neoliberal policies imposed by the International Monetary Fund and its ally, the U.S., has also been cited as a driver of the dire social and economic conditions, as has the U.S. "War on Drugs", which has been understood as fueling murderous gang violence in the region. Another major driver of migration from Central America (Guatemala, Honduras, and El Salvador) is crop failure, which is partly caused by climate change. "The current debate ... is almost totally about what to do about immigrants when they get here. But the 800-pound gorilla that's missing from the table is what we have been doing there that brings them here, that drives them here," according to Jeff Faux, an economist who is a distinguished fellow at the Economic Policy Institute. Until the 1930s, most legal immigrants were male. By the 1990s, women accounted for just over half of all legal immigrants. Contemporary immigrants tend to be younger than the native population of the United States, with people between the ages of 15 and 34 substantially overrepresented. Immigrants are also more likely to be married and less likely to be divorced than native-born Americans of the same age. Immigrants are likely to move to and live in areas populated by people with similar backgrounds. This phenomenon has held true throughout the history of immigration to the United States. Seven out of ten immigrants surveyed by Public Agenda in 2009 said they intended to make the U.S. their permanent home, and 71% said that if they could do it over again they would still come to the U.S. In the same study, 76% of immigrants said the government has become stricter on enforcing immigration laws since the September 11, 2001 attacks ("9/11"), and 24% reported that they personally have experienced some or a great deal of discrimination. Public attitudes about immigration in the U.S. were heavily influenced by the aftermath of the 9/11 attacks. After the attacks, 52% of Americans believed that immigration was a good thing overall for the U.S., down from 62% the year before, according to a 2009 Gallup poll. A 2008 Public Agenda survey found that half of Americans said tighter controls on immigration would do "a great deal" to enhance U.S. national security. Harvard political scientist and historian Samuel P. Huntington argued in his 2004 book "Who Are We? 
The Challenges to America's National Identity" that a potential future consequence of continuing massive immigration from Latin America, especially Mexico, could be the bifurcation of the United States. The estimated population of illegal Mexican immigrants in the U.S. fell from approximately 7 million in 2007 to 6.1 million in 2011. Commentators link the reversal of the immigration trend to the economic downturn that started in 2008, which meant fewer available jobs, and to the introduction of tough immigration laws in many states. According to the Pew Hispanic Center, the net immigration of Mexican-born persons had stagnated by 2010 and tended toward negative figures. More than 80 cities in the United States, including Washington, D.C., New York City, Los Angeles, Chicago, San Francisco, San Diego, San Jose, Salt Lake City, Phoenix, Dallas, Fort Worth, Houston, Detroit, Jersey City, Minneapolis, Denver, Baltimore, Seattle, Portland, Oregon, and Portland, Maine, have sanctuary policies, which vary locally. The United States admitted more legal immigrants from 1991 to 2000, between ten and eleven million, than in any previous decade. In the most recent decade, the 10 million legal immigrants who settled in the U.S. represented roughly one third of the annual growth, as the U.S. population grew by 32 million (from 249 million to 281 million). By comparison, the highest previous decade was the 1900s, when 8.8 million people arrived, increasing the total U.S. population by one percent every year. Specifically, "nearly 15% of Americans were foreign-born in 1910, while in 1999, only about 10% were foreign-born." By 1970, immigrants accounted for 4.7 percent of the U.S. population, rising to 6.2 percent in 1980 and an estimated 12.5 percent in 2009. Twenty-five percent of U.S. residents under age 18 were first- or second-generation immigrants. Eight percent of all babies born in the U.S. in 2008 belonged to illegal immigrant parents, according to an analysis of U.S. Census Bureau data by the Pew Hispanic Center. Legal immigration to the U.S. increased from 250,000 in the 1930s, to 2.5 million in the 1950s, to 4.5 million in the 1970s, and to 7.3 million in the 1980s, before resting at about 10 million in the 1990s. Since 2000, legal immigrants to the United States have numbered approximately 1,000,000 per year, of whom about 600,000 are "Change of Status" immigrants who are already in the U.S. The number of legal immigrants in the United States is now at its highest level ever, at just over 37,000,000. In reports in 2005–2006, estimates of illegal immigration ranged from 700,000 to 1,500,000 per year. Immigration led to a 57.4% increase in the foreign-born population from 1990 to 2000. Foreign-born immigration has caused the U.S. population to continue its rapid increase, with the foreign-born population doubling from almost 20 million in 1990 to over 47 million in 2015. In 2018, there were almost 90 million immigrants and U.S.-born children of immigrants (second-generation Americans) in the United States, accounting for 28% of the overall U.S. population. While immigration has increased drastically over the last century, the foreign-born share of the population is, at 13.4%, only somewhat below its 1910 peak of 14.7%. A number of factors may explain the decrease in the share of foreign-born residents in the United States. 
Most significant has been the change in the composition of immigrants: prior to 1890, 82% of immigrants came from Northern and Western Europe; from 1891 to 1920, that share dropped to 25%, with a rise in immigrants from Eastern, Central, and Southern Europe, who together made up 64%. Animosity towards these different and foreign immigrants rose in the United States, resulting in much legislation to limit immigration. Contemporary immigrants settle predominantly in seven states: California, New York, Florida, Texas, Pennsylvania, New Jersey, and Illinois, which together comprise about 44% of the U.S. population as a whole. The combined immigrant population of these seven states was 70% of the total foreign-born population in 2000. The Census Bureau estimates that, with immigration, the U.S. population will grow from 317 million in 2014 to 417 million in 2060, when nearly 20% will be foreign-born. A 2015 report from the Pew Research Center projects that by 2065, non-Hispanic whites will account for 46% of the population, down from the 2005 figure of 67%. Non-Hispanic whites made up 85% of the population in 1960. The report also foresees the Hispanic population rising from 17% in 2014 to 29% by 2060, and the Asian population nearly doubling by 2060. Overall, the Pew report predicts the population of the United States will rise from 296 million in 2005 to 441 million in 2065, but only to 338 million with no immigration. In 35 of the country's 50 largest cities, non-Hispanic whites were in the minority at the last census or are predicted to be. In California, non-Hispanic whites slipped from 80% of the state's population in 1970 to 42% in 2001 and 39% in 2013. Immigrant segregation declined in the first half of the 20th century, but has been rising over the past few decades. This has prompted questioning of the correctness of describing the United States as a melting pot. One explanation is that groups with lower socioeconomic status concentrate in more densely populated areas that have access to public transit, while groups with higher socioeconomic status move to suburban areas. Another is that some recent immigrant groups are more culturally and linguistically different from earlier groups and prefer to live together due to factors such as communication costs. A further explanation for increased segregation is white flight. A survey of leading economists shows a consensus behind the view that high-skilled immigration makes the average American better off, and a survey of the same economists also shows strong support for the notion that low-skilled immigration makes the average American better off. According to David Card, Christian Dustmann, and Ian Preston, "most existing studies of the economic impacts of immigration suggest these impacts are small, and on average benefit the native population". In a survey of the existing literature, Örn B. Bodvarsson and Hendrik Van den Berg write, "a comparison of the evidence from all the studies ... makes it clear that, with very few exceptions, there is no strong statistical support for the view held by many members of the public, namely that immigration has an adverse effect on native-born workers in the destination country." Whereas the impact on the average native tends to be small and positive, studies show more mixed results for low-skilled natives, but whether the effects are positive or negative, they tend to be small either way.
https://en.wikipedia.org/wiki?curid=15051
Image and Scanner Interface Specification Image and Scanner Interface Specification (ISIS) is an industry-standard interface for image scanning technologies, developed by Pixel Translations in 1990 (which became EMC Corporation's Captiva Software and was later acquired by OpenText). ISIS is an open standard for scanner control and a complete image-processing framework. It is currently supported by a number of application and scanner vendors. The modular design allows the scanner to be accessed either directly or through built-in routines that handle most situations automatically. A message-based interface with tags is used so that features, operations, and formats not yet supported by ISIS can be added as desired without waiting for a new version of the specification. The standard addresses all of the issues that an application using a scanner needs to be concerned with. Functions include, but are not limited to, selecting, installing, and configuring a new scanner; setting scanner-specific parameters; scanning; reading and writing files; and fast image scaling, rotating, displaying, and printing. Drivers have been written to dynamically process data for operations such as converting grayscale to binary image data. An ISIS interface can run scanners at or above their rated speed by linking drivers together in a pipe, so that data flows from a scanner driver to a compression driver, to a packaging driver, and on to a file, viewer, or printer in a continuous stream, usually without the need to buffer more than a small portion of the full image. As a result of this piping method, each driver can be optimised to perform one function well. Drivers are typically small and modular in order to make it simple to add new functionality to an existing application.
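The driver pipeline described above can be pictured as a chain of streaming stages, each doing one job. The following is a minimal sketch of the idea in Python; the stage names and interfaces are hypothetical and are not the actual ISIS API, which is a proprietary, message- and tag-based interface:

```python
import zlib
from typing import Iterable, Iterator

def scanner_driver(pages: int, band_size: int = 4096) -> Iterator[bytes]:
    """Simulate a scanner driver emitting raw image data in small bands."""
    for _ in range(pages):
        for _ in range(4):  # a few bands per page
            yield b"\x00" * band_size

def compression_driver(bands: Iterable[bytes]) -> Iterator[bytes]:
    """Compress each band as it arrives; only one band is held in memory."""
    for band in bands:
        yield zlib.compress(band)

def file_driver(bands: Iterable[bytes], path: str) -> None:
    """Write the compressed stream to a file, band by band."""
    with open(path, "wb") as f:
        for band in bands:
            f.write(band)

# Drivers linked in a pipe: data flows scanner -> compression -> file as a
# continuous stream, never buffering more than a small part of the image.
file_driver(compression_driver(scanner_driver(pages=2)), "scan.dat")
```

Each generator stage holds only one band at a time, which mirrors the specification's claim that the pipe rarely needs to buffer more than a small portion of the full image.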
https://en.wikipedia.org/wiki?curid=15052
Ivo Caprino Ivo Caprino (17 February 1920 – 8 February 2001) was a Norwegian film director and writer, best known for his puppet films. His most famous film is "Flåklypa Grand Prix" ("Pinchcliffe Grand Prix"), made in 1975. In the mid-1940s, Caprino helped his mother design puppets for a puppet theatre, which inspired him to try making a film using his mother's designs. The result of their collaboration was "Tim og Tøffe", an 8-minute film released in 1949. Several films followed over the next couple of years, including two 15-minute shorts that are still shown regularly in Norway today: "Veslefrikk med Fela" ("Little Freddy and his Fiddle"), based on a Norwegian folk tale, and "Karius og Baktus", a story by Thorbjørn Egner about two little trolls, representing Caries and Bacterium, living in a boy's teeth. Ingeborg Gude made the puppets for these films as well, as she would continue to do up until her death in the mid-1960s. When making "Tim og Tøffe", Caprino invented an ingenious method for controlling the puppets' movements in real time. The technique can be described as a primitive, mechanical version of animatronics. Caprino's films received rave reviews, and he quickly became a celebrity in Norway. In particular, the public were fascinated by the secret technology used to make his films. When he switched to traditional stop motion, Caprino tried to maintain the impression that he was still using some kind of "magic" technology to make the puppets move, even though all his later films were made with traditional stop-motion techniques. In addition to the short films, Caprino produced dozens of advertising films with puppets. In 1959, he directed a live-action feature film, "Ugler i Mosen", which also contained stop-motion sequences. He then embarked on his most ambitious project, a feature film about Peter Christen Asbjørnsen, who travelled around Norway in the 19th century collecting traditional folk tales. The plan was to use live action for the sequences showing Asbjørnsen, and to realise the folk tales using stop motion. Unfortunately, Caprino was unable to secure funding for the project, so he ended up making the planned folk tale sequences as separate 16-minute puppet films, bookended by live-action sequences showing Asbjørnsen. In 1970, Caprino and his small team of collaborators started work on a 25-minute TV special, which would eventually become "The Pinchcliffe Grand Prix". Based on a series of books by Norwegian cartoonist and author Kjell Aukrust, it featured a group of eccentric characters all living in the small village of Pinchcliffe. The TV special was a collection of sketches based on Aukrust's books, with no real story line. After a year and a half of work, it was decided that it did not really work as a whole, so production on the TV special was stopped (with the exception of some very short clips, no material from it has ever been seen by the public), and Caprino and Aukrust instead wrote a screenplay for a feature film using the characters and environments that had already been built. The result was "The Pinchcliffe Grand Prix", which stars Theodore Rimspoke (No. Reodor Felgen) and his two assistants, Sonny Duckworth (No. Solan Gundersen), a cheerful and optimistic bird, and Lambert (No. Ludvig), a nervous, pessimistic and melancholic hedgehog. Theodore works as a bicycle repairman, though he spends most of his time inventing weird Rube Goldberg-like contraptions.
One day, the trio discover that one of Theodore's former assistants, Rudolph Gore-Slimey, has stolen his design for a race car engine and has become a world champion Formula One driver. Sonny secures funding from an Arab oil sheik who happens to be vacationing in Pinchcliffe, and the trio then build a gigantic racing car, "Il Tempo Gigante" – a fabulous construction with two engines, radar and its own blood bank. Theodore then enters a race, and ends up winning, beating Gore-Slimey despite his attempts at sabotage. The film was made in three and a half years by a team of approximately five people. Caprino directed and animated; Bjarne Sandemose (Caprino's principal collaborator throughout his career) built the sets and the cars and was in charge of the technical side; Ingeborg Riiser modeled the puppets; and Gerd Alfsen made the costumes and props. When it came out in 1975, "The Pinchcliffe Grand Prix" was an enormous success in Norway, selling 1 million tickets in its first year of release. It remains the biggest box-office hit of all time in Norway (Caprino Studios claim it has sold 5.5 million tickets to date) and was also released in many other countries. To help promote the film abroad, Caprino and Sandemose built a full-scale, road-legal replica of Il Tempo Gigante, which is usually exhibited at Hunderfossen Familiepark. Except for some TV work in the late 1970s, Caprino made no more puppet films, focusing instead on creating attractions for the Hunderfossen theme park outside Lillehammer based on his folk tale movies, and on making tourist films using a custom-built multi-camera setup of his own design that shoots 280-degree panoramic footage. Caprino was the son of Italian furniture designer Mario Caprino and the artist Ingeborg Gude, who was a granddaughter of the painter Hans Gude. He was born and died in Oslo, but lived all of his life at Snarøya in Bærum. He died in 2001 after having lived for several years with a cancer diagnosis. Since Caprino's death, his son Remo has had great success developing a computer game based on "Flåklypa Grand Prix".
https://en.wikipedia.org/wiki?curid=15053
Intel 80286 The Intel 80286 (also marketed as the iAPX 286 and often called Intel 286) is a 16-bit microprocessor that was introduced on February 1, 1982. It was the first 8086-based CPU with separate, non-multiplexed address and data buses, and also the first with memory management and wide protection abilities. The 80286 used approximately 134,000 transistors in its original nMOS (HMOS) incarnation and, just like the contemporary 80186, it could correctly execute most software written for the earlier Intel 8086 and 8088 processors. The 80286 was employed for the IBM PC/AT, introduced in 1984, and then widely used in most PC/AT compatible computers until the early 1990s. Intel's first 80286 chips were specified for a maximum clock rate of 4, 6 or 8 MHz, and later releases for 12.5 MHz. AMD and Harris later produced 16 MHz, 20 MHz and 25 MHz parts. Intersil and Fujitsu also designed fully static CMOS versions of Intel's original depletion-load nMOS implementation, largely aimed at battery-powered devices. On average, the 80286 was reportedly measured to have a speed of about 0.21 instructions per clock on "typical" programs, although it could be significantly faster on optimized code and in tight loops, as many instructions could execute in 2 clock cycles each. The 6 MHz, 10 MHz and 12 MHz models were reportedly measured to operate at 0.9 MIPS, 1.5 MIPS and 2.66 MIPS respectively. The later E-stepping level of the 80286 was free of the several significant errata that caused problems for programmers and operating-system writers in the earlier B-step and C-step CPUs (common in the AT and AT clones). Intel did not expect personal computers to use the 286. The CPU was designed for multi-user systems with multitasking applications, including communications (such as automated PBXs) and real-time process control. It had 134,000 transistors and consisted of four independent units: the address unit, bus unit, instruction unit and execution unit, organized into a loosely coupled (buffered) pipeline, just as in the 8086. The significantly increased performance over the 8086 was primarily due to the non-multiplexed address and data buses, more address-calculation hardware (most importantly, a dedicated adder) and a faster (more hardware-based) multiplier. It was produced in 68-pin packages, including PLCC (plastic leaded chip carrier), LCC (leadless chip carrier) and PGA (pin grid array). The performance increase of the 80286 over the 8086 (or 8088) could be more than 100% per clock cycle in many programs (i.e., doubled performance at the same clock speed). This was a large increase, fully comparable to the speed improvements around a decade later when the i486 (1989) or the original Pentium (1993) were introduced. This was partly due to the non-multiplexed address and data buses, but mainly to the fact that address calculations (such as base+index) were less expensive. They were performed by a dedicated unit in the 80286, while the older 8086 had to do effective-address computation using its general ALU, consuming several extra clock cycles in many cases. Also, the 80286 was more efficient in the prefetch of instructions, buffering, execution of jumps, and in complex microcoded numerical operations such as MUL/DIV than its predecessor. The 80286 included, in addition to all of the 8086 instructions, all of the new instructions of the 80186: ENTER, LEAVE, BOUND, INS, OUTS, PUSHA, POPA, PUSH immediate, IMUL immediate, and immediate shifts and rotates.
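The relationship between the clock rates, the average instructions-per-clock (IPC) figure, and the MIPS measurements quoted above is simple arithmetic, as this quick check shows:

```python
# MIPS = clock (MHz) x average instructions per clock (IPC).
# Working backwards from the measured figures quoted above:
for mhz, mips in [(6, 0.9), (10, 1.5), (12, 2.66)]:
    print(f"{mhz} MHz at {mips} MIPS -> {mips / mhz:.2f} instructions/clock")
# Implied IPC ranges from 0.15 to 0.22; it differs from the 0.21 average
# because measured MIPS depends on the instruction mix of the benchmark.
```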
The 80286 also added new instructions for protected mode: ARPL, CLTS, LAR, LGDT, LIDT, LLDT, LMSW, LSL, LTR, SGDT, SIDT, SLDT, SMSW, STR, VERR, and VERW. Some of the instructions for protected mode can (or must) be used in real mode to set up and switch to protected mode, and a few (such as SMSW and LMSW) are useful for real mode itself. The Intel 80286 had a 24-bit address bus and was able to address up to 16 MB of RAM, compared to the 1 MB addressability of its predecessor. However, memory cost and the initial rarity of software using the memory above 1 MB meant that 80286 computers were rarely shipped with more than one megabyte of RAM. Additionally, there was a performance penalty involved in accessing extended memory from real mode (in which DOS, the dominant PC operating system until the mid-1990s, ran), as noted below. The 286 was the first of the x86 CPU family to support "protected virtual-address mode", commonly called "protected mode". In addition, it was the first commercially available microprocessor with on-chip MMU capabilities (systems using the contemporaneous Motorola 68010 and NS320xx could be equipped with an optional MMU controller). This would allow IBM compatibles to have advanced multitasking OSes for the first time and compete in the Unix-dominated server/workstation market. Several additional instructions were introduced in the protected mode of the 80286 which are helpful for multitasking operating systems. Another important feature of the 80286 is the prevention of unauthorized access; in protected mode, privilege levels and segment-descriptor checks stop programs from reaching code or data they have no right to use. The 80286 (together with its co-processor, the Intel 80287) can perform arithmetic operations on several different types of numbers. By design, the 286 could not revert from protected mode to the basic 8086-compatible "real address mode" ("real mode") without a hardware-initiated reset. In the PC/AT introduced in 1984, IBM added external circuitry, as well as specialized code in the ROM BIOS and the 8042 peripheral microcontroller, to enable software to cause the reset, allowing real-mode reentry while retaining active memory and returning control to the program that initiated the reset. (The BIOS is necessarily involved because it obtains control directly whenever the CPU resets.) Though it worked correctly, the method imposed a huge performance penalty. In theory, real-mode applications could be directly executed in 16-bit protected mode if certain rules (newly proposed with the introduction of the 80286) were followed; however, as many DOS programs did not conform to those rules, protected mode was not widely used until the appearance of its successor, the 32-bit Intel 80386, which was designed to go back and forth between modes easily and to provide an emulation of real mode within protected mode. When Intel designed the 286, it was not designed to be able to multitask real-mode applications; real mode was intended to be a simple way for a bootstrap loader to prepare the system and then switch to protected mode. Essentially, in protected mode the 80286 was designed to be a new processor with many similarities to its predecessors, while real mode on the 80286 was offered for smaller-scale systems that could benefit from a more advanced version of the 80186 CPU core, with advantages such as higher clock rates, faster instruction execution (measured in clock cycles), and unmultiplexed buses, but not the 24-bit (16 MB) memory space.
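The 1 MB and 16 MB figures above follow directly from the widths of the address buses, as a one-line computation per CPU confirms:

```python
# Address-bus width determines the physical address space:
# 20 address lines reach 2**20 bytes, 24 lines reach 2**24 bytes.
for name, bits in [("8086", 20), ("80286", 24)]:
    size = 2 ** bits
    print(f"{name}: {bits}-bit bus -> {size:,} bytes = {size // 2**20} MB")
# 8086: 1,048,576 bytes = 1 MB; 80286: 16,777,216 bytes = 16 MB
```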
To support protected mode, new instructions were added: ARPL, VERR, VERW, LAR, LSL, SMSW, SGDT, SIDT, SLDT, STR, LMSW, LGDT, LIDT, LLDT, LTR, and CLTS. There are also new exceptions (internal interrupts): invalid opcode, coprocessor not available, double fault, coprocessor segment overrun, stack fault, segment overrun/general protection fault, and others only for protected mode. The protected mode of the 80286 was not utilized until many years after its release, in part because of the high cost of adding extended memory to a PC, but also because of the need for software to support the large user base of 8086 PCs. For example, in 1986 the only program that made use of it was VDISK, a RAM disk driver included with PC DOS 3.0 and 3.1. DOS could utilize the additional RAM available in protected mode (extended memory) either via a BIOS call (INT 15h, AH=87h), as a RAM disk, or as emulation of expanded memory. The difficulty lay in the incompatibility of older real-mode DOS programs with protected mode: they simply could not natively run in this new mode without significant modification. In protected mode, memory management and interrupt handling were done differently than in real mode. In addition, DOS programs typically would directly access data and code segments that did not belong to them, as real mode allowed them to do without restriction; in contrast, the design intent of protected mode was to prevent programs from accessing any segments other than their own unless special access was explicitly allowed. While it was possible to set up a protected-mode environment that allowed all programs access to all segments (by putting all segment descriptors into the GDT and assigning them all the same privilege level), this undermined nearly all of the advantages of protected mode except the extended (24-bit) address space. The choice that OS developers faced was either to start from scratch and create an OS that would not run the vast majority of the old programs, or to come up with a version of DOS that was slow and ugly (i.e., ugly from an internal technical viewpoint) but would still run a majority of the old programs. Protected mode also did not provide a significant enough performance advantage over the 8086-compatible real mode to justify supporting its capabilities; indeed, except for task switches when multitasking, it yielded only a performance disadvantage, by slowing down many instructions through a litany of added privilege checks. In protected mode, registers were still 16-bit, and the programmer was still forced to use a memory map composed of 64 kB segments, just like in real mode. In January 1985, Digital Research previewed the Concurrent DOS 286 1.0 operating system, developed in cooperation with Intel. The product would function strictly as an 80286 native-mode (i.e. protected-mode) operating system, allowing users to take full advantage of the protected mode to perform multi-user, multitasking operations while running 8086 emulation. This worked on the B-1 prototype step of the chip, but Digital Research discovered problems with the emulation on the production-level C-1 step in May, which would not allow Concurrent DOS 286 to run 8086 software in protected mode. The release of Concurrent DOS 286 was delayed until Intel could develop a new version of the chip.
In August, after extensive testing on E-1 step samples of the 80286, Digital Research acknowledged that Intel had corrected all documented 286 errata, but said that there were still undocumented chip performance problems with the prerelease version of Concurrent DOS 286 running on the E-1 step. Intel said that the approach Digital Research wished to take in emulating 8086 software in protected mode differed from the original specifications. Nevertheless, in the E-2 step, Intel implemented minor changes in the microcode that would allow Digital Research to run emulation mode much faster. IBM originally chose DR Concurrent DOS 286, under the name IBM 4680 OS, as the basis of their IBM 4680 computer for IBM Plant System products and point-of-sale terminals in 1986. Digital Research's FlexOS 286 version 1.3, a derivative of Concurrent DOS 286, was developed in 1986, introduced in January 1987, and later adopted by IBM for their IBM 4690 OS, but the same limitations affected it. The problems led to Bill Gates famously referring to the 80286 as a "brain-dead chip", since it was clear that the new Microsoft Windows environment would not be able to run multiple MS-DOS applications with the 286. It was arguably responsible for the split between Microsoft and IBM, since IBM insisted that OS/2, originally a joint venture between IBM and Microsoft, would run on a 286 (and in text mode). Other operating systems that used the protected mode of the 286 were Microsoft Xenix (around 1984), Coherent, and Minix. These were less hindered by the limitations of the 80286 protected mode because they did not aim to run MS-DOS applications or other real-mode programs. In its successor 80386 chip, Intel enhanced the protected mode to address more memory and also added the separate virtual 8086 mode, a mode within protected mode with much better MS-DOS compatibility, in order to satisfy the diverging needs of the market.
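The segment-privilege checks that protected mode added, and that slowed down so many instructions, can be illustrated with a toy model. This is a conceptual sketch only; the real 80286 check walks descriptor tables and involves gates and several special cases, but the basic rule for data-segment access is captured below:

```python
# Toy model of the 80286 protected-mode data-segment privilege check.
# Privilege levels run 0 (most privileged) to 3 (least privileged).
def can_access(cpl: int, rpl: int, dpl: int) -> bool:
    """Allow access when the effective privilege (the numeric maximum of
    the current and requested privilege levels) is at least as privileged
    as the target segment's descriptor privilege level (DPL)."""
    return max(cpl, rpl) <= dpl

print(can_access(cpl=3, rpl=3, dpl=3))  # True: program uses its own segment
print(can_access(cpl=3, rpl=3, dpl=0))  # False: program touches an OS segment
```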
https://en.wikipedia.org/wiki?curid=15054
Kerameikos Kerameikos, also known by its Latinized form Ceramicus, is an area of Athens, Greece, located to the northwest of the Acropolis, which includes an extensive area both within and outside the ancient city walls, on both sides of the Dipylon (Δίπυλον) Gate and by the banks of the Eridanos River. It was the potters' quarter of the city, from which the English word "ceramic" is derived, and was also the site of an important cemetery and of numerous funerary sculptures erected along the road out of the city towards Eleusis. The area took its name from the city quarter or dēmos (δῆμος) of the Kerameis (Κεραμεῖς, potters), which in turn derived its name from the word κέραμος ("kéramos", "pottery clay"). The "Inner Kerameikos" was the former potters' quarter within the city, while the "Outer Kerameikos" covers the cemetery and also the "Dēmósion Sēma" (δημόσιον σῆμα, public graveyard) just outside the city walls, where Pericles delivered his funeral oration in 431 BC. The cemetery was also where the Hiera Hodos (the Sacred Way, i.e. the road to Eleusis) began, along which the procession moved for the Eleusinian Mysteries. The quarter was located there because of the abundance of clay mud carried over by the Eridanos River. The area has undergone a number of archaeological excavations in recent years, though the excavated area covers only a small portion of the ancient "dēmos". It was originally an area of marshland along the banks of the Eridanos river which was used as a cemetery as long ago as the 3rd millennium BC. It became the site of an organised cemetery from about 1200 BC; numerous cist graves and burial offerings from the period have been discovered by archaeologists. Houses were constructed on the higher, drier ground to the south. During the Archaic period, increasingly large and complex grave mounds and monuments were built along the south bank of the Eridanos, lining the Sacred Way. The building of the new city wall in 478 BC, following the Persian sack of Athens in 480 BC, fundamentally changed the appearance of the area. At the suggestion of Themistocles, all of the funerary sculptures were built into the city wall, and two large city gates facing north-west were erected in the Kerameikos. The Sacred Way ran through the Sacred Gate, on the southern side, to Eleusis. On the northern side a wide road, the Dromos, ran through the double-arched Dipylon Gate (also known as the Thriasian Gate) and on to the Platonic Academy a few miles away. State graves were built on either side of the Dipylon Gate, for the interment of prominent personages such as notable warriors and statesmen, including Pericles and Cleisthenes. After the construction of the city wall, the Sacred Way and a forking street known as the Street of the Tombs again became lined with imposing sepulchral monuments belonging to the families of rich Athenians, dating to before the late 4th century BC. The construction of such lavish mausolea was banned by decree in 317 BC, after which only small columns or inscribed square marble blocks were permitted as grave stones. The Roman occupation of Athens led to a resurgence of monument-building, although little is left of them today. During the Classical period an important public building, the Pompeion, stood inside the walls, in the area between the two gates. This served a key function in the procession ("pompē", πομπή) in honour of Athena during the Panathenaic Festival.
It consisted of a large courtyard surrounded by columns, with banquet rooms where the nobility of Athens would eat the sacrificial meat for the festival. According to ancient Greek sources, a hecatomb (a sacrifice of 100 cows) was carried out for the festival, and the people received the meat in the Kerameikos, possibly in the Dipylon courtyard; excavators have found heaps of bones in front of the city wall. The Pompeion and many other buildings in the vicinity of the Sacred Gate were razed to the ground by the marauding army of the Roman dictator Sulla during his sacking of Athens in 86 BC, an episode that Plutarch described as a bloodbath. During the 2nd century AD, a storehouse was constructed on the site of the Pompeion, but it was destroyed during the invasion of the Heruli in 267 AD. The ruins became the site of potters' workshops until about 500 AD, when two parallel colonnades were built behind the city gates, overrunning the old city walls. A new Festival Gate was constructed to the east with three entrances leading into the city. This was in turn destroyed in raids by the invading Avars and Slavs at the end of the 6th century, and the Kerameikos fell into obscurity. It was not rediscovered until a Greek worker dug up a stele in April 1863. Archaeological excavations in the Kerameikos began in 1870 under the auspices of the Greek Archaeological Society. They have continued from 1913 to the present day under the German Archaeological Institute at Athens. During the construction of Kerameikos station for the expanded Athens Metro, a plague pit and approximately 1,000 tombs from the 4th and 5th centuries BC were discovered. The Greek archaeologist Efi Baziotopoulou-Valavani, who excavated the site, has dated the pit to between 430 and 426 BC. Thucydides described the panic caused by the plague, possibly an epidemic of typhoid fever, which struck the besieged city in 430 BC. The epidemic lasted for two years and killed an estimated one third of the population. He wrote that bodies were abandoned in temples and streets, to be subsequently collected and hastily buried. The disease reappeared in the winter of 427 BC. The latest findings in the Kerameikos include the excavation of a 2.1 m tall kouros, unearthed by the German Archaeological Institute at Athens under the direction of Professor Wolf-Dietrich Niemeier. This kouros is the larger twin of the one now kept in the Metropolitan Museum of Art in New York; both were made by the same anonymous sculptor, known as the "Dipylon Master". Large areas adjacent to those already excavated remain to be explored, as they lie under the fabric of modern-day Athens. Expropriation of these areas has been delayed until funding is secured. The area is enclosed and visitable through an entrance on the last block of Ermou Street, close to the intersection with Peiraios Street. The Kerameikos Museum is housed there, in a small neoclassical building that holds the most extensive collection of burial-related artifacts in Greece, varying from large-scale marble sculpture to funerary urns, stelae, jewelry and toys. The original burial monument sculptures are displayed within the museum, having been replaced by plaster replicas "in situ". The museum incorporates inner and outer courtyards, where the larger sculptures are kept.
Down the hill from the museum, visitors can wander among the Outer Kerameikos ruins, the "Demosion Sema", the banks of the Eridanos where some water still flows, and the remains of the "Pompeion" and the "Dipylon" Gate, and walk the first blocks of the Sacred Way towards Eleusis and of the Panathenaic Way towards the Acropolis. The bulk of the area lies about 7–10 meters below modern street level, having in the past been inundated by centuries' worth of sediment accumulation from the floods of the Eridanos. Since spring 2007, Kerameikos has also been the name of a metro station on Line 3 of the Athens Metro, adjacent to the Technopolis of Gazi.
https://en.wikipedia.org/wiki?curid=16928
Kabir Bedi Kabir Bedi (born 16 January 1946) is an Indian film actor. His career has spanned three continents, covering India, the United States and especially Italy, among other European countries, in three media: film, television and theatre. He is noted for his role as Emperor Shah Jahan and for playing the villainous Sanjay Verma in the 1980s blockbuster "Khoon Bhari Maang". He is best known in Italy and Europe for playing the pirate "Sandokan" in the popular Italian TV miniseries and for his role as the villainous Gobinda in the 1983 James Bond film "Octopussy". Kabir Bedi is well known in Italy and is fluent in Italian. He is based in India and lives in Mumbai. Bedi was one of three children born into a Sikh family that had devoted itself to India's fight for independence from British colonial rule. His father, Baba Pyare Lal Singh Bedi, a Punjabi Sikh, was an author and philosopher. His mother, Freda Bedi, was a British woman born in Derby, England, who became famous as the first Western woman to take ordination in Tibetan Buddhism. Kabir Bedi did his schooling at Sherwood College, Nainital, Uttarakhand, and graduated from St. Stephen's College, Delhi. Bedi has married four times and has had three children: Pooja, Siddharth (deceased) and Adam. He was first married to Protima Bedi, an Odissi dancer. Their daughter Pooja Bedi is a magazine/newspaper columnist and former actress. Their son, Siddharth, who went to Carnegie Mellon University, was diagnosed with schizophrenia and committed suicide in 1997 at the age of 26. As his marriage with Protima began to break down, he famously started a relationship with Parveen Babi; they never married. He later married British-born fashion designer Susan Humphreys. Their son, Adam Bedi, is an international model who made his Hindi film debut with the thriller "Hello? Kaun Hai!". This marriage ended in divorce. In the early 1990s, Bedi married TV and radio presenter Nikki Bedi. They had no children and divorced in 2005. Since then, Bedi has been in a relationship with British-born Parveen Dusanj, whom he married a day before his 70th birthday. Bedi supports the anti-government struggle in Myanmar and is an official ambassador of the Burma Campaign UK. He is also the Brand Ambassador for Rotary International South Asia for their Teach Programme and the Total Literacy Mission in India and South Asia. Kabir Bedi began his career in Indian theatre and then moved on to Hindi films. Bedi remains one of the first international actors from India who started out in Hindi films, worked in Hollywood films and became a star in Europe. As a stage actor, Kabir has performed Shakespeare's "Othello", portrayed a historical Indian king in "Tughlaq", and played a self-destructive alcoholic in "The Vultures". In London he also starred in "The Far Pavilions", the West End musical adaptation of M. M. Kaye's novel, at the Shaftesbury Theatre. In 2011 Kabir played Emperor Shah Jahan in "Taj", a play written by the Canadian playwright John Murrell for the Luminato Festival in Toronto. In 2013, this play was recommissioned and went on an 8-week multi-city tour of Canada. In the James Bond film "Octopussy", he played the villain's aide Gobinda. Kabir has acted in over 60 Indian films. In a historical epic, Kabir starred as the Emperor Shah Jahan. Other starring Hindi film roles include Raj Khosla's "Kacche Dhaage", Rakesh Roshan's "Khoon Bhari Maang" and Farah Khan's "Main Hoon Na". Kabir Bedi has also shot films with Hrithik Roshan ("Kites"), Govinda ("Showman"), and Akshay Kumar ("Blue").
He also starred in Deepa Mehta's film "Kamagata Maru" alongside Amitabh Bachchan and John Abraham, and acted in the Tamil film "Aravaan", directed by Vasanthabalan. Kabir played roles in Columbia Pictures' "The Beast of War", a film on the Soviet war in Afghanistan directed by Kevin Reynolds, as well as in the acclaimed Italian film "Andata Ritorno" by Marco Ponti, winner of the David di Donatello Award. In 2017, he acted in the Telugu historical film "Gautamiputra Satakarni" as Nahapana, an important ruler of the Western Kshatrapas. Kabir has appeared on American television in Hallmark's African epic "Forbidden Territory", Ken Follett's "On Wings of Eagles", and "Red Eagle". He played Friar Sands in "The Lost Empire" for NBC. Kabir has also acted in "Dynasty", "Murder, She Wrote", "Magnum, P.I.", "Hunter" and "Knight Rider". In Europe, his greatest success was "Sandokan", the saga of a romantic Southeast Asian pirate during British colonial times, an Italian-German-French TV series which broke viewership records across Europe. Kabir recently starred in a prime-time Italian television series, "Un Medico in Famiglia", on RAI TV. For over a year, Kabir starred in "The Bold and the Beautiful", the second most-watched television show in the world, seen by over a billion people in 149 countries. He had his own cinematic talk show on Indian TV, "Director's Cut", a 13-part special series interviewing the country's leading directors. His success on television continued in 2013 with the award-winning prime-time shows "Guns and Glory: The Indian Soldier" and "Vandemataram" for India's news channels Headlines Today and Aaj Tak. In the Indian Biblical television series "Bible Ki Kahaniya", Bedi played both the young and the aged Abraham. In 2007 he starred in "Chat", a radio show aired by RAI Radio2, in the role of Sandokan. In 2012, he did a series of Radio One programmes titled "Women of Gold" and "Men of Steel" in honour of industry champions in India. In 2017 he did another series in English for Radio One, "Ten On Ten", celebrating the top ten innovations out of India, as well as the year-end special series "Best of 2017". Kabir Bedi is a regular contributor to Indian publications, including the Times of India and Tehelka, on political and social issues affecting the country. He is also seen debating such topics on Indian national television. In February 2017, Bedi was announced as the new brand ambassador for the international development organisation Sightsavers, saying on his appointment, "Today there is immense awareness and attempt towards eye health and care in India and Sightsavers have shown way to people at large in the country with their achievements in the area of eye care." Since 1982 Kabir has been a voting member of the Academy of Motion Picture Arts and Sciences (which presents the Oscar awards), and he is a voting member of the Screen Actors Guild. He has won numerous film, advertising and popularity awards across Europe and India. By decree of the President of the Italian Republic of 2 June 2010, Kabir Bedi was officially knighted, receiving the highest-ranking civilian honour of the Italian Republic: the title of "Cavaliere" (Knight) of the Order of Merit of the Italian Republic. He has also received an honorary degree from Kalinga Institute of Industrial Technology (KIIT) University, Bhubaneswar, Odisha, India.
https://en.wikipedia.org/wiki?curid=16930
Kamov Ka-25 The Kamov Ka-25 (NATO reporting name "Hormone") was a naval helicopter developed for the Soviet Navy in the USSR. In the late 1950s there was an urgent demand for anti-submarine helicopters for deployment on new ships, equipped with helicopter platforms, entering service with the Soviet Navy. Kamov's compact design was chosen for production in 1958. To speed the development of the new anti-submarine helicopter, Kamov designed and built a prototype to prove the cabin and dynamic-components layout; designated Ka-20, this demonstrator was not equipped with mission equipment, corrosion protection or shipboard operational equipment. The Ka-20 was displayed at the 1961 Tushino Aviation Day display. Definitive prototypes of the Ka-25 incorporated mission equipment and corrosion protection for the structure. The rotor system introduced aluminium-alloy blades pressurised with nitrogen for crack detection, lubricated hinges, hydraulically powered controls, alcohol de-icing and automatic blade folding. Power was supplied by two free-turbine engines mounted atop the cabin, with electrically de-iced inlets and plain lateral exhausts with no infrared countermeasures, driving the main gearbox directly; a cooling fan for the gearbox and hydraulic-oil coolers sat aft of the main gearbox. Construction was of stressed-skin duralumin throughout, with flush riveting as well as some bonding and honeycomb sandwich panels. The 1.5 m × 1.25 m × 3.94 m cabin had a sliding door to port; the flight deck was forward of the cabin, and fuel tanks were housed under the floor, filled through a pressure-refuelling nozzle on the port side. A short boom at the rear of the cabin carried a central fin and twin toed-in fins at the ends of the tailplane, used mainly during auto-rotation. The undercarriage consisted of two non-castoring mainwheels with sprag brakes, attached to the fuselage by parallel 'V' struts with a single angled shock absorber to dissipate landing loads, and two castoring nosewheels on straight shock-absorbing legs attached directly to the fuselage on either side of the cockpit, which folded rearwards to reduce interference with the radar; all wheels were fitted with emergency rapid-inflation flotation collars. The flying controls all act on the co-axial rotors, with pitch, roll and collective working as in a conventional single-rotor helicopter. Yaw control was through differential collective pitch, which exploits its secondary effect of differential torque; an automatic mixer box ensured that total lift on the rotors remained constant during yaw manoeuvres, to improve handling during deck landings. Optional extras included fold-up seats for 12 passengers, a rescue hoist, external auxiliary fuel tanks, and containers for cameras, flares, smoke floats or beacons.
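The mixer-box behaviour described above, yawing through differential collective while total lift stays constant, can be sketched as a simple mixing function. This is illustrative only; the Ka-25's mixer was a mechanical unit, not software:

```python
# Split a collective-pitch command between two coaxial rotors. The yaw
# input is added to one rotor and subtracted from the other, creating a
# torque imbalance that yaws the aircraft while the sum (lift) is constant.
def mix(collective: float, yaw: float) -> tuple[float, float]:
    upper = collective + yaw / 2
    lower = collective - yaw / 2
    return upper, lower

up, lo = mix(collective=8.0, yaw=2.0)
print(up, lo, up + lo)  # 9.0 7.0 16.0 -- total pitch, hence lift, unchanged
```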
https://en.wikipedia.org/wiki?curid=16932
KAIST KAIST (formally the Korea Advanced Institute of Science and Technology) is a national research university located in Daedeok Innopolis, Daejeon, South Korea. KAIST was established by the Korean government in 1971 as the nation's first research-oriented science and engineering institution. KAIST has also been internationally accredited in business education and hosts the Secretariat of the AAPBS. KAIST has approximately 10,200 full-time students and 1,140 faculty researchers, and had a total budget of US$765 million in 2013, of which US$459 million was from research contracts. In 2007, KAIST partnered with international institutions and adopted dual degree programs for its students. Its partner institutions include the Technical University of Denmark, Carnegie Mellon University, the Georgia Institute of Technology, the Technical University of Berlin, and the Technical University of Munich. The institute was founded in 1971 as the Korea Advanced Institute of Science (KAIS) with a loan of US$6 million (US$38 million in 2019 dollars) from the United States Agency for International Development (USAID), and was supported by President Park Chung-Hee. The institute's academic scheme was mainly designed by Frederick E. Terman, then vice president of Stanford University, and Chung Geum-mo, a professor at the Polytechnic Institute of Brooklyn. The institute's two main functions were to train advanced scientists and engineers and to develop a structure for graduate education in the country. Research studies had begun by 1973, and undergraduates were studying for bachelor's degrees by 1984. In 1981 the government merged the Korea Advanced Institute of Science and the Korea Institute of Science and Technology (KIST) to form the Korea Advanced Institute of Science and Technology, or KAIST. Due to differing research philosophies, KIST and KAIST split in 1989. In the same year KAIST merged with the Korea Institute of Technology (KIT) and moved from Seoul to the Daedeok Science Town in Daejeon. The first act of President Suh upon his inauguration in July 2006 was to lay out the KAIST Development Plan. The "KAIST Development Five-Year Plan" was finalized on February 5, 2007 by the KAIST Steering Committee. The goals set by Suh were for KAIST to become one of the best science and technology universities in the world, and one of the top ten universities by 2011. In January 2008, the university dropped its full name, "Korea Advanced Institute of Science and Technology", and changed its official name to simply "KAIST". Admission to KAIST is based on overall grades, grades in math and science courses, recommendation letters from teachers, study plans, personal statements, and other data showing the excellence of potential students; it does not rely on a standardized test conducted by the university. In 2014, the acceptance rate was 14.9% for local students and 13.2% for international students. Full scholarships are given to all students, including international students, in the bachelor's, master's and doctoral courses. Doctoral students are given exemption benefits from South Korea's compulsory military service. Up to 80% of the courses taught at KAIST are conducted in English. Undergraduate students join the school through an "open major system" that allows them to take classes for three terms and then choose a discipline that suits their aptitude; undergraduates may change their major at any time.
KAIST has also produced many doctorates through its integrated master's and doctoral program and its early-completion system. Students must publish papers in internationally renowned academic journals to graduate. KAIST produced a total of 48,398 alumni from 1975 to 2014: 13,743 bachelor's, 24,776 master's, and 9,879 doctorate degree holders. As of October 2015, 11,354 students were enrolled at KAIST: 4,469 bachelor's, 3,091 master's, and 3,794 doctoral students. More than 70 percent of KAIST undergraduates come from specialized science high schools. On average, about 600 international students from more than 70 different countries study at KAIST, making it one of the most ethnically diverse universities in the country. KAIST is organized into 6 colleges, 2 schools and 33 departments/divisions. KAIST also has three affiliated institutes: the Korea Institute for Advanced Study (KIAS), the National NanoFab Center (NNFC), and the Korea Science Academy (KSA). KAIST has two campuses in Daejeon and one campus in Seoul. The university is mainly located in the Daedeok Science Town in the city of Daejeon, 150 kilometers south of the capital, Seoul. Daedeok is also home to some 50 public and private research institutes, universities such as CNU, and high-tech venture-capital companies. Most lectures, research activities, and housing services are located on the Daejeon main campus. It has a total of 29 dormitories: twenty-three for male students and four for female students on the outskirts of the campus, plus two apartment buildings for married students outside the campus. The Seoul campus is the home of the university's business faculty; the graduate schools of finance, management, and information & media management are located there. The Munji campus, the former campus of the Information and Communications University until its merger with KAIST, lies a short distance from the main campus. It has two dormitories, one for undergraduate students and the other for graduate students. The Institute for Basic Science (IBS) Center for Axion and Precision Physics Research is located there, doing particle and nuclear physics related to dark matter, and the Rare Isotope Science Project has its superconducting radio-frequency test facility on the campus. The KAIST main library was established in 1971 as the KAIS library; it went through a process of merger with and separation from the KIST library, and merged with the KIT library in March 1990. A contemporary five-story building was constructed as the main library, which is operated together with an annex library. The library uses the American Library of Congress (LC) Classification schedule. The library underwent expansion and remodeling, finished in 2018, to include conference rooms, collaboration rooms, and media rooms. Seokrim Taeulje is a three-day festival held at KAIST every spring semester. The festival preparation committee, under the undergraduate student council, is in charge of planning and execution; various food booths and experience booths are opened, and stage events such as club performances and a song festival are held. During the festival, also called the Cherry Blossom Festival, students eat strawberries on the lawn.
Seven KAIST Institutes (KIs) have been set up: the KI for the BioCentury, the KI for Information Technology Convergence, the KI for the Design of Complex Systems, the KI for Entertainment Engineering, the KI for the NanoCentury, the KI for Eco-Energy, and the KI for Urban Space and Systems. Each KI is operated as an independent research center at the level of a college, receiving support in terms of finance and facilities. In terms of ownership of intellectual property rights, KAIST holds 2,694 domestic patents and 723 international patents so far. Researchers at KAIST have developed the Online Electric Vehicle (OLEV), a technique for powering vehicles through cables underneath the surface of the road via non-contact magnetic charging (a power source is placed underneath the road surface and power is picked up wirelessly on the vehicle itself). In July 2009 the researchers successfully supplied up to 60% power to a bus across an air gap from a power line embedded in the ground, using power supply and pick-up technology developed in-house. In February 2018 the Korea Times published an article stating that KAIST was starting an AI weapons research project together with the Korean arms manufacturer Hanwha; the allegation was of developing lethal autonomous weapons. This led to researchers from 30 countries boycotting KAIST, which has denied the existence of the program. In 2016 and 2017 Thomson Reuters named KAIST the sixth most innovative university in the world and the most innovative university in the Asia-Pacific region. The 2016/17 QS World University Rankings ranked KAIST 46th overall in the world and 6th within Asia, placing it 13th in Materials Sciences and 14th in Engineering and Technology. In the 2009 THE-QS World University Rankings (in 2010 the Times Higher Education World University Rankings and the QS World University Rankings parted ways to produce separate rankings), the university was placed 21st in the world and 1st in Korea for Engineering & IT, and 69th overall. KAIST was again recognized as the number one university in Korea by a JoongAng Ilbo review. In 2009, KAIST's department of industrial design was also listed among the top 30 design schools by BusinessWeek. KAIST was ranked the best university in the Republic of Korea and the 7th university in Asia in the Top 100 Asian Universities list, the first regional ranking issued by the THE-QS World Rankings. Times Higher Education ranked KAIST the 3rd best university in the world under the age of 50 years in its 2015 league table.
https://en.wikipedia.org/wiki?curid=16934
Kaolinite Kaolinite is a clay mineral, part of the group of industrial minerals, with the chemical composition Al2Si2O5(OH)4. It is a layered silicate mineral, with one tetrahedral sheet of silica (SiO4) linked through oxygen atoms to one octahedral sheet of alumina (AlO6) octahedra. Rocks that are rich in kaolinite are known as kaolin or china clay. The name "kaolin" is derived from "Gaoling", a Chinese village near Jingdezhen in southeastern China's Jiangxi Province. The name entered English in 1727 from the French version of the word, "kaolin", following François Xavier d'Entrecolles's reports on the making of Jingdezhen porcelain. Kaolinite has a low shrink–swell capacity and a low cation-exchange capacity (1–15 meq/100 g). It is a soft, earthy, usually white mineral (a dioctahedral phyllosilicate clay), produced by the chemical weathering of aluminium silicate minerals like feldspar. In many parts of the world it is colored pink-orange-red by iron oxide, giving it a distinct rust hue. Lighter concentrations yield white, yellow, or light orange colors. Alternating layers are sometimes found, as at Providence Canyon State Park in Georgia, United States. Commercial grades of kaolin are supplied and transported as dry powder, semi-dry noodle, or liquid slurry. The chemical formula for kaolinite as used in mineralogy is Al2Si2O5(OH)4; in ceramics applications, however, the formula is typically written in terms of oxides, as Al2O3·2SiO2·2H2O. Kaolinite-group clays undergo a series of phase transformations upon thermal treatment in air at atmospheric pressure. Below about 100 °C, exposure to dry air will slowly remove liquid water from the kaolin; the end state of this transformation is referred to as "leather dry". Between 100 °C and about 550 °C, any remaining liquid water is expelled from kaolinite; the end state of this transformation is referred to as "bone dry". Throughout this temperature range, the expulsion of water is reversible: if the kaolin is exposed to liquid water, it will be reabsorbed and disintegrate into its fine particulate form. Subsequent transformations are "not" reversible, and represent permanent chemical changes. Endothermic dehydration of kaolinite begins at 550–600 °C, producing disordered metakaolin, but continuous hydroxyl loss is observed up to about 900 °C. Although historically there was much disagreement concerning the nature of the metakaolin phase, extensive research has led to a general consensus that metakaolin is not a simple mixture of amorphous silica (SiO2) and alumina (Al2O3), but rather a complex amorphous structure that retains some longer-range order (though it is not strictly crystalline) due to the stacking of its hexagonal layers:

Al2Si2O5(OH)4 → Al2Si2O7 + 2 H2O

Further heating to 925–950 °C converts metakaolin to an aluminium-silicon spinel, which is sometimes also referred to as a gamma-alumina type structure:

2 Al2Si2O7 → Si3Al4O12 + SiO2

Upon calcination above 1050 °C, the spinel phase nucleates and transforms to platelet mullite and highly crystalline cristobalite:

3 Si3Al4O12 → 2(3 Al2O3·2 SiO2) + 5 SiO2

Finally, at 1400 °C the "needle" form of mullite appears, offering substantial increases in structural strength and heat resistance. This is a structural but not a chemical transformation. See stoneware for more information on this form.
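One consequence of the dehydroxylation equation above can be checked with simple arithmetic: losing two molecules of water per formula unit corresponds to a theoretical mass loss of about 14%. A minimal sketch using rounded atomic masses:

```python
# Theoretical mass loss when kaolinite dehydroxylates to metakaolin:
# Al2Si2O5(OH)4 -> Al2Si2O7 + 2 H2O
M = {"Al": 26.98, "Si": 28.09, "O": 16.00, "H": 1.008}  # g/mol, rounded

kaolinite = 2 * M["Al"] + 2 * M["Si"] + 9 * M["O"] + 4 * M["H"]
water = 2 * (2 * M["H"] + M["O"])  # two H2O released per formula unit

print(f"kaolinite: {kaolinite:.1f} g/mol")    # ~258.2
print(f"mass loss: {water / kaolinite:.1%}")  # ~14.0% on full dehydroxylation
```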
Kaolinite is one of the most common minerals; it is mined, as kaolin, in Malaysia, Pakistan, Vietnam, Brazil, Bulgaria, Bangladesh, France, the United Kingdom, Iran, Germany, India, Australia, South Korea, the People's Republic of China, the Czech Republic, Spain, South Africa, and the United States. Mantles of kaolinitic saprolite are common in Western and Northern Europe. The ages of these mantles are Mesozoic to Early Cenozoic. Kaolinite clay occurs in abundance in soils that have formed from the chemical weathering of rocks in hot, moist climates—for example in tropical rainforest areas. Comparing soils along a gradient towards progressively cooler or drier climates, the proportion of kaolinite decreases, while the proportion of other clay minerals such as illite (in cooler climates) or smectite (in drier climates) increases. Such climatically related differences in clay mineral content are often used to infer changes in climates in the geological past, where ancient soils have been buried and preserved. In the "Institut National pour l'Etude Agronomique au Congo Belge" (INEAC) classification system, soils in which the clay fraction is predominantly kaolinite are called "kaolisol" (from kaolin and soil). In the US, the main kaolin deposits are found in central Georgia, on a stretch of the Atlantic Seaboard fall line between Augusta and Macon. This area of thirteen counties is called the "white gold" belt; Sandersville is known as the "Kaolin Capital of the World" due to its abundance of kaolin. In the late 1800s, an active kaolin surface-mining industry existed in the extreme southeast corner of Pennsylvania, near the towns of Landenberg and Kaolin, and in what is present-day White Clay Creek Preserve. The product was brought by train to Newark, Delaware, on the Newark–Pomeroy line, along which many open-pit clay mines can still be seen. The deposits were formed between the late Cretaceous and early Paleogene, about 100 to 45 million years ago, in sediments derived from weathered igneous and metamorphic rocks. Kaolin production in the US during 2011 was 5.5 million tons. During the Paleocene–Eocene Thermal Maximum, sediments were enriched with kaolinite from a detrital source due to denudation. Difficulties are encountered when trying to explain kaolinite formation under atmospheric conditions by extrapolation of thermodynamic data from the more successful high-temperature syntheses (as, for example, Meijer and Van der Plas, 1980, have pointed out). La Iglesia and Van Oosterwijk-Gastuche (1978) thought that the conditions under which kaolinite will nucleate could be deduced from stability diagrams, based as these are on dissolution data. Because of a lack of convincing results in their own experiments, however, La Iglesia and Van Oosterwijk-Gastuche (1978) had to conclude that there were other, still unknown, factors involved in the low-temperature nucleation of kaolinite. Because of the observed very slow crystallization rates of kaolinite from solution at room temperature, Fripiat and Herbillon (1971) postulated the existence of high activation energies in the low-temperature nucleation of kaolinite. At high temperatures, equilibrium thermodynamic models appear to be satisfactory for the description of kaolinite dissolution and nucleation, because the thermal energy suffices to overcome the energy barriers involved in the nucleation process.
The importance of syntheses at ambient temperature and atmospheric pressure for understanding the mechanism involved in the nucleation of clay minerals lies in overcoming these energy barriers. As indicated by Caillère and Hénin (1960), the processes involved will have to be studied in well-defined experiments, because it is virtually impossible to isolate the factors involved by mere deduction from complex natural physico-chemical systems such as the soil environment. Fripiat and Herbillon (1971), in a review on the formation of kaolinite, raised the fundamental question of how a disordered material (i.e., the amorphous fraction of tropical soils) could ever be transformed into a corresponding ordered structure. This transformation seems to take place in soils without major changes in the environment, in a relatively short period of time, and at ambient temperature (and pressure). Low-temperature synthesis of clay minerals (with kaolinite as an example) has several aspects. In the first place, the silicic acid to be supplied to the growing crystal must be in a monomeric form, i.e., silica should be present in very dilute solution (Caillère et al., 1957; Caillère and Hénin, 1960; Wey and Siffert, 1962; Millot, 1970). In order to prevent amorphous silica gels from precipitating out of supersaturated solutions without reacting with the aluminium or magnesium cations to form crystalline silicates, the silicic acid must be present in concentrations below the maximum solubility of amorphous silica. The principle behind this prerequisite can be found in structural chemistry: "Since the polysilicate ions are not of uniform size, they cannot arrange themselves along with the metal ions into a regular crystal lattice." (Iler, 1955, p. 182) The second aspect of the low-temperature synthesis of kaolinite is that the aluminium cations must be hexacoordinated with respect to oxygen (Caillère and Hénin, 1947; Caillère et al., 1953; Hénin and Robichet, 1955). Gastuche et al. (1962), as well as Caillère and Hénin (1962), concluded that kaolinite can only ever be formed when the aluminium hydroxide is in the form of gibbsite. If not, the precipitate formed will be a "mixed alumino-silicic gel" (as Millot, 1970, p. 343 put it). If this were the only requirement, large amounts of kaolinite could be harvested simply by adding gibbsite powder to a silica solution. Undoubtedly a marked degree of adsorption of the silica in solution by the gibbsite surfaces will take place, but, as stated before, mere adsorption does not create the layer lattice typical of kaolinite crystals. The third aspect is that these two initial components must be incorporated into one and the same mixed crystal with a layer structure. From the following equation for kaolinite formation (as given by Gastuche and DeKimpe, 1962):

2 Al(OH)3 + 2 H4SiO4 → Al2Si2O5(OH)4 + 5 H2O

it can be seen that five molecules of water must be removed from the reaction for every molecule of kaolinite formed. Field evidence illustrating the importance of the removal of water from the kaolinite reaction has been supplied by Gastuche and DeKimpe (1962). While studying soil formation on a basaltic rock in Kivu (Zaïre), they noted how the occurrence of kaolinite depended on the drainage of the area involved. A clear distinction was found between areas with good drainage (i.e., areas with a marked difference between wet and dry seasons) and those areas with poor drainage (i.e., perennially swampy areas).
Only in the areas with distinct seasonal alternations between wet and dry was kaolinite found. The possible significance of alternating wet and dry conditions for the transition of allophane into kaolinite has been stressed by Tamura and Jackson (1953). The role of alternations between wetting and drying in the formation of kaolinite has also been noted by Moore (1964). Syntheses of kaolinite at high temperatures are relatively well known. There are, for example, the syntheses of Van Nieuwenberg and Pieters (1929); Noll (1934); Noll (1936); Norton (1939); Roy and Osborn (1954); Roy (1961); Hawkins and Roy (1962); Tomura et al. (1985); Satokawa et al. (1994); and Huertas et al. (1999). Relatively few low-temperature syntheses have become known (cf. Brindley and DeKimpe (1961); DeKimpe (1969); Bogatyrev et al. (1997)). Laboratory syntheses of kaolinite at room temperature and atmospheric pressure have been described by DeKimpe et al. (1961). From those tests the role of periodicity becomes convincingly clear. DeKimpe et al. (1961) used daily additions of alumina and of silica (in the form of ethyl silicate) over at least two months. In addition, the pH was adjusted every day by adding either hydrochloric acid or sodium hydroxide. Such daily additions of Si and Al to the solution, in combination with the daily titrations with hydrochloric acid or sodium hydroxide over at least 60 days, will have introduced the necessary element of periodicity. Only now can the actual role of what has been described as the "aging" ("Alterung") of amorphous alumino-silicates (as, for example, Harder, 1978, noted) be fully understood. Time as such does not bring about any change in a closed system at equilibrium; but a series of alternations, of periodically changing conditions (by definition taking place in an open system), will bring about the low-temperature formation of more and more of the stable phase kaolinite instead of (ill-defined) amorphous alumino-silicates. The main use of the mineral kaolinite (about 50% of the time) is the production of paper; its use ensures the gloss on some grades of coated paper. Kaolin is also known for its capacity to induce and accelerate blood clotting. In April 2008 the US Naval Medical Research Institute announced the successful use of a kaolinite-derived aluminosilicate infusion in traditional gauze, known commercially as QuikClot Combat Gauze, which is still the hemostat of choice for all branches of the US military. Kaolin has also seen a variety of other uses, past and present. Humans sometimes eat kaolin for health reasons or to suppress hunger, a practice known as geophagy. Consumption is greater among women, especially during pregnancy. This practice has also been observed within a small population of African-American women in the Southern United States, especially Georgia. There, the kaolin is called "white dirt", "chalk", or "white clay". People can be exposed to kaolin in the workplace by breathing in the powder or through skin or eye contact. The Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for kaolin exposure in the workplace at 15 mg/m3 total exposure and 5 mg/m3 respiratory exposure over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 10 mg/m3 total exposure and 5 mg/m3 respiratory exposure, as time-weighted averages (TWA), over an 8-hour workday.
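As a check on the formation equation quoted earlier from Gastuche and DeKimpe (1962), a small formula parser can confirm that the same atoms appear on both sides. The sketch below handles only the simple, singly parenthesised formulas used in this article:

```python
import re
from collections import Counter

def parse(formula: str) -> Counter:
    """Count atoms in a formula like 'Al2Si2O5(OH)4' (one level of parens)."""
    def expand(m):
        inner, mult = m.group(1), int(m.group(2) or 1)
        return "".join(f"{el}{int(n or 1) * mult}"
                       for el, n in re.findall(r"([A-Z][a-z]?)(\d*)", inner))
    formula = re.sub(r"\(([^()]+)\)(\d*)", expand, formula)
    counts = Counter()
    for el, n in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[el] += int(n or 1)
    return counts

def side(terms):
    """Total atom counts for a list of (coefficient, formula) terms."""
    total = Counter()
    for coeff, f in terms:
        for el, n in parse(f).items():
            total[el] += coeff * n
    return total

# 2 Al(OH)3 + 2 H4SiO4 -> Al2Si2O5(OH)4 + 5 H2O
left = side([(2, "Al(OH)3"), (2, "H4SiO4")])
right = side([(1, "Al2Si2O5(OH)4"), (5, "H2O")])
print(left == right)  # True: the equation balances
```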
https://en.wikipedia.org/wiki?curid=16938
Kawasaki Ki-56 The Kawasaki Ki-56 (一式貨物輸送機, Type 1 Freight Transport) was a Japanese two-engine light transport aircraft used during World War II. It was known to the Allies by the reporting name "Thalia". 121 were built between 1940 and 1943. The Kawasaki Ki-56 was derived from the Lockheed Model 14 Super Electra aircraft that the "Kawasaki Kokuki Kogyo Kabushiki Kaisha" (The Kawasaki Aircraft Engineering Company Limited) had built under licence. In September 1939 Kawasaki was asked by the Koku Hombu to design an improved version, the Ki-56. A number were also built by Tachikawa Hikoki K.K. The Japanese invasion of Sumatra in the Dutch East Indies campaign began with a paratroop drop from Ki-56 transports on Airfield P1 and the oil refineries near Palembang. Royal Air Force Hawker Hurricane fighters flying from P1 to locate the Japanese invasion fleet passed the incoming Ki-56s, but thought them to be friendly Lockheed Hudsons (also developed from the Lockheed Model 14) returning from a raid. The defending anti-aircraft gunners at P1 were equally fooled, until the parachutes began to open. Once the AA guns opened fire one transport was shot down, another force-landed, and others veered off course, but the paratroop drop was effective and the airfield and oil installations were overrun.
https://en.wikipedia.org/wiki?curid=16941
MV Wilhelm Gustloff MV "Wilhelm Gustloff" was a German armed military transport ship which was sunk on 30 January 1945 by the Soviet submarine "S-13" in the Baltic Sea while evacuating German civilian refugees from East Prussia, Lithuania, Latvia, Poland and Estonia and military personnel from Gotenhafen (Gdynia) as the Red Army advanced. By one estimate, 9,400 people died, which makes it the largest loss of life in a single ship sinking in history. Constructed as a cruise ship for the Nazi "Kraft durch Freude" (Strength Through Joy) organisation in 1937, she had been requisitioned by the "Kriegsmarine" (German navy) in 1939. She served as a hospital ship in 1939 and 1940. She was then assigned as a floating barracks for naval personnel in Gdynia (Gotenhafen) before being armed and put into service to transport evacuees in 1945. "Wilhelm Gustloff" was constructed by the Blohm & Voss shipyards and was launched on 5 May 1937. The ship was originally intended to be named "Adolf Hitler" but instead was christened after Wilhelm Gustloff, leader of the National Socialist Party's Swiss branch, who had been assassinated by a Jewish medical student in 1936. Hitler decided on the name change after sitting next to Gustloff's widow during his memorial service. After completing sea trials in the North Sea from 15 to 16 March 1938, she was handed over to her owners. "Wilhelm Gustloff" was the first purpose-built cruise ship for the German Labour Front ("Deutsche Arbeitsfront", DAF) and used by its subsidiary organisation "Kraft durch Freude" (KdF) (Strength Through Joy). Her purposes were to provide recreational and cultural activities for German functionaries and workers, including concerts, cruises, and other holiday trips, and to serve as a public relations tool, to present "a more acceptable image of the Third Reich". She was the flagship of the KdF cruise fleet, her last civilian role, until the spring of 1939. She made her unofficial maiden voyage between 24 and 27 March 1938, carrying Austrians in an attempt to convince them to vote for the annexation of Austria by Germany. On 29 March 1938 she departed on her second voyage, carrying workers and their families from the Blohm & Voss shipyard on a three-day cruise. For her third voyage she left Hamburg on 1 April 1938 under the command of Carl Lübbe to join the KdF ships "Der Deutsche", "Oceania" and "Sierra Cordoba" on a group cruise of the North Sea. A storm developed on 3 April, with winds that forced the four ships apart. On 2 April the 1,836 gross ton coal freighter "Pegaway" had departed the Tyne under the command of Captain G. W. Ward with a load of coal for Hamburg. The storm washed cargo and machinery from her decks and, as it increased in intensity, she lost manoeuvrability. By 4 April, she was taking on water and slowly sinking. At 4 am, the captain issued an SOS when the ship was 20 miles northwest of the island of Terschelling in the West Frisian Islands group off the coast of the Netherlands. The closest of the ships that answered the distress call was the "Wilhelm Gustloff", which reached the "Pegaway" at 6 am. She launched her Lifeboat No. 1 with a crew of 12 under the command of Second Officer Meyer. The oar-powered lifeboat was unable to come alongside the "Pegaway" in the heavy seas and looked to be in danger of needing rescue itself. Lifeboat No. 6, with a crew of ten under the command of Second Officer Schürmann, was then lowered. As it had a motor, it was better able to handle the waves.
After first assisting their shipmates in Lifeboat No. 1 to head back towards the "Wilhelm Gustloff", Schürmann was able to reach the "Pegaway". One by one the 19 men on the "Pegaway" jumped into the sea and were hauled onto Lifeboat No. 6, and both they and the crew of the lifeboat were back at the "Wilhelm Gustloff" by 7:45 am. By now a Dutch tugboat had arrived but was unable to save the "Pegaway", which soon rolled to port and sank. Lifeboat No. 1 had been so badly damaged by the waves that, after its crew had climbed up via ladders to the safety of their ship, it was set adrift, later washing up on the shores of Terschelling on 2 May. On 8 April 1938 the "Wilhelm Gustloff", under the command of Captain Carl Lübbe, departed Hamburg for England, where she anchored over three miles offshore of Tilbury so as to remain in international waters. This allowed her to act as a floating polling station for German and Austrian citizens living in England who wished to vote in the approaching plebiscite on Anschluss (union of Austria with Germany). On 10 April, 1,172 German and 806 Austrian eligible voters were ferried from the docks at Tilbury to the ship, where 1,968 votes were cast in favour of the union and 10 against. Once the voting was complete, the "Wilhelm Gustloff" departed, reaching Hamburg on 12 April. After undertaking a further voyage from 14 to 19 April 1938, she went on an Osterfahrt (Easter voyage) before her actual official maiden voyage, which was undertaken between 21 April and 6 May 1938, when she joined the "Der Deutsche", "Oceania" and "Sierra Cordoba" on a group cruise to the Madeira Islands. On the second day of her voyage, the 58-year-old Captain Carl Lübbe died on the bridge from a heart attack. He was replaced by Friedrich Petersen, who, after commanding the ship for the remainder of this cruise, left her and returned only to command her final voyage. Between 20 May and 2 June 1939, she was diverted from her pleasure cruises. With seven other ships in the KdF fleet, she transported the Condor Legion back from Spain following the victory of the Nationalist forces under General Francisco Franco in the Spanish Civil War. From 14 March 1938 until 26 August 1939, the "Wilhelm Gustloff" carried over 80,000 passengers on a total of 60 voyages, all around Europe. From September 1939 to November 1940, she served as a hospital ship, officially designated "Lazarettschiff D". Beginning on 20 November 1940, the medical equipment was removed from the ship, and she was repainted from the hospital ship colours of white with a green stripe to standard naval grey. As a consequence of the Allied blockade of the German coastline, she was used as an accommodation ship (barracks) for approximately 1,000 U-boat trainees of the 2nd Submarine Training Division (2. "Unterseeboot-Lehrdivision") in the port of Gdynia, which had been occupied by Germany and renamed "Gotenhafen", located near Danzig (Gdańsk). "Wilhelm Gustloff" sat in dock there for over four years. In 1942, she was used as a stand-in for RMS "Titanic" in the German film version of the disaster. Filmed in Gotenhafen, the 2nd Submarine Training Division acted as extras in the movie. Eventually she was put back into service to transport civilians and military personnel as part of Operation Hannibal, the naval evacuation of German troops and civilians as the Red Army advanced.
The "Wilhelm Gustloff's" final voyage was to evacuate German refugees, military personnel, and technicians from Courland, East Prussia, and Danzig-West Prussia. Many had worked at advanced weapon bases in the Baltic from Gdynia/Gotenhafen to Kiel. The ship's complement and passenger lists cited 6,050 people on board, but these did not include many civilians who boarded the ship without being recorded in the official embarkation records. Heinz Schön, a German archivist and "Gustloff" survivor who extensively researched the sinking during the 1980s and 1990s, concluded that "Wilhelm Gustloff" was carrying a crew of 173 (naval armed forces auxiliaries), 918 officers, NCOs, and men of the 2 "Unterseeboot-Lehrdivision", 373 female naval auxiliary helpers, 162 wounded soldiers, and 8,956 civilians, for a total of 10,582 passengers and crew. The passengers, besides civilians, included Gestapo personnel, members of the Organisation Todt, and Nazi officials with their families. The ship was overcrowded, and due to the temperature and humidity inside, many passengers defied orders not to remove their life jackets. The ship left Danzig (Gdańsk) at 12:30 pm on 30 January 1945, accompanied by the passenger liner "Hansa", also filled with civilians and military personnel, and two torpedo boats. "Hansa" and one torpedo boat developed mechanical problems and could not continue, leaving "Wilhelm Gustloff" with one torpedo boat escort, "Löwe". The ship had four captains ("Wilhelm Gustloff"'s captain, two merchant marine captains, and the captain of the U-Boat complement housed on the vessel) on board, and they disagreed on the best course of action to guard against submarine attacks. Against the advice of the military commander, Lieutenant Commander Wilhelm Zahn (a submariner who argued for a course in shallow waters close to shore and without lights), the "Wilhelm Gustloff"'s captain Friedrich Petersen decided to head for deep water which was known to have been cleared of mines. When he was informed by a mysterious radio message of an oncoming German minesweeper convoy, he decided to activate his ship's red and green navigation lights so as to avoid a collision in the dark, making "Wilhelm Gustloff" easy to spot in the night. As "Wilhelm Gustloff" had been fitted with anti-aircraft guns, and the Germans did not mark her as a hospital ship, no notification of her operating in a hospital capacity had been given and, as she was transporting military personnel, she did not have any protection as a hospital ship under international accords. The ship was soon sighted by the , under the command of Captain Alexander Marinesko. The submarine sensor on board the escorting torpedo boat had frozen, rendering it inoperable, as had "Wilhelm Gustloff"s anti-aircraft guns, leaving the vessels defenseless. Marinesko followed the ships to their starboard (seaward) side for two hours before making a daring move to surface his submarine and steer it around "Wilhelm Gustloff"s stern, to attack it from the port side closer to shore, from whence the attack would be less expected. At around 9 pm (CET), Marinesko ordered his crew to launch four torpedoes at "Wilhelm Gustloff"s port side, about offshore, between Großendorf and Leba. The first was nicknamed "for the Motherland", the second "for Leningrad", the third "for the Soviet people", and the fourth, which got jammed in the torpedo tubes and had to be dismantled, "for Stalin". The three torpedoes which were fired successfully all struck "Wilhelm Gustloff" on her port side. 
The first torpedo struck "Wilhelm Gustloff"s bow, causing the watertight doors to seal off the area which contained quarters where off-duty crew members were sleeping. The second torpedo hit the accommodations for the women's naval auxiliary, located in the ship's drained swimming pool, dislodging the pool tiles at high velocity, which caused heavy casualties; only three of the 373 quartered there survived. The third torpedo was a direct hit on the engine room located amidships, disabling all power and communications. Reportedly, only nine lifeboats could be lowered; the rest had frozen in their davits and had to be broken free. About 20 minutes after the torpedoes' impact, "Wilhelm Gustloff" listed dramatically to port, so that the lifeboats lowered on the high starboard side crashed into the ship's tilting side, destroying many lifeboats and spilling their occupants across the ship's side. The water temperature in the Baltic Sea at that time of year is usually only a few degrees above freezing; however, this was a particularly cold night, with air temperatures well below freezing and ice floes covering the surface. Many deaths were caused either directly by the torpedoes or by drowning in the onrushing water. Others were crushed in the initial stampede caused by panicked passengers on the stairs and decks. Many others jumped into the icy Baltic. The majority of those who perished succumbed to exposure in the freezing water. Less than 40 minutes after being struck, "Wilhelm Gustloff" was lying on her side. She sank bow-first 10 minutes later. German forces were able to rescue 996 survivors of the attack: the torpedo boat "T-36" rescued 564 people; the torpedo boat "Löwe", 472; the minesweeper "M387", 98; the minesweeper "M375", 43; the minesweeper "M341", 37; the steamer "Göttingen", 28; the torpedo recovery boat ("Torpedofangboot") "TF19", 7; the freighter "Gotenland", two; and the patrol boat ("Vorpostenboot") "V1703", one baby. All four captains on "Wilhelm Gustloff" survived her sinking, but an official naval inquiry was started only against Wilhelm Zahn. His degree of responsibility was never resolved, however, because of Nazi Germany's collapse in 1945. The figures from Heinz Schön's research put the loss in the sinking at "9,343 men, women and children". Schön's more recent research is backed up by estimates made by a different method. An "Unsolved History" episode that aired in March 2003 on the Discovery Channel undertook a computer analysis of her sinking. Using "maritime EXODUS" software, it was estimated that 9,600 people died out of more than 10,600 on board. This analysis considered the passenger density based on witness reports and a simulation of escape routes and survivability with the timeline of the sinking. Many ships carrying civilians were sunk during the war by both the Allies and Axis Powers. However, based on the latest estimates of passenger numbers and those known to be saved, "Wilhelm Gustloff" remains by far the largest loss of life resulting from the sinking of one vessel in maritime history. Günter Grass said in an interview published by "The New York Times" in April 2003, "One of the many reasons I wrote "Crabwalk" was to take the subject away from the extreme Right... They said the tragedy of "Wilhelm Gustloff" was a war crime. It wasn't. It was terrible, but it was a result of war, a terrible result of war." About 1,000 German naval officers and men were aboard during, and died in, the sinking of "Wilhelm Gustloff".
The women on board the ship at the time of the sinking were inaccurately described by Soviet propaganda as "SS personnel from the German concentration camps". There were, however, 373 female naval auxiliaries amongst the passengers. On the night of 9–10 February, just 11 days after the sinking, "S-13" sank another German ship, the "General von Steuben", killing about 4,500 people. Before sinking "Wilhelm Gustloff", Alexander Marinesko had been facing a court martial due to his problems with alcohol and for being caught in a brothel while he and his crew were off duty, and he was thus deemed "not suitable to be a hero" for his actions. Therefore, instead of gaining the title Hero of the Soviet Union, he was awarded the lesser Order of the Red Banner. He was downgraded in rank to lieutenant and dishonorably discharged from the Soviet navy in October 1945. In 1960, he was reinstated as captain third class and granted a full pension. In 1963, Marinesko was given the traditional ceremony due to a captain upon his successful return from a mission. He died three weeks later from cancer, at age 50. Marinesko was posthumously named a Hero of the Soviet Union by Mikhail Gorbachev in 1990. Noted as "Obstacle No. 73" on Polish navigation charts, and classified as a war grave, "Wilhelm Gustloff" rests east of Łeba and west of Władysławowo (the former Leba and Großendorf respectively). It is one of the largest shipwrecks on the Baltic Sea floor and has been attracting much interest from treasure hunters searching for the lost Amber Room. In order to protect the property on board the war grave-wreck of "Wilhelm Gustloff" and to protect the environment, the Polish Maritime Office in Gdynia has forbidden diving within a protective radius around the wreck. In 2006, a bell recovered from the wreck and subsequently used as a decoration in a Polish seafood restaurant was lent to the privately funded "Forced Paths" exhibition in Berlin. The most prolific German author and historian on the subject of "Wilhelm Gustloff" is Heinz Schön, one of the shipwreck's survivors, who published several books (in German) on the disaster. Recent years have seen increased interest in the "Wilhelm Gustloff" disaster in countries outside Germany, with various books either written in or translated into English.
https://en.wikipedia.org/wiki?curid=16942
Kerberos (protocol) Kerberos is a computer-network authentication protocol that works on the basis of "tickets" to allow nodes communicating over a non-secure network to prove their identity to one another in a secure manner. The protocol was named after the character "Kerberos" (or "Cerberus") from Greek mythology, the ferocious three-headed guard dog of Hades. Its designers aimed it primarily at a client–server model, and it provides mutual authentication: both the user and the server verify each other's identity. Kerberos protocol messages are protected against eavesdropping and replay attacks. Kerberos builds on symmetric-key cryptography and requires a trusted third party, and optionally may use public-key cryptography during certain phases of authentication. Kerberos uses UDP port 88 by default. Massachusetts Institute of Technology (MIT) developed Kerberos to protect network services provided by Project Athena. The protocol is based on the earlier Needham–Schroeder symmetric-key protocol. Several versions of the protocol exist; versions 1–3 were used only internally at MIT. Kerberos version 4 was primarily designed by Steve Miller and Clifford Neuman. Published in the late 1980s, version 4 was also targeted at Project Athena. Neuman and John Kohl published version 5 in 1993 with the intention of overcoming existing limitations and security problems. Version 5 appeared as RFC 1510, which was made obsolete by RFC 4120 in 2005. Authorities in the United States classified Kerberos as "Auxiliary Military Equipment" on the US Munitions List and banned its export because it used the Data Encryption Standard (DES) encryption algorithm (with 56-bit keys). A Kerberos 4 implementation developed at the Royal Institute of Technology in Sweden, named KTH-KRB (rebranded to Heimdal at version 5), made the system available outside the US before the US changed its cryptography export regulations ("circa" 2000). The Swedish implementation was based on a limited version called eBones. eBones was based on the exported MIT Bones release (stripped of both the encryption functions and the calls to them), itself based on Kerberos 4 patch-level 9. In 2005, the Internet Engineering Task Force (IETF) Kerberos working group updated the specifications. MIT makes an implementation of Kerberos freely available, under copyright permissions similar to those used for BSD. In 2007, MIT formed the Kerberos Consortium to foster continued development. Founding sponsors included vendors such as Oracle, Apple Inc., Google, Microsoft, Centrify Corporation and TeamF1 Inc.; academic institutions such as the Royal Institute of Technology in Sweden, Stanford University and MIT; and vendors such as CyberSafe offering commercially supported versions. Windows 2000 and later versions use Kerberos as their default authentication method. Some Microsoft additions to the Kerberos suite of protocols are documented in RFC 3244 "Microsoft Windows 2000 Kerberos Change Password and Set Password Protocols". RFC 4757 documents Microsoft's use of the RC4 cipher. While Microsoft uses and extends the Kerberos protocol, it does not use the MIT software. Kerberos is used as the preferred authentication method: in general, joining a client to a Windows domain means enabling Kerberos as the default protocol for authentications from that client to services in the Windows domain and all domains with trust relationships to that domain.
In contrast, when either client or server or both are not joined to a domain (or not part of the same trusted domain environment), Windows will instead use NTLM for authentication between client and server. Intranet web applications can enforce Kerberos as an authentication method for domain-joined clients by using APIs provided under SSPI. Many UNIX and UNIX-like operating systems, including FreeBSD, OpenBSD, Apple's macOS, Red Hat Enterprise Linux, Oracle's Solaris, IBM's AIX and z/OS, HP's HP-UX and OpenVMS and others, include software for Kerberos authentication of users or services. Embedded implementations of the Kerberos V authentication protocol for client agents and network services running on embedded platforms are also available from several companies. The client authenticates itself to the Authentication Server (AS), which forwards the username to a key distribution center (KDC). The KDC issues a ticket-granting ticket (TGT), which is time-stamped, encrypts it using the ticket-granting service's (TGS) secret key, and returns the encrypted result to the user's workstation. This is done infrequently, typically at user logon; the TGT expires at some point, although it may be transparently renewed by the user's session manager while they are logged in. When the client needs to communicate with a service on another node (a "principal", in Kerberos parlance), the client sends the TGT to the TGS, which usually shares the same host as the KDC. The service must be registered with the TGS under a Service Principal Name (SPN). The client uses the SPN to request access to this service. After verifying that the TGT is valid and that the user is permitted to access the requested service, the TGS issues a ticket and session keys to the client. The client then sends the ticket to the service server (SS) along with its service request. A simplified sketch of this flow is given below. The Data Encryption Standard (DES) cipher can be used in combination with Kerberos, but is no longer an Internet standard because it is weak. Security vulnerabilities exist in many legacy products that implement Kerberos because they have not been updated to use newer ciphers like AES instead of DES. In November 2014, Microsoft released a patch (MS14-068) to rectify an exploitable vulnerability in the Windows implementation of the Kerberos Key Distribution Center (KDC). The vulnerability purportedly allowed users to "elevate" (and abuse) their privileges, up to domain level.
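To make the ticket flow above concrete, here is a deliberately simplified toy model in Python. It is a sketch only: it uses the third-party "cryptography" package's Fernet recipe as a stand-in for Kerberos's negotiated ciphers, represents tickets as encrypted JSON rather than the ASN.1 structures of RFC 4120, folds the AS and KDC into one place, and omits nonces, authenticators, lifetimes, and realms. All names (as_exchange, tgs_exchange, and so on) are invented for this illustration.

```python
# Toy model of the Kerberos AS -> TGS -> service ticket flow.
import json
from cryptography.fernet import Fernet

# Long-term secret keys (in real Kerberos, derived from passwords/keytabs).
user_key = Fernet.generate_key()     # shared between the user and the AS
tgs_key = Fernet.generate_key()      # known only to the KDC/TGS
service_key = Fernet.generate_key()  # shared between the TGS and the service

def as_exchange(username: str):
    """AS issues a TGT sealed with the TGS key, plus a session key
    sealed with the user's own long-term key."""
    session_key = Fernet.generate_key()
    tgt = Fernet(tgs_key).encrypt(json.dumps(
        {"user": username, "session_key": session_key.decode()}).encode())
    reply_for_user = Fernet(user_key).encrypt(session_key)
    return tgt, reply_for_user

def tgs_exchange(tgt: bytes, spn: str) -> bytes:
    """TGS opens the TGT with its own key and issues a service ticket
    sealed with the target service's key."""
    claims = json.loads(Fernet(tgs_key).decrypt(tgt))
    svc_session_key = Fernet.generate_key()
    return Fernet(service_key).encrypt(json.dumps(
        {"user": claims["user"], "spn": spn,
         "session_key": svc_session_key.decode()}).encode())

def service_accepts(ticket: bytes, spn: str) -> str:
    """The service decrypts the ticket; success proves the TGS vouched
    for the named user."""
    claims = json.loads(Fernet(service_key).decrypt(ticket))
    assert claims["spn"] == spn
    return claims["user"]

tgt, _ = as_exchange("alice")
ticket = tgs_exchange(tgt, "HTTP/web01.example.com")
print(service_accepts(ticket, "HTTP/web01.example.com"))  # -> alice
```

In the real protocol the client additionally presents an authenticator encrypted under the session key at each step; that exchange is what lets the service, and (for mutual authentication) the client, verify the other side's identity.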
https://en.wikipedia.org/wiki?curid=16947
Ketamine Ketamine is a medication mainly used for starting and maintaining anesthesia. It induces a trance-like state while providing pain relief, sedation, and memory loss. Other uses include sedation in intensive care and treatment of pain and depression. Heart function, breathing, and airway reflexes generally remain functional. Effects typically begin within five minutes when given by injection and last up to approximately 25 minutes. Common side effects include agitation, confusion, or hallucinations as the medication wears off. Elevated blood pressure and muscle tremors are relatively common. Spasms of the larynx may rarely occur. Ketamine is an NMDA receptor antagonist, but it may also have other actions. Ketamine was discovered in 1962, first tested in humans in 1964, and approved for use in the United States in 1970. It was extensively used for surgical anesthesia in the Vietnam War due to its safety. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication. The wholesale price in the developing world is between US$0.84 and US$3.22 per vial. Ketamine is also used as a recreational drug for its hallucinogenic and dissociative effects. As an anesthetic, ketamine is used in medicine because it suppresses breathing much less than most other available anesthetics; however, due to the hallucinations it may cause, it is not typically used as a primary anesthetic, although it is the anesthetic of choice when reliable ventilation equipment is not available. Ketamine is frequently used in severely injured people and appears to be safe in this group. A 2011 clinical practice guideline supports the use of ketamine as a dissociative sedative in emergency medicine. It is the drug of choice for people in traumatic shock who are at risk of hypotension. Low blood pressure is harmful in people with severe head injury, and ketamine is the anesthetic least likely to cause low blood pressure; it is often even able to prevent it. The effect of ketamine on the respiratory and circulatory systems is different from that of other anesthetics. When used at anesthetic doses, it will usually stimulate rather than depress the circulatory system. It is sometimes possible to perform ketamine anesthesia without protective measures to the airways. Ketamine is considered relatively safe because protective airway reflexes are preserved. It has been successfully used to prevent postanesthetic shivering. Ketamine is used as a bronchodilator in the treatment of severe asthma, although evidence of clinical benefit is limited. Ketamine is sometimes used in the treatment of status epilepticus that has failed to respond adequately to standard treatments. Ketamine may be used for postoperative pain management. Low doses of ketamine may reduce morphine use, nausea, and vomiting after surgery. It is especially useful in the prehospital setting, due to its effectiveness and low risk of respiratory depression. Ketamine has similar efficacy to opioids in a hospital emergency department setting for management of acute pain and for control of procedural pain. If given intrathecally, its adverse cognitive effects are largely avoided at analgesic doses. It may also be used as an intravenous analgesic with opiates to manage otherwise intractable pain, particularly if this pain is neuropathic. It has the added benefit of counteracting spinal sensitization or wind-up phenomena experienced with chronic pain.
At these doses, the psychotropic side effects are less apparent and are well managed with benzodiazepines. Ketamine is an analgesic that is most effective when used alongside a low-dose opioid because, while it does have analgesic effects by itself, the doses required for adequate pain relief when it is used as the sole analgesic agent are considerably higher and far more likely to produce disorienting side effects. A review article in 2013 concluded, "despite limitations in the breadth and depth of data available, there is evidence that ketamine may be a viable option for treatment-refractory cancer pain". Low-dose ketamine is sometimes used in the treatment of complex regional pain syndrome (CRPS). A 2013 systematic review found only low-quality evidence to support the use of ketamine for CRPS. Ketamine has been found to be a rapid-acting antidepressant in depression. It also may be effective in decreasing suicidal ideation, although this is based on lower-quality evidence. The antidepressant effects of ketamine were first shown in small studies in 2000 and 2006. They have since been demonstrated and characterized in subsequent studies. A single low, sub-anesthetic dose of ketamine given via intravenous infusion may produce antidepressant effects within four hours in people with depression. These antidepressant effects may persist for up to several weeks following a single infusion. This is in contrast to conventional antidepressants like selective serotonin reuptake inhibitors (SSRIs) and tricyclic antidepressants (TCAs), which generally require at least several weeks for their benefits to occur and become maximal. Moreover, based on the available preliminary evidence, the magnitude of the antidepressant effects of ketamine appears to be more than double that of conventional antidepressants. On the basis of these findings, ketamine has been described as the single most important advance in the treatment of depression in over 50 years. It has sparked interest in NMDA receptor antagonists for depression, and has shifted the direction of antidepressant research and development. Ketamine has not been approved for use as an antidepressant, but its enantiomer, esketamine, was developed as a nasal spray for treatment-resistant depression and was approved for this indication in the United States in March 2019. The effectiveness of esketamine is limited, however, with significant effectiveness for treatment-resistant depression seen in only two of five clinical trials. Although there is evidence to support the effectiveness of ketamine and esketamine in treating depression, there is a lack of consensus on dosing and on the effects and safety of long-term therapy. Ketamine can produce euphoria and dissociative hallucinogen effects at higher doses, and thus has an abuse potential. Moreover, ketamine has been associated with cognitive deficits, urotoxicity, hepatotoxicity, and other complications in some individuals with long-term use. These undesirable effects may serve to limit the use of ketamine and esketamine for depression. Ketamine is available as a solution for intravenous infusion. The use of ketamine is cautioned against in certain conditions. When administered by trained medical professionals, ketamine is generally safe for those people who are critically ill.
Even in these cases, there are known side effects. At anesthetic doses, 10–20% of people experience adverse reactions during emergence from anesthesia, reactions that can be as serious as hallucinations and delirium. These reactions may be less common in some subpopulations and when the drug is administered intramuscularly, and they can occur up to 24 hours postoperatively; the chance of this occurring can be reduced by minimizing stimulation to the person during recovery and by pretreating with a benzodiazepine, alongside a lower dose of ketamine. People who experience severe reactions may require treatment with a small dose of a short- or ultrashort-acting barbiturate. Tonic-clonic movements are reported at higher anesthetic doses in more than 10% of people. In 1989, psychiatry professor John Olney reported that ketamine caused irreversible changes, known as Olney's lesions, in two small areas of the rat brain. However, the rat brain has significant differences in metabolism from the human brain; therefore such changes may not occur in humans.
https://en.wikipedia.org/wiki?curid=16948
Katyusha rocket launcher The Katyusha multiple rocket launcher is a type of rocket artillery first built and fielded by the Soviet Union in World War II. Multiple rocket launchers such as these deliver explosives to a target area more quickly than conventional artillery, but with lower accuracy and a longer time to reload. They are fragile compared to artillery guns, but are inexpensive, easy to produce, and usable on any chassis. The Katyushas of World War II, the first self-propelled artillery mass-produced by the Soviet Union, were usually mounted on ordinary trucks. This mobility gave the Katyusha, and other self-propelled artillery, another advantage: being able to deliver a large blow all at once, and then move before being located and attacked with counter-battery fire. Katyusha weapons of World War II included the BM-13 launcher, the light BM-8, and the heavy BM-31. Today, the nickname is also applied to newer truck-mounted post-Soviet (as well as non-Soviet) multiple rocket launchers, notably the common BM-21 Grad and its derivatives. Weapons of this general type have existed since the 15th century, Leonardo da Vinci having designed a similar machine. Initially, concerns for secrecy kept the military designation of the Katyushas from being known by the soldiers who operated them. They were called by code names such as "Kostikov guns", after the head of the RNII, the Reaction-Engine Scientific Research Institute, and finally classed as "Guards Mortars". The name "BM-13" was only allowed into secret documents in 1942, and remained classified until after the war. Because they were marked with the letter "K" (for Voronezh Komintern Factory), Red Army troops adopted a nickname from Mikhail Isakovsky's popular wartime song, "Katyusha", about a girl longing for her absent beloved, who has gone away on military service. Katyusha is the Russian equivalent of "Katie", an endearing diminutive form of the name Katherine: "Yekaterina → Katya → Katyusha". German troops coined the nickname "Stalin's organ", after Soviet leader Joseph Stalin, comparing the visual resemblance of the launch array to a pipe organ, and the sound of the weapon's rocket motors, a distinctive howling sound which terrified the German troops, adding a psychological warfare aspect to their use. Weapons of this type are known by the same name in Denmark, Finland, France, Norway, the Netherlands, Belgium, Hungary, Sweden, and Spain and other Spanish-speaking countries. The heavy BM-31 launcher was also referred to as "Andryusha" ("Андрюша", an affectionate diminutive of "Andrew"). Katyusha rocket launchers were mounted on many platforms during World War II, including trucks, artillery tractors, tanks, and armoured trains, as well as on naval and riverine vessels as assault support weapons. Soviet engineers also mounted single Katyusha rockets on lengths of railway track to serve in urban combat. The design was relatively simple, consisting of racks of parallel rails on which rockets were mounted, with a folding frame to raise the rails to launch position. Each truck had 14 to 48 launchers. The M-13 rocket of the BM-13 system was 132 mm in diameter.
The weapon is less accurate than conventional artillery guns, but is extremely effective in saturation bombardment, and was particularly feared by German soldiers. A battery of four BM-13 launchers could fire a salvo in 7–10 seconds that delivered 4.35 tons of high explosives over a large impact zone, making its power roughly equivalent to that of 72 conventional artillery guns. With an efficient crew, the launchers could redeploy to a new location immediately after firing, denying the enemy the opportunity for counterbattery fire. Katyusha batteries were often massed in very large numbers to create a shock effect on enemy forces. The weapon's disadvantage was the long time it took to reload a launcher, in contrast to conventional guns which could sustain a continuous low rate of fire. In June 1938, the Soviet Reaction-Engine Scientific Research Institute (RNII) in Moscow was authorized by the Main Artillery Directorate (GAU) to develop a multiple rocket launcher for the RS-132 aircraft rocket (RS for "reaktivnyy snaryad", 'rocket-powered shell'). I. Gvay led a design team in Chelyabinsk, Russia, which built several prototype launchers firing the modified 132 mm M-132 rockets over the sides of ZiS-5 trucks. These proved unstable, and V.N. Galkovskiy proposed mounting the launch rails longitudinally. In August 1939, the result was the BM-13 (BM stands for "боевая машина" (translit. "boyevaya mashina"), 'combat vehicle' for M-13 rockets). The first large-scale testing of the rocket launchers took place at the end of 1938, when 233 rounds of various types were used. A salvo of rockets could completely straddle a target at a range of 5,500 metres (3.4 mi). But the artillery branch was not fond of the Katyusha, because it took up to 50 minutes to load and fire 24 rounds, while a conventional howitzer could fire 95 to 150 rounds in the same time. Testing with various rockets was conducted through 1940, and the BM-13-16, with launch rails for sixteen rockets, was authorized for production. Only forty launchers were built before Germany invaded the Soviet Union in June 1941. After their success in the first month of the war, mass production was ordered and the development of other models proceeded. The Katyusha was inexpensive and could be manufactured in light industrial installations which did not have the heavy equipment to build conventional artillery gun barrels. By the end of 1942, 3,237 Katyusha launchers of all types had been built, and by the end of the war total production reached about 10,000. The truck-mounted Katyushas were installed on ZiS-6 6×4 trucks, as well as the two-axle ZiS-5 and ZiS-5V. In 1941, a small number of BM-13 launchers were mounted on STZ-5 artillery tractors. A few were also tried on KV tank chassis as the KV-1K, but this was a needless waste of heavy armour. Starting in 1942, they were also mounted on various British, Canadian and U.S. Lend-Lease trucks, in which case they were sometimes referred to as BM-13S. The cross-country performance of the Studebaker US6 2½-ton 6x6 truck was so good that it became the GAU's standard mounting in 1943, designated BM-13N ("normalizovanniy", 'standardized'), and more than 1,800 of this model were manufactured by the end of World War II. After World War II, BM-13s were based on Soviet-built ZiS-151 trucks. The 82 mm BM-8 was approved in August 1941, and deployed as the BM-8-36 on truck beds and the BM-8-24 on T-40 and T-60 light tank chassis. Later these were also installed on GAZ-67 jeeps as the BM-8-8, and on the larger Studebaker trucks as the BM-8-48.
In 1942, the team of scientists Leonid Shvarts, Moisei Komissarchik and engineer Yakov Shor received the Stalin prize for the development of the BM-8-48. Based on the M-13, the M-30 rocket was developed in 1942. Its bulbous warhead required it to be fired from a grounded frame, called the M-30 (single frame, four rounds; later double frame, eight rounds), instead of a launch rail mounted on a truck. In 1944 it became the basis for the BM-31-12 truck-mounted launcher. A battery of BM-13-16 launchers included four firing vehicles, two reload trucks and two technical support trucks, with each firing vehicle having a crew of six. Reloading was executed in 3–4 minutes, although the standard procedure was to switch to a new position some 10 km away due to the ease with which the battery could be identified by the enemy. Three batteries were combined into a division (company), and three divisions into a separate mine-firing regiment of rocket artillery. Soviet World War II rocket systems were named according to a standard template of the form BM-m-n, where m is the model number of the missile fired and n is the number of launch rails or tubes (a short parsing sketch of this scheme is given below). In particular, the BM-8-16 is a vehicle which fires M-8 missiles and has 16 rails, and the BM-31-12 is a vehicle which fires M-31 missiles and has 12 launch tubes. Short names such as BM-8 or BM-13 were used too; the number of launch rails/tubes is absent there. Such names describe launchers only, no matter what vehicle they are mounted on. In particular, the BM-8-24 had a number of variants: vehicle mounted (ZiS-5 truck), tank mounted (T-40) and tractor mounted (STZ-3). All of them had the same name: BM-8-24. Other launchers likewise had a number of variants mounted on different vehicles. There was also an experimental KV-1K, a Katyusha mounted on the KV-1 tank, which was not taken into service. The M-8 and M-13 rockets could also be fitted with smoke warheads, although this was not common. The multiple rocket launchers were top secret at the beginning of World War II. A special unit of the NKVD troops was raised to operate them. On July 14, 1941, an experimental artillery battery of seven launchers was first used in battle at Orsha in the Vitebsk Region of Belarus, under the command of Captain Ivan Flyorov, destroying a concentration of German troops with tanks, armored vehicles and trucks at the marketplace, causing massive German Army casualties and its retreat from the town in panic. Following this success, the Red Army organized new Guards mortar batteries for the support of infantry divisions. A battery's complement was standardized at four launchers. They remained under NKVD control until German "Nebelwerfer" rocket launchers became common later in the war. On August 8, 1941, Stalin ordered the formation of eight special Guards mortar regiments under the direct control of the Reserve of the Supreme High Command (RVGK). Each regiment comprised three battalions of three batteries, totalling 36 BM-13 or BM-8 launchers. Independent Guards mortar battalions were also formed, comprising 12 launchers in three batteries of four. By the end of 1941, there were eight regiments, 35 independent battalions, and two independent batteries in service, fielding a total of 554 launchers. In June 1942 heavy Guards mortar battalions were formed around the new M-30 static rocket launch frames, consisting of 96 launchers in three batteries. In July, a battalion of BM-13s was added to the establishment of a tank corps.
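The BM-m-n designation scheme described above is regular enough to be parsed mechanically. The short Python sketch below is purely illustrative; the Designation type and parse_bm function are names invented for this example, not an established tool.

```python
import re
from typing import NamedTuple, Optional

class Designation(NamedTuple):
    missile: str          # e.g. "M-13"
    rails: Optional[int]  # None for short names such as "BM-13"

def parse_bm(name: str) -> Designation:
    """Parse a Soviet WWII launcher name of the form BM-m[-n],
    where m is the missile model and n the number of rails/tubes."""
    match = re.fullmatch(r"BM-(\d+)(?:-(\d+))?", name)
    if not match:
        raise ValueError(f"not a BM designation: {name!r}")
    model, rails = match.groups()
    return Designation(missile=f"M-{model}",
                       rails=int(rails) if rails else None)

print(parse_bm("BM-8-16"))   # Designation(missile='M-8', rails=16)
print(parse_bm("BM-31-12"))  # Designation(missile='M-31', rails=12)
print(parse_bm("BM-13"))     # Designation(missile='M-13', rails=None)
```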
In 1944, the BM-31 was used in motorized heavy Guards mortar battalions of 48 launchers. In 1943, Guards mortar brigades, and later divisions, were formed equipped with static launchers. By the end of 1942, 57 regiments were in service; together with the smaller independent battalions, this was the equivalent of 216 batteries: 21% BM-8 light launchers, 56% BM-13, and 23% M-30 heavy launchers. By the end of the war, the equivalent of 518 batteries were in service. The success and economy of multiple rocket launchers (MRL) have led to their continued development. In the years following WWII, the BM-13 was replaced by the 140 mm BM-14, and the BM-31 was replaced by the 240 mm BM-24. During the Cold War, the Soviet Union fielded several models of Katyusha-like MRL, notably the BM-21 Grad launchers, somewhat inspired by the earlier weapon, and the larger BM-27 Uragan. Advances in artillery munitions have been applied to some Katyusha-type multiple launch rocket systems, including bomblet submunitions, remotely deployed land mines, and chemical warheads. With the breakup of the Soviet Union, Russia inherited most of its military arsenal, including its large complement of MRLs. In recent history, they have been used by Russian forces during the First and Second Chechen Wars and by Armenian and Azerbaijani forces during the Nagorno-Karabakh War. Georgian government forces are reported to have used BM-21 Grad or similar rocket artillery in fighting in the 2008 South Ossetia war. Katyusha-like launchers were exported to Afghanistan, Angola, Czechoslovakia, Egypt, East Germany, Hungary, Iran, Iraq, Mongolia, North Korea, Poland, Syria, Yemen and Vietnam. They were also built in Czechoslovakia, the People's Republic of China, North Korea, and Iran. Proper Katyushas (BM-13s) also saw action in the Korean War, used by the Chinese People's Volunteer Army against the South and United Nations forces. Soviet BM-13s were known to have been imported to China before the Sino-Soviet split and were operational in the People's Liberation Army. Israel captured BM-24 MRLs during the Six-Day War (1967), used them in two battalions during the Yom Kippur War (1973) and the 1982 Lebanon War, and later developed the MAR-240 launcher for the same rockets, based on a Sherman tank chassis. The rockets were employed by the Tanzania People's Defence Force in the Uganda-Tanzania War. Tanzanian forces called them "Baba Mtakatifu" (Kiswahili for "Holy Father"), while the Ugandans called them Saba Saba. During the 2006 Lebanon War, Hezbollah fired between 3,970 and 4,228 rockets from light truck-mounts and single-rail man-portable launchers. About 95% of these were 122 mm (4.8 in) Syrian-manufactured M-21OF type artillery rockets, which carried warheads up to 30 kg (66 lb) and had a range of 20 km, perhaps up to 30 km (19 mi). Most rockets fired at Israel from the Gaza Strip are of the simpler Qassam rocket type, but Hamas has also launched 122-mm Grad-type Katyusha rockets against several cities in Israel, although they are not reported to have truck-mounted launchers. Although Katyusha originally referred to the mobile launcher, today the rockets are often referred to as Katyushas. Some allege that the CIA bought Katyushas from the Egyptian military and supplied them to the Mujahideen (via Pakistan's ISI) during the Soviet–Afghan war. Katyusha-like MRLs were also allegedly used by the Rwandan Patriotic Front during its 1990 invasion of Rwanda, through the 1994 genocide.
They were effective in battle, but their use fed much anti-Tutsi sentiment in the local media. It was reported that BM-21 Grad launchers were used against American forces during the 2003 invasion of Iraq. They have also been used in the Afghanistan and Iraq insurgencies. In Iraq, according to Associated Press and Agence France-Presse reports, Katyusha-like rockets were fired at the Green Zone in late March 2008. Katyusha rockets were reportedly used by both Gaddafi loyalists and anti-Gaddafi forces during the Libyan Civil War. In February 2013, the Defence Ministry of Yemen reported seizing an Iranian ship whose cargo included (among other weapons) Katyusha rockets. On May 19, 2019, a Katyusha rocket was fired inside the Green Zone in Baghdad, Iraq, landing less than a mile from the US Embassy near the statue of the Unknown Soldier. No casualties were reported. On January 4, 2020, four Katyusha rockets were fired in the Baghdad area. According to two Iraqi police sources and an official Iraqi military statement, one Katyusha rocket landed in the Green Zone in Celebration Square near the U.S. Embassy and another landed in the nearby Jadriya neighborhood. Two other Katyusha rockets landed in the Balad air base, which houses U.S. troops, according to two security sources.
https://en.wikipedia.org/wiki?curid=16959
Kathy Acker Kathy Acker (April 18, 1947 – November 30, 1997) was an American experimental novelist, playwright, essayist, and postmodernist writer. She was influenced by the Black Mountain School poets, William S. Burroughs, David Antin, French critical theory, Carolee Schneemann, Eleanor Antin, and by philosophy, mysticism, and pornography, as well as by the classic literature from which she artistically plagiarized. The sole biological daughter of Donald and Claire (née Weill) Lehman, Kathy Acker was born Karen Lehman in New York City in 1947, although the Library of Congress gives her birth year as 1948, the Encyclopædia Britannica gives her birth date as April 18, 1948, and most obituaries, including "The New York Times", cited the year as 1944. Her family was from a wealthy, assimilated, German-Jewish background that was culturally, but not religiously, Jewish. Her paternal grandmother, Florence Weill, was an Austrian Jew who had inherited a small fortune from the glove-making business. Acker's grandparents went into political exile from Alsace-Lorraine prior to World War I due to rising German nationalism, moving to Paris and then to the United States. According to Acker, her grandparents were "first generation French-German Jews" whose ancestors originally hailed from the Pale of Settlement. In an interview with the magazine "Tattoo Jew", Acker stated that religious Judaism "means nothing to me. I don't run away from it, it just means nothing to me" and elaborated that her parents were "high-German Jews" who held cultural prejudices against Yiddish-speaking Eastern European Jews ("I was trained to run away from Polish Jews."). The pregnancy was unplanned; Donald Lehman abandoned the family before Karen's birth. Her stepfather's name, Albert Alexander, appears on her birth certificate but not in the April 18, 1947 registry of births in NYC (New York, New York, Birth Index, 1910-1965), which lists her as Karen Lehman. Her relationship with her domineering mother was, even into adulthood, fraught with hostility and anxiety, because Acker felt unloved and unwanted. Her mother soon remarried, to Albert Alexander, whose surname Kathy was given, although the writer later described her mother's union with Alexander as a passionless marriage to an ineffectual man. Karen (later Kathy) had a half-sister, Wendy, by her mother's second marriage, but the two women were never close and were long estranged. By the time of Kathy's death, she had requested that her friends not contact Wendy, as some had suggested. Acker was raised in her mother and stepfather's home on New York's prosperous Upper East Side. In 1978, Claire Alexander, Karen's mother, committed suicide. As an adult, Acker tried to track down her father, but abandoned her search after she discovered that he had killed a trespasser on his yacht and spent six months in a psychiatric asylum until the state cleared him of murder charges. In 1966, she married Robert Acker, and changed her last name from Alexander to Acker. Robert Acker was the son of lower-middle-class Polish-Jewish immigrants. Kathy's parents had held hopes that their daughter would marry a wealthy man and did not expect the marriage to last long. Although her birth name was Karen, she was known as Kathy by her friends and family. Her first work appeared in print as part of the burgeoning New York City literary underground of the mid-1970s.
Like a number of other young women struggling to become writers and artists, she worked for a few months as a stripper, and listening to the stories of women so different from those she had known before profoundly influenced her early work and changed her understanding of gender and power relationships. During the 1970s Acker often moved back and forth between San Diego, San Francisco and New York. She married composer and experimental musician Peter Gordon shortly before the end of their seven-year relationship. Later, she had relationships with theorist, publisher, and critic Sylvère Lotringer and then with filmmaker and film theorist Peter Wollen. In 1996, Acker left San Francisco and moved to London to live with writer and music critic Charles Shaar Murray. She married twice, and while most of her relationships were with men, she was openly bisexual. In 1979, she won the Pushcart Prize for her short story "New York City in 1979". During the early 1980s she lived in London, where she wrote several of her most critically acclaimed works. After returning to the United States in the late 1980s she worked as an adjunct professor at the San Francisco Art Institute for about six years and as a visiting professor at several universities, including the University of Idaho, the University of California, San Diego, the University of California, Santa Barbara, the California Institute of the Arts, and Roanoke College. In April 1996 Acker was diagnosed with breast cancer and elected to have a double mastectomy. In January 1997 she wrote about her loss of faith in conventional medicine in a "Guardian" article, "The Gift of Disease". In the article, she explains that after unsuccessful surgery, which left her feeling physically mutilated and emotionally debilitated, she rejected the passivity of the patient in the medical mainstream and began to seek out the advice of nutritionists, acupuncturists, psychic healers, and Chinese herbalists. She found appealing the claim that instead of being an object of knowledge, as in Western medicine, the patient becomes a seer, a seeker of wisdom, and that illness becomes the teacher and the patient the student. However, after pursuing several forms of alternative medicine in England and the United States, Acker died a year and a half later, on November 30, 1997, aged 50, from complications of cancer in a Tijuana, Mexico, alternative cancer clinic, the only alternative-treatment facility that would accept her at her advanced stage of cancer. She died in what was called "Room 101", to which her friend Alan Moore quipped, "There's nothing that woman can't turn into a literary reference". (Room 101, in the climax of George Orwell's "Nineteen Eighty-Four", is the basement torture chamber in which the Party attempts to subject a prisoner to his or her own worst fears.) At Brandeis University she pursued undergraduate coursework in Classics at a time when Angela Davis was also at the university. She became interested in writing novels, and moved to California to attend the University of California, San Diego, where David Antin, Eleanor Antin, and Jerome Rothenberg were among her teachers. She received her bachelor's degree in 1968. After moving to New York, she completed two years of graduate school in Classics at the City University of New York, specializing in Greek, but did not earn a graduate degree. During her time in New York she was employed as a file clerk, secretary, stripper, and porn performer. Acker was associated with the New York punk movement of the late 1970s and early 1980s.
The punk aesthetic influenced her literary style. In the 1970s, before the term "postmodernism" was popular, Acker began writing her books; they contain features that would eventually be considered hallmarks of postmodernist work. Her controversial body of work borrows heavily from the experimental styles of William S. Burroughs and Marguerite Duras. Her writing strategies at times used forms of pastiche and deployed Burroughs's cut-up technique, which involves cutting up and scrambling passages and sentences into a somewhat random remix. Acker defined her writing as existing in the post-nouveau roman European tradition. In her texts, she combines biographical elements, power, sex and violence. Indeed, critics often compare her writing to that of Alain Robbe-Grillet and Jean Genet. Critics have also noticed links to Gertrude Stein and photographers Cindy Sherman and Sherrie Levine. Acker's novels also exhibit a fascination with, and an indebtedness to, tattoos. She dedicated "Empire of the Senseless" to her tattooist. Acker published her first book, "Politics", in 1972. Although the collection of poems and essays did not garner much critical or public attention, it did establish her reputation within the New York punk scene. In 1973, she published her first novel (under the pseudonym Black Tarantula), "The Childlike Life of the Black Tarantula: Some Lives of Murderesses". The following year she published her second novel, "I Dreamt I Was a Nymphomaniac: Imagining". Both works are reprinted in "Portrait of an Eye". In 1979, she received popular attention when she won a Pushcart Prize for her short story "New York City in 1979". She did not receive critical attention, however, until she published "Great Expectations" in 1982. The opening of "Great Expectations" is an obvious re-writing of Charles Dickens's work of the same name. It features her usual subject matter, including a semi-autobiographical account of her mother's suicide and the appropriation of several other texts, including Pierre Guyotat's violent and sexually explicit "Eden Eden Eden". That same year, Acker published a chapbook entitled "Hello, I'm Erica Jong". She appropriated from a number of influential writers, including Charles Dickens, Nathaniel Hawthorne, John Keats, William Faulkner, T. S. Eliot, Charlotte and Emily Brontë, the Marquis de Sade, Georges Bataille, and Arthur Rimbaud. Acker wrote the script for the 1983 film "Variety". Acker also wrote a text on the photographer Marcus Leatherdale, published in 1983 in an art catalogue for the Molotov Gallery in Vienna. In 1984, "Blood and Guts in High School" became Acker's first British publication, appearing soon after its publication by Grove Press in New York. That same year, she was signed by Grove Press, one of the legendary independent publishers committed to controversial and avant-garde writing; she was one of the last writers taken on by Barney Rosset before the end of his tenure there. Most of her work was published by Grove, including re-issues of important earlier work. She wrote for several magazines and anthologies, including the periodicals "RE/Search", "Angel Exhaust", "monochrom" and "Rapid Eye". As she neared the end of her life, her work was better received by the conventional press; for example, "The Guardian" published a number of her essays, interviews and articles, among them an interview with the Spice Girls.
"In Memoriam to Identity" draws attention to popular analyses of Rimbaud's life and "The Sound and the Fury", constructing or revealing social and literary identity. Although known in the literary world for creating a whole new style of feminist prose and for her transgressive fiction, she was also a punk and feminist icon for her devoted portrayals of subcultures, strong-willed women, and violence. Notwithstanding the increased recognition she got for "Great Expectations", "Blood and Guts in High School" is often considered Acker's breakthrough work. Published in 1984, it is one of her most extreme explorations of sexuality and violence. Borrowing from, among other texts, Nathaniel Hawthorne's "The Scarlet Letter", "Blood and Guts" details the experiences of Janey Smith, a sex addicted and pelvic inflammatory disease-ridden urbanite who is in love with a father who sells her into slavery. Many critics criticized it for being demeaning toward women, and Germany banned it completely. Acker published the German court judgment against "Blood and Guts in High School" in "Hannibal Lecter, My Father". Acker published "Empire of the Senseless" in 1988 and considered it a turning point in her writing. While she still borrows from other texts, including Mark Twain's "The Adventures of Huckleberry Finn", the appropriation is less obvious. However, one of Acker's more controversial appropriations is from William Gibson's 1984 text, "Neuromancer", in which Acker equates code with the female body and its militaristic implications. In 1988, she published "Literal Madness: Three Novels", which included three previously published works: "Florida" deconstructs and reduces John Huston's 1948 film noir "Key Largo" into its base sexual politics, "Kathy Goes to Haiti" details a young woman's relationship and sexual exploits while on vacation, and "My Death My Life by Pier Paolo Pasolini" provides a fictional "autobiography" of the Italian filmmaker in which he solves his own murder. Between 1990–93, she published four more books: "In Memoriam to Identity" (1990); "Hannibal Lecter, My Father" (1991); "Portrait of an Eye: Three Novels" (1992), also composed of already published works; and "My Mother: Demonology" (1992). Her last novel, "Pussy, King of the Pirates", was published in 1996, which she, Rico Bell, and the rest of the Mekons - the rock band - also reworked into an operetta, which they performed at the Museum of Contemporary Art, Chicago, in 1997. In 2007, Amandla Publishing re-published Acker's articles that she wrote for the "New Statesman" from 1989–91. Grove Press published two unpublished early novellas in the volume "Rip-Off Red, Girl Detective and The Burning Bombing of America", and a collection of selected work, "Essential Acker", edited by Amy Scholder and Dennis Cooper in 2002. Three volumes of her non-fiction have been published and re-published since her death. In 2002, New York University staged "Discipline and Anarchy", a retrospective exhibition of her works, while in 2008 London's Institute of Contemporary Arts screened an evening of films influenced by Acker. A collection of essays on Acker's work, "Lust For Life: On the Writings of Kathy Acker", edited by Carla Harryman, Avital Ronell, and Amy Scholder, was published by Verso in 2006 and includes essays by Nayland Blake, Leslie Dick, Robert Glück, Carla Harryman, Laurence Rickels, Avital Ronell, Barrett Watten, and Peter Wollen. 
In 2009, "Kathy Acker and Transnationalism", the first collection of essays devoted to academic study of Acker, was published. In 2015, Semiotext(e) published "I'm Very Into You", a book of Acker's email correspondence with media theorist McKenzie Wark, edited by Matias Viegener, her executor and head of the Kathy Acker Literary Trust. Her personal library is housed in a reading room at the University of Cologne in Germany, and her papers are divided between NYU's Fales Library and the Rubenstein Rare Book and Manuscript Library at Duke University. A limited body of her recorded readings and discussions of her works exists in the special collections archive of the University of California, San Diego. In 2013, the Acker Award was launched, named for Kathy Acker. Awarded to living and deceased members of the San Francisco or New York avant-garde art scene, it is financed by Alan Kaufman and Clayton Patterson. In 2017, American writer and artist Chris Kraus published "After Kathy Acker: A Literary Biography", the first book-length biography of Acker's life experiences and literary strategies. In 2018, British writer Olivia Laing published "Crudo", a fictional text laced with references to Acker's writings, whose main character is a woman called Kathy suffering from double breast cancer; the book's events, however, are set in August and September 2017. In 2019, Amy Scholder and Douglas A. Martin co-edited "Kathy Acker: The Last Interview and Other Conversations".
https://en.wikipedia.org/wiki?curid=16960
Koh-i-Noor The Koh-i-Noor (Persian: کوه نور; "Mountain of Light"), also spelt Kohinoor and Koh-i-Nur, is one of the largest cut diamonds in the world, weighing 105.6 carats (21.12 g). It is part of the British Crown Jewels. It was mined at the Kollur Mine in the Indian subcontinent during the period of the Delhi Sultanate; there is no record of its original weight, but the earliest well-attested weight is 186 old carats (191 metric carats or 38.2 g). The diamond was part of the Mughal Peacock Throne. It changed hands between various factions in south and west Asia, until being ceded to Queen Victoria after the British annexation of the Punjab in 1849. Originally, the stone was of a similar cut to other Mughal-era diamonds, like the Darya-i-Noor, which are now in the Iranian Crown Jewels. In 1851, it went on display at the Great Exhibition in London, but the lacklustre cut failed to impress viewers. Prince Albert, husband of Queen Victoria, ordered it to be re-cut as an oval brilliant by Coster Diamonds. By modern standards, the culet (point at the bottom of a gemstone) is unusually broad, giving the impression of a black hole when the stone is viewed head-on; it is nevertheless regarded by gemologists as "full of life". Because its history involves a great deal of fighting between men, the Koh-i-Noor acquired a reputation within the British royal family for bringing bad luck to any man who wears it. Since arriving in the UK, it has only been worn by female members of the family. Victoria wore the stone in a brooch and a circlet. After she died in 1901, it was set in the Crown of Queen Alexandra, wife of Edward VII. It was transferred to the Crown of Queen Mary in 1911, and finally to the Crown of Queen Elizabeth (later known as the Queen Mother) in 1937 for her coronation as Queen consort. Today, the diamond is on public display in the Jewel House at the Tower of London, where it is seen by millions of visitors each year. The governments of India, Pakistan, Iran, and Afghanistan have all claimed rightful ownership of the Koh-i-Noor and demanded its return ever since the subcontinent gained independence from the UK in 1947. The British government insists the gem was obtained legally under the terms of the Last Treaty of Lahore and has rejected the claims. The diamond may have been mined from the Kollur Mine, a series of deep gravel-clay pits on the banks of the Krishna River in Golconda (present-day Andhra Pradesh), India. It is impossible to know exactly when or where it was found, and many unverifiable theories exist as to its original owner. Babur, the Turco-Mongol founder of the Mughal Empire, wrote about a "famous" diamond that weighed just over 187 old carats, approximately the size of the 186-carat Koh-i-Noor. Some historians think Babur's diamond is the earliest reliable reference to the Koh-i-Noor. According to his diary, it was acquired by Alauddin Khalji, second ruler of the Khalji dynasty of the Delhi Sultanate, when he invaded the kingdoms of southern India at the beginning of the 14th century, and it was probably then in the possession of the Kakatiya dynasty. It later passed to succeeding dynasties of the Sultanate, and Babur received the diamond in 1526 as a tribute for his conquest of Delhi and Agra at the Battle of Panipat. Shah Jahan, the fifth Mughal emperor, had the stone placed into his ornate Peacock Throne. In 1658, his son and successor, Aurangzeb, confined the ailing emperor to Agra Fort.
While in the possession of Aurangzeb, it was allegedly cut by Hortenso Borgia, a Venetian lapidary, drastically reducing the weight of the large stone. For this carelessness, Borgia was reprimanded and fined 10,000 rupees. According to recent research, however, the story of Borgia cutting the diamond is not correct, and is most probably confused with the history of the Orlov, part of Catherine the Great's imperial Russian sceptre in the Kremlin. Following the 1739 invasion of Delhi by Nadir Shah, the Afsharid Shah of Persia, the treasury of the Mughal Empire was looted by his army in an organised and thorough acquisition of the Mughal nobility's wealth. Along with millions of rupees and an assortment of historic jewels, the Shah also carried away the Koh-i-Noor. He exclaimed "Koh-i-Noor!", Persian for "Mountain of Light", when he obtained the famous stone. One of his consorts said, "If a strong man were to throw four stones – one north, one south, one east, one west, and a fifth stone up into the air – and if the space between them were to be filled with gold, all would not equal the value of the Koh-i-Noor". After Nadir Shah was killed and his empire collapsed in 1747, the Koh-i-Noor fell to his grandson, who in 1751 gave it to Ahmad Shah Durrani, founder of the Afghan Empire, in return for his support. One of Ahmad's descendants, Shuja Shah Durrani, wore a bracelet containing the Koh-i-Noor on the occasion of Mountstuart Elphinstone's visit to Peshawar in 1808. A year later, Shuja formed an alliance with the United Kingdom to help defend against a possible invasion of Afghanistan by Russia. He was quickly overthrown, but fled with the diamond to Lahore, where Ranjit Singh, founder of the Sikh Empire, in return for his hospitality, insisted upon the gem being given to him; he took possession of it in 1813. Ranjit Singh had the diamond examined by the jewellers of Lahore for two days to ensure that Shuja had not tricked him. After the jewellers confirmed its genuineness, he gave Shuja 125,000 rupees. Ranjit Singh then asked the principal jewellers of Amritsar to estimate the diamond's value; they declared that it was "far beyond all computation". Ranjit Singh then fixed the diamond to the front of his turban and paraded on an elephant so that his subjects could see it. He wore it as an armlet during major festivals such as Diwali and Dussehra, and took it with him when travelling. He would exhibit it to prominent visitors, especially British officers. One day, Ranjit Singh asked the diamond's former owners, Shuja and his wife Wafa Begum, to estimate its value. Wafa Begum replied that if a strong man threw a stone in each of the four cardinal directions and a fifth vertically, and the space between them were filled with gold, the Koh-i-Noor would still be worth more. Ranjit Singh grew paranoid about the Koh-i-Noor being stolen because, in the past, another valuable jewel had been stolen from him while he was intoxicated. He kept the diamond in a high-security facility at Gobindgarh Fort when it was not in use. When the diamond was to be transported, it was placed in a pannier on a guarded camel; 39 other camels with identical panniers were included in the convoy. The diamond was always placed on the first camel immediately behind the guards, but great secrecy was maintained regarding which camel carried it; only Ranjit Singh's treasurer, Misr Beli Ram, knew. In June 1839, Ranjit Singh suffered his third stroke, and it became apparent that he would die soon.
On his deathbed, he started giving away his valuable possessions to religious charities, and appointed his eldest son, Kharak Singh, as his successor. A day before his death, on 26 June 1839, a major argument broke out among his courtiers over the fate of the Koh-i-Noor. Ranjit Singh himself was too weak to speak and communicated using gestures. Bhai Gobind Ram, Ranjit Singh's head Brahmin, insisted that the king had willed the Koh-i-Noor and other jewels to the Jagannath Temple in Puri; the king apparently supported this claim through gestures, as recorded in his court chronicle, the "Umdat ul-Tawarikh". However, the treasurer Beli Ram insisted that it was state property rather than Ranjit Singh's personal property, and should therefore be handed over to Kharak Singh. After Ranjit Singh's death, Beli Ram refused to send the diamond to the temple and hid it in his vaults. Meanwhile, Kharak Singh and prime minister Dhian Singh issued orders stating that the diamond should not be taken out of Lahore. On 8 October 1839, the new emperor, Kharak Singh, was overthrown in a coup by his prime minister Dhian Singh. The prime minister's brother Gulab Singh, Raja of Jammu, came into possession of the Koh-i-Noor. Kharak Singh later died in prison, soon followed by the mysterious death of his son and successor Nau Nihal Singh on 5 November 1840. Gulab Singh held onto the stone until January 1841, when he presented it to Emperor Sher Singh in order to win his favour, after his brother Dhian Singh negotiated a ceasefire between Sher Singh and the overthrown empress Chand Kaur. Gulab Singh had attempted to defend the widowed empress at her fort in Lahore during two days of conflict and shelling by Sher Singh and his troops. Despite handing over the Koh-i-Noor, Gulab Singh returned safely to Jammu as a result of the ceasefire, taking with him a wealth of gold and other jewels from the treasury. On 15 September 1843, both Sher Singh and prime minister Dhian Singh were assassinated in a coup led by Ajit Singh Sandhawalia. The next day, however, in a counter-coup led by Dhian's son Hira Singh, the assassins were killed. Aged 24, Hira Singh succeeded his father as prime minister and installed the five-year-old Duleep Singh as emperor. The Koh-i-Noor was now fastened to the arm of the child emperor in court at Lahore. Duleep Singh and his mother, Empress Jind Kaur, had until then resided in Jammu, the kingdom governed by Gulab Singh. Following his nephew Prime Minister Hira Singh's assassination on 27 March 1844, and the subsequent outbreak of the First Anglo-Sikh War, Gulab Singh himself led the Sikh Empire as its prime minister, and despite defeat in the war, he became the first Maharaja of Jammu and Kashmir on 16 March 1846 under the Treaty of Amritsar. On 29 March 1849, following the conclusion of the Second Anglo-Sikh War, the Kingdom of Punjab was formally annexed to Company rule, and the Last Treaty of Lahore was signed, officially ceding the Koh-i-Noor to Queen Victoria and the Maharaja's other assets to the company. Article III of the treaty read: "The gem called the Koh-i-Noor, which was taken from Shah Sooja-ool-moolk by Maharajah Ranjeet Singh, shall be surrendered by the Maharajah of Lahore to the Queen of England ("sic")".
The lead signatory of the treaty on behalf of the eleven-year-old Maharaja Duleep Singh was his commander-in-chief Tej Singh, a loyalist of Maharaja Gulab Singh, who had previously been in possession of the Koh-i-Noor and had gained Kashmir from the Sikh Empire, via treaty with Britain, following the First Anglo-Sikh War. The Governor-General in charge of the ratification of this treaty was the Marquess of Dalhousie. The manner in which he aided the transfer of the diamond was criticized even by some of his contemporaries in Britain. Although some thought it should have been presented as a gift to Queen Victoria by the East India Company, it is clear that Dalhousie believed the stone was a spoil of war, and treated it accordingly, ensuring that it was officially surrendered to her by Duleep Singh, the youngest son of Ranjit Singh. The presentation of the Koh-i-Noor by the East India Company to the queen was the latest in a long history of transfers of the diamond as a coveted spoil of war. Duleep Singh had been placed in the guardianship of Dr John Login, a surgeon in the British Army serving in the Presidency of Bengal; Duleep Singh would move to England in 1854. In due course, the Governor-General received the Koh-i-Noor from Dr Login, who had been appointed Governor of the Citadel on 6 April 1848, under a receipt dated 7 December 1849, in the presence of members of the Board of Administration for the affairs of the Punjab: Sir Henry Lawrence (President), C. G. Mansel, John Lawrence and Sir Henry Elliot (Secretary to the Government of India). Legend in the Lawrence family has it that before the voyage, John Lawrence left the jewel in his waistcoat pocket when it was sent to be laundered, and was most grateful when it was returned promptly by the valet who found it. On 1 February 1850, the jewel was sealed in a small iron safe inside a red dispatch box, both sealed with red tape and a wax seal, and kept in a chest at the Bombay Treasury awaiting a steamer ship from China. It was then sent to England for presentation to Queen Victoria, in the care of Captain J. Ramsay and Brevet Lt. Col. F. Mackeson, under tight security arrangements, one of which was the placement of the dispatch box in a larger iron safe. They departed from Bombay on 6 April on board HMS "Medea", captained by Captain Lockyer. The ship had a difficult voyage: an outbreak of cholera on board when the ship was in Mauritius had the locals demanding its departure, and they asked their governor to open fire on the vessel and destroy it if there was no response. Shortly afterwards, the vessel was hit by a severe gale that blew for some 12 hours. On arrival in Britain on 29 June, the passengers and mail were unloaded in Plymouth, but the Koh-i-Noor stayed on board until the ship reached Spithead, near Portsmouth, on 1 July. The next morning, Ramsay and Mackeson, in the company of Mr Onslow, the private secretary to the Chairman of the Court of Directors of the British East India Company, proceeded by train to East India House in the City of London and passed the diamond into the care of the chairman and deputy chairman of the East India Company. The Koh-i-Noor was formally presented to Queen Victoria on 3 July 1850 at Buckingham Palace by the deputy chairman of the East India Company. The date had been chosen to coincide with the Company's 250th anniversary. Members of the public were given a chance to see the Koh-i-Noor when the Great Exhibition was staged at Hyde Park, London, in 1851.
It represented the might of the British Empire and took pride of place in the eastern part of the central gallery. Its mysterious past and advertised value of £1–2 million drew large crowds. At first, the stone was put inside a gilded birdcage, but after complaints about its dull appearance, the Koh-i-Noor was moved to a case with black velvet and gas lamps in the hope that it would sparkle better. Despite this, the flawed and asymmetrical diamond still failed to please viewers. Originally, the diamond had 169 facets. It was high-domed, with a flat base and both triangular and rectangular facets, similar in overall appearance to other Mughal-era diamonds which are now in the Iranian Crown Jewels. Disappointment in the appearance of the stone was not uncommon. After consulting mineralogists, including Sir David Brewster, it was decided by Prince Albert, the husband of Queen Victoria, with the consent of the government, to polish the Koh-i-Noor. One of the largest and most famous Dutch diamond merchants, Mozes Coster, was employed for the task. He sent to London one of his most experienced artisans, Levie Benjamin Voorzanger, and his assistants. On 17 July 1852, the cutting began at the factory of Garrard & Co. in Haymarket, using a steam-powered mill built specially for the job by Maudslay, Sons and Field. Under the supervision of Prince Albert and the Duke of Wellington, and the technical direction of the queen's mineralogist, James Tennant, the cutting took thirty-eight days. Albert spent a total of £8,000 on the operation, which reduced the weight of the diamond from 186 old carats (191 modern carats or 38.2 g) to its current 105.6 carats. Brilliant-cut diamonds usually have fifty-eight facets, but the Koh-i-Noor has eight additional "star" facets around the culet, making a total of sixty-six facets. The great loss of weight is to some extent accounted for by the fact that Voorzanger discovered several flaws, one especially large, which he found it necessary to cut away. Although Prince Albert was dissatisfied with such a huge reduction, most experts agreed that Voorzanger had made the right decision and carried out his job with impeccable skill. When Queen Victoria showed the re-cut diamond to the young Maharaja Duleep Singh, the Koh-i-Noor's last non-British owner, he was apparently unable to speak for several minutes afterwards. The much lighter but more dazzling stone was mounted in a honeysuckle brooch and a circlet worn by the queen. At this time, it belonged to her personally, and was not yet part of the Crown Jewels. Although Victoria wore it often, she became uneasy about the way in which the diamond had been acquired. In a letter to her eldest daughter, Victoria, Princess Royal, she wrote in the 1870s: "No one feels more strongly than I do about India or how much I opposed our taking those countries and I think no more will be taken, for it is very wrong and no advantage to us. You know also how I dislike wearing the Koh-i-Noor". After Queen Victoria's death, the Koh-i-Noor was set in the Crown of Queen Alexandra, the wife of Edward VII, and used to crown her at their coronation in 1902. The diamond was transferred to Queen Mary's Crown in 1911, and finally to The Queen Mother's Crown in 1937. When The Queen Mother died in 2002, the crown was placed on top of her coffin for the lying-in-state and funeral.
All these crowns are on display in the Jewel House at the Tower of London, with crystal replicas of the diamond set in the older crowns. The original bracelet given to Queen Victoria can also be seen there. A glass model of the Koh-i-Noor shows visitors how it looked when it was brought to the United Kingdom. Replicas of the diamond in this and its re-cut forms can also be seen in the 'Vault' exhibit at the Natural History Museum in London. During the Second World War, the Crown Jewels were moved from their home at the Tower of London to Windsor Castle. In 1990, "The Sunday Telegraph", citing a biography of the French army general Jean de Lattre de Tassigny by his widow, Simonne, reported that George VI hid the Koh-i-Noor at the bottom of a pond or lake near Windsor Castle, about 32 km (20 miles) outside London, where it remained until after the war. The only people who knew of the hiding place were the king and his librarian, Sir Owen Morshead, who apparently revealed the secret to the general and his wife on their visit to England in 1949. The Koh-i-Noor has long been a subject of diplomatic controversy, with India, Pakistan, Iran, and Afghanistan all demanding its return from the UK at various points. The Government of India, believing the gem was rightfully theirs, first demanded the return of the Koh-i-Noor as soon as independence was granted in 1947. A second request followed in 1953, the year of the coronation of Queen Elizabeth II. Each time, the British Government rejected the claims, saying that ownership was non-negotiable. In 2000, several members of the Indian Parliament signed a letter calling for the diamond to be given back to India, claiming it was taken illegally. British officials said that a variety of claims meant it was impossible to establish the diamond's original owner, and that it had been part of Britain's heritage for more than 150 years. In July 2010, while visiting India, David Cameron, the Prime Minister of the United Kingdom, said of returning the diamond, "If you say yes to one you suddenly find the British Museum would be empty. I am afraid to say, it is going to have to stay put". On a subsequent visit in February 2013, he said, "They're not having that back". In April 2016, the Indian Culture Ministry stated it would make "all possible efforts" to arrange the return of the Koh-i-Noor to India, despite the Indian government having earlier conceded that the diamond was a gift. The Solicitor General of India had made that concession before the Supreme Court of India in response to public interest litigation brought by a campaign group, saying, "It was given voluntarily by Last Sikh Ruler to the British as compensation for help in the Sikh wars. The Koh-i-Noor is not a stolen object". In 1976, Pakistan asserted its ownership of the diamond, saying its return would be "a convincing demonstration of the spirit that moved Britain voluntarily to shed its imperial encumbrances and lead the process of decolonisation". In a letter to the Prime Minister of Pakistan, Zulfikar Ali Bhutto, the Prime Minister of the United Kingdom, James Callaghan, wrote, "I need not remind you of the various hands through which the stone has passed over the past two centuries, nor that explicit provision for its transfer to the British crown was made in the peace treaty with the Maharajah of Lahore in 1849. I could not advise Her Majesty that it should be surrendered".
In 2000, the Taliban's foreign affairs spokesman, Faiz Ahmed Faiz, said the Koh-i-Noor was the legitimate property of Afghanistan and demanded that it be handed over to the regime. "The history of the diamond shows it was taken from us (Afghanistan) to India, and from there to Britain. We have a much better claim than the Indians", he said. The Afghan claim derives from the memoirs of Shah Shuja Durrani, which state that he surrendered the diamond to Ranjit Singh while Singh was having his son tortured in front of him; on this basis, Afghans argue that the Maharajah of Lahore acquired the stone illegitimately. Because of the quadripartite dispute over the diamond's rightful ownership, various compromises have been suggested to bring the dispute to an end. These include dividing the diamond into four, with a piece given to each of Afghanistan, India, and Pakistan, and the final piece retained by the British Crown. Another suggestion is that the jewel be housed in a special museum at the Wagah border between India and Pakistan. However, this suggestion does not address the Afghan claim, nor the reality of current British possession. None of these compromises aligns with the position of the British Government, which has held since the end of the British Raj that the status of the diamond is "non-negotiable". The Koh-i-Noor made its first appearance in popular culture in "The Moonstone" (1868), a 19th-century British epistolary novel by Wilkie Collins, generally considered to be the first full-length detective novel in the English language. In his preface to the first edition of the book, Collins says that he based his eponymous "Moonstone" on the histories of two stones: the Orlov, a diamond in the Russian Imperial Sceptre, and the Koh-i-Noor. In the 1966 Penguin Books edition of "The Moonstone", J. I. M. Stewart states that Collins used G. C. King's "The Natural History, Ancient and Modern, of Precious Stones …" (1865) to research the history of the Koh-i-Noor. The Koh-i-Noor also features in Agatha Christie's 1925 novel "The Secret of Chimneys", where it is hidden somewhere inside a large country house and is discovered at the end of the novel. The diamond had been stolen from the Tower of London by a Parisian gang leader, who replaced it with a replica stone.
https://en.wikipedia.org/wiki?curid=16966
Kvass Kvass is a traditional fermented Slavic and Baltic beverage commonly made from rye bread, which is known in many Central and Eastern European and Asian countries as "black bread". The colour of the bread used contributes to the colour of the resulting drink. Kvass is classified as a "non-alcoholic" drink by Ukrainian, Belarusian, Russian, Latvian, Lithuanian, Polish, Hungarian, Serbian, and Romanian standards, as the alcohol content from fermentation is typically low (0.5–1.0% or 1–2 proof). It may be flavoured with fruits such as strawberries or raisins, or with herbs such as mint. The word "kvass" is derived from Old Church Slavonic квасъ, from Proto-Slavic *"kvasъ" ('leaven', 'fermented drink') and ultimately from the Proto-Indo-European base *"kwat-" ('sour'). Today the words used in these languages are almost the same: Belarusian, Russian, Ukrainian and Serbian "квас"; Polish "kwas chlebowy" ('bread kvass', the adjective being used to differentiate it from "kwas", 'acid', originally from "kwaśne", 'sour'); Latvian "kvass"; Romanian "cvas"; Hungarian "kvasz"; Chinese "格瓦斯". Non-cognates include Lithuanian "gira" ('beverage'), Estonian "kali", and Finnish "kalja". Kvass is made by the natural fermentation of bread made from wheat, rye, or barley, and is sometimes flavoured with fruit, berries, raisins, or birch sap. Modern homemade kvass most often uses black or regular rye bread, usually dried (as rusks called "sukhari"), baked into croutons, or fried, with the addition of sugar or fruit (e.g. apples or raisins), a yeast culture, and "zakvaska" ("kvass fermentation starter"). Kvass originated at the time of ancient Rus'. It has been a common drink in Russia and other Eastern European countries since at least the Middle Ages. The drink is comparable to other ancient fermented grain beverages, including the beer brewed from barley by the ancient Egyptians, the pombe or millet beer of Africa, the so-called rice wines of Asia, and the chicha made with corn or cassava by the natives of the Americas. Kvass was invented by the Slavs and became most popular among the East Slavs. The word "kvass" was first mentioned in the "Primary Chronicle", in the description of the events of the year 996, following the Christianization of Kievan Rus'. According to the Merriam-Webster Dictionary and the Oxford English Dictionary, the first mention of kvass in an English text occurred around 1553. In the times of Old Russia, kvass developed into a national drink consumed by all classes. The peak of its popularity came in the 15th and 16th centuries, when every Russian, from the poor to the Tsars, drank on average 200 to 250 litres of kvass per year. Even then there existed many different varieties of kvass: red, white, sweet, sour, mint, honey, berry and so on, with many local variations. In Russia, under Peter the Great, it was the most common non-alcoholic drink in every class of society. William Tooke, describing Russian drinking habits in 1799, stated that "The most common domestic drink is "quas", a liquor prepared from pollard, meal, and bread, or from meal and malt, by an acid fermentation. It is cooling and well-tasted." Apart from drinking kvass, families (especially poor ones) used it as the basis of many dishes they consumed. Traditional cold summertime soups of Russian cuisine, such as okroshka, botvinya and tyurya, are based on kvass. A similar tradition is found in Romanian cuisine, where the liquid used for cooking, made by fermenting wheat or barley bran, is called borș.
Kvass was reported to be consumed in excess by peasants, low-class citizens, and monks; in fact, it is sometimes said that it was usual for them to drink more kvass than water. In the 19th century, a kvass industry emerged and less natural versions of the drink became increasingly widespread. On the other hand, the popularity of kvass and the competition in the market led to the emergence of many varieties, incorporating herbs, fruits and berries. At that time kvass vendors, called "kvasniki" (singular "kvasnik"), were found on the streets of almost every city, often specializing in particular kinds of kvass: strawberry kvass, apple kvass, and so on. Nowadays, kvass production is a multimillion-dollar industry, though it has been struggling ever since the introduction of Western soft drinks to Eastern European countries. Kvass was once sold during the summer only, but is now produced, packaged, and sold year-round. Although the massive flood of Western soft drinks such as Coca-Cola and Pepsi after the fall of the USSR substantially shrank the market share of kvass in Russia, in recent years it has regained its original popularity, often marketed as a national soft drink or "patriotic" alternative to cola. For example, the Russian company Nikola (whose name sounds like "not cola" in Russian) has promoted its brand of kvass with an advertising campaign emphasizing "anti cola-nisation". Moscow-based Business Analytica reported in 2008 that bottled kvass sales had tripled since 2005 and estimated that per-capita consumption of kvass in Russia would reach three litres in 2008. Between 2005 and 2007, cola's share of the Moscow soft drink market fell from 37% to 32%, while kvass's share more than doubled over the same period, reaching 16% in 2007. In response, Coca-Cola launched its own brand of kvass in May 2008, the first time a foreign company had made an appreciable entrance into the Russian kvass market. Pepsi has also signed an agreement with a Russian kvass manufacturer to act as a distribution agent. The development of new technologies for storage and distribution, and heavy advertising, have contributed to this surge in popularity; three new major brands have been introduced since 2004. Kvass is produced in Russia in different flavours, matched to the tastes of the different regions, which prefer either sweet kvass or the more sour variety; versions flavoured with red bilberry or cranberry also exist. Kvass may have appeared in Poland as early as the 10th century, mainly due to the trade between the Kingdom of Poland and Kievan Rus', where it originated. The production of kvass went on for several hundred years, as recipes were passed down from parent to child; this continued in the Polish–Lithuanian Commonwealth. It was at first commonly drunk by peasants who worked the fields in the eastern parts of the country, and eventually spread to the szlachta (Polish nobility). One example of this is "kwas chlebowy sapieżyński kodeński", an old type of Polish kvass that is still sold as a contemporary brand. Its origins can be traced back to the 1500s, when Jan Sapieha, a magnate of the House of Sapieha, was granted land by the Polish king. On those lands he founded the town of Kodeń, and then bought the mills and 24 villages of the surrounding area from their previous landowners. It was then that the taste of kvass became known among the Polish szlachta, who used it for its supposed healing qualities.
After the last partition of the Polish–Lithuanian Commonwealth in 1795, Poland ceased to be an independent state for 123 years. Throughout the 19th century, kvass remained popular among Poles who lived in Congress Poland under Imperial Russia and in Austrian Galicia, especially the inhabitants of rural areas. Production of the beverage in Poland on an industrial scale dates to the interwar period, when the Polish state regained independence as the Second Polish Republic. In interwar Poland, kvass was brewed and sold in large quantities by magnates of the Polish drinks market such as the Varsovian brewery "Haberbusch i Schiele" and the "Karpiński" company. Kvass remained popular in eastern Poland, partly due to the sizeable Belarusian and Ukrainian minorities living there. However, with the collapse of many prewar businesses and of much Polish industry during World War II, kvass lost popularity in the aftermath of the war. It lost further ground upon the introduction of Coca-Cola onto the Polish market. Although not as popular in Poland nowadays as it is in neighbouring Ukraine, kvass can still be found in some supermarkets and grocery stores, where it is known in Polish as "kwas chlebowy". Commercial bottled versions of the drink are the most common variant, as there are companies that specialize in manufacturing a more modern version of the drink (some variants are manufactured in Poland, while others are imported from neighbouring countries, Lithuania and Ukraine being the most popular sources). However, recipes for a traditional version of kvass survive, some of them originating in eastern Poland. Although commercial kvass is much easier to find in Polish shops, Polish manufacturers of more natural and healthier variants of kvass have become increasingly popular both within and outside the country's borders. After the dissolution of the Soviet Union in 1991, kvass vendors disappeared from the streets of Latvia because new health laws banned its sale on the street. Economic disruptions forced many kvass factories to close, and the Coca-Cola Company moved in, quickly dominating the market for soft drinks. In 1998 the local soft drink industry adapted by selling bottled kvass and launching aggressive marketing campaigns. This surge in sales was stimulated by the fact that kvass sold for about half the price of Coca-Cola. In just three years, kvass came to constitute as much as 30% of the soft drink market in Latvia, while the market share of Coca-Cola fell from 65% to 44%; the Coca-Cola Company lost about $1 million in Latvia in 1999 and 2000. The situation was similar in the other Baltic countries and Russia. Coca-Cola responded by buying kvass manufacturers as well as making kvass at its own soft drink plants. In Lithuania, kvass is known as "gira" and is widely available in bottles and on draft. Many restaurants in Vilnius make their own "gira", which they sell on the premises. Strictly speaking, "gira" can be made from anything fermentable, such as caraway tea, beetroot juice, or berries, but it is made mainly from black bread or from barley or rye malt. In the United Kingdom, kvass is practically unknown, as there are no cultural ties to it within the nation's history and no renowned kvass breweries in the country.
However, with the influx of immigrants following the 2004 enlargement of the European Union, a number of stores selling food and beverages from Eastern Europe cropped up throughout the UK, many of them stocking kvass. In recent years, kvass has also become more popular in Serbia. In Norway, kvass has traditionally been brewed locally in Troms og Finnmark, the county closest to Russia. The influx of Eastern European migrant workers led to a rise in demand for their native cuisines, which in turn led Norwegian food companies to start producing, among other products, kvass. "Kvas" is a surname in Russia and some other countries. The name of Kvasir, a wise being in Norse mythology, is possibly related to kvass. The Russian expression "Перебиваться с хлеба на квас" (literally, "to scrape by from bread to kvass") means to barely make ends meet, and may be loosely translated as "to be on the breadline". To understand the phrase, one has to know that in poor families kvass was made from stale leftovers of rye bread. In the Polish language there is an old rhyming folk song which records the history of kvass in the country: it was drunk by generations of Polish reapers as a thirst-quenching beverage during the hard work of the harvest season, long before it became popular as a medicinal drink among the szlachta. In Tolstoy's "War and Peace", French soldiers become aware of kvass on entering Moscow, enjoying it but referring to it as "pig's lemonade". In Sholem Aleichem's "Motl, Peysi the Cantor's Son", diluted kvass is the focus of one of the get-rich-quick schemes of Motl's older brother.
https://en.wikipedia.org/wiki?curid=16971
Kolmogorov–Arnold–Moser theorem The Kolmogorov–Arnold–Moser (KAM) theorem is a result in dynamical systems about the persistence of quasiperiodic motions under small perturbations. The theorem partly resolves the small-divisor problem that arises in the perturbation theory of classical mechanics. The problem is whether or not a small perturbation of a conservative dynamical system results in a lasting quasiperiodic orbit. The original breakthrough on this problem was given by Andrey Kolmogorov in 1954. This was rigorously proved and extended by Jürgen Moser in 1962 (for smooth twist maps) and Vladimir Arnold in 1963 (for analytic Hamiltonian systems), and the general result is known as the KAM theorem. Arnold originally thought that this theorem could apply to the motions of the Solar System or other instances of the "n"-body problem, but it turned out to work only for the three-body problem because of a degeneracy in his formulation of the problem for larger numbers of bodies. Later, Gabriella Pinzari showed how to eliminate this degeneracy by developing a rotation-invariant version of the theorem. The KAM theorem is usually stated in terms of trajectories in the phase space of an integrable Hamiltonian system. The motion of an integrable system is confined to an invariant torus (a doughnut-shaped surface). Different initial conditions of the integrable Hamiltonian system will trace different invariant tori in phase space. Plotting the coordinates of an integrable system would show that they are quasiperiodic. The KAM theorem states that if the system is subjected to a weak nonlinear perturbation, some of the invariant tori are deformed and survive, while others are destroyed. Surviving tori meet the non-resonance condition, i.e., they have "sufficiently irrational" frequencies. This implies that the motion continues to be quasiperiodic, with the independent periods changed (as a consequence of the non-degeneracy condition). The KAM theorem quantifies the level of perturbation that can be applied for this to be true. Those KAM tori that are destroyed by perturbation become invariant Cantor sets, named "Cantori" by Ian C. Percival in 1979. The non-resonance and non-degeneracy conditions of the KAM theorem become increasingly difficult to satisfy for systems with more degrees of freedom. As the number of dimensions of the system increases, the volume occupied by the tori decreases. As the perturbation increases and the smooth curves disintegrate, we move from KAM theory to Aubry–Mather theory, which requires less stringent hypotheses and works with Cantor-like sets. The existence of a KAM theorem for perturbations of quantum many-body integrable systems is still an open question, although it is believed that arbitrarily small perturbations will destroy integrability in the infinite-size limit. An important consequence of the KAM theorem is that for a large set of initial conditions the motion remains perpetually quasiperiodic. The methods introduced by Kolmogorov, Arnold, and Moser have developed into a large body of results related to quasiperiodic motions, now known as KAM theory. Notably, it has been extended to non-Hamiltonian systems (starting with Moser), to non-perturbative situations (as in the work of Michael Herman), and to systems with fast and slow frequencies (as in the work of Mikhail B. Sevryuk).
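To make "sufficiently irrational" precise, the surviving frequency vectors are usually required to satisfy a Diophantine non-resonance condition. A standard form (the constants vary between formulations, so this is a representative statement rather than the theorem's unique hypothesis) is

$$|\langle k, \omega \rangle| \geq \frac{\gamma}{|k|^{\tau}} \qquad \text{for all } k \in \mathbb{Z}^{n} \setminus \{0\},$$

for some $\gamma > 0$ and $\tau > n - 1$, where $\omega \in \mathbb{R}^{n}$ is the frequency vector of the torus. Tori whose frequencies satisfy such a bound survive perturbations whose size is small relative to $\gamma$, and the set of frequency vectors excluded by the condition has small measure when $\gamma$ is small.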
https://en.wikipedia.org/wiki?curid=16972
Knapsack problem The knapsack problem is a problem in combinatorial optimization: given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items. The problem often arises in resource allocation, where the decision makers have to choose from a set of non-divisible projects or tasks under a fixed budget or time constraint, respectively. The knapsack problem has been studied for more than a century, with early works dating as far back as 1897. The name "knapsack problem" dates back to the early works of mathematician Tobias Dantzig (1884–1956), and refers to the commonplace problem of packing the most valuable or useful items without overloading the luggage. A 1999 study of the Stony Brook University Algorithm Repository showed that, out of 75 algorithmic problems, the knapsack problem was the 19th most popular and the third most needed after suffix trees and the bin packing problem. Knapsack problems appear in real-world decision-making processes in a wide variety of fields, such as finding the least wasteful way to cut raw materials, selection of investments and portfolios, selection of assets for asset-backed securitization, and generating keys for the Merkle–Hellman and other knapsack cryptosystems. One early application of knapsack algorithms was in the construction and scoring of tests in which the test-takers have a choice as to which questions they answer. For small examples, it is a fairly simple process to provide the test-takers with such a choice. For example, if an exam contains 12 questions each worth 10 points, the test-taker need only answer 10 questions to achieve a maximum possible score of 100 points. However, on tests with a heterogeneous distribution of point values, it is more difficult to provide choices. Feuerman and Weiss proposed a system in which students are given a heterogeneous test with a total of 125 possible points. The students are asked to answer all of the questions to the best of their abilities. Of the possible subsets of problems whose total point values add up to 100, a knapsack algorithm would determine which subset gives each student the highest possible score. The most common problem being solved is the 0-1 knapsack problem, which restricts the number $x_i$ of copies of each kind of item to zero or one. Given a set of $n$ items numbered from 1 up to $n$, each with a weight $w_i$ and a value $v_i$, along with a maximum weight capacity $W$, the task is to maximize $\sum_{i=1}^{n} v_i x_i$ subject to $\sum_{i=1}^{n} w_i x_i \le W$ and $x_i \in \{0, 1\}$. Here $x_i$ represents the number of instances of item $i$ to include in the knapsack. Informally, the problem is to maximize the sum of the values of the items in the knapsack so that the sum of the weights is less than or equal to the knapsack's capacity. The bounded knapsack problem (BKP) removes the restriction that there is only one of each item, but restricts the number $x_i$ of copies of each kind of item to a maximum non-negative integer value $c$, replacing the constraint $x_i \in \{0, 1\}$ with $x_i \in \{0, 1, \ldots, c\}$. The unbounded knapsack problem (UKP) places no upper bound on the number of copies of each kind of item and can be formulated as above, except that the only restriction on $x_i$ is that it be a non-negative integer.
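To make the 0-1 formulation concrete, here is a minimal brute-force sketch in Python (illustrative only: it enumerates every subset, so it is exponential-time and suitable just for checking small instances; the function name and example data are ours, not part of the literature):

from itertools import combinations

def knapsack_bruteforce(values, weights, W):
    # Try every subset of the n items and keep the best total value
    # whose total weight stays within the capacity W.
    n = len(values)
    best = 0
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            weight = sum(weights[i] for i in subset)
            if weight <= W:
                best = max(best, sum(values[i] for i in subset))
    return best

# Example: the optimum is 220 (take items 1 and 2).
print(knapsack_bruteforce([60, 100, 120], [10, 20, 30], 50))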
The knapsack problem is interesting from the perspective of computer science for many reasons. There is a link between the "decision" and "optimization" problems: if there exists a polynomial algorithm that solves the "decision" problem, then one can find the maximum value for the optimization problem in polynomial time by applying this algorithm iteratively while increasing the value of k. On the other hand, if an algorithm finds the optimal value of the optimization problem in polynomial time, then the decision problem can be solved in polynomial time by comparing the value of the solution output by this algorithm with the value of k. Thus, both versions of the problem are of similar difficulty. One theme in the research literature is to identify what the "hard" instances of the knapsack problem look like, or, viewed another way, to identify what properties of instances in practice might make them more amenable than their worst-case NP-complete behaviour suggests. The goal in finding these "hard" instances is for their use in public-key cryptography systems, such as the Merkle–Hellman knapsack cryptosystem. Furthermore, the hardness of the knapsack problem depends on the form of the input. If the weights and profits are given as integers, it is weakly NP-complete, while it is strongly NP-complete if the weights and profits are given as rational numbers. However, in the case of rational weights and profits it still admits a fully polynomial-time approximation scheme. Several algorithms are available to solve knapsack problems, based on the dynamic programming approach, the branch and bound approach, or hybridizations of both approaches. The unbounded knapsack problem (UKP) places no restriction on the number of copies of each kind of item. Here we assume that all weights $w_1, \ldots, w_n$ and the capacity $W$ are strictly positive integers, and we define $m[w]$ to be the maximum value that can be attained with total weight less than or equal to $w$. Observe that $m[w]$ has the following properties: 1. $m[0] = 0$ (the sum of zero items, i.e., the summation of the empty set). 2. $m[w] = \max(v_1 + m[w - w_1],\, v_2 + m[w - w_2],\, \ldots,\, v_n + m[w - w_n])$, taking the maximum over those items with $w_i \le w$, where $v_i$ is the value of the $i$-th kind of item. The second property needs to be explained in detail. During the running of this method, how do we arrive at the weight $w$? There are only $n$ ways, and the predecessor weights are $w - w_1, w - w_2, \ldots, w - w_n$, where there are $n$ kinds of different items in total (by saying different, we mean that the weight and the value are not completely the same). If we know the value of each of these $n$ items and the related maximum values computed previously, we just compare them to each other and obtain the overall maximum, and we are done. Here the maximum of the empty set is taken to be zero. Tabulating the results from $m[0]$ up through $m[W]$ gives the solution. Since the calculation of each $m[w]$ involves examining at most $n$ items, and there are at most $W$ values of $m[w]$ to calculate, the running time of the dynamic programming solution is $O(nW)$. Dividing $w_1, w_2, \ldots, w_n$ and $W$ by their greatest common divisor is a way to improve the running time. Even if P≠NP, the $O(nW)$ complexity does not contradict the fact that the knapsack problem is NP-complete, since $W$, unlike $n$, is not polynomial in the length of the input to the problem.
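As a concrete illustration of the recurrence above, here is a minimal Python sketch of the unbounded-knapsack dynamic program (assuming positive integer weights; the function name and example values are illustrative):

def unbounded_knapsack(values, weights, W):
    # m[w] = maximum value attainable with total weight <= w, allowing
    # unlimited copies of every item; this is the UKP recurrence above.
    m = [0] * (W + 1)
    for w in range(1, W + 1):
        for v_i, w_i in zip(values, weights):
            if w_i <= w:
                m[w] = max(m[w], v_i + m[w - w_i])
    return m[W]

# O(nW) time, O(W) space. Example: five copies of the first item give 300.
print(unbounded_knapsack([60, 100, 120], [10, 20, 30], 50))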
To see why the bound is only pseudopolynomial, note that the length of the $W$ part of the input is proportional to the number of bits in $W$, $\log W$, not to $W$ itself. However, since this runtime is pseudopolynomial, this makes the (decision version of the) knapsack problem a weakly NP-complete problem. A similar dynamic programming solution for the 0-1 knapsack problem also runs in pseudo-polynomial time. Assume $w_1, w_2, \ldots, w_n$ and $W$ are strictly positive integers. Define $m[i, w]$ to be the maximum value that can be attained with weight less than or equal to $w$ using only the first $i$ items. We can define $m[i, w]$ recursively as follows (Definition A): $m[0, w] = 0$; $m[i, w] = m[i-1, w]$ if $w_i > w$ (the new item exceeds the current weight limit); and $m[i, w] = \max(m[i-1, w],\, m[i-1, w - w_i] + v_i)$ if $w_i \le w$. The solution can then be found by calculating $m[n, W]$. To do this efficiently, we can use a table to store previous computations. The following is pseudocode for the dynamic program:

// Input:
// Values (stored in array v)
// Weights (stored in array w)
// Number of distinct items (n)
// Knapsack capacity (W)
// NOTE: The array "v" and array "w" are assumed to store all relevant values starting at index 1.
array m[0..n, 0..W]
for j from 0 to W do:
    m[0, j] := 0
for i from 1 to n do:
    for j from 0 to W do:
        if w[i] > j then:
            m[i, j] := m[i-1, j]
        else:
            m[i, j] := max(m[i-1, j], m[i-1, j-w[i]] + v[i])

This solution will therefore run in $O(nW)$ time and $O(nW)$ space. However, if we take it a step or two further, the method can be made to run in time between $O(nW)$ and $O(2^n)$. From Definition A, we know that there is no need to compute all the weights when the number of items and the items themselves that we choose are fixed; that is to say, the program above computes more than necessary, because the weight changes from 0 to W all the time. All we need to do is compare m[i-1, j] and m[i-1, j-w[i]] + v[i] to obtain m[i, j], and when m[i-1, j-w[i]] is out of range, we simply assign the value of m[i-1, j] to m[i, j]. From this perspective, we can program this method so that it runs recursively, with memoization:

// Input:
// Values (stored in array v)
// Weights (stored in array w)
// Number of distinct items (n)
// Knapsack capacity (W)
// NOTE: The array "v" and array "w" are assumed to store all relevant values starting at index 1.
define value[0..n, 0..W], initialize all value[i, j] := -1    // -1 marks "not yet computed"
define function m(i, j):    // maximum value using the first i items with weight limit j
    if i == 0 or j <= 0 then:
        value[i, j] := 0
        return 0
    if value[i-1, j] = -1 then:           // m(i-1, j) has not been computed yet
        value[i-1, j] := m(i-1, j)
    if w[i] > j then:                     // item i cannot fit in the bag
        value[i, j] := value[i-1, j]
    else:
        if value[i-1, j-w[i]] = -1 then:  // m(i-1, j-w[i]) has not been computed yet
            value[i-1, j-w[i]] := m(i-1, j-w[i])
        value[i, j] := max(value[i-1, j], value[i-1, j-w[i]] + v[i])
    return value[i, j]
run m(n, W)

For example, with 10 different items and a weight limit of 67, computing m(10, 67) by this method touches only the table entries whose subproblems are actually reached, leaving untouched many entries that the iterative version would fill in regardless. Besides, we can break the recursion and convert it into a tree, then cut some leaves and use parallel computing to expedite the running of this method. Another algorithm for 0-1 knapsack, discovered in 1974 and sometimes called "meet-in-the-middle" due to parallels to a similarly named algorithm in cryptography, is exponential in the number of different items but may be preferable to the DP algorithm when $W$ is large compared to $n$: it splits the item set into two halves, enumerates all subsets of each half, and then matches each subset of one half with the best-fitting subset of the other. In particular, if the $w_i$ are nonnegative but not integers, we could still use the dynamic programming algorithm by scaling and rounding (i.e. using fixed-point arithmetic), but if the problem requires $d$ fractional digits of precision to arrive at the correct answer, $W$ will need to be scaled by $10^d$, and the DP algorithm will require $O(W 10^d)$ space and $O(nW 10^d)$ time.
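A compact Python sketch of the meet-in-the-middle idea follows (illustrative and unoptimized; the function names are ours). It enumerates all subsets of each half, prunes the second half's subsets to a weight-sorted "Pareto" list in which value strictly increases with weight, and then pairs every subset of the first half with the heaviest affordable entry of that list via binary search:

from itertools import combinations
from bisect import bisect_right

def mitm_knapsack(values, weights, W):
    # Split the items into two halves A and B.
    n = len(values)
    A, B = range(n // 2), range(n // 2, n)

    def subsets(indices):
        # (total weight, total value) of every subset of the given indices.
        out = []
        indices = list(indices)
        for r in range(len(indices) + 1):
            for c in combinations(indices, r):
                out.append((sum(weights[i] for i in c),
                            sum(values[i] for i in c)))
        return out

    # Sort B's subsets by weight and discard any subset that weighs more
    # than another subset of greater or equal value (the pruning step).
    pareto, best_v = [], -1
    for wt, val in sorted(subsets(B)):
        if val > best_v:
            pareto.append((wt, val))
            best_v = val
    pareto_weights = [wt for wt, _ in pareto]

    best = 0
    for wt, val in subsets(A):
        if wt > W:
            continue
        k = bisect_right(pareto_weights, W - wt) - 1  # heaviest affordable B-subset
        best = max(best, val + (pareto[k][1] if k >= 0 else 0))
    return best

print(mitm_knapsack([60, 100, 120], [10, 20, 30], 50))  # 220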
The algorithm takes $O(2^{n/2})$ space, and efficient implementations of the matching step (for instance, sorting the subsets of B by weight, discarding subsets of B which weigh more than other subsets of B of greater or equal value, and using binary search to find the best match) result in a runtime of $O(n 2^{n/2})$. As with the meet-in-the-middle attack in cryptography, this improves on the $O(n 2^n)$ runtime of a naive brute-force approach (examining all subsets of $\{1, \ldots, n\}$), at the cost of using exponential rather than constant space (see also baby-step giant-step). As for most NP-complete problems, it may be enough to find workable solutions even if they are not optimal. Preferably, however, the approximation comes with a guarantee on the difference between the value of the solution found and the value of the optimal solution. As with many useful but computationally complex algorithms, there has been substantial research on creating and analyzing algorithms that approximate a solution. The knapsack problem, though NP-hard, is one of a collection of problems that can still be approximated to any specified degree. This means that the problem has a polynomial-time approximation scheme; to be exact, the knapsack problem has a fully polynomial-time approximation scheme (FPTAS). George Dantzig proposed a greedy approximation algorithm to solve the unbounded knapsack problem. His version sorts the items in decreasing order of value per unit of weight, $v_i / w_i$. It then proceeds to insert them into the sack, starting with as many copies as possible of the first kind of item until there is no longer space in the sack for more. Provided that there is an unlimited supply of each kind of item, if $m$ is the maximum value of items that fit into the sack, then the greedy algorithm is guaranteed to achieve at least a value of $m/2$. However, for the bounded problem, where the supply of each kind of item is limited, the algorithm may be far from optimal. The fully polynomial-time approximation scheme (FPTAS) for the knapsack problem takes advantage of the fact that the reason the problem has no known polynomial-time solutions is that the profits associated with the items are not restricted. If one rounds off some of the least significant digits of the profit values, then the rounded profits are bounded by a polynomial in the number of items and in $1/\varepsilon$, where $\varepsilon$ is a bound on the correctness of the solution. The standard scheme scales every profit down by a factor $K = \varepsilon \cdot \max_i v_i / n$, rounds down, and runs the exact dynamic program on the scaled instance. This restriction then means that an algorithm can find a solution in polynomial time that is correct within a factor of $(1 - \varepsilon)$ of the optimal solution. Theorem: The set $S'$ computed by this algorithm satisfies $\mathrm{profit}(S') \ge (1 - \varepsilon) \cdot \mathrm{profit}(S^*)$, where $S^*$ is an optimal solution.
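A Python sketch of that profit-scaling scheme follows (a minimal illustration under the stated assumptions, not a tuned implementation; names are ours). It solves the scaled instance exactly by dynamic programming over total scaled profit, then reads off which items were taken:

def knapsack_fptas(values, weights, W, eps):
    # Scale profits down by K = eps * max(values) / n and round down,
    # then solve the scaled instance exactly: d[i][p] is the minimum
    # weight needed to reach scaled profit p using the first i items.
    n = len(values)
    K = eps * max(values) / n
    sv = [int(v // K) for v in values]
    P = sum(sv)
    INF = float("inf")
    d = [[INF] * (P + 1) for _ in range(n + 1)]
    d[0][0] = 0
    for i in range(1, n + 1):
        for p in range(P + 1):
            d[i][p] = d[i - 1][p]
            if sv[i - 1] <= p:
                d[i][p] = min(d[i][p], d[i - 1][p - sv[i - 1]] + weights[i - 1])
    best_p = max(p for p in range(P + 1) if d[n][p] <= W)
    # Trace back the chosen items and total their *original* values.
    total, p = 0, best_p
    for i in range(n, 0, -1):
        if d[i][p] != d[i - 1][p]:   # item i-1 was used to reach profit p
            total += values[i - 1]
            p -= sv[i - 1]
    return total

print(knapsack_fptas([60, 100, 120], [10, 20, 30], 50, 0.5))  # 220 here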
As for most NP-complete problems, it may be enough to find workable solutions even if they are not optimal. Preferably, however, the approximation comes with a guarantee on the difference between the value of the solution found and the value of the optimal solution. As with many useful but computationally complex problems, there has been substantial research on creating and analyzing algorithms that approximate a solution. The knapsack problem, though NP-hard, is one of a collection of problems that can still be approximated to any specified degree. This means that the problem has a polynomial time approximation scheme; to be exact, the knapsack problem has a fully polynomial time approximation scheme (FPTAS). George Dantzig proposed a greedy approximation algorithm to solve the unbounded knapsack problem. His version sorts the items in decreasing order of value per unit of weight, $v_i / w_i$. It then proceeds to insert them into the sack, starting with as many copies as possible of the first kind of item until there is no longer space in the sack for more. Provided that there is an unlimited supply of each kind of item, if $m$ is the maximum value of items that fit into the sack, then the greedy algorithm is guaranteed to achieve at least a value of $m/2$. However, for the bounded problem, where the supply of each kind of item is limited, the algorithm may be far from optimal. The fully polynomial time approximation scheme for the knapsack problem takes advantage of the fact that the reason the problem has no known polynomial time solutions is that the profits associated with the items are not restricted. If one rounds off some of the least significant digits of the profit values, the rounded profits will be bounded by a polynomial in $n$ and $1/\varepsilon$, where $\varepsilon$ is a bound on the correctness of the solution. This restriction means that an algorithm can find a solution in polynomial time that is correct within a factor of $(1 - \varepsilon)$ of the optimal solution: the scheme scales each profit down by a factor $K = \varepsilon P / n$ (where $P$ is the largest profit), rounds down, and runs the dynamic program on the rounded instance. Theorem: the set $S'$ computed by this algorithm satisfies $\operatorname{profit}(S') \ge (1 - \varepsilon) \cdot \operatorname{profit}(S^*)$, where $S^*$ is an optimal solution. Solving the unbounded knapsack problem can be made easier by throwing away items which will never be needed. For a given item $i$, suppose we could find a set of items $J$ such that their total weight is less than the weight of $i$ and their total value is greater than the value of $i$. Then $i$ cannot appear in the optimal solution, because we could always improve any potential solution containing $i$ by replacing $i$ with the set $J$. Therefore, we can disregard the $i$-th item altogether. In such cases, $J$ is said to dominate $i$. (Note that this does not apply to bounded knapsack problems, since we may have already used up the items in $J$.) Finding dominance relations allows us to significantly reduce the size of the search space. There are several different types of dominance relations, which all satisfy an inequality of the form $\sum_{j \in J} w_j x_j \le \alpha w_i$ and $\sum_{j \in J} v_j x_j \ge \alpha v_i$ for some $x \in \mathbb{Z}_+^n$, where $\alpha \in \mathbb{Z}_+$, $J \subsetneq N$ and $i \notin J$. The vector $x$ denotes the number of copies of each member of $J$. There are many variations of the knapsack problem that have arisen from the vast number of applications of the basic problem. The main variations occur by changing the number of some problem parameter, such as the number of items, the number of objectives, or even the number of knapsacks. The multi-objective variation changes the goal of the individual filling the knapsack: instead of one objective, such as maximizing the monetary profit, the objective could have several dimensions. For example, there could be environmental or social concerns as well as economic goals. Problems frequently addressed include portfolio and transportation logistics optimizations. As an example, suppose you ran a cruise ship and had to decide how many famous comedians to hire. The boat can handle no more than one ton of passengers, and the entertainers must weigh less than 1000 lbs. Each comedian has a weight, brings in business based on their popularity, and asks for a specific salary. In this example, you have multiple objectives: you want, of course, to maximize the popularity of your entertainers while minimizing their salaries, and you also want to have as many entertainers as possible. In the multi-dimensional variation, the weight of knapsack item $i$ is given by a $D$-dimensional vector $\overline{w}_i = (w_{i1}, \ldots, w_{iD})$ and the knapsack has a $D$-dimensional capacity vector $(W_1, \ldots, W_D)$. The target is to maximize the sum of the values of the items in the knapsack so that the sum of weights in each dimension $d$ does not exceed $W_d$. Multi-dimensional knapsack is computationally harder than knapsack; even for $D = 2$, the problem does not have an EPTAS unless P $=$ NP. However, an algorithm in the literature has been shown to solve sparse instances efficiently. An instance of multi-dimensional knapsack is sparse if there is a set of dimensions $J = \{1, 2, \ldots, m\}$ with $m < D$ such that for every knapsack item $i$ there exists a dimension $z > m$ with $w_{ij} = 0$ for every $j \notin J \cup \{z\}$; that is, every item's weight is concentrated on the shared dimensions $J$ plus at most one item-specific dimension. Such instances occur, for example, when scheduling packets in a wireless network with relay nodes. The same algorithm also solves sparse instances of the multiple-choice variant, multiple-choice multi-dimensional knapsack. The IHS (Increasing Height Shelf) algorithm is optimal for the 2D knapsack problem (packing squares into a two-dimensional unit-size square) when there are at most five squares in an optimal packing. The multiple knapsack variation is similar to the Bin Packing Problem, but differs from it in that a subset of items can be selected, whereas in the Bin Packing Problem all items have to be packed into certain bins. The concept is that there are multiple knapsacks; this may seem like a trivial change, but it is not equivalent to adding to the capacity of the initial knapsack. This variation is used in many loading and scheduling problems in Operations Research and has a polynomial-time approximation scheme. The quadratic knapsack problem maximizes a quadratic objective function subject to binary and linear capacity constraints. The problem was introduced by Gallo, Hammer, and Simeone in 1980; however, the first treatment of the problem dates back to Witzgall in 1975. The subset sum problem is a special case of the decision and 0-1 problems in which, for each kind of item, the weight equals the value: $w_i = v_i$.
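Because weights and values coincide in the subset sum problem, the 0-1 dynamic program above collapses to tracking which totals are reachable at all. A minimal Python sketch of the decision version (the function name and example data are illustrative only):

def subset_sum(weights, target):
    # Sums achievable so far, using each item at most once.
    reachable = {0}
    for w in weights:
        reachable |= {s + w for s in reachable if s + w <= target}
    return target in reachable

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # prints True (4 + 5 = 9)

Since every reachable sum lies in {0, ..., target}, this runs in $O(n \cdot \mathrm{target})$ time in the worst case, the same pseudopolynomial bound as the general 0-1 dynamic program.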
In the field of cryptography, the term "knapsack problem" is often used to refer specifically to the subset sum problem, which is commonly known as one of Karp's 21 NP-complete problems. The generalization of the subset sum problem is called the multiple subset-sum problem, in which multiple bins exist with the same capacity. It has been shown that this generalization does not have an FPTAS.
https://en.wikipedia.org/wiki?curid=16974
Ketoprofen Ketoprofen is one of the propionic acid class of nonsteroidal anti-inflammatory drugs (NSAIDs) with analgesic and antipyretic effects. It acts by inhibiting the body's production of prostaglandin. It was patented in 1967 and approved for medical use in 1980. As of 2015, the cost for a typical month of medication in the United States was $50 to $100. Ketoprofen is generally prescribed for arthritis-related inflammatory pains or severe toothaches that result in inflammation of the gums. Ketoprofen topical patches are being used for treatment of musculoskeletal pain. Ketoprofen can also be used for treatment of some pain, especially nerve pain such as sciatica, postherpetic neuralgia and referred pain from radiculopathy, in the form of a cream, ointment, liquid, spray, or gel, which may also contain ketamine and lidocaine, along with other agents which may be useful, such as cyclobenzaprine, amitriptyline, acyclovir, gabapentin, orphenadrine and other drugs used as NSAIDs or as adjuvant, atypical or potentiating agents for pain treatment. A 2013 systematic review indicated that "the efficacy of orally administered ketoprofen in relieving moderate-severe pain and improving functional status and general condition was significantly better than that of ibuprofen and/or diclofenac." A 2017 Cochrane systematic review investigating single-dose oral ketoprofen in acute, moderate-to-severe postoperative pain concluded that its efficacy is equivalent to that of drugs such as ibuprofen and diclofenac. There is evidence supporting topical ketoprofen for osteoarthritis, but not for other chronic musculoskeletal pain. The patches have been shown to provide rapid and sustained delivery to underlying tissues without significantly increasing drug concentration in the blood when compared with traditional oral administration. Ketoprofen undergoes metabolism in the liver via conjugation with glucuronic acid, CYP3A4- and CYP2C9-mediated hydroxylation of the benzoyl ring, and reduction of its keto function. Ketoprofen exerts its antipyretic, analgesic, and anti-inflammatory properties by reversibly inhibiting the cyclooxygenase-1 and -2 (COX-1 and COX-2) enzymes, which decreases the production of proinflammatory prostaglandin precursors. Ketoprofen was available over-the-counter in the United States in the form of 12.5 mg coated tablets (Orudis KT and Actron), but this form has been discontinued; it remains available in capsule form by prescription. Ketoprofen is also available as a 2.5% gel for topical application and as a patch for topical analgesia and anti-inflammatory action; however, the gel is not sold in the United States. Brand names in Australia are Orudis and Oruvail. It is available in Japan as a transdermal patch, Mohrus Tape, made by Hisamitsu Pharmaceutical. It is available in the UK as Ketoflam and Oruvail, in Ireland as Fastum Gel, in Estonia as Keto, Ketonal, and Fastum Gel, in Finland as Ketorin, Keto, Ketomex, and Orudis; in France as Profénid, Bi-Profénid and Ketum; in Italy as Ketodol, Fastum Gel, Lasonil, Orudis and Oki; in Poland, Serbia, Slovenia and Croatia as Knavon and Ketonal; in Romania as Ketonal and Fastum Gel; in Mexico as Arthril; in Norway as Zon and Orudis; in Russia as ОКИ (OKI), Fastum Gel and Ketonal; in Spain as Actron and Fastum Gel; in Albania as Oki and Fastum Gel; and in Venezuela as Ketoprofeno, as an injectable solution of 100 mg and in 150 mg capsules.
In Switzerland, a ketoprofen formulation based on transfersome technology for direct application on the skin above the site to be treated has been approved. In some countries, the optically pure ("S")-enantiomer (dexketoprofen) is available; its trometamol salt is said to be particularly rapidly absorbed from the gastrointestinal tract, giving it a rapid onset of effects. The earliest report of therapeutic use in humans was in 1972. Ketoprofen is a common NSAID, antipyretic, and analgesic used in horses and other equines. It is most commonly used for musculoskeletal pain, joint problems, and soft-tissue injury, as well as laminitis. It is also used to control fevers and prevent endotoxemia, and as a mild painkiller in smaller animals, generally following surgical procedures. In horses, it is given at a dose of 2.2 mg/kg/day. Studies have shown that it does not inhibit 5-lipoxygenase and leukotriene B4, as originally claimed. It is therefore not considered superior to phenylbutazone, as previously believed, although clinical signs of lameness are reduced with its use. In fact, phenylbutazone was shown to be superior to ketoprofen in cases of experimentally induced synovitis when both drugs were used at labeled dosages. Ketoprofen, when administered intravenously, is recommended for a maximum of five days of use. Its analgesic and antipyretic effects begin to occur one to two hours following administration. The most common dosage is 1 mg/lb, once per day, although this dosage may be lowered for ponies, which are more susceptible to NSAID side effects. It is also available in capsule and tablet dosage forms. Experiments have found that ketoprofen, like diclofenac, is a veterinary drug with lethal effects in red-headed vultures: vultures feeding on the carcasses of recently treated livestock suffer acute kidney failure within days of exposure.
https://en.wikipedia.org/wiki?curid=16978
Korea Institute for Advanced Study The Korea Institute for Advanced Study (KIAS) is an advanced research institute in South Korea, located on a campus in Dongdaemun-gu, Seoul. KIAS was founded in 1996 with the aim of becoming a world-leading research institute where elite international scholars gather and dedicate themselves to fundamental research in the basic sciences. Currently, there are three schools in the institute: mathematics, physics, and computational sciences. As of 2016, the institute had 3 distinguished professors, 26 professors, and 133 research fellows. As its name suggests, the institute was modeled on the Institute for Advanced Study in Princeton, New Jersey (USA). KIAS is funded by the government and is a subordinate institute of KAIST.
https://en.wikipedia.org/wiki?curid=16984
Kabuki In 2005, the "Kabuki theatre" was proclaimed by UNESCO as an intangible heritage possessing outstanding universal value. In 2008, it was inscribed in the UNESCO Representative List of the Intangible Cultural Heritage of Humanity. The individual kanji, from left to right, mean "sing" (), "dance" (), and "skill" (). Kabuki is therefore sometimes translated as "the art of singing and dancing". These are, however, "ateji" characters which do not reflect actual etymology. The kanji of 'skill' generally refers to a performer in kabuki theatre. Since the word Kabuki is believed to derive from the verb "kabuku", meaning "to lean" or "to be out of the ordinary", Kabuki can be interpreted as "avant-garde" or "bizarre" theatre. The expression "kabukimono" () referred originally to those who were bizarrely dressed. It is often translated into English as "strange things" or "the crazy ones", and referred to the style of dress worn by gangs of samurai. The history of kabuki began in 1603 when Izumo no Okuni, possibly a miko of Izumo-taisha, began performing with a troupe of female dancers a new style of dance drama, on a makeshift stage in the dry bed of the Kamo River in Kyoto. It originated in the 17th century. Japan was under the control of the Tokugawa shogunate, enforced by Tokugawa Ieyasu. The name of the Edo period derives from the relocation of the Tokugawa regime from its former home in Kyoto to the city of Edo, present-day Tokyo. Female performers played both men and women in comic playlets about ordinary life. The style was immediately popular, and Okuni was asked to perform before the Imperial Court. In the wake of such success, rival troupes quickly formed, and kabuki was born as ensemble dance and drama performed by women—a form very different from its modern incarnation. Much of its appeal in this era was due to the ribald, suggestive themes featured by many troupes; this appeal was further augmented by the fact that the performers were often also available for prostitution. For this reason, kabuki was also called "" (prostitute-singing and dancing performer) during this period. Kabuki became a common form of entertainment in the ukiyo, or Yoshiwara, the registered red-light district in Edo. A diverse crowd gathered under one roof, something that happened nowhere else in the city. Kabuki theaters were a place to see and be seen as they featured the latest fashion trends and current events. The stage provided good entertainment with exciting new music, patterns, clothing, and famous actors. Performances went from morning until sunset. The teahouses surrounding or connected to the theater provided meals, refreshments, and good company. The area around the theatres was filled with shops selling kabuki souvenirs. Kabuki, in a sense, initiated pop culture in Japan. The shogunate was never partial to kabuki and all the mischief it brought, particularly the variety of the social classes which mixed at kabuki performances. Women's kabuki, called onna-kabuki, was banned in 1629 for being too erotic. Following onna-kabuki, young boys performed in wakashū-kabuki, but since they too were eligible for prostitution, the shōgun government soon banned wakashū-kabuki as well. Kabuki switched to adult male actors, called yaro-kabuki, in the mid-1600s. Male actors played both female and male characters. The theatre remained popular, and remained a focus of urban lifestyle until modern times. 
Although kabuki was performed in the ukiyo and other parts of the country, the Nakamura-za, Ichimura-za and Kawarazaki-za theatres became the top theatres in the ukiyo, where some of the most successful kabuki performances were and still are held. The modern all-male kabuki, known as "yarō-kabuki" (young man kabuki), was established during these decades. After women were banned from performing, cross-dressed male actors, known as "onnagata" ("female-role") or "oyama", took over. Young (adolescent) men were preferred for women's roles due to their less masculine appearance and the higher pitch of their voices compared to adult men. In addition, "wakashū" (adolescent male) roles, played by young men often selected for attractiveness, became common, and were often presented in an erotic context. Along with the change in the performers' gender came a change in the emphasis of the performance: increased stress was placed on drama rather than dance. Performances were equally ribald, and the male actors too were available for prostitution (to both female and male customers). Audiences frequently became rowdy, and brawls occasionally broke out, sometimes over the favors of a particularly handsome young actor, leading the shogunate to ban first "onnagata" and then "wakashū" roles. Both bans were rescinded by 1652. During the Genroku era, kabuki thrived. The structure of a kabuki play was formalized during this period, as were many elements of style. Conventional character types were established. Kabuki theater and "ningyō jōruri", the elaborate form of puppet theater that later came to be known as "bunraku", became closely associated with each other, and each has since influenced the other's development. The famous playwright Chikamatsu Monzaemon, one of the first professional kabuki playwrights, produced several influential works, though the piece usually acknowledged as his most significant, "Sonezaki Shinjū" ("The Love Suicides at Sonezaki"), was originally written for bunraku. Like many "bunraku" plays, it was adapted for kabuki, and it spawned many imitators; in fact, it and similar plays reportedly caused so many real-life "copycat" suicides that the government banned "shinju mono" (plays about lovers' double suicides) in 1723. Ichikawa Danjūrō I also lived during this time; he is credited with the development of "mie" poses and mask-like kumadori make-up. In the 1840s, fires started to affect Edo due to repeated drought. Kabuki theatres, traditionally made of wood, were constantly burning down, forcing their relocation within the ukiyo. When the area that housed the Nakamura-za was completely destroyed in 1841, the shōgun refused to allow the theatre to be rebuilt, saying that it was against the fire code. The shogunate did not welcome the mixing and trading that occurred between town merchants and actors, artists, and prostitutes, and it took advantage of the fire crisis in 1842 to force the Nakamura-za, Ichimura-za, and Kawarazaki-za out of the city limits and into Asakusa, a northern suburb of Edo. Actors, stagehands, and others associated with the performances were forced out as well. Those whose areas and lifestyles centered around the theatres also migrated, but the inconvenience of the new location reduced attendance. These factors, along with strict regulations, pushed much of kabuki "underground" in Edo, with performances changing locations to avoid the authorities.
The theatres' new location was called Saruwaka-chō, or Saruwaka-machi, and the last thirty years of the Tokugawa shogunate's rule are often referred to as the Saruwaka-machi period. This period produced some of the gaudiest kabuki in Japanese history. The Saruwaka-machi became the new theatre district for the Nakamura-za, Ichimura-za and Kawarazaki-za theatres. The district was located on the main street of Asakusa, which ran through the middle of the small city. The street was renamed after Saruwaka Kanzaburo, who initiated Edo kabuki in the Nakamura Theatre in 1624. European artists began noticing Japanese theatrical performances and artwork, and many artists (for example, Claude Monet) were inspired by Japanese woodblock prints. This Western interest prompted Japanese artists to increase their depictions of daily life, including theatres, brothels, main streets and so on. One artist in particular, Utagawa Hiroshige, did a series of prints based on Saruwaka from the Saruwaka-machi period in Asakusa. The relocation diminished the tradition's most abundant inspiration for costuming, make-up, and story line. Ichikawa Kodanji IV was one of the most active and successful actors during the Saruwaka-machi period. Deemed unattractive, he mainly performed buyō, or dancing, in dramas written by Kawatake Mokuami, who would continue writing into the Meiji era that followed. Kawatake Mokuami commonly wrote plays that depicted the common lives of the people of Edo. He introduced shichigo-cho (seven-and-five syllable meter) dialogue and music such as kiyomoto. His kabuki performances became quite popular once the Saruwaka-machi period ended and theatre returned to Edo; many of his works are still performed. In 1868, the Tokugawa shogunate fell apart. Emperor Meiji was restored to power and moved from Kyoto to the new capital of Edo, renamed Tokyo, beginning the Meiji period. Kabuki returned to the ukiyo of Edo and became more radical in the Meiji period, as modern styles emerged. New playwrights created new genres and twists on traditional stories. Beginning in 1868, enormous cultural changes, such as the fall of the Tokugawa shogunate, the elimination of the samurai class, and the opening of Japan to the West, helped to spark kabuki's re-emergence. As the culture struggled to adapt to the influx of foreign ideas and influence, actors strove to increase the reputation of kabuki among the upper classes and to adapt the traditional styles to modern tastes. They ultimately proved successful in this regard; on 21 April 1887, Emperor Meiji sponsored a performance. After World War II, the occupying forces briefly banned kabuki, which had strongly supported Japan's war effort since 1931; by 1947, however, the ban had been rescinded. The immediate post-World War II era was a difficult time for kabuki: besides the war's physical devastation, many rejected the styles and thoughts of the past, kabuki among them. Director Tetsuji Takechi's popular and innovative productions of kabuki classics at this time are credited with bringing about a rebirth of interest in kabuki in the Kansai region. Of the many popular young stars who performed with the Takechi Kabuki, Nakamura Ganjiro III (b. 1931) was the leading figure. He was first known as Nakamura Senjaku, and this period in Osaka kabuki became known as the "Age of Senjaku" in his honor. Today, kabuki is the most popular of the traditional styles of Japanese drama, and its star actors often appear in television or film roles.
For example, the well-known onnagata Bandō Tamasaburō V has appeared in several non-kabuki plays and movies, often in a female role. Kabuki appears in works of Japanese popular culture such as anime. In addition to the handful of major theatres in Tokyo and Kyoto, there are many smaller theatres in Osaka and throughout the countryside. The Ōshika Kabuki troupe, based in Ōshika, Nagano Prefecture, is one example. Some local kabuki troupes today use female actors in onnagata roles. The Ichikawa Shōjo Kabuki Gekidan, an all-female troupe, debuted in 1953 to significant acclaim, but failed to start a new trend. The introduction of earphone guides in 1975, followed by an English version in 1982, helped broaden the art's appeal. As a result, in 1991 the Kabuki-za, one of Tokyo's best-known kabuki theaters, began year-round performances and, in 2005, began marketing kabuki cinema films. Kabuki troupes regularly tour Asia, Europe and America, and there have been several kabuki-themed productions of canonical Western plays, such as those of Shakespeare. Western playwrights and novelists have also experimented with kabuki themes, an example of which is Gerald Vizenor's "Hiroshima Bugi" (2004). Writer Yukio Mishima pioneered and popularized the use of kabuki in modern settings and revived other traditional arts, such as Noh, adapting them to modern contexts. There have even been kabuki troupes established in countries outside Japan. For instance, in Australia, the Za Kabuki troupe at the Australian National University has performed a kabuki drama each year since 1976, the longest regular kabuki performance outside Japan. In November 2002, a statue was erected in honor of kabuki's founder, Okuni, and to commemorate 400 years of kabuki's existence. It stands diagonally across from the Minami-za, the last remaining kabuki theater in Kyoto, at the east end of a bridge (Shijō Ōhashi) crossing the Kamo River. Kabuki was inscribed on the UNESCO Intangible Cultural Heritage Lists in 2005. The kabuki stage features a projection called a "hanamichi" ("flower path"), a walkway which extends into the audience and via which dramatic entrances and exits are made; Okuni also performed on a hanamichi stage with her entourage. The hanamichi is used not only as a walkway or path to get to and from the main stage; important scenes are also played on it. Kabuki stages and theaters have steadily become more technologically sophisticated, and innovations including revolving stages and trap doors were introduced during the 18th century. A driving force has been the desire to manifest one frequent theme of kabuki theater, that of the sudden, dramatic revelation or transformation. A number of stage tricks, including actors' rapid appearance and disappearance, employ these innovations. The term "keren", often translated "playing to the gallery", is sometimes used as a catch-all for these tricks. The hanamichi and several innovations, including the revolving stage, "seri" and "chūnori", have all contributed to the kabuki play: the hanamichi creates depth, and both seri and chūnori provide a vertical dimension. "Mawari-butai" (revolving stage) developed in the Kyōhō era (1716–1735). The trick was originally accomplished by the on-stage pushing of a round, wheeled platform; later, a circular platform was embedded in the stage with wheels beneath it facilitating movement. The "kuraten" ("darkened revolve") technique involves lowering the stage lights during this transition.
More commonly, the lights are left on for "akaten" ("lighted revolve"), sometimes with the transitioning scenes performed simultaneously for dramatic effect. This kind of stage was first built in Japan in the early eighteenth century. "Seri" refers to the stage "traps" that have been commonly employed in kabuki since the middle of the 18th century. These traps raise and lower actors or sets to the stage: "seridashi" or "seriage" refers to trap(s) moving upward, and "serisage" or "serioroshi" to traps descending. This technique is often used to lift an entire scene at once. "Chūnori" (riding in mid-air) is a technique that appeared toward the middle of the 19th century, by which an actor's costume is attached to wires and he is made to "fly" over the stage or certain parts of the auditorium. This is similar to the wire trick in the stage musical "Peter Pan", in which Peter launches himself into the air. It is still one of the most popular "keren" (visual tricks) in kabuki today; major kabuki theaters, such as the National Theatre, Kabuki-za and Minami-za, are all equipped with chūnori installations. Scenery changes are sometimes made mid-scene, while the actors remain on stage and the curtain stays open. This is sometimes accomplished by using a "hiki dōgu", or "small wagon stage", a technique that originated at the beginning of the 18th century, whereby scenery or actors move on or off stage on a wheeled platform. Also common are stagehands rushing onto the stage, adding and removing props, backdrops and other scenery; these "kuroko" are always dressed entirely in black and are traditionally considered invisible. Stagehands also assist in a variety of quick costume changes known as "hayagawari" ("quick change technique"). When a character's true nature is suddenly revealed, the devices of "hikinuki" and "bukkaeri" are often used; this involves layering one costume over another and having a stagehand pull the outer one off in front of the audience. The curtain that shields the stage before the performance and during breaks consists of vertical stripes in the traditional colours of black, red and green, in various orders, or with white in place of green. The curtain consists of one piece and is pulled back to one side by a staff member by hand. An additional outer curtain, called the "doncho", was not introduced until the Meiji era, following the arrival of Western influence. These curtains are more ornate in appearance and are woven; they depict the season in which the performance is taking place, and are often designed by renowned Nihonga artists. The three main categories of kabuki play are "jidaimono" ("historical", or pre-Sengoku period stories), "sewamono" ("domestic", or post-Sengoku stories) and "shosagoto" ("dance pieces"). "Jidaimono", or history plays, are set within the context of major events in Japanese history. Strict censorship laws during the Edo period prohibited the representation of contemporary events and particularly prohibited criticising the shogunate or casting it in a bad light, although enforcement varied greatly over the years. Many shows were set in the context of the Genpei War of the 1180s, the Nanboku-chō Wars of the 1330s, or other historical events. Frustrating the censors, many shows used these historical settings as metaphors for contemporary events.
"Kanadehon Chūshingura", one of the most famous plays in the kabuki repertoire, serves as an excellent example; it is ostensibly set in the 1330s, though it actually depicts the contemporary (18th century) affair of the revenge of the 47 "rōnin". Unlike "jidaimono" which generally focused upon the samurai class, "sewamono" focused primarily upon commoners, namely townspeople and peasants. Often referred to as "domestic plays" in English, "sewamono" generally related to themes of family drama and romance. Some of the most famous "sewamono" are the love suicide plays, adapted from works by the "bunraku" playwright Chikamatsu; these center on romantic couples who cannot be together in life due to various circumstances and who therefore decide to be together in death instead. Many if not most "sewamono" contain significant elements of this theme of societal pressures and limitations. "Shosagoto" pieces place their emphasis on dance, which may be performed with or without dialogue, where dance can be used to convey emotion, character and plot. Quick costume change techniques may sometimes be employed in such pieces. Notable examples include "Musume Dōjōji" and "Renjishi". Nagauta musicians may be seated in rows on stepped platforms behind the dancers. Important elements of kabuki include the "mie" (), in which the actor holds a picturesque pose to establish his character. At this point his house name ("yagō", ) is sometimes heard in loud shout ("kakegoe", ) from an expert audience member, serving both to express and enhance the audience's appreciation of the actor's achievement. An even greater compliment can be paid by shouting the name of the actor's father. The main actor has to convey a wide variety of emotions between a fallen, drunkard person and someone who in reality is quite different since he is only faking his weakness, for example in the character of Yuranosuke in "Chūshingura". This is called "hara-gei" or "belly acting", which means he has to perform from within to change characters. It is technically difficult to perform and takes a long time to learn, but once mastered the audience takes up on the actor's emotion. Emotions are also expressed through the colours of the costumes, a key element in kabuki. Gaudy and strong colours can convey foolish or joyful emotions, whereas severe or muted colours convey seriousness and focus. "Keshō", kabuki makeup, provides an element of style easily recognizable even by those unfamiliar with the art form. Rice powder is used to create the white "oshiroi" base for the characteristic stage makeup, and "kumadori" enhances or exaggerates facial lines to produce dramatic animal or supernatural masks. The color of the "kumadori" is an expression of the character's nature: red lines are used to indicate passion, heroism, righteousness, and other positive traits; blue or black, villainy, jealousy, and other negative traits; green, the supernatural; and purple, nobility. Kabuki, like other traditional forms of drama in Japan and other cultures, was (and sometimes still is) performed in full-day programs. Rather than attending for 2–5 hours, as one might do in a modern Western-style theater, audiences "escape" from the day-to-day world, devoting a full day to entertainment. Though some individual plays, particularly the historical "jidaimono", might last an entire day, most were shorter and sequenced with other plays in order to produce a full-day program. 
The structure of the full-day program, like the structure of the plays themselves, was derived largely from the conventions of "bunraku" and Noh, conventions which also appear in other traditional Japanese arts. Chief among these is the concept of "jo-ha-kyū", which states that dramatic pacing should start slow, speed up, and end quickly. The concept, elaborated on at length by the master Noh playwright Zeami, governs not only the actions of the actors but also the structure of the play, as well as the structure of scenes and plays within a day-long program. Nearly every full-length play occupies five acts. The first corresponds to "jo", an auspicious and slow opening which introduces the audience to the characters and the plot. The next three acts correspond to "ha", speeding events up, culminating almost always in a great moment of drama or tragedy in the third act and possibly a battle in the second or fourth act. The final act, corresponding to "kyū", is almost always short, providing a quick and satisfying conclusion. While many plays were originally written for kabuki, many others were taken from "jōruri" plays, Noh plays, folklore, or other performing traditions such as the oral tradition of the "Tale of the Heike". While "jōruri" plays tend to have serious, emotionally dramatic, and organized plots, plays written specifically for kabuki generally have looser, sillier plots. One of the crucial differences in the philosophy of the two forms is that "jōruri" focuses primarily on the story and on the chanter who recites it, while kabuki focuses more on the actors. A "jōruri" play may sacrifice the details of sets, puppets, or action in favor of the chanter, while kabuki is known to sacrifice drama and even the plot to highlight an actor's talents. It was not uncommon in kabuki to insert or remove individual scenes from a day's schedule in order to cater to the talents or desires of an individual actor; scenes he was famed for, or that featured him, would be inserted into a program without regard to plot continuity. Kabuki traditions in Edo and in Kamigata (the Kyoto-Osaka region) were quite different. Through most of the Edo period, kabuki in Edo was defined by extravagance and bombast, as exemplified by stark makeup patterns, flashy costumes, fancy "keren" (stage tricks), and bold "mie" (poses). Kamigata kabuki, meanwhile, was much calmer and focused on naturalism and realism in acting. Only towards the end of the Edo period, in the 19th century, did the two regions adopt one another's styles to any significant degree. For a long time, actors from one region often failed to adjust to the styles of the other region and were unsuccessful in their performance tours of that region. Actors form schools or are associated with a particular theatre. Every actor has a stage name, which is different from the name they were born with. These stage names, most often those of the actor's father, grandfather, or teacher, are passed down between generations of actors' lineages, and hold great honor and importance. Many names are associated with certain roles or acting styles, and the new possessor of each name must live up to these expectations; there is almost a feeling of the actor not only taking a name, but embodying the spirit, style, or skill of each actor who previously held that name. Many actors will go through at least three names over the course of their career. The "shūmei" (lit. "name succession") are grand naming ceremonies held in kabuki theatre in front of the audience.
Most often, a number of actors will participate in a single ceremony, taking on new stage names. Their participation in a "shūmei" represents their passage into a new chapter of their performing careers.
https://en.wikipedia.org/wiki?curid=16985
Kent State University Kent State University (KSU) is a public research university in Kent, Ohio. The university also includes seven regional campuses in Northeast Ohio and additional facilities in the region and internationally. Regional campuses are located in Ashtabula, Burton, East Liverpool, Jackson Township, New Philadelphia, Salem, and Warren, Ohio, with additional facilities in Cleveland, Independence, and Twinsburg, Ohio, as well as New York City and Florence, Italy. The university was established in 1910 as a teacher-training school. The first classes were held in 1912 at various locations and in temporary buildings in Kent, and the first buildings of the original campus opened the following year. Since then, the university has grown to include many additional baccalaureate and graduate programs of study in the arts and sciences, research opportunities, and 119 buildings on the Kent campus. During the late 1960s and early 1970s, the university was known internationally for its student activism in opposition to U.S. involvement in the Vietnam War, due mainly to the Kent State shootings in 1970. Kent State was the fourth-largest university in Ohio, with an enrollment of 35,883 students in the eight-campus system and 26,804 students at the main campus in Kent. Kent State offers over 300 degree programs, among them 250 baccalaureate, 40 associate, 50 master's, and 23 doctoral programs of study, which include such notable programs as nursing, business, history, library science, aeronautics, journalism, fashion design and the Liquid Crystal Institute. Kent State University was established in 1910 as an institution for training public school teachers. It was part of the Lowry Bill, which also created a sister school in Bowling Green, Ohio, now known as Bowling Green State University. It was initially known under the working name of the Ohio State Normal College at Kent, but was named Kent State Normal School in 1911 in honor of William S. Kent (son of Kent, Ohio, namesake Marvin Kent), who donated the land used for the original campus. The first president was John Edward McGilvrey, who served from 1912 to 1926. McGilvrey had an ambitious vision for the school as a large university, instructing architect George F. Hammond, who designed the original campus buildings, to produce a master plan. Classes began in 1912, before any buildings had been completed at the campus in Kent; these classes were held at extension centers in 25 cities around the region. By May 1913, classes were being held on the campus in Kent with the opening of Merrill Hall. The school graduated 34 students in its first commencement on July 29, 1914. In 1915, the school was renamed Kent State Normal College due to the addition of four-year degrees. By then additional buildings had been added or were under construction. Kent State's enrollment growth was particularly notable during its summer terms; in 1924, the school's registration for summer classes was the largest of any teacher-training school in the United States. In 1929, the state of Ohio changed the name to Kent State College as it allowed the school to establish a college of arts and sciences. McGilvrey's vision for Kent was not shared by many others outside the school, particularly at the state level and at other state schools. His efforts to have the state funding formula changed created opposition, particularly from Ohio State University and its president, William Oxley Thompson.
This resulted in a 1923 "credit war" in which Ohio State refused to accept Kent transfer credits, a practice that spread as several other schools took similar action. It was this development, along with several other factors, which led to the firing of McGilvrey in January 1926. McGilvrey was succeeded first by David Allen Anderson (1926–1928) and then by James Ozro Engleman (1928–1938), though McGilvrey continued to be involved with the school for several years as president emeritus and as head of alumni relations from 1934 to 1945. He was present in Columbus on May 17, 1935, when Kent native Governor Martin L. Davey signed a bill that allowed Kent State and Bowling Green to add schools of business administration and graduate programs, giving them each university status. From 1944 to 1963, the university was led by President George Bowman. During his tenure, the student senate, faculty senate and graduate council were organized. Although it had served Stark County since the 1920s, the university's first regional campus, the Stark Campus, was established in Canton, Ohio, in 1946. In the fall of 1947, Bowman appointed Oscar W. Ritchie as a full-time faculty member. Ritchie's appointment made him the first African American to serve on the faculty at Kent State, and also the first African American professor to serve on the faculty of any state university in Ohio. In 1977, the former Student Union, which had been built in 1949, was rededicated as Oscar Ritchie Hall in his honor. Recently renovated, Oscar Ritchie Hall currently houses the Department of Pan-African Studies, the Center of Pan-African Culture, the Henry Dumas Library, the Institute for African American Affairs, the Garrett Morgan Computer Lab, and the African Community Theatre. The 1950s and 1960s saw continued growth in both enrollment and the physical size of the campus. Several new dormitories and academic buildings were built during this time, and additional regional campuses were established in Warren (1954), Ashtabula (1957), New Philadelphia (1962), Salem (1962), Burton (1964), and East Liverpool, Ohio (1965). In 1961, grounds superintendent Larry Wooddell and Biff Staples of the Davey Tree Expert Company released ten cages of black squirrels, obtained from Victoria Park in London, Ontario, Canada, onto the Kent State campus. By 1964 their estimated population was around 150; today they have spread in and around Kent and have become unofficial mascots of both the city and the university. Since 1981, the annual Black Squirrel Festival has been held every fall on campus. In 1965, chemistry professor Glenn H. Brown established the Liquid Crystal Institute, a world leader in the research and development underlying the multibillion-dollar liquid crystal industry. James Fergason invented and patented the basic TN LCD in 1969, and ten liquid crystal companies have been spun off from the Institute. In 1967, Kent State became the first university to run an independent, student-operated Campus Bus Service. It was unique in that it provided jobs for students, receiving funding from student fees rather than bus fares. Campus Bus Service was the largest such operation in the country until it merged with the Portage Area Regional Transportation Authority in 2004. 1969 saw the opening of a new Memorial Stadium on the far eastern edge of campus and the closure and dismantling of the old Memorial Stadium.
Kent State gained international attention on May 4, 1970, when an Ohio Army National Guard unit fired at students during an anti-war protest on campus, killing four and wounding nine. The Guard had been called into Kent after several protests in and around campus had become violent, including a riot in downtown Kent and the burning of the ROTC building. The main cause of the protests was the United States' invasion of Cambodia during the Vietnam War. The shootings caused an immediate closure of the campus, with students and faculty given just 60 minutes to pack their belongings. Around the country, many college campuses canceled classes or closed for fear of similar violent protests. In Kent, schools were closed and the National Guard restricted entry into the city limits, patrolling the area until May 8. With the campus closed, faculty members came up with a variety of solutions, including holding classes in their homes, at public buildings and places, via telephone, or through the mail, to allow their students to complete the term, which at the time was only a few weeks from ending. In 1971, the university established the Center for Peaceful Change, now known as the Center for Applied Conflict Management, as a "living memorial" to the students who had died. It offers degree programs in Peace and Conflict Studies and Conflict Resolution and is one of the earliest such programs in the United States. In response to, and protest of, the Kent State shootings, Neil Young wrote the song "Ohio", which was performed by the folk rock group Crosby, Stills, Nash & Young. Also in 1970, the university opened its 12-story library, moving from its previous home in Rockwell Hall to the tallest building in Portage County. Dedicated in 1971, the library became a member of the Association of Research Libraries in 1973. Kent State joined with the University of Akron and Youngstown State University in establishing the Northeastern Ohio Universities College of Medicine in 1973; it was the world's first medical consortium. Today it includes a college of pharmacy and Cleveland State University as an additional consortium member. Kent State was again in the national spotlight in 1977, when construction was set to begin on the Memorial Gym Annex, adjacent to the area where the Kent State shootings had occurred in 1970. Protesters organized a tent city in May, which lasted into July. Several attempts were made to block construction even after the end of the tent city, including an appeal to the United States Congress and the Department of the Interior to have the area declared a National Historic Landmark, which ended up being unsuccessful. Additional rallies were held that year, including one attended by Joan Baez on August 20. After several additional unsuccessful legal challenges, construction finally began on September 19 and was finished in 1979. In March 1991, Kent State once again made history by appointing Carol Cartwright as president of the university, the first woman to hold such a position at any state university in Ohio. In 1994, Kent State was named a "Research University II" by the Carnegie Foundation. Beginning in the late 1990s, the university began a series of building renovations and construction, which included the complete renovation of the historic original campus, the construction of several new dormitories, a student recreation center, and additional academic buildings on the Kent Campus and at the regional campuses.
In September 2010, the university announced its largest student body ever, with a total enrollment of 41,365. U.S. News & World Report's 2017 rankings put Kent State as tied for #188 among National Universities and tied for #101 among Top Public Schools. Kent State had a Fall 2015 acceptance rate of 85%. Kent State University is an eight-campus system in northeastern Ohio, with the main administrative center in Kent. Within the Kent State University system, the main campus is officially referred to as the "Kent Campus". The Kent Campus is a landscaped suburban environment housing over 100 buildings, gardens, bike trails, and open greenery. There are also thousands of additional acres of bogs, marshes, and wildlife refuges adjacent to or near the campus. While the university's official mascot is Flash the golden eagle, the campus also has an unofficial mascot in the black squirrels, which were brought to Kent in 1961 and can be found on and around the campus. The campus is divided into North, South, and East sections, but many areas have come to be referred to as Front Campus, Residential Campus, and Science Row. The main hub of activity and central point is the Student Center and Risman Plaza, which is adjacent to the twelve-story main library. The university also operated the 18-hole Kent State Golf Course until 2017, and currently operates Centennial Research Park just east of campus in Franklin Township and the Kent State University Airport in Stow. In addition to the Kent Campus, there are seven regional campuses. The regional campuses provide open enrollment and are generally treated as in-house community colleges, in contrast to the large-university feel of the Kent Campus. Students at the regional campuses can begin any of Kent State's majors at their respective campus, and each campus offers its own unique programs and opportunities that may or may not be available in Kent. The Ashtabula Campus was established in 1958 and is made up of four buildings: Main Hall, a library, the Bookstore Building, and the Robert S. Morrison Health and Science Building. It is on a site in Ashtabula just south of Lake Erie. The campus offers 27 associate and bachelor's degree programs of its own, with the nursing program being the largest; approximately 75% of registered nurses working in Ashtabula County graduated with an associate degree in nursing from Kent State at Ashtabula. The East Liverpool Campus was established in 1965 from facilities formerly owned by the East Liverpool City School District, occupying a downtown site overlooking the Ohio River. It is composed of the Main Building, Memorial Auditorium, the Mary Patterson Building, and a Commons area. The Geauga Campus is located in Burton Township, just north of the village of Burton in Geauga County. It was established in 1964 and has an enrollment of approximately 2,500 students. Six associate degree and seven baccalaureate degree programs can be taken in their entirety at the campus. The Geauga Campus also administers the Regional Academic Center, a facility located in Twinsburg, Ohio. Kent State at Salem is located in Salem Township, just south of the city of Salem. The campus features a lake, an outdoor classroom, and a nature walk. Kent State University at Salem also owns and operates the "City Center" facility in the former home of Salem Middle School and Salem High School, in which administrative offices, classes, and student services are located.
The Stark Campus is the largest regional campus of Kent State University, with an enrollment of over 3,200 students. The campus serves around 11,000 students total each year through professional development and other academic coursework classes. It is located in Jackson Township in Stark County and includes seven major buildings and a natural pond. Additionally, the Stark Campus includes the Corporate University and Conference Center, an advanced meeting, training, and events facility that is one of only ten such centers in the state of Ohio affiliated with the International Association of Conference Centers. The Center also serves as home to the Corporate University, which provides training and learning exercises for area businesses and organizations. Kent State University at Stark offers 24 complete degree programs, including three associate degree, 18 bachelor's degree, and three master's degree programs. Kent State's Trumbull Campus is located just north of Warren in Champion Heights, Ohio, on SR 45 near the SR 5–SR 82 bypass. It offers programs in 170 majors at the freshman and sophomore level, as well as 18 certificates and 15 associate degree programs. In addition, there is upper-division coursework for baccalaureate degree completion in nursing, justice studies, technology, business management, theatre, and English, as well as general studies and psychology degrees. In 2004 the campus opened a Technology Building that includes the Workforce Development and Continuing Studies Center. The Tuscarawas Campus in New Philadelphia, Ohio, offers 19 associate degrees, six bachelor's degrees, and the Master of Technology degree. Bachelor's degrees are offered in business management, general studies, justice studies, industrial technology, nursing, and technology 2+2. The Science and Advanced Technology Center provides laboratory and classroom space for science, nursing, and workforce development. The Tuscarawas Campus has also constructed a $13.5 million Fine and Performing Arts Center that will enable the campus to expand academic and cultural programming. In addition to the eight campuses in northeast Ohio, Kent State operates facilities for study-abroad programs in Florence, Italy; New York City; Cleveland, Ohio; and Shanghai, China. KSU-Florence opened its doors to International Studies Abroad in a collaboration that grants students the opportunity to study in historic Florence at the newly renovated Palazzo dei Cerchi, a prestigious and ancient building located in the heart of Florence, at the corner of Via della Condotta and Vicolo dei Cerchi, next to the famous Piazza della Signoria and the birthplace of literary genius Dante Alighieri. Kent State acquired this facility in 2003 and undertook its complete renovation. The original exterior was maintained and reflects Florence as it was in the 13th century. The restoration carefully preserved the original structure while creating an efficient space for academic purposes, with an interior that houses state-of-the-art classrooms. After using the recently restored Palazzo Vettori since January 2016, the Kent State University Florence campus officially moved from Palazzo dei Cerchi and Palazzo Bartolini Baldelli to Palazzo Vettori on April 17, 2016. The New York City Studio is located in the heart of New York City's Garment District.
The studio is surrounded by fabric and accessory shops, fashion showrooms, and designer studios; one-third of all clothing manufactured in the USA is designed and produced in this neighborhood. The District is home to America's world-renowned fashion designers, including Oscar de la Renta, Calvin Klein, Donna Karan, Liz Claiborne, and Nicole Miller. The facility is a state-of-the-art space that includes a 50-person lecture room, a 12-station computer lab with instructor station, and a fashion design studio fully outfitted with professional equipment. The NYC studio gives Kent State students the advantage of working within the heart of the fashion, dance, and theater industry. Kent State's Cleveland Urban Design Center is located at 1309 Euclid Ave in the downtown Cleveland Theater District neighborhood, just off East 14th Street. The Urban Design Center was created in 1983 under the sponsorship of the Urban University Program, which supports the outreach and community service efforts of Ohio's state universities working in urban areas. Under its founding director, Foster Armstrong, the Center expanded on the existing outreach and public service activities of Kent State's architecture school, focusing primarily on historic preservation and the problems of Northeast Ohio's smaller towns and cities. In 2003, the CUDC began a collaboration with the Dresden University of Technology, Kent State's sister university in Germany, with a joint vision on the revitalization of the lower Cuyahoga Valley in Cleveland. Since then, there have been a number of faculty exchanges as the two universities seek to pool their expertise, both to enhance students' experiences and to better serve their respective regions. Kent State comprises a number of colleges and schools, including an Honors College and interdisciplinary programs in Biomedical Sciences, Financial Engineering, and Information Architecture and Knowledge Management. The university offers a large number of opportunities for student involvement at all its campuses, including student and professional associations, service organizations, performing ensembles, student publications, student government, and intramural and club athletics. Greek life at Kent State is overseen by the Center for Student Involvement, located in the Kent Student Center. Organizations belong to one of three governing councils: the Panhellenic Council, the Interfraternity Council, and the Integrated Greek Council. Sorority houses are primarily located on Fraternity Drive, across the street from the main library, and fraternity houses are located throughout the city of Kent. The university set aside land for the development of a Greek fraternity village in 2008, on land near the Student Recreation and Wellness Center. Sigma Nu built a new chapter house on this land in 2008, though the house is now the property of the Kappa Sigma fraternity. Kent State's Greek life claims numerous famous and well-known figures in society, including Lou Holtz, a brother of the Kent Delta Upsilon chapter, and Drew Carey, a brother of the Kent Delta Tau Delta chapter. Through the Hugh A.
Glauser School of Music and the School of Theatre and Dance, the university offers performance opportunities in the performing arts, including three concert bands (Wind Ensemble, Concert Band, and Communiversity Band), athletic bands (Marching Golden Flashes and Flasher Brass), three jazz ensembles (Jazz Ensemble I, Jazz Ensemble II, and Jazz Lab Band), six choral ensembles (Kent Chorus, KSU Chorale, Women's Chorus, Men's Coro Cantare, Gospel Choir, and Nova Jazz Singers), one orchestra (KSU Orchestra), World Music Ensembles, as well as theater and dance opportunities. The Trumbull, Stark, and Tuscarawas campuses have theatre seasons featuring student actors, and each regional campus also offers its own performing arts opportunities. Kent State offers several student government options, the largest of which is the Undergraduate Student Government (USG), which represents students from all campuses of the university and has been in some form of operation since 1924. The current 25-person governing body was formed after the merger of the All-Campus Programming Board (ACPB) and the Undergraduate Student Senate (USS). USG is led by an executive director and is composed of eight directors, ten college senators, one senator for residence hall students, one senator for commuter and off-campus students, one senator for undergraduate studies, and three senators-at-large. USG oversees the USG Programming Board, which hosts various concerts, comedians, and performers, as well as the USG Allocations Committee, which disburses conference and programming funds to the over 250 registered student organizations on the Kent Campus. Elections for USG are held annually in March, and officers are typically inaugurated in late April. In addition to the USG, Kent State also has the Graduate Student Senate (GSS) and the Kent Interhall Council (KIC). KIC is for students who live in the on-campus residence halls and deals with policies and activities. Within the KIC is a programming board and individual councils for each residence hall. Kent State operates twenty-five on-campus residence halls, all of which are located on the main campus in Kent. Each hall is part of a larger group, usually bound by a common name or a common central area. Dining halls are in Eastway, Tri-Towers, and Prentice, as well as at multiple locations in the Student Center. Each of the residence hall dining locations also houses a small grocery store where students may use their board plan. Within the halls are 12 Living-Learning Communities based on area of study. The 4 Paws for Ability University Program provides university students with an opportunity to foster and socialize service dogs in training before they begin their professional training at the 4 Paws for Ability facility in Xenia, Ohio. A chapter was founded at Kent State in August 2016 with three service dogs in training; it became an official organization a year later. The chapter was founded by Maxwell Newberry. 4 Paws for Ability Kent State has 25 dogs on campus at a time, though the number of sitters, co-handlers, and volunteers is not capped: the chapter has approximately 325 volunteers on its e-mail list, about 30 sitters, and over 50 co-handlers. The organization shares custody of the small fenced-in discus area at the outdoor track along Johnston Drive, and discussion and plans began in late 2017 to create a separate field for the organization.
In recent years, Kent State has developed extensive services to support people with autism, with many of its programs nationally recognized in different areas. Neurotypical students who wish to be involved with these activities are paired with students with autism, and one sorority is directly involved with these services. In a 2018 story, the university's autism outreach coordinator told "The Plain Dealer" of Cleveland that about 30 autistic students were registered as such with the university, but estimated that close to 500 students with autism used the school's services. These services contributed to Kent State becoming, in November of that year, the first NCAA Division I member to sign a recruit known to be diagnosed as autistic to a National Letter of Intent in a team sport, when Kalin Bennett committed to play for the men's basketball team starting in 2019–20, making his debut with the team in November 2019. Kent State's athletic teams are called the Golden Flashes, and the school colors are shades of navy blue and gold, officially "Kent State blue" and "Kent State gold". The university sponsors 16 varsity athletic teams that compete in the National Collegiate Athletic Association (NCAA) at the Division I level, with football in the Football Bowl Subdivision (FBS). Kent State is a member of the Mid-American Conference (MAC) East division and has been a member of the conference since 1951. The university athletic facilities are mainly on campus, featuring the 25,319-seat Dix Stadium and the 6,327-seat Memorial Athletic and Convocation Center, one of the oldest arenas in Division I college basketball. Through the 2014–2015 season, in MAC play, Kent State has won the Reese Cup for best men's athletic program eight times, winning in 2000, 2002, 2006, 2009, 2010, 2011, 2012, and 2013. The Flashes have also won the Jacoby Cup for best women's athletic program eight times, winning in 1989, 1996, 1997, 1999, 2004, 2005, 2010, and 2014. In 2002 the men's basketball team advanced to the NCAA "Elite Eight", while the baseball, women's basketball, gymnastics, men's golf, and women's golf teams have won numerous MAC titles and advanced to NCAA tournament play. Some notable athletic alumni include: Alabama Crimson Tide head football coach and five-time national champion Nick Saban, former Missouri Tigers head football coach Gary Pinkel, 2003 British Open champion and current PGA member Ben Curtis, former New York Yankees catcher Thurman Munson, 1984 Olympic 200 m bronze medalist Thomas Jefferson, former Pittsburgh Steelers Pro Football Hall of Fame linebacker and four-time Super Bowl champion Jack Lambert, Pittsburgh Steelers linebacker and two-time Super Bowl champion James Harrison, ESPN analyst and former college football national champion head coach Lou Holtz, New England Patriots wide receiver and Super Bowl champion Julian Edelman, former San Diego/Los Angeles Chargers All-Pro tight end Antonio Gates (who played basketball at KSU, not football), former Cleveland Browns and Indianapolis Colts All-Pro return specialist Joshua Cribbs, former San Diego Padres pitcher Dustin Hermanson, Tampa Bay Rays pitcher Andy Sonnanstine, Los Angeles Dodgers pitcher Matt Guerrier, and former New York Mets pitcher Joe Crawford. The university operates the "Kent State University Press", located in the main library building, which publishes 30 to 35 titles a year. It is a member of the Association of American University Presses, which includes over 100 university-sponsored scholarly presses.
The Press was established in 1965 and initially published works of literary criticism. In 1972 the Press's publishing program was expanded to include regional studies and ethnomusicology. Further expansion occurred beginning in 1985, when the Press began publishing works related to the American Civil War and Ohio history. Kent State counts 227,000 living alumni. It has produced a number of individuals in the entertainment industry, including comedian and current "Price is Right" host Drew Carey, comedian and talk show host Arsenio Hall, Steve Harvey, actors John de Lancie, Michael Keaton, and Ray Wise, actresses Alaina Reed Hall and Alice Ripley, "Phenomenon" star Angela Funovits, boxing promoter Don King, "30 Rock" producer Jeff Richmond, and "That '70s Show" creator Bonnie Turner. Musicians from Kent State include several members of the band Devo, which was formed at Kent State in 1973, including Mark Mothersbaugh, Bob Lewis, and Gerald Casale. Additional musicians include singers Chrissie Hynde, Jeff Timmons of 98 Degrees, Debra Byrd of "American Idol", guitarist Joe Walsh, and drummer Chris Vrenna. In politics and government, several politicians in Ohio attended Kent State, including former judge and United States Representative Robert E. Cook, former minority leader C.J. Prentiss, current United States House of Representatives member Betty Sutton, former representative, Lieutenant Governor, and Governor Nancy Hollister, and Supreme Court of Ohio justice Terrence O'Donnell. Other politicians include Allen Buckley of Georgia, Ohio politician Jeffrey Dean, Pennsylvania state representative Allen Kukovich, and George Petak of Wisconsin. Political activists from Kent State include anti-war activist Alan Canfora and former Students for a Democratic Society leaders Ken Hammond and Carl Oglesby. Literary and journalism alumni include "Funky Winkerbean" and "Crankshaft" writer Tom Batiuk, "Captain Underpants" author Dav Pilkey, and columnists Connie Schultz and Regina Brett. Television journalism alumni include CNN anchor Carol Costello, Cleveland news anchors Ted Henry and Wayne Dawson, sportscaster Jeff Phelps, and ESPN "Dream Job" winner Dave Holmes. A number of professional athletes are Kent State alumni, including current WWE wrestler Dolph Ziggler and National Football League players Julian Edelman, James Harrison, Josh Cribbs, and Usama Young. Former NFL players include Don Nottingham, Cedric Brown, Bob Hallen, Abdul Salaam, Jack Lambert, and Antonio Gates, along with Canadian Football League standouts Jay McNeil, Tony Martino, and Canadian Football Hall of Fame inductee and former Kent State football head coach Jim Corrigall. College football coaches Nick Saban, Gary Pinkel, and Lou Holtz are also Kent State alumni. Major League Baseball players to come from Kent State include current players Emmanuel Burriss, Matt Guerrier, Andy Sonnanstine, and Dirk Hayhurst. Past MLB players include Gene Michael, Rich Rollins, Dustin Hermanson, Steve Stone, and Thurman Munson. Additional athletic alumni include Canadian professional golfers Corey Conners, Mackenzie Hughes, Jon Mills, and Ryan Yip, American professional golfer Ben Curtis, and Olympians Betty-Jean Maycock in gymnastics and Gerald Tinker in track and field.
https://en.wikipedia.org/wiki?curid=16986
Kelly Freas Frank Kelly Freas (August 27, 1922 – January 2, 2005) was an American science fiction and fantasy artist with a career spanning more than 50 years. He was known as the "Dean of Science Fiction Artists" and was the second artist inducted into the Science Fiction Hall of Fame. Born in Hornell, New York, Freas (pronounced like "freeze") was the son of two photographers, and was raised in Crystal Beach, Ontario, Canada. He was educated at Lafayette High School in Buffalo, where he received training from long-time art teacher Elizabeth Weiffenbach. He entered the United States Army Air Forces right out of high school. He flew as a reconnaissance cameraman in the South Pacific and painted bomber noses during World War II. He then worked for Curtiss-Wright for a brief period before studying at The Art Institute of Pittsburgh and beginning work in advertising. His first marriage was in 1948 to Nina Vaccaro, though they later divorced. He later married Pauline (Polly) Bussard in 1952; they had two children, Jacqui and Jerry. Polly died of cancer in January 1987. In 1988 he married Dr. Laura Brodian, who survived him. The fantasy magazine "Weird Tales" published the first cover art by Freas on its November 1950 issue: "The Piper", illustrating "The Third Shadow" by H. Russell Wakefield. His second was a year later in the same magazine, followed by several "Planet Stories" and "Weird Tales" covers and interior illustrations for three Gnome Press books in 1952. With his illustrating career underway, he continued to devise unique and imaginative concepts for other fantasy and science fiction magazines of that period. In a field where airbrushing is common practice, paintings by Freas are notable for his use of bold brush strokes, and a study of his work reveals his experimentation with a wide variety of tools and techniques. Over the next five decades, he created covers for hundreds of books and magazines (and much more interior artwork), notably "Astounding Science Fiction", both before and after its title change to "Analog", from 1953 to 2003. He started at "Mad" magazine in February 1957 and by July 1958 was the magazine's new cover artist; he painted most of its covers until October 1962 (featuring the iconic character Alfred E. Neuman). He also created cover illustrations for DAW, Signet, Ballantine Books, Avon, all 58 Laser Books (which are now collectors' items), and over 90 covers for Ace Books alone. He was editor and artist for the first ten "Starblaze" books. He illustrated the cover of Jean Shepherd, Ian Ballantine, and Theodore Sturgeon's literary hoax, "I, Libertine" (Ballantine Books, 1956). That same year he drew cartoon illustrations for Bernard Shir-Cliff's "The Wild Reader". Freas also painted insignia and posters for Skylab I; pinup girls on bombers while in the United States Army Air Forces; comic book covers; the covers of the "GURPS" worldbooks "Lensman" and "Planet Krishna"; and more than 500 saints' portraits for the Franciscans, executed simultaneously with his portraits of Alfred E. Neuman for "Mad". He was very active in gaming and medical illustration. His cover of Queen's album "News of the World" (1977) was a pastiche of his October 1953 cover illustration for Tom Godwin's "The Gulf Between" for "Astounding Science Fiction" magazine. Freas published several collections of his art, frequently gave presentations, and his work appeared in numerous exhibitions.
He was among the inaugural recipients of the Hugo Award for Best Artist in 1955 and received the next three, conferred under different award names, in 1956, 1958, and 1959. With six more Hugo awards to his name (1970 and 1972–76), he became the first person to receive ten Hugo awards (he was nominated 20 times). No other artist in science fiction has consistently matched his record. Freas was twice a Guest of Honor at Worldcon, at Chicon IV in 1982 and at Torcon 3 in 2003, although a fall suffered shortly before the latter convention precluded him from attending. He died in West Hills, California, and is buried in Oakwood Memorial Park Cemetery in Chatsworth. Freas's achievements include a Doctor of Arts from the Art Institute of Pittsburgh, conferred in December 2003. The Science Fiction Hall of Fame inducted him in 2006, the second artist after Chesley Bonestell.
https://en.wikipedia.org/wiki?curid=16987
Kangol Kangol is an English clothing company famous for its headwear. The name Kangol reflects the original production: the K was for knitting, the ANG for angora, and the OL for wool. Although no Kangol hat has ever actually been manufactured in Australia, the kangaroo logo was adopted by Kangol in 1983 because Americans commonly asked where they could get "the kangaroo hat". Founded in the 1920s by Polish Jewish World War I veteran Jacques Spreiregen, Kangol produced hats for workers, golfers, and especially soldiers. In 1938, Spreiregen, who was working in London as an importer, opened a factory at Cleator, Cumbria, England, which he ran with his nephew Joseph Meisner. A second factory was opened at nearby Frizington, and later, under the direction of Spreiregen's younger nephew Sylvain Meisner, a third factory in Carlisle, manufacturing motorcycle helmets and seat belts. They were the major beret suppliers to the armed forces during World War II. Kangol has been owned by Sports Direct since 2006, when it acquired the brand from private equity fund August Equity Trust. Licences to manufacture and sell Kangol apparel have been sold to many different companies, including D2 and Topshop. In 2002, the Kangol apparel brand was acquired by Kangol Clothing North America LLC, a subsidiary of Chesterfield Manufacturing Corp in Charlotte, North Carolina. In 2003, Chesterfield was acquired by Tomasello Inc., which was wholly owned and led by David W. Tomasello. The global rights to Kangol hats have been held by American hatmaker Bollman Hat Company since 2002. It was announced in February 2009 that Bollman were reviewing their worldwide operations, putting 33 jobs and the future of the Kangol head office in Cleator in doubt. On 6 April 2009, it was announced that the original factory would be converted to a warehouse with the loss of 25 jobs. Only seven employees now remain employed at the company's original site, and the outlet shop closed at the end of August 2009. However, hats would continue to be made at the company's sites in Eastern Europe and the United States. During World War II, the signature Kangol beret was famously worn by British Field Marshal Montgomery. In the 1960s, designers Mary Quant and Pierre Cardin worked with the company, whose products graced the heads of the rich and famous, including the Beatles and Arnold Palmer, and later Diana, Princess of Wales. The company also supplied uniformed organisations such as the Scout Association. In the 1980s Kangol berets entered a new phase of fashion history with their adoption by members of the hip-hop community, such as Grandmaster Flash, Run-DMC, LL Cool J, Slick Rick, Kangol Kid of UTFO, and The Notorious B.I.G. The brand was popularised even more by the 1991 movie "New Jack City". The release of more consciously stylish products in the 1990s, such as the furgora (angora-wool mix) Spitfire, was helped by its presence on the head of Samuel L. Jackson in 1997. Kevin Eubanks, bandleader for "The Tonight Show with Jay Leno", sported a Kangol beret on an almost nightly basis. In 2009, Eminem wore the Cotton Twill Army Cap Kangol hat in his "Beautiful" video.
https://en.wikipedia.org/wiki?curid=16989
Keith Moon Keith John Moon (23 August 1946 – 7 September 1978) was an English drummer for the rock band the Who. He was noted for his unique style and his eccentric, often self-destructive behaviour. Moon grew up in Alperton, a suburb of Wembley, in Middlesex, and took up the drums during the early 1960s. After playing with a local band, the Beachcombers, he joined the Who in 1964 before they recorded their first single. He remained with the band during their rise to fame, and was quickly recognised for his drumming style, which emphasised tom-toms, cymbal crashes, and drum fills. Throughout Moon's tenure with the Who his drum kit steadily grew in size, and (along with Ginger Baker) he has been credited as one of the earliest rock drummers to regularly employ double bass drums in his setup. Moon occasionally collaborated with other musicians and later appeared in films, but considered playing in the Who his primary occupation, and remained a member of the band until his death. In addition to his talent as a drummer, however, Moon developed a reputation for smashing his kit on stage and destroying hotel rooms on tour. He was fascinated by blowing up toilets with cherry bombs or dynamite, and by destroying television sets. Moon enjoyed touring and socialising, and became bored and restless when the Who were inactive. His 21st birthday party in Flint, Michigan, has been cited as a notorious example of decadent behaviour by rock groups. Moon suffered a number of setbacks during the 1970s, most notably the accidental death of chauffeur Neil Boland and the breakdown of his marriage. He became addicted to alcohol, particularly brandy and champagne, and acquired a reputation for decadence and dark humour; his nickname was "Moon the Loon". After moving to Los Angeles with personal assistant Peter "Dougal" Butler during the mid-1970s, Moon recorded his only solo album, the poorly received "Two Sides of the Moon". While touring with the Who, on several occasions he passed out on stage and was hospitalised. By the time of their final tour with him in 1976, and particularly during production of "The Kids Are Alright" and "Who Are You", the drummer's deterioration was evident. Moon moved back to London in 1978, dying in September of that year from an overdose of Heminevrin, a drug intended to treat or prevent symptoms of alcohol withdrawal. Moon's drumming continues to be praised by critics and musicians. He was posthumously inducted into the "Modern Drummer" Hall of Fame in 1982, becoming only the second rock drummer to be chosen, and in 2011, Moon was voted the second-greatest drummer in history by a "Rolling Stone" readers' poll. Keith John Moon was born to Alfred Charles (Alf) and Kathleen Winifred (Kit) Moon on 23 August 1946 at Central Middlesex Hospital in northwest London, and grew up in Wembley. He was hyperactive as a boy, with a restless imagination and a particular fondness for "The Goon Show" and music. Moon attended Alperton Secondary Modern School after failing his eleven plus exam, which precluded his attending a grammar school. His art teacher said in a report: "Retarded artistically. Idiotic in other respects". His music teacher wrote that Moon "has great ability, but must guard against a tendency to show off." Moon joined his local Sea Cadet Corps band at the age of twelve on the bugle, but found the instrument too difficult to learn and decided to take up drums instead. He was interested in practical jokes and home science kits, with a particular fondness for explosions. 
On his way home from school, Moon would often go to Macari's Music Studio on Ealing Road to practise on the drums there, learning his basic skills on the instrument. He left school at age fourteen, around Easter in 1961. Moon then enrolled at Harrow Technical College; this led to a job as a radio repairman, enabling him to buy his first drum kit. Moon took lessons from one of the loudest contemporary drummers, Screaming Lord Sutch's Carlo Little, at 10 shillings per lesson. Moon's early style was influenced by jazz, American surf music and rhythm and blues, exemplified by noted Los Angeles studio drummer Hal Blaine. His favourite musicians were jazz artists, particularly Gene Krupa (whose flamboyant style he subsequently copied). Moon also admired Elvis Presley's original drummer DJ Fontana, the Shadows' original drummer Tony Meehan and the Pretty Things' Viv Prince. He also enjoyed singing, with a particular interest in Motown. Moon idolised the Beach Boys; Roger Daltrey later said that given the opportunity, Moon would have left to play for the California band even at the peak of the Who's fame. During this time Moon joined his first serious band: the Escorts, replacing his best friend Gerry Evans. In December 1962 he joined the Beachcombers, a semi-professional London cover band playing hits by groups such as the Shadows. During his time in the group Moon incorporated theatrical tricks into his act, including "shooting" the group's lead singer with a starter pistol. The Beachcombers all had day jobs; Moon, who worked in the sales department at British Gypsum, had the keenest interest in turning professional. In April 1964, aged 17, he auditioned for the Who as a replacement for Doug Sandom. The Beachcombers continued as a local cover band after his departure. A commonly cited story of how Moon joined the Who is that he appeared at a show shortly after Sandom's departure, where a session drummer was used. Dressed in ginger clothes and with his hair dyed ginger (future bandmate Pete Townshend later described him as a "ginger vision"), he claimed to his would-be bandmates that he could play better; he played in the set's second half, nearly demolishing the drum kit in the process. In the words of the drummer, "they said go ahead, and I got behind this other guy's drums and did one song-'Road Runner.' I'd several drinks to get me courage up and when I got onstage I went arrgggGhhhh on the drums, broke the bass drum pedal and two skins, and got off. I figured that was it. I was scared to death. Afterwards I was sitting at the bar and Pete came over. He said: 'You ... come 'ere.' I said, mild as you please: 'Yes, yes?' And Roger, who was the spokesman then, said: 'What are you doing next Monday?' I said: 'Nothing.' I was working during the day, selling plaster. He said: 'You'll have to give up work ... there's this gig on Monday. If you want to come, we'll pick you up in the van.' I said: 'Right.' And that was it." Moon later claimed that he was never formally invited to join the Who permanently; when Ringo Starr asked how he had joined the band, he said he had "just been filling in for the last fifteen years." Moon's arrival in the Who changed the dynamics of the group. Sandom had generally been the peacemaker as Daltrey and Townshend feuded between themselves, but because of Moon's temperament the group now had four members frequently in conflict. "We used to fight regularly", remembered Moon in later years. 
"John [Entwistle] and I used to have fights – it wasn't very serious, it was more of an emotional spur-of-the moment thing." Moon also clashed with Daltrey and Townshend: "We really have absolutely nothing in common apart from music", he said in a later interview. Although Townshend described him as a "completely different person to anyone I've ever met", the pair had a rapport in the early years and enjoyed practical jokes and improvised comedy. Moon's drumming style affected the band's musical structure; although Entwistle initially found Moon's lack of conventional timekeeping problematic, it created an original sound. Moon was particularly fond of touring, since it was his only chance to regularly socialise with his bandmates, and was generally restless and bored when not playing live. This later carried over to other aspects of his life, as he acted them out (according to journalist and Who biographer Dave Marsh) "as if his life were one long tour." These antics earned him the nickname "Moon the Loon". Moon's style of drumming was considered unique by his bandmates, although they sometimes found his unconventional playing frustrating; Entwistle noted that he tended to play faster or slower according to his mood. "He wouldn't play across his kit", he later added. "He'd play zig-zag. That's why he had two sets of tom-toms. He'd move his arms forward like a skier." Daltrey said that Moon "just instinctively put drum fills in places that other people would never have thought of putting them." Who biographer John Atkins wrote that the group's early test sessions for Pye Records in 1964 show that "they seemed to have understood just how important was ... Moon's contribution." Contemporary critics questioned his ability to keep time, with biographer Tony Fletcher suggesting that the timing on "Tommy" was "all over the place." Who producer Jon Astley said, "You didn't think he was keeping time, but he was." Early recordings of Moon's drumming sound tinny and disorganised; it was not until the recording of "Who's Next", with Glyn Johns' no-nonsense production techniques and the need to keep time to a synthesizer track, that he began developing more discipline in the studio. Fletcher considers the drumming on this album to be the best of Moon's career. Unlike contemporary rock drummers such as Ginger Baker and John Bonham, Moon hated drum solos and refused to play them in concert. At a Madison Square Garden show on 10 June 1974, Townshend and Entwistle decided to spontaneously stop playing during "Waspman" to listen to Moon's drum solo. Moon continued briefly and then stopped, shouting "Drum solos are boring!" On 23 June 1977, he made a guest appearance at a Led Zeppelin concert in Los Angeles. Moon also aspired to sing lead vocal on some songs. While the other three members handled most of the onstage vocals, Moon would attempt to sing backup (particularly on "I Can't Explain"). He provided humorous commentary during song announcements, although sound engineer Bob Pridden preferred to mute his vocal microphone on the mixing desk whenever possible. Moon's knack for making his bandmates laugh around the microphone led them to banish him from the studio when vocals were being recorded; this led to a game in which Moon would sneak in to join the singing. At the end of "Happy Jack", Townshend can be heard saying "I saw ya!" to Moon as he tries to sneak into the studio. 
The drummer's interest in surf music and his desire to sing led to his performing lead vocals on several early tracks, including "Bucket T" and "Barbara Ann" ("Ready Steady Who" EP, 1966) and high backing vocals on other songs, such as "Pictures of Lily". Moon's performance on "Bell Boy" ("Quadrophenia", 1973) saw him abandon "serious" vocal performances to sing in character, which gave him (in Fletcher's words) "full licence to live up to his reputation as a lecherous drunk"; it was "exactly the kind of performance the Who needed from him to bring them back down to earth." Moon composed "I Need You", the instrumental "Cobwebs and Strange" (from the album "A Quick One", 1966), the single B-sides "In The City" (co-written with Entwistle) and "Girl's Eyes" (from "The Who Sell Out" sessions featured on "Thirty Years of Maximum R&B" and a 1995 re-release of "The Who Sell Out"), "Dogs Part Two" (1969), "Tommy's Holiday Camp" (1969) and "Waspman" (1972). Moon also co-composed "The Ox" (an instrumental from their debut album, "My Generation") with Townshend, Entwistle and keyboardist Nicky Hopkins. The setting for "Tommy's Holiday Camp" (from "Tommy") was credited to Moon; the song was primarily written by Townshend and, although there is a misconception that Moon sings on it, the album version is Townshend's demo. The drummer produced the violin solo on "Baba O'Riley". Moon sat in on congas with East of Eden at London's Lyceum Ballroom, and afterwards suggested to violinist Dave Arbus that he play on the track. Moon played a four- and later a five-piece drum kit during his early career. During much of 1964 and 1965 his setups consisted of Ludwig drums and Zildjian cymbals. He began to endorse Premier Drums in late 1965, and remained a loyal customer of the company. His first Premier kit was in red sparkle and featured two high toms. In 1966 Moon moved to an even larger kit, but without the customary hi-hat—at the time he preferred keeping backbeats with ride and crash cymbals. His new larger configuration was notable for the presence of two bass drums; he, along with Ginger Baker, has been credited as one of the early pioneers of double bass drumming in rock. This kit was not used at the Who's performance at the 1967 Monterey Pop Festival. From 1967 to 1969 Moon used the "Pictures of Lily" drum kit (named for its artwork), which had two bass drums, two floor toms and three mounted toms. In recognition of his loyalty to the company, Premier reissued the kit in 2006 as the "Spirit of Lily". By 1970 Moon had begun to use timbales, gongs and timpani, and these were included in his setup for the rest of his career. In 1973 Premier's marketing manager, Eddie Haynes, began consulting with Moon about specific requirements. At one point, Moon asked Premier to make a white kit with gold-plated fittings. When Haynes said that it would be prohibitively expensive, Moon replied: "Dear boy, do exactly as you feel it should be, but that's the way I want it." The kit was eventually fitted with copper fittings and later given to a young Zak Starkey. At an early show at the Railway Tavern in Harrow, Townshend smashed his guitar after accidentally breaking it. When the audience demanded he do it again, Moon kicked over his drum kit. Subsequent live sets culminated in what the band later described as "auto-destructive art", in which band members (particularly Moon and Townshend) elaborately destroyed their equipment. 
Moon developed a habit of kicking over his drums, claiming that he did so in exasperation at an audience's indifference. Townshend later said, "A set of skins is about $300 [then £96] and after every show he'd just go bang, bang, bang and then kick the whole thing over." In May 1966, Moon discovered that the Beach Boys' Bruce Johnston was visiting London. After the pair socialised for a few days, Moon and Entwistle brought Johnston to the set of "Ready Steady Go!", which made them late for a show with the Who that evening. During the finale of "My Generation", an altercation broke out on stage between Moon and Townshend which was reported on the front page of the "New Musical Express" the following week. Moon and Entwistle left the Who for a week (with Moon hoping to join the Animals or the Nashville Teens), but they changed their minds and returned. On the Who's early US package tour at the RKO 58th Street Theatre in New York in March and April 1967, Moon performed two or three shows a day, kicking over his drum kit after every show. Later that year, during their appearance on "The Smothers Brothers Comedy Hour", he bribed a stagehand to load gunpowder into one of his bass drums; the stagehand used about ten times the standard amount. During the finale of "My Generation", he kicked the drum off the riser and set off the charge. The intensity of the explosion singed Townshend's hair and embedded a piece of cymbal in Moon's arm. A clip of the incident became the opening scene for the film "The Kids Are Alright". Although Moon was known for kicking over his drum kit, Haynes claimed that it was done carefully and the kit rarely needed repairs. However, stands and foot pedals were frequently replaced; the drummer "would go through them like a knife through butter". While Moon generally said he was only interested in working with the Who, he participated in outside musical projects. In 1966 he worked with Yardbirds guitarist Jeff Beck, pianist Nicky Hopkins and future Led Zeppelin members Jimmy Page and John Paul Jones on the instrumental "Beck's Bolero", which was the B-side to "Hi Ho Silver Lining" and appeared on the album "Truth". Moon also played timpani on another track, a cover of Jerome Kern's "Ol' Man River". He was credited on the album as "You Know Who". Moon may have inspired the name for Led Zeppelin. When he briefly considered leaving the Who in 1966, he spoke with Entwistle and Page about forming a supergroup. Moon (or Entwistle) remarked that a particular suggestion had gone down like a "lead zeppelin" (a play on "lead balloon"). Although that supergroup was never formed, Page remembered the phrase and later adapted it as the name of his new band. The Beatles became friends with Moon, and this led to occasional collaborations. In 1967, he contributed backing vocals to "All You Need Is Love". On 15 December 1969, Moon joined John Lennon's Plastic Ono Band for a live performance at the Lyceum Theatre in London for a UNICEF charity concert. In 1972 the performance was released as a companion disc to Lennon and Ono's album, "Some Time in New York City". Moon's friendship with Entwistle led to an appearance on "Smash Your Head Against the Wall", Entwistle's first solo album and the first by a member of the Who. Moon did not play drums on the album; Jerry Shirley did, with Moon providing percussion. "Rolling Stone"s John Hoegel appreciated Entwistle's decision not to let Moon drum, saying that it distanced his album from the familiar sound of the Who. 
Moon became involved in solo work when he moved to Los Angeles during the mid-1970s. Track Records-MCA released a Moon solo single in 1974, comprising cover versions of the Beach Boys' "Don't Worry, Baby" and Ricky Nelson's "Teenage Idol". The following year he released his only solo album, entitled "Two Sides of the Moon". Although it featured Moon on vocals, he played drums on only three tracks; most of the drumming was left to others (including Ringo Starr, session musicians Curly Smith and Jim Keltner, and actor-musician Miguel Ferrer). The album was received poorly by critics. "New Musical Express"s Roy Carr wrote, "Moonie, if you didn't have talent, I wouldn't care; but you have, which is why I'm not about to accept "Two Sides of the Moon"." Dave Marsh, reviewing the album in "Rolling Stone", wrote: "There isn't any legitimate reason for this album's existence." During one of his few televised solo drum performances (for ABC's "Wide World"), Moon played a five-minute drum solo dressed as a cat on transparent acrylic drums filled with water and goldfish. When asked by an audience member what would happen to the kit, he joked that "even the best drummers get hungry." His performance was not appreciated by animal lovers, several of whom called the station with complaints. In the 2007 documentary film "Amazing Journey: The Story of the Who", Daltrey and Townshend reminisced about Moon's talent for dressing as (and embodying) a variety of characters. They remembered his dream of getting out of music and becoming a Hollywood film actor, although Daltrey did not think Moon had the patience and work ethic required of a professional actor. Who manager Bill Curbishley agreed that Moon "wasn't disciplined enough to actually turn up or commit to doing the stuff." Nevertheless, the drummer landed several acting roles. His first was in 1971, a cameo in Frank Zappa's "200 Motels" as a nun afraid of dying from a drug overdose. Although it only took 13 days to film, fellow cast member Howard Kaylan remembered Moon spending off-camera time at the Kensington Garden Hotel bar instead of sleeping. Moon's next film role was J.D. Clover, drummer for the fictional Stray Cats at a holiday camp during the early days of British rock 'n' roll, in 1973's "That'll Be the Day". He reprised the role for the film's 1974 sequel, "Stardust", and played Uncle Ernie in Ken Russell's 1975 film adaptation of "Tommy". Moon's last film appearance was in 1978's "Sextette", the last film to star Mae West. Moon led a destructive lifestyle. During the Who's early days he began taking amphetamines, and in an "NME" interview said his favourite food was "French Blues". He spent his share of the band's income quickly, and was a regular at London clubs such as the Speakeasy and The Bag O'Nails; the combination of pills and alcohol escalated into alcoholism and drug addiction later in his life. "[We] went through the same stages everybody goes through – the bloody drug corridor", he later reflected. "Drinking suited the group a lot better." According to Townshend, Moon began destroying hotel rooms when the Who stayed at the Berlin Hilton on tour in late 1966. In addition to hotel rooms, Moon destroyed friends' homes and even his own, throwing furniture from upper-storey windows and lighting fires. Andrew Neill and Matthew Kent estimated that his destruction of hotel toilets and plumbing cost as much as £300,000 ($500,000). These acts, often fuelled by drugs and alcohol, were Moon's way of demonstrating his eccentricity; he enjoyed shocking the public with them.
Longtime friend and personal assistant Butler observed, "He was trying to make people laugh and be Mr Funny, he wanted people to love him and enjoy him, but he would go so far. Like a train ride you couldn't stop." In a limousine on the way to the airport, Moon insisted they return to their hotel, saying "I forgot something." At the hotel he ran back to his room, grabbed the television and threw it out of the window into the swimming pool below. He then jumped back into the limo, saying "I nearly forgot." Fletcher argues that the Who's lengthy break between the end of their 1972 European tour and the beginning of the "Quadrophenia" sessions devastated Moon's health, as without the rigours of lengthy shows and regular touring that had previously kept him in shape, his hard-partying lifestyle took a greater toll on his body. He did not keep a drum kit or practise at Tara, and began to deteriorate physically as a result of his lifestyle. Around the same time he became a severe alcoholic, starting the day with drinks and changing from the "lovable boozer" he presented himself as to a "boorish drunk". David Puttnam recalled, "The drinking went from being a joke to being a problem. On "That'll Be the Day" it was social drinking. By the time "Stardust" came round it was hard drinking." Moon's favourite stunt was to flush powerful explosives down toilets. According to Fletcher, Moon's toilet pyrotechnics began in 1965 when he purchased a case of 500 cherry bombs. He moved from cherry bombs to M-80 fireworks to sticks of dynamite, which became his explosive of choice. "All that porcelain flying through the air was quite unforgettable," Moon remembered. "I never realised dynamite was so powerful. I'd been used to penny bangers before." He quickly developed a reputation for destroying bathrooms and blowing up toilets. The destruction mesmerised him, and enhanced his public image as rock's premier hell-raiser. Tony Fletcher wrote that "no toilet in a hotel or changing room was safe" until Moon had exhausted his supply of explosives. Pete Townshend walked into the bathroom of Moon's hotel room and noticed the toilet had disappeared, with only the S-bend remaining. The drummer explained that since a cherry bomb was about to explode, he had thrown it down the loo and showed Townshend the case of cherry bombs. "And of course from that moment on," the guitarist remembered, "we got thrown out of every hotel we ever stayed in." Entwistle recalled being close to Moon on tour and both were often involved in blowing up toilets. In a 1981 "Los Angeles Times" interview he admitted, "A lot of times when Keith was blowing up toilets I was standing behind him with the matches." A hotel manager called Moon in his room and asked him to lower the volume on his cassette recorder because it made "too much noise." In response the drummer asked him up to his room, excused himself to go to the bathroom, put a lit stick of dynamite in the toilet and shut the bathroom door. Upon returning, he asked the manager to stay for a moment, as he wanted to explain something. Following the explosion, Moon turned the recorder back on and said, "That, dear boy, was noise. This is the 'Oo." On 23 August 1967, on tour opening for Herman's Hermits, Moon celebrated what he said was his 21st birthday (although it was thought at the time to be his 20th) at a Holiday Inn in Flint, Michigan. Entwistle later said, "He decided that if it was a publicised fact that it was his 21st birthday, he would be able to drink." 
The drummer immediately began drinking upon his arrival in Flint. The Who spent the afternoon visiting local radio stations with Nancy Lewis (then the band's publicist), and Moon posed for a photo outside the hotel in front of a "Happy Birthday Keith" sign put up by the hotel management. According to Lewis, Moon was drunk by the time the band went onstage at Atwood Stadium. Returning to the hotel, Moon started a food fight and soon cake began flying through the air. The drummer knocked out part of his front tooth; at the hospital, doctors could not give him an anaesthetic (due to his inebriation) before removing the remainder of the tooth. Back at the hotel a mêlée erupted; fire extinguishers were set off, guests (and objects) thrown into the swimming pool and a piano reportedly destroyed. The chaos ended only when police arrived with guns drawn. A furious Holiday Inn management presented the groups with a bill for $24,000, which was reportedly settled by Herman's Hermits tour manager Edd McCann. Townshend claimed that the Who were banned for life from all of the hotel's properties, but Fletcher wrote that they stayed at a Holiday Inn in Rochester, New York a week later. He also disputed a widely held belief that Moon drove a Lincoln Continental into the hotel's swimming pool, as claimed by the drummer in a 1972 "Rolling Stone" interview. Moon's lifestyle began to undermine his health and reliability. During the 1973 Quadrophenia tour, at the Who's debut US date at the Cow Palace in Daly City, California, Moon ingested a mixture of tranquillisers and brandy. During the concert, Moon passed out on his drum kit during "Won't Get Fooled Again." The band stopped playing, and a group of roadies carried Moon offstage. They gave him a shower and an injection of cortisone, sending him back onstage after a thirty-minute delay. Moon passed out again during "Magic Bus," and was again removed from the stage. The band continued without him for several songs before Townshend asked, "Can anyone play the drums? – I mean somebody good?" A drummer in the audience, Scot Halpin, came up and played the rest of the show. During the opening date of the band's March 1976 US tour at the Boston Garden, Moon passed out over his drum kit after two numbers and the show was rescheduled. The next evening Moon systematically destroyed everything in his hotel room, cut himself doing so and passed out. He was discovered by manager Bill Curbishley, who took him to a hospital, telling him "I'm gonna get the doctor to get you nice and fit, so you're back within two days. Because I want to break your fucking jaw ... You have fucked this band around so many times and I'm not having it any more." Doctors told Curbishley that if he had not intervened, Moon would have bled to death. Marsh suggested that at this point Daltrey and Entwistle seriously considered firing Moon, but decided that doing so would make his life worse. Entwistle has said that Moon and the Who reached their live peak in 1975–76. At the end of the 1976 US tour in Miami that August, the drummer, delirious, was treated in Hollywood Memorial Hospital for eight days. The group was concerned that he would be unable to complete the last leg of the tour, which ended at Maple Leaf Gardens in Toronto on 21 October (Moon's last public show). During the band's recording sabbatical between 1976 and 1978, Moon gained a considerable amount of weight. 
By the time of the Who's invitation-only show at the Gaumont State Cinema on 15 December 1977 for "The Kids Are Alright", Moon was visibly overweight and had difficulty sustaining a solid performance. After recording "Who Are You", Townshend refused to follow the album with a tour unless Moon stopped drinking, and said that if Moon's playing did not improve he would be fired. Daltrey later denied threatening to fire him, but said that by this time the drummer was out of control. Because the Who's early stage act relied on smashing instruments, and owing to Moon's enthusiasm for damaging hotels, the group were in debt for much of the 1960s; Entwistle estimated they lost about £150,000. Even when the group became relatively financially stable after "Tommy", Moon continued to rack up debts. He bought a number of cars and gadgets, and flirted with bankruptcy. Moon's recklessness with money reduced his profit from the group's 1975 UK tour to £47.35. Before the 1998 release of Tony Fletcher's "Dear Boy: The Life of Keith Moon", Moon's date of birth was presumed to be 23 August 1947. This erroneous date appeared in several otherwise-reliable sources, including the Townshend-authorised biography "Before I Get Old: The Story of The Who". The incorrect date had been supplied by Moon in interviews before it was corrected by Fletcher to 1946. Moon's first serious relationship was with Kim Kerrigan, whom he started dating in January 1965 after she saw the Who play at Le Disque a Go! Go! in Bournemouth. By the end of the year she discovered she was pregnant; her parents, who were furious, met with the Moons to discuss their options, and she moved into the Moon family home in Wembley. She and Moon were married on 17 March 1966 at Brent Registry Office, and their daughter Amanda was born on 12 July. The marriage (and child) were kept secret from the press until May 1968. Moon was occasionally violent towards Kim: "if we went out after I had Mandy", she later said, "if someone talked to me, he'd lose it. We'd go home and he'd start a fight with me." He loved Amanda, but his absences due to touring and fondness for practical jokes made their relationship uneasy when she was very young. "He had no idea how to be a father", Kim said. "He was too much of a child himself." From 1971 to 1975 Moon owned Tara, a home in Chertsey where he initially lived with his wife and daughter. The Moons entertained extravagantly at home, and owned a number of cars. Jack McCullogh, then working for Track Records (the Who's label), recalled Moon ordering him to purchase a milk float to store in the garage at Tara. In 1973 Kim, convinced that neither she nor anyone else could moderate Keith's behaviour, left her husband and took Amanda; she sued for divorce in 1975 and later married Faces keyboard player Ian McLagan. Marsh believes that Moon never truly recovered from the loss of his family. Butler agrees; despite his relationship with Annette Walter-Lax, he believes that Kim was the only woman Moon loved. McLagan commented that Moon "couldn't handle it." Moon would harass them with phone calls, and on one occasion before Kim sued for divorce, he invited McLagan for a drink at a Richmond pub and sent several "heavies" to break into McLagan's home on Fife Road and look for Kim, forcing her to hide in a walk-in closet. She died in a car accident in Austin, Texas, on 2 August 2006.
In 1975 Moon began a relationship with Swedish model Annette Walter-Lax, who later said that Moon was "so sweet when he was sober, that I was just living with him in the hope that he would kick all this craziness." She begged Malibu neighbour Larry Hagman to check Moon into a clinic to dry out (as he had attempted to do before), but when doctors recorded Moon's chemical intake at breakfast – a bottle of champagne, Courvoisier and amphetamines – they concluded that there was no hope for his rehabilitation. Moon enjoyed being the life of the party. Bill Curbishley remembered that "he wouldn't walk into any room and just listen. He was an attention seeker and he had to have it." Early in the Who's career, Moon got to know the Beatles. He would join them at clubs, forming a particularly close friendship with Ringo Starr. Moon later became friends with Bonzo Dog Doo-Dah Band members Vivian Stanshall and "Legs" Larry Smith, and the trio would drink and play practical jokes together. Smith remembers one occasion where he and Moon tore apart a pair of trousers, with an accomplice later looking for one-legged trousers. In the early 1970s Moon helped Stanshall with his "Radio Flashes" radio show for BBC Radio 1, filling in for the vacationing John Peel (see Rawlinson End Radio Flashes). Moon filled in for Peel in 1973's "A Touch of the Moon", a series of four programmes produced by John Walters. Guitarist Joe Walsh enjoyed socialising with Moon. In an interview with "Guitar World" magazine, he recalled that the drummer "taught me how to break things." In 1974, Moon struck up a friendship with actor Oliver Reed while working on the film version of "Tommy". Although Reed matched Moon drink for drink, he appeared on set the next morning ready to perform; Moon, on the other hand, would cost several hours of filming time. Reed later said that Moon "showed me the way to insanity." Peter "Dougal" Butler began working for the Who in 1967, becoming Moon's personal assistant the following year to help him stay out of trouble. He remembers managers Kit Lambert and Chris Stamp saying, "We trust you with Keith but if you ever want any time off, for a holiday or some sort of rest, let us know and we'll pay for it." Butler never took them up on the offer. He followed Moon when the drummer relocated to Los Angeles, but felt that the drug culture prevalent at the time was bad for Moon: "My job was to have eyes in the back of my head." Townshend agreed, saying that by 1975 Butler had "no influence over him whatsoever." Although he was a loyal companion to Moon, the lifestyle eventually became too much for him; he phoned Curbishley, saying that they needed to move back to England or one of them might die. Butler quit in 1978, and later wrote of his experiences in a book entitled "Full Moon: The Amazing Rock and Roll Life of Keith Moon". On 4 January 1970 Moon accidentally killed his friend, driver and bodyguard, Neil Boland, outside the Red Lion pub in Hatfield, Hertfordshire. Pub patrons had begun to attack his Bentley; Moon, drunk, began driving to escape them. During the fracas, he hit Boland. After an investigation, the coroner ruled Boland's death an accident; Moon, having been charged with a number of offences, received an absolute discharge. Those close to Moon said that he was haunted by Boland's death for the rest of his life. According to Pamela Des Barres, Moon had nightmares (which woke them both) about the incident and said he had no right to be alive. 
In mid-1978 Moon moved into Flat 12, 9 Curzon Place (later Curzon Square), Shepherd Market, Mayfair, London, renting from Harry Nilsson. Cass Elliot had died there four years earlier, at the age of 32; Nilsson was concerned about letting the flat to Moon, believing it was cursed. Townshend disagreed, assuring him that "lightning wouldn't strike the same place twice". After moving in, Moon began a prescribed course of Heminevrin (clomethiazole, a sedative) to alleviate his alcohol withdrawal symptoms. He wanted to get sober, but due to his fear of psychiatric hospitals he wanted to do it at home. Clomethiazole is discouraged for unsupervised detoxification because of its addictive potential, its tendency to induce tolerance, and its risk of death when mixed with alcohol. The pills were prescribed by Geoffrey Dymond, a physician who was unaware of Moon's lifestyle. Dymond prescribed a bottle of 100 pills, instructing him to take one pill when he felt a craving for alcohol but not more than three pills per day. By September 1978 Moon was having difficulty playing the drums, according to roadie Dave "Cy" Langston. After seeing Moon in the studio trying to overdub drums for "The Kids Are Alright", he said, "After two or three hours, he got more and more sluggish, he could barely hold a drum stick." On 6 September, Moon and Walter-Lax were guests of Paul and Linda McCartney at a preview of a film, "The Buddy Holly Story". After dining with the McCartneys at Peppermint Park in Covent Garden, Moon and Walter-Lax returned to their flat. He watched a film ("The Abominable Dr. Phibes"), and asked Walter-Lax to cook him steak and eggs. When she objected, Moon replied, "If you don't like it, you can fuck off!" These were his last words. Moon then took 32 clomethiazole tablets. When Walter-Lax checked on him the following afternoon, she discovered he was dead. Curbishley phoned the flat at around 5 pm looking for Moon, and Dymond gave him the news. Curbishley told Townshend, who informed the rest of the band. Entwistle was giving an interview to French journalists when he was interrupted by a phone call with the news of Moon's death. Trying to tactfully and quickly end the interview, he broke down and wept when the journalist asked him about the Who's future plans. Moon's death came shortly after the release of "Who Are You". On the album cover, he is straddling a chair to hide his weight gain; the words "Not to be taken away" are on the back of the chair. Police determined that there were 32 clomethiazole pills in Moon's system. Six were digested, sufficient to cause his death; the other 26 were undigested when he died. Max Glatt, an authority on alcoholism, wrote in "The Sunday Times" that Moon should never have been given the drug. Moon was cremated on 13 September 1978 at Golders Green Crematorium in London, and his ashes were scattered in its Gardens of Remembrance. Townshend convinced Daltrey and Entwistle to carry on touring as The Who, although he later said that it was his means of coping with Moon's death and "completely irrational, bordering on insane". AllMusic's Bruce Eder said, "When Keith Moon died, the Who carried on and were far more competent and reliable musically, but that wasn't what sold rock records." In November 1978, Faces drummer Kenney Jones joined the Who. Townshend later said that Jones "was one of the few British drummers who could fill Keith's shoes"; Daltrey was less enthusiastic, saying that Jones "wasn't the right style". 
Keyboardist John "Rabbit" Bundrick, who had rehearsed with Moon earlier in the year, joined the live band as an unofficial member. Jones left the Who in 1988, and drummer Simon Phillips (who praised Moon's ability to drum over the backing track of "Baba O'Riley") toured with the band the following year. Since 1996, the Who's drummer has been Ringo Starr's son Zak Starkey, who had been given a drum kit by Moon (whom he called "Uncle Keith"). Starkey had previously toured in 1994 with Roger Daltrey. The London 2012 Summer Olympic Committee contacted Curbishley about Moon performing at the games, 34 years after his death. In an interview with "The Times" Curbishley quipped, "I emailed back saying Keith now resides in Golders Green Crematorium, having lived up to the Who's anthemic line 'I hope I die before I get old' ... If they have a round table, some glasses and candles, we might contact him." Moon's drumming has been praised by critics. Author Nick Talevski described him as "the greatest drummer in rock," adding that "he was to the drums what Jimi Hendrix was to the guitar." Holly George-Warren, editor and author of "The Rock and Roll Hall of Fame: The First 25 Years", said: "With the death of Keith Moon in 1978, rock arguably lost its single greatest drummer." According to Eder, "Moon, with his manic, lunatic side, and his life of excessive drinking, partying, and other indulgences, probably represented the youthful, zany side of rock & roll, as well as its self-destructive side, better than anyone else on the planet." "The New Book of Rock Lists" ranked Moon No. 1 on its list of "50 Greatest Rock 'n' Roll Drummers," and he was ranked No. 2 on the 2011 "Rolling Stone" "Best Drummers of All Time" readers' poll. In 2016, the same magazine ranked him No. 2 in their list of the 100 Greatest Drummers of All Time, behind John Bonham. Adam Budofsky, editor of "Drummer" magazine, said that Moon's performances on "Who's Next" and "Quadrophenia" "represent a perfect balance of technique and passion" and "there's been no drummer who's touched his unique slant on rock and rhythm since." Several rock drummers, including Neil Peart and Dave Grohl, have cited Moon as an influence. The Jam paid homage to Moon on the second single from their third album, "Down in the Tube Station at Midnight"; the B-side of the single is a Who cover ("So Sad About Us"), and the back cover of the record has a photo of Moon's face. The Jam's single was released about a month after Moon's death. Animal, one of Jim Henson's Muppet characters, may have been based on Keith Moon due to their similar hair, eyebrows, personality and drumming style. Jazz drummer Elvin Jones praised Moon's work during "Underture", as integral to the song's effect. Ray Davies notably lauded Moon's drumming during his speech for the Kinks' induction into the Rock and Roll Hall of Fame, in 1990:"...Keith Moon changed the sound of drumming." "God bless his beautiful heart ..." Ozzy Osbourne told "Sounds" a month after the drummer's death. "People will be talking about Keith Moon 'til they die, man. Someone somewhere will say, 'Remember Keith Moon?' Who will remember Joe Bloggs who got killed in a car crash? No one. He's dead, so what? He didn't do anything to talk of." Clem Burke of Blondie has said "Early on all I cared about was Keith Moon and the Who. When I was about eleven or twelve, my favourite part of drum lessons was the last ten minutes, when I'd get to sit at the drumset and play along to my favourite record. I'd bring in 'My Generation'. 
At the end of the song, the drums go nuts. 'My Generation' was a turning point for me because before that it was all the Charlie Watts and Ringo type of thing." In 1998 Tony Fletcher published a biography of Moon, "Dear Boy: The Life of Keith Moon", in the United Kingdom. The phrase "Dear Boy" became a catchphrase of Moon's when, influenced by Kit Lambert, he began affecting a pompous English accent. In 2000, the book was released in the US as "Moon (The Life and Death of a Rock Legend)". "Q Magazine" called the book "horrific and terrific reading", and "Record Collector" said it was "one of rock's great biographies." In 2008, English Heritage declined an application for Moon to be awarded a blue plaque. Speaking to "The Guardian", Christopher Frayling said they "decided that bad behaviour and overdosing on various substances wasn't a sufficient qualification." The UK's Heritage Foundation disagreed with the decision, presenting a plaque which was unveiled on 9 March 2009. Daltrey, Townshend, Robin Gibb and Moon's mother Kit were present at the ceremony.
https://en.wikipedia.org/wiki?curid=16991
Kerosene Kerosene, also known as paraffin, lamp oil, and coal oil (an obsolete term), is a combustible hydrocarbon liquid which is derived from petroleum. It is widely used as a fuel in aviation as well as in households. Its name derives from the Greek κηρός ("keros"), meaning "wax", and was registered as a trademark by Canadian geologist and inventor Abraham Gesner in 1854 before evolving into a genericized trademark. It is sometimes spelled kerosine in scientific and industrial usage. The term kerosene is common in much of Argentina, Australia, Canada, India, New Zealand, Nigeria, and the United States, while the term paraffin (or a closely related variant) is used in Chile, eastern Africa, South Africa, Norway, and the United Kingdom. The term lamp oil, or the equivalent in the local languages, is common in the majority of Asia. Liquid paraffin (called mineral oil in the US) is a more viscous and highly refined product which is used as a laxative. Paraffin wax is a waxy solid extracted from petroleum. Kerosene is widely used to power jet engines of aircraft (jet fuel) and some rocket engines, and is also commonly used as a cooking and lighting fuel, and for fire toys such as poi. In parts of Asia, kerosene is sometimes used as fuel for small outboard motors or even motorcycles. World total kerosene consumption for all purposes is equivalent to about 1.2 million barrels per day. To prevent confusion between kerosene and the much more flammable and volatile gasoline, some jurisdictions regulate markings or colorings for containers used to store or dispense kerosene. For example, in the United States, Pennsylvania requires that portable containers used at retail service stations for kerosene be colored blue, as opposed to red (for gasoline) or yellow (for diesel fuel). Kerosene is a low-viscosity, clear liquid formed from hydrocarbons obtained from the fractional distillation of petroleum between 150 and 275 °C (300 and 525 °F), resulting in a mixture with a density of 0.78–0.81 g/cm³ composed of carbon chains that typically contain between 10 and 16 carbon atoms per molecule. It is miscible in petroleum solvents but immiscible in water. The distribution of hydrocarbon length in the mixture making up kerosene ranges from a number of carbon atoms of C6 to C20, although typically kerosene predominantly contains C9 to C16 range hydrocarbons. The ASTM International standard specification D-3699-78 recognizes two grades of kerosene: grades 1-K (less than 0.04% sulfur by weight) and 2-K (0.3% sulfur by weight). 1-K grade kerosene burns cleaner with fewer deposits, fewer toxins, and less frequent maintenance than 2-K grade kerosene, and is the preferred grade of kerosene for indoor kerosene heaters and stoves. Regardless of crude oil source or processing history, kerosene's major components are branched and straight chain alkanes and naphthenes (cycloalkanes), which normally account for at least 70% by volume. Aromatic hydrocarbons in this boiling range, such as alkylbenzenes (single ring) and alkylnaphthalenes (double ring), do not normally exceed 25% by volume of kerosene streams. Olefins are usually not present at more than 5% by volume. The flash point of kerosene is between 37 and 65 °C (100 and 150 °F), and its autoignition temperature is 220 °C (428 °F). The freeze point of kerosene depends on grade, with commercial aviation fuel standardized at −47 °C (−53 °F). 1-K grade kerosene freezes around −40 °C (−40 °F, 233 K). The heat of combustion of kerosene is similar to that of diesel fuel; its lower heating value is 43.1 MJ/kg (around 18,500 Btu/lb), and its higher heating value is 46.2 MJ/kg (around 19,900 Btu/lb).
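As a quick arithmetic check on the heating values quoted above, the following short Python sketch converts a specific energy from MJ/kg to Btu/lb using the standard definitions 1 Btu = 1055.06 J and 1 lb = 0.45359237 kg; the function name is illustrative, not taken from any source.

```python
# Minimal sketch: convert specific energy from MJ/kg to Btu/lb.
J_PER_BTU = 1055.06      # joules per British thermal unit (ISO definition)
KG_PER_LB = 0.45359237   # kilograms per avoirdupois pound (exact)

def mj_per_kg_to_btu_per_lb(mj_per_kg: float) -> float:
    """Convert a specific energy from MJ/kg to Btu/lb."""
    return mj_per_kg * 1e6 * KG_PER_LB / J_PER_BTU

print(round(mj_per_kg_to_btu_per_lb(43.1)))  # ~18530, matching "around 18,500 Btu/lb"
print(round(mj_per_kg_to_btu_per_lb(46.2)))  # ~19862, i.e. around 19,900 Btu/lb
```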
In the United Kingdom, two grades of heating oil are defined. BS 2869 Class C1 is the lightest grade, used for lanterns, camping stoves, and wick heaters, and mixed with gasoline in some vintage combustion engines as a substitute for tractor vaporising oil. BS 2869 Class C2 is a heavier distillate, which is used as domestic heating oil. Premium kerosene is usually sold in containers from hardware, camping and garden stores and is often dyed purple. Standard kerosene is usually dispensed in bulk by a tanker and is undyed. National and international standards define the properties of several grades of kerosene used for jet fuel. Flash point and freezing point properties are of particular interest for operation and safety; the standards also define additives for control of static electricity and other purposes. The process of distilling crude oil/petroleum into kerosene, as well as other hydrocarbon compounds, was first written about in the 9th century by the Persian scholar Rāzi (or Rhazes). In his "Kitab al-Asrar" ("Book of Secrets"), the physician and chemist Razi described two methods for the production of kerosene, termed "naft abyad" (نفط ابيض, "white naphtha"), using an apparatus called an alembic. One method used clay as an absorbent, whereas the other method used ammonium chloride ("sal ammoniac"). The distillation process was repeated until most of the volatile hydrocarbon fractions had been removed and the final product was perfectly clear and safe to burn. Kerosene was also produced during the same period from oil shale and bitumen by heating the rock to extract the oil, which was then distilled. During the Ming Dynasty, the Chinese made use of kerosene by extracting and purifying petroleum and then converting it into lamp fuel. The Chinese had made use of petroleum for lighting lamps and heating homes as early as 1500 BC. Although "coal oil" was well known to industrial chemists at least as early as the 1700s as a byproduct of making coal gas and coal tar, it burned with a smoky flame that prevented its use for indoor illumination. In cities, much indoor illumination was provided by piped-in coal gas, but outside the cities, and for spot lighting within the cities, the lucrative market for fueling indoor lamps was supplied by whale oil, specifically that from sperm whales, which burned brighter and cleaner. Canadian geologist Abraham Pineo Gesner claimed that in 1846 he had given a public demonstration in Charlottetown, Prince Edward Island, of a new process he had discovered.
https://en.wikipedia.org/wiki?curid=16992
Kundalini Kundalini (Sanskrit: कुण्डलिनी, "kuṇḍalinī", "coiled snake"), in Hinduism, is a form of divine feminine energy (or "shakti") believed to be located at the base of the spine, in the "muladhara". It is an important concept in Śaiva Tantra, where it is believed to be a force or power associated with the divine feminine. This energy, when cultivated and awakened through tantric practice, is believed to lead to spiritual liberation. Kuṇḍalinī is associated with Paradevi or Adi Parashakti, the supreme being in Shaktism, and with the goddesses Bhairavi and Kubjika. The term, along with practices associated with it, was adopted into Hatha yoga in the 9th century. It has since been adopted into other forms of Hinduism as well as modern spirituality and New Age thought. Kuṇḍalinī awakenings have been described as occurring by means of a variety of methods. Many systems of yoga focus on awakening Kuṇḍalinī through meditation, pranayama breathing, the practice of asana, and the chanting of mantras. Kundalini Yoga is influenced by the Shaktism and Tantra schools of Hinduism. It derives its name from its focus upon the awakening of kundalini energy through regular practice of mantra, tantra, yantra, asana or meditation. The Kuṇḍalinī experience is frequently reported to be a distinct feeling of electric current running along the spine. The concept of Kuṇḍalinī is mentioned in the Upanishads (9th – 7th centuries BCE). The Sanskrit adjective "kuṇḍalin" means "circular, annular". It is mentioned as a noun for "snake" (in the sense of "coiled") in the 12th-century "Rajatarangini" chronicle (I.2). "Kuṇḍa" (a noun meaning "bowl, water-pot") is found as the name of a Nāga (serpent deity) in Mahabharata 1.4828. The 8th-century "Tantrasadbhava Tantra" uses the term "kundalī" ("ring, bracelet; coil (of a rope)"). The use of "kuṇḍalī" as a name for the goddess Durga (a form of Shakti) appears often in Tantrism and Shaktism from as early as the 11th century, in the "Śaradatilaka". It was adopted as a technical term in Hatha yoga during the 15th century, and became widely used in the Yoga Upanishads by the 16th century. Eknath Easwaran has paraphrased the term as "the coiled power", a force which ordinarily rests at the base of the spine, described as being "coiled there like a serpent". Kuṇḍalinī arose as a central concept in Śaiva Tantra, especially among the Śākta cults like the Kaula. In these Tantric traditions, Kuṇḍalinī is "the innate intelligence of embodied Consciousness". The first possible mention of the term is in the "Tantrasadbhāva-tantra" (8th century), though other earlier tantras mention the visualization of Śakti in the central channel and the upward movement of prana or vital force (which is often associated with Kuṇḍalinī in later works). According to David Gordon White, this feminine spiritual force is also termed "bhogavati", which has a double meaning of "enjoyment" and "coiled" and signifies her strong connection to bliss and pleasure, both mundane physical pleasure and the bliss of spiritual liberation (moksha), which is the enjoyment of Shiva's creative activity and sexual union with the Goddess. In the influential Śākta tradition called Kaula, Kuṇḍalinī is seen as a "latent innate spiritual power", associated with the goddess Kubjika (lit. "the crooked one"), who is the supreme Goddess (Paradevi). She is also pure bliss and power (Śakti), the source of all mantras, and resides in the six chakras along the central channel.
In Śaiva Tantra, various practices like pranayama, bandhas, mantra recitation and tantric ritual were used in order to awaken this spiritual power and create a state of bliss and spiritual liberation. According to Abhinavagupta, the great tantric scholar and master of the Kaula and Trika lineages, there are two main forms of Kuṇḍalinī, an upward moving Kuṇḍalinī ("urdhva") associated with expansion, and a downward moving Kuṇḍalinī ("adha") associated with contraction. According to the scholar of comparative religion Gavin Flood, Abhinavagupta links Kuṇḍalinī with "the power that brings into manifestation the body, breath, and experiences of pleasure and pain", with "the power of sexuality as the source of reproduction" and with: "the force of the syllable "ha" in the mantra and the concept of "aham", the supreme subjectivity as the source of all, with "a" as the initial movement of consciousness and "m" its final withdrawal. Thus we have an elaborate series of associations, all conveying the central conception of the cosmos as a manifestation of consciousness, of pure subjectivity, with Kuṇḍalinī understood as the force inseparable from consciousness, who animates creation and who, in her particularised form in the body, causes liberation through her upward, illusion-shattering movement." According to William F. Williams, Kundalini is a type of religious experience within the Hindu tradition, within which it is held to be a kind of "cosmic energy" that accumulates at the base of the spine. When awakened, Kundalini is described as rising up from the muladhara chakra, through the central nadi (called "sushumna") inside or alongside the spine, reaching the top of the head. The progress of Kundalini through the different chakras is believed to achieve different levels of awakening and a mystical experience, until Kundalini finally reaches the top of the head, Sahasrara or crown chakra, producing an extremely profound transformation of consciousness. Swami Sivananda Saraswati of the Divine Life Society stated in his book "Kundalini Yoga" that "Supersensual visions appear before the mental eye of the aspirant, new worlds with indescribable wonders and charms unfold themselves before the Yogi, planes after planes reveal their existence and grandeur to the practitioner and the Yogi gets divine knowledge, power and bliss, in increasing degrees, when Kundalini passes through Chakra after Chakra, making them to bloom in all their glory..." Reports about the Sahaja Yoga technique of Kundalini awakening state that the practice can result in a cool breeze felt on the fingertips as well as the fontanel bone area. Yogis such as Muktananda consider that Kundalini can be awakened by "shaktipat" (spiritual transmission by a Guru or teacher), or by spiritual practices such as yoga or meditation. The "passive approach" is instead a path of surrender where one lets go of all the impediments to the awakening rather than trying to actively awaken Kundalini. A chief part of the passive approach is shaktipat, where one individual's Kundalini is awakened by another who already has the experience. Shaktipat only raises Kundalini temporarily but gives the student an experience to use as a basis. The yogi and writer Gopi Krishna came to believe: "As the ancient writers have said, it is the vital force or prana which is spread over both the macrocosm, the entire Universe, and the microcosm, the human body... The atom is contained in both of these.
Prana is life-energy responsible for the phenomena of terrestrial life and for life on other planets in the universe. Prana in its universal aspect is immaterial. But in the human body, Prana creates a fine biochemical substance which works in the whole organism and is the main agent of activity in the nervous system and in the brain. The brain is alive only because of Prana..." The American comparative religions scholar Joseph Campbell describes the concept of Kundalini as "the figure of a coiled female serpent—a serpent goddess not of "gross" but "subtle" substance—which is to be thought of as residing in a torpid, slumbering state in a subtle center, the first of the seven, near the base of the spine: the aim of the yoga then being to rouse this serpent, lift her head, and bring her up a subtle nerve or channel of the spine to the so-called "thousand-petaled lotus" (Sahasrara) at the crown of the head... She, rising from the lowest to the highest lotus center, will pass through and wake the five between, and with each waking, the psychology and personality of the practitioner will be altogether and fundamentally transformed." According to the "Goraksasataka", or "Hundred Verses of Goraksa", hatha yoga practices such as mula bandha, uddiyana bandha, jalandhara bandha and kumbhaka can awaken Kundalini. Another hathayoga text, the "Khecarīvidyā", states that khechari mudra enables one to raise Kundalini and access various stores of amrita in the head, which subsequently flood the body. The spiritual teacher Meher Baba emphasized the need for a master when actively trying to awaken Kundalini: "Kundalini is a latent power in the higher body. When awakened, it pierces through six chakras or functional centers and activates them. Without a master, the awakening of the kundalini cannot take anyone very far on the Path; and such indiscriminate or premature awakening is fraught with dangers of self-deception as well as the misuse of powers. The kundalini enables man to consciously cross the lower planes and it ultimately merges into the universal cosmic power of which it is a part, and which also is at times described as kundalini ... The important point is that the awakened kundalini is helpful only up to a certain degree, after which it cannot ensure further progress. It cannot dispense with the need for the grace of a Perfect Master." The experience of Kundalini awakening can happen when one is either prepared or unprepared. According to Hindu tradition, in order to be able to integrate this spiritual energy, a period of careful purification and strengthening of the body and nervous system is usually required beforehand. Yoga and Tantra propose that Kundalini can be awakened by a guru (teacher), but body and spirit must be prepared by yogic austerities, such as pranayama, or breath control, physical exercises, visualization, and chanting. The student is advised to follow the path in an open-hearted manner. Traditionally, people visited ashrams in India to awaken their dormant kundalini energy with regular meditation, mantra chanting, spiritual studies and physical asana practice such as kundalini yoga. Kundalini is considered to occur in the chakra and nadis of the subtle body. Each chakra is said to contain special characteristics and, with proper training, moving Kundalini through these chakras can help express or open these characteristics. Kundalini is described as a sleeping, dormant potential force in the human organism.
It is one of the components of an esoteric description of the "subtle body", which consists of nadis (energy channels), chakras (psychic centres), prana (subtle energy), and bindu (drops of essence). Kundalini is described as being coiled up at the base of the spine. The description of the location can vary slightly, from the rectum to the navel. Kundalini is said to reside in the triangular sacrum bone in three and a half coils. Swami Vivekananda describes Kundalini briefly in his book "Raja Yoga" as follows: According to the Yogis, there are two nerve currents in the spinal column, called Pingalâ and Idâ, and a hollow canal called Sushumnâ running through the spinal cord. At the lower end of the hollow canal is what the Yogis call the "Lotus of the Kundalini". They describe it as triangular in form in which, in the symbolical language of the Yogis, there is a power called the Kundalini, coiled up. When that Kundalini awakens, it tries to force a passage through this hollow canal, and as it rises step by step, as it were, layer after layer of the mind becomes open and all the different visions and wonderful powers come to the Yogi. When it reaches the brain, the Yogi is perfectly detached from the body and mind; the soul finds itself free. We know that the spinal cord is composed in a peculiar manner. If we take the figure eight horizontally (∞), there are two parts which are connected in the middle. Suppose you add eight after eight, piled one on top of the other, that will represent the spinal cord. The left is the Ida, the right Pingala, and that hollow canal which runs through the center of the spinal cord is the Sushumna. Where the spinal cord ends in some of the lumbar vertebrae, a fine fiber issues downwards, and the canal runs up even within that fiber, only much finer. The canal is closed at the lower end, which is situated near what is called the sacral plexus, which, according to modern physiology, is triangular in form. The different plexuses that have their centers in the spinal canal can very well stand for the different "lotuses" of the Yogi. When Kundalini Shakti is conceived as a goddess, then, when it rises to the head, it unites itself with the Supreme Being (Lord Shiva). The aspirant then becomes engrossed in deep meditation and infinite bliss. Sir John Woodroffe (1865–1936) – also known by his pseudonym Arthur Avalon – was a British Orientalist whose published works stimulated a far-reaching interest in Hindu philosophy and Yogic practices. While serving as a High Court Judge in Calcutta, he studied Sanskrit and Hindu Philosophy, particularly as it related to Hindu Tantra. He translated numerous original Sanskrit texts and lectured on Indian philosophy, Yoga and Tantra. His book, "The Serpent Power: The Secrets of Tantric and Shaktic Yoga", became a major source for many modern Western adaptations of Kundalini yoga practice. It presents an academically and philosophically sophisticated translation of, and commentary on, two key Eastern texts: "Shatchakranirūpana" (Description and Investigation into the Six Bodily Centers), written by Tantrik Pūrnānanda Svāmī (1526), and the "Paduka-Pancakā" (Five-fold Footstool of the Guru), from the Sanskrit of a commentary by Kālīcharana. The Sanskrit term "Kundali Shakti" translates as "Serpent Power". Kundalini is thought to be an energy released within an individual using specific meditation techniques.
It is represented symbolically as a serpent coiled at the base of the spine. In his book "Artistic Form and Yoga in the Sacred Images of India", Heinrich Zimmer wrote in praise of the writings of Sir John Woodroffe. When Woodroffe later commented upon the reception of his work, he clarified his objective: "All the world (I speak of course of those interested in such subjects) is beginning to speak of Kundalinî Shakti." He described his intention as follows: "We, who are foreigners, must place ourselves in the skin of the Hindu, and must look at their doctrine and ritual through their eyes and not our own." Western awareness of kundalini was strengthened by the interest of the Swiss psychiatrist and psychoanalyst Carl Jung (1875–1961). Jung's seminar on Kundalini yoga, presented to the Psychological Club in Zurich in 1932, was widely regarded as a milestone in the psychological understanding of Eastern thought and of the symbolic transformations of inner experience. Kundalini yoga presented Jung with a model for the developmental phases of higher consciousness, and he interpreted its symbols in terms of the process of individuation, with sensitivity towards a new generation's interest in alternative religions and psychological exploration. In the introduction to Jung's book "The Psychology of Kundalini Yoga", Sonu Shamdasani puts forth: "The emergence of depth psychology was historically paralleled by the translation and widespread dissemination of the texts of yoga... for the depth psychologies sought to liberate themselves from the stultifying limitations of Western thought to develop maps of inner experience grounded in the transformative potential of therapeutic practices. A similar alignment of "theory" and "practice" seemed to be embodied in the yogic texts that moreover had developed independently of the bindings of Western thought. Further, the initiatory structure adopted by institutions of psychotherapy brought its social organization into proximity with that of yoga. Hence, an opportunity for a new form of comparative psychology opened up." The American writer William Buhlman began to conduct an international survey of out-of-body experiences in 1969 in order to gather information about symptoms: sounds, vibrations and other phenomena that commonly occur at the time of the OBE event. His primary interest was to compare the findings with reports made by yogis, such as Gopi Krishna, who have made reference to similar phenomena, such as the 'vibrational state', as components of their kundalini-related spiritual experience. Sri Aurobindo was the other great scholarly authority on Kundalini, with a viewpoint parallel to that of Woodroffe but of a somewhat different slant, according to Mary Scott, herself a latter-day scholar on Kundalini and its physical basis, and a former member of the Theosophical Society. Kundalini references may be found in a number of New Age presentations, and the word has been adopted by many new religious movements. According to Carl Jung, "... the concept of Kundalini has for us only one use, that is, to describe our own experiences with the unconscious ..." Jung used the Kundalini system symbolically as a means of understanding the dynamic movement between conscious and unconscious processes. He cautioned that all forms of yoga, when used by Westerners, can be attempts at domination of the body and unconscious through the ideal of ascending into higher chakras.
According to Shamdasani, Jung claimed that the symbolism of Kundalini yoga suggested that the bizarre symptomatology that patients at times presented actually resulted from the awakening of the Kundalini. He argued that knowledge of such symbolism enabled much that would otherwise be seen as the meaningless by-products of a disease process to be understood as meaningful symbolic processes, and explicated the often peculiar physical localizations of symptoms. The popularization of eastern spiritual practices has been associated with psychological problems in the West. Psychiatric literature notes that "since the influx of eastern spiritual practices and the rising popularity of meditation starting in the 1960s, many people have experienced a variety of psychological difficulties, either while engaged in intensive spiritual practice or spontaneously". Among the psychological difficulties associated with intensive spiritual practice is "Kundalini awakening", "a complex physio-psychospiritual transformative process described in the yogic tradition". Researchers in the fields of transpersonal psychology and near-death studies have described a complex pattern of sensory, motor, mental and affective symptoms associated with the concept of Kundalini, sometimes called the Kundalini syndrome. A spiritual emergency associated with Kundalini awakening may be viewed as an acute psychotic episode by psychiatrists who are not conversant with the culture. The biological changes of increased P300 amplitudes that occur with certain yogic practices may lead to acute psychosis. Knowledge of such biological alterations produced by yogic techniques may be used to warn people against such reactions. Some modern experimental research seeks to establish links between Kundalini practice and the ideas of Wilhelm Reich and his followers.
https://en.wikipedia.org/wiki?curid=16995
Kohlrabi Kohlrabi (from the German for cabbage turnip; "Brassica oleracea" Gongylodes Group), also called German turnip, is a biennial vegetable, a low, stout cultivar of wild cabbage. It is in the same family as cabbage, broccoli, cauliflower, kale, Brussels sprouts, collard greens, Savoy cabbage, and gai lan. It can be eaten raw or cooked. Edible preparations are made with both the stem and the leaves. Despite its common names, it is not the same species as turnip. The name comes from the German "Kohl" ("cabbage") plus "Rübe" ~ "Rabi" (Swiss German variant) ("turnip"), because the swollen stem resembles the latter. Kohlrabi is a commonly eaten vegetable in German-speaking countries and in American states with large ancestral German populations, such as Wisconsin. Its Group name Gongylodes (or, lowercase and italicized, "gongylodes" or "gongyloides" as a variety name) means "roundish" in Greek, from γογγύλος ("gongýlos", 'round'). In the northern part of Vietnam, it is called "su hào", and in eastern parts of India (West Bengal) and Bangladesh it is called 'ol kopi'. It is also found in the Kashmir Valley in northern India, where it is known as 'monj-hakh', 'monj' being the round part and 'hakh' being the leafy part. It is called 'nol khol' in northern India, 'navalkol' in Maharashtra, 'noolkol' (நூல்கோல்) in Tamil, 'navilu kosu' in Karnataka, and in Sri Lanka 'nol col' (turnip cabbage). It is also native to Cyprus, where it is known as 'kouloumpra'. It is eaten in the Czech Republic under the name 'kedluben' or 'kedlubna'. In Slovakia, it is known as 'kaleráb'. In Romania, it is called 'gulie'. Kohlrabi has been created by artificial selection for lateral meristem growth (a swollen, nearly spherical shape); its origin in nature is the same as that of cabbage, broccoli, cauliflower, kale, collard greens, and Brussels sprouts: they are all bred from, and are the same species as, the wild cabbage plant ("Brassica oleracea"). The taste and texture of kohlrabi are similar to those of a broccoli stem or cabbage heart, but milder and sweeter, with a higher ratio of flesh to skin. The young stem in particular can be as crisp and juicy as an apple, although much less sweet. Except for the "Gigante" cultivar, spring-grown kohlrabi much over 5 cm in size tend to be woody, as do full-grown kohlrabi much over perhaps 10 cm in size; the Gigante cultivar can achieve great size while remaining of good eating quality. The plant matures in 55–60 days after sowing and has good standing ability for up to 30 days after maturity. The approximate weight of a mature stem is 150 g. There are several varieties commonly available, including 'White Vienna', 'Purple Vienna', 'Grand Duke', 'Gigante' (also known as "Superschmelz"), 'Purple Danube', and 'White Danube'. Coloration of the purple types is superficial: the edible parts are all pale yellow. The leafy greens can also be eaten. One commonly used variety grows without a swollen stem, having just leaves and a very thin stem, and is called "Haakh". "Haakh" and "Monj" are popular Kashmiri dishes made using this vegetable. In the second year, the plant will bloom and develop seeds. Kohlrabi stems (the enlarged vegetal part) are surrounded by two distinct fibrous layers that do not soften appreciably when cooked. These layers are generally peeled away prior to cooking or serving raw, with the result that the stems often provide a smaller amount of food than one might assume from their intact appearance. The bulbous kohlrabi stem is frequently used raw in salad or slaws.
It has a texture similar to that of a broccoli stem, but with a flavor that is sweeter and less vegetal. Kohlrabi leaves are edible and can be used interchangeably with collard greens and kale. Kohlrabi is an important part of Kashmiri cuisine, where it is called "Mŏnji" and is one of the most commonly cooked vegetables, along with collard greens ("haakh"). It is prepared with its leaves and served with a light soup, and is eaten with rice. In Cyprus, it is popularly sprinkled with salt and lemon and served as an appetizer. Some varieties are grown as feed for cattle.
https://en.wikipedia.org/wiki?curid=17001
Tettigoniidae Insects in the family Tettigoniidae are commonly called katydids (in Australia, South Africa, Canada, and the United States) or bush crickets. They have previously been known as long-horned grasshoppers. More than 6,400 species are known. Part of the suborder Ensifera, the Tettigoniidae are the only extant (living) family in the superfamily Tettigonioidea. They are primarily nocturnal in habit, with strident mating calls. Many katydids exhibit mimicry and camouflage, commonly with shapes and colors similar to leaves. The family name Tettigoniidae is derived from the genus "Tettigonia", first described by Carl Linnaeus in 1758. In Latin, "tettigonia" means leafhopper; it is from the Greek "tettigonion", the diminutive of the imitative (onomatopoeic) τέττιξ, "tettix", cicada. All of these names, such as "tettix" with its repeated sounds, are onomatopoeic, imitating the stridulation of these insects. The common name "katydid" is also onomatopoeic and comes from the particularly loud, three-pulsed song, often rendered "ka-ty-did", of the nominate subspecies of the North American "Pterophylla camellifolia", whose most common English name is the common true katydid. Tettigoniids range in size from as small as 5 mm (0.2 in) to as large as 130 mm (5.1 in). The smaller species typically live in drier or more stressful habitats, which may lead to their small size. The small size is associated with greater agility, faster development, and lower nutritional needs. Tettigoniids are tree-living insects that are most commonly heard at night during summer and early fall. Tettigoniids may be distinguished from grasshoppers by the length of their filamentous antennae, which may exceed their own body length, while grasshoppers' antennae are always relatively short and thickened. The lifespan of a katydid is about a year, with full adulthood usually developing very late. Females most typically lay their eggs at the end of summer beneath the soil or in plant stem holes. The eggs are typically oval and laid in rows on the host plant. The shape of the ovipositor relates to where the eggs are laid. The ovipositor is an organ used by insects for laying eggs. It consists of up to three pairs of appendages formed to transmit the egg, to make a place for it, and to place it properly. Tettigoniids have either sickle-shaped ovipositors, which typically lay eggs in dead or living plant matter, or uniform long ovipositors, which lay eggs in grass stems. When tettigoniids hatch, the nymphs often look like smaller versions of the adults, but in some species the nymphs look nothing at all like the adult and instead mimic other species, such as spiders and assassin bugs, or flowers, to prevent predation. The nymphs remain in a mimic state only until they are large enough to escape predation. Once they complete their last molt, they are then prepared to mate. Tettigoniids are found on every continent except Antarctica. The vast majority of katydid species live in the tropical regions of the world. For example, the Amazon basin rain forest is home to over 2000 species of katydids. However, katydids are also found in cool, dry temperate regions, with about 255 species in North America. The Tettigoniidae are a large family and have been divided into a number of subfamilies. The Copiphorinae were previously considered a subfamily, but are now placed as the tribe Copiphorini in the subfamily Conocephalinae. The genus "Acridoxena" is now placed in the tribe Acridoxenini of the Mecopodinae (previously its own subfamily, Acridoxeninae).
The "Orthoptera species file" lists: The genus †"Triassophyllum" is extinct and may be placed here or in the Archaeorthoptera. The diet of most tettigoniids includes leaves, flowers, bark, and seeds, but many species are exclusively predatory, feeding on other insects, snails, or even small vertebrates such as snakes and lizards. Some are also considered pests by commercial crop growers and are sprayed to limit growth, but population densities are usually low, so a large economic impact is rare. Tettigoniids are serious insect pests of karuka ("Pandanus julianettii"). The species "Segestes gracilis" and "Segestidea montana" eat the leaves and can sometimes kill trees. Growers will stuff leaves and grass in between the leaves of the crown to keep insects out. By observing the head and mouthparts, where differences can be seen in relation to function, it is possible to determine what type of food the tettigoniids consume. Large tettigoniids can inflict a painful bite or pinch if handled, but seldom break the skin. Some species of bush crickets are consumed by people, such as the "nsenene" ("Ruspolia baileyi") in Uganda and neighbouring areas. The males of tettigoniids have sound-producing organs located on the hind angles of their front wings. In some species, females are also capable of stridulation. Females chirp in response to the shrill of the males. The males use this sound for courtship, which occurs late in the summer. The sound is produced by rubbing two parts of their bodies together, called stridulation. One is the file or comb that has tough ridges; the other is the plectrum is used to produce the vibration. For tettigoniids, the fore wings are used to sing. Tettigoniids produce continuous songs known as trills. The size of the insect, the spacing of the ridges, and the width of the scraper all influence what sound is made. Many katydids stridulate at a tempo which is governed by ambient temperature, so that the number of chirps in a defined period of time can produce a fairly accurate temperature reading. For American katydids, the formula is generally given as the number of chirps in 15 seconds plus 37 to give the temperature in degrees Fahrenheit. Some tettigoniids have spines on different parts of their bodies that work in different ways. The Listroscelinae have limb spines on the ventral surfaces of their bodies. This works in a way to confine their prey to make a temporary cage above their mouthparts. The spines are articulated and comparatively flexible, but relatively blunt. Due to this, they are used to cage and not penetrate the prey's body. Spines on the tibiae and the femora are usually more sharp and nonarticulated. They are designed more for penetration or help in the defensive mechanism they might have. This usually works with their diurnal roosting posture to maximize defense and prevent predators from going for their head. When tettigoniids go to rest during the day, they enter a diurnal roosting posture to maximize their cryptic qualities. This position fools predators into thinking the katydid is either dead or just a leaf on the plant. Various tettigoniids have bright coloration and black apical spots on the inner surfaces of the tegmina, and brightly colored hind wings. By flicking their wings open when disturbed, they use the coloration to fool predators into thinking the spots are eyes. 
This, in combination with their coloration mimicking leaves, allows them to blend in with their surroundings, but also makes predators unsure which side is the front and which is the back. The males provide a nuptial gift for the females in the form of a spermatophylax, a body attached to the male's spermatophore and consumed by the female, which distracts her from eating the male's spermatophore and thereby increases his paternity. The Tettigoniidae have polygamous relationships. The first male to mate is guaranteed an extremely high confidence of paternity when a second male couples at the termination of female sexual refractoriness. This investment functions as a form of parental effort: the nutrients that the offspring ultimately receive will increase their fitness. The second male to mate with the female at the termination of her refractory period is usually cuckolded. The polygamous relationships of the Tettigoniidae lead to high levels of male-male competition. Male competition is caused by the decreased availability of males able to supply nutritious spermatophylaxes to the females. Females produce more eggs on a high-quality diet; thus, the female looks for healthier males with a more nutritious spermatophylax. Females use the sound created by the male to judge his fitness. The louder and more fluent the trill, the higher the fitness of the male. Oftentimes in species which produce larger food gifts, the female seeks out the males to copulate. This, however, is a cost to females, as they risk predation while searching for males. Also, a cost-benefit tradeoff exists in the size of the spermatophore which male tettigoniids produce. When males possess a large spermatophore, they benefit by being more highly selected for by females, but they are only able to mate one to two times during their lifetimes. Conversely, male Tettigoniidae with smaller spermatophores have the benefit of being able to mate two to three times per night, but have lower chances of being selected by females. Even in times of nutritional stress, male Tettigoniidae continue to invest nutrients within their spermatophores. In some species the cost of creating the spermatophore is low, but even where it is not, it is still not beneficial to reduce the quality of the spermatophore, as that would lead to lower reproductive selection and success. In some Tettigoniidae species, the spermatophylax that the female receives as a food gift from the male during copulation increases the reproductive output of the mating attempt; in other cases, however, the female receives few, if any, benefits. The reproductive behavior of bush crickets has been studied in great depth. Studies found that the tuberous bush cricket ("Platycleis affinis") has the largest testes in proportion to body mass of any animal recorded. They account for 14% of the insect's body mass and are thought to enable a fast remating rate.
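The chirp-count rule of thumb quoted earlier (the number of chirps in 15 seconds plus 37 gives the temperature in degrees Fahrenheit) is simple enough to state in code. A minimal Python sketch, assuming only the formula as quoted; the function names and sample counts are illustrative:

```python
def katydid_temp_fahrenheit(chirps_in_15s: int) -> int:
    """Estimate temperature (deg F) from the number of katydid chirps
    counted over 15 seconds, using the rule of thumb: chirps + 37."""
    return chirps_in_15s + 37

def fahrenheit_to_celsius(deg_f: float) -> float:
    return (deg_f - 32) * 5 / 9

# Example: 33 chirps counted in 15 seconds suggests about 70 deg F (~21 deg C).
t_f = katydid_temp_fahrenheit(33)
print(f"{t_f} F is about {fahrenheit_to_celsius(t_f):.1f} C")
```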
https://en.wikipedia.org/wiki?curid=17003
Kennelly–Heaviside layer The Heaviside layer, sometimes called the Kennelly–Heaviside layer, named after Arthur E. Kennelly and Oliver Heaviside, is a layer of ionised gas occurring between roughly 90 and 150 km (56 and 93 mi) above the ground, one of several layers in the Earth's ionosphere. It is also known as the E region. It reflects medium-frequency radio waves. Because of this reflective layer, radio waves radiated into the sky can return to Earth beyond the horizon. This "skywave" or "skip" propagation technique has been used since the 1920s for radio communication at long distances, up to transcontinental distances. Propagation is affected by time of day. During the daytime the solar wind presses this layer closer to the Earth, thereby limiting how far it can reflect radio waves. Conversely, on the night (lee) side of the Earth, the solar wind drags the ionosphere further away, thereby greatly increasing the range which radio waves can travel by reflection. The extent of the effect is further influenced by the season and the amount of sunspot activity. The existence of a reflective layer was predicted in 1902 independently and almost simultaneously by the American electrical engineer Arthur Edwin Kennelly (1861–1939) and the British physicist Oliver Heaviside (1850–1925), as an explanation for the propagation of radio waves beyond the horizon observed by Guglielmo Marconi in 1901. However, it was not until 1924 that its existence was shown by British scientist Edward V. Appleton, for which he received the 1947 Nobel Prize in Physics. Physicists resisted the idea of the reflecting layer for one very good reason: it would require total internal reflection, which in turn would require that the speed of light in the ionosphere be greater than in the atmosphere below it. Since the latter speed is essentially the same as the speed of light in a vacuum ("c"), scientists were unwilling to believe the speed in the ionosphere could be higher. Nevertheless, Marconi had received signals in Newfoundland that were broadcast in England, so clearly there must be "some" mechanism allowing the transmission to reach that far. The paradox was resolved by the discovery that there are two velocities of light, the "phase velocity" and the "group velocity". The phase velocity can in fact be greater than "c", but the group velocity, being capable of transmitting information, cannot, by special relativity, be greater than "c". The phase velocity for radio waves in the ionosphere is indeed greater than "c", and that makes total internal reflection possible, and so the ionosphere can reflect radio waves. For such waves the product of the phase velocity and the group velocity equals "c" squared, so their geometric mean is exactly "c": when the phase velocity rises above "c", the group velocity must fall below it. In 1925, Americans Gregory Breit and Merle A. Tuve first mapped its variations in altitude. The ITU standard model of absorption and reflection of radio waves by the Heaviside layer was developed by the British ionospheric physicist Louis Muggleton in the 1970s. Around 1910, William Eccles proposed the name "Heaviside Layer" for the radio-wave reflecting layer in the upper atmosphere, and the name has subsequently been widely adopted. The name Kennelly–Heaviside layer was proposed in 1925 to give credit to the work of Kennelly, which predated the proposal by Heaviside by several months.
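The phase/group-velocity argument above can be made concrete with the textbook dispersion relation for a collisionless cold plasma, in which the refractive index is n = sqrt(1 − (f_p/f)²) for a wave of frequency f above the plasma frequency f_p, giving v_phase = c/n and v_group = c·n. The sketch below assumes that standard relation and purely illustrative frequencies (a 5 MHz wave in a layer with a 3 MHz plasma frequency); it is not drawn from the article's sources.

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def velocities(f_wave_hz: float, f_plasma_hz: float) -> tuple[float, float]:
    """Phase and group velocity in a collisionless cold plasma (f_wave > f_plasma):
    n = sqrt(1 - (fp/f)^2), v_phase = c/n, v_group = c*n."""
    n = math.sqrt(1.0 - (f_plasma_hz / f_wave_hz) ** 2)  # refractive index, 0 < n < 1
    return C / n, C * n

v_p, v_g = velocities(5e6, 3e6)
print(v_p > C, v_g < C)                 # True True: phase velocity above c, group below c
print(math.isclose(v_p * v_g, C ** 2))  # True: their product is c^2, geometric mean exactly c
```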
https://en.wikipedia.org/wiki?curid=17004
Knot A knot is an intentional complication in cordage which may be useful or decorative. Practical knots may be classified as hitches, bends, or splices: a "hitch" fastens a rope to another object; a "bend" unites two rope ends; and a "splice" is a multi-strand bend or loop. A "knot" may also refer, in the strictest sense, to a stopper or knob at the end of a rope to keep that end from slipping through a grommet or eye. Knots have excited interest since ancient times for their practical uses, as well as their topological intricacy, studied in the area of mathematics known as knot theory. There is a large variety of knots, each with properties that make it suitable for a range of tasks. Some knots are used to attach the rope (or other knotting material) to other objects such as another rope, cleat, ring, or stake. Some knots are used to bind or constrict objects. Decorative knots usually bind to themselves to produce attractive patterns. While some people can look at diagrams or photos and tie the illustrated knots, others learn best by watching how a knot is tied. Knot-tying skills are often transmitted by sailors, scouts, climbers, canyoners, cavers, arborists, rescue professionals, stagehands, fishermen, linemen and surgeons. The International Guild of Knot Tyers is an organization dedicated to the promotion of knot tying. Truckers in need of securing a load may use a trucker's hitch, gaining mechanical advantage. Knots can save spelunkers from being buried under rock. Many knots can also be used as makeshift tools; for example, the bowline can be used as a rescue loop, and the munter hitch can be used for belaying. The diamond hitch was widely used to tie packages on to donkeys and mules. In hazardous environments such as mountains, knots are very important. In the event of someone falling into a ravine or a similar terrain feature, with the correct equipment and knowledge of knots a rappel system can be set up to lower a rescuer down to a casualty, and a hauling system can then be set up to allow a third individual to pull both the rescuer and the casualty out of the ravine. A further application of knots is the high line, which is similar to a zip line and can be used to move supplies, injured people, or the untrained across rivers, crevices, or ravines. Note that the systems mentioned typically require carabiners and the use of multiple appropriate knots. These knots include the bowline, double figure eight, munter hitch, munter mule, prusik, autoblock, and clove hitch. Thus any individual who goes into a mountainous environment should have basic knowledge of knots and knot systems to increase safety and the ability to undertake activities such as rappelling. Knots can be applied in combination to produce complex objects such as lanyards and netting. In ropework, the frayed end of a rope is held together by a type of knot called a whipping knot. Many types of textiles use knots to repair damage. Macramé, one kind of textile, is generated exclusively through the use of knotting, instead of knitting, crocheting, weaving or felting. Macramé can produce self-supporting three-dimensional textile structures, as well as flat work, and is often used ornamentally or decoratively. Knots weaken the rope in which they are made. When knotted rope is strained to its breaking point, it almost always fails at the knot or close to it, unless it is defective or damaged elsewhere.
The bending, crushing, and chafing forces that hold a knot in place also unevenly stress rope fibers and ultimately lead to a reduction in strength. The exact mechanisms that cause the weakening and failure are complex and are the subject of continued study. Special fibers that show differences in color in response to strain are being developed and used to study stress as it relates to types of knots. Relative knot strength, also called knot efficiency, is the breaking strength of a knotted rope in proportion to the breaking strength of the rope without the knot. Determining a precise value for a particular knot is difficult because many factors can affect a knot efficiency test: the type of fiber, the style of rope, the size of rope, whether it is wet or dry, how the knot is dressed before loading, how rapidly it is loaded, whether the knot is repeatedly loaded, and so on. The efficiency of common knots ranges between 40% and 80% of the rope's original strength. In most situations, forming loops and bends with conventional knots is far more practical than using rope splices, even though the latter can maintain nearly the rope's full strength. Prudent users allow for a large safety margin in the strength of rope chosen for a task due to the weakening effects of knots, aging, damage, shock loading, etc. The working load limit of a rope is generally specified with a significant safety factor, up to 15:1 for critical applications (a worked example appears at the end of this article). For life-threatening applications, other factors come into play. Even if the rope does not break, a knot may still fail to hold. Knots that hold firm under a variety of adverse conditions are said to be more secure than those that do not. Repeated, dynamic loads will cause virtually every knot to fail. The main ways knots fail to hold are slipping, capsizing, and sliding. In slipping, the load creates tension that pulls the rope back through the knot in the direction of the load. If this continues far enough, the working end passes into the knot and the knot unravels and fails. This behavior can worsen when the knot is repeatedly strained and let slack, dragged over rough terrain, or repeatedly struck against hard objects such as masts and flagpoles. Even with secure knots, slippage may occur when the knot is first put under real tension. This can be mitigated by leaving plenty of rope at the working end outside of the knot, and by dressing the knot cleanly and tightening it as much as possible before loading. Sometimes the use of a stopper knot or, even better, a backup knot can prevent the working end from passing through the knot; but if a knot is observed to slip, it is generally preferable to use a more secure knot. Life-critical applications often require backup knots to maximize safety. To capsize (or spill) a knot is to change its form and rearrange its parts, usually by pulling on specific ends in certain ways. When used inappropriately, some knots tend to capsize easily or even spontaneously. Often the capsized form of the knot offers little resistance to slipping or unraveling. A reef knot, when misused as a bend, can capsize dangerously. Sometimes a knot is intentionally capsized as a method of tying another knot, as with the "lightning method" of tying a bowline. Some knots, such as the carrick bend, are generally tied in one form then capsized to obtain a stronger or more stable form. In knots that are meant to grip other objects, failure can be defined as the knot sliding relative to the gripped object. While the knot itself does not fail, it ceases to perform the desired function.
For instance, a simple rolling hitch tied around a railing and pulled parallel to the railing might hold up to a certain tension, then start sliding. Sometimes this problem can be corrected by working up the knot tighter before subjecting it to load, but usually the problem requires either a knot with more wraps or a rope of different diameter or material. Knots differ in the effort required to untie them after loading. Knots that are very difficult to untie, such as the water knot, are said to "jam" or be jamming knots. Knots that come untied with less difficulty, such as the Zeppelin bend, are referred to as "non-jamming". The list of knots is extensive, but common properties allow for a useful system of categorization. For example, "loop" knots share the attribute of having some kind of an anchor point constructed on the standing end (such as a loop or overhand knot) into which the working end is easily hitched, using a round turn. An example of this is the bowline. "Constricting" knots often rely on friction to cinch down tight on loose bundles; an example is the Miller's knot. Knots may belong to more than one category. Trick knots are knots that are used as part of a magic trick, a joke, or a puzzle. They are useful for these purposes because they have a deceptive appearance, being easier or more difficult to tie or untie than their appearance would suggest. The easiest trick knot is the slip knot. Knot theory is a branch of topology. It deals with the mathematical analysis of knots, their structure and properties, and with the relationships between different knots. In topology, a knot is a figure consisting of a single loop with any number of crossing or knotted elements: a closed curve in space which may be moved around so long as its strands never pass through each other. As a closed loop, a mathematical knot has no proper ends, and cannot be undone or untied; however, any physical knot in a piece of string can be thought of as a mathematical knot by fusing the two ends. A configuration of several knots winding around each other is called a "link". Various mathematical techniques are used to classify and distinguish knots and links. For instance, the Alexander polynomial associates certain numbers with any given knot; these numbers are different for the trefoil knot, the figure-eight knot, and the unknot (a simple loop), showing that one cannot be moved into the other (without strands passing through each other). A simple mathematical theory of hitches has been proposed by Bayman and extended by Maddocks and Keller. It makes predictions that are approximately correct when tested empirically. No similarly successful theory has been developed for knots in general. Knot tying consists of the techniques and skills employed in tying a knot in rope, nylon webbing, or other articles. The proper tying of a knot can be the difference between an attractive knot and a messy one, and occasionally life and death. It is important to understand the often subtle differences between what works and what doesn't. For example, many knots "spill" or pull through, particularly if they are not "backed up", usually with a single or double overhand knot, to make sure the end of the rope doesn't make its way through the main knot, causing all strength to be lost. The tying of a knot may be very straightforward (such as with an overhand knot), or it may be more complicated, such as a monkey's fist knot.
Tying knots correctly requires an understanding of the type of material being tied (string, cord, monofilament line, kernmantle rope, or nylon webbing). For example, cotton string may be very small and easy to tie with lots of internal friction to keep it from falling apart once tied, while stiff 5/8" thick kernmantle rope will be very difficult to tie, and may be so slick as to tend to come apart once tied. The form of the material will influence the tying of a knot as well. Rope is round in cross-section, and has little dependence upon the manner in which the material is tied. Nylon webbing, on the other hand, is flat, and usually "tubular" in construction, meaning that it is spiral-woven, and has a hollow core. In order to retain as much of the strength as possible with webbing, the material must be tied "flat" such that parallel sections do not cross, and that the sections of webbing are not twisted when they cross each other within a knot. The crossing of strands is important when dealing with round rope in other knots; for example, the figure-eight loop loses strength when strands are crossed while the knot is being "finished" and tightened. Moreover, the standing end or the end from which the hauling will be done must have the greater radius of curvature in the finished knot to maximize the strength of the knot. Tools are sometimes employed in the finishing or untying of a knot, such as a fid, a tapered piece of wood that is often used in splicing. With the advent of wire rope, many other tools are used in the tying of "knots." However, for cordage and other non-metallic appliances, the tools used are generally limited to sharp edges or blades such as a sheepsfoot blade, occasionally a fine needle for proper whipping of laid rope, a hot cutter for nylon and other synthetic fibers, and (for larger ropes) a shoe for smoothing out large knots by rolling them on the ground. The hagfish is known to strip slime from its skin by tying itself into a simple overhand knot, and moving its body to make the knot travel toward the tail. It also uses this action in reverse (tail to head) to pry out flesh after biting into a carcass.
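As a worked example of the strength figures given earlier (knot efficiencies of 40–80% and safety factors of up to 15:1), the sketch below derates a rope's rated breaking strength by a knot's efficiency and then applies a safety factor. All numbers are illustrative assumptions, not values from any standard.

```python
def safe_working_load_kn(breaking_strength_kn: float,
                         knot_efficiency: float,
                         safety_factor: float) -> float:
    """Estimate a safe working load: rated breaking strength, derated by the
    knot's efficiency, then divided by the chosen safety factor."""
    return breaking_strength_kn * knot_efficiency / safety_factor

# Illustrative: a 24 kN rope, a knot retaining 60% of rope strength, 15:1 safety factor.
swl = safe_working_load_kn(24.0, 0.60, 15.0)
print(f"{swl:.2f} kN")  # 0.96 kN, roughly a 98 kg static load
```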
https://en.wikipedia.org/wiki?curid=17006
Killer whale The killer whale, or orca ("Orcinus orca"), is a toothed whale belonging to the oceanic dolphin family, of which it is the largest member. Killer whales have a diverse diet, although individual populations often specialize in particular types of prey. Some feed exclusively on fish, while others hunt marine mammals such as seals and other species of dolphin. They have been known to attack baleen whale calves, and even adult whales. Killer whales are apex predators, as no animal preys on them. A cosmopolitan species, they can be found in each of the world's oceans in a variety of marine environments, from Arctic and Antarctic regions to tropical seas, absent only from the Baltic and Black seas, and some areas of the Arctic Ocean. Killer whales are highly social; some populations are composed of matrilineal family groups (pods) which are the most stable of any animal species. Their sophisticated hunting techniques and vocal behaviours, which are often specific to a particular group and passed across generations, have been described as manifestations of animal culture. The International Union for Conservation of Nature assesses the orca's conservation status as data deficient because of the likelihood that two or more killer whale types are separate species. Some local populations are considered threatened or endangered due to prey depletion, habitat loss, pollution (by PCBs), capture for marine mammal parks, and conflicts with human fisheries. In late 2005, the southern resident killer whales, which swim in British Columbia and Washington state waters, were placed on the U.S. Endangered Species list. Wild killer whales are not considered a threat to humans and no fatal attack on humans has ever been documented, but there have been cases of captive orcas killing or injuring their handlers at marine theme parks. Killer whales feature strongly in the mythologies of indigenous cultures, and their reputation in different cultures ranges from being the souls of humans to merciless killers. "Orcinus orca" is the only recognized extant species in the genus "Orcinus", and one of many animal species originally described by Linnaeus in 1758 in "Systema Naturae". Konrad Gessner wrote the first scientific description of a killer whale in his "Piscium & aquatilium animantium natura" of 1558, part of the larger "Historia animalium", based on examination of a dead stranded animal in the Bay of Greifswald that had attracted a great deal of local interest. The killer whale is one of 35 species in the oceanic dolphin family, which first appeared about 11 million years ago. The killer whale lineage probably branched off shortly thereafter. Although it has morphological similarities with the false killer whale, the pygmy killer whale and the pilot whales, a study of cytochrome b gene sequences by Richard LeDuc indicated that its closest extant relatives are the snubfin dolphins of the genus "Orcaella". However, a more recent (2018) study places the orca as a sister taxon to the Lissodelphininae, a clade that includes "Lagenorhynchus" and "Cephalorhynchus". Although the term "orca" is increasingly used, English-speaking scientists most often use the traditional name "killer whale". The genus name "Orcinus" means "of the kingdom of the dead", or "belonging to Orcus". Ancient Romans originally used "orca" (pl. "orcae") for these animals, possibly borrowing Ancient Greek ὄρυξ ("óryx"), which referred (among other things) to a whale species. Since the 1960s, "orca" has steadily grown in popularity.
The term "orca" is preferred by some as it avoids the negative connotations of "killer", and because, being part of the family Delphinidae, the species is more closely related to other oceanic dolphins than to other whales. They are sometimes referred to as "blackfish", a name also used for other whale species. "Grampus" is a former name for the species, but is now seldom used. This meaning of "grampus" should not be confused with the genus "Grampus", whose only member is Risso's dolphin. The three to five types of killer whales may be distinct enough to be considered different races, subspecies, or possibly even species (see Species problem). The IUCN reported in 2008, "The taxonomy of this genus is clearly in need of review, and it is likely that "O. orca" will be split into a number of different species or at least subspecies over the next few years." Although large variation in the ecological distinctiveness of different killer whale groups complicates simple differentiation into types, research off the west coast of Canada and the United States in the 1970s and 1980s identified three types: residents, transients and offshores. Transients and residents live in the same areas, but avoid each other. Other populations have not been as well studied, although specialized fish-eating and mammal-eating killer whales have been distinguished elsewhere. In addition, separate populations of "generalist" (fish- and mammal-eating) and "specialist" (mammal-eating) killer whales have been identified off northwestern Europe. As with residents and transients, the lifestyle of these whales appears to reflect their diet; fish-eating killer whales in Alaska and Norway have resident-like social structures, while mammal-eating killer whales in Argentina and the Crozet Islands behave more like transients. Three types have been documented in the Antarctic. Two dwarf species, named "Orcinus nanus" and "Orcinus glacialis", were described during the 1980s by Soviet researchers, but most cetacean researchers are sceptical about their status, and linking them directly to the known Antarctic types is difficult. Types B and C live close to the ice pack, and diatoms in these waters may be responsible for the yellowish colouring of both types. Mitochondrial DNA sequences support the theory that these are recently diverged separate species. More recently, complete mitochondrial sequencing indicates the two Antarctic groups that eat seals and fish should be recognized as distinct species, as should the North Pacific transients, leaving the others as subspecies pending additional data. Advanced methods that sequenced the entire mitochondrial genome revealed systematic differences in DNA between different populations. A 2019 study of Type D orcas also found them to be distinct from other populations and possibly even a unique species. Mammal-eating killer whales in different regions were long thought likely to be closely related, but genetic testing has refuted this hypothesis. There are seven identified ecotypes inhabiting isolated ecological niches. Of three orca ecotypes in the Antarctic, one preys on minke whales, the second on seals and penguins, and the third on fish. Another ecotype lives in the eastern North Atlantic, while the three Northeast Pacific ecotypes are labelled the transient, resident and offshore populations described above. Research has supported a proposal to reclassify the Antarctic seal- and fish-eating populations and the North Pacific transients as a distinct species, leaving the remaining ecotypes as subspecies.
The first split in the orca population, between the North Pacific transients and the rest, occurred an estimated 700,000 years ago. Such a designation would mean that each new species becomes subject to separate conservation assessments. A typical killer whale distinctively bears a black back, white chest and sides, and a white patch above and behind the eye. Calves are born with a yellowish or orange tint, which fades to white. It has a heavy and robust body with a large dorsal fin up to 1.8 m (5 ft 11 in) tall. Behind the fin, it has a dark grey "saddle patch" across the back. Antarctic killer whales may have pale grey to nearly white backs. Adult killer whales are very distinctive, seldom confused with any other sea creature. When seen from a distance, juveniles can be confused with other cetacean species, such as the false killer whale or Risso's dolphin. The killer whale's teeth are very strong, and its jaws exert a powerful grip; the upper teeth fall into the gaps between the lower teeth when the mouth is closed. The firm middle and back teeth hold prey in place, while the front teeth are inclined slightly forward and outward to protect them from powerful jerking movements. Killer whales are the largest extant members of the dolphin family. Males typically range from 6 to 8 metres (20 to 26 ft) long and weigh in excess of 6 tonnes. Females are smaller, generally ranging from 5 to 7 m (16 to 23 ft) and weighing about 3 to 4 tonnes. Calves at birth weigh about 180 kg (400 lb) and are about 2.4 m (7 ft 10 in) long. The skeleton of the killer whale is of the typical delphinid structure, but more robust. Its integument, unlike that of most other dolphin species, is characterized by a well-developed dermal layer with a dense network of fascicles of collagen fibres. Killer whale pectoral fins, analogous to forelimbs, are large and rounded, resembling paddles, with those of males significantly larger than those of females. Dorsal fins also exhibit sexual dimorphism, with those of males about 1.8 m (5 ft 11 in) high, more than twice the size of the female's, with the male's fin more like a tall, elongated isosceles triangle, whereas the female's is shorter and more curved. Males and females also have different patterns of black and white skin in their genital areas. In the skull, adult males have longer lower jaws than females, as well as larger occipital crests. An individual killer whale can often be identified from its dorsal fin and saddle patch. Variations such as nicks, scratches, and tears on the dorsal fin and the pattern of white or grey in the saddle patch are unique. Published directories contain identifying photographs and names for hundreds of North Pacific animals. Photographic identification has enabled the local population of killer whales to be counted each year rather than estimated, and has enabled great insight into lifecycles and social structures. Occasionally a killer whale is white; they have been spotted in the northern Bering Sea and around St. Lawrence Island, and near the Russian coast. In February 2008, a white killer whale was photographed off Kanaga Volcano in the Aleutian Islands. In 2010, the Far East Russia Orca Project (FEROP), co-founded and co-directed by Alexander M. Burdin and Erich Hoyt, filmed an adult male nicknamed Iceberg. Killer whales have good eyesight above and below the water, excellent hearing, and a good sense of touch. They have exceptionally sophisticated echolocation abilities, detecting the location and characteristics of prey and other objects in the water by emitting clicks and listening for echoes, as do other members of the dolphin family.
The mean body temperature of the orca is 36 to 38 °C (97 to 100 °F). Like most marine mammals, orcas have a layer of insulating blubber ranging from 7.6 to 10 cm (3.0 to 3.9 in) thick beneath the skin. The pulse is about 60 heartbeats per minute when the orca is at the surface, dropping to 30 beats/min when submerged. Killer whales are found in all oceans and most seas. Due to their enormous range, numbers, and density, relative distribution is difficult to estimate, but they clearly prefer higher latitudes and coastal areas over pelagic environments. Areas which serve as major study sites for the species include the coasts of Iceland, Norway, the Valdes Peninsula of Argentina, the Crozet Islands, New Zealand and parts of the west coast of North America, from California to Alaska. Systematic surveys indicate the highest densities of killer whales (>0.40 individuals per 100 km²) in the northeast Atlantic around the Norwegian coast, in the north Pacific along the Aleutian Islands, the Gulf of Alaska and in the Southern Ocean off much of the coast of Antarctica. They are considered "common" (0.20–0.40 individuals per 100 km²) in the eastern Pacific along the coasts of British Columbia, Washington and Oregon, in the North Atlantic Ocean around Iceland and the Faroe Islands. High densities have also been reported but not quantified in the western North Pacific around the Sea of Japan, Sea of Okhotsk, Kuril Islands, Kamchatka and the Commander Islands and in the Southern Hemisphere off southern Brazil and the tip of southern Africa. They are reported as seasonally common in the Canadian Arctic, including Baffin Bay between Greenland and Nunavut, as well as around Tasmania and Macquarie Island. Regularly occurring or distinct populations exist off Northwest Europe, California, Patagonia, the Crozet Islands, Marion Island, southern Australia and New Zealand. The northwest Atlantic population of at least 67 individuals ranges from Labrador and Newfoundland to New England, with sightings as far south as Cape Cod and Long Island. Information for offshore regions and warmer waters is scarcer, but widespread sightings indicate that the killer whale can survive in most water temperatures. They have been sighted, though more infrequently, in the Mediterranean, the Arabian Sea, the Gulf of Mexico, Banderas Bay on Mexico's west coast and the Caribbean. Over 50 individual whales have been documented in the northern Indian Ocean, including two individuals that were sighted in the Persian Gulf in 2008 and off Sri Lanka in 2015. Those orcas may occasionally enter the Red Sea through the Gulf of Aden. The modern status of the species along coastal mainland China and its vicinity is unknown. Recorded sightings have been made from almost the entire shoreline. A wide-ranging population is likely to exist in the central Pacific, with some sightings off Hawaii. Distinct populations may also exist off the west coast of tropical Africa, and Papua New Guinea. In the Mediterranean, killer whales are considered "visitors", likely from the North Atlantic, and sightings become less frequent further east. However, a small year-round population is known to exist in the Strait of Gibraltar, mostly on the Atlantic side. Killer whales also appear to regularly occur off the Galápagos Islands. In the Antarctic, killer whales range up to the edge of the pack ice and are believed to venture into the denser pack ice, finding open leads much like beluga whales in the Arctic. However, killer whales are merely seasonal visitors to Arctic waters, and do not approach the pack ice in the summer.
With the rapid Arctic sea ice decline in the Hudson Strait, their range now extends deep into the northwest Atlantic. Occasionally, killer whales swim into freshwater rivers. They have been documented up the Columbia River in the United States. They have also been found in the Fraser River in Canada and the Horikawa River in Japan. Migration patterns are poorly understood. Each summer, the same individuals appear off the coasts of British Columbia and Washington. Despite decades of research, where these animals go for the rest of the year remains unknown. Transient pods have been sighted from southern Alaska to central California. Worldwide population estimates are uncertain, but recent consensus suggests a minimum of 50,000 (2006). Local estimates include roughly 25,000 in the Antarctic, 8,500 in the tropical Pacific, 2,250–2,700 off the cooler northeast Pacific and 500–1,500 off Norway. Japan's Fisheries Agency estimated in the 2000s that 2,321 killer whales were in the seas around Japan. Killer whales are apex predators, meaning that they themselves have no natural predators. They are sometimes called the wolves of the sea, because they hunt in groups like wolf packs. Killer whales hunt varied prey including fish, cephalopods, mammals, sea birds, and sea turtles. Different populations or ecotypes may specialize, and some can have a dramatic impact on prey species. However, whales in tropical areas appear to have more generalized diets due to lower food productivity. Fish-eating killer whales prey on around 30 species of fish. Some populations in the Norwegian and Greenland seas specialize in herring and follow that fish's autumnal migration to the Norwegian coast. Salmon account for 96% of northeast Pacific residents' diet, including 65% of large, fatty Chinook. Chum salmon are also eaten, but smaller sockeye and pink salmon are not a significant food item. Depletion of specific prey species in an area is, therefore, cause for concern for local populations, despite the high diversity of prey. On average, a killer whale eats about 227 kg (500 lb) of food each day. While salmon are usually hunted by an individual whale or a small group, herring are often caught using carousel feeding: the killer whales force the herring into a tight ball by releasing bursts of bubbles or flashing their white undersides. They then slap the ball with their tail flukes, stunning or killing up to 15 fish at a time, then eating them one by one. Carousel feeding has been documented only in the Norwegian killer whale population and in some oceanic dolphin species. In New Zealand, sharks and rays appear to be important prey, including eagle rays, long-tail and short-tail stingrays, common threshers, smooth hammerheads, blue sharks, basking sharks, and shortfin makos. With sharks, orcas may herd them to the surface and strike them with their tail flukes, while bottom-dwelling rays are cornered, pinned to the ground and taken to the surface. In other parts of the world, killer whales have preyed on broadnose sevengill sharks, tiger sharks and even small whale sharks. Killer whales have also been recorded attacking and feeding on great white sharks, and appear to target the liver. Competition between killer whales and white sharks is probable in regions where their diets overlap. The arrival of orcas in an area can cause white sharks to flee and forage elsewhere. Killer whales spend most of their time at shallow depths, but occasionally dive several hundred meters depending on their prey.
Killer whales are very sophisticated and effective predators of marine mammals. Thirty-two cetacean species have been recorded as prey, based on observations of orcas' feeding activity, examination of the stomach contents of dead orcas, and scars seen on the bodies of surviving prey animals. Groups even attack larger cetaceans such as minke whales, grey whales, and, rarely, sperm whales or blue whales. It has been hypothesized that predation by orcas on whale calves in high-productivity, high-latitude areas is the reason for great whale migrations during breeding season to low-productivity tropical waters where orcas are scarcer. Hunting a large whale usually takes several hours. Killer whales generally attack young or weak animals; however, a group of five or more may attack a healthy adult. When hunting a young whale, a group chases it and its mother to exhaustion. Eventually, they separate the pair and surround the calf, drowning it by keeping it from surfacing. Pods of female sperm whales sometimes protect themselves by forming a protective circle around their calves with their flukes facing outwards, using them to repel the attackers. Rarely, large killer whale pods can overwhelm even adult female sperm whales. Adult bull sperm whales, which are large, powerful and aggressive when threatened, and fully grown adult blue whales, which are possibly too large to overwhelm, are not believed to be prey for killer whales. Prior to the advent of industrial whaling, great whales may have been the major food source for killer whales. The introduction of modern whaling techniques may have aided killer whales by the sound of exploding harpoons indicating availability of prey to scavenge, and compressed air inflation of whale carcasses causing them to float, thus exposing them to scavenging. However, the devastation of great whale populations by unfettered whaling has possibly reduced their availability for killer whales, and caused them to expand their consumption of smaller marine mammals, thus contributing to the decline of these as well. Other marine mammal prey species include nearly 20 species of seal, sea lion and fur seal. Walruses and sea otters are less frequently taken. Often, to avoid injury, killer whales disable their prey before killing and eating it. This may involve throwing it in the air, slapping it with their tails, ramming it, or breaching and landing on it. In the Aleutian Islands, a decline in sea otter populations in the 1990s was controversially attributed by some scientists to killer whale predation, although with no direct evidence. The decline of sea otters followed a decline in harbour seal and Steller sea lion populations, the killer whale's preferred prey, which in turn may be substitutes for their original prey, now decimated by industrial whaling. On steeply banked beaches off Península Valdés, Argentina, and the Crozet Islands, killer whales feed on South American sea lions and southern elephant seals in shallow water, even beaching temporarily to grab prey before wriggling back to the sea. Beaching, usually fatal to cetaceans, is not an instinctive behaviour, and can require years of practice for the young. Killer whales can then release the animal near juvenile whales, allowing the younger whales to practice the difficult capture technique on the now-weakened prey. "Wave-hunting" killer whales spy-hop to locate Weddell seals, crabeater seals, leopard seals, and penguins resting on ice floes, and then swim in groups to create waves that wash over the floe.
This washes the prey into the water, where other killer whales lie in wait. Killer whales have also been observed preying on terrestrial mammals, such as deer swimming between islands off the northwest coast of North America. Killer whale cannibalism has also been reported based on analysis of stomach contents, but this is likely to be the result of scavenging remains dumped by whalers. One killer whale was also attacked by its companions after being shot. Although resident killer whales have never been observed to eat other marine mammals, they occasionally harass and kill porpoises and seals for no apparent reason. Killer whales in many areas may prey on cormorants and gulls. A captive killer whale at Marineland of Canada discovered it could regurgitate fish onto the surface, attracting sea gulls, and then eat the birds. Four others then learned to copy the behaviour. Day-to-day killer whale behaviour generally consists of foraging, travelling, resting and socializing. Killer whales frequently engage in surface behaviour such as breaching (jumping completely out of the water) and tail-slapping. These activities may have a variety of purposes, such as courtship, communication, dislodging parasites, or play. Spyhopping is a behaviour in which a whale holds its head above water to view its surroundings. Resident killer whales swim alongside porpoises and other dolphins. Killer whales are notable for their complex societies. Only elephants and higher primates live in comparably complex social structures. Due to orcas' complex social bonds, many marine experts have concerns about how humane it is to keep them in captivity. Resident killer whales in the eastern North Pacific live in particularly complex and stable social groups. In a social structure unlike that of any other known mammal, resident whales live with their mothers for their entire lives. These family groups are based on matrilines consisting of the eldest female (matriarch), her sons and daughters, the descendants of her daughters, and so on. The average size of a matriline is 5.5 animals. Because females can reach age 90, as many as four generations travel together. These matrilineal groups are highly stable. Individuals separate for only a few hours at a time, to mate or forage. With one exception, a killer whale named Luna, no permanent separation of an individual from a resident matriline has been recorded. Closely related matrilines form loose aggregations called pods, usually consisting of one to four matrilines. Unlike matrilines, pods may separate for weeks or months at a time. DNA testing indicates resident males nearly always mate with females from other pods. Clans, the next level of resident social structure, are composed of pods with similar dialects, and common but older maternal heritage. Clan ranges overlap, mingling pods from different clans. The final association layer, perhaps more arbitrarily defined than the familial groupings, is called the community, and is defined as a set of clans that regularly commingle. Clans within a community do not share vocal patterns. Transient pods are smaller than resident pods, typically consisting of an adult female and one or two of her offspring. Males typically maintain stronger relationships with their mothers than other females. These bonds can extend well into adulthood. Unlike residents, extended or permanent separation of transient offspring from natal matrilines is common, with juveniles and adults of both sexes participating.
Some males become "rovers" and do not form long-term associations, occasionally joining groups that contain reproductive females. As in resident clans, transient community members share an acoustic repertoire, although regional differences in vocalizations have been noted. Like all cetaceans, killer whales depend heavily on underwater sound for orientation, feeding, and communication. They produce three categories of sounds: clicks, whistles, and pulsed calls. Clicks are believed to be used primarily for navigation and discriminating prey and other objects in the surrounding environment, but are also commonly heard during social interactions. Northeast Pacific resident groups tend to be much more vocal than transient groups in the same waters. Residents feed primarily on Chinook and chum salmon, which are insensitive to killer whale calls (inferred from the audiogram of Atlantic salmon). In contrast, the marine mammal prey of transients hear whale calls well. Transients are typically silent. They sometimes use a single click (called a cryptic click) rather than the long train of clicks observed in other populations. Residents are silent only when resting. All members of a resident pod use similar calls, known collectively as a dialect. Dialects are composed of specific numbers and types of discrete, repetitive calls. They are complex and stable over time. Call patterns and structure are distinctive within matrilines. Newborns produce calls similar to their mothers', but have a more limited repertoire. Individuals likely learn their dialect through contact with pod members. Family-specific calls have been observed more frequently in the days following a calf's birth, which may help the calf learn them. Dialects are probably an important means of maintaining group identity and cohesiveness. Similarity in dialects likely reflects the degree of relatedness between pods, with variation growing over time. When pods meet, dominant call types decrease and subset call types increase. The use of both call types is called biphonation. The increased subset call types may be the distinguishing factor between pods and inter-pod relations. Dialects also distinguish types. Resident dialects contain seven to 17 (mean = 11) distinctive call types. All members of the North American west coast transient community express the same basic dialect, although minor regional variation in call types is evident. Preliminary research indicates offshore killer whales have group-specific dialects unlike those of residents and transients. Norwegian and Icelandic herring-eating orcas appear to have different vocalizations for activities like hunting. A population living in McMurdo Sound, Antarctica, has 28 complex burst-pulse and whistle calls. Killer whales have the second-heaviest brains among marine mammals (after sperm whales, which have the largest brain of any animal). They can be trained in captivity and are often described as intelligent, although defining and measuring "intelligence" is difficult in a species whose environment and behavioural strategies are very different from those of humans. Killer whales imitate others, and seem to deliberately teach skills to their kin. Off the Crozet Islands, mothers push their calves onto the beach, waiting to pull the youngster back if needed. People who have interacted closely with killer whales offer numerous anecdotes demonstrating the whales' curiosity, playfulness, and ability to solve problems.
Alaskan killer whales have not only learned how to steal fish from longlines, but have also overcome a variety of techniques designed to stop them, such as the use of unbaited lines as decoys. Once, fishermen placed their boats several miles apart, taking turns retrieving small amounts of their catch, in the hope that the whales would not have enough time to move between boats to steal the catch as it was being retrieved. In other anecdotes, researchers describe incidents in which wild killer whales playfully tease humans by repeatedly moving objects the humans are trying to reach, or suddenly start to toss around a chunk of ice after a human throws a snowball. The killer whale's use of dialects and the passing of other learned behaviours from generation to generation have been described as a form of animal culture. Female killer whales begin to mature at around the age of 10 and reach peak fertility around 20, experiencing periods of polyestrous cycling separated by non-cycling periods of three to 16 months. Females can often breed until age 40, followed by a rapid decrease in fertility. As such, orcas are among the few animals that undergo menopause and live for decades after they have finished breeding. The lifespans of wild females average 50 to 80 years. Some are claimed to have lived substantially longer: Granny (J2) was estimated by some researchers to have been as old as 105 years at the time of her death, though a biopsy sample indicated her age as 65 to 80 years. Orcas held in captivity tend to have shorter lives than those in the wild, although this is subject to scientific debate. To avoid inbreeding, males mate with females from other pods. Gestation varies from 15 to 18 months. Mothers usually calve a single offspring about once every five years. In resident pods, births occur at any time of year, although winter is the most common. Mortality is extremely high during the first seven months of life, when 37–50% of all calves die. Weaning begins at about 12 months of age, and is complete by two years. According to observations in several regions, all male and female pod members participate in the care of the young. Males sexually mature at the age of 15, but do not typically reproduce until age 21. Wild males live around 29 years on average, with a maximum of about 60 years. One male, known as Old Tom, was reportedly spotted every winter between the 1840s and 1930 off New South Wales, Australia. This would have made him up to 90 years old. Examination of his teeth indicated he died around age 35, but this method of age determination is now believed to be inaccurate for older animals. One male known to researchers in the Pacific Northwest (identified as J1) was estimated to have been 59 years old when he died in 2010. Killer whales are unique among cetaceans, as their caudal sections elongate with age, making their heads relatively shorter. Infanticide, once thought to occur only in captive killer whales, was observed in wild populations by researchers off British Columbia on December 2, 2016. In this incident, an adult male killed the calf of a female within the same pod, with his mother also joining in the assault. It is theorized that the male killed the young calf in order to mate with its mother (something that occurs in other carnivore species), while the male's mother supported the breeding opportunity for her son. The attack ended when the calf's mother struck and injured the attacking male.
Such behaviour matches that of many smaller dolphin species, such as the bottlenose dolphin. In 2008, the IUCN (International Union for Conservation of Nature) changed its assessment of the killer whale's conservation status from conservation dependent to data deficient, recognizing that one or more killer whale types may actually be separate, endangered species. Depletion of prey species, pollution, large-scale oil spills, and habitat disturbance caused by noise and conflicts with boats are the most significant worldwide threats. In January 2020, the first killer whale in England and Wales since 2001 was found dead with a large fragment of plastic in its stomach. Like other animals at the highest trophic levels, the killer whale is particularly at risk of poisoning from bioaccumulation of toxins, including polychlorinated biphenyls (PCBs). European harbour seals have problems in reproductive and immune functions associated with high levels of PCBs and related contaminants, and a survey off the Washington coast found PCB levels in killer whales were higher than levels that had caused health problems in harbour seals. Blubber samples in the Norwegian Arctic show higher levels of PCBs, pesticides and brominated flame-retardants than in polar bears. When food is scarce, killer whales metabolize blubber for energy, which increases pollutant concentrations in their blood. In the Pacific Northwest, wild salmon stocks, a main resident food source, have declined dramatically in recent years. In the Puget Sound region, only 75 whales remain, with few births over the last few years. On the west coast of Alaska and the Aleutian Islands, seal and sea lion populations have also substantially declined. In 2005, the United States government listed the southern resident community as an endangered population under the Endangered Species Act. This community comprises three pods which live mostly in the Georgia and Haro Straits and Puget Sound in British Columbia and Washington. They do not breed outside of their community, which was once estimated at around 200 animals and later shrank to around 90. In October 2008, the annual survey revealed seven were missing and presumed dead, reducing the count to 83. This is potentially the largest decline in the population in the past 10 years. These deaths can be attributed to declines in Chinook salmon. Scientist Ken Balcomb has extensively studied killer whales since 1976; he is the research biologist responsible for discovering that U.S. Navy sonar may harm killer whales. He studied killer whales from the Center for Whale Research, located in Friday Harbor, Washington. He was also able to study killer whales from "his home porch perched above Puget Sound, where the animals hunt and play in summer months". In May 2003, Balcomb (along with other whale watchers near the Puget Sound coastline) noticed uncharacteristic behaviour displayed by the killer whales. The whales seemed "agitated and were moving haphazardly, attempting to lift their heads free of the water" to escape the sound of the sonars. "Balcomb confirmed at the time that strange underwater pinging noises detected with underwater microphones were sonar. The sound originated from a U.S. Navy frigate 12 miles (19 kilometres) distant, Balcomb said." The impact of sonar waves on killer whales is potentially life-threatening. Three years prior to Balcomb's discovery, research in the Bahamas showed 14 beaked whales washed up on the shore. These whales were beached on the day U.S.
Navy destroyers began a sonar exercise. Of the 14 beached whales, six died. These six dead whales were studied, and CAT scans of two of the whales' heads showed hemorrhaging around the brain and the ears, which is consistent with decompression sickness. Another conservation concern was made public in September 2008, when the Canadian government decided that it was not necessary to enforce additional protections for killer whales beyond the laws already in place (including the Species at Risk Act, which protects endangered animals and their habitats). In response to this decision, six environmental groups sued the federal government, claiming killer whales were facing many threats on the British Columbia Coast and that the federal government had done nothing to protect them. A legal and scientific nonprofit organization, Ecojustice, led the lawsuit and represented the David Suzuki Foundation, Environmental Defence, Greenpeace Canada, International Fund for Animal Welfare, the Raincoast Conservation Foundation, and the Wilderness Committee. Many scientists involved in this lawsuit, including Bill Wareham, a marine scientist with the David Suzuki Foundation, noted increased boat traffic, toxic wastes in the water, and low salmon population as major threats, putting approximately 87 killer whales on the British Columbia Coast in danger. Underwater noise from shipping, drilling, and other human activities is a significant concern in some key killer whale habitats, including Johnstone Strait and Haro Strait. In the mid-1990s, loud underwater noises from salmon farms were used to deter seals. Killer whales also avoided the surrounding waters. High-intensity sonar used by the Navy disturbs killer whales along with other marine mammals. Killer whales are popular with whale watchers, which may stress the whales and alter their behaviour, particularly if boats approach too closely or block their lines of travel. The "Exxon Valdez" oil spill adversely affected killer whales in Prince William Sound and Alaska's Kenai Fjords region. Eleven members (about half) of one resident pod disappeared in the following year. The spill damaged salmon and other prey populations, which in turn damaged local killer whales. By 2009, scientists estimated that the AT1 transient population (considered part of a larger population of 346 transients) numbered only seven individuals and had not reproduced since the spill. This population is expected to die out. A 2018 study published in "Science" found that global killer whale populations are poised to dramatically decline due to exposure to toxic chemical and PCB pollution. The indigenous peoples of the Pacific Northwest Coast feature killer whales throughout their art, history, spirituality and religion. The Haida regarded killer whales as the most powerful animals in the ocean, and their mythology tells of killer whales living in houses and towns under the sea. According to these myths, they took on human form when submerged, and humans who drowned went to live with them. For the Kwakwaka'wakw, the killer whale was regarded as the ruler of the undersea world, with sea lions for slaves and dolphins for warriors. In Nuu-chah-nulth and Kwakwaka'wakw mythology, killer whales may embody the souls of deceased chiefs. The Tlingit of southeastern Alaska regarded the killer whale as custodian of the sea and a benefactor of humans.
The Maritime Archaic people of Newfoundland also had great respect for killer whales, as evidenced by stone carvings found in a 4,000-year-old burial at the Port au Choix Archaeological Site. In the tales and beliefs of the Siberian Yupik people, killer whales are said to appear as wolves in winter, and wolves as killer whales in summer. Killer whales are believed to assist their hunters in driving walrus. Reverence is expressed in several forms: the boat itself represents the animal, and a wooden carving of a killer whale is hung from the hunter's belt. Small sacrifices such as tobacco or meat are strewn into the sea for them. The indigenous Ainu often referred to killer whales in their folklore and mythology as "Repun Kamuy" ("God of the Sea/Offshore"), bringers of fortune (whales) to the coasts, and held traditional funerals for stranded or deceased orcas, akin to the funerals held for other animals such as brown bears. In Western cultures, killer whales were historically feared as dangerous, savage predators. The first written description of a killer whale was given by Pliny the Elder "circa" AD 70, who wrote, "Orcas (the appearance of which no image can express, other than an enormous mass of savage flesh with teeth) are the enemy of [other kinds of whale]... they charge and pierce them like warships ramming." Of the very few confirmed attacks on humans by wild killer whales, none have been fatal. In one instance, killer whales tried to tip ice floes on which a dog team and a photographer of the Terra Nova Expedition were standing. The sled dogs' barking is speculated to have sounded enough like seal calls to trigger the killer whales' hunting curiosity. In the 1970s, a surfer in California was bitten, and in 2005, a boy in Alaska who was splashing in a region frequented by harbour seals was bumped by a killer whale that apparently misidentified him as prey. Unlike wild killer whales, captive killer whales have made nearly two dozen attacks on humans since the 1970s, some of which have been fatal. Competition with fishermen also led to killer whales being regarded as pests. In the waters of the Pacific Northwest and Iceland, the shooting of killer whales was accepted and even encouraged by governments. As an indication of the intensity of shooting that occurred until fairly recently, about 25% of the killer whales captured in Puget Sound for aquaria through 1970 bore bullet scars. The U.S. Navy claimed to have deliberately killed hundreds of killer whales in Icelandic waters in 1956 with machine guns, rockets, and depth charges. Western attitudes towards killer whales have changed dramatically in recent decades. In the mid-1960s and early 1970s, killer whales came to much greater public and scientific awareness, starting with the first live-capture and display of a killer whale known as Moby Doll, a resident harpooned off Saturna Island in 1964. So little was known at the time that it was nearly two months before the whale's keepers discovered what food (fish) it was willing to eat. To the surprise of those who saw him, Moby Doll was a docile, non-aggressive whale who made no attempts to attack humans. Between 1964 and 1976, 50 killer whales from the Pacific Northwest were captured for display in aquaria, and public interest in the animals grew. In the 1970s, research pioneered by Michael Bigg led to the discovery of the species' complex social structure, its use of vocal communication, and its extraordinarily stable mother–offspring bonds. Through photo-identification techniques, individuals were named and tracked over decades.
Bigg's techniques also revealed the Pacific Northwest population was in the low hundreds rather than the thousands that had been previously assumed. The southern resident community alone had lost 48 of its members to captivity; by 1976, only 80 remained. In the Pacific Northwest, the species that had unthinkingly been targeted became a cultural icon within a few decades. The public's growing appreciation also led to growing opposition to keeping whales in aquaria. Only one whale has been taken in North American waters since 1976. In recent years, the extent of the public's interest in killer whales has manifested itself in several high-profile efforts surrounding individuals. Following the success of the 1993 film "Free Willy", the movie's captive star Keiko was returned to the coast of his native Iceland in 2002. The director of the International Marine Mammal Project for the Earth Island Institute, David Phillips, led the efforts to return Keiko to the Iceland waters. Keiko, however, did not adapt to the harsh climate of the Arctic Ocean, and died a year after his release after contracting pneumonia, at the age of 27. In 2002, the orphan Springer was discovered in Puget Sound, Washington. She became the first whale to be successfully reintegrated into a wild pod after human intervention, crystallizing decades of research into the vocal behaviour and social structure of the region's killer whales. The saving of Springer raised hopes that another young killer whale named Luna, which had become separated from his pod, could be returned to it. However, his case was marked by controversy about whether and how to intervene, and in 2006, Luna was killed by a boat propeller. The earliest known records of commercial hunting of killer whales date to the 18th century in Japan. During the 19th and early 20th centuries, the global whaling industry caught immense numbers of baleen and sperm whales, but largely ignored killer whales because of their limited amounts of recoverable oil, their smaller populations, and the difficulty of taking them. Once the stocks of larger species were depleted, killer whales were targeted by commercial whalers in the mid-20th century. Between 1954 and 1997, Japan took 1,178 killer whales (although the Ministry of the Environment claims that there had been domestic catches of about 1,600 whales between the late 1940s and the 1960s) and Norway took 987. Extensive hunting of killer whales, including an Antarctic catch of 916 in 1979–80 alone, prompted the International Whaling Commission to recommend a ban on commercial hunting of the species pending further research. Today, no country carries out a substantial hunt, although Indonesia and Greenland permit small subsistence hunts (see Aboriginal whaling). Other than commercial hunts, killer whales were hunted along Japanese coasts out of public concern for potential conflicts with fisheries. Such cases include the killing of a semi-resident male-female pair in the Akashi Strait and Harima-nada in the Seto Inland Sea in 1957, the killing of five whales from a pod of 11 members that swam into Tokyo Bay in 1970, and a catch record in southern Taiwan in the 1990s. Killer whales have helped humans hunt other whales. One well-known example was the killer whales of Eden, Australia, including the male known as Old Tom. Whalers more often considered them a nuisance, however, as orcas would gather to scavenge meat from the whalers' catch.
Some populations, such as in Alaska's Prince William Sound, may have been reduced significantly by whalers shooting them in retaliation. Whale watching continues to increase in popularity, but may have some problematic impacts on killer whales. Exposure to exhaust gases from heavy vessel traffic is causing concern for the overall health of the 75 Southern Resident Killer Whales (SRKWs) remaining as of early 2019. This population is followed by approximately 20 vessels for 12 hours a day during the months May–September. Researchers discovered that these vessels are in the line of sight for these whales for 98–99.5% of daylight hours. With so many vessels, the air quality around these whales deteriorates and impacts their health. Air pollutants that bind with exhaust fumes are responsible for the activation of the cytochrome P450 1A gene family. Researchers have successfully identified this gene in skin biopsies of live whales and also in the lungs of deceased whales. A direct correlation between activation of this gene and the air pollutants cannot be made, because other known factors will induce the same gene. Vessels can have either wet or dry exhaust systems, with wet exhaust systems leaving more pollutants in the water due to differences in gas solubility. A modelling study determined that the lowest-observed-adverse-effect-level (LOAEL) of exhaust pollutants was about 12% of the human dose. In response, since 2017 boats off the British Columbia coast have been required to keep a minimum approach distance of 200 metres, double the previous 100 metres. This new rule complements Washington State's minimum approach zone of 180 metres, which has been in effect since 2011. If a whale approaches a vessel, the engine must be placed in neutral until the whale passes. The World Health Organization has set air quality standards in an effort to control the emissions produced by these vessels. The killer whale's intelligence, trainability, striking appearance, playfulness in captivity and sheer size have made it a popular exhibit at aquaria and aquatic theme parks. From 1976 to 1997, 55 whales were taken from the wild in Iceland, 19 from Japan, and three from Argentina. These figures exclude animals that died during capture. Live captures fell dramatically in the 1990s, and by 1999, about 40% of the 48 animals on display in the world were captive-born. Organizations such as World Animal Protection and the Whale and Dolphin Conservation campaign against the practice of keeping them in captivity. In captivity, they often develop pathologies, such as the dorsal fin collapse seen in 60–90% of captive males. Captives have vastly reduced life expectancies, on average only living into their 20s. In the wild, females who survive infancy live 46 years on average, and up to 70–80 years in rare cases. Wild males who survive infancy live 31 years on average, and up to 50–60 years. Captivity usually bears little resemblance to wild habitat, and captive whales' social groups are foreign to those found in the wild. Critics claim captive life is stressful due to these factors and the requirement to perform circus tricks that are not part of wild killer whale behaviour (see above). Wild killer whales may travel up to 160 km (100 mi) in a day, and critics say the animals are too big and intelligent to be suitable for captivity. Captives occasionally act aggressively towards themselves, their tankmates, or humans, which critics say is a result of stress.
Between 1991 and 2010, the bull orca known as Tilikum was involved in the deaths of three people, and was featured in the critically acclaimed 2013 film "Blackfish". Tilikum lived at SeaWorld from 1992 until his death in 2017. A 2015 study coauthored by staff at SeaWorld and the Minnesota Zoo indicates that there is no significant difference in survivorship between free-ranging and captive killer whales. The authors speculate about the future utility of studying captive populations for understanding orca biology, and about the implications of such research for the overall health of both wild and marine park populations. In March 2016, SeaWorld announced that it would end its orca breeding program and theatrical shows. It had previously announced, in November 2015, that the shows would end in San Diego; they are now to end in Orlando and San Antonio as well.
https://en.wikipedia.org/wiki?curid=17011
Kim Philby Harold Adrian Russell "Kim" Philby (1 January 1912 – 11 May 1988) was a British intelligence officer and a double agent for the Soviet Union. In 1963 he was revealed to be a member of the Cambridge Five, a spy ring which passed information to the Soviet Union during World War II and in the early stages of the Cold War. Of the five, Philby is believed to have been most successful in providing secret information to the Soviets. Born in British India, Philby was educated at Westminster School and Trinity College, Cambridge. He was recruited by Soviet intelligence in 1934. After leaving Cambridge, Philby worked as a journalist and covered the Spanish Civil War and the Battle of France. In 1940 he began working for MI6. By the end of the Second World War he had become a high-ranking member of the British intelligence service. In 1949 Philby was appointed first secretary to the British Embassy in Washington and served as chief British liaison with American intelligence agencies. During his career as an intelligence officer, he passed large amounts of intelligence to the Soviet Union, including an Anglo-American plot to subvert the communist regime of Albania. He was also responsible for tipping off two other spies under suspicion of espionage, Donald Maclean and Guy Burgess, both of whom subsequently fled to Moscow in May 1951. The defections of Maclean and Burgess cast suspicion over Philby, resulting in his resignation from MI6 in July 1951. He was publicly exonerated in 1955, after which he resumed his career both as a journalist and as a spy for SIS in Beirut. In January 1963, having finally been unmasked as a Soviet agent, Philby defected to Moscow, where he lived until his death in 1988. Born in Ambala, Punjab, British India, Philby was the son of Dora Johnston and St John Philby, an author, Arabist and explorer. St John was a member of the Indian Civil Service (ICS) and later a civil servant in Mesopotamia and advisor to King Ibn Sa'ud of Saudi Arabia. Nicknamed "Kim" after the boy-spy in Rudyard Kipling's novel "Kim", Philby attended Aldro preparatory school, an all-boys school located in Shackleford, near Godalming in Surrey. In his early teens, he spent some time with the Bedouin in the desert of Saudi Arabia. Following in the footsteps of his father, Philby continued to Westminster School, which he left in 1928 at the age of 16. He won a scholarship to Trinity College, Cambridge, where he studied history and economics. He graduated in 1933 with a 2:1 degree in Economics. Upon Philby's graduation, Maurice Dobb, a fellow of King's College, Cambridge and tutor in Economics, introduced him to the World Federation for the Relief of the Victims of German Fascism in Paris. The Federation attempted to aid people victimized by fascism in Germany and to educate the public about opposition to fascism. It was one of several fronts operated by German Communist Willi Münzenberg, a member of the Reichstag who had fled to France in 1933. In Vienna, working to aid refugees from Nazi Germany, Philby met and fell in love with Litzi Friedmann (born Alice Kohlmann), a young Austrian Communist of Hungarian Jewish origins. Philby admired the strength of her political convictions and later recalled that at their first meeting: A frank and direct person, Litzi came out and asked me how much money I had. I replied £100, which I hoped would last me about a year in Vienna.
She made some calculations and announced, "That will leave you an excess of £25. You can give that to the International Organisation for Aid for Revolutionaries. We need it desperately." I liked her determination. He acted as a courier between Vienna and Prague, paying for the train tickets out of his remaining £75 and using his British passport to evade suspicion. He also delivered clothes and money to refugees from the Nazis. Following the Austrofascist victory in the Austrian Civil War, Friedmann and Philby married in February 1934, enabling her to escape to the United Kingdom with him two months later. It is possible that it was a Viennese-born friend of Friedmann's in London, Edith Tudor Hart – herself, at this time, a Soviet agent – who first approached Philby about the possibility of working for Soviet intelligence. In early 1934, Arnold Deutsch, a Soviet agent, was sent to University College London under the cover of a research appointment. His intention was to recruit the brightest students from Britain's top universities. Philby had come to the Soviets' notice earlier that year in Vienna, where he had been involved in demonstrations against the government of Engelbert Dollfuss. In June 1934, Deutsch recruited him to the Soviet intelligence services. Philby later recalled: Lizzy came home one evening and told me that she had arranged for me to meet a "man of decisive importance". I questioned her about it but she would give me no details. The rendezvous took place in Regent's Park. The man described himself as Otto. I discovered much later from a photograph in MI5 files that the name he went by was Arnold Deutsch. I think that he was of Czech origin; about 5 ft 7 in, stout, with blue eyes and light curly hair. Though a convinced Communist, he had a strong humanistic streak. He hated London, adored Paris, and spoke of it with deeply loving affection. He was a man of considerable cultural background. Philby recommended to Deutsch several of his Cambridge contemporaries, including Donald Maclean, who at the time was working in the Foreign Office, as well as Guy Burgess, despite his personal reservations about Burgess' erratic personality. In London, Philby began a career as a journalist. He took a job at a monthly magazine, the "World Review of Reviews", for which he wrote a large number of articles and letters (sometimes under a variety of pseudonyms) and occasionally served as "acting editor." Philby continued to live in the United Kingdom with his wife for several years. At this point, however, Philby and Litzi separated. They remained friends for many years following their separation and divorced only in 1946, just after the end of World War II. When the Germans threatened to overrun Paris in 1940, where she was living at the time, he arranged for her escape to Britain. In 1936 he began working at a trade magazine, the "Anglo-Russian Trade Gazette", as editor. The paper was failing and its owner changed its focus to covering Anglo-German trade. Philby engaged in a concerted effort to make contact with Germans such as Joachim von Ribbentrop, at that time the German ambassador in London. He became a member of the Anglo-German Fellowship, an organization that aimed to rebuild and support a friendly relationship between Germany and the United Kingdom. The Anglo-German Fellowship was at this time supported by both the British and German governments, and Philby made many trips to Berlin.
In February 1937, Philby travelled to Seville, Spain, then embroiled in a bloody civil war triggered by the "coup d'état" of Fascist forces under General Francisco Franco against the democratic government of President Manuel Azaña. Philby worked at first as a freelance journalist; from May 1937, he served as a first-hand correspondent for "The Times", reporting from the headquarters of the pro-Franco forces. He also began working for both Soviet and British intelligence; his work for the Russians usually consisted of posting letters in a crude code to a fictitious girlfriend, Mlle Dupont in Paris. He used a simpler system for MI6, delivering post at Hendaye, France, for the British Embassy in Paris. When visiting Paris after the war, he was shocked to discover that the address that he used for Mlle Dupont was that of the Soviet Embassy. His controller in Paris, the Latvian Ozolin-Haskins (code name Pierre), was shot in Moscow in 1937 during Stalin's purges. His successor, Boris Bazarov, suffered the same fate two years later during the purges. Both the British and the Soviets were interested in the combat performance of the new Messerschmitt Bf 109s and Panzer I and IIs deployed with Fascist forces in Spain. Philby told the British, after a direct question to Franco, that German troops would never be permitted to cross Spain to attack Gibraltar. His Soviet controller at the time, Theodore Maly, reported in April 1937 to the NKVD that he had personally briefed Philby on the need "to discover the system of guarding Franco and his entourage". Maly was one of the Soviet Union's most powerful and influential illegal controllers and recruiters. With the goal of potentially arranging Franco's assassination, Philby was instructed to report on vulnerable points in Franco's security and recommend ways to gain access to him and his staff. However, such an act was never a real possibility; upon debriefing Philby in London on 24 May 1937, Maly wrote to the NKVD, "Though devoted and ready to sacrifice himself, [Philby] does not possess the physical courage and other qualities necessary for this [assassination] attempt." In December 1937, during the Battle of Teruel, a Republican shell hit just in front of the car in which Philby was travelling with the correspondents Edward J. Neil of the Associated Press, Bradish Johnson of "Newsweek", and Richard Sheepshanks of Reuters. Johnson was killed outright, and Neil and Sheepshanks soon died of their injuries. Philby suffered only a minor head wound. As a result of this incident, Philby, who was well liked by the Nationalist forces whose victories he trumpeted, was awarded the Red Cross of Military Merit by Franco on 2 March 1938. Philby found that the award proved helpful in obtaining access to fascist circles: "Before then," he later wrote, "there had been a lot of criticism of British journalists from Franco officers who seemed to think that the British in general must be a lot of Communists because so many were fighting with the International Brigades. After I had been wounded and decorated by Franco himself, I became known as 'the English-decorated-by-Franco' and all sorts of doors opened to me." In 1938, Walter Krivitsky (born Samuel Ginsberg), a former GRU officer in Paris who had defected to France the previous year, travelled to the United States and published an account of his career, "In Stalin's Secret Service".
He testified before the Dies Committee (later to become the House Un-American Activities Committee) regarding Soviet espionage within the United States. In 1940 he was interviewed by MI5 officers in London, led by Jane Archer. Krivitsky claimed that two Soviet intelligence agents had penetrated the British Foreign Office and that a third Soviet intelligence agent had worked as a journalist for a British newspaper during the civil war in Spain. No connection with Philby was made at the time, and Krivitsky was found shot in a Washington hotel room the following year. Alexander Orlov (born Lev Feldbin; code-name Swede), Philby's controller in Madrid, who had once met him in Perpignan, France, with the bulge of an automatic pistol clearly showing through his raincoat, also defected. To protect his family, still living in the USSR, he said nothing about Philby, an agreement Stalin respected. On a short trip back from Spain, Philby tried to recruit Flora Solomon as a Soviet agent; she was the daughter of a Russian banker and gold dealer, a relative of the Rothschilds, and the wife of a London stockbroker. At the same time, Burgess was trying to get her into MI6. But the "resident" (the Russian term for a spymaster) in France, probably Pierre at this time, suggested to Moscow that he suspected Philby's motives. Solomon introduced Philby to the woman who would become his second wife, Aileen Furse. Solomon went to work for the British retailer Marks & Spencer. In July 1939, Philby returned to "The Times" office in London. When Britain declared war on Germany in September 1939, Philby's contact with his Soviet controllers was lost, and he failed to attend the meetings that were necessary for his work. During the Phoney War, from September 1939 until the Dunkirk evacuation, Philby worked as "The Times" first-hand correspondent with the British Expeditionary Force headquarters. After being evacuated from Boulogne on 21 May, he returned to France in mid-June and began representing "The Daily Telegraph" in addition to "The Times". He briefly reported from Cherbourg and Brest, sailing for Plymouth less than twenty-four hours before the French surrendered to Germany in June 1940. In 1940, on the recommendation of Burgess, Philby joined MI6's Section D, a secret organisation charged with investigating how enemies might be attacked through non-military means. Philby and Burgess ran a training course for would-be saboteurs at Brickendonbury Manor in Hertfordshire. His time at Section D, however, was short-lived; the "tiny, ineffective, and slightly comic" section was absorbed by the Special Operations Executive (SOE) in the summer of 1940. Burgess was arrested in September for drunken driving and was subsequently fired, while Philby was appointed as an instructor on clandestine propaganda at the SOE's finishing school for agents at the estate of Lord Montagu in Beaulieu, Hampshire. Philby's role as an instructor of sabotage agents again brought him to the attention of Soviet intelligence. The new London "rezident", Ivan Chichayev (code-name Vadim), re-established contact and asked for a list of names of British agents being trained to enter the USSR. Philby replied that none had been sent and that none were undergoing training at that time. 
According to Genrikh Borovik, who saw the telegrams much later in the KGB archives, this statement was underlined twice in red and marked with two question marks by disbelieving staff at Moscow Central in the Lubyanka. Philby provided Stalin with advance warning of Operation Barbarossa and of the Japanese intention to strike into southeast Asia instead of attacking the USSR as Hitler had urged. The first warning was ignored as a provocation, but the second, once confirmed by Richard Sorge, the Russo-German journalist and spy in Tokyo, contributed to Stalin's decision to begin transporting troops from the Far East in time for the counteroffensive around Moscow. By September 1941, Philby was working for Section Five of MI6, the section responsible for offensive counter-intelligence. On the strength of his knowledge and experience of Franco's Spain, Philby was put in charge of the subsection which dealt with Spain and Portugal. This entailed responsibility for a network of undercover operatives in several cities such as Madrid, Lisbon, Gibraltar and Tangier. At this time, the German "Abwehr" was active in Spain, particularly around the British naval base of Gibraltar, which its agents hoped to monitor with cameras and radar in order to track Allied supply ships in the Western Mediterranean. Thanks to British counter-intelligence efforts, of which Philby's Iberian subsection formed a significant part, the project (code-named Bodden) never came to fruition. During 1942–43, Philby's responsibilities were expanded to include North Africa and Italy, and he was made the deputy head of Section Five under Major Felix Cowgill, an army officer seconded to SIS. In early 1944, as it became clear that the Soviet Union was likely to once more prove a significant adversary to Britain, SIS re-activated Section Nine, which dealt with anti-communist efforts. In late 1944 Philby, on instructions from his Soviet handler, successfully manoeuvred to replace Cowgill as head of Section Nine. Charles Arnold-Baker, an officer of German birth (born Wolfgang von Blumenthal) working for Richard Gatty in Belgium and later transferred to the Norwegian/Swedish border, repeatedly voiced suspicions of Philby's intentions but was ignored. While working in Section Five, Philby had become acquainted with James Jesus Angleton, a young American counter-intelligence officer working in liaison with SIS in London. Angleton, later chief of the Central Intelligence Agency's (CIA) Counterintelligence Staff, became suspicious of Philby when he failed to pass on information relating to a British agent executed by the Gestapo in Germany. It later emerged that the agent – known as Schmidt – had also worked as an informant for the "Rote Kapelle" organisation, which sent information to both London and Moscow. Nevertheless, Angleton's suspicions went unheard. In late summer 1943, the SIS provided the GRU with an official report on the activities of German agents in Bulgaria and Romania, soon to be invaded by the Soviet Union. The NKVD complained to Cecil Barclay, the SIS representative in Moscow, that information had been withheld. Barclay reported the complaint to London. Philby claimed to have overheard discussion of this by chance and sent a report to his controller. This turned out to be identical to Barclay's dispatch, convincing the NKVD that Philby had seen the full Barclay report. 
A similar lapse occurred with a report from the Imperial Japanese Embassy in Moscow sent to Tokyo. The NKVD received the same report from Richard Sorge, but with an extra paragraph claiming that Hitler might seek a separate peace with the Soviet Union. These lapses by Philby aroused intense suspicion in Moscow. Elena Modrzhinskaya at GUGB headquarters in Moscow assessed all material from the Cambridge Five. She noted that they produced an extraordinary wealth of information on German war plans but next to nothing on the repeated question of British penetration of Soviet intelligence in either London or Moscow. Philby had repeated his claim that there were no such agents. She asked, "Could the SIS really be such fools they failed to notice suitcase-loads of papers leaving the office? Could they have overlooked Philby's Communist wife?" Modrzhinskaya concluded that all were double agents, working essentially for the British. A more serious incident occurred in August 1945, when Konstantin Volkov, an NKVD agent and vice-consul in Istanbul, requested political asylum in Britain for himself and his wife. For a large sum of money, Volkov offered the names of three Soviet agents inside Britain, two of whom worked in the Foreign Office and a third who worked in counter-espionage in London. British intelligence gave Philby the task of dealing with Volkov. He warned the Soviets of the attempted defection and travelled personally to Istanbul – ostensibly to handle the matter on behalf of SIS but, in reality, to ensure that Volkov had been neutralised. By the time he arrived in Turkey, three weeks later, Volkov had been removed to Moscow. Philby's intervention in the affair and the Soviets' subsequent capture of Volkov might have seriously compromised his position. However, Volkov's defection had been discussed with the British Embassy in Ankara on telephones which turned out to have been tapped by Soviet intelligence. Additionally, Volkov had insisted that all written communications about him take place by diplomatic bag rather than by telegraph, causing a delay in reaction that might plausibly have given the Soviets time to uncover his plans. Philby was thus able to evade blame and detection. A month later Igor Gouzenko, a cipher clerk in Ottawa, sought political asylum in Canada and gave the Royal Canadian Mounted Police the names of agents operating within the British Empire that were known to him. When Jane Archer (who had interviewed Krivitsky) was appointed to Philby's section, he moved her off investigatory work in case she became aware of his past. He later wrote, "she had got a tantalising scrap of information about a young English journalist whom the Soviet intelligence had sent to Spain during the Civil War. And here she was plunked down in my midst!" Philby, "employed in a Department of the Foreign Office", was awarded the Order of the British Empire in 1946. In February 1947, Philby was appointed head of British intelligence for Turkey and posted to Istanbul with his second wife, Aileen, and their family. His public position was that of First Secretary at the British Consulate; in reality, his intelligence work required overseeing British agents and working with the Turkish security services. Philby planned to infiltrate five or six groups of émigrés into Soviet Armenia or Soviet Georgia, but efforts among the expatriate community in Paris produced just two recruits. Turkish intelligence took them to a border crossing into Georgia, but soon afterwards shots were heard. 
Another effort was made using a Turkish gulet for a seaborne landing, but it never left port. He was implicated in a similar campaign in Albania. Colonel David Smiley, an aristocratic Guards officer who had helped Enver Hoxha and his Communist guerrillas to liberate Albania, now prepared to remove Hoxha. He trained Albanian commandos – some of whom were former Nazi collaborators – in Libya or Malta. From 1947, they infiltrated the southern mountains to build support for former King Zog. The first three missions, overland from Greece, were trouble-free. Larger numbers were landed by sea and air under Operation Valuable, which continued until 1951, increasingly under the influence of the newly formed CIA. Stewart Menzies, head of SIS, disliked the idea, which was promoted by former SOE men now in SIS. Most infiltrators were caught by the Sigurimi, the Albanian Security Service. Clearly there had been leaks, and Philby was later suspected of being one of the sources. His own comment was: "I do not say that people were happy under the regime but the CIA underestimated the degree of control that the Authorities had over the country." Macintyre (2014) includes this typically cold-blooded quote from Philby: The agents we sent into Albania were armed men intent on murder, sabotage and assassination ... They knew the risks they were running. I was serving the interests of the Soviet Union and those interests required that these men were defeated. To the extent that I helped defeat them, even if it caused their deaths, I have no regrets. Aileen Philby had suffered since childhood from psychological problems which caused her to inflict injuries upon herself. In 1948, troubled by the heavy drinking and frequent depressions that had become a feature of her husband's life in Istanbul, she experienced a breakdown of this nature, staging an accident and injecting herself with urine and insulin to cause skin disfigurations. She was sent to a clinic in Switzerland to recover. Upon her return to Istanbul in late 1948, she was badly burned in an incident with a charcoal stove and returned to Switzerland. Shortly afterward, Philby was appointed chief SIS representative in Washington, D.C., and moved there with his family. In September 1949, the Philbys arrived in the United States. Officially, his post was that of First Secretary to the British Embassy; in reality, he served as chief British intelligence representative in Washington. His office oversaw a large amount of urgent and top-secret communications between the United States and London. Philby was also responsible for liaising with the CIA and promoting "more aggressive Anglo-American intelligence operations". A leading figure within the CIA was Philby's wary former colleague, James Jesus Angleton, with whom he once again found himself working closely. Angleton remained suspicious of Philby but lunched with him every week in Washington. However, a more serious threat to Philby's position had come to light. During the summer of 1945, a Soviet cipher clerk had reused a one-time pad to transmit intelligence traffic. This mistake made it possible to break the normally impregnable code. Contained in the traffic (intercepted and decrypted as part of the Venona project) was information that documents had been sent to Moscow from the British Embassy in Washington. The intercepted messages revealed that the British Embassy source (identified as "Homer") travelled to New York City to meet his Soviet contact twice a week. 
Philby had been briefed on the situation shortly before reaching Washington in 1949; it was clear to him that the agent was Donald Maclean, who worked in the British Embassy at the time and whose wife, Melinda, lived in New York. Philby had to help discover the identity of "Homer", but also wished to protect Maclean. In January 1950, on evidence provided by the Venona intercepts, the Soviet atomic spy Klaus Fuchs was arrested. His arrest led to others: Harry Gold, a courier with whom Fuchs had worked, David Greenglass, and Julius and Ethel Rosenberg. The investigation into the British Embassy leak was still ongoing, and the stress of it was exacerbated by the arrival in Washington, in October 1950, of Guy Burgess – Philby's unstable and dangerously alcoholic fellow Soviet spy. Burgess, who had been given a post as Second Secretary at the British Embassy, took up residence in the Philby family home and rapidly set about causing offence to all and sundry. Aileen Philby resented him and disliked his presence; Americans were offended by his "natural superciliousness" and "utter contempt for the whole pyramid of values, attitudes, and courtesies of the American way of life". J. Edgar Hoover complained that Burgess used British Embassy automobiles to avoid arrest when he cruised Washington in pursuit of homosexual encounters. His dissolute behaviour had a troubling effect on Philby; the morning after a particularly disastrous and drunken party, a guest returning to collect his car heard voices upstairs and found "Kim and Guy in the bedroom drinking champagne. They had already been down to the Embassy but being unable to work had come back." Burgess's presence was problematic for Philby, yet it was potentially dangerous to leave him unsupervised. The situation in Washington was tense. From April 1950, Maclean had been the prime suspect in the investigation into the Embassy leak. Philby had undertaken to devise an escape plan which would warn Maclean, then in England, of the intense suspicion he was under and arrange for him to flee; Burgess had to get to London to warn Maclean, who was under surveillance. In early May 1951, Burgess got three speeding tickets in a single day – then pleaded diplomatic immunity, causing an official complaint to be made to the British Ambassador. Burgess was sent back to England, where he met Maclean in his London club. The SIS planned to interrogate Maclean on 28 May 1951. On 23 May, concerned that Maclean had not yet fled, Philby wired Burgess, ostensibly about his Lincoln convertible abandoned in the Embassy car park. "If he did not act at once it would be too late," the telegram read, "because [Philby] would send his car to the scrap heap. There was nothing more [he] could do." On 25 May, Burgess drove Maclean from his home at Tatsfield, Surrey, to Southampton, where both boarded the steamship "Falaise" to France and then proceeded to Moscow. Burgess had intended to aid Maclean in his escape, not accompany him in it. The "affair of the missing diplomats", as it was referred to before Burgess and Maclean surfaced in Moscow, attracted a great deal of public attention, and Burgess's disappearance, which identified him as complicit in Maclean's espionage, deeply compromised Philby's position. Under a cloud of suspicion raised by his highly visible and intimate association with Burgess, Philby returned to London. There, he underwent MI5 interrogation aimed at ascertaining whether he had acted as a "third man" in Burgess and Maclean's spy ring. 
In July 1951, he resigned from MI6, preempting his all-but-inevitable dismissal. Even after Philby's departure from MI6, speculation regarding his possible Soviet affiliations continued. Interrogated repeatedly regarding his intelligence work and his connection with Burgess, he continued to deny that he had acted as a Soviet agent. From 1952, Philby struggled to find work as a journalist, eventually – in August 1954 – accepting a position with a diplomatic newsletter called the "Fleet Street Letter". Lacking access to material of value and out of touch with Soviet intelligence, he all but ceased to operate as a Soviet agent. On 7 November 1955, Philby was officially cleared by Foreign Secretary Harold Macmillan, who told the House of Commons, "I have no reason to conclude that Mr. Philby has at any time betrayed the interests of his country, or to identify him with the so-called 'Third Man', if indeed there was one." Following this, Philby gave a press conference in which – calmly, confidently, and without the stammer he had struggled with since childhood – he reiterated his innocence, declaring, "I have never been a communist." After being exonerated, Philby was no longer employed by MI6 and Soviet intelligence lost all contact with him. In August 1956 he was sent to Beirut as a Middle East correspondent for "The Observer" and "The Economist". There, his journalism served as cover for renewed work for MI6. In Lebanon, Philby at first lived in Mahalla Jamil, his father's large household located in the village of Ajaltoun, just outside Beirut. Following the departure of his father and stepbrothers for Saudi Arabia, Philby continued to live alone in Ajaltoun, but took a flat in Beirut after beginning an affair with Eleanor, the Seattle-born wife of "New York Times" correspondent Sam Pope Brewer. Following Aileen Philby's death in 1957 and Eleanor's subsequent divorce from Brewer, Philby and Eleanor were married in London in 1959 and set up house together in Beirut. From 1960, Philby's formerly marginal work as a journalist became more substantial and he frequently travelled throughout the Middle East, including Saudi Arabia, Egypt, Jordan, Kuwait and Yemen. In 1961, Anatoliy Golitsyn, a major in the First Chief Directorate of the KGB, defected to the United States from his diplomatic post in Helsinki. Golitsyn offered the CIA revelations of Soviet agents within American and British intelligence services. Following his debriefing in the US, Golitsyn was sent to SIS for further questioning. The head of MI6, Dick White, only recently transferred from MI5, had suspected Philby as the "third man". Golitsyn proceeded to confirm White's suspicions about Philby's role. Nicholas Elliott, an MI6 officer recently stationed in Beirut who was a friend of Philby's and had previously believed in his innocence, was tasked with attempting to secure Philby's full confession. It is unclear whether Philby had been alerted, but Eleanor noted that as 1962 wore on, expressions of tension in his life "became worse and were reflected in bouts of deep depression and drinking". She recalled returning home to Beirut from a sight-seeing trip in Jordan to find Philby "hopelessly drunk and incoherent with grief on the terrace of the flat," mourning the death of a little pet fox which had fallen from the balcony. 
When Nicholas Elliott met Philby in late 1962, for the first time since Golitsyn's defection, he found Philby too drunk to stand and with a bandaged head; he had fallen repeatedly and cracked his skull on a bathroom radiator, requiring stitches. Philby told Elliott that he was "half expecting" to see him. Elliott confronted him, saying, "I once looked up to you, Kim. My God, how I despise you now. I hope you've enough decency left to understand why." Prompted by Elliott's accusations, Philby confirmed the charges of espionage and described his intelligence activities on behalf of the Soviets. However, when Elliott asked him to sign a written statement, he hesitated and requested a delay in the interrogation. Another meeting was scheduled to take place in the last week of January. It has since been suggested that the whole confrontation with Elliott had been a charade to convince the KGB that Philby had to be brought back to Moscow, where he could serve as a British penetration agent of Moscow Centre. On the evening of 23 January 1963, Philby vanished from Beirut, failing to meet his wife for a dinner party at the home of Glencairn Balfour Paul, First Secretary at the British Embassy. The "Dolmatova", a Soviet freighter bound for Odessa, had left Beirut that morning so abruptly that cargo was left scattered over the docks; Philby claimed that he left Beirut on board this ship. However, others maintain that he escaped through Syria, overland to Soviet Armenia and thence to Russia. It was not until 1 July 1963 that Philby's flight to Moscow was officially confirmed. On 30 July Soviet officials announced that they had granted him political asylum in the USSR, along with Soviet citizenship. When the news broke, MI6 came under criticism for failing to anticipate and block Philby's defection, though Elliott was to claim he could not have prevented Philby's flight. Journalist Ben Macintyre, author of several works on espionage, wrote in his 2014 book on Philby that MI6 might have left open the opportunity for Philby to flee to Moscow to avoid an embarrassing public trial. Philby himself thought this might have been the case, according to Macintyre. When FBI Director J. Edgar Hoover was informed that one of MI6's top men was a spy for the Russians, he said, "Tell 'em Jesus Christ only had twelve, and one of them was a double [agent]." Upon his arrival in Moscow in January 1963, Philby discovered that he was not a colonel in the KGB, as he had been led to believe. He was paid 500 rubles a month, and his family was not immediately able to join him in exile. It was ten years before he visited KGB headquarters. Philby was under virtual house arrest, guarded, with all visitors screened by the KGB. Mikhail Lyubimov, his closest KGB contact, explained that this was to guard his safety, but later admitted that the real reason was the KGB's fear that Philby would return to London. Philby occupied himself by writing his memoirs, published in the UK in 1968 under the title "My Silent War"; they were not published in the Soviet Union until 1980. He continued to read "The Times", which was not generally available in the USSR, listened to the BBC World Service, and was an avid follower of cricket. Philby's award of the Order of the British Empire was cancelled and annulled in 1965. 
Though Philby claimed publicly in January 1988 that he did not regret his decisions and that he missed nothing about England except some friends, Colman's mustard, and Lea & Perrins Worcestershire sauce, his wife Rufina Ivanovna Pukhova later described Philby as "disappointed in many ways" by what he found in Moscow. "He saw people suffering too much," but he consoled himself by arguing that "the ideals were right but the way they were carried out was wrong. The fault lay with the people in charge." Pukhova said, "he was struck by disappointment, brought to tears. He said, 'Why do old people live so badly here? After all, they won the war.'" Philby drank heavily and suffered from loneliness and depression; according to Rufina, he had attempted suicide by slashing his wrists sometime in the 1960s. Philby found work in the early 1970s in the KGB's Active Measures Department, churning out fabricated documents. Working from genuine unclassified and public CIA or U.S. State Department documents, Philby inserted "sinister" paragraphs regarding U.S. plans. The KGB would stamp the documents "top secret" and begin their circulation. For the Soviets, Philby was an invaluable asset, ensuring the correct use of idiomatic and diplomatic English phrases in their disinformation efforts. Philby died of heart failure in Moscow in 1988. He was given a hero's funeral and posthumously awarded numerous medals by the USSR. In February 1934, Philby married Litzi Friedmann, an Austrian communist whom he had met in Vienna. They subsequently moved to Britain; however, as Philby assumed the role of a fascist sympathiser, they separated. Litzi lived in Paris before returning to London for the duration of the war; she ultimately settled in East Germany. While working as a correspondent in Spain, Philby began an affair with Frances Doble, Lady Lindsay-Hogg, an actress and aristocratic divorcée who was an admirer of Franco and Hitler. They travelled together in Spain through August 1939. In 1940 he began living with Aileen Furse in London. Their first three children, Josephine, John and Tommy Philby, were born between 1941 and 1944. In 1946, Philby finally arranged a formal divorce from Litzi. He and Aileen were married on 25 September 1946, while Aileen was pregnant with their fourth child, Miranda. Their fifth child, Harry George, was born in 1950. Aileen suffered from psychiatric problems, which grew more severe during the period of poverty and suspicion following the flight of Burgess and Maclean. She lived separately from Philby, settling with their children in Crowborough while he lived first in London and later in Beirut. Weakened by alcoholism and frequent sickness, she died of influenza in December 1957. In 1956, Philby began an affair with Eleanor Brewer, the wife of "The New York Times" correspondent Sam Pope Brewer. Following Eleanor's divorce, the couple married in January 1959. After Philby defected to the Soviet Union in 1963, Eleanor visited him in Moscow. In November 1964, after a visit to the United States, she returned, intending to settle permanently. In her absence, Philby had begun an affair with Donald Maclean's wife, Melinda. He and Eleanor divorced, and she departed Moscow in May 1965. Melinda left Maclean and briefly lived with Philby in Moscow. In 1968 she returned to Maclean. In 1971, Philby married Rufina Pukhova, a Russo-Polish woman twenty years his junior, with whom he lived until his death in 1988.
https://en.wikipedia.org/wiki?curid=17012
Kamacite Kamacite is an alloy of iron and nickel, which is found on Earth only in meteorites. The proportion of iron to nickel is between 90:10 and 95:5; small quantities of other elements, such as cobalt or carbon, may also be present. The mineral has a metallic luster, is gray, and has no clear cleavage, although its crystal structure is isometric-hexoctahedral. Its density is about 8 g/cm³ and its hardness is 4 on the Mohs scale. It is also sometimes called balkeneisen. The name was coined in 1861 and is derived from the Greek root "καμακ-" "kamak" or "κάμαξ" "kamaks", meaning vine-pole. It is a major constituent of iron meteorites (octahedrite and hexahedrite types). In the octahedrites it is found in bands interleaving with taenite, forming Widmanstätten patterns. In hexahedrites, fine parallel lines called Neumann lines are often seen, which are evidence for structural deformation of adjacent kamacite plates due to shock from impacts. At times kamacite can be found so closely intermixed with taenite that it is difficult to distinguish them visually, forming plessite. The largest documented kamacite crystal measured 92×54×23 centimeters. Kamacite has many distinctive physical properties, including Thomson structures and an extremely high density. Kamacite is opaque, and its surface generally displays varying shades of gray streaking, or "quilting" patterns. Kamacite has a metallic luster. Kamacite can vary in hardness based on the extent of shock it has undergone, but commonly ranks a four on the Mohs hardness scale. Shock increases kamacite hardness, but hardness is not a fully reliable guide to shock history, as there are many other reasons the hardness of kamacite could increase. Kamacite has a measured density of about 8 g/cm³. It has a massive crystal habit, but individual crystals are normally indistinguishable in natural occurrences. There are no planes of cleavage present in kamacite, which gives it a hackly fracture. Kamacite is magnetic and isometric, which makes it optically isotropic. Kamacite occurs with taenite and with a mixed area of kamacite and taenite referred to as plessite. Taenite contains more nickel (12 to 45 wt. % Ni) than kamacite (which has 5 to 12 wt. % Ni). The increase in nickel content causes taenite to have a face-centered unit cell, whereas kamacite's higher iron content causes its unit cell to be body-centered. This difference is caused by nickel and iron having a similar size but different interatomic magnetic and quantum interactions. There is evidence of a tetragonal phase, observed in X-ray powder tests and later under a microscope. When tested, two meteorites gave d-values that could "be indexed on the basis of a tetragonal unit cell, but not on the basis of a cubic or hexagonal unit cell". It has been speculated to be ε-iron, a hexagonal polymorph of iron. Thomson structures, usually referred to as Widmanstätten patterns, are textures often seen in meteorites that contain kamacite. These are bands which usually alternate between kamacite and taenite. G. Thomson stumbled upon these structures in 1804 when, after cleaning a specimen with nitric acid, he noticed geometric patterns. He published his observations in a French journal, but due to the Napoleonic Wars the English scientists, who were doing much of the meteorite research of the time, never saw his work. 
It was not until four years later, in 1808, that the same patterns were rediscovered by Count Alois von Beckh Widmanstätten, who was heating iron meteorites when he noticed geometric patterns caused by the differing oxidation rates of kamacite and taenite. Widmanstätten told many of his colleagues about these patterns in correspondence, leading to them being referred to as Widmanstätten patterns in most literature. Thomson structures, or Widmanstätten patterns, are created as the meteorite cools; at high temperatures both iron and nickel have face-centered lattices. When the meteorite forms it starts out as entirely molten taenite (greater than 1500 °C), and as it cools past 723 °C the primary metastable phase of the alloy changes into taenite and kamacite begins to precipitate out. It is in this window, as the meteorite cools below 723 °C, that the Thomson structures form, and they can be greatly affected by the temperature, pressure, and composition of the meteorite. Kamacite is opaque and can be observed only in reflected light microscopy. It is isometric and therefore behaves isotropically. As the meteorite cools below 750 °C, iron becomes magnetic as it moves into the kamacite phase. During this cooling the meteorite takes on non-conventional thermoremanent magnetization. Thermoremanent magnetization on Earth gives iron minerals formed in the Earth's crust a higher magnetization than if they were formed in the same field at room temperature. This is a non-conventional thermoremanent magnetization because it appears to be due to a chemical remanent process which is induced as taenite is cooled to kamacite. What makes this especially interesting is that this has been shown to account for all of the magnetic field of ordinary chondrites, which has been shown to be as strong as 0.4 Oe. Kamacite is an isometric mineral with a body-centered unit cell. Kamacite is usually not found in large crystals; however, the largest documented kamacite crystal, an anomaly, measured 92×54×23 centimeters. Even though large crystals are rare, crystallography is important to understand because it plays an important role in the formation of Thomson structures. Kamacite forms isometric, hexoctahedral crystals, which causes the crystals to have many symmetry elements. Kamacite falls under the 4/m3̄2/m class in Hermann–Mauguin notation, meaning it has three fourfold axes, four threefold axes, six twofold axes and nine mirror planes. Kamacite has a space group of Fm3m. Kamacite is made up of a repeating unit of α-(Fe,Ni), Fe₀.₉Ni₀.₁, which gives cell dimensions of a = 8.603 Å, Z = 54 and V = 636.72 ų (a consistency check on these values appears below). The interatomic magnetic and quantum interactions of the iron atoms cause kamacite to have a body-centered lattice. Besides trace elements, kamacite is normally considered to be made up of 90% iron and 10% nickel, but the ratio can reach 95% iron and 5% nickel. This makes iron the dominant element in any sample of kamacite. It is grouped with the native elements in both the Dana and Nickel-Strunz classification systems. Kamacite starts to form around 723 °C, where iron splits from being face-centered to body-centered while nickel remains face-centered. To accommodate this, areas of higher iron concentration start to form, displacing nickel to the areas around them, which creates taenite, the nickel-rich end member. 
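As a rough consistency check on these crystallographic figures (a sketch, not from the source: it assumes the quoted Fe₀.₉Ni₀.₁ composition and uses the standard atomic masses of 55.85 for iron and 58.69 for nickel), the quoted unit cell does reproduce the density of about 8 g/cm³ given earlier:

\[
\rho = \frac{Z\,M}{N_A\,V}
     = \frac{54 \times (0.9 \times 55.85 + 0.1 \times 58.69)\ \text{g/mol}}
            {(6.022 \times 10^{23}\ \text{mol}^{-1}) \times (636.72 \times 10^{-24}\ \text{cm}^{3})}
     \approx 7.9\ \text{g/cm}^{3}
\]

The quoted cell volume is also self-consistent with the edge length, since \(a^{3} = 8.603^{3} \approx 636.7\) ų.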
There has been a great deal of research into kamacite's trace elements. The most notable trace elements in kamacite are gallium, germanium, cobalt, copper, and chromium. Cobalt is the most notable of these: where the nickel content varies from 5.26% to 6.81%, the cobalt content can range from 0.25% to 0.77%. All of these trace elements are metallic, and their appearance near the kamacite–taenite border can give important clues to the environment in which the meteorite formed. Mass spectrometry has revealed kamacite to contain considerable amounts of platinum (averaging 16.31 μg/g), iridium (averaging 5.40 μg/g), osmium (averaging 3.89 μg/g), tungsten (averaging 1.97 μg/g), gold (averaging 0.75 μg/g) and rhenium (averaging 0.22 μg/g). The considerable amounts of cobalt and platinum are the most notable. Kamacite sulfurization has been performed experimentally under laboratory conditions. Sulfurization resulted in three distinct phases: a mono-sulfide solid solution (Fe,Ni,Co)₁₋ₓS, a pentlandite phase (Fe,Ni,Co)₉₋ₓS₈, and a P-rich phase. This was done in a lab to reproduce conditions consistent with those of the solar nebula. With this information it would be possible to extract information about the thermodynamic, kinetic, and physical conditions of the early solar system. This remains speculative, as many of the sulfides in meteorites are unstable and have been destroyed. Kamacite also alters to tochilinite (Fe²⁺₅₋₆(Mg,Fe²⁺)₅S₆(OH)₁₀). This is useful for giving clues as to how much the meteorite as a whole has been altered. Kamacite-to-tochilinite alteration can be observed with petrologic microscopes, scanning electron microscopes, and electron microprobe analysis. This allows researchers to easily index the amount of alteration that has taken place in the sample; the index can later be referenced when analyzing other areas of the meteorite where alteration is not as clear. Taenite is the nickel-rich end member of the kamacite–taenite solid solution. Taenite occurs naturally on Earth, whereas kamacite is found on Earth only when it arrives from space. As kamacite forms, it expels nickel to the surrounding area, and this surrounding area forms taenite. Due to the body-centered nature of the kamacite lattice and the face-centered nature of the taenite lattice, the two make intricate angles where they come in contact with each other. These angles reveal themselves macroscopically in the Thomson structures. This relationship also gives rise to the terms ataxite, hexahedrite and octahedrite. Ataxite refers to meteorites that do not show a grossly hexahedral or octahedral structure. Meteorites composed of 6 wt% or less nickel are often referred to as hexahedrites, because the crystal structure of kamacite is isometric, causing the meteorite to be cubic. Likewise, if the meteorite is dominated by the face-centered taenite, it is called an octahedrite, as kamacite will exsolve from the octahedral crystal boundaries of taenite, making the meteorite appear octahedral. Both hexahedrites and octahedrites show these forms only when the meteorite breaks along crystal planes or when it is prepared to accentuate the Thomson structures; therefore many are mistakenly called ataxites at first. Trace elements have been analyzed in the formation of kamacite at different temperatures, but the trace elements in taenite seem better suited to give clues to the formation temperature of the meteorite. 
As the meteorite cools and taenite and kamacite sort out of each other, some of the trace elements will prefer to be located in taenite or kamacite. Analyzing the taenite–kamacite boundary can therefore give clues, from the final location of the trace elements, to how quickly cooling occurred, as well as to a myriad of other conditions during formation. Kamacite is only stable at temperatures below 723 °C (or 600 °C, according to Stacey and Banerjee, 2012), as that is where iron becomes cool enough to arrange in a body-centered lattice. Kamacite is also only stable at low pressures, as can be inferred from the fact that it only forms in space. Metallographic and X-ray diffraction techniques can be used on kamacite to determine the shock history of a meteorite. Using hardness to determine shock histories has been experimented with, but was found to be too unreliable: when the Vickers hardness test was applied to a number of kamacite samples, shocked meteorites were found to have values of 160–170 kg/mm², while non-shocked meteorites can have values as high as 244 kg/mm². Shock causes a unique iron transformation structure that can be measured using metallographic and X-ray diffraction techniques. After using these techniques to determine shock history, it was found that 49% of meteorites found on Earth contain evidence of shock. Kamacite-bearing meteorites have been found on every continent on Earth, and kamacite has also been found on Mars. Kamacite is primarily associated with meteorites because it needs high temperatures, low pressures and few other, more reactive elements such as oxygen. Chondrite meteorites can be split into groups based on the chondrules present. There are three major types: enstatite chondrites, carbonaceous chondrites and ordinary chondrites. Ordinary chondrites are the most abundant type of meteorite found on Earth, making up 85% of all meteorites recorded. Ordinary chondrites are thought to have originated from three different sources, and thus they come in three types: LL, L, and H. LL stands for low iron and low metal, L stands for low iron abundance, and H stands for high iron content. All ordinary chondrites contain kamacite, in decreasing abundance as one moves from H to LL chondrites. Kamacite is also found in many of the less common meteorites, mesosiderites and E chondrites. E chondrites are chondrites which are made primarily of enstatite and account for only 2% of the meteorites that fall to Earth. E chondrites have an entirely different source rock from that of the ordinary chondrites. In analyses of kamacite in E chondrites, it was found that they generally contain less nickel than average. Since kamacite is only formed in space and is only found on Earth in meteorites, it has very low abundance on Earth. Its abundance outside our solar system is difficult to determine. Iron, the main component of kamacite, is the sixth most abundant element in the universe and the most abundant of the elements generally considered metallic. Taenite and tochilinite are minerals that are commonly associated with kamacite. Kamacite has been found and studied in Meteor Crater, Arizona. Meteor Crater was the first confirmed meteor impact site on the planet, though it was not universally recognized as such until the 1950s. In the 1960s the United States Geological Survey discovered kamacite in specimens gathered from around the site, tying the mineral to meteorites. Kamacite primarily forms in meteorites but has been found on extraterrestrial bodies such as Mars. This was discovered by the Mars Exploration Rover (MER) Opportunity. 
The kamacite did not originate on Mars but was delivered there by a meteorite. This was of particular interest because the meteorite fell under the lesser-known class of mesosiderites. Mesosiderites are very rare on Earth, and the occurrence of one on Mars gives clues to the origin of its larger source rock. The primary research use of kamacite is to shed light on a meteorite's history: whether one is looking at the shock history recorded in the iron structures or at the conditions during the formation of the meteorite using the kamacite–taenite boundary, understanding kamacite is key to understanding our universe. Due to the rarity and the generally dull appearance of kamacite, it is not popular among private collectors. However, many museums and universities have samples of kamacite in their collections. Normally kamacite samples are prepared using polish and acid to show off the Thomson structures. Preparing specimens involves washing them in a solvent, as Thomson did with nitric acid, to bring out the Thomson structures. They are then heavily polished to a shine. Generally the kamacite can easily be told apart from the taenite, as after this process the kamacite looks slightly darker. Kamacite and taenite both have the potential to be economically valuable. An option that would make asteroid mining more profitable would be to gather the trace elements. One difficulty would be refining elements such as platinum and gold. Platinum is worth around US$12,000/kg (kamacite contains 16.11 μg/g platinum) and gold is worth around US$12,000/kg (kamacite contains 0.52 μg/g gold); however, the likelihood of a profitable return is fairly slim, as the rough calculation below illustrates. Asteroid mining for space uses could be more practical, as transporting materials from Earth is costly. Similar to current plans for reusing the modules of the International Space Station in other missions, an iron meteorite could be used to build spacecraft in space. NASA has put forward preliminary plans to build a spaceship in space.
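To put those figures in perspective, here is a back-of-envelope calculation using only the numbers quoted above (the concentration and price are taken from the text at face value; actual market prices fluctuate):

\[
16.11\ \mu\text{g/g} = 16.11\ \text{g of platinum per tonne of kamacite}, \qquad
16.11\ \text{g} \times \frac{\text{US}\$12{,}000}{1000\ \text{g}} \approx \text{US}\$193
\]

That is on the order of only two hundred dollars of platinum per tonne of raw kamacite, before any extraction or refining costs, which is why a profitable return is considered unlikely.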
https://en.wikipedia.org/wiki?curid=17013
Kaohsiung Kaohsiung City (Wade–Giles: "Kao¹-hsiung²") is a special municipality in southern Taiwan. It ranges from the coastal urban centre to the rural Yushan Range, with an area of about 2,952 square kilometres. Kaohsiung city has a population of approximately 2.77 million people and is Taiwan's third most populous city. Since its founding in the 17th century, Kaohsiung has grown from a small trading village into the political and economic centre of southern Taiwan, with key industries such as manufacturing, steel-making, oil refining, freight transport and shipbuilding. It is classified as "High Sufficiency" by GaWC, with some of the most prominent infrastructure in Taiwan. The Port of Kaohsiung is the largest and busiest harbour in Taiwan, while Kaohsiung International Airport is the second busiest airport by number of passengers. The city is well connected to other major cities by high speed and conventional rail, as well as several national freeways. It also hosts the Republic of China Navy fleet headquarters and its naval academy. More recent public works such as the Pier-2 Art Center, the National Kaohsiung Center for the Arts and the Kaohsiung Music Center have been aimed at growing the tourism and cultural industries of the city. Hoklo immigrants to the area during the 16th and 17th centuries called the region "Takau". The surface meaning of the associated Chinese characters was "beat the dog". According to one theory, the name Takau originates from the aboriginal Siraya language and translates as "bamboo forest". According to another theory, the name evolved via metathesis from the name of the Makatao tribe, who inhabited the area at the time of European and Hoklo settlement. The Makatao are considered to be part of the Siraya tribe. During the Dutch colonization of southern Taiwan, the area was known as "Tancoia" to the western world for a period of about three decades. In 1662, the Dutch were expelled by the Kingdom of Tungning, founded by Ming loyalists of Koxinga. His son, Zheng Jing, renamed the village "Banlian-chiu" in 1664. The name of "Takau" was restored in the late 1670s, when the town expanded drastically with immigrants from mainland China, and was kept through Taiwan's cession to the Japanese Empire in 1895. In his 1903 general history of Taiwan, US Consul to Formosa James W. Davidson relates that "Takow" was already a well-known name in English. In 1920, the name was changed to Takao, and the area was administered under Takao Prefecture. While the new name had quite a different surface meaning, its pronunciation in Japanese sounded more or less the same as the old name spoken in Hokkien. After Taiwan was handed to the Republic of China, the name did not change, but the official romanization became Kaohsiung, derived from the Wade–Giles romanization of the Mandarin Chinese pronunciation. The name "Takau" remains the official name of the city in Austronesian languages of Taiwan such as Rukai, although these are not widely spoken in the city. The name also remains popular locally in the naming of businesses, associations, and events. The written history of Kaohsiung can be traced back to the early 17th century, though archaeological studies have found signs of human activity in the region from as long as 7,000 years ago. Prior to the 17th century, the region was inhabited by the Makatao people of the Siraya tribe, who settled on what they named Takau Isle (translated to 打狗嶼 by Ming Chinese explorers); "Takau" means "bamboo forest" in the aboriginal language. 
The earliest evidence of human activity in the Kaohsiung area dates back to roughly 4,700–5,200 years ago. Most of the discovered remnants were located in the hills surrounding Kaohsiung Harbor. Artifacts were found at Shoushan, Longquan Temple, Taoziyuan, Zuoying, Houjing, Fudingjin and Fengbitou. The prehistoric Dapenkeng, Niuchouzi, Dahu, and Niaosong civilizations were known to inhabit the region. Studies of the prehistoric ruins at Longquan Temple have shown that that civilization existed at roughly the same time as the beginnings of the aboriginal Makatao civilization, suggesting a possible origin for the latter. Unlike some other archaeological sites in the area, the Longquan Temple ruins are relatively well preserved. Prehistoric artifacts suggest that the ancient Kaohsiung Harbor was originally a lagoon, with early civilizations functioning primarily as hunter-gatherer societies. Some agricultural tools have also been discovered, suggesting that some agricultural activity was also present. The first Chinese records of the region were written in 1603 by Chen Di, a member of Ming admiral Shen You-rong's expedition to rid the waters around Taiwan and Penghu of pirates. In his report on the "Eastern Barbarian Lands" (Dong Fan Ji), Chen Di referred to a Ta-kau Isle. Taiwan became a Dutch colony in 1624, after the Dutch East India Company was ejected from Penghu by Ming forces. At the time, Takau was already one of the most important fishing ports in southern Taiwan. The Dutch named the place "Tankoya", and the harbor "Tancoia". The Dutch missionary François Valentijn named Takau Mountain "Ape Berg", a name that would find its way onto European navigational charts well into the 18th century. "Tankoia" was located north of Ape's Hill and a few hours south from Tayouan (modern-day Anping, Tainan) by sail. At the time, a wide shallow bay existed there, sufficient for small vessels; however, constant silting changed the coastline. During this time, Taiwan was divided into five administrative districts, with Takau belonging to the southernmost district. In 1630, the first large-scale immigration of Han Chinese to Taiwan began due to famine in Fujian, with merchants and traders from China seeking to purchase hunting licenses from the Dutch or hide out in aboriginal villages to escape the authorities in China. In 1684, the Qing empire annexed Taiwan and the town was made part of Fengshan County, itself part of Taiwan Prefecture. It was first opened as a port during the 1680s and subsequently prospered for generations. In 1895, Taiwan was ceded to Japan as part of the Treaty of Shimonoseki. Administrative control of the city was moved from New Fongshan Castle to the Fongshan Sub-District. In November 1901, twenty "chō" were established in total; Hōzan Chō was established nearby. In 1909, Hōzan Chō was abolished, and Takow was merged into Tainan Chō. In 1920, during the tenure of the 8th Governor-General, Den Kenjirō, districts were abolished in favor of prefectures; thus the city was administered as Takao City under Takao Prefecture. The Japanese developed Takao, especially the harbor, laying the foundation for Kaohsiung as a port city. Takao was then systematically modernized and connected to the end of the North–South Railway. With a north–south regional economic corridor forming from Taipei to Kaohsiung in the 1930s, Japan's Southward Policy set Kaohsiung on course to become an industrial center. Kaohsiung Harbor was also developed, starting from 1894. 
The city center was relocated several times during the period due to the government's development strategy. Development was initially centered on the "Ki-au" region, but the government began laying railways, upgrading the harbor, and passing new urban plans. New industries such as oil refining, machinery, shipbuilding and cement production were also introduced. Before and during World War II the city handled a growing share of Taiwan's agricultural exports to Japan, and was also a major base for Japan's campaigns in Southeast Asia and the Pacific. Extremely ambitious plans for the construction of a massive modern port were drawn up. Toward the end of the war, the Japanese promoted some industrial development at Kaohsiung, establishing an aluminum industry based on the abundant hydroelectric power produced by the Sun Moon Lake project in the mountains. The city was heavily bombed by Task Force 38 and FEAF during World War II between 1944 and 1945. After control of Taiwan was handed over from Japan to the government of the Republic of China on 25 October 1945, Kaohsiung City and Kaohsiung County were established as a provincial city and a county of Taiwan Province respectively on 25 December 1945. The official romanization of the name came to be "Kaohsiung", based on the Wade–Giles romanization of the Mandarin reading of the kanji name. Kaohsiung City then consisted of 10 districts: Gushan, Lianya (renamed "Lingya" in 1952), Nanzi, Cianjin, Cianjhen, Cijin, Sanmin, Sinsing, Yancheng, and Zuoying. During this time, Kaohsiung developed rapidly. The port, badly damaged in World War II, was restored, and the city also became a fishing port for boats sailing to Philippine and Indonesian waters. Largely because of its climate, Kaohsiung overtook Keelung as Taiwan's major port. Kaohsiung also surpassed Tainan to become the second largest city of Taiwan in the late 1970s, and Kaohsiung City was upgraded from a provincial city to a special municipality on 1 July 1979 by the Executive Yuan, with a total of 11 districts. The additional district was Siaogang District, annexed from Siaogang Township of Kaohsiung County. The Kaohsiung Incident, in which the government suppressed a commemoration of International Human Rights Day, occurred on 10 December 1979. Since then, Kaohsiung has gradually grown into a political center of the Pan-Green population of Taiwan, in opposition to Taipei, where the majority of the population are Kuomintang supporters. On 25 December 2010, Kaohsiung City merged with Kaohsiung County to form a larger special municipality with administrative centers in Lingya District and Fengshan District. On 31 July 2014, a series of gas explosions occurred in the Cianjhen and Lingya Districts of the city, killing 31 and injuring more than 300. Five roads were destroyed near the city center. It was the largest gas explosion in Taiwan's modern history. The city sits on the southwestern coast of Taiwan facing the Taiwan Strait, bordering Tainan City to the north, Chiayi and Nantou County to the northwest, Taitung County to its northeast and Pingtung County to the south and southeast. The downtown areas are centered on Kaohsiung Harbor, with Cijin Island on the other side of the harbor acting as a natural breakwater. The Love River (Ai River) flows into the harbor through the Old City and downtown. Zuoying Military Harbor lies to the north of Kaohsiung Harbor and the city center. Kaohsiung's natural landmarks include Ape Hill and Mount Banping. 
Located about a degree south of the Tropic of Cancer, Kaohsiung has a tropical savanna climate (Köppen "Aw"), with relative humidity ranging between 71 and 81%. Kaohsiung's warm climate is very much dictated by its low latitude and its exposure to warm sea temperatures year-round, with the Kuroshio Current passing by the coast of southern Taiwan and the Central Mountain Range on the northeast blocking out the cool northeastern winds during the winter. The city therefore has a noticeably warmer climate than nearby cities located at similar latitudes, such as Hong Kong and Guangzhou, as well as various cities further south in northern Vietnam, such as Hanoi. Although the climate is classified as tropical, Kaohsiung has a defined cooler season, unlike most other Asian cities with this climate classification that lie closer to the equator, such as Singapore or Manila. Daily maximum temperatures run markedly higher during the warmer season (April to November) than during the cooler season (December to March), except when cold fronts strike during the winter months, when the daily mean temperature of the city can drop to between 10 and 12 °C, depending on the strength of the cold front. Besides the high temperatures of the usual summer months, daytime temperatures in the inland districts of the city can often run very high from mid-March to late April, before the onset of the monsoon season, with clear skies and southwesterly airflows. Rainfall is concentrated primarily from June to August. With more than 2,210 hours of bright sunshine, the city is one of the sunniest areas in Taiwan. The sea temperature of Kaohsiung Harbor remains warm year-round, the second highest in Southern Taiwan after Liuqiu Island. According to recent records, the average temperature of the city rose around 1 degree Celsius over the three decades from 1983 to 2012. As of December 2018, Kaohsiung city has a population of 2,773,533 people, making it the third-largest city after New Taipei and Taichung, with a population density of 939.59 people per square kilometer. Within the city, Fongshan District is the most populated district, with a population of 359,519 people, while Xinxing District is the most densely populated district, with a population density of 25,820 people per square kilometer. As in most Taiwanese cities or counties, the majority of the population are Han Chinese. The Han are divided into three subgroups: Hoklo, Hakka, and Waishengren. The Hoklo and Waishengren mostly live in flatland townships and the city centre, while the majority of the Hakka population live in the suburbs or rural townships of the northeastern hills. The indigenous peoples of Kaohsiung, who belong to various ethnic groups that speak languages of the Austronesian language family, live mostly in mountain townships such as Taoyuan or Namasia. The main indigenous groups in the city include the Bunun, Rukai, Saaroa and the Kanakanavu. As of December 2010, Kaohsiung city hosted around 21,000 foreign spouses: around 12,353 from the People's Republic of China, 4,244 Vietnamese, around 800 Japanese and Indonesians, and around 4,000 other Asians or foreigners from Europe or the Americas. As of April 2013, Kaohsiung hosted 35,074 foreign workers, who mainly work as factory workers or domestic helpers (not including foreign specialists such as teachers and other professionals). 
Around half of these workers are Indonesians, and the other half are workers from other Southeast Asian countries, mainly Vietnam, the Philippines or Thailand. Kaohsiung is a major international port and industrial city in the southwest of Taiwan. As an exporting center, Kaohsiung serves the agricultural interior of southern Taiwan, as well as the mountains of the southeast. Major raw material exports include rice, sugar, bananas, pineapples, peanuts (groundnuts) and citrus fruits. The Linhai Industrial Park, on the waterfront, was completed in the mid-1970s and includes a steel mill, shipyard, petrochemical complex, and other industries. The city has an oil refinery, aluminum and cement works, fertilizer factories, sugar refineries, brick and tile works, canning factories, salt-manufacturing factories, and papermaking plants. Designated an export-processing zone in the late 1970s, Kaohsiung also attracted foreign investment to process locally purchased raw materials for export. The ongoing Nansing Project is a plan to reclaim of land along the coast by 2011. The Kaohsiung Harbor Bureau plans to buy 49 hectares of the reclaimed land to establish a solar energy industrial district within the harbor's free trade zone. The gross domestic product (GDP) in nominal terms of Kaohsiung City is estimated to be around US$45 billion, and US$90 billion for the metropolitan region. The GDP per capita in nominal terms was approximately US$24,000. Despite early success and heavy governmental investment, the city suffers from the economic north–south divide in Taiwan, which continues to be a centre of political debate. There have been public efforts to shift the local economy towards tourism and cultural industries, with projects such as the Pier-2 Art Center, the National Kaohsiung Center for the Arts and the Kaohsiung Music Center. The main agricultural produce in Kaohsiung is vegetables, fruits and rice, with a total arable land of 473 km2, which accounts for 16% of the total area of the municipality. Kaohsiung has the highest production of guava, jujube and lychee in Taiwan. The main livestock are chickens, dairy cattle, deer, ducks, geese, pigs and sheep. The total annual agricultural output of Kaohsiung is NT$24.15 billion. Main landmarks of Kaohsiung City include the 85 Sky Tower, the Ferris wheel of the Kaohsiung Dream Mall, the Kaohsiung Arena and Kaohsiung Harbor. The city is also known for having a large number of shopping streets, organized night markets and newly developed leisure parks such as the Pier-2 Art Center, E-DA Theme Park, Metropolitan Park, the Kaohsiung Museum of Fine Arts and Taroko Park. Natural attractions of the city include Shoushan (Monkey Mountain), the Love River, Cijin Island, Sizihwan, the Dapingding Tropical Botanical Garden and Yushan National Park at the northeastern tip of the city. The city also features various historical attractions, such as the Old City of Zuoying, a historical town built during the early 17th century, the Former British Consulate at Takao, built during the late 19th century, and various sugar and crop factories built during the Japanese occupation of Taiwan. Kaohsiung City includes a wide range of natural attractions due to its large size and geographical variation, as it is bordered by the Central Mountain Range in the northeast and the warm South China Sea to the west and southwest.
The year-round warm climate allows coral reefs to grow along the coasts around Kaohsiung Harbor, with Shoushan being a small mountain made up entirely of coral reefs and calcium carbonate, while the mountainous districts in the northeast include Taiwan's highest mountain, Yushan. Other notable natural attractions include Mount Banping, Lotus Lake, and Dongsha Atoll National Park, which is currently inaccessible to the public due to military restrictions. A large number of historical sites and monuments were left in the city by Dutch colonization in the 17th century, Qing rule during the 18th and 19th centuries and the Japanese empire from the late 19th century to the mid-20th century. The city government has protected various sites and monuments from further damage, and many have been opened to the public since the early 1980s. Notable historical sites include the Cemetery of Zhenghaijun, Fengshan Longshan Temple, the Former British Consulate at Takao, the Former Dinglinzihbian Police Station, the Meinong Cultural and Creative Center, the Former Sanhe Bank, and the Cihou Lighthouse, one of the oldest lighthouses in the city. Kaohsiung is home to many museums, including the Chung Li-he Museum, Cijin Shell Museum, Fo Guang Shan Buddha Museum, Jiasian Petrified Fossil Museum, Kaohsiung Astronomical Museum, Kaohsiung Hakka Cultural Museum, Kaohsiung Harbor Museum, Kaohsiung Museum of Fine Arts, Kaohsiung Museum of History, Kaohsiung Museum of Labor, Kaohsiung Vision Museum, Meinong Hakka Culture Museum, National Science and Technology Museum, Republic of China Air Force Museum, Soya-Mixed Meat Museum, Taiwan Sugar Museum, Takao Railway Museum and the YM Museum of Marine Exploration Kaohsiung. As the largest municipality in Taiwan, Kaohsiung has a number of newly built leisure areas and parks. Notable parks and pavilions in the city include Central Park, Siaogangshan Skywalk Park, Fo Guang Shan Monastery, the Dragon and Tiger Pagodas, the Spring and Autumn Pavilions, the Love Pier, Singuang Ferry Wharf and Kaohsiung Fisherman's Wharf. A notable zoo in the city is the Kaohsiung City Shoushan Zoo. Kaohsiung is home to many night markets, such as Jin-Zuan Night Market, Liuhe Night Market, Ruifeng Night Market, Zhonghua Street Night Market and Kaisyuan Night Market. Other attractions include the Chi Jin Mazu Temple, the Dome of Light at Kaohsiung MRT's Formosa Boulevard Station, the Kaohsiung Mosque and the Tower of Light in Sanmin District. Traditional "wet" markets have long been the source of meat, fish, and produce for many residents. With the arrival of western-style supermarkets in the 1980s and 1990s, such markets have encountered fierce competition. In 1989, the global leader in hypermarkets, Carrefour, entered Asia, opening its first store in Kaohsiung. Due to the success of its Taiwan operation, the French retailer expanded throughout the country and Asia. Jean-Luc Chéreau, the general manager in Taiwan from 1993 to 1999, used his newfound understanding of Chinese culture and ways of doing business with Chinese customers to lead Carrefour's China expansion starting in 1999. As of February 2020, Carrefour had opened 137 hypermarkets and supermarkets in Taiwan. Despite the fierce competition from "westernized" supermarkets, Taiwan's traditional markets and mom-and-pop stores remain "one of the most popular retail formats for many Asian families when they purchase daily food items and basic household goods".
The majority of those living in Kaohsiung can communicate in both Taiwanese Hokkien and Standard Chinese. Some of the elderly who grew up during the Japanese colonization of Taiwan can communicate in Japanese, while most of the younger population has basic English skills. Since the spread of Standard Chinese after the Nationalist Government retreated to Taiwan in 1949, Hakka Chinese and the various Formosan languages have gradually fallen out of use among the younger generation, and many Formosan languages are therefore classified as moribund or endangered languages by the United Nations. Nowadays, only elderly Hakka people, mostly living in the Meinong, Liouguei, Shanlin and Jiasian districts, can communicate in Hakka, and elderly Taiwanese aborigines, living mostly in the rural districts of Namasia and Taoyuan, can communicate in the aboriginal languages. The Taiwanese government has established special affairs committees for both the Aboriginals and the Hakkas to protect their language, culture, and minority rights. Kaohsiung's rich resources of ocean, mountains and forests shape a unique and active multi-faceted artistic and cultural aesthetic in public infrastructure and transport, public art, and city architecture, from MRT stations and city space to art galleries. The "Dome of Light" in the concourse of Formosa Boulevard Station of the Kaohsiung MRT is one of the world's largest public glass works of art. The city also has the Urban Spotlight Arcade, spanning a street in Cianjin District. In October 2018, Weiwuying (the National Kaohsiung Centre for the Arts), designed by Mecanoo, opened. The religious population of Kaohsiung is mainly divided into five groups: Buddhists, Taoists, Muslims, Catholics and Protestants. Kaohsiung City has 1,481 temples, the second-highest number in Taiwan after Tainan, as well as 306 churches. Buddhism is one of the major religions in Taiwan, with over 35% of Taiwan's population identifying as Buddhist, and the same applies to Kaohsiung City. Kaohsiung also hosts the largest Buddhist temple in Taiwan, the Fo Guang Shan Monastery with its Fo Guang Shan Buddha Museum. There are also other Buddhist temples such as Fengshan Longshan Temple and Hong Fa Temple. Around 33% of the Taiwanese population are Taoists, making Taoism the second-largest religion in Taiwan. Most people who believe in Taoism also subscribe to Buddhism at the same time, as the differences and boundaries between the two religions are not always clear. Many residents of the area also worship the sea goddess known as Tian Shang Sheng Mu () or Mazu, who is variously syncretized as a Taoist immortal or an embodiment of the bodhisattva Guanyin. Her temple on Cijin Island, Chi Jin Mazu Temple, is the oldest in the city, with its original bamboo-and-thatch structure first opened in 1673. The area surrounding it formed the center of the city's early settlement. There are also other prominent Taoist temples, such as Fengshan Tiangong Temple, dedicated to the Jade Emperor; Cih Ji Palace, dedicated to Bao Sheng Da Di; Qing Shui Temple, dedicated to Qing Shui Zu Shi; and Gushan Daitian Temple, dedicated to Wang Ye worship. Christianity is a growing religion in Taiwan. It was first brought to the island when the Dutch and Spanish colonized Taiwan during the 17th century, spreading mostly among the aboriginals. Kaohsiung currently hosts around 56,000 Christians. Besides the majority population of Buddhists and Taoists, Kaohsiung also includes a rather small population of Muslims.
During the Chinese Civil War, some 20,000 Muslims, mostly soldiers and civil servants, fled mainland China with the Kuomintang to Taiwan. During the 1980s, another few thousand Muslims from Myanmar and Thailand, who are mostly descendants of Nationalist soldiers who fled Yunnan after the communist takeover, migrated to Taiwan in search of a better life, resulting in an increase in the Muslim population of the country. More recently, with the rising number of Indonesian workers in Taiwan, an estimated 88,000 Indonesian Muslims currently live in the country, in addition to the existing 53,000 Taiwanese Muslims. Combining all demographics, Taiwan hosts around 140,000 Muslims, with around 25,000 living in Kaohsiung. Kaohsiung Mosque is the largest mosque in Kaohsiung and the main gathering site of Muslims within the city. Kaohsiung has sometimes been seen as the political opposite of Taipei. While northern Taiwan has leaned towards the Pan-Blue Coalition in state-level elections, southern Taiwan, including Kaohsiung, has leaned towards the Pan-Green Coalition since the late 1990s. Frank Hsieh of the Democratic Progressive Party was reelected twice as Mayor of Kaohsiung, and was widely credited for transforming the city from an industrial sprawl into an attractive modern metropolis. Hsieh resigned from the office of mayor to take up the office of Premier of the Republic of China in 2005. The municipal election held on 9 December 2006 resulted in a victory for the Democratic Progressive Party's candidate Chen Chu, the first elected female mayor of a special municipality in Taiwan, defeating her Kuomintang rival and former deputy mayor, Huang Chun-ying. The current mayor of Kaohsiung City is Yang Ming-jou. Kaohsiung is divided into 38 districts, three of which are mountain indigenous districts. There are a total of 651 villages, each of which is subdivided into neighborhoods (鄰); there are 18,584 neighborhoods in Kaohsiung City. Lingya and Fengshan Districts are the administrative centers of the city, while Lingya and Xinxing Districts are the two most densely populated districts. Kaohsiung has the most districts of any special municipality in Taiwan. A major port, through which pass most of Taiwan's marine imports and exports, is located in the city but is not managed by the city government. Also known as the "Harbour Capital" of Taiwan, Kaohsiung has always had a strong link with the ocean and maritime transportation. Ferries play a key role in everyday transportation, especially across the harbor. With five terminals and 23 berths, the Port of Kaohsiung is Taiwan's largest container port and the 13th largest in the world. In 2007 the port reached its handling capacity with a record trade volume of . A new container terminal is under construction, increasing future handling capacity by by 2013. The Port of Kaohsiung is not officially a part of Kaohsiung City; instead, it is administered by the Kaohsiung Port Authority, under the Ministry of Transportation. There is a push for Kaohsiung City to annex the Port of Kaohsiung to facilitate better regional planning. Kaohsiung is one of the biggest ports in the world for importing shark fins, sold at high prices in the restaurants and shops of Taiwan and China. They are brought in from overseas and are placed out to dry in the sun on residential rooftops near the port.
Kaohsiung City is also home to Taiwan's second-largest international airport, Kaohsiung International Airport, located in Siaogang District near the city's center. It is one of the three major international airports of Taiwan, serving passengers from the entire southern and southeastern part of the country. However, the airport is relatively small, with short runways compared to other major airports in Taiwan owing to its age and its location near the city center, making it impossible for large aircraft such as the Airbus A380 to land there. As a result, plans for runway expansion or for building a replacement airport have been proposed. Kaohsiung Mass Rapid Transit opened for service in March 2008. The MRT is made up of two lines with 37 stations, covering a distance of . Notably, two of Kaohsiung's MRT stations, Formosa Boulevard Station and Central Park Station, were ranked among the top 50 most beautiful subway stations in the world by Metrobits.org in 2011. In 2012, the two stations were ranked 2nd and 4th, respectively, among the top 15 most beautiful subway stops in the world by BootsnAll. The Circular Light Rail Line (also known as the Kaohsiung LRT or Kaohsiung Tram) is a planned light rail line for Kaohsiung City. Construction of Phase 1, known as the Waterside Light Rail, began in June 2013, and it has been in full operation since September 2017. To combat air pollution, use of the light rail, as well as buses, was made free of charge for electronic ticket holders from December to February, when air pollution is at its peak. The city is served by the Taiwan Railways Administration's Western Line and Pingtung Line. Kaohsiung Main Station is an underground station, replacing the old ground-level station. Taiwan High Speed Rail also serves Kaohsiung City, at Zuoying Station in northern Kaohsiung City. Kaohsiung is home to Taiwan's largest stadium, the National Stadium, and the Kaohsiung Arena. The National Stadium is Taiwan's largest international-class stadium, with a maximum capacity of 55,000 seats. Kaohsiung hosted the 2009 World Games at the National Stadium; nearly 6,000 athletes, officials, coaches, referees and others from 103 countries participated. Kaohsiung was also home to the Kaohsiung Truth of the ASEAN Basketball League, the first team in the league's history based outside Southeast Asia; the team folded in 2017. Kaohsiung has a number of colleges and junior colleges offering training in commerce, education, maritime technology, medicine, modern languages, nursing, and technology, as well as various international schools and eight national military schools, including the three major military academies of the country: the Republic of China Military Academy, the Republic of China Naval Academy and the Republic of China Air Force Academy. The Kaohsiung Exhibition Center, built by the Kaohsiung City Government, was opened on 14 April 2014. It includes an exhibition space for 1,500 booths and a convention hall for 2,000 people. The center hosted the Taiwan International Boat Show in May 2014. Another conference and event venue is the International Convention Center Kaohsiung, renovated in 2013. Kaohsiung is twinned with the following locations.
https://en.wikipedia.org/wiki?curid=17015
Kashubians The Kashubians (; ; ), also known as Cassubians or Kashubs, are a Lechitic (West Slavic) ethnic group native to the historical region of Eastern Pomerania called Pomerelia, in north-central Poland. Their settlement area is referred to as Kashubia. They speak the Kashubian language, which is classified either as a separate language closely related to Polish, or as a Polish dialect. Analogously to their linguistic classification, the Kashubs are considered either an ethnic or a linguistic community. The Kashubs are closely related to the Poles, and are grouped with the Slovincians as Pomeranians. Similarly, the Slovincian (now extinct) and Kashubian languages are grouped as Pomeranian languages, with Slovincian (also known as Łeba Kashubian) either a distinct language closely related to Kashubian, or a Kashubian dialect. Among larger cities, Gdynia ("Gdiniô") contains the largest proportion of people declaring Kashubian origin. However, the biggest city of the Kashubia region is Gdańsk ("Gduńsk"), the capital of the Pomeranian Voivodeship. Between 80.3% and 93.9% of the people in towns such as Linia, Sierakowice, Szemud, Kartuzy, Chmielno and Żukowo are of Kashubian descent. The traditional occupations of the Kashubs have been agriculture and fishing. These have been joined by the service and hospitality industries, as well as agrotourism. The main organization that maintains the Kashubian identity is the Kashubian-Pomeranian Association. The recently formed "Odroda" is also dedicated to the renewal of Kashubian culture. The traditional capital has long been disputed; Kartuzy ("Kartuzë") is among the seven contenders. The biggest cities claiming to be the capital are Gdańsk ("Gduńsk"), Wejherowo ("Wejrowò"), and Bytów ("Bëtowò"). The total number of Kashubians (Pomeranians) varies depending on one's definition. A common estimate is that over 500,000 people in Poland are of Kashubian ethnicity, with estimates ranging from ca. 500,000 to ca. 570,000. In the Polish census of 2002, only 5,100 people declared Kashubian national identity, although 52,655 declared Kashubian as their everyday language. Most Kashubs declare Polish national identity and Kashubian ethnicity, and are considered both Polish "and" Kashubian. In the 2002 census there was no option to declare one national identity and a different ethnicity, or more than one ethnicity. In the 2011 census, the number of persons declaring "Kashubian" as their only ethnicity was 16,000, and 233,000 including those who declared Kashubian as their first or second ethnicity (together with Polish). In that census, over 108,000 people declared everyday use of the Kashubian language. The number of people who can speak at least some Kashubian is higher, around 366,000. Map (on page 122): http://docplayer.pl/57273906-Instytut-kaszubski-acta-cassubiana-tom-xvii.html As of 1890, the linguist Stefan Ramułt estimated the number of Kashubs (including Slovincians) in Pomerelia at 174,831. He also estimated that at that time there were over 90,000 Kashubs in the United States, around 25,000 in Canada, 15,000 in Brazil and 25,000 elsewhere in the world, in total 330,000. Kashubs are a West Slavic people living on the shores of the Baltic Sea, with their own unique language and traditions, having lived somewhat isolated for centuries from the wider Polish population.
Until the end of the 12th century, the vast majority of the inhabitants of Pomerania (Hither, Farther and Eastern) were Slavic-speakers, but the province was quite sparsely populated, with large areas covered by forests and waste lands. During the 13th century, the German "Ostsiedlung" (eastward settlement) began in this region. Slavic dukes of Pomerania such as Barnim I (1220–1278) – despite calling themselves – contributed greatly to the change in ethnic structure by promoting German immigration and granting land to German nobles, monks and clergy. The Slavic ruling dynasty itself started intermarrying with German princesses and became culturally Germanized over time. Wendish commoners became alienated in their own land, their culture replaced by that of the newcomers. All of this led to the Germanization of most Slavic Pomeranians and the gradual death of their Slavic language, with the general direction of assimilation and language shift running from west to east. Johannes Bugenhagen wrote that at the beginning of the 16th century the German-Slavic language border was near Koszalin. During the 17th century, the border between areas with mostly German-speaking and mostly Slavic-speaking populations ran more or less along the present-day border between the West Pomeranian and Pomeranian Voivodeships. In 1612, the cartographer Eilhard Lubinus – while working on his map of Pomerania – travelled from the direction of Pollnow towards Treblin on his way to Danzig. While staying in the manor house of Stanislaus Stenzel von Puttkamer in Treblin, he noted in his diary: "we have entered Slavic-inhabited lands, which has surprised us a lot." Later, while returning from Gdańsk to Stettin, Lubinus stayed overnight in Wielka Wieś near Stolp, and noted: "in the whole village, we cannot find even one German-speaker" (which caused communication problems). Over a century later, in 1772–1778, the area was visited by Johann Bernoulli. He noted that villages owned by Otto Christoph von Podewils – such as Dochow, Zipkow and Warbelin – were inhabited entirely by Slavic-speakers. He also noted that local priests and nobles were making great efforts to weed out the Slavic language and turn their subjects into Germans. Perhaps the earliest census figures on the ethnic or national structure of West Prussia and Farther Pomerania are from 1817–1823. Karl Andree, (Leipzig 1831), gives the total population of West Prussia as 700,000 – including 50% Poles (350,000), 47% Germans (330,000) and 3% Jews (20,000). Kashubians are included with Poles, while Mennonites are included with Germans. Modern estimates of the Kashubian population in West Prussia in the early 19th century, by county, are given by Leszek Belzyt and Jan Mordawski. According to Georg Hassel, there were 65,000 Slavic-speakers in the whole Provinz Pommern in 1817–1819. Modern estimates for just the eastern parts of Pommern (Western Kashubia) in the early 1800s range between 40,000 (Leszek Belzyt) and 25,000 (Jan Mordawski, Zygmunt Szultka). The number declined to between 35,000 and 23,000 (Zygmunt Szultka, Leszek Belzyt) in the years 1827–1831. In the 1850s–1860s there were an estimated 23,000 to 17,000 Slavic-speakers left in Pommern, down to 15,000 in 1892 according to Stefan Ramułt. The number was declining due to Germanisation. The bulk of the Slavic population in 19th-century Pommern was concentrated in its easternmost counties, especially Bytów (Bütow), Lębork (Lauenburg) and Słupsk (Stolp).
In all constituencies with a significant Catholic Kashubian population (Neustadt in Westpr.-Putzig-Karthaus; Berent-Preußisch Stargard-Dirschau; and Konitz-Tuchel), all Reichstag elections in 1867–1912 were won by the Polish Party (, later ). Kashubs descend from the Slavic Pomeranian tribes who had settled between the Oder and Vistula Rivers after the Migration Period, and were at various times Polish and Danish vassals. While most Slavic Pomeranians were assimilated during the medieval German settlement of Pomerania (Ostsiedlung), some, especially in Eastern Pomerania (Pomerelia), kept and developed their customs and became known as Kashubians. The tenth-century, far-travelled Arab writer Al-Masudi – who had great interest in non-Muslim peoples, including the various Slavs of Eastern Europe – mentions a people whom he calls "Kuhsabin", who were probably Kashubians. The oldest known unambiguous mention of "Kashubia" dates from 19 March 1238, when Pope Gregory IX wrote about Bogislaw I as "dux Cassubie" – the Duke of Kashubia. Another early mention dates from the 13th century (a seal of Barnim I from the House of Pomerania, Duke of Pomerania-Stettin). The Dukes of Pomerania hence used "Duke of (the) Kashubia(ns)" in their titles, passing it to the Swedish Crown, which succeeded them in Swedish Pomerania when the House of Pomerania became extinct. The westernmost (Slovincian) parts of Kashubia, located in the medieval Lands of Schlawe and Stolp and the Lauenburg and Bütow Land, were integrated into the Duchy of Pomerania in 1317 and 1455, respectively, and remained with its successors (Brandenburgian Pomerania and Prussian Pomerania) until 1945, when the area became Polish. The bulk of Kashubia since the 12th century was within the medieval Pomerelian duchies; from 1308 in the Monastic State of the Teutonic Knights; from 1466 within Royal Prussia, an autonomous territory of the Polish Crown; from 1772 within West Prussia, a Prussian province; from 1920 within the Polish Corridor of the Second Polish Republic; from 1939 within the Reichsgau Danzig-West Prussia of Nazi Germany; from 1945 within the People's Republic of Poland; and afterwards within the Third Polish Republic. German Ostsiedlung in Kashubia was initiated by the Pomeranian dukes and focused on the towns, whereas much of the countryside remained Kashubian. Exceptions were the German-settled Vistula delta (Vistula Germans), the coastal regions, and the Vistula valley. Following centuries of interaction between the local German and Kashubian populations, Aleksander Hilferding (1862) and Alfons Parczewski (1896) confirmed a progressive language shift in the Kashubian population from their Slavonic vernacular to the local German dialect (Low German Ostpommersch, Low German Low Prussian, or High German). On the other hand, Pomerelia since the Middle Ages was assigned to the Kuyavian Diocese of Leslau and thus retained Polish as the church language. Only the Slovincians adopted Lutheranism, in 1534, after the Protestant Reformation had reached the Duchy of Pomerania, while the Kashubs in Pomerelia remained Roman Catholic. The Prussian parliament ("Landtag") in Königsberg changed the official church language from Polish to German in 1843, but this decision was soon repealed. In the 19th century the Kashubian activist Florian Ceynowa undertook efforts to identify the Kashubian language, and its culture and traditions.
Although his efforts did not appeal to locals at the time, present-day Kashubian activists have claimed that Ceynowa awakened Kashubian self-identity, thereby opposing both Germanisation and Prussian authority, and the Polish nobility and clergy. He believed in a separate Kashubian identity and strove for a Russian-led pan-Slavic federation; he considered Poles "born brothers". Ceynowa was a radical who attempted to take the Prussian garrison in Preussisch Stargard (Starogard Gdański) in 1846 (see Greater Poland uprising), but the operation failed when his 100 combatants, armed only with scythes, decided to abandon the site before the attack was carried out. Although some later Kashubian activists tried to push for a separate identity, they based their ideas on a misrepresented reading of the journalist and activist Hieronim Derdowski: "There is no Cassubia without Polonia, and no Poland without Cassubia" ("Nie ma Kaszeb bez Polonii a bez Kaszeb Polsci"). Further stanzas of Derdowski's tribute also point to the fact that Kashubs were Poles and could not survive without Poland. The Society of Young Kashubians (Towarzystwo Młodokaszubskie) decided to follow this path, and while they sought to create a strong Kashubian identity, they at the same time regarded the Kashubians as "One branch, of many, of the great Polish nation". The leader of the movement was Aleksander Majkowski, a doctor educated in Chełmno with the support of the Society of Educational Help in Chełmno. In 1912 he founded the Society of Young Kashubians and started the newspaper "Gryf". Kashubs voted for Polish lists in elections, which strengthened the representation of Poles in the Pomerania region. Between 1855 and 1900, about 100,000 Kashubs emigrated to the United States, Canada, Brazil, New Zealand, and Australia in the so-called Kashubian diaspora, largely for economic reasons. In 1899 the scholar Stefan Ramułt named Winona, Minnesota the "Kashubian Capital of America" on account of the size and activity of the city's Kashubian community. Due to their Catholic faith, the Kashubians became subject to Prussia's Kulturkampf between 1871 and 1878. The Kashubians faced Germanisation efforts, including those by evangelical Lutheran clergy. These efforts were successful in Lauenburg (Lębork) and Leba (Łeba), where the local population used the Gothic alphabet. While resenting the disrespect shown by some Prussian officials and Junkers, Kashubians lived in peaceful coexistence with the local German population until World War II, although during the interbellum the Kashubian ties to Poland were either overemphasized or neglected by Polish and German authors, respectively, in arguments regarding the Polish Corridor. During the Second World War, Kashubs were considered by the Nazis as being either of "German stock" or "extraction", or "inclined toward Germanness" and "capable of Germanisation", and were thus classified in the third category of the Deutsche Volksliste (German ethnic classification list) if their ties to the Polish nation could be dissolved. However, Kashubians who were suspected of supporting the Polish cause, particularly those with higher education, were arrested and executed; the main place of execution was Piaśnica (Gross Plassnitz), where 12,000 were killed. The German administrator of the area, Albert Forster, considered Kashubians of "low value" and did not support any attempts to create a Kashubian nationality.
Some Kashubians organized anti-Nazi resistance groups, "Gryf Kaszubski" (later "Gryf Pomorski") and the exiled "Zwiazek Pomorski" in Great Britain. After integration into Poland, those envisioning Kashubian autonomy faced a Communist regime striving for ethnic homogeneity and presenting Kashubian culture as merely folklore. Kashubians were sent to Silesian mines, where they met Silesians facing similar problems. Lech Bądkowski from the Kashubian opposition became the first spokesperson of Solidarność. In the 2011 population census, about 108,100 people declared Kashubian as their language. The classification of Kashubian as a language or a dialect has been controversial. From the diachronic point of view of historical linguistics, Kashubian, like Slovincian, Polabian and Polish, is a Lechitic West Slavic language, while from a synchronic point of view it is a group of Polish dialects. Given the past nationalist interests of Germans and Poles in Kashubia, Barbour and Carmichael state: "As is always the case with the division of a dialect continuum into separate languages, there is scope here for manipulation." A "standard" Kashubian language does not exist despite attempts to create one; rather, a variety of dialects are spoken that differ significantly from each other. The vocabulary is influenced by both German and Polish. There are other traditional Slavic ethnic groups inhabiting Pomerania, including the Kociewiacy, Borowiacy and Krajniacy. Their dialects tend to fall between Kashubian and the Polish dialects of Greater Poland and Mazovia, with the Krajniak dialect indeed heavily influenced by Kashubian, while the Borowiak and Kociewiak dialects are much closer to Greater Polish and Mazovian. No obvious Kashubian substrate or any other influence is visible in the Kociewiak dialect. This indicates that these groups are descendants not only of Pomeranians, but also of settlers who arrived in Pomerania from Greater Poland and Masovia during the Middle Ages, from the 10th century onwards. In the 16th and 17th centuries Michael Brüggemann (also known as Pontanus or Michał Mostnik), Simon Krofey (Szimon Krofej) and J.M. Sporgius introduced Kashubian into the Lutheran Church. Krofey, pastor in Bütow (Bytow), published a religious song book in 1586, written in Polish but also containing some Kashubian words. Brüggemann, pastor in Schmolsin, published a Polish translation of some works of Martin Luther (a catechism) and biblical texts, also containing Kashubian elements. Other biblical texts were published in 1700 by Sporgius, pastor in Schmolsin. His "Schmolsiner Perikopen", most of which is written in the same Polish-Kashubian style as Krofey's and Brüggemann's books, also contains small passages ("6th Sunday after Epiphany") written in pure Kashubian. Scientific interest in the Kashubian language was sparked by Christoph Mrongovius (publications in 1823, 1828), Florian Ceynowa and the Russian linguist Aleksander Hilferding (1859, 1862), later followed by Leon Biskupski (1883, 1891), Gotthelf Bronisch (1896, 1898), Jooseppi Julius Mikkola (1897) and Kazimierz Nitsch (1903). Important works are S. Ramułt's "Słownik języka pomorskiego, czyli kaszubskiego" (1893) and Friedrich Lorentz's "Slovinzische Grammatik" (1903), "Slovinzische Texte" (1905) and "Slovinzisches Wörterbuch" (1908). Zdzisław Stieber was involved in producing linguistic atlases of Kashubian (1964–78). The first activist of the Kashubian national movement was Florian Ceynowa.
Among his accomplishments, he documented the Kashubian alphabet and grammar by 1879 and published a collection of ethnographic-historic stories of the life of the Kashubians ("Skórb kaszébsko-slovjnckjé mòvé", 1866–1868). Another early writer in Kashubian was Hieronim Derdowski. The Young Kashubian movement followed, led by author Aleksander Majkowski, who wrote for the paper "Zrzësz Kaszëbskô" as part of the "Zrzëszincë" group. The group would contribute significantly to the development of the Kashubian literary language. Another important writer in Kashubian was Bernard Sychta (1907–1982). Similarly to traditions in other parts of Central and Eastern Europe, pussy willows have been adopted as an alternative to the palm leaves used in Palm Sunday celebrations, which were not obtainable in Kashubia. They were blessed by priests on Palm Sunday, following which parishioners whipped each other with the pussy willow branches, saying "Wierzba bije, jô nie bijã. Za tidzéń wiôldżi dzéń, za nocë trzë i trzë są Jastrë" ('The willow strikes, it's not me who strikes, in a week, on the great day, in three and three nights, there is the Easter'). The pussy willows blessed by priests were treated as sacred charms that could prevent lightning strikes, protect animals and encourage honey production. They were believed to bring health and good fortune to people as well, and it was traditional for one pussy willow bud to be swallowed on Palm Sunday to promote good health. According to an old tradition, on Easter Monday Kashub boys chase girls, gently whipping their legs with juniper twigs. This is to bring good fortune in love to the chased girls. It was usually accompanied by a boy's chant "Dyngus, dyngus – pò dwa jaja, Nie chcã chleba, leno jaja" ('Dyngus, dyngus, for two eggs; I don't want bread but eggs'). Sometimes a girl would be whipped while still in her bed. Girls would give boys painted eggs. Pottery, one of the ancient Kashubian crafts, has survived to the present day. Kashubian embroidery is famous, and the Żukowo school of Kashubian embroidery is an important piece of intangible cultural heritage. Pope John Paul II's visit in June 1987, during which he appealed to the Kashubs to preserve their traditional values, including their language, was very important. In 2005, Kashubian was for the first time made an official subject on the Polish matura exam (roughly equivalent to the English A-Level and the French Baccalaureat). Despite an initial uptake of only 23 students, this development was seen as an important step in the official recognition and establishment of the language. Today, in some towns and villages in northern Poland, Kashubian is the second language spoken after Polish, and it is taught in some regional schools. Since 2005, Kashubian has enjoyed legal protection in Poland as an official regional language, the only tongue in Poland with this status; it was granted by an act of the Polish Parliament on 6 January 2005. Old Kashubian culture has partially survived in architecture and in folk crafts such as pottery, plaiting, embroidery, amber-working, sculpture and glass-painting. In the 2011 census, 233,000 people in Poland declared their identity as Kashubian, 216,000 declaring it together with Polish and 16,000 as their only national-ethnic identity. Kaszëbskô Jednota is an association of people who hold the latter view. Kashubian cuisine contains many elements from the wider European culinary tradition.
Local specialities include a number of regional dishes. According to a study published in 2015, by far the most common Y-DNA haplogroup among the Kashubs (n=204) who live in Kashubia is haplogroup R1a, which is carried by 61.8% of Kashubian males. It is followed in frequency by I1 (13.2%), R1b (9.3%), I2 (4.4%), E1b1b (3.4%), J (2.5%), G (2%) and N1 (1.5%); other haplogroups make up the remaining 2%. Another study, from 2010 (n=64), found similar proportions of most haplogroups (R1a – 68.8%, I1 – 12.5%, R1b – 7.8%, I2 – 3.1%, E1b1b – 3.1%), but also found Q1a in 3.1% of Kashubians. This study reported no significant differences between Kashubians from Poland and other Poles as far as Y-chromosome polymorphism is concerned. As for mitochondrial DNA haplogroups, according to a January 2013 study the most common major mtDNA lineages among the Kashubians, each carried by at least 2.5% of their population, include J1 (12.3%), H1 (11.8%), H* (8.9%), T* (5.9%), T2 (5.4%), U5a (5.4%), U5b (5.4%), U4a (3.9%), H10 (3.9%), H11 (3.0%), H4 (3.0%), K (3.0%), V (3.0%), H2a (2.5%) and W (2.5%). Altogether they account for almost eight-tenths of the total Kashubian mtDNA diversity (a quick tally of the quoted figures is sketched after this section). In a 2013 study, Y-DNA haplogroups among the Polish population indigenous to Kociewie (n=158) were reported as follows: 56.3% R1a, 17.7% R1b, 8.2% I1, 7.6% I2, 3.8% E1b1b, 1.9% N1, 1.9% J and 2% other haplogroups. Immigrant Kashubians kept a distinct identity among Polish Canadians and Polish Americans. In 1858 Polish-Kashubians emigrated to Upper Canada and created the settlement of Wilno, in Renfrew County, Ontario, which still exists. Today Canadian Polish-Kashubians return to northern Poland in small groups to learn about their heritage. Kashubian immigrants founded St. Josaphat parish in Chicago's Lincoln Park community in the late 19th century, as well as the parish of Immaculate Heart of Mary in Irving Park, the vicinity of which was dubbed "Little Cassubia". In the 1870s a fishing village was established on Jones Island in Milwaukee, Wisconsin, by Kashubian immigrants. The settlers, however, did not hold deeds to the land, and the government of Milwaukee evicted them as squatters in the 1940s; the area was soon after turned into an industrial park. The last trace of this Kashubian-settled fishing village on Jones Island is the name of the smallest park in the city, "Kaszube's Park". Important for Kashubian literature was "Xążeczka dlo Kaszebov" by Doctor Florian Ceynowa (1817–1881). Hieronim Derdowski (1852–1902) was another significant author who wrote in Kashubian, as was Doctor Aleksander Majkowski (1876–1938) from Kościerzyna, who wrote the Kashubian national epic "The Life and Adventures of Remus". Jan Trepczyk was a poet who wrote in Kashubian, as was Stanisław Pestka. Kashubian literature has been translated into Czech, Polish, English, German, Belarusian, Slovene and Finnish. A considerable body of Christian literature has been translated into Kashubian, including the New Testament and the Book of Genesis.
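The percentage figures quoted above can be tallied directly. The short Python sketch below is illustrative only (the numbers are copied from the studies as quoted in this section, not recomputed from any dataset); it checks that the listed Y-DNA lineages leave roughly 2% for other haplogroups, and that the listed mtDNA lineages sum to just under 80%, i.e. "almost eight-tenths":

# Haplogroup frequencies (%) as quoted in the text above.
ydna = {"R1a": 61.8, "I1": 13.2, "R1b": 9.3, "I2": 4.4,
        "E1b1b": 3.4, "J": 2.5, "G": 2.0, "N1": 1.5}          # 2015 study, n=204
mtdna = {"J1": 12.3, "H1": 11.8, "H*": 8.9, "T*": 5.9, "T2": 5.4,
         "U5a": 5.4, "U5b": 5.4, "U4a": 3.9, "H10": 3.9, "H11": 3.0,
         "H4": 3.0, "K": 3.0, "V": 3.0, "H2a": 2.5, "W": 2.5}  # January 2013 study

y_total = sum(ydna.values())    # 98.1 -> remainder 1.9%, matching the "2%" for other haplogroups
mt_total = sum(mtdna.values())  # 79.9 -> just under 80%, i.e. "almost eight-tenths"
print(f"Y-DNA listed lineages: {y_total:.1f}% (other: {100 - y_total:.1f}%)")
print(f"mtDNA listed lineages: {mt_total:.1f}%")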
https://en.wikipedia.org/wiki?curid=17020
Karst Karst is a topography formed from the dissolution of soluble rocks such as limestone, dolomite, and gypsum. It is characterized by underground drainage systems with sinkholes and caves. It has also been documented for more weathering-resistant rocks, such as quartzite, given the right conditions. Subterranean drainage may limit surface water, with few to no rivers or lakes. However, in regions where the dissolved bedrock is covered (perhaps by debris) or confined by one or more superimposed non-soluble rock strata, distinctive karst features may occur only at subsurface levels and can be totally missing above ground. The study of karst is considered of prime importance in petroleum geology because as much as 50% of the world's hydrocarbon reserves are hosted in porous karst systems. The English word "karst" was borrowed in the late 19th century from a German word that itself entered German much earlier. According to one interpretation, the term is derived from the German name for a number of geological, geomorphological, and hydrological features found within the range of the Dinaric Alps, stretching from the northeastern corner of Italy above the city of Trieste (at the time part of the Austrian Littoral), across the Balkan peninsula along the coast of the eastern Adriatic to Kosovo and North Macedonia, where the "massif" of the Šar Mountains begins, and more specifically the karst zone at the northwestern-most section, described in early topographical research as a plateau between Italy and Slovenia. In the local South Slavic languages, all variations of the word are derived from a Romanized Illyrian base (yielding , Dalmatian Romance ), later metathesized from the reconstructed form into forms such as and . Languages preserving the older, non-metathesized form include , , and ; the lack of metathesis precludes borrowing from any of the South Slavic languages, specifically Slovene. The Slovene common noun was first attested in the 18th century, and the adjective form in the 16th century. As a proper noun, the Slovene form was first attested in 1177. Ultimately, the word is of Mediterranean origin. It has been suggested that the word may derive from the Proto-Indo-European root "" 'rock'. The name may also be connected to the oronym "Kar(u)sádios oros" cited by Ptolemy, and perhaps also to Latin . Johann Weikhard von Valvasor, a pioneer of the study of karst in Slovenia and a fellow of the Royal Society for Improving Natural Knowledge, London, introduced the word "karst" to European scholars in 1689, describing the phenomenon of underground flows of rivers in his account of Lake Cerknica. Jovan Cvijić greatly advanced the knowledge of karst regions, so much so that he became known as the "father of karst geomorphology". Primarily discussing the karstic regions of the Balkans, Cvijić's 1893 publication "Das Karstphänomen" describes landforms such as karren, dolines and poljes. In a 1918 publication, Cvijić proposed a cyclical model for karstic landscape development. Karst hydrology emerged as a discipline in the late 1950s and early 1960s in France. Previously, the activities of cave explorers, called speleologists, had been dismissed as more of a sport than a science, meaning that underground karstic caves and their associated watercourses were, from a scientific perspective, understudied. The development of karst occurs whenever acidic water starts to break down the surface of bedrock near its cracks, or bedding planes.
As the bedrock (typically limestone or dolomite) continues to degrade, its cracks tend to get bigger. As time goes on, these fractures become wider, and eventually a drainage system of some sort may start to form underneath. If this underground drainage system does form, it will speed up the development of karst formations there, because more water will be able to flow through the region, giving it more erosive power. The carbonic acid that causes karstic features is formed as rain passes through Earth's atmosphere, picking up carbon dioxide (CO2), which dissolves in the water. Once the rain reaches the ground, it may pass through soil that can provide much more CO2, forming a weak carbonic acid solution, which dissolves calcium carbonate. The primary reaction sequence in limestone dissolution is the conversion of CO2 and water into carbonic acid, which then converts insoluble calcium carbonate into soluble calcium bicarbonate (the balanced equations are sketched at the end of this passage). In particular and very rare conditions, such as those encountered in the past in Lechuguilla Cave in New Mexico (and more recently in the Frasassi Caves in Italy), other mechanisms may also play a role. The oxidation of sulfides leading to the formation of sulfuric acid can also be one of the corrosion factors in karst formation. As oxygen (O2)-rich surface waters seep into deep anoxic karst systems, they bring oxygen, which reacts with sulfide present in the system (pyrite or hydrogen sulfide) to form sulfuric acid (H2SO4). Sulfuric acid then reacts with calcium carbonate, causing increased erosion within the limestone formation; this reaction chain forms gypsum. The karstification of a landscape may result in a variety of large- or small-scale features, both on the surface and beneath it. On exposed surfaces, small features may include solution flutes (or rillenkarren), runnels, and limestone pavement (clints and grikes), collectively called karren or lapiez. Medium-sized surface features may include sinkholes or cenotes (closed basins), vertical shafts, foibe (inverted funnel-shaped sinkholes), disappearing streams, and reappearing springs. Large-scale features may include limestone pavements, poljes, and karst valleys. Mature karst landscapes, where more bedrock has been removed than remains, may result in karst towers, or haystack/eggbox landscapes. Beneath the surface, complex underground drainage systems (such as karst aquifers) and extensive caves and cavern systems may form. Erosion along limestone shores, notably in the tropics, produces karst topography that includes a sharp makatea surface above the normal reach of the sea, and undercuts that are mostly the result of biological activity or bioerosion at or a little above mean sea level. Some of the most dramatic of these formations can be seen in Thailand's Phangnga Bay and at Halong Bay in Vietnam. Calcium carbonate dissolved into water may precipitate out where the water discharges some of its dissolved carbon dioxide. Rivers which emerge from springs may produce tufa terraces, consisting of layers of calcite deposited over extended periods of time. In caves, a variety of features collectively called speleothems are formed by deposition of calcium carbonate and other dissolved minerals. Farming in karst areas must take into account the lack of surface water. The soils may be fertile enough, and rainfall may be adequate, but rainwater quickly moves through the crevices into the ground, sometimes leaving the surface soil parched between rains.
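The two dissolution chemistries described in this passage can be written compactly as balanced equations. This is a standard-chemistry sketch rather than text recovered from the source: the sulfuric-acid route is shown here via hydrogen sulfide, while pyrite oxidation reaches the same acid through additional iron-bearing intermediates.

\begin{align*}
\text{Carbonic acid route:}\quad & \mathrm{CO_2 + H_2O \rightleftharpoons H_2CO_3} \\
 & \mathrm{H_2CO_3 + CaCO_3 \rightarrow Ca(HCO_3)_2} \\
\text{Sulfuric acid route:}\quad & \mathrm{H_2S + 2\,O_2 \rightarrow H_2SO_4} \\
 & \mathrm{H_2SO_4 + CaCO_3 \rightarrow CaSO_4 + H_2O + CO_2} \\
 & \mathrm{CaSO_4 + 2\,H_2O \rightarrow CaSO_4 \cdot 2\,H_2O \;\text{(gypsum)}}
\end{align*}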
A karst fenster (karst window) occurs when an underground stream emerges onto the surface between layers of rock, cascades some distance, and then disappears back down, often into a sinkhole. Rivers in karst areas may disappear underground a number of times and spring up again in different places, usually under a different name (like the Ljubljanica, the river of seven names). An example of this is the Popo Agie River in Fremont County, Wyoming. At a site simply named "The Sinks" in Sinks Canyon State Park, the river flows into a cave in a formation known as the Madison Limestone and then rises again down the canyon in a placid pool. A turlough is a unique type of seasonal lake found in Irish karst areas, formed through the annual welling-up of water from the underground water system. Water supplies from wells in karst topography may be unsafe, as the water may have run unimpeded from a sinkhole in a cattle pasture, through a cave, and to the well, bypassing the normal filtering that occurs in a porous aquifer. Karst formations are cavernous and therefore have high rates of permeability, resulting in reduced opportunity for contaminants to be filtered. Groundwater in karst areas is just as easily polluted as surface streams. Sinkholes have often been used as farmstead or community trash dumps. Overloaded or malfunctioning septic tanks in karst landscapes may dump raw sewage directly into underground channels. Karst topography also poses difficulties for human inhabitants. Sinkholes can develop gradually as surface openings enlarge, but progressive erosion is frequently unseen until the roof of a cavern suddenly collapses. Such events have swallowed homes, cattle, cars, and farm machinery. In the United States, the sudden collapse of such a cavern-sinkhole swallowed part of the collection of the National Corvette Museum in Bowling Green, Kentucky in 2014. Interstratal karst is a karstic landscape which is developed beneath a cover of insoluble rocks. Typically this will involve a cover of sandstone overlying limestone strata undergoing solution. In the United Kingdom, for example, extensive doline fields have developed at Cefn yr Ystrad, Mynydd Llangatwg and Mynydd Llangynidr in South Wales across a cover of Twrch Sandstone which overlies concealed Carboniferous Limestone; the last-named site has been declared a site of special scientific interest in this respect. Kegelkarst is a type of tropical karst terrain with numerous cone-like hills, formed by cockpits, mogotes, and poljes and without strong fluvial erosion processes. This terrain is found in Cuba, Jamaica, Indonesia, Malaysia, the Philippines, Puerto Rico, southern China, Myanmar, Thailand, Laos and Vietnam. Pseudokarsts are similar in form or appearance to karst features but are created by different mechanisms. Examples include lava caves and granite tors—for example, Labertouche Cave in Victoria, Australia—and paleocollapse features. Mud caves are another example of pseudokarst. Paleokarst or palaeokarst is a development of karst observed in geological history and preserved within the rock sequence, effectively a fossil karst. There are, for example, palaeokarstic surfaces exposed within the Clydach Valley Subgroup of the Carboniferous Limestone sequence of South Wales, which developed as sub-aerial weathering of recently formed limestones took place during periods of non-deposition within the early part of the period.
Sedimentation resumed and further limestone strata were deposited on an irregular karstic surface, the cycle recurring several times in connection with fluctuating sea levels over prolonged periods. The world's largest limestone karst is Australia's Nullarbor Plain. Slovenia has the world's highest risk of sinkholes, while the western Highland Rim in the eastern United States is at the second-highest risk of karst sinkholes. Mexico hosts important karstic regions in the Yucatán Peninsula and Chiapas. The South China Karst in the provinces of Guizhou, Guangxi, and Yunnan is a UNESCO World Heritage Site. The Tham Luang Nang Non karstic cave system in northern Thailand was made famous by the 2018 rescue of a junior football team.
https://en.wikipedia.org/wiki?curid=17022
Kellogg–Briand Pact The Kellogg–Briand Pact (or Pact of Paris, officially the General Treaty for Renunciation of War as an Instrument of National Policy) is a 1928 international agreement in which signatory states promised not to use war to resolve "disputes or conflicts of whatever nature or of whatever origin they may be, which may arise among them". There were no mechanisms for enforcement; parties failing to abide by this promise "should be denied of the benefits furnished by [the] treaty". It was signed by Germany, France, and the United States on 27 August 1928, and by most other states soon after. Sponsored by France and the U.S., the Pact renounces the use of war and calls for the peaceful settlement of disputes. Similar provisions were incorporated into the Charter of the United Nations and other treaties, and the Pact became a stepping-stone to a more activist American policy. It is named after its authors, United States Secretary of State Frank B. Kellogg and French foreign minister Aristide Briand. The pact was concluded outside the League of Nations and remains in effect. A common criticism is that the Kellogg–Briand Pact did not live up to all of its aims, though it has arguably had some success. It neither ended war nor stopped the rise of militarism, and it was unable to prevent the Second World War. The pact has been ridiculed for its moralism and legalism and its lack of influence on foreign policy. Moreover, it effectively erased the legal distinction between war and peace, because the signatories began to wage wars without declaring them. The pact's central provisions renouncing the use of war, and promoting peaceful settlement of disputes and the use of collective force to prevent aggression, were incorporated into the United Nations Charter and other treaties. Although civil wars continued, wars between established states have been rare since 1945, with a few exceptions in the Middle East. One legal consequence has been to discourage the annexation of territory by force, although other forms of annexation have not been prevented. More broadly, some authors claim there is now a strong presumption against the legality of using, or threatening, military force against another country. The pact also served as the legal basis for the concept of a crime against peace, for which the Nuremberg Tribunal and the Tokyo Tribunal tried and executed the top leaders responsible for starting World War II. Many historians and political scientists see the pact as mostly irrelevant and ineffective. With the signing of the Litvinov Protocol in Moscow on February 9, 1929, the Soviet Union and its western neighbors, including Romania, agreed to put the Kellogg–Briand Pact into effect without waiting for the other western signatories to ratify. The Bessarabian Question had made agreement between Romania and the Soviet Union challenging, and the dispute between the two nations over Bessarabia continued. The main text is very short: "Article I" The High Contracting Parties solemnly declare in the names of their respective peoples that they condemn recourse to war for the solution of international controversies and renounce it as an instrument of national policy in their relations with one another. "Article II" The High Contracting Parties agree that the settlement or solution of all disputes or conflicts of whatever nature or of whatever origin they may be, which may arise among them, shall never be sought except by pacific means.
After negotiations, the pact was signed in Paris at the French Foreign Ministry by representatives from Australia, Belgium, Canada, Czechoslovakia, France, Germany, Great Britain, India, the Irish Free State, Italy, Japan, New Zealand, Poland, South Africa, and the United States. It took effect on 24 July 1929; by that date, many nations had deposited instruments of ratification of the pact. Eight further states joined after that date (Persia, Greece, Honduras, Chile, Luxembourg, Danzig, Costa Rica and Venezuela), for a total of 62 states parties. In 1971, Barbados declared its accession to the treaty. In the United States, the Senate approved the treaty 85–1, with only Wisconsin Republican John J. Blaine voting against, over concerns with British imperialism. While the U.S. Senate did not add any reservations to the treaty, it did pass a measure which interpreted the treaty as not infringing upon the United States' right of self-defense and not obliging the nation to enforce it by taking action against those who violated it. The 1928 Kellogg–Briand Pact was concluded outside the League of Nations and remains in effect. One month following its conclusion, a similar agreement, the General Act for the Pacific Settlement of International Disputes, was concluded in Geneva, which obliged its signatory parties to establish conciliation commissions in any case of dispute. The pact's central provisions renouncing the use of war, and promoting peaceful settlement of disputes and the use of collective force to prevent aggression, were incorporated into the United Nations Charter and other treaties. Although civil wars continued, wars between established states have been rare since 1945, with a few exceptions in the Middle East. As a practical matter, the Kellogg–Briand Pact did not live up to its primary aims, but has arguably had some success. It did not end war or stop the rise of militarism, and was unable to keep the international peace in succeeding years. Its legacy remains as a statement of the idealism expressed by advocates for peace in the interwar period. Moreover, it erased the legal distinction between war and peace, because the signatories, having renounced the use of war, began to wage wars without declaring them, as in the Japanese invasion of Manchuria in 1931, the Italian invasion of Abyssinia in 1935, the Spanish Civil War in 1936, the Soviet invasion of Finland in 1939, and the German and Soviet invasions of Poland. While the Pact has been ridiculed for its moralism and legalism and its lack of influence on foreign policy, it nevertheless led to a more activist American foreign policy. The popular perception of the Kellogg–Briand Pact was best summarized by Eric Sevareid who, in a nationally televised series on American diplomacy between the two world wars, referred to the pact as a "worthless piece of paper". Scott J. Shapiro and Oona A. Hathaway have argued that the Pact inaugurated "a new era of human history" characterized by the decline of inter-state war as a structuring dynamic of the international system. According to Shapiro and Hathaway, one reason the pact is often judged historically insignificant is the absence of an enforcement mechanism to compel compliance from signatories. They also said that the Pact appealed to the West because it promised to secure and protect previous conquests, thus securing the Western powers' place at the head of the international legal order indefinitely.
The pact, in addition to binding the particular nations that signed it, has also served as one of the legal bases establishing the international norm that the threat or use of military force in contravention of international law, as well as the territorial acquisitions resulting from it, are unlawful. Notably, the pact served as the legal basis for the concept of a crime against peace; it was for committing this crime that the Nuremberg Tribunal and the Tokyo Tribunal tried and executed the top Axis leaders responsible for starting World War II. The interdiction of aggressive war was confirmed and broadened by the United Nations Charter, which provides in article 2, paragraph 4, that "All Members shall refrain in their international relations from the threat or use of force against the territorial integrity or political independence of any state, or in any other manner inconsistent with the Purposes of the United Nations." One legal consequence is that it is unlawful to annex territory by force, although other forms of annexation have not been prevented. More broadly, there is now a strong presumption against the legality of using, or threatening, military force against another country. Nations that have resorted to the use of force since the Charter came into effect have typically invoked self-defense or the right of collective defense. Writing in 2017, political scientists Oona A. Hathaway and Scott J. Shapiro showed that between 1816 and 1928 there was, on average, one military conquest every ten months; after 1945, in very sharp contrast, the number of such conquests declined to one every four years. Political scientists Julie Bunck and Michael Fowler in 2018 argued that the Pact was:
https://en.wikipedia.org/wiki?curid=17023